Neural Network Learning
Theoretical Foundations
$63.99 (P)
- Authors:
- Martin Anthony, London School of Economics and Political Science
- Peter L. Bartlett, Australian National University, Canberra
- Date Published: August 2009
- Availability: Available
- Format: Paperback
- ISBN: 9780521118620
Other available formats: Hardback, eBook
Looking for an examination copy?
If you are interested in the title for your course, we can consider offering an examination copy. To register your interest, please contact collegesales@cambridge.org, providing details of the course you are teaching.
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
- Contains results that have not appeared in journal papers or other books
- Presents many recent results in a unified framework and, in many cases, with simpler proofs
- Self-contained: it introduces the necessary background material on probability, statistics, combinatorics and computational complexity
- Suitable for graduate students as well as active researchers in the area (parts of the book have already formed the basis of a graduate course)
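The Vapnik-Chervonenkis dimension discussed above has a concrete combinatorial meaning: it is the size of the largest point set on which a function class can realise every possible binary labelling ("shattering"). As a minimal illustration (not from the book; the helper functions and the fixed perceptron iteration budget are assumptions of this sketch), the following Python script checks that linear threshold units on the plane shatter three points in general position but fail on the XOR labelling of four points, consistent with a VC-dimension of 3:

```python
# Illustrative sketch only: empirically check which point sets in R^2 are
# shattered by linear threshold functions (VC-dimension d + 1 = 3 for d = 2).
import itertools
import numpy as np

def linearly_separable(points, labels):
    """Return True if some linear threshold function realises `labels` on
    `points`. Uses a plain perceptron; a fixed iteration budget stands in
    for a proper non-separability test on these tiny examples."""
    X = np.hstack([points, np.ones((len(points), 1))])  # absorb bias term
    y = np.where(np.array(labels), 1.0, -1.0)
    w = np.zeros(X.shape[1])
    for _ in range(10_000):
        mistakes = [i for i in range(len(y)) if y[i] * (X[i] @ w) <= 0]
        if not mistakes:
            return True                 # all points correctly classified
        w += y[mistakes[0]] * X[mistakes[0]]  # perceptron update
    return False                        # budget exhausted: treat as non-separable

def shattered(points):
    """True if every +/- labelling of `points` is linearly separable."""
    return all(linearly_separable(points, labs)
               for labs in itertools.product([False, True], repeat=len(points)))

# Three points in general position are shattered...
print(shattered(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))   # True

# ...but four corners of a square are not (the XOR labelling fails).
print(shattered(np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])))  # False
```

Chapter 6 of the book treats the VC-dimension of linear threshold networks in full generality; this experiment only illustrates the single-unit case in two dimensions.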
Reviews & endorsements
"This book gives a thorough but nevertheless self-contained treatment of neural network learning from the perspective of computational learning theory." Mathematical Reviews
"This book is a rigorous treatise on neural networks that is written for advanced graduate students in computer science. Each chapter has a bibliographical section with helpful suggestions for further reading...this book would be best utilized within an advanced seminar context where the student would be assisted with examples, exercises, and elaborative comments provided by the professor." Telegraphic Reviews
Customer reviews
Not yet reviewed
Product details
- Date Published: August 2009
- Format: Paperback
- ISBN: 9780521118620
- Length: 404 pages
- Dimensions: 229 x 152 x 23 mm
- Weight: 0.59 kg
- Availability: Available
Table of Contents
1. Introduction
Part I. Pattern Recognition with Binary-output Neural Networks:
2. The pattern recognition problem
3. The growth function and VC-dimension
4. General upper bounds on sample complexity
5. General lower bounds
6. The VC-dimension of linear threshold networks
7. Bounding the VC-dimension using geometric techniques
8. VC-dimension bounds for neural networks
Part II. Pattern Recognition with Real-output Neural Networks:
9. Classification with real values
10. Covering numbers and uniform convergence
11. The pseudo-dimension and fat-shattering dimension
12. Bounding covering numbers with dimensions
13. The sample complexity of classification learning
14. The dimensions of neural networks
15. Model selection
Part III. Learning Real-Valued Functions:
16. Learning classes of real functions
17. Uniform convergence results for real function classes
18. Bounding covering numbers
19. The sample complexity of learning function classes
20. Convex classes
21. Other learning problems
Part IV. Algorithmics:
22. Efficient learning
23. Learning as optimisation
24. The Boolean perceptron
25. Hardness results for feed-forward networks
26. Constructive learning algorithms for two-layered networks.
General Resources
This title is supported by one or more locked resources. Access to locked resources is granted exclusively by Cambridge University Press to instructors whose faculty status has been verified. To gain access to locked resources, instructors should sign in to or register for a Cambridge user account.
Please use locked resources responsibly and exercise your professional discretion when choosing how you share these materials with your students. Other instructors may wish to use locked resources for assessment purposes and their usefulness is undermined when the source files (for example, solution manuals or test banks) are shared online or via social networks.
Supplementary resources are subject to copyright. Instructors are permitted to view, print or download these resources for use in their teaching, but may not change them or use them for commercial gain.
If you are having problems accessing these resources please contact lecturers@cambridge.org.