Approximation theory of the MLP model in neural networks

Published online by Cambridge University Press:  07 November 2008

Allan Pinkus
Affiliation:
Department of Mathematics, Technion – Israel Institute of Technology, Haifa 32000, Israel E-mail: pinkus@tx.technion.ac.il

Abstract

In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and practical of the many neural network models. Mathematically it is also one of the simpler models. Nonetheless, the mathematics of this model is not well understood, and many of the associated problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage. We will report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods. We will, however, be exploring the capabilities and limitations of this model.
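For readers unfamiliar with the object of study: in its simplest (single hidden layer) form, the MLP computes a function of the shape $\sum_i c_i\,\sigma(\mathbf{w}_i \cdot \mathbf{x} + \theta_i)$ for an activation function $\sigma$. A minimal sketch of this model (function and variable names are ours, chosen for illustration; the survey itself presents no code):

```python
import math

def mlp_one_hidden(x, weights, thetas, coeffs, sigma=math.tanh):
    """Single-hidden-layer MLP: sum_i c_i * sigma(w_i . x + theta_i).

    x       : input vector (list of floats)
    weights : one weight vector w_i per hidden unit
    thetas  : one threshold theta_i per hidden unit
    coeffs  : one outer coefficient c_i per hidden unit
    sigma   : activation function (tanh here, purely as an example)
    """
    return sum(
        c * sigma(sum(wj * xj for wj, xj in zip(w, x)) + t)
        for w, t, c in zip(weights, thetas, coeffs)
    )

# A 2-input network with 3 hidden units.
y = mlp_one_hidden(
    [0.5, -1.0],
    weights=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    thetas=[0.0, 0.1, -0.2],
    coeffs=[0.3, -0.5, 0.2],
)
```

The approximation-theoretic questions the survey addresses concern which functions such sums can approximate, and how well, as the number of hidden units grows.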

Information

Type
Research Article
Copyright
Copyright © Cambridge University Press 1999
