Neural Network Approximation

Neural Networks (NNs) are the method of choice for building learning algorithms. Their popularity stems from their empirical success on several challenging learning problems. However, most scholars agree that a convincing theoretical explanation for this success is still lacking. This article surveys the known approximation properties of the outputs of NNs with the aim of uncovering the properties that are not present in the more traditional methods of approximation used in numerical analysis. Comparisons are made with traditional approximation methods from the viewpoint of rate distortion. Another major component in the analysis of numerical approximation is the computational time needed to construct the approximation and this in turn is intimately connected with the stability of the approximation algorithm. So the stability of numerical approximation using NNs is a large part of the analysis put forward. The survey, for the most part, is concerned with NNs using the popular ReLU activation function. In this case, the outputs of the NNs are piecewise linear functions on rather complicated partitions of the domain of $f$ into cells that are convex polytopes. When the architecture of the NN is fixed and the parameters are allowed to vary, the set of output functions of the NN is a parameterized nonlinear manifold. It is shown that this manifold has certain space filling properties leading to an increased ability to approximate (better rate distortion) but at the expense of numerical stability. The space filling creates a challenge to the numerical method in finding best or good parameter choices when trying to approximate.


Introduction
Approximation using Neural Networks (NNs) is the method of choice for building numerical algorithms in Machine Learning (ML) and Artificial Intelligence (AI). It is now being looked at as a possible platform for computation in many other areas. Although NNs have been around for over 70 years, starting with the work of Hebb in the late 1940's (Hebb 1949) and Rosenblatt in the 1950's (Rosenblatt 1958), it is only recently that their popularity has surged as they have achieved state-of-the-art performance in a striking variety of machine learning problems. Examples of these are computer vision (Krizhevsky, Sutskever & Hinton 2012), employed for instance in self-driving cars, natural language processing (Wu, Schuster, Chen, Le, Norouzi, Macherey, Krikun, Cao, Gao, Macherey et al. 2016), used in Google Translate, or reinforcement learning, such as superhuman performance at Go (Silver, Huang, Maddison, Guez, Sifre, Van Den Driessche, Schrittwieser, Antonoglou, Panneershelvam, Lanctot et al. 2016, Silver, Schrittwieser, Simonyan, Antonoglou, Huang, Guez, Hubert, Baker, Lai, Bolton et al. 2017), to name a few.
Nevertheless, it is generally agreed upon that there is still a lack of solid mathematical analysis to explain the reasons behind these empirical successes. As a start, the understanding of the approximation properties of NNs is of vital importance since approximation is one of the main components of any algorithmic design. A rigorous analysis of what special properties NNs hold as a method of approximation could lead to both significant practical improvements (Bronstein, Bruna, LeCun, Szlam & Vandergheynst 2017, LeCun, Bengio & Hinton 2015) and a priori performance guarantees for computational algorithms based on NNs. At the heart of providing such a rigorous theory is understanding the benefits of using NNs as an approximation tool when compared with other more classical methods of approximation such as polynomials, wavelets, splines, and sparse approximation from bases, frames, and dictionaries. Indeed, most applications of NNs are built on some form of function approximation. This includes not only learning theory and statistical estimation, but also the new forays of NNs into other application domains such as numerical methods for solving partial differential equations (PDEs).
An often cited theoretical feature of neural networks is that they produce universal function approximants (Cybenko 1989, Hornik, Stinchcombe, White et al. 1989) in the sense that, given any continuous target function f and a target accuracy ε > 0, neural networks with enough judiciously chosen parameters produce an approximation to f within an error of size ε. This universal approximation capacity has been known since the 1980's. But surely, this cannot be the main reason why neural networks are so effective in practice. Indeed, all families of functions used in numerical approximation such as polynomials, splines, wavelets, etc., produce universal approximants. What we need to understand is in what way NNs are more effective than other methods as an approximation tool. The purpose of the present article is to describe the approximation properties of NNs as we presently understand them, and to compare their performance with other methods of approximation. To accomplish such a comparative analysis, we introduce, starting in §5, the tools by which various methods of approximation are evaluated. These include approximation rates on model classes, n-widths, metric entropy, and approximation classes. Since NN approximation is a form of nonlinear manifold approximation, we make this particular form of approximation the focal point of our exposition. The ensuing sections of the paper examine the specific approximation properties of NNs. After making some remarks that apply to general activation functions σ, we turn our attention to the performance of Rectified Linear Unit (ReLU) networks. These are the most heavily used in numerical settings and fortunately also the NNs most amenable to analysis.
Since the output of a ReLU network is a continuous piecewise linear function (CPwL), it is important to understand what the class of outputs of a ReLU network depending on n parameters looks like in terms of their allowable partitions and the correlation between the linear pieces. This topic is addressed in §3. This structure increases in complexity with the depth of the network. It turns out that deeper NNs give a richer set of outputs than shallow networks. Therefore, much of our analysis centers on deep ReLU networks.
The key takeaways from this paper are the following. For a fixed value of n, the outputs of ReLU networks depending on n parameters form a rich parametric family of CPwL functions. This manifold exhibits certain space filling properties (in the Banach space where we measure performance error), which are both a boon and a bottleneck. On one hand, space filling provides the possibility to approximate with relatively few parameters larger classes of functions than the classes that are currently approximated by classical methods. On the other hand, this flexibility comes at the expense of both the stability of the algorithm by which one selects the right parameters and the a priori performance guarantees and uncertainty quantifications of performance when using NNs in numerical algorithms. This points to the need for a comprehensive study of the trade-offs between stability of numerical algorithms based on NNs and their numerical efficiency.
This exposition is far from providing a satisfactory theory for approximation by NNs, even when we restrict ourselves to ReLU networks. We highlight several fundamental questions that remain unanswered. Their solution would not only lead to a better understanding of NN approximation but would most likely guarantee better performance in numerical algorithms. These issues include:
• matching upper and lower bounds for the rate of approximation of standard model classes when using ReLU networks;
• how to precisely describe the types of function classes that benefit from NN approximation;
• how to numerically impose stability in parameter selection;
• how the imposition of stability limits the performance of the network.

What is a Neural Network?
This section begins by introducing feed-forward neural networks and their elementary properties. We begin with a general setting and then specialize to the case of fully connected networks. While the latter networks are generally not the architecture of choice in most targeted applications, their architecture provides the most convenient way to understand the trade-offs between approximation efficiency and the complexity of the network. They also allow for a clearer picture of the balance between width and depth in the assignment of parameters.
In its most general formulation, a feed-forward neural network N is associated with a directed acyclic graph (DAG), called the architecture of N, determined by a finite set V of vertices and a finite set of directed edges E, in which every vertex v ∈ V must belong to at least one edge e ∈ E. The set V consists of three distinguished subsets. The first is the set I of input vertices. These vertices have no incoming edges and are placeholders for independent variables (i.e. network inputs). The second is the set O of output vertices. These vertices have no outgoing edges and will store, for given inputs, the corresponding value of the dependent variables (i.e. the network output). The third is the set of hidden vertices H = V \ (I ∪ O). For a given input, hidden vertices store certain intermediate values used to compute the corresponding output. The vertices and edges also have the following adornments: (1) With every v ∈ V \ I, there is an associated function σ_v : R → R, called an activation function, and a scalar b_v ∈ R, called a bias.
(2) For every e ∈ E, there is a scalar w e ∈ R, called a weight.
In going forward, we often refer to the vertices as nodes. The weights and biases are referred to as the trainable parameters of N. For a fixed network architecture, varying the values of these trainable parameters produces a family of output functions. The key to describing how these functions are constructed is that to each vertex v ∈ V \ I we associate a computational unit called a neuron. This unit takes as inputs the scalar outputs x_{v′} from vertices v′ ∈ V \ O with an edge e = (v′, v) ∈ E terminating at v, and outputs the scalar
x_v := σ_v ( ∑_{e=(v′,v)∈E} w_e x_{v′} + b_v ).   (2.1)
The word neuron comes from the fact that (2.1) can be viewed as a simple computational model for a single biological neuron. A neuron associated to a vertex v ∈ V \ I observes signals x_{v′} computed by upstream neurons associated to v′, takes a superposition of these signals, mediated by synaptic weights w_e, e = (v′, v), and outputs x_v, which is then seen by the downstream neurons. For all neurons associated to vertices v ∈ O, the activation function σ_v is the identity. The neuron associated to the i-th input vertex v ∈ I, i = 1, . . . , d, where d := |I|, observes a scalar incoming (i.e. externally provided) signal x_i and outputs x_i, which is then seen by the downstream neurons.
We view the network scalar inputs x_i, i = 1, . . . , d, as an independent variable x = (x_1, . . . , x_d) ∈ Ω ⊂ R^d, and define the output function S_N : Ω → R^{d′} of the network N by
S_N(x) := (x_v)_{v∈O}.   (2.2)
Thus, S_N is a function mapping Ω ⊂ R^d into R^{d′}, where d′ := |O|, called the output of N. Note that for a fixed network architecture G = (V, E), the outputs S_N form a family of functions, determined by the trainable parameters {w_e, b_v}, e ∈ E, v ∈ V \ I.

Fully Connected Networks
The preceding is a very general definition of neural networks and encompasses virtually all network architectures used in practice. In this article, however, we restrict our study to rather special examples of such networks, the so-called fully connected networks. The architecture of such a network is given by a directed acyclic graph whose vertices are organized into layers. Each vertex of every layer is connected via outgoing edges to all vertices from the next layer and to no other vertices from any other layer, see Figure 2.1. The zero-th layer, called the input layer, consists of all d := n_0 input vertices I, called inputs, where the i-th input receives the scalar signal x_i from outside the network. The combined input x := (x_1, . . . , x_d) forms the independent variable of the function S_N. The input layer is followed by the hidden vertices H, organized in L hidden layers, with the j-th layer consisting of n_j hidden vertices, j = 1, . . . , L. The integer n_j is called the width of the j-th layer. Finally, the (L + 1)-st layer, called the output layer, consists of all d′ := n_{L+1} output vertices O, called outputs. The output vector of such a fully connected network is the value S_N(x) ∈ R^{d′} of the function S_N for the input x.
As is customary, we specify that there is a single activation function σ that is used at each hidden vertex v ∈ H, i.e., σ v = σ for all v ∈ H. Recall that we always take the activation σ v at the output vertices v ∈ O to be the identity. In this way, each coordinate of S N (x) is a linear combination of the x v 's at layer L plus a bias term, which is a constant.
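The layer-by-layer computation just described can be made concrete in a few lines. The following is a minimal sketch in our own notation (the helper names `relu` and `forward` are not from the text): the activation is applied in every hidden layer, and the output layer is affine, as specified above.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def forward(x, weights, biases, activation=relu):
    # weights/biases hold W^(1),...,W^(L+1) and b^(1),...,b^(L+1);
    # sigma acts in every hidden layer, the output layer is affine
    z = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        z = activation(W @ z + b)
    return weights[-1] @ z + biases[-1]

# a network with width 3, depth 2, input dimension 2, output dimension 1
rng = np.random.default_rng(0)
shapes = [(3, 2), (3, 3), (1, 3)]
ws = [rng.standard_normal(s) for s in shapes]
bs = [rng.standard_normal(s[0]) for s in shapes]
y = forward([0.5, -1.0], ws, bs)
```

Varying the entries of `ws` and `bs` while keeping the shapes fixed traces out exactly the family of output functions discussed below.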
2.2. The Set Υ_{W,L}(σ; d, d′)

We will almost always consider only fully connected feed-forward NNs whose hidden layer widths are all the same, namely, n_1 = · · · = n_L = W. Note that we can embed any fully connected feed-forward NN into a network with constant width W := max_{j=1,...,L} n_j by inserting (W − n_j) additional zero bias vertices into layer j and adding new edges with weights set to 0 between these vertices and those in the next layer (if these vertices are in the first layer, we also add new edges with weights set to 0 between them and the input vertices). We use this fact frequently in what follows, sometimes without mentioning it. We refer to W as the width of the network and to L as its depth. In such networks, each vertex v from a hidden layer can be associated with a pair of indices (i, j), where j is the layer index and i is the row index of the location of v. We commonly refer to all vertices from a fixed row as a channel, and those from a fixed column as a layer. It is useful to introduce for every vertex v from the hidden layers the function z_v := z_{i,j}, which records how the value at this neuron depends on the original input x = (x_1, . . . , x_d) before the activation σ is applied. It follows that
σ(z_v(x_1, . . . , x_d)) := σ(z_{i,j}(x)) := X^{(j)}_i, i = 1, . . . , W; j = 1, . . . , L,   (2.5)
which is the value of the i-th coordinate of the vector X^{(j)} defined in (2.4).
For a fully connected feed-forward network N with width W, depth L, activation function σ, input dimension d, and output dimension d′, we define the set Υ_{W,L} := Υ_{W,L}(σ; d, d′) as the collection of all output functions S_N that we obtain by varying the choice of the trainable parameters of N. Recall that S_N is a mapping from R^d (or Ω ⊂ R^d) to R^{d′}. For notational simplicity, we often omit the dependence of Υ_{W,L} on σ, d and d′ when these are understood from the context. Figure 2.1 shows the graph associated to a typical network that outputs functions from Υ_{W,L}(σ; d, d′) with d = 2, W = 3, d′ = 1.
Figure 2.1. The graph associated to the outputs Υ 3,L (σ; 2, 1) of a fully connected network with input dimension 2, width 3, L hidden layers and output dimension 1.
Notice that Υ_{W,L} is closed under addition of weights and biases in the output layer. This follows immediately from (2.3). However, it is not closed under addition of functions, because we can find two outputs S_{N_1}, S_{N_2} from Υ_{W,L} with S_{N_1} + S_{N_2} ∉ Υ_{W,L}. This will become apparent even in our discussion of one layer ReLU networks, see §3. Therefore Υ_{W,L} is not a linear space. Each function S_N ∈ Υ_{W,L} is determined by
n(W, L) := (d + 1)W + W(W + 1)(L − 1) + d′(W + 1)   (2.6)
parameters, consisting of the entries of its weight matrices W^{(1)}, . . . , W^{(L+1)} and bias vectors b^{(1)}, . . . , b^{(L+1)}. We note in passing that it is possible that distinct choices of trainable parameters end up describing the same outputs. We order the parameters of N and organize them into a vector θ, where θ = θ_N ∈ R^n, n := n(W, L). The output S_N is then given by
S_N = M(θ_N),   (2.7)
where M : R^n → Υ_{W,L}(σ; d, d′) is the map from network weights and biases to output functions. In this way, we view Υ_{W,L} as a parametric manifold.
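The count (2.6) can be sanity-checked by enumerating weight-matrix and bias-vector entries layer by layer. The sketch below uses our own helper names and does exactly that.

```python
def n_params(W, L, d, d_prime):
    # the closed form (2.6)
    return (d + 1) * W + W * (W + 1) * (L - 1) + d_prime * (W + 1)

def n_params_counted(W, L, d, d_prime):
    # entries of each weight matrix (n_{j+1} x n_j) plus its bias vector
    widths = [d] + [W] * L + [d_prime]
    return sum(widths[j + 1] * widths[j] + widths[j + 1] for j in range(L + 1))

checks = [(W, L, d, p) for W in range(1, 6) for L in range(1, 5)
          for d in range(1, 4) for p in range(1, 4)]
ok = all(n_params(*c) == n_params_counted(*c) for c in checks)
```

For instance, the network of Figure 2.1 with W = 3, L = 2, d = 2, d′ = 1 has n(3, 2) = 25 trainable parameters.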
Here, we are using the term 'manifold' in a very loose sense because we are not attributing any of the topological or differential properties usually associated with this term. The mapping M is completely determined once we have fixed the architecture and the activation function σ. Therefore, having made these choices, designing approximation methods for a target function f boils down to choosing parameters θ =: a(f ) when f or information about f is given. In this way, any neural network based approximation method consists of determining a mapping a : f → a(f ) that assigns to each potential target function f a sequence of parameters a(f ) ∈ R n . The approximation to f is then given by A(f ) := M (a(f )). (2.8) Our main focus in this paper is to understand the approximation power of NNs and thus we work under the assumption that we have full access to the target function f . However, in the last two sections, we do make forays into the more realistic (numerical) settings where we are only provided (partial) information about f in terms of data observations, or we are only allowed to query f to gain information. This separation between the approximation setting and the numerical setting is important since it may be that we could approximate f well if we had unlimited access to f , but in reality we are limited by the information provided to us.
Fully connected feed-forward NNs are an important approximation tool that is amenable to theoretical analysis. In practice, the most common choice of activation function σ is the so-called rectified linear unit σ(t) = ReLU(t) := t + := max {0, t} .
This will constitute the main example of activation function studied in this article.

Fundamental Operations with Neural Networks
In this section, we discuss some fundamental operations that one can implement with NNs. Recall that Υ_{W,L} = Υ_{W,L}(σ; d, d′) is the set of functions that are outputs of a NN with the activation function σ, input dimension d, output dimension d′, and L hidden layers each of fixed width W.
Let us begin by pointing out that deep neural networks naturally allow for two fundamental operations, parallelization and concatenation, which we will often use.
Parallelization: If the NNs N_j have width W_j, depth L, input dimension d, output dimension d′, and an activation function σ_j, j = 1, . . . , m, then the parallelization of these networks is a new network PAR(N_1, . . . , N_m) with width W = W_1 + . . . + W_m, depth L, input dimension d and output dimension d′. Its graph is obtained by placing the hidden layers of the N_j on top of one another. The parallelized network can output any linear combination S = ∑_{j=1}^m α_j S_j, where S_j ∈ Υ_{W_j,L}(σ_j; d, d′), j = 1, . . . , m. As described above, the network PAR(N_1, . . . , N_m) does not have full connectivity, since the nodes of N_j are not connected to the nodes of N_i, i ≠ j. However, we can view the resulting network as a fully connected network by completing it, that is, by adding the missing edges and assigning to them zero weights.
To describe network concatenation, let us agree that, given m functions f_j : R^{d_j} → R^{d_{j+1}}, we define their composition f_m • · · · • f_1 : R^{d_1} → R^{d_{m+1}} by
(f_m • · · · • f_1)(x) := f_m(f_{m−1}(· · · f_1(x))).   (2.9)
If f : R^d → R^d, we also introduce the notation f^{•m} := f • · · · • f, where the composition is performed m − 1 times.
Concatenation: If the NNs N_j have width W_0, depth L_j, input dimension d_j, output dimension d_{j+1}, and activation functions σ_j, j = 1, . . . , m, then the concatenation of these networks is a network CONC(N_1, . . . , N_m) with width W_0, depth L = ∑_{j=1}^m L_j, input dimension d_1 and output dimension d_{m+1}. Its graph is obtained by placing the hidden layers of these networks side by side with full connectivity between the hidden layers of N_j and N_{j+1}. The concatenated NN can output any composition S = S_m • S_{m−1} • · · · • S_1, where the functions S_j ∈ Υ_{W_0,L_j}(σ_j; d_j, d_{j+1}), j = 1, . . . , m. It does this by assigning weights and biases, associated to edges connecting the last hidden layer of an N_j to a node of the first hidden layer of the neighbor N_{j+1}, using the output weights and biases of N_j and input weights and biases of N_{j+1}.
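For one-hidden-layer ReLU networks, parallelization amounts to simple block operations on the weight matrices. The sketch below (our own notation; a linear combination α_1 S_1 + α_2 S_2 would be obtained by first absorbing the α_j into the output weights) verifies that the stacked network outputs the sum of the individual outputs.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def eval1(x, W1, b1, A, c):
    # one-hidden-layer net: x -> A @ relu(W1 @ x + b1) + c
    return A @ relu(W1 @ np.asarray(x, float) + b1) + c

def parallelize(n1, n2):
    # stack the hidden layers: block rows for the first-layer weights,
    # concatenated columns for the output weights, summed output biases
    (W1, b1, A1, c1), (W2, b2, A2, c2) = n1, n2
    return (np.vstack([W1, W2]), np.concatenate([b1, b2]),
            np.hstack([A1, A2]), c1 + c2)

rng = np.random.default_rng(1)
n1 = (rng.standard_normal((3, 2)), rng.standard_normal(3),
      rng.standard_normal((1, 3)), rng.standard_normal(1))
n2 = (rng.standard_normal((4, 2)), rng.standard_normal(4),
      rng.standard_normal((1, 4)), rng.standard_normal(1))
x = rng.standard_normal(2)
s_par = eval1(x, *parallelize(n1, n2))
```

Concatenation would instead feed the output of the first net into the first-layer weights of the second, exactly as described above.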
Parallelization and concatenation of neural networks allow us to perform the following operations between their outputs.
Dilation and shift: if S ∈ Υ_{W,L}(σ; d, d′), then for any a ∈ R and c ∈ R^d, the function T(x) := S(ax + c) also belongs to Υ_{W,L}(σ; d, d′). To prove this, let N be the NN which outputs the function S. To output the function T, it is enough to alter the weights and biases of the first hidden layer of N. Namely, if a neuron from this layer computes σ(w · x + b), w ∈ R^d, b ∈ R, we replace it with σ(aw · x + w · c + b). Here, x · x′ denotes the inner product of two vectors x, x′ of the same dimension. The remaining layers are unchanged.
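The first-layer substitution in this proof is easy to verify numerically. The following sketch (our notation, one hidden layer for simplicity) checks that evaluating the original net at ax + c agrees with evaluating the modified net at x.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def eval1(x, W1, b1, A, c0):
    return A @ relu(W1 @ np.asarray(x, float) + b1) + c0

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
A, c0 = rng.standard_normal((1, 4)), rng.standard_normal(1)

a, c = 1.7, np.array([0.3, -0.5])   # dilation factor and shift vector
x = rng.standard_normal(2)

# T(x) = S(a x + c), realized by replacing (W1, b1) with (a W1, W1 c + b1)
lhs = eval1(a * x + c, W1, b1, A, c0)
rhs = eval1(x, a * W1, W1 @ c + b1, A, c0)
```

The agreement of `lhs` and `rhs` reflects the identity aW1 x + W1 c + b1 = W1(ax + c) + b1, applied neuron by neuron.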

One Layer Neural Networks
The function S_N ∈ Υ_{W,1}(σ; d, 1) produced by a single hidden layer, fully connected feed-forward neural network N with activation function σ, d inputs and one output has the representation
S_N(x) = b_0 + ∑_{j=1}^W a_j σ(w_j · x + b_j),
where W is the width of the first (and only) hidden layer and b_0 is the bias of the output node. The above can equivalently be written as
S_N(x) = b_0 + ∫ σ(w · x + b) dμ(w, b), μ := ∑_{j=1}^W a_j δ_{(w_j, b_j)},
where δ_z denotes the mass one atomic measure at the point z. The correspondence between purely atomic Borel measures and Υ_{W,1}(σ; d, 1) is useful in addressing various structural properties of this set via functional analytic arguments. As an example of this, we discuss briefly the density question of whether, for each continuous function f defined on a compact set Ω ⊂ R^d, we have
dist(f, Υ_{W,1}(σ; d, 1))_{C(Ω)} → 0, W → ∞,   (2.12)
where for the current discussion the distance is measured in the uniform norm ‖f‖_{C(Ω)} := sup_{x∈Ω} |f(x)|. This question is discussed in detail in (Pinkus 1999), see also (Cybenko 1989), (Petersen 2020). Here we only point out some key results. Note that (2.12) does not hold for every activation function σ. For example, if σ = P is a univariate polynomial of degree m, then σ(w · x + b) with w ∈ R^d, b ∈ R is a multivariate polynomial in x = (x_1, . . . , x_d) of total degree m and hence Υ_{W,1}(σ; d, 1) ⊂ X, where X is a linear space of fixed finite dimension. Thus, (2.12) does not hold.
A sufficient condition: If σ is a continuous function on R such that, for each finite, signed regular Borel measure μ ≠ 0 on Ω, the function
(w, b) ↦ ∫_Ω σ(w · x + b) dμ(x), w ∈ R^d, b ∈ R,
is not identically zero, then the density condition (2.12) holds. This condition can be used to prove the following examples of activation functions for which (2.12) holds.
• Sigmoidal activation function: A function σ, defined and continuous on R, is called a sigmoidal function if lim_{t→∞} σ(t) = 1 and lim_{t→−∞} σ(t) = 0. For each such σ the density statement (2.12) holds, see (Cybenko 1989) for one of the first proofs in this case.

ReLU Networks
In this section, we summarize what is known about the outputs of NNs with ReLU activation (ReLU networks). We begin by making general remarks that hold for any ReLU network and then turn to special cases, especially those that form our main interest of study in this paper. Perhaps the most important structural property of ReLU networks is that any output of such a network is a continuous piecewise linear function. To describe this precisely, we start with the following definitions.
Definition 3.1. A polytope partition of R^d is a finite collection P = {P_j} of convex closed d-dimensional polytopes (not necessarily bounded) which are exhaustive and have disjoint interiors P^o_j, that is,
∪_j P_j = R^d and P^o_i ∩ P^o_j = ∅, i ≠ j.
Each such convex polytope is the intersection of a finite number of closed half spaces. We refer to the polytopes P_j of such a partition as cells.
Definition 3.2. A function S : R^d → R is a continuous piecewise linear (CPwL) function if S is globally continuous and there is a polytope partition P = {P_j} on which S is locally affine, that is, the restriction of S to each cell P_j agrees with an affine function. We then say that S is subordinate to the polytope partition P.
We denote by Σ n,d := Σ n,d (CPwL) the collection of all CPwL functions S : R d → R that are subordinate to some polytope partition with at most n cells. This collection is a nonlinear set. For example, if S 1 and S 2 are subordinate to different partitions of size n then the sum S 1 + S 2 is typically not in Σ n,d . Going forward in this paper, we do not study Σ n,d but only use it for comparison purposes. Note that if a CPwL function S is subordinate to P then it is also subordinate to any refinement of P.
In the special case when d = 1, polytope partitions of R are simply decompositions of R into intervals with disjoint interiors, and thus Σ n,1 is in fact the set of all univariate continuous linear free-knot splines with at most n − 1 break points.
Theorem 3.3. Let N be a ReLU network with d inputs, one output node, and m hidden neurons. Then, the output S_N of N is a CPwL function subordinate to a partition P_N with at most 3^m cells, i.e., #P_N ≤ 3^m.
Proof: Let us denote by z_1(x), . . . , z_m(x) the pre-activations of the network's neurons, that is, the values stored at the neurons for input x before ReLU is applied. For every activation pattern ν = (ν_1, . . . , ν_m) ∈ {−1, 0, 1}^m, we define
Ω_ν := {x ∈ R^d : sgn(z_j(x)) = ν_j, j = 1, . . . , m},   (3.1)
where for the purpose of this formula sgn(0) := 0. By construction, each Ω_ν is the (possibly empty) collection of all inputs x ∈ R^d at which the network neurons have a given pattern of being on, off, or zero, prescribed by ν. A simple inductive argument, see Lemma 7 in (Hanin & Rolnick 2019), shows that Ω_ν is a convex polytope. Moreover, defining P_ν to be the closure of Ω_ν, we see that the collection P_N := {P_ν : Ω_ν ≠ ∅} is a polytope partition of R^d and that S_N is a CPwL function subordinate to this partition.
Having established that each output of a ReLU network is a CPwL function, it is of interest to give bounds for the number of cells in such a partition. The above theorem gives a bound 3^m. However, many of the cells Ω_ν, defined in (3.1), are either empty or have dimension smaller than d. We shall see as we proceed in this section, that this bound can be improved in the cases of interest to us. At this stage, let us just mention the following almost trivial result.
Claim. Consider a fixed architecture for neural networks with m neurons as above. Let S(·; θ) be the outputs of a ReLU network with parameters θ ∈ R^n. Then, outside a set of measure zero in R^n, any selection of parameters results in an S(·; θ) which is subordinate to a partition with at most 2^m cells.
This claim is proved by showing that outside of a set of measure zero in parameter space R n , all cells Ω ν defined in (3.1) are empty or have dimension < d whenever one of the components ν j of ν is zero.
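This behavior is easy to observe empirically. The sketch below (our own setup) samples many inputs of a random one-hidden-layer net with m neurons and records which neurons are on or off; for generic parameters the sampled patterns are ±1 vectors and their number never exceeds 2^m.

```python
import numpy as np

rng = np.random.default_rng(3)
m, d = 5, 2
W1, b1 = rng.standard_normal((m, d)), rng.standard_normal(m)

# for each sampled input, record which neurons are on (+1) or off (-1);
# for generic parameters an exact zero pre-activation has probability 0
X = rng.uniform(-10.0, 10.0, size=(100000, d))
signs = np.sign(X @ W1.T + b1).astype(int)
patterns = {tuple(row) for row in signs}
```

The number of distinct sampled patterns is a lower bound on the number of full-dimensional cells of the partition associated with these parameters.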
Our purpose in the remainder of this section is to explore the properties of both the polytope partitions created by ReLU networks and the complexity of the CPwL functions that they output. We start in §3.1 by studying in detail ReLU networks with input and output dimension 1, postponing a discussion of higher input dimensions to §3.2.

Univariate ReLU Networks
In this section, we consider ReLU networks with input and output dimensions both equal to one, that is, d = d = 1. In this case, the polytope partitions of R are simply decompositions of R into a finite collection of intervals with disjoint interiors, and the CPwL functions subordinate to such partitions are customarily referred to as continuous linear free-knot splines.

Deep Univariate ReLU Networks
According to the discussion at the start of §3, any function from the set Υ_{W,L} := Υ_{W,L}(ReLU; 1, 1) is a CPwL function on R. It is of interest to understand exactly which CPwL functions are in this set. We shall see that such a characterization is rather straightforward when L = 1, but the situation gets more complicated as L gets larger. When L = 1, any selection of weights and biases produces as output a CPwL function S with at most W breakpoints. Indeed, S can be expressed as S(t) = b_0 + ∑_{j=1}^W a_j η_j(t), where the functions η_j(t) = (±t + b_j)_+. Obviously, the bound W cannot be improved. Although Υ_{W,1} does not contain all of Σ_{W+1,1}, it does contain all of Σ_{W,1}, see (3.2).
When L > 1, the situation gets much more complicated. Even though there is no precise characterization of the set of outputs, we can provide some important insight. When L grows, two important things happen: (i) the number of breakpoints of functions from Υ_{W,L} can be exponential in L; (ii) not every CPwL function with this large number of breakpoints is in Υ_{W,L}, in fact, far from it.
We first address (i). Fix W and let S ∈ Υ_{W,L} = Υ_{W,L}(ReLU; 1, 1). We define m(L) as the largest number of breakpoints that any S ∈ Υ_{W,L} can have. We know that m(1) = W. Moreover, once the parameters are chosen for the first layer, any output S has breakpoints in a fixed set Λ of cardinality at most W. We can bound m(L + 1) in terms of m(L) as follows. Each S ∈ Υ_{W,L+1} can be expressed as
S = b_0 + ∑_{k=1}^W a_k [S_k]_+, S_k ∈ Υ_{W,L}(ReLU; 1, 1).
There is a set Λ, #(Λ) ≤ m(L), such that each of the S_k have their breakpoints in Λ. Fix k and consider the function [S_k]_+. It has two types of breakpoints. One are those it inherited from Λ and the second is the set Λ_k of new breakpoints that arose after the application of ReLU. We have #(Λ_k) ≤ #(Λ) + 1 ≤ m(L) + 1, k = 1, . . . , W. Hence, S has at most m(L) + W(m(L) + 1) breakpoints. It follows that
m(L + 1) ≤ (W + 1)m(L) + W.
This recursion with the starting value m(1) = W gives the bound m(L) ≤ (W + 1)^L, L = 1, 2, . . .. This bound can be improved somewhat at the expense of a more involved argument. This potential exponential growth of the number of breakpoints as a function of the number of neurons can in fact be attained. A simple example, first noted by Telgarsky (Telgarsky 2016), is to compose the hat function H = H_{(0,1/2,1)} on [0, 1], see (3.4), with itself (L − 1) times. The resulting function S := H^{•L} is the sawtooth function with 2^{L−1} teeth, see Figure 3.2. Since H ∈ Υ_{2,1}(ReLU; 1, 1), it follows from Composition that S ∈ Υ_{2,L}(ReLU; 1, 1). While a function in Υ_{W,L}(ReLU; 1, 1) can have an exponential (in L) number N_L of breakpoints, one should not be deluded into thinking that this set of functions is anywhere close to Σ_{N_L,1}, which was the point in (ii). The reason for this is that there are linear dependencies between the linear pieces in the case of a large number of breakpoints. There is not yet a good understanding of exactly which CPwL functions are in Υ_{W,L}(ReLU; 1, 1) when L is large.
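Telgarsky's example is easy to reproduce numerically. The sketch below (our implementation; on [0, 1] the hat can be written with two ReLU units) composes H with itself and counts the breakpoints of the resulting sawtooth on a dyadic grid, where the arithmetic is exact.

```python
import numpy as np

def hat(t):
    # the hat H on [0,1]: H(0) = H(1) = 0, H(1/2) = 1, written with two
    # ReLU units as H(t) = 2 relu(t) - 4 relu(t - 1/2) for t in [0, 1]
    r = lambda u: np.maximum(u, 0.0)
    return 2.0 * r(t) - 4.0 * r(t - 0.5)

def sawtooth(t, L):
    # H composed with itself L - 1 times, i.e. H^{•L}
    s = np.asarray(t, dtype=float)
    for _ in range(L):
        s = hat(s)
    return s

L = 4
N = 2 ** (L + 3)                 # dyadic grid fine enough to resolve all pieces
t = np.arange(N + 1) / N
y = sawtooth(t, L)
slopes = np.diff(y) * N
n_breaks = int(np.count_nonzero(np.diff(slopes)))
```

With L = 4 the sawtooth has 2^{L−1} = 8 teeth and 2^L − 1 = 15 interior breakpoints, produced by a network with only 2L neurons.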
A possible starting point to unravel this is to consider special cases such as breakpoints in [0, 1] at the dyadic rationals j2^{−n}, j = 0, 1, . . . , 2^n, with n > 1.

Multivariate ReLU Networks
We now turn to studying the properties of ReLU networks with input dimension d > 1, starting with those networks that have one hidden layer. Deeper multivariate ReLU networks are discussed in §3.2.2.
3.2.1. Multivariate ReLU Networks with one hidden layer

A ReLU network N with input dimension d > 1, output dimension 1, and one hidden layer of width W outputs a function of the form
S_N(x) = b_0 + ∑_{j=1}^W a_j σ(z_j(x)),   (3.8)
where b_0, a_j ∈ R, j = 1, . . . , W, and z_j(x) = w_j · x + b_j is the function computed by the j-th neuron before applying ReLU. Let us record the following useful fact: every S ∈ Υ_{W,1}(ReLU; d, 1) can be represented in the form (3.8), possibly with fewer terms, in which each w_j is a unit vector.
This follows by removing the zero weight vectors and normalizing the remaining w_j's by adjusting the constants b_j and a_j. We use this representation of functions in Υ_{W,1}(ReLU; d, 1) in going forward. Given the collection of W unit vectors w_j ∈ R^d and biases b_j ∈ R from (3.8), we define the hyperplanes
H_j := {x ∈ R^d : w_j · x + b_j = 0}, j = 1, . . . , W,
and the collection H := {H_1, . . . , H_W}, associated to the network N. This collection is an example of a hyperplane arrangement, a classical subject in combinatorics (Stanley et al. 2004). We now describe how the hyperplane arrangement H associated to N determines the polytope partition of R^d to which S_N is subordinate in the sense of Definition 3.2. Because of our assumption that each w_j has unit norm, only activation patterns with entries ν_j ∈ {±1} lead to cells with nonempty interiors. For any such activation pattern ν = (ν_1, . . . , ν_W), we may write, in the notation of (3.1),
Ω_ν = ∩_{j=1}^W {x ∈ R^d : ν_j(w_j · x + b_j) > 0},
as an intersection of half-spaces. Thus, in the special case of Υ_{W,1}, the cells in partitions P of the output functions have a simple global description as the connected components of R^d when the hyperplanes are removed. The partition P associated to H is the collection of the closures P_ν := Ω̄_ν for which Ω_ν ≠ ∅. Each cell P_ν is a convex closed polytope. By a special case of a classical result of Zaslavsky (Zaslavsky 1975), the number of cells #P in P satisfies
#P ≤ ∑_{k=0}^d (W choose k).
In fact, Zaslavsky's theorem shows that away from a co-dimension 1 set of weights and biases (i.e. when the hyperplanes are in general position), this upper bound is attained. In summary, any function S ∈ Υ_{W,1}(ReLU; d, 1) is a CPwL function subordinate to a partition P, generated by an arrangement of W hyperplanes determined by the weights and biases of the network N. However, it is important to note that, unlike the case of one hidden layer with input dimension 1, not every CPwL function subordinate to a polytope partition arising from a hyperplane arrangement is the output of a one layer ReLU network.
There are several ways to see this that we now discuss.
First of all, let us show that $\Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$ does not contain any nonzero compactly supported function on $\mathbb{R}^d$ once d > 1. To see this, consider a function S of the form (3.8) and suppose that S has compact support in $\mathbb{R}^d$. For each $j = 1, \dots, W$, there is a ball $B_j \subset \mathbb{R}^d$ outside of the support of S that intersects the hyperplane $H_j$ but none of the other hyperplanes $H_i$, $i \neq j$. On $B_j$ we have
$$0 = S = L + a_j \eta_j,$$
where $\eta_j(x) := (w_j \cdot x + b_j)_+$ and L is an affine function (the sum of $b_0$ and of the terms $a_i \eta_i$, $i \neq j$, each of which is affine on $B_j$). Since L and $\eta_j$ are linearly independent on $B_j$, this implies $a_j = 0$. Hence, all $a_j$ are zero and S is a constant. Since S was assumed to have compact support, this constant is zero.
Another way to see that $\Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$ does not contain all CPwL functions subordinate to a given hyperplane arrangement is the following. Once $\mathcal{H} = \{H_1, \dots, H_W\}$ is chosen, thereby determining the partition $\mathcal{P}$, the outputs of $\Upsilon^{W,1}$ that are subordinate to $\mathcal{P}$ are all contained in a linear space of dimension 2W + 1. This follows from the representation (3.8). Indeed, since we have assumed that $\|w_j\| = 1$ for each j, there are only two choices of $(w_j, b_j)$, namely $\pm(w_j, b_j)$, for the functions $z_j(x) = w_j \cdot x + b_j$, $j = 1, \dots, W$, such that $H_j = \{x : z_j(x) = 0\}$. Hence every such output lies in the span of the 2W + 1 functions $1, (z_j)_+, (-z_j)_+$, $j = 1, \dots, W$. However, when d > 1, Zaslavsky's theorem shows that the number of cells in $\mathcal{P}$ can grow as fast as $CW^d$ when $W \ge d$. Hence, in general, the set of all CPwL functions subordinate to $\mathcal{P}$ is a linear space with dimension much larger than 2W + 1.
The following lemma gives a simple way of checking when a CPwL function subordinate to a partition generated by an arrangement of W hyperplanes is in $\Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$. Before formulating the lemma, let us note that if T is a CPwL function subordinate to $\mathcal{P}$, then on any cell $P_\nu$ of $\mathcal{P}$ the gradient $\nabla T$ is a constant vector. It follows that $\nabla T$ is a piecewise constant vector-valued function subordinate to $\mathcal{P}$.
Lemma 3.4. Let $\mathcal{P}$ be a partition of $\mathbb{R}^d$ generated by a hyperplane arrangement $\mathcal{H} = \{H_1, \dots, H_W\}$, where $H_j = \{x : w_j \cdot x + b_j = 0\}$ and $w_j$ is a unit vector, $j = 1, \dots, W$. Let T be a CPwL function that is subordinate to $\mathcal{P}$. Then T has the representation
$$T = S + L, \qquad S \in \Upsilon^{W,1}(\mathrm{ReLU}; d, 1), \quad L \text{ globally affine},$$
if and only if the following condition holds: (A) For each $j = 1, \dots, W$, there is a real number $a_j$ such that for every $x \in \mathbb{R}^d$ lying on the hyperplane $H_j$, and on no other hyperplane, the jump in $\nabla T$ across $H_j$ at x equals $a_j w_j$.
Proof: First, let T = S + L with S and L as above. We know that the function $S \in \Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$ has the representation (3.8) with the $w_j$'s being unit vectors. Given $x \in \mathbb{R}^d$ that belongs to $H_j$ and to no other hyperplane, the jump in $\nabla T$ at x is the same as that of $a_j \nabla \eta_j$ at x, which is $a_j w_j$. This shows that T satisfies (A). For the converse, suppose that T is any CPwL function that is subordinate to $\mathcal{P}$ and that T satisfies condition (A). We define $S := \sum_{j=1}^{W} a_j \eta_j \in \Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$, where the $a_j$ are given by (A) and $\eta_j(x) := (w_j \cdot x + b_j)_+$. Consider the function T − S, which is CPwL subordinate to the partition $\mathcal{P}$. We claim that this function is globally affine. Indeed, otherwise there would be two adjacent cells which share a (d − 1)-dimensional boundary (which is part of some $H_j$) such that the jump of $\nabla(T - S)$ across this boundary is not zero. But both T and S have the same jump $a_j w_j$ of their gradients across this boundary. This is a contradiction and proves the lemma.
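The jump condition (A) is easy to check numerically for a hand-built example. The sketch below (our own, with hypothetical weights) evaluates the gradient of $S(x) = \sum_j a_j (w_j \cdot x + b_j)_+$ on both sides of a point lying on $H_1$ only, and confirms that the jump equals $a_1 w_1$.

```python
# S(x) = sum_j a_j (w_j . x + b_j)_+ with unit normals w_j; Lemma 3.4 says the
# jump of grad S across H_j (at points on no other hyperplane) equals a_j w_j.

def grad_S(x, params):
    # gradient of S at x: sum of a_j w_j over the active neurons (w_j.x + b_j > 0)
    g = [0.0, 0.0]
    for a, w, b in params:
        if w[0] * x[0] + w[1] * x[1] + b > 0:
            g[0] += a * w[0]
            g[1] += a * w[1]
    return g

params = [(2.0, (1.0, 0.0), 0.0),   # H_1 : x_1 = 0
          (3.0, (0.0, 1.0), -5.0)]  # H_2 : x_2 = 5

eps = 1e-6
# the point (0, 0) lies on H_1 and far from H_2
right = grad_S((eps, 0.0), params)
left = grad_S((-eps, 0.0), params)
jump = (right[0] - left[0], right[1] - left[1])
assert jump == (2.0, 0.0)   # equals a_1 w_1, as condition (A) predicts
```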

Deep Multivariate ReLU Networks
The discussion at the beginning of §3 showed that any S ∈ Υ W,L (ReLU; d, 1) is a CPwL function subordinate to a partition P of R d into convex polytopes. The partition we produced to show this was not determined by a hyperplane arrangement. It turns out, as we shall see in §3.3.3, that S is always subordinate to some partition given by a hyperplane arrangement. However, the latter partition is not a minimal partition to which S is subordinate. In other words, unlike the case of L = 1, the minimal polytope partition of R d to which S is subordinate is not simply given by the cells of a hyperplane arrangement.
Given $S \in \Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$, we do not know the best bound for the number of cells in a minimal convex polytope partition to which S is subordinate, but we can give some bounds. Recall that from (3.1) we have the bound $3^{WL}$. We also stated that in the generic case this bound can be improved to $2^{WL}$. Indeed, in the generic case, this partition consists of the closures of those convex sets which have a nonempty interior. We continue to write $z_j(x)$ for the CPwL function computed by the $j$-th neuron in N before ReLU is applied, and we have assumed for simplicity that for every neuron $z_j$ the sets
$$H_j := \{x \in \mathbb{R}^d : z_j(x) = 0\}$$
have co-dimension at least 1. It is important to note that the $H_j$'s are no longer hyperplanes, since the functions $x \mapsto z_j(x)$ are not affine. Instead, $H_j$ is the zero level set of $z_j$ and, following the language in (Hanin 2019), we refer to the $H_j$ as bent hyperplanes and to the collection $\mathcal{H}$ of all of them as a bent hyperplane arrangement. We can now describe the cells in the partition $\mathcal{P}$, or equivalently the cells that are the connected components of $\mathbb{R}^d \setminus \mathcal{H}$.
To understand this setting more clearly, let us consider a neuron z in the second hidden layer of N. Note that the function $x \mapsto z(x)$ is the output of an element of $\Upsilon^{W,1}$. Hence, it is CPwL subordinate to the partition defined by the hyperplane arrangement $\mathcal{H}^{(1)} = \{H_1, \dots, H_W\}$ created by the neurons $z_1, \dots, z_W$ in the first hidden layer of N. On each cell C of the arrangement $\mathcal{H}^{(1)}$, the function $x \mapsto z(x)$ is affine. Let $H_z$ denote the bent hyperplane associated with this neuron z from the second layer. We see that $H_z \cap C$ is given by the (possibly empty) intersection of a single hyperplane with C. However, because $x \mapsto z(x)$ is a different affine function on different cells, its zero set $H_z$ may "bend" at the boundary between two cells and is not given globally by a single hyperplane. More is true: while in every cell $H_z$ coincides with a single hyperplane, globally it may have several connected components.
Just as in the case of univariate ReLU networks considered in §3.1, the number of cells defined by the bent hyperplane arrangement $\mathcal{H}$ in a deep ReLU network with any input dimension can grow exponentially with depth. In fact, see (Montufar, Pascanu, Cho & Bengio 2014, Theorem 5), there are ReLU networks of depth L and width $W \ge d$ giving rise to partitions with at least
$$\lfloor W/d \rfloor^{(L-1)d} \sum_{k=0}^{d} \binom{W}{k}$$
cells. Similarly to the univariate case, the exponential growth in the number of pieces of the CPwL functions produced by deep networks is a consequence of composition.
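The role of composition is easiest to see in one dimension. The classical sawtooth example below (a standard illustration, not taken verbatim from the text) composes the hat function $h(x) = 2x_+ - 4(x - 1/2)_+$ with itself k times and counts the resulting linear pieces on [0, 1], which double with each composition.

```python
relu = lambda t: max(t, 0.0)

def sawtooth(x, k):
    # compose the hat h(x) = 2 relu(x) - 4 relu(x - 1/2) with itself k times
    for _ in range(k):
        x = 2.0 * relu(x) - 4.0 * relu(x - 0.5)
    return x

k = 4
N = 1 << 12                      # uniform dyadic grid on [0, 1], fine enough for k = 4
vals = [sawtooth(i / N, k) for i in range(N + 1)]
# slopes on each grid interval (all quantities are dyadic, hence exact in floats)
slopes = [round((vals[i + 1] - vals[i]) * N) for i in range(N)]
pieces = 1 + sum(1 for i in range(N - 1) if slopes[i] != slopes[i + 1])
assert pieces == 2 ** k          # the number of linear pieces doubles per composition
```

A shallow network needs width proportional to the number of pieces to produce the same function, which is the univariate prototype of the depth advantage.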
Let us summarize what we know about the sets $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$. Each $S \in \Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ is a CPwL function on a finite partition of $\mathbb{R}^d$ into convex polytopes. The number of cells in the partition for S can be very large compared to the number of parameters n(W, L). For example, when L = 1 the number of cells can be of order $W^d$, and as L grows the number of cells can grow exponentially with respect to L. Although the number of cells is large, not every CPwL function subordinate to such a partition is in $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$, since there are linear dependencies imposed on the affine pieces. However, every CPwL function subordinate to a partition into convex polytopes is eventually in the $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ spaces, provided we take L and W large enough, see CPwL1 and CPwL2 in §3.3.3. Let us also repeat the fact that every convex polytope is the intersection of a finite number of half-spaces given by a suitable hyperplane arrangement. Finally, by refining partitions, we see that every S in one of the spaces $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ is a CPwL function on a partition given by a hyperplane arrangement, but the number of these hyperplanes may be huge.

Properties of deep ReLU networks
Deep ReLU networks have a variety of remarkable properties that make their outputs a powerful approximation tool. We describe some of these properties in the present section. We study the sets Υ W,L := Υ W,L (ReLU; d, 1), where W is generally fixed and L is allowed to vary. We begin by introducing a special class of ReLU networks that are effective in the construction of numerical approximation methods.

Special networks
We describe in this section a set of NNs that we call special networks, which designate certain channels for specific tasks. We introduce the notation $\eta_{i,j}$ for the function $x \mapsto \eta_{i,j}(x)$ of the initial input x at the (i, j)-th node. Usually, $\eta_{i,j} = [z_{i,j}]_+$, that is, $\eta_{i,j}$ is the CPwL function computed by the (i, j)-th neuron after ReLU is applied, but in special networks we sometimes do not apply the ReLU activation at certain nodes. However, as we shall see, the outputs of a special network, when restricted to a bounded domain, are still functions in $\Upsilon^{W,L}$.
In a special network, we reserve the top d channels simply to push forward the input values of x. Namely, channel $i \in \{1, \dots, d\}$ has
$$\eta_{i,j}(x) = x_i \quad \text{at every layer } j,$$
where $x = (x_1, \dots, x_d)$ is the initial input. This allows us to use x as an input to the computation performed at any later layer of the network. We refer to such channels as source channels (SC). In a special network, we also designate some channels, called collation channels (CC), that simply aggregate the values of certain intermediate computations. In a collation channel the ReLU activation may or may not be applied. The key point here is that nodes from both collation and source channels may be ReLU-free. Therefore, such networks are not true ReLU networks.
We denote by $\overline{\Upsilon}^{W,L}$ the set of functions S which are the outputs of a special network of width W and depth L. A useful observation is that the functions that are outputs of a special network, when restricted to a bounded domain, are in $\Upsilon^{W,L}$, that is,
$$\overline{\Upsilon}^{W,L}\big|_K \subset \Upsilon^{W,L}\big|_K \quad \text{for every compact } K \subset \mathbb{R}^d. \qquad (3.10)$$
This is proved using the following observations.
• Given any configuration of weights and biases in any collation channel of a special network and any fixed compact set K of inputs, we may choose a sufficiently large value $b_{i,j}$ associated to the (i, j)-th node so that $z_{i,j}(x) + b_{i,j} > 0$ for all $x \in K$. Then, we construct the true ReLU network by assigning to this node the function
$$\eta_{i,j}(x) := [z_{i,j}(x) + b_{i,j}]_+ = z_{i,j}(x) + b_{i,j}, \qquad x \in K.$$
The effect of $b_{i,j}$ on any subsequent computation is then eliminated by adding an extra bias (to the bias present from the special network) at any neuron of the next layer to which the output $\eta_{i,j}(x)$ is passed. We perform this procedure for every ReLU-free node in the collation channels.
• The nodes in the source channels are treated similarly.
• The ReLU network that has been constructed has the same output as that of the special network we started with.
This specific trick works only when K is compact. Alternatively, we can create a true ReLU network of width $W = W_0 + 2d + 2k$, where $W_0 + d + k$ is the width of the special network with d source and k collation channels, by using the identity $t = t_+ - (-t)_+$. This approach works for arbitrary inputs, but at the expense of increasing the width of the network.
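A minimal numeric check of the bias trick described above: on a compact set K = [−B, B], a ReLU-free (identity) node can be emulated by a true ReLU node, since $(t + B)_+ - B = t$ whenever $t \ge -B$; the compensating subtraction of B is what the extra bias in the next layer accomplishes.

```python
relu = lambda t: max(t, 0.0)
B = 10.0                          # large enough for the compact input set
for t in [-10.0, -3.25, 0.0, 7.5, 10.0]:
    assert relu(t + B) - B == t   # ReLU node reproduces the identity on K
# outside K the emulation fails, which is why the trick needs K compact:
assert relu(-12.0 + B) - B == -10.0   # not -12.0
```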
In what follows, we make extensive use of special networks to derive important properties of deep networks, since they facilitate many of the constructions.

Some important properties of deep ReLU networks
As noted in the Observation from §3.2 in the case of one-hidden-layer networks, the vector w consisting of all incoming weights into any hidden node of a ReLU network, if nonzero, can be taken to have Euclidean norm $\|w\|_2 = 1$. Indeed, this follows from the equality
$$(w \cdot x + b)_+ = \|w\|_2 \left( \frac{w}{\|w\|_2} \cdot x + \frac{b}{\|w\|_2} \right)_+$$
and the fact that the factor $\|w\|_2$ can be absorbed into the outgoing weights. Next, we return to the Addition Property. Earlier we showed that we can add functions in $\Upsilon^{W,L}(\sigma; d, d')$ by increasing the width of the network using the method of parallelization. Here, we want to observe that addition can also be performed by increasing depth without significantly enlarging width. Here is a statement to that effect.
Addition by increasing depth: If $S_j \in \Upsilon^{W,L_j}$, $j = 1, \dots, m$, then for any $\alpha_j \in \mathbb{R}$ we have
$$\sum_{j=1}^{m} \alpha_j S_j \in \Upsilon^{W+d+1,\,L}, \qquad L := L_1 + \cdots + L_m.$$
In this statement all $S_j$'s are viewed as functions on $[0,1]^d$ (or any bounded rectangle $R \subset \mathbb{R}^d$).
Indeed, if N 1 , . . . , N m , are the ReLU networks that produce the S j 's, then we create from these the following special network. First, we augment each of the N j 's by adding d source channels, and one collation channel. We denote this new augmented network by N j . Next, we place the hidden layers of the augmented networks side by side, connect the source channels of the N j 's and place (with appropriate weights) the outputs of N j , j = 1, . . . , m − 1, in the collation channel. Finally, the desired sum is the output of the concatenated network (with appropriate weights). As a result, we obtain a special network with width W + d + 1 and depth L = L 1 + . . . + L m . The result follows from the containment (3.10).
Another operation on the output functions of ReLU networks that can easily be performed by increasing depth is taking their minimum or maximum. For this, let us first observe that, given $t, t' \in \mathbb{R}$, we have
$$\min\{t, t'\} = t_+ - (-t)_+ - (t - t')_+, \qquad \max\{t, t'\} = t'_+ - (-t')_+ + (t - t')_+.$$
Hence, $\min\{t, t'\}, \max\{t, t'\} \in \Upsilon^{3,1}(\mathrm{ReLU}; 2, 1)$. We can extend the above minimization to an arbitrary number of inputs.
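The pairwise formulas can be verified pointwise. The identities used below are one concrete choice (our own reconstruction; any equivalent triple of ReLU neurons serves): each right-hand side evaluates three ReLUs of affine functions of (t, t'), matching membership in $\Upsilon^{3,1}(\mathrm{ReLU}; 2, 1)$.

```python
relu = lambda u: max(u, 0.0)

def relu_min(t, s):
    # three ReLU neurons on the inputs (t, s): t - (t - s)_+ with t = t_+ - (-t)_+
    return relu(t) - relu(-t) - relu(t - s)

def relu_max(t, s):
    # s + (t - s)_+ with s = s_+ - (-s)_+
    return relu(s) - relu(-s) + relu(t - s)

grid = [x * 0.5 for x in range(-6, 7)]
for t in grid:
    for s in grid:
        assert relu_min(t, s) == min(t, s)
        assert relu_max(t, s) == max(t, s)
```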
We discuss only the case of the minimum, since the case of the maximum is almost the same. We start by proving (3.13). In our construction, we will use the fact that
$$\min\{x_1, \dots, x_{2^k}\} = \min\bigl\{\min\{x_1, x_2\}, \min\{x_3, x_4\}, \dots, \min\{x_{2^k-1}, x_{2^k}\}\bigr\}.$$
We first use Parallelization to construct, for each $k \ge 1$, a neural network $N_k$ with $2^k$ inputs and $2^{k-1}$ outputs that creates the vector in $\mathbb{R}^{2^{k-1}}$ with components $\min\{x_j, x_{j+1}\}$, $j = 1, \dots, 2^k - 1$, j odd, by stacking on top of each other the networks that produce $\min\{x_j, x_{j+1}\}$. Since each of the networks in the stack has width 3, we end up with a network of width $W_k = 3 \cdot 2^{k-1}$ and depth $L_k = 1$. We concatenate the networks $N_k, \dots, N_1$, in this order, by feeding the output of $N_j$ as input to $N_{j-1}$. It is easy to see that the concatenated network $\mathcal{N}_k$ outputs $\min\{x_1, \dots, x_{2^k}\}$, has depth L = k and varying widths. We can augment the network by adding extra nodes and edges to each layer so that we end up with a network of width $W = 3 \cdot 2^{k-1}$. For general m, we let $k := \lceil \log_2 m \rceil$ and define $\bar{x}_j := x_j$, $1 \le j \le m$, and $\bar{x}_j := x_m$, $m < j \le 2^k$. Applying the above to this new sequence gives the result (3.13). To show (3.12), we feed $z_j(x)$ into the first hidden layer of $\mathcal{N}_k$ by assigning appropriate input weights and node biases.
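The binary-tree reduction behind this construction can be sketched as follows (our own illustration; `pair_min` stands in for the width-3 gadget above): pad the m inputs with $x_m$ up to the next power of two, then take pairwise minima layer by layer, so the number of layers is $\lceil \log_2 m \rceil$.

```python
import math

relu = lambda u: max(u, 0.0)
pair_min = lambda t, s: relu(t) - relu(-t) - relu(t - s)  # width-3 gadget

def tree_min(xs):
    k = max(1, math.ceil(math.log2(len(xs))))
    xs = xs + [xs[-1]] * (2 ** k - len(xs))   # pad with x_m, as in the text
    depth = 0
    while len(xs) > 1:
        # one hidden layer of width 3 * len(xs) / 2 per reduction step
        xs = [pair_min(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
        depth += 1
    return xs[0], depth

val, depth = tree_min([3.0, -1.0, 4.0, 1.5, -5.0])   # m = 5
assert val == -5.0 and depth == 3                    # depth = ceil(log2 5)
```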
At the end, if we want to output the ReLU of min/max, we just add another hidden layer to perform the ReLU.
Another way to compute the above min/max is via increasing the depth and keeping the width relatively small by utilizing a recursive formula, first used in (Hanin 2019).
We discuss only the case of the maximum (the case of the minimum is treated likewise). Let $\mu_1(x) := z_1(x)$ and $\mu_k(x) := \max\{z_1(x), \dots, z_k(x)\}$, $k \ge 2$. We use the recursion formula
$$\mu_{k+1}(x) = \max\{\mu_k(x), z_{k+1}(x)\} = z_{k+1}(x) + (\mu_k(x) - z_{k+1}(x))_+,$$
and discuss the case $R = [0,1]^d$. For the case of a general rectangle R one needs to add appropriate biases. Our construction is the following:
• the first d channels of the network push forward the variables $x_1, \dots, x_d$. Their nodes can be viewed as ReLU nodes since $t_+ = t$ for $t \ge 0$.
• the (d + 1)-st channel computes in its first node $(z_1(x) - z_2(x))_+$. Note that, if we wanted to, we could stop and output $\mu_2(x)$ at this stage. The j-th node of this channel, $j = 2, \dots, m-2$, computes $(\mu_j(x) - z_{j+1}(x))_+$, which is then given as an input to the (j + 1)-st node. The final layer L = m − 1 holds $\mu_{m-1}(x)$ and hence can output $\mu_m(x)$.
To show the last statement, we add a hidden layer after the last hidden layer of the NN from the construction above to perform the ReLU of the max/min. Of course, we could augment the resulting network by adding nodes and connections so that we have a fully connected feed-forward NN.
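The narrow construction above can be sketched numerically (our own illustration): the recursion $\mu_{k+1} = z_{k+1} + (\mu_k - z_{k+1})_+$ computes a running maximum with one ReLU per step, so the depth grows linearly in m while the width stays of order d.

```python
relu = lambda u: max(u, 0.0)

def narrow_max(values):
    mu = values[0]
    for z in values[1:]:
        mu = z + relu(mu - z)   # one extra hidden layer per additional input
    return mu

assert narrow_max([0.5, -2.0, 3.0, 1.0]) == 3.0
assert narrow_max([-3.0, -1.0, -2.0]) == -1.0
```

Compared with the binary-tree reduction, this trades depth $\lceil \log_2 m \rceil$ and width $3 \cdot 2^{\lceil \log_2 m \rceil - 1}$ for depth m − 1 and constant extra width.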
Note that if we want to compute the min/max of m affine functions $z_j$, $j = 1, \dots, m$, viewed as functions on all of $\mathbb{R}^d$, we can do this with W = 2d + 1, since each channel i, $1 \le i \le d$, in the above construction must double in order to forward the input via $t = t_+ - (-t)_+$.
More general statements hold when instead of computing the min/max of affine functions we have to find the min/max of outputs of neural networks.
We use Parallelization to construct the first $L_0$ hidden layers of the network N that outputs $S := \min\{S_1, \dots, S_m\}$. Then, from the $L_0$-th layer we can output any of the $S_j$, $j = 1, \dots, m$. We concatenate this with the network in MM1, which has $\lceil \log_2 m \rceil$ hidden layers, to complete the construction of N. Clearly, the resulting network has varying width: the first $L_0$ layers have width $W_1 + W_2 + \cdots + W_m$, while the last $\lceil \log_2 m \rceil$ layers have width $3 \cdot 2^{\lceil \log_2 m \rceil - 1}$. We augment this network by adding extra nodes and edges. At the end, our network has width $W = \max\{W_1 + W_2 + \cdots + W_m,\ 3 \cdot 2^{\lceil \log_2 m \rceil - 1}\}$.
In the case $W_j \ge 3$, we have $W_1 + \cdots + W_m \ge 3 \cdot 2^{\lceil \log_2 m \rceil - 1}$, which gives that $S \in \Upsilon^{W_1 + \cdots + W_m,\,L_0 + \lceil \log_2 m \rceil}(\mathrm{ReLU}; d, 1)$. It is also possible to do the minimization by increasing the depth of the network while keeping the width relatively the same. In order to construct a neural network N which shows that $S \in \Upsilon^{W,L}$, we utilize Concatenation in place of Parallelization and we use special networks. Let $N_j$ be a network of width $W_0$ and depth $L_j$ which outputs $S_j$, $j = 1, \dots, m$. To each of the networks $N_j$ we add d source channels, to push forward the original inputs $x_1, \dots, x_d$, and one collation channel that we will use to update computations towards outputting S. Let us denote these special networks by $\bar{N}_j$. We now explain how to construct N. The first $L_1$ hidden layers of N consist of those of $\bar{N}_1$. The collation channel simply pushes forward zero through these layers. We concatenate $\bar{N}_1$ with $\bar{N}_2$ by placing $S_1$ in the collation channel of $\bar{N}_2$ and then pushing it forward, and by placing the outputs of the source channels of $\bar{N}_1$, multiplied by appropriate weights (those that enter the first layer of $N_2$), into the first hidden layer of $\bar{N}_2$. If m = 2, we can complete the construction by placing a last hidden layer which takes $S_1$ from the collation channel and $S_2$ as an output from $\bar{N}_2$ and computes $(S_1 - S_2)_+$, $(S_2)_+$, and $(-S_2)_+$. We augment the resulting network with additional nodes, if necessary, so that we have a special network of width W. This network outputs S, has depth $L = L_1 + L_2 + 1$, and width W. If m > 2, we continue by concatenating with $\bar{N}_3$. The collation channel is now occupied by $T_2 := \min\{S_1, S_2\}$. If m = 3, we then complete as before by adding a layer that computes $(T_2 - S_3)_+$, $(S_3)_+$, and $(-S_3)_+$. Continuing in this way, we obtain the desired network.

General CPwL functions
We have observed earlier that if $\mathcal{P}$ is a partition into a finite number of cells obtained from a hyperplane arrangement, then not every CPwL function subordinate to this partition is in the set $\Upsilon^{W,1}(\mathrm{ReLU}; d, 1)$. In particular, the latter set does not include CPwL functions with compact support. This can be remedied by slightly increasing the depth of the network. More precisely, let ∆ be any simplex in $\mathbb{R}^d$ and $x^*$ be any point in its interior. Since ∆ is a convex polytope with d + 1 facets, there exist d + 1 affine functions $z_j : \mathbb{R}^d \to \mathbb{R}$ such that $z_j(x^*) = 1$ and $z_j$ vanishes on the j-th facet of ∆. Thus, the tent function
$$T := \bigl(\min\{z_1, \dots, z_{d+1}\}\bigr)_+$$
vanishes outside of ∆ and satisfies $T(x^*) = 1$. For example, when d = 1 this is the hat function on $\mathbb{R}$. The construction MM1 ensures that $T \in \Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ with $W = 3 \cdot 2^{\lceil \log_2(d+1) \rceil - 1}$ and $L = 1 + \lceil \log_2(d+1) \rceil$.
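A concrete instance of the tent function for d = 2 (the simplex and functions below are our own hypothetical choice): take the standard simplex with vertices (0,0), (1,0), (0,1) and barycenter $x^* = (1/3, 1/3)$; the affine functions $z_j$ equal 1 at $x^*$ and vanish on the opposite facets, so $T = (\min_j z_j)_+$ peaks at $x^*$ and vanishes outside the simplex.

```python
relu = lambda u: max(u, 0.0)

def tent(x, y):
    z1 = 3.0 * x                  # vanishes on the facet x = 0
    z2 = 3.0 * y                  # vanishes on the facet y = 0
    z3 = 3.0 * (1.0 - x - y)      # vanishes on the facet x + y = 1
    return relu(min(z1, z2, z3))

assert abs(tent(1/3, 1/3) - 1.0) < 1e-12   # peak at the barycenter
assert tent(0.0, 0.0) == 0.0               # zero at a vertex ...
assert tent(2.0, 2.0) == 0.0               # ... and outside the simplex
assert tent(-0.5, 0.25) == 0.0
```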
With these remarks in hand, let us now turn to the question of whether every CPwL function is in one of the spaces $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$. We make the following observations. CPwL1: If S is a CPwL function on $\mathbb{R}^d$, then $S \in \Upsilon^{W',L}(\mathrm{ReLU}; d, 1)$, with $L = \lceil \log_2(d+1) \rceil$ and W' sufficiently large. This is proved in (Arora, Basu, Mianjy & Mukherjee 2016) by using the fact that any CPwL function S can be written as a linear combination of piecewise linear convex functions, each with at most d + 1 affine pieces; that is,
$$S = \sum_{j=1}^{p} \varepsilon_j \max_{i \in S_j} z_i, \qquad \varepsilon_j \in \{-1, 1\}, \qquad (3.16)$$
with $s_j := \#(S_j) \le d + 1$, for some affine functions $z_1, \dots, z_k$ and index sets $S_j \subset \{1, \dots, k\}$. To show that $S \in \Upsilon^{W',L}(\mathrm{ReLU}; d, 1)$, we use MM1 to show that for every $j = 1, \dots, p$ the function $\max_{i \in S_j} z_i$ is in $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ with $W = 3 \cdot 2^{\lceil \log_2 s_j \rceil - 1}$, $L = \lceil \log_2 s_j \rceil$.
For the proof, we can assume that $s_j = d + 1$ for all j by artificially repeating an index already in $S_j$ several times, so that we end up with networks of the same depth $L = \lceil \log_2(d+1) \rceil$. Using Parallelization, we then stack these networks to produce $S \in \Upsilon^{W',L}(\mathrm{ReLU}; d, 1)$ with $L = \lceil \log_2(d+1) \rceil$ and $W' = 3p \cdot 2^{\lceil \log_2(d+1) \rceil - 1}$.
At the other extreme, one may wish to keep the width W of the ReLU network as small as possible at the expense of letting the depth of the network grow. In this direction, we have the following result.
CPwL2: If $S : R \to \mathbb{R}$ is a CPwL function defined on a rectangle $R \subset \mathbb{R}^d$, then $S \in \Upsilon^{d+2,L}(\mathrm{ReLU}; d, 1)$ for L suitably large.
To show this, we use the representation (3.16) for S and utilize MM2 to view each of the functions $\max_{i \in S_j} z_i$ as the output of a network $N_j$ that produces an element of $\Upsilon^{d+1,s_j-1}(\mathrm{ReLU}; d, 1)$. We concatenate the networks $N_j$, $j = 1, \dots, p$, by placing them next to each other and connecting their source channels. We add a collation channel in which we store the consecutive outputs $\varepsilon_j \max_{i \in S_j} z_i$ from the $N_j$. The resulting network computes S, has width W = d + 2 and depth at most L = pd.
A result along these lines was proven in (Hanin 2019), where it was shown that the output of any ReLU network can be generated by a sufficiently deep ReLU network with fixed width W = d + 3.

Finite element spaces
One of the most popular methods of approximation used in numerical analysis is the Finite Element Method (FEM). This method employs certain linear spaces of piecewise polynomials. In its simplest case, one partitions a given polyhedral domain Ω ⊂ R d into simplicial cells with some requirements on the cells to avoid hanging nodes and small angles. The linear space X(P) of CPwL functions subordinate to such a partition P is used for the approximation. A natural question is if and how we can use ReLU NNs in place of X(P).
Rather than address this question in its full generality, we consider only a very special setting that will be sufficient for our discussion of NN approximation given in later sections of this paper. The reader can consult (He, Li, Xu & Zheng 2020) and (Opschoor, Petersen & Schwab 2019) for a more far reaching exposition of the relation between FEM and NNs.
For our special example, we begin with $\Omega = [0,1]^d$ and, given $n \ge 1$, we consider the uniform partition $\mathcal{Q}_n$ of Ω into $n^d$ cubes of sidelength 1/n. We denote by $V_n$ the set of vertices of the cubes in $\mathcal{Q}_n$. There are $(n+1)^d$ such vertices. Each cube $Q \in \mathcal{Q}_n$ can in turn be partitioned into d! simplices using the so-called Kuhn triangulation with northwest diagonal. This gives a partition $\mathcal{K}$ of Ω into $n^d d!$ simplices. Let $X(\mathcal{K})$ be the space of all CPwL functions defined on Ω and subordinate to $\mathcal{K}$. This is a linear space of dimension $N = (n+1)^d$. A basis for $X(\mathcal{K})$ is given by the nodal functions $\{\phi_v,\ v \in V_n\}$, which are the CPwL functions defined on Ω, subordinate to $\mathcal{K}$, and satisfying
$$\phi_v(v') = \delta_{v,v'}, \qquad v, v' \in V_n,$$
where δ is the usual Kronecker delta. Each $S \in X(\mathcal{K})$ has the representation
$$S = \sum_{v \in V_n} S(v)\,\phi_v. \qquad (3.18)$$

FEM spaces: Let $X(\mathcal{K})$ be the finite element space in d dimensions described above, and let $d^* := (d+1)!$. Then the following holds:
$$X(\mathcal{K}) \subset \Upsilon^{W,L}(\mathrm{ReLU}; d, 1), \qquad W = 3(n+1)^d 2^{\lceil \log_2 d^* \rceil - 1}, \quad L = 1 + \lceil \log_2 d^* \rceil,$$
and
$$X(\mathcal{K}) \subset \Upsilon^{d+2,L}(\mathrm{ReLU}; d, 1), \qquad L \le (n+1)^d d^*.$$
To prove these statements, we first observe that each nodal basis function $\phi_v$ can be expressed as
$$\phi_v = \Bigl(\min_{\Delta \in D_v} z_\Delta\Bigr)_+,$$
where $D_v$ is the set of simplices in $\mathcal{K}$ that have v as one of their vertices. There are $(d+1)!$ such simplices when v is an internal vertex, and fewer for vertices on the boundary of Ω. The function $z_\Delta$ is the affine function which equals one at v and vanishes on the facet of ∆ opposite to v. If $|D_v| < (d+1)!$, we artificially repeat some of the functions $z_\Delta$, $\Delta \in D_v$, so that we end up with $(d+1)!$ not necessarily different functions; this is needed because the parallelization performed later requires the depths of certain networks to be the same. It follows from MM1 that $\phi_v \in \Upsilon^{\overline{W},L}(\mathrm{ReLU}; d, 1)$, $\overline{W} := 3 \cdot 2^{\lceil \log_2 d^* \rceil - 1}$, $L = 1 + \lceil \log_2 d^* \rceil$. Using Parallelization, we stack the networks $N_v$ that output the $\phi_v$'s to obtain a network of width $W = (n+1)^d \overline{W}$ that can output any linear combination of the $\phi_v$'s. Hence, $X(\mathcal{K})$ is contained in $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1)$ for the advertised values of W and L.
Note that, since $X(\mathcal{K})$ is generated by parallelization of $(n+1)^d$ networks of width $\overline{W}$, the number of parameters used in the resulting network is $O((n+1)^d)$.
To prove the second containment, we first observe from MM2 that each nodal basis function $\phi_v \in \Upsilon^{d+1,|D_v|}(\mathrm{ReLU}; d, 1)$. We then concatenate the networks $N_v$ used to produce the $\phi_v$'s and add a collation channel for the computation of S, see (3.18) (as was done in the CPwL2 construction). The resulting network has width W = d + 2 and depth at most $(n+1)^d d^*$.
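For d = 1 the formula $\phi_v = (\min_{\Delta \in D_v} z_\Delta)_+$ reduces to the familiar hat function, which the sketch below illustrates (a hypothetical uniform grid of our own choosing): at an internal vertex v with mesh size h, $D_v$ consists of the two intervals $[v-h, v]$ and $[v, v+h]$, and each $z_\Delta$ is affine, equal to 1 at v and 0 at the opposite endpoint.

```python
relu = lambda u: max(u, 0.0)

def hat(x, v, h):
    z_left = (x - (v - h)) / h    # 1 at v, 0 at v - h
    z_right = ((v + h) - x) / h   # 1 at v, 0 at v + h
    return relu(min(z_left, z_right))

v, h = 0.5, 0.25
assert hat(v, v, h) == 1.0              # nodal value at v
assert hat(v - h, v, h) == 0.0          # vanishes at the neighbouring vertices
assert hat(v + h, v, h) == 0.0
assert hat(v - h / 2, v, h) == 0.5      # linear in between
```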
Several remarks are in order concerning this result. First, note that the number of parameters used in both NNs is comparable (up to a factor depending on d) to the dimension $(n+1)^d$ of $X(\mathcal{K})$. The most important point to stress is that when using the set $\Upsilon^{W,L}$ in place of a piecewise linear FEM space, we are using a much larger nonlinear family as an approximation tool. Indeed, the set $\Upsilon^{W,L}$ not only contains the FEM space $X(\mathcal{K})$ based on the initial choice of partitioning, but it also contains an infinite number of such FEM spaces corresponding to an infinite number of possible ways to partition Ω. In fact, the NN approach is even more than a simple generalization of the Adaptive Finite Element Method (AFEM), where one is allowed to adaptively choose partitions (from a restricted family of partitions). It will be shown in §8.7 that these NNs provide a provably better approximation rate to various Sobolev and Besov classes than that provided by the FEM spaces. While this seems like a tremendous advantage for NNs over FEMs, one must address (in the specific problem setting) how one (near) optimally chooses the parameters of these NNs.
In the case when FEMs are used to numerically solve linear elliptic PDEs, one can employ the Galerkin method which finds a (near) best approximation to the solution to the PDE by projecting onto X(K). This is well understood and quantified in both theory and practice through theorems that bound error and establish stable numerical implementation.
When we eventually discuss quantitative theorems for NN approximation, we shall see that the known results point to a tremendous potential increase in approximation efficiency (error versus number of parameters needed) when using NNs for the numerical solution of elliptic problems. Whether this advantage can be maintained in concrete stable numerical implementation is less clear.

Width versus depth
An underlying issue when choosing a NN architecture to be used in a numerical setting is whether to increase the width or the depth of the NN when one is willing to allocate more parameters to improve accuracy. Suppose we fix a bound n on the number of parameters to be used and ask which of the sets $\Upsilon^{W,L}$ depending on at most n parameters we should employ in designing a numerical algorithm. All other issues being equal, the general consensus is that in practice deeper networks are preferable. We make some comments to explain this preference from the point of view of the enhanced approximation capacities of deeper networks.
First, we have shown that addition of the output functions of a NN can be implemented either by increasing width (parallelization) or by increasing depth (concatenation), with a controlled increase in the number of parameters. However, certain operations, like composition and forming minima, can only be implemented by increasing depth. So, for example, if we fix a width $W_0$ sufficiently large to accommodate d source channels and a couple of collation channels, then we can seemingly implement as outputs from $\Upsilon^{W_0,L}$ all functions that occur as outputs of shallower networks with a comparable number of parameters. The only rigorous statement given to this effect is for ReLU networks with d = 1. In this case, it was proved that for any fixed $W_0 \ge 4$ we have
$$\Upsilon^{n,1}(\mathrm{ReLU}; 1, 1) \subset \Upsilon^{W_0,L}(\mathrm{ReLU}; 1, 1), \qquad (3.20)$$
provided the elements of these sets are viewed as functions on [0, 1] (or any finite interval) and $L \asymp n/W_0^2$, where the constants in $\asymp$ are absolute. Note that the number of parameters $n(W_0, L)$ determining the set $\Upsilon^{W_0,L}$ satisfies $n(W_0, L) \asymp W_0^2 L \asymp n$, and is therefore comparable to the number of parameters in $\Upsilon^{n,1}$. However, $\Upsilon^{W_0,L}$ is richer, since it contains, for example, compositions. So, in this special case, depth beats width. This leads us to formulate the following general question.
Problem 2: Are shallow networks always contained in deep networks of fixed width $W_0$ with the same number of parameters? More precisely, is it true that, if we fix the depth L and the width $W_0$, we have the inclusion $\Upsilon^{W,L}(\mathrm{ReLU}; d, 1) \subset \Upsilon^{W_0,L_0}(\mathrm{ReLU}; d, 1)$ whenever $L_0$ and W satisfy $n(W_0, L_0) \asymp n(W, L)$, where the constants in $\asymp$ depend at most on d?
The results given above show that Problem 2 has a positive answer if we are not concerned about the number of parameters. Indeed, each function in Υ W,L (ReLU; d, 1) is a CPwL function, and therefore, according to CPwL2, is in Υ d+2,L (ReLU; d, 1) for L suitably large. So, the key issue in Problem 2 is the control on the number of parameters.

Interpolation by neural network outputs
A common strategy for approximating a given target function f is to interpolate some of its point values. Although this is often not a good method for approximation, it is important to understand when we can interpolate a given set of data and how stable this process is. A satisfactory understanding of interpolation using NNs is far from complete. The purpose of this section is to frame the interpolation problem and point out what is known. We begin by considering interpolation by NNs with an arbitrary activation function σ and later specialize to the ReLU activation.
Let $\Upsilon^{W,L}(\sigma) = \Upsilon^{W,L}(\sigma; d, 1)$ be the set of outputs of NNs with activation σ, input dimension d, output dimension 1, width W and depth L. Given a finite set of data points $(x^{(i)}, y_i)$, with $x^{(i)} \in \mathbb{R}^d$, $y_i \in \mathbb{R}$, $i = 1, \dots, D$, a natural question is whether there is an $S \in \Upsilon^{W,L}(\sigma)$ which interpolates the given data in the sense that
$$S(x^{(i)}) = y_i, \qquad i = 1, \dots, D. \qquad (3.21)$$
This is the existence question for data interpolation. When interpolants exist, we denote by $S_I := S_I(W, L; \sigma, d)$ the set of functions $S \in \Upsilon^{W,L}(\sigma; d, 1)$ which satisfy the interpolation conditions (3.21). Given that we are going to use the set $\Upsilon^{W,L}(\sigma; d, 1)$ for interpolation, the first question to ask is: Question: Determine the largest value $D^* := D^*(W, L; \sigma, d)$ such that the interpolation problem has a solution from $\Upsilon^{W,L}(\sigma; d, 1)$ for all data sets of size $D^*$.
One expects that D * should be closely related to the number of parameters used to describe Υ W,L (σ; d, 1).
There seems to be only one general theorem addressing the interpolation problem for general activation functions σ. It applies to the case of single-hidden-layer networks, that is, L = 1, and is discussed in detail in the survey article (Pinkus 1999), see Theorem 5.1 in that paper. It states that if $\sigma \in C(\mathbb{R})$ is not a polynomial, then
$$D^*(W, 1; \sigma, d) \ge W. \qquad (3.22)$$
The following sections discuss the interpolation problem for the ReLU activation, where more results are known.
3.5.1. Interpolation for Υ^{W,1}(ReLU; 1, 1) The interpolation question is easiest to answer for ReLU networks with d = L = 1. In this case, we know that the set $\Upsilon^{W,1}(\mathrm{ReLU}; 1, 1)$ is almost the same as the space $\Sigma_{W,1} = \Sigma_{W,1}(\mathrm{CPwL})$ of CPwL functions subordinate to a partition of $\mathbb{R}$ into W intervals (W − 1 breakpoints), see (3.2). Interpolation by functions in $\Sigma_{W,1}$ is well understood, see (de Boor 1978). In the case of $\Upsilon^{W,1}(\mathrm{ReLU}; 1, 1)$, we claim that
$$D^*(W, 1; \mathrm{ReLU}, 1) = W + 1. \qquad (3.23)$$
To show this, we will use the representation (3.8) for functions S from this set. Consider data points $(t^{(i)}, y_i)$, $i = 1, \dots, D$, with $t^{(1)} < \cdots < t^{(D)}$. We first show that when D = W + 1 interpolation is not only possible, but there are infinitely many $S \in \Upsilon^{W,1}(\mathrm{ReLU}; 1, 1)$ for which
$$S(t^{(i)}) = y_i, \qquad i = 1, \dots, W + 1.$$
We take any points $\xi_j$ that satisfy the interlacing property
$$t^{(j)} \le \xi_j < t^{(j+1)}, \qquad j = 1, \dots, W,$$
and consider the function
$$S(t) := c + \sum_{j=1}^{W} a_j (t - \xi_j)_+. \qquad (3.24)$$
We establish that interpolation is possible by induction on W. When W = 1, we choose $c := y_1$ and $a_1$ so that $c + a_1(t^{(2)} - \xi_1) = y_2$. For the induction step, let $S_0(t) := c + \sum_{j=1}^{W-1} a_j (t - \xi_j)_+$ satisfy the first W interpolation conditions. We define $a_W$ so that we have
$$S_0(t^{(W+1)}) + a_W (t^{(W+1)} - \xi_W) = y_{W+1}.$$
Then $S(t) := S_0(t) + a_W (t - \xi_W)_+ \in \Upsilon^{W,1}(\mathrm{ReLU}; 1, 1)$ and satisfies all of the interpolation conditions. This shows that the interpolation conditions can always be satisfied, and that the set $S_I$ is infinite, since we have infinitely many choices for the $\xi_j$'s. Finally, we want to see that interpolation at W + 2 points is, in general, not possible. For this, we use the following proposition, which will also be useful when we discuss the Vapnik-Chervonenkis (VC) dimension of NNs.
Proposition 3.5. Let $n \ge 3$, let $t^{(1)} < \cdots < t^{(n)}$, and let $y_1, \dots, y_n$ be data of alternating signs, $y_j y_{j+1} < 0$, $j = 1, \dots, n-1$. Then no $S \in \Upsilon^{n-2,1}(\mathrm{ReLU};1,1)$ satisfies $S(t^{(j)}) = y_j$, $j = 1, \dots, n$.

Proof: We can without loss of generality assume that $y_1 > 0$. We prove the proposition by induction on $n$. We first consider the case $n = 3$. If $S \in \Upsilon^{1,1}(\mathrm{ReLU};1,1)$, then we have the representation
$$S(t) = c + a(t - \xi)_+ \quad \text{or} \quad S(t) = c + a(\xi - t)_+.$$
We assume the first representation, since the second one is treated in a similar way. In this case, we note that:
• if $\xi \le t^{(1)}$, then $S$ is linear on $[t^{(1)}, t^{(3)}]$, and therefore cannot satisfy the three interpolation conditions;
• if $\xi \in (t^{(1)}, t^{(2)})$, then in order for $S$ to satisfy the first two interpolation conditions, we would need $c > 0$ and $a < 0$. The function $S$ is then a non-increasing function of $t$, and thus $S(t^{(3)}) \le S(t^{(2)})$, which shows that $S$ cannot satisfy the third interpolation condition;
• if $\xi \ge t^{(2)}$, then $S$ cannot satisfy the first two interpolation conditions, since $S$ is constant on $(-\infty, \xi]$.
Now, we consider the induction step. Suppose that we have proved the proposition for a value of $n \ge 3$ and consider $n+1$ interpolation points. If $S$ is any function in $\Upsilon^{n-1,1}(\mathrm{ReLU};1,1)$, then
$$S(t) = S_0(t) + a_{n-1}(t - \xi_{n-1})_+ \quad \text{or} \quad S(t) = S_0(t) + a_{n-1}(\xi_{n-1} - t)_+,$$
where $S_0 \in \Upsilon^{n-2,1}(\mathrm{ReLU};1,1)$ has break points $\xi_1 < \cdots < \xi_{n-2}$, with $\xi_{n-2} < \xi_{n-1}$. We again consider only the first possibility, since the other is handled similarly. We show that $S$ cannot satisfy the interpolation conditions by considering the following two cases:
• if $t^{(n)} < \xi_{n-1}$, then $S_0(t^{(j)}) = S(t^{(j)}) = y_j$, $j = 1, \dots, n$, and $S_0$ would contradict the induction hypothesis. Hence this case is not possible.
This completes the proof of the proposition.
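To make the inductive interpolation construction above concrete, here is a short Python sketch (an illustration, not part of the original text) that builds an interpolant of the form $S(t) = c + \sum_j a_j (t - \xi_j)_+$ through $W+1$ data points. It takes the admissible breakpoint choice $\xi_j = t^{(j)}$, so that each new ReLU term vanishes at all earlier data sites and the coefficients can be fixed one at a time, exactly as in the induction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fit_shallow_relu(t, y):
    """Interpolate (t[i], y[i]), i = 0..W, by
    S(t) = c + sum_{j=0}^{W-1} a_j (t - xi_j)_+  (W ReLU terms),
    with breakpoints xi_j = t[j] (one admissible interlacing choice).
    Each new term vanishes at all earlier data sites, so the
    coefficients are fixed one at a time, as in the induction."""
    W = len(t) - 1
    c = y[0]
    xi = np.array(t[:-1], dtype=float)
    a = np.zeros(W)
    for j in range(W):
        # value at t[j+1] of the partial sum built so far
        pred = c + sum(a[i] * relu(t[j + 1] - xi[i]) for i in range(j))
        # t[j+1] > xi[j], so this ReLU term is active at t[j+1]
        a[j] = (y[j + 1] - pred) / (t[j + 1] - xi[j])
    return c, a, xi

def S(x, c, a, xi):
    """Evaluate the shallow ReLU interpolant at x."""
    return c + sum(a_j * relu(x - x_j) for a_j, x_j in zip(a, xi))
```

Any other breakpoints satisfying the interlacing property would do equally well, which is the source of the infinitely many interpolants noted above.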
3.5.2. Interpolation for $\Upsilon^{W_0,L}(\mathrm{ReLU};1,1)$. It is also possible to produce an interpolant to given data by using deep networks with a fixed width. This of course follows from (3.20), together with what we have just proved. However, we wish to give a direct construction because it will be used later in this paper.
Proposition 3.6. Given $D$ points $0 \le t^{(1)} < t^{(2)} < \cdots < t^{(D)} \le 1$ and values $y_j \in \mathbb{R}$, $j = 1, \dots, D$, there is an $S \in \Upsilon^{3,D-1}(\mathrm{ReLU};1,1)$ which interpolates this data, namely,
$$S(t^{(j)}) = y_j, \quad j = 1, \dots, D. \qquad (3.25)$$
Proof: We have shown above that there is an $S$ of the form (3.24) with $W := D-1$ that satisfies the interpolation conditions (3.25). We view $S$ as a function on $[0,1]$ and construct a special network that outputs any such $S$. The first channel of this special network is a source channel that pushes forward the input $t$, and the last channel is a collation channel. This last channel is initialized with $0$ at layer $1$ and then successively collects the sums $\sum_{i=1}^{j-1} a_i (t - \xi_i)_+$ at layers $2, \dots, D-1$, respectively, while the middle channel successively produces the terms $a_j (t - \xi_j)_+$ at layers $j = 1, \dots, D-1$, using the input $t$ from the source channel. We can then output $S$ from layer $D-1$.
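The source/collation channel mechanism in this proof can be simulated directly. The sketch below (illustrative only) evaluates $S(t) = c + \sum_j a_j (t - \xi_j)_+$ layer by layer with three channels, mirroring the width-3 construction: a source channel carrying $t$, a computational channel producing the current term, and a collation channel accumulating the partial sums:

```python
def relu(x):
    return max(x, 0.0)

def deep_interpolant(t, c, a, xi):
    """Evaluate S(t) = c + sum_j a_j (t - xi_j)_+ with a width-3
    layered scheme of depth len(a): channel 1 is the source channel
    carrying t forward, channel 2 computes the current term
    (t - xi_j)_+, and channel 3 is the collation channel that
    accumulates the partial sums."""
    source = t          # channel 1: source channel
    collation = c       # channel 3: initialized, then accumulates
    for a_j, xi_j in zip(a, xi):
        term = relu(source - xi_j)   # channel 2 at this layer
        collation += a_j * term      # channel 3 collects the sum
    return collation                 # output from the last layer
```

The three variables play exactly the roles of the three channels; a true ReLU network would realize the same computation with fixed affine weights between consecutive layers.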

3.5.3. Interpolation from $\Upsilon^{W,L}(\mathrm{ReLU};d,1)$
We turn now to results that hold for general $d \ge 1$. There is a simple way to derive interpolation results for arbitrary $d > 1$ from those for $d = 1$. Let $\{(x^{(j)}, y_j)\}$, $j = 1, \dots, D$, with $x^{(j)} \in \mathbb{R}^d$, be any finite collection of data sites and values. A simple measure theoretic argument shows that there exists a unit vector $v \in \mathbb{R}^d$ for which the points $t^{(j)} \in \mathbb{R}$, given by $t^{(j)} := v \cdot x^{(j)}$, $j = 1, \dots, D$, are all distinct. If $g$ is any univariate function which satisfies $g(t^{(j)}) = y_j$, $j = 1, \dots, D$, then the ridge function $S(x) := g(v \cdot x)$ satisfies $S(x^{(j)}) = y_j$, $j = 1, \dots, D$. We utilize this observation to prove the following.

Proposition 3.7. Given any $D = W+1$ data sites $x^{(j)} \in \mathbb{R}^d$ and values $y_j \in \mathbb{R}$, $j = 1, \dots, D$, there is an $S \in \Upsilon^{W,1}(\mathrm{ReLU};d,1)$ satisfying $S(x^{(j)}) = y_j$, $j = 1, \dots, D$.
While the above proposition is of theoretical interest, it is not used in practice because the ridge function interpolant does not reflect the local flavor of the data. A more common scenario is to construct via ReLU networks a dual basis $\{\phi_j\}$, $j = 1, \dots, D$, for the data sites, that is, a basis that satisfies the conditions
$$\phi_j(x^{(i)}) = \delta_{i,j}, \quad i, j = 1, \dots, D.$$
The goal is to construct a locally supported dual basis. In that case, the interpolation operator is a bounded projection onto $\mathrm{span}\{\phi_j\}$ whenever the data sites are in $\Omega$. The norm of this projector determines, to a large extent, the approximation properties of interpolation at these sites.
We have already discussed such dual bases in the context of FEM, where for the Kuhn simplicial decomposition $\mathcal{K}$ of $\Omega = [0,1]^d$, we showed that the space $X(\mathcal{K})$ spanned by the nodal basis $\{\phi_v\}$, see (3.17), is contained in $\Upsilon^{W,L}(\mathrm{ReLU};d,1)$ for certain choices of $W$ and $L$, with the number of parameters $n(W,L)$ comparable to the dimension of $X(\mathcal{K})$. In this case, the nodal basis forms a partition of unity, $\sum_v \phi_v \equiv 1$ on $\Omega$, and the projection operator $P_X$ has norm one. It follows therefore that
$$\|f - P_X f\|_{C(\Omega)} \le \|f - S\|_{C(\Omega)} + \|P_X(S - f)\|_{C(\Omega)} \le 2\, \mathrm{dist}(f, X(\mathcal{K}))_{C(\Omega)},$$
where we insert the best approximation $S$ to $f$ from $X(\mathcal{K})$ to obtain the last inequality. This allows one to deduce estimates for NN approximation from those known in FEM and also to exhibit simple linear operators which achieve these bounds.
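As an aside, the direction-finding step in the ridge construction above is easy to carry out in code. The following sketch (illustrative; `np.interp`, a piecewise-linear interpolant, stands in for any univariate CPwL interpolant $g$, such as the ReLU construction of Proposition 3.6) picks a random direction $v$, which separates the projected sites with probability one, and returns the ridge interpolant $g(v \cdot x)$:

```python
import numpy as np

def separating_direction(X, rng=None):
    """Return a unit vector v such that the projections v . x^(j)
    of the rows of X are pairwise distinct.  A random direction
    works with probability one, so the loop almost surely exits
    on the first pass."""
    if rng is None:
        rng = np.random.default_rng(0)
    D, d = X.shape
    while True:
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        if len(np.unique(X @ v)) == D:
            return v

def ridge_interpolant(X, y):
    """Interpolant S(x) = g(v . x), where g is a univariate CPwL
    interpolant of the projected data (here np.interp)."""
    v = separating_direction(X)
    t = X @ v
    order = np.argsort(t)
    t_s = t[order]
    y_s = np.asarray(y, dtype=float)[order]
    return lambda x: np.interp(np.atleast_2d(x) @ v, t_s, y_s)
```

The resulting interpolant is constant in all directions orthogonal to $v$, which is exactly the "non-local flavor" criticized above.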

VC dimension of ReLU outputs
An important ingredient in understanding the approximation power of ReLU networks is the Vapnik-Chervonenkis (VC) dimension of the sets of their outputs Υ W,L (ReLU; d, 1). This topic is by now well studied, see (Bartlett, Harvey, Liaw & Mehrabian 2019) for a summary of the most recent results.
Here, we shall only discuss the results on VC dimension that are important for approximation. Let $\mathcal{F}$ be a collection of real valued functions defined on $\Omega \subset \mathbb{R}^d$. We say that a set $\{x^{(1)}, \dots, x^{(n)}\} \subset \Omega$ is shattered by $\mathcal{F}$ if for every sign pattern $(\varepsilon_1, \dots, \varepsilon_n) \in \{-1, +1\}^n$ there is an $f \in \mathcal{F}$ with $\mathrm{sign}(f(x^{(i)})) = \varepsilon_i$, $i = 1, \dots, n$. The maximum value of $n$ for which there exists such a collection of $n$ points that are shattered by $\mathcal{F}$ is called the Vapnik-Chervonenkis (VC) dimension of $\mathcal{F}$ and is denoted by $\mathrm{VC}(\mathcal{F})$, see (Vapnik 1989).
In the case that $\mathcal{F}$ is one of the sets $\Upsilon^{W,L}(\mathrm{ReLU};d,1)$, the VC dimension of $\mathcal{F}$ is the largest value of $n$ for which there exist $n$ points such that for any assignment of signs $\varepsilon_i \in \{-1, +1\}$ there is an $S \in \mathcal{F}$ such that
$$\varepsilon_i S(x^{(i)}) > 0, \quad i = 1, \dots, n. \qquad (3.26)$$
This follows from the fact that whenever $S \in \mathcal{F}$, then $S + c$, $c \in \mathbb{R}$, is also in $\mathcal{F}$. We sometimes use property (3.26) instead of the original definition of shattering for the outputs of NNs going forward. Let us note that the definition of VC dimension of $\mathcal{F}$ only requires the existence of one set of points where shattering takes place. When proving upper bounds on the error of approximation, it is useful to know precisely which collections of points can be shattered. The reader will see how this issue arises when we use VC dimension in proving approximation results.
We are interested in describing the VC dimension of the set F of outputs of ReLU networks in terms of the number n(W, L) of their parameters. Let us now consider what is known in the special cases of interest to us.
Lemma 3.8. We have the following upper and lower bounds for the VC dimension of $\Upsilon^{W,1}(\mathrm{ReLU};d,1)$, $W \ge 1$:

Proof: (i) The VC dimension in this case is at least $W+1$ because we can interpolate any $W+1$ data by an element from $\Upsilon^{W,1}(\mathrm{ReLU};1,1)$, see (3.23). On the other hand, the VC dimension is at most $W+1$ because of Proposition 3.5. (ii) This upper bound can be found in (Bartlett et al. 2019). (iii) The lower bounds in the case $d \ge 4$ can be derived from known lower bounds for the VC dimension of the collection $\mathcal{C} = \{R\}$ of sets $R$ which are the union of $W$ closed half spaces. Indeed, whenever points $P_1, \dots, P_m$ are shattered by $\mathcal{C}$, then they are shattered by $\Upsilon^{W,1}$. To see this, suppose that $\Lambda$ is any subset of these points. We can construct $S \in \Upsilon^{W,1}$ that is positive on this set and zero on the remaining points as follows. Let $R_j$, $j = 1, \dots, W$, be the closed half spaces whose union contains only the points from $\Lambda$ and none of the rest of the $P_j$'s, $j = 1, \dots, m$. Each of these half spaces can be represented as $R_j = \{x : v_j \cdot x - b_j \ge 0\}$, and then
$$S(x) := \sum_{j=1}^{W} (v_j \cdot x - b_j + \varepsilon)_+$$
will be positive on the points in $\Lambda$ and zero on the rest of the points $P_j$, provided we take $\varepsilon$ small enough. It follows that $\mathrm{VC}(\Upsilon^{W,1}(\mathrm{ReLU};d,1)) \ge \mathrm{VC}(\mathcal{C})$, and hence the lower bounds stated in (iii) follow from the lower bounds on the VC dimension of $\mathcal{C}$ given in (Csikós, Kupavskii & Mustafa 2019). (iv) The lower bounds in this case follow from the fact that we can interpolate any data at any $W+1$ data sites, see Proposition 3.7 and (3.23).
It appears that the VC dimension of Υ W,1 (ReLU; d, 1) when d = 2, 3 is not completely determined because of the discrepancy between the upper and lower bounds in the above lemma.
3.6.2. VC dimension of $\Upsilon^{W_0,L}(\mathrm{ReLU};d,1)$. Next, we consider the case where $W_0$ is fixed but sufficiently large, depending only on $d$, and $L$ is allowed to vary. Note that in this case the number of parameters of the network satisfies $n(W_0, L) \asymp W_0^2 L$. The following theorem gives bounds on the VC dimension of such networks.
Theorem 3.9. Let $W_0$ be fixed and sufficiently large, depending only on $d$. There are fixed constants $c_1, C_1$, depending only on $d$, such that
$$c_1 L^2 \le \mathrm{VC}\big(\Upsilon^{W_0,L}(\mathrm{ReLU};d,1)\big) \le C_1 L^2. \qquad (3.27)$$
The upper bound in this theorem follows from Theorem 8 in (Bartlett et al. 2019). The remainder of this section will provide a proof of the lower bound in a form which will be used later in this paper to prove certain approximation results. Related lower bounds are stated in Theorem 3 of (Bartlett et al. 2019).

Bit extraction using ReLU networks
We discuss in detail a very specific way to prove the lower bound in Theorem 3.9. This particular construction, called bit extraction, is useful in proving upper bounds for approximation using deep neural networks. For a fixed $W_0$ and $C$, depending only on $d$, the set $\Upsilon^{W_0,Cn}(\mathrm{ReLU};d,1)$ not only shatters $N = n^2$ equally spaced points $x^{(1)}, \dots, x^{(N)} \in \Omega := [0,1]^d$, but for certain bit data $y_j$, it contains an $S$ such that $S(x^{(j)}) = y_j$, $j = 1, \dots, N$.
In order to avoid certain technicalities, we present this result only in the case d = 1. The full implementation for d ≥ 2 can be found in (Yarotsky 2018) and (Shen, Yang & Zhang 2019).

Moreover, we have
(3.28) The novelty in this theorem is that while the number of parameters in the NN is $Cn$, the number of data points is $N + 1 = n^2 + 1$. The theorem provides the lower bound in (3.27) for the VC dimension. Indeed, at any point $t_{2i}$ not of the form $t_{jn}$, we can assign any data $y_{2i} \in \{0, 2\}$ because of property (ii). Thus, we can shatter these points using the set $\Upsilon^{11,15n+2}(\mathrm{ReLU};1,1)$. Since there are at least $cn^2$ such points, with $c$ an absolute constant, we have the lower bound in (3.27) in the case $d = 1$ with $W_0 = 11$. The general case of $d > 1$ also easily follows from this by using the method of proof used in Proposition 3.7.
Before we present the proof of Theorem 3.10, which is a bit laborious, we introduce some notation, make several observations, and present the general idea of the proof. First, note that for each $i = 0, 1, \dots, N-1$, there is a unique representation
$$i = j(i)\, n + k(i), \quad 0 \le j(i),\, k(i) \le n-1.$$
Next, recall that any $t \in [-1,1]$ can be represented as
$$t = \sum_{k \ge 1} 2^{-k} B_k(t),$$
where the bits $B_k(t) \in \{-1, 1\}$ of $t$ are found using the familiar quantizer function
$$Q(t) := \chi_{[0,1]}(t) - \chi_{[-1,0)}(t),$$
with $\chi_I$ denoting the characteristic function of a set $I$. The first bit of $t$ is $B_1(t) = Q(t)$, with residual $R_1(t) := 2t - B_1(t) \in [-1,1]$. We find the later bits and residuals recursively as
$$B_{\nu+1}(t) := Q(R_\nu(t)), \quad R_{\nu+1}(t) := 2R_\nu(t) - B_{\nu+1}(t), \quad \nu \ge 1. \qquad (3.29)$$
Given our assigned bit sequence $\{\varepsilon_i\}$, $i = 0, \dots, N-1$, available to us from the values $y_i$, $i = 0, 1, \dots, N$, we define the numbers $z_j$, $j = 0, \dots, n-1$, whose first $n$ binary bits are $\varepsilon_{jn}, \dots, \varepsilon_{(j+1)n-1}$. The idea of the proof of Theorem 3.10 is to produce a function $S$ from the set $\Upsilon^{11,15n+2}(\mathrm{ReLU};1,1)$ such that, for each $i = 1, \dots, N$ written as $i = j(i)n + k(i)$, the value $S(t_i)$ equals the prescribed datum $y_i$, and that in addition satisfies (3.28). We construct $S$ by showing that each of the functions entering the construction below is the output of a ReLU network of an appropriate size.
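The bit/residual recursion (3.29) is easy to check numerically. The sketch below (an illustration, not part of the original text; the convention $Q(t) = 1$ for $t \ge 0$ matches the characteristic-function definition above) extracts $n$ bits of a number $t \in [-1,1]$ and verifies the telescoping identity $t = \sum_{k=1}^{n} 2^{-k} B_k(t) + 2^{-n} R_n(t)$:

```python
def Q(t):
    """Quantizer giving the first bit of t in [-1, 1]; bits take
    values -1 or +1 (Q(t) = 1 for t >= 0, else -1)."""
    return 1.0 if t >= 0 else -1.0

def bits_and_residuals(t, n):
    """Extract n bits B_k(t) in {-1, 1} via the recursion
    B_1 = Q(t), R_1 = 2t - B_1, B_{k+1} = Q(R_k),
    R_{k+1} = 2 R_k - B_{k+1}; each residual stays in [-1, 1].
    Returns the bit list and the final residual R_n."""
    bits, R = [], t
    for _ in range(n):
        B = Q(R)
        R = 2.0 * R - B
        bits.append(B)
    return bits, R
```

The point of the construction in the text is that this recursion, which is discontinuous through $Q$, can be imitated by a CPwL surrogate on the inputs actually encountered.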
To do this, let $\delta = 2^{-N}$ and define:
• the CPwL function $J = J_N$ which has breakpoints at each of the points
$$\xi_j := j/n, \quad j = 1, \dots, n-1, \qquad \xi_j' := (j+1)/n - \delta, \quad j = 0, \dots, n-1, \qquad (3.32)$$
and no other breakpoints, and takes the value $j$ on the interval $[\xi_j, \xi_j']$. We also require $J(1) = n$. Note that $J$ has the property
$$J(t_i) = j(i), \quad i = 0, 1, \dots, N-1. \qquad (3.33)$$
Next, we would like to implement quantization by a neural network. However, the function $Q$ is not continuous, and so we cannot exactly reproduce $Q$. Instead, we use a CPwL surrogate $\hat{Q}$, which agrees with $Q$ except on a small interval of length $\delta$ around the discontinuity. We define the surrogate bits $\hat{B}_\nu(t)$ for $t \in [-1,1]$ by using $\hat{Q}$ in place of $Q$ in the recursive definition of $B_\nu$ described in (3.29). Because of the choice of $\delta$, $\hat{B}_\nu$ can be used in place of $B_\nu$ to compute the bits of $t$ whenever $t$ has a finite binary representation with $N$ bits. For such a $t$, we have $\hat{B}_\nu(t) = B_\nu(t)$, $\nu = 1, \dots, N-1$. Finally, we introduce
• the CPwL function $Y$ which has exactly the same breakpoints as $J$, see (3.32), and on each interval $[\xi_j, \xi_j']$ is equal to the constant whose first $n$ binary bits are $\varepsilon_{jn}, \dots, \varepsilon_{(j+1)n-1}$.
The function $S$ will be the output of a special neural network of width $W = 11$ and depth $L = 15n + 2$, which is a concatenation of special networks that we describe below. The top channel of each of these networks is a source channel which simply passes forward the input $t$. Some of the other channels are collation channels and are occupied by zeros in their first layers so that they can be used later for passing forward certain function values.
We want to point out that our construction is probably not optimal in the sense that it does not provide a NN with the best possible minimal width and depth that outputs S. In addition, some of the channels in our NN are ReLU free. We have discussed earlier how we can construct a true ReLU network with the same outputs as a network that has ReLU free nodes.
In going further, we note that any CPwL function $T$ with $k$ break points is in $\Upsilon^{3,k}(\mathrm{ReLU};1,1)$, where the network used to output $T$ has one source channel, one computational channel, and one collation channel that collects the successive terms we have computed, see the construction in Proposition 3.6.
Proof of Theorem 3.10: The network $\mathcal{N}$ which outputs the function $S$ of the theorem is a concatenation of five special networks $\mathcal{N}_1, \mathcal{N}_2, \mathcal{N}_3, \mathcal{N}_4, \mathcal{N}_5$. Each of them has a source channel as its first channel, which pushes forward the input $t \in [0,1]$. The first of these networks outputs $K(t)$, the second outputs $Y(t)$, and the third takes inputs $K(t)$ and $Y(t)$ and outputs a CPwL function $\tilde{S}$ which almost satisfies the theorem. Namely, it satisfies the interpolation conditions, and it also satisfies (3.28) except for a small set of $t$ values. The last two networks make a technical correction to $\tilde{S}$ to obtain the desired $S$ which satisfies (3.28) for all $t \in [0,1]$. We now describe these five networks. All of them have width at most 11, and we make the width exactly 11 by adding zero channels. The depth of each network is also controlled so that the final network has depth $L = 15n + 2$.
First NN: This network, which we denote by $\mathcal{N}_1$, has depth $4n-2$ and for any input $t \in [0,1]$ outputs the function value $K(t)$. From our remarks on interpolation, see Proposition 3.6, we know that $J(t)$ is the output of a special ReLU network $\mathcal{N}_0$ of width $W = 3$ and depth $2n-1$, where channel three is a collation channel. The CPwL function $K$ is the output of a ReLU network $\mathcal{N}_1$ of width $W = 3$ and depth $4n-2$, which is obtained by concatenating the network $\mathcal{N}_0$ for $J$ with itself and using $nt - J(t)$ as the input to the second of these networks. The third channel is a collation channel, used first to build $J(t)$. Once $J(t)$ is computed, it is sent as an input to the $2n$-th layer, then zeroed out by assigning a weight $0$, and subsequently used as a collation channel to build $K(t)$. It follows from (3.33) that the output of this network is $k(i)$ when the input is $t_i$. We add eight other channels with zero parameters. These channels will be used later.
Second NN: The second network $\mathcal{N}_2$ takes the input $t$ from channel one and outputs the CPwL function $Y(t)$, which belongs to $\Upsilon^{3,2n-1}(\mathrm{ReLU};1,1)$. This network has depth $2n-1$ and only needs three channels, but we augment it with eight more channels. After a concatenation with the existing network $\mathcal{N}_1$, it uses channel 2 to compute the terms involved in $Y(t)$, while channels 3 and 4 push forward the value $K(t)$ and the terms involved in $Y(t)$, respectively. Channels 5 to 11 have all parameters zero. Note that after this concatenation, we have available to us as outputs $t$, coming from the source channel, $K(t)$, kept in channel 3, and $Y(t)$, kept in channel 4.
Third NN: This network takes as inputs $K(t)$ and $Y(t)$ and outputs a function $\tilde{S}$ which satisfies the interpolation conditions and coincides with the desired $S$ except for a small subset of $[0,1]$. To describe this network, we shall use the CPwL function $T$ with break points $-1, 1, 2$, defined as
$$T(u) := \begin{cases} -1, & u \le -1,\\ u, & -1 \le u \le 1,\\ 2-u, & 1 \le u \le 2,\\ 0, & u \ge 2. \end{cases}$$
Note that $T$ is the identity on the interval $[-1,1]$ and has the important property that for $t \in [-1,1]$ and for each $1 \le k < n$, we have
$$T\big(B_\nu(t) + 3(\nu - k)_+\big) = B_\nu(t) \quad \text{when } \nu \le k, \qquad (3.37)$$
and is zero otherwise, since $3(\nu-k)_+ = 0$ when $\nu \le k$ and $3(\nu-k)_+ \ge 3$ when $\nu > k$. It follows from (3.37) that the sum in (3.38) below recovers exactly the first $k$ bits. Now, for $i = 0, \dots, N-1$, consider one of our points $t_i$ for which $i$ is not a multiple of $n$. Since we cannot produce $B_\nu$ with a ReLU network, we use the surrogate $\hat{B}_\nu$ in its place. This leads us to define the function $\tilde{S}$ in (3.38), obtained by replacing $B_\nu(Y(t))$ with $\hat{B}_\nu(Y(t))$ in that sum. This function satisfies the interpolation conditions (3.31) since the bits $\hat{B}_\nu(Y(t)) = B_\nu(Y(t))$, $\nu = 1, \dots, n$, whenever $t$ is one of the points $t_i$, $i = 0, \dots, N$, where interpolation is to take place. In addition, since for $j = 0, \dots, n-1$, $K(t_{jn}) = 0$ and $K(1) = J(0) = 0$, we have $\tilde{S}(t_{jn}) = \tilde{S}(1) = 0$ because of the definition of $T$. Next, we describe how $\tilde{S}$ is an output of a ReLU network $\mathcal{N}_3$ with inputs $Y(t)$, $K(t)$. The network $\mathcal{N}_3$ is organised as follows. Channel 1 is left to be a source channel that forwards the value of $t$. Channels 2 and 3 are occupied with the values of $K(t)$ and $Y(t)$, forwarded to the next layers. Channel 4 computes $(\nu - K(t))_+$ in layer $\nu$, for $\nu = 1, \dots, n$. Channel 5 computes the residual $R_{\nu-1}(Y(t))$ in layer $\nu$, $\nu = 1, \dots, n$ (this is a ReLU free channel). Channels 6 and 7 implement the network for $\hat{Q}$ and compute $\hat{B}_\nu(Y(t))$, for $\nu = 2, \dots, n$. Channels 8, 9, and 10 implement $T$. The 11-th channel successively adds the $T$ values in the sum (3.38), and therefore $\mathcal{N}_3$ outputs $\tilde{S}$. In total, the entire network $\mathcal{N}_3$ has $n+1$ layers and width 11.
We have already observed that $\tilde{S}(t_i) = y_i$, and so the interpolation conditions are satisfied. The reader can imagine that we can take a max and a min with an upper and a lower CPwL function to obtain the control (3.28). The networks $\mathcal{N}_4$ and $\mathcal{N}_5$ will do precisely that. So, the remainder of the proof is to give one such construction.
We claim that the output $\tilde{S}$ of $\mathcal{N}_3$ already satisfies the inequalities (3.39) on the set
$$\Omega_N := \bigcup_{j=0}^{n-1} \big[j/n,\, (j+1)/n - \delta\big].$$
We verify this property when $t \in [0, 1/n) \cap \Omega_N = [0, 1/n - \delta]$, since the verification on the intervals $[j/n, (j+1)/n - \delta]$, $j \ge 1$, is similar. We consider three cases. In the first two cases, a direct check shows that (3.39) is satisfied for these $t$. In the third case, it follows from the definition of $T$ that the relevant term is of the form $\eta$ with $|\eta| \le 1$, and therefore (3.39) is satisfied in this case as well.
In summary, the function $\tilde{S}$ satisfies the properties we want except for control on the small intervals that make up the complement of $\Omega_N$. We do not have a bound for $\tilde{S}$ on these small intervals. Our last construction will take care of these intervals while leaving $\tilde{S}$ unchanged outside of them.
To see how to do this, we concentrate on the interval $[t_{(j-1)n}, t_{jn}]$, for $j = 1, \dots, n$, and let $I_j := (t_{jn} - \delta, t_{jn}]$. We know from our analysis that $\tilde{S}(t) = y_{jn-1} = \eta_j$ for $t \in [t_{jn-1}, t_{jn} - \delta]$, where $\eta_j = \pm 1$. Assume for now that $\eta_j = 1$. Also, recall that $\tilde{S}(t) = 0$ for $t \in [t_{(j-1)n}, t_{(j-1)n+1} - \delta]$. If $M := \|\tilde{S}\|_{C(\Omega_N)} \ge 1$, we consider the CPwL function $U_j$ whose graph passes through the points $(t_{(j-1)n}, 0)$, $(t_{(j-1)n+1} - \delta, M)$, $(t_{jn-1}, M)$, $(t_{jn} - \delta, 1)$, and $(t_{jn}, 0)$, and is otherwise linear between these points. Then, $U_j(t) \ge \tilde{S}(t)$ on $[t_{(j-1)n}, t_{jn} - \delta]$, and thus on $[t_{(j-1)n}, t_{jn}] \setminus I_j$ we have $\min\{\tilde{S}, U_j\} = \tilde{S}$. In addition, $\min\{\tilde{S}, U_j\}$ will have values between $0$ and $1$ on $I_j$. This is the correction we want on $I_j$. We then define the upper envelope $U$ from the $U_j$, $j \in \Lambda_+$, where $\Lambda_+$ is the set of $j$'s for which $\eta_j = +1$. In a similar way, we define a lower envelope $\hat{U}$ for the $j$'s such that $\eta_j = -1$. We can then take
$$S := \max\big\{\min\{\tilde{S}, U\},\, \hat{U}\big\}.$$
The function $S$ satisfies the conclusions of the theorem, and we only have to see how it is outputted by a suitable neural network. Each of the functions $U$, $\hat{U}$ has at most $4n + 1$ breakpoints and hence is in $\Upsilon^{3,4n+1}(\mathrm{ReLU};1,1)$.
Fourth and Fifth NNs: These are the networks $\mathcal{N}_4$ and $\mathcal{N}_5$, outputting $U$ and $\hat{U}$. We augment them with collation channels so that they have width 11. Since they already have a source channel (channel 1), there is no need to add such a channel.
The network $\mathcal{N}$: We use a construction similar to the one given earlier for taking minima and maxima of network outputs in order to output $S$ by concatenating the networks for $\tilde{S}$, $U$, and $\hat{U}$. We end up with a network of width $W = 11$ (the same as the one for $\tilde{S}$) and depth $L = 15n + 2$, where we added the depth $7n - 2$ of the network for $\tilde{S}$, the depths of the networks for $U$ and $\hat{U}$, each of which is $4n + 1$, and two more layers to perform the min and max. This completes the proof of the theorem.
Remark 3.1. We make some final remarks on the above construction. We have used the fact that the $t_i$'s are $N + 1 = n^2 + 1$ equally spaced points. It would be interesting and useful to clarify for which other patterns of univariate points the construction can be done. We know that we cannot increase the number of points significantly because of the upper bound in (3.27). Note that in the case $d > 1$, we can construct a similar interpolant for points in $\mathbb{R}^d$. Again, it would be useful to know exactly when we can interpolate certain patterns of values like the $y_i$'s at $Cn^2$ points from $\mathbb{R}^d$. The constructions in (Yarotsky 2018) and (Shen et al. 2019) show that this is possible on equally spaced tensor product grids.

Classical model classes: smoothness spaces
In order to prove anything quantitative about the rate of approximation of a given target function $f$, one obviously needs to assume something about $f$. Such assumptions are referred to as model class assumptions. We say that a set $K$ in a Banach space $X$ is a model class of $X$ if $K$ is compact in $X$. The classical model classes for multivariate functions are the unit balls of smoothness spaces such as Lipschitz, Hölder, Sobolev, and Besov spaces. We give a brief (mostly heuristic) review of these spaces in this section. A detailed development of these spaces can be found in the standard references, see e.g. (Adams & Fournier 2003, Peetre 1976, Stein 1970, DeVore & Sharpley 1993, DeVore 1998).
We consider these spaces on the domain Ω := [0, 1] d . All definitions and properties extend to more general domains such as Lipschitz domains in R d . We use standard multivariate notation.

L p spaces
As a starting point, we recall that the $L_p(\Omega)$ spaces consist of all Lebesgue measurable functions $f$ for which $|f|^p$ is integrable. We define
$$\|f\|_{L_p(\Omega)} := \Big(\int_\Omega |f(x)|^p\, dx\Big)^{1/p}, \quad 0 < p < \infty.$$
This is a norm when $1 \le p < \infty$ and a quasi-norm when $0 < p < 1$. When $p = \infty$, one usually takes $X = C(\Omega)$, the space of continuous functions on $\Omega$, with the uniform norm
$$\|f\|_{C(\Omega)} := \max_{x \in \Omega} |f(x)|.$$
However, on occasion we need the space $L_\infty(\Omega)$, consisting of all functions that are essentially bounded on $\Omega$, with
$$\|f\|_{L_\infty(\Omega)} := \operatorname*{ess\,sup}_{x \in \Omega} |f(x)|.$$
We assume throughout that the reader is familiar with the standard properties of these spaces.

Sobolev spaces
We begin by defining smoothness spaces of continuous functions. If $r$ is a positive integer, then $C^r := C^r(\Omega)$, $\Omega = [0,1]^d$, is the set of all continuous functions $f$ defined on $\Omega$ which have classical derivatives $D^\alpha f$ for all $\alpha$ with $|\alpha| := \sum_{j=1}^d \alpha_j = r$. We equip this space with the semi-norm
$$|f|_{C^r(\Omega)} := \max_{|\alpha| = r} \|D^\alpha f\|_{C(\Omega)}.$$
A norm on this space is given by
$$\|f\|_{C^r(\Omega)} := |f|_{C^r(\Omega)} + \|f\|_{C(\Omega)}.$$
The Sobolev spaces (of integer order) generalize the spaces $C^r$ by imposing weaker assumptions on the derivatives $D^\alpha f$. First, the notion of weak (or distributional) derivative $D^\alpha f$ is introduced in place of classical derivatives. Then, for any $1 \le p \le \infty$, the Sobolev space $W^r(L_p(\Omega))$ is defined as the set of all $f \in L_p(\Omega)$ such that $D^\alpha f \in L_p(\Omega)$ for all $|\alpha| = r$. We equip this space with the semi-norm
$$|f|_{W^r(L_p(\Omega))} := \max_{|\alpha| = r} \|D^\alpha f\|_{L_p(\Omega)},$$
and obtain a norm on this space by $\|f\|_{W^r(L_p(\Omega))} := |f|_{W^r(L_p(\Omega))} + \|f\|_{L_p(\Omega)}$.

Besov spaces
The Sobolev spaces above are not sufficient because they only classify smoothness for integer values r. There is a long history of introducing smoothness spaces for any order s > 0. This began with Lipschitz and Hölder spaces and culminated with the Besov spaces that we define in this section.
Given a function $f \in L_p(\Omega)$, $0 < p \le \infty$, and any positive integer $r$, we define its modulus of smoothness of order $r$ as
$$\omega_r(f, t)_p := \sup_{|h| \le t} \|\Delta_h^r f\|_{L_p(\Omega)}, \quad t > 0, \qquad \Delta_h^r f(x) := \sum_{k=0}^{r} (-1)^{r-k} \binom{r}{k} f(x + kh),$$
where this difference is set to zero whenever one of the points $x + kh$ is not in $\Omega$. It is easy to see that for any $f \in L_p(\Omega)$, we have $\omega_r(f, t)_p \to 0$ as $t \to 0$. How fast this modulus tends to zero with $t$ measures the $L_p$ smoothness of $f$. For example, the Lipschitz space $\mathrm{Lip}(\alpha, p)$ for $0 < \alpha \le 1$ and $0 < p \le \infty$ consists of those functions $f \in L_p(\Omega)$ for which
$$\omega_1(f, t)_p \le M t^\alpha, \quad t > 0,$$
and the smallest $M$ for which this holds is the semi-norm $|f|_{\mathrm{Lip}(\alpha, p)}$. Again, we obtain a norm on this space by simply adding $\|f\|_{L_p(\Omega)}$ to the semi-norm.
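As an illustration, the modulus of smoothness can be estimated by brute force. The sketch below (illustrative only; the supremum over $h$ and the $L_p$ integral are replaced by coarse discretizations) implements the $r$-th difference with the boundary convention above on $\Omega = [0,1]$:

```python
import numpy as np
from math import comb

def modulus_of_smoothness(f, r, t, p, M=2000, H=50):
    """Estimate omega_r(f, t)_p on Omega = [0, 1]: the sup over
    0 < h <= t of the L_p(Omega) norm of the r-th difference
    Delta_h^r f(x) = sum_k (-1)^(r-k) C(r, k) f(x + k h),
    set to zero whenever some point x + k h leaves Omega."""
    x = np.linspace(0.0, 1.0, M)
    best = 0.0
    for h in np.linspace(t / H, t, H):
        inside = x + r * h <= 1.0   # all points x + k h stay in [0, 1]
        diff = sum((-1) ** (r - k) * comb(r, k) * f(x + k * h)
                   for k in range(r + 1))
        diff = np.where(inside, diff, 0.0)
        best = max(best, float(np.mean(np.abs(diff) ** p) ** (1.0 / p)))
    return best
```

For example, a linear function has $\omega_2(f, t)_p = 0$ for all $t$, which is one reason second differences enter the Besov scale below.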
The Besov spaces generalize the measure of smoothness in two ways. They allow for r to be replaced by any s > 0 and they introduce a finer way to measure decay of the modulus as t tends to zero. This finer decay is controlled by a new parameter 0 < q ≤ ∞.
If $f \in L_p(\Omega)$, $0 < p, q \le \infty$ and $s > 0$, the space $B_q^s(L_p(\Omega))$ is defined as the set of functions $f$ for which
$$|f|_{B_q^s(L_p(\Omega))} := \big\| t^{-s} \omega_r(f, t)_p \big\|_{L_q([0,\infty),\, dt/t)} < \infty, \quad \text{where } r := \lfloor s \rfloor + 1. \qquad (4.1)$$
Notice here that the $L_q$ norm is taken with respect to the Haar measure $dt/t$. The case $q = \infty$ is simply the supremum over $t > 0$. The norm on this space is $\|f\|_{B_q^s(L_p(\Omega))} := |f|_{B_q^s(L_p(\Omega))} + \|f\|_{L_p(\Omega)}$. The Besov spaces are now a standard way of measuring smoothness. Functions in this space are said to have smoothness of order $s$ in $L_p$, with $q$ giving a finer gradation of this smoothness. We mention without proof a few of the properties of these spaces that are frequently used in analysis.
First, notice that when $s \in (0,1)$ and $q = \infty$, these spaces are the $\mathrm{Lip}(s, p)$ spaces. However, the space $B_\infty^1(L_p(\Omega))$ is not $\mathrm{Lip}(1, p)$, since $\omega_2$ is used in place of $\omega_1$ in the definition (4.1), thereby resulting in a slightly larger space. A second useful remark is that in (4.1) we could have used any $r > s$ and obtained the same space with an equivalent norm. When we insert $q$ into the picture, the requirement for $f$ to be in the space $B_q^s(L_p(\Omega))$ gets stronger as $q$ gets smaller; namely, we have the embeddings
$$B_{q_1}^s(L_p(\Omega)) \subset B_{q_2}^s(L_p(\Omega)), \quad q_1 \le q_2.$$
We also have the well known Sobolev embeddings for Besov spaces.
There is a simple graphical way to describe these embeddings that we shall refer to in this paper. We use the upper right quadrant of R 2 to graphically represent smoothness spaces. We can write any point in this quadrant as (1/p, s) with 0 < p ≤ ∞ and s ≥ 0. We think of any such point as corresponding to a smoothness space with smoothness of order s measured in L p . For example, the space Lip(α, L p ) can be thought of as corresponding to the point (1/p, α), and all Besov spaces B s q (L p (Ω)), 0 < q ≤ ∞, are identified with the same point (1/p, s). In terms of this graphical description, given an L p (Ω) space, the smoothness spaces embedded into L p (Ω) are the ones that correspond to points (1/τ, s) that lie on or above the line with equation s = d(1/τ −1/p). Those corresponding to points strictly above this line are compactly embedded. These embedding results are summarized in Figure 4.4.

Atomic decompositions
An often used fact about Besov spaces is that functions in these spaces can be described by certain so-called atomic decompositions. Historically, this began with the Littlewood-Paley decompositions, see (Frazier, Jawerth & Weiss 1991). In the case of approximation by ReLU networks, the two most relevant decompositions are those using spline functions or wavelets. We discuss the case of spline decompositions; details and proofs can be found, for example, in (DeVore & Popov 1988). Let $r \ge 1$ be a positive integer and consider the univariate cardinal B-spline $N_r$ of order $r$ (degree $r-1$), which is defined by the $r$-fold convolution
$$N_r := \chi_{[0,1]} * \chi_{[0,1]} * \cdots * \chi_{[0,1]} \quad (r \text{ factors}).$$
The function $N_r$ is a piecewise polynomial of degree $r-1$, is in $C^{r-2}(\mathbb{R})$, and is supported on $[0, r]$. With this normalization of the B-spline, we have $\|N_r\|_{C(\mathbb{R})} \le 1$. The multivariate cardinal B-splines are defined as tensor products
$$N(x) = N(x_1, \dots, x_d) := \prod_{i=1}^{d} N_r(x_i).$$
We do not indicate the dependence on $r$ when it is known from context. Let $\mathcal{D}$ denote the collection of dyadic cubes in $\mathbb{R}^d$ and let $\mathcal{D}_k$ denote the dyadic cubes of side length $2^{-k}$. We also use the notation $\mathcal{D}^+ := \bigcup_{k \ge 0} \mathcal{D}_k$. If $I \in \mathcal{D}_k$ has smallest vertex $2^{-k} j$ with $j \in \mathbb{Z}^d$, we let
$$N_I(x) := N(2^k x - j), \quad x \in \mathbb{R}^d.$$
The splines $N_I$ provide an atomic decomposition for many function spaces and, in particular, the $L_p$, Sobolev, and Besov spaces. Consider, for example, $\Omega = [0,1]^d$ and denote by $\mathcal{D}_k(\Omega)$ the set of those $I \in \mathcal{D}_k$ for which the support of $N_I$ nontrivially intersects $\Omega$. Then each $f \in L_1(\Omega)$ has a representation
$$f = \sum_{I \in \mathcal{D}^+(\Omega)} c_I(f)\, N_I, \qquad (4.4)$$
where the $c_I$'s are linear functionals on $L_1$, and $\mathcal{D}^+(\Omega) = \bigcup_{k \ge 0} \mathcal{D}_k(\Omega)$. The representation (4.4) is not unique since the $N_I$'s are not linearly independent. However, we can fix the $c_I$'s so that all properties stated below in this section are valid. We can characterize membership of $f$ in a Besov space $B_q^s(L_p(\Omega))$ in terms of the decomposition (4.4), see Corollary 5.3 in (DeVore & Popov 1988).
Namely, $f \in B_q^s(L_p(\Omega))$, $0 < s < \min\{r, r-1+1/p\}$, and $0 < q, p \le \infty$, if and only if $f$ has the representation (4.4) with coefficients $c_I(f)$ satisfying
$$\|f\|' := \Bigg( \sum_{k \ge 0} \Big[ 2^{ks} \Big( \sum_{I \in \mathcal{D}_k(\Omega)} \|c_I(f) N_I\|_{L_p(\Omega)}^p \Big)^{1/p} \Big]^q \Bigg)^{1/q} < \infty$$
for $0 < q, p < \infty$, with the obvious modifications when either $p$ or $q$ is infinity. Moreover, $\|\cdot\|'$ is equivalent to the usual Besov norm. This fact is the starting point for proving many approximation theorems for functions in Besov spaces.
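The cardinal B-splines themselves are easy to generate numerically. The sketch below (illustrative, not part of the original text) uses the classical de Boor recursion $N_r(x) = \big(x\, N_{r-1}(x) + (r-x)\, N_{r-1}(x-1)\big)/(r-1)$, which is equivalent to the convolution definition above, together with the dyadic shifts $N_I$:

```python
import numpy as np

def cardinal_bspline(r, x):
    """Cardinal B-spline N_r of order r (degree r - 1), supported on
    [0, r], via the recursion N_1 = chi_[0,1) and
    N_r(x) = (x N_{r-1}(x) + (r - x) N_{r-1}(x - 1)) / (r - 1)."""
    x = np.asarray(x, dtype=float)
    if r == 1:
        return np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)
    return (x * cardinal_bspline(r - 1, x)
            + (r - x) * cardinal_bspline(r - 1, x - 1.0)) / (r - 1)

def N_I(r, k, j, x):
    """Tensor-product B-spline rescaled to the dyadic cube I in D_k
    with smallest vertex 2^{-k} j:  N_I(x) = prod_i N_r(2^k x_i - j_i)."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    return np.prod(cardinal_bspline(r, 2.0 ** k * x - np.asarray(j)), axis=-1)
```

The shifts of $N_r$ form a partition of unity, $\sum_{j \in \mathbb{Z}} N_r(x - j) = 1$, which is one of the properties that make these atoms convenient for the decomposition (4.4).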

Interpolation of operators
Next, we mention how, from known upper bounds for approximation error on a model class, we can derive new upper bounds on a spectrum of new model classes by using results from the theory of interpolation of operators. We assume the reader is familiar with the rudiments of the theory of interpolation spaces via the real method of interpolation, see either (Bergh & Löfström 1976) or (Bennett & Sharpley 1990). Given two Banach spaces $X, Y$ with (for convenience) $Y$ continuously embedded in $X$, the real method of interpolation generates a family of new Banach spaces $(X, Y)_{\theta,q}$, $0 < \theta < 1$, $0 < q \le \infty$, which interpolate between them. These spaces are defined via what is called the $K$-functional for the pair,
$$K(f, t) = K(f, t; X, Y) := \inf_{g \in Y} \big\{ \|f - g\|_X + t\, |g|_Y \big\}, \quad t > 0,$$
where $\|\cdot\|_X$ is the norm on $X$ and $|\cdot|_Y$ is a semi-norm on $Y$.¹ The space $(X, Y)_{\theta,q}$, $0 < \theta < 1$, $0 < q \le \infty$, then consists of all $f \in X$ such that
$$\|f\|_{(X,Y)_{\theta,q}} := \big\| t^{-\theta} K(f, t) \big\|_{L_q([0,\infty),\, dt/t)} < \infty,$$
where the $L_q$ norm is taken with respect to the Haar measure $dt/t$. The important fact for us is that for classical pairs $(X, Y)$ of spaces, such as $L_p$ and Besov/Sobolev spaces, the interpolation spaces are known and can be used to easily extend known error estimates for approximation. We mention two typical approximation results. By $U(Y)$ we mean the unit ball of the space $Y$.
Extend 1: If $\Sigma_n \subset X$ is a set that provides the approximation error
$$\sup_{f \in U(Y)}\, \inf_{S \in \Sigma_n} \|f - S\|_X \le \varepsilon_n,$$
then for each $0 < \theta < 1$ and $Z := (X, Y)_{\theta, q}$, we have $\sup_{f \in U(Z)} \inf_{S \in \Sigma_n} \|f - S\|_X \le C \varepsilon_n^\theta$.

¹ When $Y$ is not continuously embedded in $X$, we use $\|\cdot\|_Y$ in the definition of $K$.
We prove only Extend 1, since the proof of Extend 2 is similar. We can take $q = \infty$ because this is the largest space $Z$ for the given $\theta$. If $f \in U(Z)$, then for $t = \varepsilon_n$ there is a $g \in Y$ (if the infimum is not achieved, the proof follows from a limiting argument) that satisfies
$$\|f - g\|_X + \varepsilon_n |g|_Y \le K(f, \varepsilon_n) \le \varepsilon_n^\theta.$$
We know that there is an $S \in \Sigma_n$ which approximates $g$ to accuracy $\varepsilon_n |g|_Y$. For this $S$, we have
$$\|f - S\|_X \le \|f - g\|_X + \|g - S\|_X \le \|f - g\|_X + \varepsilon_n |g|_Y \le \varepsilon_n^\theta.$$
Here is a simple but typical example of Extend 1. If we establish a bound $\varepsilon_n$ for approximation of functions in $U(\mathrm{Lip}\,1)$ with error measured in $X = C(\Omega)$, then we automatically get the bound $\varepsilon_n^\alpha$ for approximating functions from $U(\mathrm{Lip}\,\alpha)$, $0 < \alpha < 1$, because $\mathrm{Lip}\,\alpha = (C(\Omega), \mathrm{Lip}\,1)_{\alpha, \infty}$.

Evaluation of nonlinear methods of approximation
Before embarking on an analysis of the approximation performance of ReLU networks, we wish to place this type of approximation into the usual setting of approximation theory, and thereby draw out the type of questions that should be answered. As we have noted, approximation using the outputs of neural networks with a fixed architecture is a form of nonlinear approximation known as manifold approximation. Given a target function f in a Banach space X, the approximation is given by A n (f ) := M n (a n (f )), where the two maps a = a n : X → R n , M = M n : R n → X, select the n parameters of the network and output the approximation, respectively.
Of course, there are many methods of approximation. The question we address in this section is how could we possibly determine if approximation by NNs is in some quantifiable sense superior to other more traditional methods of approximation. Also, what are the inherent limits on the capacity of NNs to approximate, once the number n of parameters allocated to the approximation is fixed? To answer such questions, we introduce various traditional ways to compare approximation methods and say with certainty whether an approximation method is optimal among all methods of approximation, or perhaps among all approximation methods with a specified structure. How NNs do under such methods of comparison is not the subject of this section. That topic is dealt with in later sections of this paper.
To begin the discussion, we take the view that an approximation method is a sequence {0} =: Σ 0 ⊂ Σ 1 ⊂ · · · ⊂ Σ n ⊂ · · · ⊂ X of nested sets to be used in approximating functions f from the Banach space X in the norm · X . Here n, in some sense, measures the complexity of Σ n . The typical spaces X used in practice are the spaces L p (Ω). However, at this point, we let X be any Banach space of functions on Ω with a norm · X . The various methods of approximation are divided into two general categories: linear and nonlinear. A method is said to be linear if, for each n, the set Σ n is a linear space of dimension n, that is, Σ n is the linear span of n elements from X. The standard examples are spaces of polynomials, splines, and wavelets. Note that the term linear does not refer to how the approximation depends on f ∈ X. It only refers to the structure of each Σ n , n ≥ 0. All other methods of approximation are referred to as nonlinear. For nonlinear methods, a linear combination of elements from Σ n may not lie in Σ n . There are three prominent examples of nonlinear approximation we wish to mention.
In the first, Σ n consists of piecewise polynomials (of a fixed and generally small degree r) on a domain Ω ⊂ R d . Let us denote by P r the linear space of polynomials of degree r. Here, we can use any notion of degree in d variables, such as coordinate degree or total degree. Given n, an element S ∈ Σ n is obtained by partitioning the domain Ω into n disjoint cells C j ⊂ Ω, j = 1, . . . , n, and assigning a polynomial P j ∈ P r to each cell. Thus, we have
S = ∑_{j=1}^{n} P_j χ_{C_j},
where χ C j is the characteristic function of the cell C j . The partitions are not fixed but allowed to vary within a class of partitions that can be described by n parameters. We have already seen an example of this in the case of free-knot CPwL functions of one variable, in which case the partition was allowed to consist of any n intervals. In the multivariate case, the allowable partitions are more structured and usually generated adaptively. The rough idea of this form of approximation is to use small cells where the target function is rough and large cells where the function is smooth.
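The "small cells where the function is rough" idea can be sketched in one dimension with piecewise constants (r = 0): greedily bisect the cell with the largest local error. The target √x (rough at 0), the greedy bisection rule, and the cell count are illustrative choices; for a monotone f, the best constant on [a, b] has sup error (f(b) − f(a))/2, which is the only fact the sketch uses.

```python
import numpy as np

f = np.sqrt                              # rough at x = 0, smooth elsewhere

def cell_err(a, b):
    """Sup error of the best constant on [a, b] for monotone f."""
    return (f(b) - f(a)) / 2.0

def adaptive_partition(n_cells):
    """Greedy refinement: always bisect the currently worst cell."""
    cells = [(0.0, 1.0)]
    while len(cells) < n_cells:
        i = max(range(len(cells)), key=lambda j: cell_err(*cells[j]))
        a, b = cells.pop(i)
        m = (a + b) / 2.0
        cells += [(a, m), (m, b)]
    return cells

n = 32
adaptive_err = max(cell_err(a, b) for a, b in adaptive_partition(n))
uniform_err = max(cell_err(i / n, (i + 1) / n) for i in range(n))
print(adaptive_err, uniform_err)         # adaptivity wins for this target
```

The adaptive partition concentrates tiny cells near 0, where √x has unbounded derivative, and uses large cells on the smooth part.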
From our description of the sets Υ W,L (ReLU; d, 1), we see that NN approximation fits into the above framework of piecewise polynomial approximation, in the sense that each element of one of these sets is a CPwL function on a polytope partition, see §3. However, several notable distinctions arise. First of all, far fewer restrictions are placed on the partitions that arise when compared with other piecewise polynomial methods of approximation such as FEMs, adaptive methods, free-knot splines, etc. Another important point is that Υ W,L (ReLU; d, 1) is not the collection of all CPwL functions subordinate to a fixed class of partitions. Here, choosing the parameters of the network specifies in tandem both the partition and the CPwL function on it. One view of how this is done is nicely explained in (Balestriero & Baraniuk 2020).
Another widely used example of nonlinear approximation is n-term approximation. Let B := {φ j , j ≥ 1} be an unconditional basis for X. The set Σ n := Σ n (B) in this case consists of all functions S ∈ X which are a linear combination of at most n of these basis elements. Thus, each S ∈ Σ n (B) takes the form
S = ∑_{j∈Λ} c_j φ_j,
where Λ ⊂ N is a subset of n indices. The index set Λ is allowed to change at each occurrence with the only restriction being that #(Λ) = n. One can generalize this setting if B is replaced by a frame or, more generally, a dictionary. Typical examples used in numerical analysis and signal/image processing are dictionaries of wavelets, curvelets, ridge functions, shearlets, and other families of waveforms. In this generality, n-term approximation is not numerically implementable because the dictionary is infinite. To circumvent this in practice, one uses a large but finite dictionary that is sufficiently rich for the problem at hand.
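In a Hilbert space, n-term approximation in an orthonormal basis reduces to keeping the n largest coefficients. The following sketch does this with a discrete Haar basis and a piecewise-smooth signal; the signal, the length 64, and the jump location are arbitrary illustrative choices. By Parseval, the L2 error equals the norm of the discarded coefficients.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal discrete Haar basis as rows; n must be a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-scale differences
    return np.vstack([top, bot]) / np.sqrt(2.0)

N = 64
H = haar_matrix(N)
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = np.where(x < 0.37, x, x - 1.0)       # smooth except for a single jump

c = H @ f                                # Haar coefficients of f

def n_term_err(n):
    """L2 error of the best n-term Haar approximation."""
    keep = np.argsort(np.abs(c))[-n:]    # indices of the n largest |c_j|
    cn = np.zeros_like(c)
    cn[keep] = c[keep]
    return np.linalg.norm(f - H.T @ cn)  # H is orthogonal, so H^{-1} = H^T

print(n_term_err(4), n_term_err(16))     # error decreases as n grows
```

Because the kept index set Λ adapts to f (here it concentrates near the jump), the same budget n captures the singularity far better than keeping the first n basis elements would.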
Neural network approximation fits most naturally into a third type of nonlinear approximation known as manifold approximation. In manifold approximation, the elements S ∈ Σ n take the form S = M_n(y), y ∈ R^n, where M n : R n → X, that is, Σ n = {M n (y) : y ∈ R n }. As noted earlier, a numerical implementation of manifold approximation is made by specifying a mapping a n : K → R n which, when presented with f ∈ K, describes the parameters of the point on the manifold used to approximate f .
Given an approximation method Σ := (Σ n ) n≥0 and f ∈ X, we let
E_n(f)_X := E(f, Σ_n)_X := inf_{S∈Σ_n} ‖f − S‖_X
denote the error of approximation of f by elements from Σ n . Note that E n (f ) X gives the smallest error we can achieve using Σ n to do the approximation, but it does not address the question of how to find such a best or near best approximation. This is an important issue, especially for NN approximation, that we address later in this section. An often quoted property of NNs is their universality, which means that E n (f ) X → 0 as n → ∞, for all f ∈ X. This is a property possessed by all approximation methods used in numerical analysis. Universality is not at all special and certainly cannot be used to explain the success of NNs.

Approximation of model classes
We do not measure the performance of an approximation method on a single function f but rather on a class K ⊂ X of functions. In this case, we have the class error
E_n(K)_X := E(K, Σ_n)_X := sup_{f∈K} E_n(f)_X. (5.1)
Here, K incorporates the knowledge we have about the function or potential functions f that we are trying to capture. For example, when numerically solving a PDE, K is typically provided by a regularity theorem for the PDE. In the case of signal processing, K summarizes what is known or assumed about the underlying signal, such as bandlimits or sparsity.
Note that E n (K) X represents the worst case error. It is also possible to measure error in some averaged sense. This would be meaningful, for example, when the set K is given by a stochastic process with some underlying probability measure. For now, we discuss only the worst case error.
A set K on which we wish to measure the performance of an approximation method is called a model class. We always assume that K is a compact subset of X. If the approximation process is universal, then E n (K) X → 0 as n → ∞ for every model class K. How fast it tends to zero represents how good the sets (Σ n ) n≥0 are for approximating the elements of K.
If we are presented with approximation processes given by Σ = (Σ n ) n≥0 and Σ′ = (Σ′ n ) n≥0 respectively, then given a model class K, we can compare the performance of these methods on K by checking the decay of E n (K, Σ) X and E n (K, Σ′) X as n → ∞. If the decay rate of E n (K, Σ) X is faster than that of E n (K, Σ′) X as n → ∞, we are tempted to say that Σ is superior to Σ′, at least on this model class. However, the question of the computability of the approximant is an important issue and has to be taken into consideration.
To drive home this latter point, the following example is germane. Given a compact set K ⊂ X and ε > 0, let S 1 = S 1 (ε) be a finite subset of K such that dist(f, S 1 ) X ≤ ε for all f ∈ K. For example, S 1 could be the set of centers of an ε covering of K. Going further, we can find a one dimensional manifold Σ 1 that is parameterized by t ∈ [0, 1] and passes through each point in S 1 as t runs through [0, 1], and thus E(K, Σ 1 ) X ≤ ε. The point of this simple observation is to emphasize that we must place some further restrictions on what we allow as an approximation method (Σ n ) n≥0 so that we can have a meaningful theory. What such restrictions should look like and what are their implications is the subject we address next.

Widths for measuring approximation error
The concept of widths was introduced to quantify the best possible performance of approximation methods on a given model class K. The best known width is the Kolmogorov width, which was introduced to quantify the best possible approximation when using linear spaces. If X n is a linear subspace of X of dimension n, then its performance in approximating the elements of the model class K is given by the error E(K, X n ) X defined in (5.1). If we fix the value of n ≥ 0, the Kolmogorov n-width of K is defined as
d_n(K)_X := inf_{dim(Y)=n} E(K, Y)_X, (5.2)
where the infimum is taken over all linear spaces Y ⊂ X of dimension n. An n dimensional space which achieves the infimum in (5.2) is called a Kolmogorov space for K if it exists.
The Kolmogorov n-width of a model class K tells us the optimal performance possible for approximating K using linear spaces of dimension n. It does not tell us how to select a (near) optimal space Y of dimension n for this purpose, nor how to find a good/best approximation from Y once it is chosen. In recent years, discrete optimization methods have been developed for finding near optimal subspaces. They go by the name of greedy algorithms, see (Buffa, Maday, Patera, Prud'homme & Turinici 2012), (Binev, Cohen, Dahmen, DeVore, Petrova & Wojtaszczyk 2011), (DeVore, Petrova & Wojtaszczyk 2013). If X is a Hilbert space and Y is a finite dimensional subspace, then we can always find the best approximation from Y to a given f ∈ X by orthogonal projection. This becomes a problem when X is a general Banach space, because linear projections onto a general n dimensional space Y may have large norm when n is large. Although a famous theorem of Kadec-Snobar says that there is always a projection with norm at most √n, finding such a projection is a numerical challenge; moreover, such a projection does not give the best approximation from the space, but only a near best approximation with the error inflated by a factor proportional to the norm of the projection. For classical model classes such as finite balls in smoothness spaces like the Lipschitz, Sobolev, or Besov spaces, the Kolmogorov widths are known asymptotically when X is an L p space. Furthermore, it is often known that specific linear spaces of dimension n, such as polynomials, splines on uniform partitions, etc., achieve this optimal asymptotic performance (at least within reasonable constants). This can then be used to show that for such K, certain numerical methods, such as spectral methods or FEMs, are also asymptotically optimal among all possible choices of numerical methods built on using linear spaces of dimension n for the approximation.
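A hedged sketch of the greedy subspace selection mentioned above: for a discretized parametric family K (here f_t(x) = 1/(1 + tx), an arbitrary illustrative choice), each step adds the snapshot currently worst approximated by the span of the selected basis. This is the "strong" greedy variant; practical reduced basis methods use a weak variant driven by error surrogates.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
# discretized model class K: 60 snapshots of a parametric family
K = np.array([1.0 / (1.0 + t * x) for t in np.linspace(0.1, 10.0, 60)])

basis = np.zeros((0, x.size))            # orthonormal rows, grown greedily
errors = []
for step in range(6):
    proj = K @ basis.T @ basis           # orthogonal projection onto span
    res = np.linalg.norm(K - proj, axis=1)
    i = int(np.argmax(res))              # worst-approximated snapshot
    errors.append(res[i])
    new = K[i] - proj[i]                 # residual is orthogonal to the span
    basis = np.vstack([basis, new / np.linalg.norm(new)])

print(errors)                            # max error over K decays rapidly
```

The worst-case error over K is nonincreasing by construction, and for this analytic family it decays quickly, reflecting the fast decay of its Kolmogorov widths.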
Let us note that in the definition of Kolmogorov widths we are not requiring that the mapping which sends f ∈ K into the approximation to f is a linear map. There is a concept of linear width which requires the linearity of the approximation map. Namely, given n ≥ 0 and a model class K ⊂ X, its linear width d L n (K) X is defined as
d^L_n(K)_X := inf_{T∈L_n} sup_{f∈K} ‖f − T(f)‖_X,
where the infimum is taken over the class L n of all linear maps from X into itself with rank at most n. The asymptotic decay of linear widths for classical smoothness classes is also known. We refer the reader to the books (Pinkus 2012), (Lorenz, Makovoz & von Golitschek 1996) for the fundamental results for Kolmogorov and linear widths. When X is not a Hilbert space, the linear width of K can decay worse than the Kolmogorov width. Now, we want to make a very important point. There is a general lower bound on the decay of Kolmogorov widths that was given by Carl in (Carl 1981). This lower bound can be very useful in showing that a linear method of approximation is nearly optimal. To state this lower bound, we need to introduce the Kolmogorov entropy of a compact set K. Given ε > 0, compactness says that K can be covered by a finite number of balls of radius ε, see Figure 5.5. We define the covering number N ε (K) X to be the smallest number of balls of radius ε that cover K, and we define the entropy H ε (K) X of K to be the logarithm of this number H ε (K) X := log 2 (N ε (K) X ).
The entropy of K measures how compact the set K is and is often used to give lower bounds on how well we can approximate the elements in K and also how well we can learn an element from K given data observations. The Kolmogorov entropy of a compact set is an important quantity for measuring optimality, not only in approximation theory and numerical analysis, but also in statistical estimation and encoding of signals and images.
To formulate the lower bounds for widths in terms of entropy, we introduce the related concept of entropy numbers. Given n ≥ 0, we define the entropy number ε n (K) X to be the infimum of all ε > 0 for which 2 n balls of radius ε cover K, that is, ε n (K) X := inf{ε : N ε (K) X ≤ 2 n }. The decay rate of entropy numbers for all classical smoothness spaces in L p (Ω) is known.
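As a toy illustration of entropy numbers, take K to be the unit cube [0,1]^d in R^d with the sup norm. Balls of radius ε are then cubes of side 2ε, so N_ε = ⌈1/(2ε)⌉^d and (up to boundary conventions on the covering) ε_n(K) = 1/(2⌊2^{n/d}⌋) ≈ (1/2)2^{−n/d}. The decay 2^{−n/d} slows dramatically as d grows, a simple instance of the curse of dimensionality. This example is ours, not taken from the survey.

```python
import math

def entropy_number(n, d):
    """eps_n of the unit cube [0,1]^d in the sup norm:
    smallest eps with ceil(1/(2*eps))^d <= 2^n, i.e. 1/(2*floor(2^(n/d)))."""
    return 1.0 / (2.0 * math.floor(2.0 ** (n / d)))

print(entropy_number(10, 1))   # 2^-11 for d = 1
print(entropy_number(10, 5))   # only 2^-3 for d = 5: same budget, far worse
```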
Carl proved that for each r > 0, there is a constant C r , depending only on r, such that
ε_n(K)_X ≤ C_r (n + 1)^{−r} sup_{m≥0} (m + 1)^r d_m(K)_X, n ≥ 0. (5.3)
Thus, if the Kolmogorov widths d n (K) X decay like n^{−r}, then the entropy numbers ε n (K) X decay at least that fast; in other words, approximation by n dimensional linear spaces can never achieve a decay rate better than that of the entropy numbers of K. Let us note that for many standard model classes K, such as finite balls in Sobolev and Besov spaces, the decay rate of d n (K) X is much worse than that of ε n (K) X . A version of Carl's inequality holds for other decay rates, even exponential ones, and can be found in (Cohen, DeVore, Petrova & Wojtaszczyk 2020).

Nonlinear widths
Since NN approximation is a nonlinear method of approximation, the Kolmogorov widths are not an appropriate measure of performance. Many different notions of nonlinear widths, see the discussion in (DeVore, Kyriazis, Leviatan & Tikhomirov 1993), have been introduced to match the various forms of nonlinear approximation used in numerical computations. We shall discuss only nonlinear widths that match the form of approximation provided by NNs.
Recall that NN approximation is a form of manifold approximation, where the approximation set Σ n consists of the outputs of a neural network with n parameters. Thus, Σ n = M n (R n ), with M n being the mapping that describes how the output function is constructed once the parameters and architecture of the NN are set. Note that Σ n is a nonlinear set in the sense that the sum of two elements from Σ n is generally not in Σ n .
There are by now numerous papers that discuss the approximation by NNs. They typically provide estimates for E(K, Σ n ) X for certain model classes K. We will discuss such estimates subsequently in §7 and §8. We have cautioned that such results must be taken with a grain of salt since they do not typically discuss how the approximation would be found or numerically constructed. Our point of view is that it is not just an issue of how well Σ n approximates K, although this is indeed an interesting question, but also how a good approximation would be found. In other words, the parameter selection mapping a n is equally important.
When presented with an f ∈ K, one chooses the parameters of the NN to be used to construct the approximation to f . Typical algorithms in learning base this selection of parameters on some form of optimization, executed through gradient descent methods. For our analysis, we denote this selection procedure by the mapping a n : K → R n . So the approximation procedure is given by A n (f ) := M n (a n (f )). If we wish to establish some form of optimality of NNs, we should compare NN approximation with other approximation methods of this form.
Given any pair of mappings (not necessarily coming from NNs) a : X → R n and M : R n → X, we define the error of approximating f ∈ X by
E_{a,M}(f)_X := ‖f − M(a(f))‖_X, and E_{a,M}(K)_X := sup_{f∈K} E_{a,M}(f)_X.
For any such a, M we have
E_{a,M}(f)_X ≥ E(f, Σ_n)_X, where Σ_n := M(R^n),
and we have equality when we choose a(f ) so that M (a(f )) is a best approximation to f (assuming such a best approximation exists) from Σ n . A first possibility for defining optimal performance of such methods of manifold approximation on a model class K would be to simply find the minimum of E a,M (K) X over all such pairs of mappings. However, we have already pointed out that this minimum would always be zero (even when n = 1) because of the existence of space filling manifolds. On the other hand, these space filling manifolds are useless in numerical analysis. Consider, for example, a one parameter space filling manifold. By necessity, a small perturbation of the parameter will generally result in a large change in the output, which makes parameter selection for fitting f impossible. The natural question that arises is what restrictions need to be imposed on the mappings a, M so that we have a theory which corresponds to reasonable numerical methods. We discuss this next.
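The instability of space filling parameterizations can be made quantitative in a toy setting of our own devising: let M_ε be a one-parameter CPwL path through all centers of an ε-net of the square [0,1]^2 (in boustrophedon order), parameterized with constant speed on [0,1]. It approximates every point of the square to within about ε, but its Lipschitz constant equals the path length, which blows up like 1/ε, so ever smaller parameter perturbations produce unit-size output changes.

```python
import numpy as np

def path_lipschitz_constant(eps):
    """Length of a constant-speed CPwL path through the centers of a
    k x k eps-net of [0,1]^2; this length is the Lipschitz constant of
    the map t in [0,1] -> point on the path."""
    k = int(np.ceil(1.0 / (2.0 * eps)))          # k x k grid of ball centers
    centers = []
    for i in range(k):                           # boustrophedon ordering
        cols = range(k) if i % 2 == 0 else range(k - 1, -1, -1)
        centers += [((2 * j + 1) / (2 * k), (2 * i + 1) / (2 * k)) for j in cols]
    pts = np.array(centers)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# approximation error ~ eps, but the Lipschitz constant grows like 1/eps
print(path_lipschitz_constant(0.1), path_lipschitz_constant(0.01))
```

This is exactly the trade-off the text describes: one parameter suffices to get within any ε of every target, but only at the cost of a parameterization too unstable to search numerically.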

Restrictions on a, M in manifold approximation
The first suggestion, given in (DeVore, Howard & Micchelli 1989), for the possible restrictions to place on the mappings a, M , was to require that they be continuous. This led to the following definition of manifold widths δ n (K) X ,
δ n (K) X := inf a n ,M n E a n ,M n (K) X ,
with the infimum taken over all maps a n : K → R n and M n : R n → X, where a n is continuous on K and M n is continuous on R n . Manifold widths are closely connected to other definitions of nonlinear widths, see the discussion in (DeVore et al. 1993). It turns out that even with these very modest assumptions on the mappings a, M , one can prove lower bounds for δ n (K) X when K is a unit ball of a classical smoothness space, e.g. Besov, Sobolev, Lipschitz, and these lower bounds show that manifold approximation is no better than other methods of nonlinear approximation, such as n-term wavelet approximation or adaptive finite element approximation, for these model classes. For example, if we approximate in L p (Ω), with Ω = [0, 1] d , and K is a unit ball of a Besov space with smoothness s that embeds compactly into L p (Ω), then it was shown in (DeVore et al. 1989) that δ n (K) Lp(Ω) ≥ Cn −s/d , n ≥ 1. (5.5) This should not be used to deduce that manifold approximation, in general, and NN approximation, in particular, offer nothing new in terms of their ability to approximate. It may be that their power to approximate lies in their ability to handle non-traditional model classes. Nevertheless, this should make us proceed with caution. A stark criticism of manifold widths is that the requirement of continuity of the mappings is too weak and does not correspond to the notions of numerical stability used in practice. In other words, manifold approximation based on merely assuming that a, M are continuous may also not be implementable in a numerical setting. We next discuss what may be more viable restrictions on a, M that match numerical practice.

Stable manifold widths
A major issue in the implementation of a method of approximation is its stability, that is, its sensitivity to computational error or noisy inputs. The stability we want can be summarized in the following two properties: (S1) When we input f into the algorithm, we often input a noisy discretization of f , which can be viewed as a perturbation of f . So, we would like to have the property that when f − g X is small, the algorithm outputs M (a(g)) which is close to M (a(f )). A standard quantification of this is to require that the mapping A := M • a is a Lipschitz mapping from K to X. Notice that in this formulation the perturbation g should also be in K.
(S2) In the numerical implementation of the algorithm, the parameters a(f ) are not computed exactly, and so we would like that when a, b ∈ R n are close to one another, then M (a) and M (b) are likewise close. Again, the usual quantification of this observation is to impose that M : R n → X is a Lipschitz map. This property requires the specification of a norm on R n which is controlling the size of the perturbation of a.
The simplest way to guarantee (S1)-(S2) is to require that both mappings a, M are Lipschitz, which means that there is a norm · Y on R n and a number γ ≥ 1, such that
‖a(f) − a(g)‖_Y ≤ γ‖f − g‖_X, f, g ∈ K, (5.6)
‖M(y) − M(y′)‖_X ≤ γ‖y − y′‖_Y, y, y′ ∈ R^n. (5.7)
If a, M satisfy (5.6)-(5.7), then obviously (S1) and (S2) hold, where the Lipschitz constant in (S1) is γ 2 . Imposing Lipschitz stability on a, M leads to the following definition of stable manifold widths
δ*_n(K)_X := δ*_{n,γ}(K)_X := inf_{a,M} E_{a,M}(K)_X,
where now the infimum is taken over all maps a : K → R n and M : R n → X that are Lipschitz continuous with constant γ.

Bounds for stable manifold widths
Both upper and lower bounds for stable manifold widths of a compact set K are given in (Cohen et al. 2020). These bounds are tight in the case when the approximation takes place in a Hilbert space. Approximation in a Hilbert space is often used in applications of NNs. Lower bounds for the decay of stable manifold widths in a general Banach space X are given by the following Carl's type inequality, see (5.3), which compares δ * n (K) X with the entropy numbers ε n (K) X . Specifically, for any r > 0, we have ε n (K) X ≤ C(r, γ)(n + 1) −r sup m≥0 (m + 1) r δ * m,γ (K) X , n ≥ 0. (5.8) This shows that whenever the stable manifold widths δ * n (K) X of a model class K tend to zero like O(n −r ), n → ∞, then the entropy numbers of K must have the same or faster rate of decay. Similar bounds are known when the decay rate n −r , n → ∞, is replaced by other decays, see (Cohen et al. 2020). In this sense, the stable manifold widths δ * n,γ (K) X cannot tend to zero faster than the entropy numbers of K.
The inequalities (5.8) give a bound for how well manifold approximation can perform on a model class K once Lipschitz stability of the maps a, M is imposed. One might speculate, however, that in general ε n (K) X may go to zero faster than δ * n (K) X . This is not the case when X = H is a Hilbert space, since in that case for any compact set K ⊂ H, we have the estimate δ * 26n,2 (K) H ≤ 3ε n (K) H , n ≥ 1, (5.9) proved in (Cohen et al. 2020). This is very useful information, since it is often relatively easy to compute the entropy numbers of a model class K.
In addition, it is also a very useful result for our, yet to come, analysis of NN approximation. The upper bound (5.9) is proved through three fundamental steps. The first is to select 2 n points S n := {f i } i=1 2 n from H such that the balls of radius ε := ε n (K) H centered at these points cover K. The next step is to use the Johnson-Lindenstrauss dimension reduction lemma to find a (linear) mapping a : S n → R 26n , for which
(1/2)‖f_i − f_j‖_H ≤ ‖a(f_i) − a(f_j)‖ ≤ ‖f_i − f_j‖_H, i, j = 1, ..., 2^n.
According to the Kirszbraun extension theorem, see Theorem 1.12 from (Benyamini & Lindenstrauss 2000), the mapping a can be extended from S n to the whole of H, preserving the Lipschitz constant 1. The last step is to define M on a(f j ), j = 1, . . . , 2 n , as M (a(f j )) = f j , j = 1, . . . , 2 n . Clearly
‖M(a(f_i)) − M(a(f_j))‖_H = ‖f_i − f_j‖_H ≤ 2‖a(f_i) − a(f_j)‖,
and therefore M is a Lipschitz map with Lipschitz constant 2 when restricted to the finite set a(S n ). Again, according to the Kirszbraun extension theorem, we can extend M to a Lipschitz map on the whole of R 26n with the same constant 2. It is now easy to see that the approximation operator A := M • a gives the desired approximation performance since, for f ∈ K and a suitable choice of j, we have
‖f − A(f)‖_H ≤ ‖f − f_j‖_H + ‖M(a(f_j)) − M(a(f))‖_H ≤ ε + 2‖a(f_j) − a(f)‖ ≤ ε + 2‖f_j − f‖_H ≤ 3ε.
Therefore, we have proved (5.9). Let us remark, however, that A is not very constructive and that it is generally difficult to create Lipschitz mappings a, M that achieve the optimal performance in stable manifold widths.
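The Johnson-Lindenstrauss step can be sketched numerically: a random Gaussian map a(f) = Gf/√m from R^D to R^m approximately preserves all pairwise distances among N points, which is what makes a (and its inverse on the net, extended by Kirszbraun) Lipschitz with modest constants. The point counts, dimensions, and seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, m = 50, 1000, 300
points = rng.standard_normal((N, D))             # stand-in for the eps-net
G = rng.standard_normal((m, D)) / np.sqrt(m)     # random Gaussian projection
images = points @ G.T                            # a(f_i) in R^m

# ratio of projected to original distance, over all pairs
ratios = []
for i in range(N):
    for j in range(i + 1, N):
        d_hi = np.linalg.norm(points[i] - points[j])
        d_lo = np.linalg.norm(images[i] - images[j])
        ratios.append(d_lo / d_hi)
print(min(ratios), max(ratios))                  # all ratios close to 1
```

With high probability all ratios lie in a narrow band around 1, matching the two-sided distance bounds used in the proof (there, the band [1/2, 1] after rescaling).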

Weaker measures of stability
It may be argued that requiring Lipschitz stability is too strong a requirement. Recall that Lipschitz stability is just a sufficient condition to guarantee the stability properties (S1)-(S2) that we want. In this direction, we mention that (S1)-(S2) will hold if a, M satisfy the following weaker properties, with · Y some fixed norm on R n :
(SP1) The mapping A := M • a, A : K → X, is Lipschitz. We can even weaken this further to requiring only ‖A(f) − A(g)‖_X ≤ C‖f − g‖_X^α, f, g ∈ K, for some α ∈ (0, 1]. This is known as Lip α stability.
(SP2) The mapping M : R n → X is Lipschitz, or more generally, Lip α with respect to · Y on R n .
While not directly needed for stability, the following property will play a role in our further discussions.
(SP3) The mapping a : K → R n is bounded with respect to · Y on R n . This property limits the search over parameter space.
It turns out that if the mappings a, M satisfy the weaker assumptions (SP1)-(SP3), then one can still prove a version of Carl's inequality, and thus, we still have limitations on the performance of these approximation methods in terms of entropy number lower bounds. Let us briefly indicate how lower bounds for performance are proved when the mappings a n , M n satisfy (SP1)-(SP3). For notational simplicity only, we take α = 1, the Lipschitz constant of both M n and M n • a n to be γ, and the image of K under a n to be contained in the unit ball of R n with respect to · Y .
We fix ε > 0 and let a n : K → R n , M n : R n → X, satisfy (SP1)-(SP3) and approximate the elements of K with the accuracy E an,Mn (K) X ≤ ε/3. (5.10) We now show that (5.10) implies a bound on the entropy of K. Let us denote by Pack ε := {f 1 , . . . , f Pε(K) } a maximal ε-packing of K, that is, a collection of points {f i } ⊂ K, with min i≠j f i − f j X > ε, whose size is maximal among all such collections. Now, define y i := a n (f i ), g i := (M n • a n ) (f i ) = M n (y i ), i = 1, . . . , P ε (K).
It follows that for i ≠ j, i, j = 1, . . . , P ε (K),
‖g_i − g_j‖_X ≥ ‖f_i − f_j‖_X − ‖f_i − g_i‖_X − ‖f_j − g_j‖_X > ε − 2ε/3 = ε/3,
where we used (5.10). Since M n is γ Lipschitz, we obtain
‖y_i − y_j‖_Y ≥ γ^{−1}‖g_i − g_j‖_X > ε/(3γ).
In other words, since y i Y = a n (f i ) Y ≤ 1, the collection y 1 , . . . , y Pε(K) is an ε 3γ -packing of the unit ball in R n . Well known volumetric considerations show that such a packing can have at most (1 + 6γε −1 ) n elements, and therefore P ε (K) ≤ (1 + 6γε −1 ) n . Now, since the balls of radius ε centered at the f i are a covering of K, we have that
N ε (K) ≤ P ε (K) ≤ (1 + 6γε −1 ) n = 2^{n log_2(1+6γε^{−1})}. (5.11)
For example, the above derivation shows that whenever there are mappings a n , M n satisfying (SP1)-(SP3), then we have the Carl-type inequality
E an,Mn (K) X ≤ Cn −r , n ≥ 1 =⇒ ε n (K) X ≤ C′ n −r [log 2 n] r , n ≥ 1.
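The volumetric consideration invoked above is the standard one: around each point of an η-packing of the unit Euclidean ball in R^n place a ball of radius η/2; these balls are disjoint and fit inside a ball of radius 1 + η/2, so the packing has at most (1 + 2/η)^n points. A quick numerical check (the greedy construction and parameters are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta = 2, 0.5
packing = []
for _ in range(20000):                       # random candidates in the ball
    cand = rng.uniform(-1.0, 1.0, n)
    if np.linalg.norm(cand) <= 1.0 and all(
        np.linalg.norm(cand - p) > eta for p in packing
    ):
        packing.append(cand)                 # keep only eta-separated points

volumetric_bound = (1.0 + 2.0 / eta) ** n    # = 25 for n = 2, eta = 0.5
print(len(packing), volumetric_bound)        # packing size <= bound, always
```

With η = ε/(3γ) this is exactly the bound (1 + 6γε^{−1})^n used in the derivation.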

Optimal performance for classical model classes described by smoothness
Although the definition of manifold widths places very mild conditions on the mappings a, M , it still turns out that these conditions are sufficiently strong to restrict how fast δ n (K) X tends to zero for model classes described by classical smoothness conditions such as Sobolev or Besov regularity. For example, if B s q (L τ (Ω)), with Ω = [0, 1] d , is any Besov space that lies above the Sobolev embedding line for L p (Ω), then it is proven in (DeVore et al. 1989) that
δ_n(U(B^s_q(L_τ(Ω))))_{L_p(Ω)} ≍ ε_n(U(B^s_q(L_τ(Ω))))_{L_p(Ω)} ≍ n^{−s/d}, n > 0,
with the constants in this equivalence depending only on d.
It turns out that the decay rate O(n −s/d ) can be obtained by many methods of nonlinear approximation such as adaptive finite elements or n-term wavelet approximation. The main message for us is that even with this mild condition of imposing only continuity on the maps a, M , we cannot do better than the rate O(n −s/d ) for these classical smoothness classes when using manifold approximation. In particular, this holds for NN approximation with the restriction of continuity on the mappings a, M associated to the NNs.

VC dimension also limits approximation rates for model classes
The results we have given above provide lower bounds on how well a model class K can be approximated by a stable manifold approximation. If we remove the requirement of stability, it is still possible to give lower bounds on approximation rates for model classes if the approximation method (Σ n ) n≥0 is made up of sets Σ n which have limited VC dimension. For such results, one needs some additional assumptions on the model class K. We describe results of this type in this section.
Suppose K is a model class in L p (Ω) with 1 ≤ p ≤ ∞. A common technique in proving lower bounds on the Kolmogorov entropy or widths of K is to exhibit a function φ ∈ L p (Ω) with compact support for which the normalized dilate Φ(x) := Aφ(λx), x ∈ Ω, (5.13) is in K, provided A and λ are chosen appropriately. The function Φ is called a bump function. By choosing λ large, one concentrates the support of Φ, but of course this is at the expense of making A small in order to guarantee that the resulting Φ is in K. The small support of Φ guarantees that the shifted functions Φ i (·) = Φ(· − x (i) ), i = 1, . . . , N , are also in K and these functions have disjoint supports, provided N is not too large and the x (i) 's are suitably spaced out in Ω. It then follows that for any assignment of signs Λ := (ε 1 , ε 2 , . . . , ε N ), ε i = ±1, i = 1, . . . , N , the function
f_Λ := B ∑_{i=1}^{N} ε_i Φ_i (5.14)
is also in K for a proper choice of B. One then uses the rich family of functions f Λ , as Λ runs over the 2 N sign patterns, to show that the Kolmogorov entropy of K must be suitably large. This strategy can be used to bound from below how well a model class can be approximated by sets with limited VC dimension. For illustration, we consider the simplest example where K = U (C r (Ω)), with r being a positive integer, and measure approximation error in the norm · C(Ω) . If we approximate the functions in K by using a set F with V C(F) ≤ m, then we claim that there is a constant C = C(r, d) > 0 such that
dist(K, F)_{C(Ω)} ≥ Cm^{−r/d}. (5.15)
We prove this claim in the case Ω = [−1, 1] d . Consider a non-negative bump function φ ∈ C r (R d ) which vanishes outside [−1/2, 1/2] d and has norm φ C(Ω) = φ(0), see Figure 5.6. The dilated function Φ of (5.13) is in K if we choose A so that Aφ(0) + A|φ| C r (Ω) λ r = 1. The support of Φ is contained in a d dimensional cube centered at 0 with side-length λ −1 . So, if N = λ d , we can make the Φ i 's appearing in (5.14) have disjoint supports by taking the x (i) suitably separated.
Moreover, if B = 1, each of the f Λ is in K.
This argument can also be used to prove that there is a constant C > 0 depending only on s, such that for K = U (B s q (L ∞ (Ω))), with s > 0, 0 < q ≤ ∞, we have dist(K, F) C(Ω) ≥ Cm −s/d , (5.16) whenever the VC dimension of F is at most m. We leave the proof to the reader.
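The bump construction can be sketched in one dimension: N shifted copies of a scaled bump with disjoint supports, combined with all 2^N sign patterns, give 2^N functions that are pairwise at sup-norm distance 2A from one another, which is what forces the entropy and VC lower bounds. The bump shape, N = 4, A = 0.7, and the sampling grid are our illustrative choices, not taken from the survey.

```python
import numpy as np
from itertools import product

N, A = 4, 0.7
grid = np.linspace(0.0, 1.0, 161)            # grid hits the bump centers
phi = lambda z: np.maximum(0.0, 1.0 - np.abs(z)) ** 2   # C^1 bump on [-1,1]

# Phi_i is supported on [i/N, (i+1)/N]: the supports are pairwise disjoint
bumps = [A * phi(2 * N * (grid - (i + 0.5) / N)) for i in range(N)]

# all 2^N sign patterns Lambda and the functions f_Lambda (B = 1)
signs = list(product([-1.0, 1.0], repeat=N))
fs = [sum(e * b for e, b in zip(sig, bumps)) for sig in signs]

# any two distinct sign patterns disagree on some bump, where |diff| = 2A
dists = [np.max(np.abs(fs[i] - fs[j]))
         for i in range(len(fs)) for j in range(i + 1, len(fs))]
print(len(fs), min(dists))                   # 2^N functions, all 2A apart
```

Since the 2^N functions are mutually 2A-separated, any A-covering of the class containing them needs at least 2^N balls, i.e. the entropy is at least N bits, exactly as the argument above exploits.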
Let us now specialize to the case where F is the output of a ReLU network. With an eye towards our bounds on VC dimension for the spaces Υ W,1 (ReLU; d, 1), see Lemma 3.8, and Υ W 0 ,L (ReLU; d, 1), see Theorem 3.9, we obtain the following lower bounds for NN approximation of K = U (B s q (L ∞ (Ω))), for W ≥ 1 and d ≥ 2:
dist(K, Υ_{W,1}(ReLU; d, 1))_{C(Ω)} ≥ C(s, d)[W log_2 W]^{−s/d}, (5.17)
dist(K, Υ_{W_0,L}(ReLU; d, 1))_{C(Ω)} ≥ C(s, d, W_0) L^{−2s/d}. (5.18)
The logarithm in (5.17) can be removed when d = 1.
Notice that in the case of (5.17), the lower bound can be stated as C(s, d)[n log 2 n] −s/d , where n = n(W, 1) is the number of parameters used to describe Υ W,1 . Thus, in this case, save for the logarithm, we cannot achieve any better approximation rates than those obtained by traditional linear methods of approximation. We discuss later in §7 what rates have been proved in the literature for one layer networks.
In the case of (5.18), the lower bound is of the form C(s, d)n −2s/d , where n = n(W 0 , L) is the number of parameters used to describe the space Υ W 0 ,L . The factor 2 in the exponent leaves open the possibility of much improved approximation rates (when compared with classical methods) when using deep networks. We shall show in §8.7 that these rates of approximation are attained.
We close this section by mentioning that the use of VC dimension to bound approximation rates from below seems to be restricted to the case when approximation error is measured in the norm · C(Ω) . This makes one wonder if there is a concept analogous to VC dimension suitable for L p approximation when p < ∞.

Another measure of optimal performance: approximation classes
There is another important way to measure the performance of an approximation method Σ = (Σ n ) n≥0 by looking at the set of all functions which have a given approximation rate as n → ∞. Let λ = (λ n ) n≥0 be a sequence of positive real numbers which decrease monotonically to zero. We define A(λ) := A(λ, Σ) := {f ∈ X : ∃Λ > 0 E(f, Σ n ) X ≤ Λλ n , ∀n ≥ 0}, (5.19) and further define f A(λ) as the smallest number Λ for which (5.19) holds. The larger this set is, the better the approximation method Σ is.
The case when λ n := (n + 1) −r is the most often studied since it corresponds to the rates most often encountered in numerical scenarios. In this case, A(λ) is usually denoted by A r = A r (Σ) = A r (Σ, X).
A major chapter in approximation theory is to characterize the approximation classes A r for a given approximation method. The main theorems of approximation theory characterize A r for polynomial and spline approximation. Such characterizations are also known for some methods of nonlinear approximation.
As we shall see, we are far from understanding the approximation classes A r for NN approximation. However, some useful results on the structure of these classes can be found in (Gribonval, Kutyniok, Nielsen & Voigtlaender 2019).

Approximation using ReLU networks: overview
As we have already noted, the collection Υ W,L = Υ W,L (ReLU; d, 1) of outputs of a ReLU network with width W , depth L, and input dimension d is a nonlinear set of CPwL functions determined by n(W, L) parameters. Our interest in the next few sections is to summarize the approximation power of Υ W,L . In the process of analyzing this, we shall not address the question of whether there is a practical stable algorithm to produce the approximation, an issue we will discuss in §9.
One of the impediments to giving a coherent presentation of the approximation properties of the outputs of neural networks, as the number of parameters increases, is the great variety of possible architectures of the networks. Namely, when examining the approximation efficiency, we can fix W and let L change, or fix L and let W change, or let both change simultaneously. We can also vary the architecture by allowing full connectivity or sparse connectivity between layers. We may also impose further structure on the weight matrices, leading, for example, to convolution networks. Moreover, we can as well consider a variety of activation functions σ.
While each such setting is of interest, we primarily concentrate on two cases of ReLU networks. The first is the case that most closely matches classical approximation, the set Υ W,1 as W → ∞. We shall see that even this case is not completely understood. At the other extreme is the case when we take the width W to be some fixed constant W 0 and let L → ∞. This is a most illuminating setting in that we shall see a dramatic gain in approximation efficiency when the depth L is allowed to grow. This is commonly referred to as the power of depth.
To provide a unified notational platform, we use Σ_n for the set Υ_{W,L} under consideration, where n is comparable to the number of parameters being used. For example, we can take Σ_n = Υ_{n,1} or Σ_n = Υ_{W_0,n}, since both of these sets depend on a number of parameters proportional to n. Our goal is to understand how the family Σ := (Σ_n)_{n≥0} performs as an approximation tool.
In what follows in this section, we consider the set Υ_{W,L} restricted to the domain Ω := [0, 1]^d. Recall that each function S ∈ Υ_{W,L} is the output of a neural network with at most n(W, L) = (d+1)W + W(W+1)(L−1) + (W+1) parameters. We consider the error of approximation to be measured in an L_p(Ω) norm, 1 ≤ p ≤ ∞. Therefore, for f ∈ L_p(Ω), we are interested in the error of approximation when Σ_n is one of the nonlinear sets Υ_{W,L} and n ≍ n(W, L). In the case p = ∞, we assume that f is continuous and the error is measured in the ‖·‖_{C(Ω)} norm, and so the results hold uniformly in x ∈ Ω. Note that using L_p(Ω), 1 ≤ p ≤ ∞, norms to measure error does not match the usual measures of performance of classification algorithms, where the main criterion is the probability or expectation of misclassification, see (Bousquet, Boucheron & Lugosi 2005). This is an important distinction that we unfortunately will not address because of a lack of definitive results. It may be that this distinction is in fact behind the success of NNs in the learning environment.
The results we prove can be extended to approximation in L p (Ω) for 0 < p < 1, but this requires some technical effort we want to avoid. We concentrate on the three most important cases p = ∞ (the case of uniform approximation), the case p = 2 which is prevalent in stochastic estimates, and the case p = 1 which monitors average error. We always take the L p (Ω) spaces with Lebesgue measure. Let us also remark that the results we derive hold equally well for general Lipschitz domains taken in place of Ω = [0, 1] d . If we fix the value of p, the results we seek are of the following two types.

Model class performance: For a model class K ⊂ L_p(Ω), determine the error in approximating the class,
E_n(K, Σ)_{L_p(Ω)} := sup_{f∈K} E(f, Σ_n)_{L_p(Ω)}.
Our interest is to describe the decay of this error (with estimates from above and below) as n → ∞.
There are two types of model classes K that are of interest. The first are classical smoothness classes, such as the unit balls of Lipschitz, Hölder, Sobolev, or Besov spaces, see §4. In this way, we can compare the approximation properties of NNs with more standard methods of approximation and see whether NNs offer better performance on these classical model classes.
A second type of results of interest is to uncover new model classes K for which NNs perform well and classical methods of approximation do not. Such new model classes would help clarify exactly when NN approximation is beneficial. Motivation for these new model classes should come from the intended application of NN approximation. Such results might explain why NNs perform well in these applications.

Characterization of Approximation Classes: A second category of results that is of interest would be to understand the approximation classes
A^r(Σ, L_p(Ω)) for NN approximation. Recall that these classes, see §5.10 for their definition, consist of all functions f whose approximation error satisfies
E(f, Σ_n)_{L_p(Ω)} ≤ M(n + 1)^{−r}, n ≥ 0, (6.1)
with the smallest such M defining ‖f‖_{A^r}. We would like to know which functions are in A^r. While a precise characterization of these classes is beyond our current understanding of NN approximation, the results that follow give sufficient conditions for a function f to be in such a class. In contrast, for many types of classical approximation, both linear and nonlinear, there are characterizations of their corresponding approximation classes. Such characterizations require what are called inverse theorems in approximation theory. An inverse theorem is a statement that whenever f ∈ A^r, we can prove that f is in a certain Banach space Y_r.
Consider, for example, the case of approximation in L_p(Ω). An inverse theorem is proved by showing an inequality of the form
|S|_{Y_r} ≤ C(n + 1)^r ‖S‖_{L_p(Ω)}, S ∈ Σ_n, n ≥ 0.
For example, if we consider approximation by trigonometric polynomials of degree n in one variable, in the metric L_p([−π, π]), one inequality of this type is the famous Bernstein inequality
‖T′‖_{L_p([−π,π])} ≤ n ‖T‖_{L_p([−π,π])},
which holds for any trigonometric polynomial T of degree n. So r = 1 in this example, and Y_1 = W^1(L_p([−π, π])).
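A quick numerical sanity check of the Bernstein inequality ‖T′‖_C ≤ n‖T‖_C is easy to run; the degree and the coefficients below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical sanity check of the Bernstein inequality ||T'|| <= n ||T||
# on [-pi, pi]; the degree and coefficients of T are arbitrary examples.
rng = np.random.default_rng(0)
n = 5                                   # degree of the trigonometric polynomial
a = rng.standard_normal(n + 1)          # cosine coefficients a_0, ..., a_n
b = rng.standard_normal(n + 1)          # sine coefficients (b_0 plays no role)

t = np.linspace(-np.pi, np.pi, 200001)  # fine grid on [-pi, pi]
k = np.arange(n + 1)[:, None]
T = (a[:, None] * np.cos(k * t) + b[:, None] * np.sin(k * t)).sum(axis=0)
Tp = (k * (b[:, None] * np.cos(k * t) - a[:, None] * np.sin(k * t))).sum(axis=0)

ratio = float(np.max(np.abs(Tp)) / np.max(np.abs(T)))
print(ratio, "<=", n)                   # Bernstein guarantees the ratio is at most n
```

The same check can be repeated for the L_p norms by replacing the max by a discrete p-norm.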
Such inverse theorems are not known for NN approximation save for the case of Σ = (Υ n,1 (σ; 1, 1)) n≥0 for certain activation functions σ, including ReLU. Thus, there is quite a large gap in our understanding of NN approximation as compared to these more classical methods. It is of major interest to establish inverse inequalities for the elements in Σ n when Σ n is a set of outputs of a NN.

Approximation using single layer ReLU networks
In this section, we study approximation on the domain Ω := [0, 1] d by the family Σ := (Σ n ) n≥0 , where Σ 0 := {0} and Σ n := Υ n,1 (ReLU; d, 1), n = 1, 2 . . . , with input dimension d ≥ 1. These sets are rarely used in numerical settings since deeper networks are preferred. However, for theoretical reasons, it is important to understand their approximation properties in order to see the advantages of the deeper networks studied later in this paper. Much of the activity on NN approximation has been directed at understanding the approximation properties of these single hidden layer networks. Surprisingly, we shall see that most fundamental questions about approximation using Σ are not yet answered.
We have discussed in §3.2.1 the structure of Σ_n. Each function S ∈ Σ_n is a CPwL function in d variables x = (x_1, …, x_d) of the form
S(x) = c + Σ_{j=1}^n η_j(x), η_j(x) := c_j ReLU(w_j · x + b_j), (7.1)
where η_j is linear on the half space H^+_j := {x : w_j · x + b_j > 0} and is zero otherwise. Note that S ∈ Σ_n is a CPwL function subordinate to a hyperplane partition P of R^d into cells which are convex polytopes.
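To make the form of such an output concrete, here is a small sketch (with arbitrary illustrative weights) that evaluates S(x) = c_0 + Σ_j c_j ReLU(w_j · x + b_j) and checks that S is affine on any segment lying inside a single cell of the hyperplane partition.

```python
import numpy as np

# Sketch: evaluate an output S of a single hidden layer ReLU network,
#   S(x) = c0 + sum_j c_j ReLU(w_j . x + b_j),
# with arbitrary illustrative weights, and verify that S is affine on a
# segment whose endpoints lie in the same cell of the hyperplane partition.
rng = np.random.default_rng(1)
d, n = 3, 8
W = rng.standard_normal((n, d))   # rows are the directions w_j
b = rng.standard_normal(n)        # biases b_j
c = rng.standard_normal(n)        # outer weights c_j
c0 = 0.5

def S(x):
    x = np.atleast_2d(x)
    return (c0 + np.maximum(x @ W.T + b, 0.0) @ c).squeeze()

x0 = rng.standard_normal(d)
v = rng.standard_normal(d)
eps = 1.0
# shrink the segment [x0, x0 + eps*v] until both endpoints share the same
# sign pattern, i.e. the segment lies in a single (convex) cell
while not np.array_equal((x0 @ W.T + b) > 0, ((x0 + eps * v) @ W.T + b) > 0):
    eps /= 2
x1 = x0 + eps * v
gap = float(abs(S((x0 + x1) / 2) - (S(x0) + S(x1)) / 2))
print(gap)  # essentially zero: S is affine within a cell
```

Since each cell is an intersection of half spaces, it is convex, so the midpoint of the segment lies in the same cell as its endpoints.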
In spite of the simplicity of the representation (7.1), the set Σ n is quite complicated save for the case d = 1, see §3.1.1. First of all, the possible partitions P that arise from hyperplane arrangements are complex in the sense that the cells are not isotropic, the number of cells can be quite large, and there is not a simple characterization of these partitions. This is compounded by the fact that, as we have previously discussed, not every CPwL function subordinate to a partition given by an arrangement of n hyperplanes is in Σ n . For example, this set does not contain any compactly supported functions. This is in contrast to the typical applications of CPwL functions in numerical PDEs. Thus, Σ n is a complex, but possibly rich nonlinear family. We shall see that this complexity inhibits our understanding of its approximation properties. Keeping in mind the discussion in the previous section, there are three types of results that we would like to prove in order to understand the approximation power of Σ := (Σ n ) n≥0 , measured in the · Lp(Ω) norm, 1 ≤ p ≤ ∞.
Problem 3: Give matching upper and lower bounds for E_n(K, Σ)_{L_p(Ω)} when K is one of the classical model classes, such as the unit balls of Lipschitz, Hölder, Sobolev, and Besov spaces.
We shall see that, save for the case d = 1, this problem is far from being solved.
As we have previously stressed, the partitions generated by hyperplane arrangements are complex and not well understood, with cells that are possibly highly anisotropic. This suggests the possibility of being able to approximate functions which are not described by classical isotropic smoothness and leads us to expect new model classes that are well approximated by Σ.
Problem 4: Describe new model classes K of functions that are guaranteed to be well approximated by Σ.
Some advances on Problem 4 have been made, centering on the so-called Barron classes that we discuss in §7.2.3.
Finally, the most ambitious approximation problem for Σ = (Σ n ) n≥0 is the following.
Problem 5: For each r > 0 and 1 ≤ p ≤ ∞, characterize the approximation class A^r(Σ, L_p(Ω)) consisting of all functions f ∈ L_p(Ω) for which
E(f, Σ_n)_{L_p(Ω)} ≤ M(n + 1)^{−r}, n ≥ 0.
Nothing is known about this last problem when d > 1, and we are skeptical that any definitive result is around the corner for the case of general d.
In order to orient us to the type of results we might strive to obtain on these problems for general d, we begin in the next section by discussing the case d = 1, where we have the most extensive results and the best understanding of approximation from these spaces.

Approximation by single layer networks when d = 1
We begin by discussing the case d = 1 not only because it is the best understood, but also because it can orient the reader as to what we can possibly expect when engaging the case d > 1. Because approximation by Υ n,1 (ReLU; 1, 1) is essentially the same as free-knot linear spline approximation, results for the NN approximation are derived from the known results on free-knot splines. The latter are well explained in (DeVore 1998) and the literature cited therein, and summarized below.

Approximation of classical model classes when d = 1
Here, we measure approximation error in L_p(Ω) with 1 ≤ p ≤ ∞ and domain Ω = [0, 1]. The classical model classes for L_p(Ω) are finite balls in the Lipschitz, Hölder, Sobolev, and Besov spaces. The latter spaces are the most flexible for measuring smoothness, and the approximation properties for all of the other smoothness classes can be derived from them. So, we restrict our discussion to the model classes K = U(B^s_q(L_τ(Ω))), 0 < q, τ ≤ ∞, which have smoothness of order s > 0. These spaces were introduced and discussed in §4.3, where we have noted that they are compactly embedded in L_p(Ω) when s > 1/τ − 1/p, i.e., when these spaces lie above the Sobolev embedding line, see Figure 4.4. They are not embedded in L_p(Ω) if they lie below the embedding line.
The following theorem summarizes the results known about approximating Besov classes in the case d = 1.
Theorem 7.1. Let K = U(B^s_q(L_τ(Ω))) be the unit ball of the Besov space B^s_q(L_τ(Ω)). If 0 < s ≤ 2 and this space lies above the Sobolev embedding line for L_p(Ω), then
E_n(K, Σ)_p ≤ C(s, p, τ)(n + 1)^{−s}, n ≥ 0. (7.2)
Let us elaborate a little on what this theorem is saying. First, note that the sets K for which we obtain the approximation rate O((n + 1)^{−s}) allow the smoothness describing K to be measured in L_τ(Ω), where τ ≠ p is permitted. When τ ≥ p, the result does not need to exploit the nonlinearity of Σ_n, in the sense that the approximation rate can be obtained already by using linear spaces corresponding to fixing the breakpoints in Σ_n to be equally spaced on [0, 1]. It is only when τ < p that we need to exploit nonlinearity.
A couple of simple examples may be in order. Consider approximation in C(Ω) and smoothness of order s = 1. Obviously, the space Lip 1 is compactly embedded in C(Ω) and the approximation rate is O((n + 1)^{−1}), n → ∞, when K = U(Lip 1). Note that Lip 1 is not a Besov space, but it is continuously embedded in B^1_∞(L_∞(Ω)), and the latter space is covered by the theorem; hence Lip 1 is as well. We can obtain the approximation rate O((n + 1)^{−1}) by taking the breakpoints equally spaced, and thereby using a linear subspace of Σ_n. The Sobolev space W^1(L_1(Ω)) is also contained in C(Ω), but not compactly. Nevertheless, its unit ball has the approximation rate O((n + 1)^{−1}). The Sobolev spaces W^1(L_p(Ω)), p > 1, have unit balls that are compact in C(Ω), and the theorem gives that they also have the approximation rate O((n + 1)^{−1}), n → ∞. Recall that for f to be in Lip 1 requires that it has a bounded derivative, ‖f′‖_{L_∞(Ω)} < ∞, while f ∈ W^1(L_p(Ω)) only requires f′ ∈ L_p(Ω). For example, the function f(t) = t^α, 0 < α < 1, is in W^1(L_p(Ω)) if p > 1 is small enough, but this function is not in Lip 1. The way one gets good approximation of t^α by Σ_n is to put half of the breakpoints of the output S ∈ Σ_n near 0 and the remaining half equally spaced in Ω. Thus, for these Sobolev spaces one truly needs the nonlinearity of Σ_n. To achieve the optimal approximation rate, we need to choose the breakpoints to depend on f, and thus we cannot choose them in advance.
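The effect of choosing the breakpoints to depend on f can be seen numerically. The sketch below (our own illustration, not from the text) interpolates f(t) = √t with uniformly spaced knots versus knots graded toward the singularity at 0.

```python
import numpy as np

# Sketch (our illustration): CPwL interpolation of f(t) = sqrt(t) on [0, 1]
# with n+1 breakpoints, comparing equally spaced knots with knots graded
# toward the singularity at 0 via t_j = (j/n)^2. The graded (f-dependent)
# choice achieves the rate O(1/n) in C[0, 1]; uniform knots only O(n^{-1/2}).
def sup_error(knots, m=200001):
    t = np.linspace(0.0, 1.0, m)
    interp = np.interp(t, knots, np.sqrt(knots))  # CPwL interpolant of sqrt
    return float(np.max(np.abs(np.sqrt(t) - interp)))

n = 64
e_unif = sup_error(np.linspace(0.0, 1.0, n + 1))
e_grad = sup_error((np.arange(n + 1) / n) ** 2)
print(e_unif)  # ~ 1/(4 sqrt(n)): the error is dominated by the interval [0, 1/n]
print(e_grad)  # ~ 1/(4 n): the error is roughly equidistributed over the intervals
```

The graded mesh equidistributes the local interpolation error, which is exactly the mechanism behind free-knot spline (and hence single layer ReLU) approximation of functions with weak singularities.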
Finally, let us remark why we have the restriction s ≤ 2. We are approximating locally by linear functions. A function f with smoothness of order s > 2 would need to use locally polynomials of degree higher than one to improve its local error of approximation (think of Taylor expansions). Hence, when f has smoothness of order s > 2, we do not improve on the rate O((n + 1) −2 ), n → ∞, which we already have for functions with smoothness of order 2.

Approximation classes for d = 1
One of the crowning achievements of nonlinear approximation at the end of the last century was the characterization of the approximation classes for several classical methods of nonlinear approximation, including free-knot spline, n-term wavelet, and adaptive piecewise polynomial approximation.
The key to establishing these results was not only to give upper bounds for the error in approximating functions from Besov spaces but also to prove certain inverse theorems that say if a function f can be approximated with a certain rate O((n + 1) −r ), n → ∞, then f must possess a certain Besov smoothness. These inverse theorems should not be underestimated since they allow precise characterization of approximation classes.
In the case of approximation using CPwL functions, the inverse theorems were provided by the seminal theorems of Pencho Petrushev, see (Petrushev 1988). The approximation space A^r = A^r(Σ, L_p(Ω)) is precisely characterized, provided 0 < r < 2 and 1 ≤ p ≤ ∞, with C(Ω) used in place of L_∞(Ω) when p = ∞. In this case, A^r is a certain interpolation space, see (DeVore 1998). Since we do not want to go too deeply into interpolation space theory here, we simply mention that A^r is sandwiched between two Besov spaces of smoothness order r. More precisely, if 0 < r < 2 and 1 ≤ p ≤ ∞ are fixed, and τ* := (r + 1/p)^{−1}, then for all 0 < q ≤ ∞ and all τ* < τ, we have
B^r_q(L_τ(Ω)) ⊂ A^r(Σ, L_p(Ω)) ⊂ B^r_q(L_{τ*}(Ω)). (7.3)
Since this result may be difficult to digest at first glance, we make some comments to explain what these embeddings say. First, recall the relation of Besov spaces to the Sobolev embedding line, see Figure 4.4. For a fixed value of r, all spaces B^r_q(L_τ(Ω)) appearing on the left side of the embedding (7.3) are compactly embedded in the space L_p(Ω), where we are measuring error. The left embedding says that any function in one of these spaces is in A^r, and hence has approximation error decaying at the rate O((n + 1)^{−r}). Note that these spaces get larger as we approach the embedding line. The right embedding says that we cannot allow τ to be smaller than τ*; in fact, if τ is smaller than τ*, we do not even embed into L_p(Ω). Besov spaces that appear on the embedding line itself may or may not be compactly embedded in L_p(Ω), depending on q. They are compactly embedded if q is small enough.
Finally, we remark that we have the characterization of A^r only if r < 2, for the same reason we had the restriction s ≤ 2 when discussing approximation of classical model classes in the previous section. Going a little further, note that if S_n ∈ Σ_n, n ≥ 1, then S_n ∈ A^r for all r > 0, but S_n is not in any smoothness space of order s > 2. Moreover, any function f = Σ_{k≥1} α_k S_k, α_k ∈ R, will be in A^r, 0 < r < ∞, if (α_k)_{k≥1} tends to zero sufficiently fast. Yet, f will not have any classical smoothness of order s > 2. So A^r, r > 2, cannot be characterized by classical smoothness such as membership in a Besov space.

Results for d ≥ 2
Continuing with one hidden layer networks, let us now consider the case d ≥ 2. The difficulty in constructing effective approximations in this case is the fact that when d ≥ 2, the set of NN outputs Σ n = Υ n,1 (ReLU; d, 1) does not have locally supported nodal functions that are commonly used to build approximants. So, approximation methods are built on global constructions. It is not surprising therefore, that the strongest results are known in the case where the approximation is measured in the L 2 (Ω) norm, where orthogonality can be employed in the constructions. We discuss the L 2 approximation first.
7.2.1. Approximation in L_2(Ω)
For approximation in X = L_2(Ω), it is known that when f ∈ W^s(L_2(Ω)), we have
E_n(f, Σ)_{L_2(Ω)} ≤ C n^{−s/d} ‖f‖_{W^s(L_2(Ω))}, n ≥ 1, (7.4)
provided s ≤ 2 + (d − 1)/2. The case d = 2 is given in (DeVore, Oskolkov & Petrushev 1997), and the general case is considered in (Petrushev 1998). We give a very coarse description of the ideas behind proving (7.4). In this discussion, it is useful to work with functions defined on the unit Euclidean ball Ω* ⊂ R^d rather than on the cube [0, 1]^d. One can move between these different domains via restriction and extension operators which are known to preserve Sobolev and Besov regularity. When g is a univariate function, then g(a · x), a ∈ R^d, is a ridge function of d variables (sometimes called a planar wave). If X_n is a linear space of dimension n of univariate functions and Λ is a fixed subset of m unit vectors in R^d, then the set of functions Y_{n,m} := span{g(a · x) : g ∈ X_n, a ∈ Λ} is a linear space of dimension at most mn.
The core of the proof of (7.4) is to show that if X_n is effective in approximating univariate functions in L_2, and if the set Λ is 'uniformly distributed' on the unit sphere in R^d (the boundary of the unit ball of R^d), then Y_{n,m} will provide an approximation to W^s(L_2(Ω*)) functions when choosing m = O(n^{d−1}). We can take X_n to be the space of univariate CPwL functions on an equally spaced partition. The resulting space Y_{n,m} is contained in Υ_{mn,1}(ReLU; d, 1), and thereby proves (7.4). The proofs of these results are quite elaborate and technical.
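The flavor of the linear space Y_{n,m} can be illustrated numerically. The sketch below (our own illustration, with arbitrary sizes and target) assembles ridge CPwL basis functions for d = 2 and fits a smooth function by least squares in a discrete L_2 norm.

```python
import numpy as np

# Sketch (our illustration, arbitrary sizes and target): the linear ridge space
# Y_{n,m} = span{ g(a.x) : g in X_n, a in Lambda } for d = 2, with X_n the
# CPwL functions on a uniform knot grid (hat-function basis) and Lambda a set
# of m equally spaced directions. We fit a smooth target by least squares.
rng = np.random.default_rng(2)
n, m = 20, 12
angles = np.pi * np.arange(m) / m                  # m directions (up to sign)
dirs = np.stack([np.cos(angles), np.sin(angles)])  # shape (2, m)
knots = np.linspace(-1.5, 1.5, n)                  # a.x stays in [-sqrt(2), sqrt(2)]
h = knots[1] - knots[0]

def hat(t, i):
    # univariate hat function centered at knots[i]
    return np.maximum(0.0, 1.0 - np.abs(t - knots[i]) / h)

X = rng.uniform(-1, 1, size=(4000, 2))             # sample points in [-1, 1]^2
f = np.exp(-(X ** 2).sum(axis=1))                  # a smooth (radial) target

# design matrix: one column per (direction, hat) pair, so dim(Y_{n,m}) <= n*m
A = np.stack([hat(X @ dirs[:, j], i) for j in range(m) for i in range(n)], axis=1)
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
err = float(np.sqrt(np.mean((A @ coef - f) ** 2)))
print(err)  # small: the smooth target is well captured by sums of ridge CPwL pieces
```

Each basis function hat(a·x) is itself the output of a few ReLU units applied to the ridge variable a·x, which is how Y_{n,m} sits inside Υ_{mn,1}(ReLU; 2, 1).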
If we wish to characterize the approximation performance of Σ on the model class K := U(W^s(L_2(Ω*))), then we would need to establish lower bounds for the approximation error E_n(K)_{L_2(Ω*)} that match those of (7.4). Such bounds are plausible but seem not to be known. However, there are lower bounds for approximating K by general ridge functions given in (Maiorov 1999), which give, for our setting and d ≥ 2, the lower bound
E_n(K, Σ)_{L_2(Ω*)} ≥ C n^{−s/(d−1)},
where C depends only on d.
While the results given above are less than satisfactory, because of the lack of matching upper and lower bounds, the situation becomes even worse when we seek results that show the benefits of the nonlinear structure of the sets Σ_n, n ≥ 1. As we know from the case d = 1, nonlinear methods of approximation should allow smoothness to be measured in the weaker L_τ(Ω) norms while retaining the same approximation order. Namely, the question is what are the approximation rates when K is the unit ball of a Besov space B^s_q(L_τ(Ω)) that is above the Sobolev embedding line for L_2(Ω). In contrast to the case d = 1, we do not know results that quantify the performance of Σ for the Besov spaces that compactly embed into L_2.
7.2.2. Approximation in L_p(Ω), p ≠ 2
When we consider approximation in L_p(Ω), p ≠ 2, we are only aware of results for p = ∞ given in (Bach 2017). These are only stated for the unit ball K of Lip 1 with approximation error measured in the norm of C(Ω), and take the form
E_n(K, Σ)_{C(Ω)} ≤ C (log_2 n) n^{−1/d}, n ≥ 1.
Since we are now considering approximation in the space C(Ω), we can use the known upper bounds for the VC dimension of Υ_{n,1}(ReLU; d, 1) to derive lower bounds on the approximation error for K. If we apply Lemma 3.8 and employ arguments similar to those we used to prove (5.15), we obtain
E_n(K, Σ)_{C(Ω)} ≥ C(d)[n log_2 n]^{−1/d}, n ≥ 1.
In other words, modulo logarithms, the approximation rate of K is n^{−1/d}. It is of interest to remove these log terms.
We can derive bounds on the approximation rates for the model classes K_α := U(Lip α), 0 < α < 1, from the known Lip 1 bound by using interpolation theory, see Extend 1 in §4.4, which gives the bound
E_n(K_α, Σ)_{C(Ω)} ≤ C ((log_2 n) n^{−1/d})^α, n ≥ 1.
One expects that these results also extend to error estimates for approximation by Σ_n of the unit balls of the smoothness spaces B^s_q(L_∞(Ω)) for some range of s larger than one. However, these do not seem to be found in the literature. Equally missing are results for approximation in L_p(Ω) when p ≠ 2, ∞. Moreover, none of the known results reflect the expected gain from the fact that Σ is a nonlinear method of approximation.

Novel model classes for single layer approximation
As we have noted earlier, there is much interest in identifying new model classes for which NN approximation is particularly effective. One celebrated model class of this type was introduced by Andrew Barron in (Barron 1994). This model class and its corresponding approximation results are nicely explained in the exposition (Pinkus 1999). The most recent results on NN approximation of this class of functions can be found in (Siegel & Xu 2020) and (Klusowski & Barron 2018). We limit ourselves to describing how these model classes fit into the themes of this article.
Given any domain Ω ⊂ R^d, Barron introduced the model class K = K_Ω consisting of all functions f ∈ L_2(Ω) which have an extension to all of R^d (still denoted by f) whose Fourier transform f̂ satisfies
∫_{R^d} |ω| |f̂(ω)| dω ≤ 1. (7.5)
Notice that (7.5) imposes additional conditions over just requiring that f is square integrable. Namely, (7.5) requires the decay of f̂(ω) as the frequency ω gets large. It is easy to check that this is equivalent to requiring that f has a gradient (in the weak sense) whose Fourier transform is in L_1. Barron initially showed that for any sigmoidal activation function σ the approximation family Σ := (Υ_{n,1}(σ; d, 1))_{n≥1} approximates the model class K in the norm of L_2(Ω) with the following accuracy:
E_n(K_Ω, Σ)_{L_2(Ω)} ≤ C_Ω n^{−1/2}, n ≥ 1. (7.6)
This result was then shown to hold also for ReLU activation by using the fact that ReLU(t) − ReLU(t − 1) is a sigmoidal function. Barron's result has inspired many generalizations and applications, and even the introduction of new Banach spaces, see (E, Ma & Wu 2019). Important generalizations of (7.6) were given in (Makovoz 1996), where it was shown that the above result for the class K_Ω holds for approximation in L_q, 1 ≤ q < ∞, and moreover, the rate of approximation can be improved to O(n^{−1/2−1/(q*d)}), where q* is the smallest even integer ≥ q. Further improvements on approximation rates for Barron classes and their generalizations have been given through the years. We refer the reader to (Siegel & Xu 2020) for the latest information.
We will not dig too deeply into the known approximation rates for Barron classes and their generalizations here. Rather, we confine ourselves to some comments to properly frame these results in the context of nonlinear approximation. Let H be a Hilbert space. We say that a collection D := {φ} of functions from H is a dictionary if each φ has norm one and whenever φ ∈ D, then so is −φ. Given such a dictionary D, we consider the closed convex hull co(D) of D. A fundamental result in approximation theory is that whenever f ∈ co(D), then for each n there exists g = Σ_{k=1}^n c_k φ_k, with the φ_k ∈ D, such that
‖f − g‖_H ≤ C n^{−1/2}.
There is a constructive method to find such a g, known as the orthogonal greedy algorithm, see (DeVore & Temlyakov 1996).
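A minimal sketch of the orthogonal greedy algorithm in a discrete Hilbert space R^m follows; the dictionary and the target are synthetic illustrations of ours.

```python
import numpy as np

# Minimal sketch of the orthogonal greedy algorithm (OGA) in R^m: at each
# step, pick the dictionary element most correlated with the residual, then
# re-project the target onto the span of all selected elements. The random
# dictionary and the target below are illustrative.
rng = np.random.default_rng(3)
m, N = 200, 500
D = rng.standard_normal((N, m))
D /= np.linalg.norm(D, axis=1, keepdims=True)      # unit-norm elements
D = np.vstack([D, -D])                             # closed under negation

w = rng.random(40)
w /= w.sum()
f = w @ D[:40]                                     # f in the convex hull of D

selected, errors = [], []
residual = f.copy()
for step in range(20):
    k = int(np.argmax(D @ residual))               # best-correlated element
    selected.append(k)
    B = D[selected].T                              # m x (number selected)
    coef, *_ = np.linalg.lstsq(B, f, rcond=None)   # orthogonal projection
    residual = f - B @ coef
    errors.append(float(np.linalg.norm(residual)))

print(errors[0], errors[-1])  # the error is nonincreasing in the number of terms
```

Because each step projects onto a growing subspace, the residual norm cannot increase; the theory above quantifies the decay as O(n^{−1/2}) for targets in co(D).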
To derive (7.6) from this, it is enough to show that K Ω is contained in the convex hull of the dictionary of all functions cσ(w · x + b) with w ∈ R d , b ∈ R and c a suitable normalizing constant. The proof of this fact can be found in (Barron 1993) and (Pinkus 1999). The improvements of Makovoz rest on the fact that in the case of sigmoidal functions, the dictionary elements σ(w · x + b) are very close to one another when the parameters w and b change slightly and so one can reduce the number of terms needed in the approximation when seeking an error ε.
Notice that neither the constant C_Ω nor the form of the decay n^{−1/2} in (7.6) depends on d. This should be compared with approximation of Sobolev classes, where the rates deteriorate and the constants explode in size as d grows.
However, this must be viewed in the light that the condition for membership in K gets much stronger as d gets large. This class is analogous to requiring that f have a Fourier series (in d variables) whose coefficients are absolutely summable. Another important point is that the proof of (7.6) exploits nonlinear approximation since the n terms from the dictionary D used to approximate f are chosen to depend on f .

Approximation using deep ReLU networks
We now study in detail the approximation by the family Σ = (Σ_n)_{n≥0} of deep networks, with Σ_0 := {0} and Σ_n := Υ_{W_0,n}(ReLU; d, 1), n ≥ 1, where W_0 is fixed depending on d. The three main conclusions we uncover, following the order of our exposition, are:
• When error is measured in an L_p norm, 1 ≤ p ≤ ∞, deep NNs approximate functions in the classical model classes (such as Lipschitz, Hölder, Sobolev and Besov classes) at least as well as all of the known methods of nonlinear approximation, see §8.6.
• For all classical model classes, deep NN approximation gives error rates dramatically better than all other standard methods of nonlinear approximation, see §8.7.
• There are novel model classes, built on the ideas of self-similarity, where NNs provide approximation rates not available by standard approximation methods, see §8.10.

Results obtained from basic decompositions
In this section, we describe what is perhaps the most common method of obtaining estimates for deep NN approximation. It is based on two principles. The first is to show that the target function has a decomposition in terms of fundamental building blocks with a control on the coefficients in the decomposition. These building blocks could be wavelets or some of their mathematical cousins, such as shearlets or ridgelets, or they could be global representations such as power series or Fourier decompositions. For functions f in classical smoothness spaces, we often know the existence of such decompositions with quantifiable bounds on the coefficients of f. The second step is then to show that each of these building blocks can be captured very efficiently (usually with exponential accuracy) by deep networks. These two principles can then be put together in order to give quantifiable performance for approximation using deep NNs. This technique appears often in the literature. A partial list of prominent papers using this method is (Yarotsky 2017), (Opschoor, Schwab & Zech 2019), (Bölcskei, Grohs, Kutyniok & Petersen 2019), (Gühring, Raslan & Kutyniok 2020), (Grohs, Perekrestenko, Elbrächter & Bölcskei 2019), (Petersen & Voigtlaender 2018), (Petersen 2020), (Shen et al. 2019), (Lu, Shen, Yang & Zhang 2020).
We formalize the above mentioned procedure by considering any Banach space X and representing f ∈ X as f = Σ_{k≥1} α_k g_k, where the α_k's are scalars, g_k ∈ X, and ‖g_k‖_X = 1. Then, we can bound the error in approximating f by its partial sum by
‖f − Σ_{k=1}^n α_k g_k‖_X ≤ Σ_{k>n} |α_k|.
We can exploit this simple observation in the context of neural networks as follows. If g_k ∈ Υ_{W_0−(d+1),n}(ReLU; d, 1), 1 ≤ k ≤ n, then the partial sum Σ_{k=1}^n α_k g_k belongs to Σ_{n^2} = Υ_{W_0,n^2}(ReLU; d, 1), see Addition by increasing depth in §3.3.2, and hence
E(f, Σ_{n^2})_X ≤ Σ_{k>n} |α_k|. (8.1)
The bound (8.1) is quite crude and can be improved in many ways. For example, we can give better control on the depth needed when each g_k is a composition of the same univariate function T. We shall use this fact in what follows, and so we formulate it in the following proposition.
Proposition 8.1. If T ∈ Υ_{W_0−1,L_0}(ReLU; 1, 1), then any linear combination S = Σ_{i=1}^m α_i T^{∘i} belongs to Υ_{W_0,mL_0}(ReLU; 1, 1).
Proof: Let N be a neural network with width W_0 − 1 and depth L_0, with input and output dimension one, whose output function is T. We concatenate N with itself (m − 1) times to obtain the network N* of width W_0 − 1 and depth mL_0. Note that the kL_0-th layer of N* can output T^{∘k}. We add one collation channel to N*, whose nodes pass the value zero until layer L_0 + 1, where its node collects α_1 T. This value is then passed forward until layer 2L_0 + 1, where α_2 T^{∘2} is added, so that α_1 T + α_2 T^{∘2} is now held in the node of this channel for layers 2L_0 + 1, …, 3L_0. We continue in this way. Then, we output S from the mL_0-th layer.
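The collation-channel idea in this proof can be mimicked in a few lines: carry the current composition and a running sum through the layer blocks. The function T and the coefficients below are illustrative; this T is the output of a width-2 ReLU network, T(t) = ReLU(t) − ReLU(t − 1/2).

```python
import numpy as np

# Sketch of the collation idea in the proof above: compute S = sum_i a_i T^{oi}
# in one pass, carrying the current composition (one channel) and a running
# sum (the collation channel). T and the coefficients are illustrative.
T = lambda t: np.maximum(t, 0.0) - np.maximum(t - 0.5, 0.0)
alpha = [0.7, -0.3, 0.2]

def S(t):
    comp = np.asarray(t, dtype=float)
    coll = 0.0
    for a in alpha:
        comp = T(comp)            # advance the composition channel one block
        coll = coll + a * comp    # the collation channel accumulates a_i T^{oi}
    return coll

t = np.linspace(-1.0, 1.0, 5)
direct = alpha[0] * T(t) + alpha[1] * T(T(t)) + alpha[2] * T(T(T(t)))
gap = float(np.max(np.abs(S(t) - direct)))
print(gap)  # 0.0
```

The single pass never stores all the compositions at once, which is precisely why one extra channel (width +1) suffices in the network construction.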
In deriving an estimate like (8.1), it is not necessary to assume that the functions g k ∈ Σ n , k = 1, . . . , n, but merely that the g k 's are approximated sufficiently well by Σ n , as we see in the next proposition.
In this section, we shall also use the following proposition.
Proposition 8.2. Let ϕ ∈ Υ_{W_0,L_0}(ReLU; d, 1). Then, for any c_j ∈ R, d × d matrices A_j, and vectors b_j ∈ R^d, j = 1, …, n, the function
S(x) := Σ_{j=1}^n c_j ϕ(A_j x + b_j), x ∈ R^d,
belongs to Υ_{d+1+W_0,nL_0}(ReLU; d, 1).
Proof: Let N_0 be the network which outputs ϕ, and let us denote by A* the W_0 × d matrix of input weights of N_0, and by b* the biases of its first layer.
We build a special network N with width W = d + 1 + W 0 and depth L = nL 0 to output S. Its first d channels are source channels to push forward x 1 , . . . , x d . The next W 0 channels will be the channels of N 0 , and the final channel will be a collation channel to form the sum defining S.
The network N consists of n copies of N 0 placed next to each other. We feed the source channels to the j th copy of N 0 , j = 1, . . . , n. For this copy we use input matrix A * A j and bias (b * + A * b j ) for its first layer. The nodes of the collation channel forward zeroes up to layer L 0 + 1, where the output c 1 ϕ(A 1 x + b 1 ) of the first copy of N 0 is entered and then forwarded. The output of the j th copy is multiplied by c j through modification of the output weights of N 0 , and forwarded to the (jL 0 +1) st node of the collation channel if j < n, where it is added to the current sum in that channel and then the result is forwarded. When j = n the output of the n-th copy is outputted together with the content of the collation channel to produce S.

Approximation of products
We turn next to showing how to approximate certain simple building blocks with exponential accuracy using deep ReLU networks. These building blocks include monomials, polynomials, tensor products, and B-splines. An important tool in establishing such results is to show how one can approximate products of functions, which is our next item of interest.
Let $H$ be the hat function introduced in (3.4). We begin with the well-known representation of $S(t):=t^2$ on $[0,1]$:
$$t^2 = t - \sum_{k=1}^{\infty} 4^{-k}H^{\circ k}(t), \qquad t\in[0,1]. \tag{8.4}$$
We define
$$S_n(t) := t - \sum_{k=1}^{n} 4^{-k}H^{\circ k}(t), \qquad n\ge1.$$
Let us note that we can also represent $S_n$ as the continuous piecewise linear interpolant of $t^2$ at the points $j2^{-n}$, $j=0,1,\dots,2^n$. These two representations of $S_n$ show the following.

Proposition 8.4. For each $n\ge1$, the function $S_n\in\Upsilon^{4,n}(\mathrm{ReLU};1,1)$ and
$$\|S-S_n\|_{C[0,1]} \le \tfrac13\,4^{-n}, \tag{8.7}$$
$$|S-S_n|_{\mathrm{Lip}\,1} \le 2^{-n}. \tag{8.8}$$

Proof: Since $H\in\Upsilon^{2,1}(\mathrm{ReLU};1,1)$, in view of Proposition 8.1, there is a ReLU network of width 3 and depth $L=n$ that outputs $\sum_{k=1}^n 4^{-k}H^{\circ k}$. If we add one more channel to push forward $t$, then we can also output $S_n(t) := t - \sum_{k=1}^n 4^{-k}H^{\circ k}(t)$. Since $S(t)-S_n(t) = -\sum_{k=n+1}^\infty 4^{-k}H^{\circ k}(t)$, the bound (8.7) follows from $\|H^{\circ k}\|_{C[0,1]}\le1$ and $\sum_{k=n+1}^\infty 4^{-k} = \tfrac13\,4^{-n}$, whereas (8.8) follows from the fact that each $H^{\circ k}$ has Lipschitz norm $2^k$.

Let us mention that there are many functions other than $t^2$ for which explicit formulas like (8.4) hold. These will be discussed in §8.10. For now, we want to examine how we can capture higher order monomials from the above results. First, we begin by showing how we can implement multiplication using deep ReLU networks. We start with the simple formula
$$x_1x_2 = 2\Big(\frac{x_1+x_2}{2}\Big)^2 - \frac{x_1^2}{2} - \frac{x_2^2}{2}, \qquad x_1,x_2\in[0,1]. \tag{8.10}$$
We can construct a neural network with input dimension 2 which outputs the product function $\Pi(x_1,x_2):=x_1x_2$ with high accuracy.
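This construction can be checked numerically. A minimal sketch, assuming the hat function has the one-hidden-layer ReLU form $H(t) = 2\,\mathrm{ReLU}(t) - 4\,\mathrm{ReLU}(t-\tfrac12)$ on $[0,1]$ (the explicit form of (3.4) is not reproduced in this section, so that form is an assumption):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def H(t):
    # assumed one-layer ReLU form of the hat function:
    # equals 2t on [0,1/2] and 2(1-t) on [1/2,1]
    return 2*relu(t) - 4*relu(t - 0.5)

def S_n(t, n):
    # S_n(t) = t - sum_{k=1}^n 4^{-k} H^{ok}(t): one channel carries the
    # running iterate H^{ok}, a collation channel accumulates the sum,
    # mirroring the construction of Proposition 8.1
    t = np.asarray(t, dtype=float)
    s, h = t.copy(), t
    for k in range(1, n + 1):
        h = H(h)                 # h now equals H^{ok}(t)
        s = s - 4.0**(-k) * h
    return s

t = np.linspace(0.0, 1.0, 2001)
errors = [np.max(np.abs(t**2 - S_n(t, n))) for n in (1, 2, 4, 8)]
```

The computed errors decay like $4^{-n}$, consistent with the tail bound (8.7).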
For $n\ge1$, we define for $x_1,x_2\in[0,1]$ the function
$$\Pi_n(x_1,x_2) := 2S_n\Big(\frac{x_1+x_2}{2}\Big) - \frac12 S_n(x_1) - \frac12 S_n(x_2), \tag{8.11}$$
and prove the following properties of $\Pi_n$.
Proposition 8.6. For each $n\ge1$, the function $\Pi_n\in\Upsilon^{5,3n}(\mathrm{ReLU};2,1)$ and satisfies, in particular, the inequality $\|x_1x_2 - \Pi_n(x_1,x_2)\|_{C([0,1]^2)} \le 4^{-n}$.

Proof: Let $\mathcal N$ be the network of width $W=4$ and depth $n$ which outputs $S_n$, see Proposition 8.4. We now construct a network $\mathcal N'$ which inputs $(x_1,x_2)$ and outputs $\Pi_n$. First, we add a source channel to $\mathcal N$ to push forward $x_2$ ($\mathcal N$ already has a source channel to push $x_1$). Then, we place 3 copies of this extended network one after the other, one for each of the three terms in (8.10). We collect the three terms in the collation channel of $\mathcal N'$, and thereby produce $\Pi_n(x_1,x_2)$.
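A numerical sanity check of this product construction, under the same assumed one-layer ReLU form of the hat function; the three-term combination below instantiates (8.10) with $t^2$ replaced by $S_n(t)$:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def H(t):
    # assumed one-layer ReLU form of the hat function on [0,1]
    return 2*relu(t) - 4*relu(t - 0.5)

def S_n(t, n):
    # Yarotsky's depth-n approximation to t^2 on [0,1]
    t = np.asarray(t, dtype=float)
    s, h = t.copy(), t
    for k in range(1, n + 1):
        h = H(h)
        s = s - 4.0**(-k) * h
    return s

def Pi_n(x1, x2, n):
    # x1*x2 = 2((x1+x2)/2)^2 - x1^2/2 - x2^2/2, with t^2 replaced by S_n(t);
    # each argument stays in [0,1], so the three S_n errors add up to <= 4^{-n}
    return 2*S_n((x1 + x2)/2, n) - 0.5*S_n(x1, n) - 0.5*S_n(x2, n)

x1, x2 = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
prod_err = np.max(np.abs(x1*x2 - Pi_n(x1, x2, 6)))
```

The observed error stays below $4^{-6}$, matching the bound in Proposition 8.6.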
In particular, $C_k \le ek$, as long as $n \ge 1+\log_2 k$.
Proof: For $k\ge3$, we proceed by induction, assuming we have a network of width $k+2$ which takes the inputs $x_1,\dots,x_{k-1}$ and outputs $\Pi_n^{k-1}$ (when $k=3$, this is the network for $\Pi_n^2$). The first $3n(k-2)$ layers of the network for $\Pi_n^k$ are the same as those of this network, except that we add an additional channel to push forward $x_k$. We then follow these layers with the network for $\Pi_n^2$, using as its inputs $x_k$ and $\Pi_n^{k-1}(x_1,\dots,x_{k-1})$. The resulting network has width $W=k+3$ and depth $L=3(k-1)n$, as desired.
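The induction can be mimicked numerically by chaining the pairwise product approximation; a sketch, where the helper names `Pi_n` and `Pi_k_n` are ours and the `clip` is only a numerical guard, not part of the construction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def H(t):
    # assumed one-layer ReLU form of the hat function on [0,1]
    return 2*relu(t) - 4*relu(t - 0.5)

def S_n(t, n):
    # Yarotsky's depth-n approximation to t^2 on [0,1]
    t = np.asarray(t, dtype=float)
    s, h = t.copy(), t
    for k in range(1, n + 1):
        h = H(h)
        s = s - 4.0**(-k) * h
    return s

def Pi_n(x1, x2, n):
    # approximate pairwise product on [0,1]^2 via the three-term formula (8.10)
    return 2*S_n((x1 + x2)/2, n) - 0.5*S_n(x1, n) - 0.5*S_n(x2, n)

def Pi_k_n(xs, n):
    # approximate x_1*...*x_k by feeding the running product, together with
    # the next pushed-forward variable, into a fresh copy of the Pi_n network
    p = xs[0]
    for x in xs[1:]:
        p = Pi_n(np.clip(p, 0.0, 1.0), x, n)  # clip guards tiny overshoot only
    return p

xs = [0.9, 0.7, 0.8, 0.6]
approx = Pi_k_n(xs, 8)
exact = float(np.prod(xs))
```

The accumulated error of the chain grows roughly linearly in $k$, consistent with the constant $C_k \le ek$.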

Approximation of polynomials
In the following, we show that polynomials can be well approximated by the outputs of deep ReLU networks. We begin with monomials.
Approximation of monomials: We use the standard notation $|\nu| = \sum_{j=1}^d \nu_j$ for the length of $\nu$. Theorem 8.7 shows that any monomial $\varphi_\nu$, with $|\nu|=m$, is well approximated by deep ReLU networks. Namely, for each $n\ge1$, there is an $S_\nu\in\Upsilon^{3+d,3(m-1)n}(\mathrm{ReLU};d,1)$ such that
$$\|\varphi_\nu - S_\nu\|_{C([0,1]^d)} \le em\,4^{-n}, \qquad n\ge1+\log_2 m. \tag{8.17}$$
Note that here we can keep the width of the network bounded by $3+d$ rather than $3+m$ because the variables $x_1,\dots,x_d$ are repeated; we leave the details to the reader.

Approximation of polynomials:
If $P = \sum_{\nu\in\Lambda} c_\nu\varphi_\nu$ and all of the indices $\nu\in\Lambda$ satisfy $|\nu|\le m$, then we can approximate $P$ by the function $S := \sum_{\nu\in\Lambda} c_\nu S_\nu$. Note that $S$ is the output of the concatenation of the networks that output the $S_\nu$'s, see Addition by increasing depth in §3.3.2. Since all networks producing the $S_\nu$'s already have source channels that forward the values $x_1,\dots,x_d$, we need to add only a collation channel to collect the terms in the sum defining $S$, and therefore $S\in\Upsilon^{4+d,\,3(m-1)n\#(\Lambda)}(\mathrm{ReLU};d,1)$. We have the following estimate for the approximation error, obtained from (8.17):
$$\|P-S\|_{C([0,1]^d)} \le em\,4^{-n}\sum_{\nu\in\Lambda}|c_\nu|, \qquad n\ge1+\log_2 m. \tag{8.18}$$
There are several savings that can be made in the size of the network in such constructions, by balancing the size of the $c_\nu$'s against the size of the networks for the $S_\nu$'s for a given target accuracy. Constructions of NN approximations to polynomial sums have been employed to prove results on approximating real analytic functions using deep ReLU networks. We do not formulate those results here but rather refer the reader to the papers (Opschoor, Schwab & Zech 2019) and (E & Wang 2018) for statements and proofs.

Approximation of tensor products
Tensor structures are a very effective method for approximation in high dimensions. It is beyond the scope of this article to lay this subject out in full detail. We simply wish to point out here that a rank-one tensor product is well approximated in $C(\Omega)$, $\Omega=[0,1]^d$, whenever its univariate components $g_j$ are well approximated. The starting point for this is the following simple proposition.
Proposition 8.8. If $g_j:[0,1]\to[0,1]$ and $g_j\in\Upsilon^{W_0,L_0}(\mathrm{ReLU};1,1)$, $W_0\ge3$, for $j=1,\dots,d$, then the rank-one tensor
$$g(x_1,\dots,x_d) := g_1(x_1)\cdots g_d(x_d) \tag{8.20}$$
can be approximated by an $S\in\Upsilon^{dW_0,\,L_0+3(d-1)n}(\mathrm{ReLU};d,1)$ to accuracy
$$\|g-S\|_{C(\Omega)} \le ed\,4^{-n}, \qquad n\ge1+\log_2 d. \tag{8.21}$$
Proof: We can take $S := \Pi_n^d(g_1,\dots,g_d)$. We claim that $S$ is an element of $\Upsilon^{dW_0,\,L_0+3(d-1)n}(\mathrm{ReLU};d,1)$. Indeed, we stack the networks producing the $g_j$'s, $j=1,\dots,d$, on top of each other and end up with a network $\mathcal N_1$ with width $dW_0$ and depth $L_0$. Then, we concatenate it with the network $\mathcal N_2$ producing $\Pi_n^d(y_1,\dots,y_d)$. The latter has depth $3(d-1)n$ and width $3+d\le dW_0$. The concatenation is done by forwarding the output of the network producing $g_j$ as an input to the $j$-th channel of $\mathcal N_2$ (recall that the first $d$ channels of $\mathcal N_2$ are source channels). We end up with a network with the desired width and depth. The inequality (8.21) follows from Theorem 8.7.
Remark 8.2. We make two remarks on the above proposition:
• If $0\le g_j(t)\le M$ instead of $0\le g_j(t)\le1$, then by using Remark 8.1 we obtain an $S$ in the same ReLU space, but the accuracy of approximation is now lessened by the factor $M^d$.
• If the $g_j$'s are not in the designated ReLU space, but are merely approximated by functions $\hat g_j:[0,1]\to[0,1]$, $j=1,\dots,d$, from the designated ReLU space to an accuracy $\varepsilon$, then the function $S := \Pi_n^d(\hat g_1,\dots,\hat g_d)$ is in the designated ReLU space and we can write
$$\|g-S\|_{C(\Omega)} \le \|g_1\cdots g_d - \hat g_1\cdots\hat g_d\|_{C(\Omega)} + \|\hat g_1\cdots\hat g_d - \Pi_n^d(\hat g_1,\dots,\hat g_d)\|_{C(\Omega)}, \tag{8.22}$$
where the first term does not exceed the sum of the $d$ telescoping errors
$$\|\hat g_1\cdots\hat g_j\,g_{j+1}\cdots g_d - \hat g_1\cdots\hat g_{j+1}\,g_{j+2}\cdots g_d\|_{C(\Omega)} \le \varepsilon. \tag{8.23}$$

Approximation of B-splines
In our presentation of the classical smoothness classes $K$ of functions given in §4, we stressed that the elements of $K$ have certain atomic decompositions and that their membership in $K$ is characterized by the decay of their coefficients in such representations. Thus, if we can show that the atoms in such a decomposition are well approximated by NNs, then we can obtain bounds for NN approximation of $K$. The aim of the present section is to show how this unfolds when we choose B-splines as the atomic representation system. Let $N_r$ be the univariate B-spline defined in (4.2). Let us recall that $N_r$ is supported on $[0,r]$ and is normalized so that $\|N_r\|_{C(\mathbb R)}=1$.
Proposition 8.9. Let $r\ge2$ and consider the multivariate B-spline $N(x_1,\dots,x_d) := N_r(x_1)\cdots N_r(x_d)$. There is a function $\hat N\in\Upsilon^{W,L}(\mathrm{ReLU};d,1)$ with width $W=6d$ and depth $L=Cn$, with $C$ depending only on $r$ and $d$, which satisfies
$$\|N-\hat N\|_{C(\mathbb R^d)} \le C(r,d)\,4^{-n}, \tag{8.24}$$
with the constant $C(r,d)$ depending only on $r$ and $d$. Moreover, the support of $\hat N$ is contained in that of $N$.
Proof: This is proved by approximating in succession the functions
$$t^{r-1},\quad \rho^{r-1},\quad N_r,\quad N, \tag{8.25}$$
where $N_r$ is the univariate B-spline. Our results of the previous sections on approximating products were stated for approximation on $[0,1]^d$, and now we want approximation on $[0,r]^d$. This is done by using Remark 8.1 and changes the estimates by a constant depending only on $r$ and $d$. We assume such changes without further elaboration in what follows. All constants $C$ appearing in the proof depend at most on $r$ and $d$.
Because of (8.17), we can approximate the function $t^{r-1}$ by an element of $\Upsilon^{4,3(r-2)n}(\mathrm{ReLU};1,1)$ with an error that does not exceed $C4^{-n}$. By adding an extra ReLU layer, we can approximate the function $\rho^{r-1}$ by an $S_r$ from $\Upsilon^{4,3(r-2)n+1}(\mathrm{ReLU};1,1)$ with accuracy not exceeding $C4^{-n}$ (8.26). Next, we use $S_r$ to approximate the univariate B-spline $N_r$ by replacing $\rho^{r-1}(k-t)$ by $S_r(k-t)$ in formula (4.2). The resulting function $T_r$, see Theorem 8.3, is in $\Upsilon^{6,\,3(r+1)(r-2)n+r+1}(\mathrm{ReLU};1,1)$ (note that the network producing the latter set has one source channel for $t$ and one collation channel). In addition, we have the analogous accuracy estimate (8.27). We next consider $T_r^+ := \mathrm{ReLU}(T_r)$, which also satisfies (8.27), with the additional property $T_r^+\ge0$. Remark 8.2 with $\varepsilon = C4^{-n}$ and $M\le(1+C/4)$ gives that $N$ can be approximated by $\tilde N = \Pi_n^d(T_r^+,\dots,T_r^+)$ with accuracy
$$\|N-\tilde N\|_{C([0,r]^d)} \le C4^{-n}, \qquad n\ge1+\log_2 d. \tag{8.28}$$
The approximant $\tilde N\in\Upsilon^{W,L}(\mathrm{ReLU};d,1)$ with width $W=6d$ and depth $L=3(r+1)(r-2)n+r+2+3(d-1)n$. The network producing $\tilde N$ has $d$ source channels, one for each of the variables $x_i$, $i=1,\dots,d$, and $d$ collation channels. Finally, we modify the function $\tilde N$ of (8.28) so that it vanishes outside $[0,r]^d$. This is done by what should by now be a familiar technique to the reader. We construct a function $S$ with support $[0,r]^d$ and $S\ge N$, using the method for the construction of nodal functions, see (3.19). More precisely, $S := \mathrm{ReLU}(\min\{\ell_1,\dots,\ell_{2d}\})\in\Upsilon^{d+1,2d}(\mathrm{ReLU};d,1)$, see MM2 of §3.3.2, where the $\ell_j$, $j=1,\dots,2d$, are affine functions, each of which vanishes on one of the $2d$ facets of the cube $[0,r]^d$ and lies above the graph of $N$. Then, the function $\hat N := \mathrm{ReLU}(\min\{\tilde N,S\})$ agrees with $\tilde N$ when $\tilde N$ is nonnegative and vanishes outside $[0,r]^d$. Therefore it satisfies all the properties of the theorem. Since both networks producing $\tilde N$ and $S$ have source and collation channels, it follows that $\hat N\in\Upsilon^{6d,Cn}(\mathrm{ReLU};d,1)$ with $C$ depending only on $r$ and $d$.
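The univariate ingredient can be checked numerically by writing $N_r$ through truncated powers of ReLU, a standard representation consistent with the $\rho^{r-1}(k-t)$ pieces above (the exact form of (4.2) is not reproduced in this section, so this representation is an assumption):

```python
import numpy as np
from math import comb, factorial

def relu(t):
    return np.maximum(t, 0.0)

def N_r(t, r):
    # cardinal B-spline via truncated powers: the rho^{r-1}(. - k) pieces
    # are exactly powers of ReLU, which is what the networks S_r approximate
    s = sum((-1)**k * comb(r, k) * relu(t - k)**(r - 1) for k in range(r + 1))
    return s / factorial(r - 1)

t = np.linspace(-1.0, 5.0, 6001)
B = N_r(t, 4)           # cubic case r = 4, supported on [0, 4]
peak = B.max()          # equals 2/3 at t = 2 before renormalization
```

Note that the cardinal B-spline in this form has sup-norm $2/3$ for $r=4$; the text's $N_r$ is renormalized so that its sup-norm is 1.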

Approximation of Besov classes with deep ReLU networks
With the results of the previous section on B-spline approximation in hand, we can now show that deep neural networks are at least as effective as standard nonlinear methods (modulo logarithms), such as adaptive FEMs or $n$-term wavelets, when approximating the classical smoothness spaces (Sobolev and Besov). The ideas used in the presentation below are put forward in the references (Ali & Nouy 2020) and (Gribonval et al. 2019).
We fix $\Omega=[0,1]^d$ and measure the approximation error in some $L_p(\Omega)$ norm, $1\le p\le\infty$. Let $K = U(B^s_q(L_\tau(\Omega)))$ be the unit ball of a Besov space that lies above the Sobolev embedding line for $L_p(\Omega)$, and is therefore a compact subset of $L_p(\Omega)$. Classical methods of nonlinear approximation show that $K$ can be approximated in $L_p(\Omega)$ to accuracy $O(n^{-s/d})$, where $n$ is the number of parameters used in the approximation. We now show how to achieve these rates when using $\Sigma_n := \Upsilon^{W_0,n}(\mathrm{ReLU};d,1)$ with $W_0$ fixed and depending only on $s$ and $d$. Note that the number of parameters needed to describe the elements in $\Sigma_n$ is at most $C(s,d)n$. Here, and later in this section, the constant $C(s,d)$ changes at each occurrence.
Proof: We only treat the case $1\le p<\infty$ and leave it to the reader to make the necessary changes for $p=\infty$. We fix $p$ and $s>0$. We can assume $q=\infty$, since this gives the largest unit ball for the given $\tau$, and $\tau<p$. We can further assume that $\delta>0$ is arbitrarily small, since the Besov spaces of order $s$ get larger as we approach the Sobolev embedding line, which corresponds to $\delta=0$.
To prove the theorem, it suffices to prove that it holds for $n = 2^L$ with $L$ a sufficiently large positive integer. We take $r = s+1$ and let $N$ denote the multivariate tensor product B-spline of order $r$. We recall the notation $\mathcal D$ for dyadic cubes, $\mathcal D(\Omega)$ for the dyadic cubes $I\in\mathcal D$ such that $N_I$ is nonzero on $\Omega$, $\mathcal D_k(\Omega)$ for these cubes at dyadic level $k$ (they have measure $2^{-kd}$), and $\mathcal D^+(\Omega) := \bigcup_{k\ge0}\mathcal D_k(\Omega)$.
In preparation for the choice of the $m(I)$, we first estimate how well $\hat S$ approximates $f$. If we denote by $S_k := \sum_{I\in\mathcal D_k(\Omega)} c_I(f)N_I$, $k\ge0$, then, using (8.36) and the fact that $\|N_I\|_{L_p(\Omega)}\le C|I|^{1/p}$, we obtain the error estimate needed below. For the definition of $m(j,k)$, let us introduce the notation
$$\varepsilon_t := 2\log_2(t+1), \qquad t\ge0. \tag{8.38}$$
For every $j\ge J_k$, $k\ge0$, we choose $m(j,k)$ to be the smallest non-negative integer satisfying the accuracy requirement (8.39). With this choice, and using (8.33) for the last inequality, we obtain the desired error bound, and it then follows from (8.30) that $\hat S$ achieves the rate claimed in the theorem. We are left to show that $\hat S\in\Sigma_{L^\beta 2^L}$, which in turn proves the theorem. We know that $\hat N_I\in\Upsilon^{6d,Cm(I)}(\mathrm{ReLU};d,1)$. Since the network producing $\hat N_I$ already has $d$ source channels and a collation channel, our Addition by increasing depth of §3.3.2 gives that $\hat S\in\Upsilon^{6d,CA}(\mathrm{ReLU};d,1)$, where $A$ is the total depth count given in (8.40). The index in the second sum in (8.40) has upper bound $J_k^+$, where $J_k^+$ is defined by equation (8.41), because $m(j,k)=0$ when $j\ge J_k^+$, see (8.39). Later, we shall use the bound (8.42). For $j\in[J_k,J_k^+]$, $m(j,k)$ takes its maximum value at $j=J_k$, a value computed from the definition of $J_k$ and (8.39). Therefore, we have the estimate (8.43), where the second sum uses the bound just mentioned. Obviously, the first sum on the right of (8.43) does not exceed $CL2^L$, and so we concentrate on the second sum. We first determine the exponent appearing in that sum: from (8.42) we find $kd - ks\tau + J_k^+\tau = \lambda(Ls/d + \varepsilon_k - \delta k)$. We substitute the latter relation into (8.43) and obtain, after the change of index $i = k - L/d$ and again using (8.42), the bound we want. This proves the theorem.
Before proceeding on, we make the following remarks concerning the above theorem and its proof.
Remark 8.3. The above result is not quite as good as the results for approximating unit balls of Besov classes by other methods of nonlinear approximation, see (DeVore 1998), because of the appearance of the logarithm. We should mention that in (Ali & Nouy 2020) the authors prove a result similar to the above theorem by using spline wavelets rather than B-splines as the main vehicle. In the next section, we show that when $p=\infty$ this logarithm does not appear and, in fact, we can prove much better rates of approximation. These can in turn be used to improve the above results for all Besov classes, as we shall discuss at the end of the next section. The determination of the best approximation rates for $L_p$ approximation when using the outputs of deep networks such as $\Sigma_n = \Upsilon^{W_0,n}(\mathrm{ReLU};d,1)$ remains unsettled.

Super convergence for deep ReLU networks
In this section, we present some very intriguing results on approximation by NNs that show quite unexpected rates of approximation for certain classical model classes $K$ described by smoothness. The initial results were given in (Yarotsky 2018) for the model classes $U(\mathrm{Lip}\,\alpha)$, $0<\alpha\le1$, on $\Omega=[0,1]^d$, and were later extended to more general model classes $U(C^s(\Omega))$, $s>0$, in (Lu et al. 2020). Our aim in this section is to show how these super rates are established in the simple case of univariate functions in $\mathrm{Lip}\,1$, and to leave the reader to consult the above references for the treatment of functions of $d$ variables and higher smoothness. At the end of this section, we place these results into perspective and formulate some related questions.
Assume now that $\varepsilon_0,\dots,\varepsilon_{j-1}$ have been chosen and the corresponding $y_1,\dots,y_j$ have been shown to satisfy (8.47). We now choose $\varepsilon_j$ so that $2y_{j+1}/N = 2(y_j+\varepsilon_j)/N$ is closest to $R(t_{j+1})$. Since $R$ changes by at most $2/N$ in moving from $t_j$ to $t_{j+1}$, this choice will also satisfy (8.47). So, we are left to verify that $y_n=0$. Since $n$ is even, $y_n = \varepsilon_0+\dots+\varepsilon_{n-1} = 2m$ for some integer $m$. In addition, we have $|2y_n/N - 0|\le 2/N$, and therefore we must have $m=0$. Thus, we have shown the existence of a sequence $(\varepsilon_i)_{i=0}^{N-1}$ with the required properties, which completes the proof of the theorem.
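A toy sketch of this greedy sign selection, assuming $\varepsilon_j\in\{-1,+1\}$ and a target sequence $R(t_j)$ that changes by at most $2/N$ per step (the names and setup below are illustrative, not the theorem's exact construction):

```python
import numpy as np

def choose_signs(R_vals, N):
    # Greedily pick eps_j in {-1, +1} so that the walk y_{j+1} = y_j + eps_j
    # keeps 2*y_j/N within 2/N of R(t_j); R_vals[j] plays the role of R(t_j).
    y, signs, ys = 0, [], [0]
    for target in R_vals[1:]:
        eps = 1 if abs(2*(y + 1)/N - target) <= abs(2*(y - 1)/N - target) else -1
        signs.append(eps)
        y += eps
        ys.append(y)
    return signs, ys

N = 64
rng = np.random.default_rng(1)
# a target starting at 0 that changes by at most 2/N per step
steps = rng.uniform(-2.0/N, 2.0/N, size=N)
R_vals = np.concatenate([[0.0], np.cumsum(steps)])
signs, ys = choose_signs(R_vals, N)
track_err = max(abs(2*y/N - r) for y, r in zip(ys, R_vals))
```

By induction, the tracking error never exceeds $2/N$: the target moves by at most $2/N$, and each step moves $2y/N$ by exactly $\pm 2/N$ toward it.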

Remarks on Theorem 8.11
We make some remarks on this theorem in order to put into perspective what it is saying. In this section, we take $\Sigma := (\Sigma_n)$, where the sets $\Sigma_n$ used for approximation are $\Sigma_n = \Upsilon^{W_0,Cn}(\mathrm{ReLU};d,1)$ with $W_0$ and $C$ fixed and depending only on $d$.
At first glance, this theorem is very surprising to numerical analysts and approximation theorists, since it gives a rate of approximation $O(n^{-2})$, $n\ge1$, which is twice what standard approximation methods based on $n$ parameters give. This indicates that the nonlinear manifold $\Sigma_n := \Upsilon^{W_0,Cn}$ has certain space-filling properties in $X=C(\Omega)$. While this seems like a great advantage of this manifold, recall that there are always one-parameter manifolds which are dense in $X$, albeit not as neatly described as $\Sigma_n$. But then, we must throw in some caution. The theorem says that given $f\in K$, there is a mapping $a:K\to\mathbb R^n$ which selects the parameters $a(f)$ of the approximant that produces this exceptional approximation performance. From our remarks in §5 on manifold widths, the mapping $a$ cannot be continuous (note that the mapping $M$ is always continuous, as will be discussed in more detail in the next section). This shows a lack of numerical stability in the approximation process which yields Theorem 8.11. This means that we can expect it to be very difficult to numerically find the parameters that attain the super convergence rate via a search over the parameter domain. On the other hand, if we are willing to allow a long enough search time with an a posteriori error estimator, we might be able to find such parameters.
In spite of the negative comments just put forward, the theorem is intriguing and brings up several questions that we now discuss. The first natural question is in what generality this super convergence holds. We have already mentioned that Yarotsky proved it for multivariate functions of $d$ variables. He also proved a general result which gives that the theorem holds for the $\mathrm{Lip}\,\alpha$ spaces, $0<\alpha\le1$. A generalization of this theorem is provided in (Lu et al. 2020). It shows that the set $K=U(\mathrm{Lip}\,1)$ can be replaced by the unit ball $K$ of $C^s(\Omega)$, $\Omega=[0,1]^d$, for any $s>0$. However, in the latter presentation there is a loss of a logarithm, in that the proven approximation rate is $E(K,\Sigma_n)_{C(\Omega)} \le \big(\frac{\log n}{n}\big)^{2s/d}$, $n\ge1$. Next, let us remark that the results of §5.9 and Theorem 3.9 give that for the model classes $K=U(C^s(\Omega))$ we have the lower bound $E(K,\Sigma_n)_{C(\Omega)} \ge c_0 n^{-2s/d}$, $n\ge1$.
(8.49) So, at least for the Lipschitz spaces, we have matching upper and lower bounds, and therefore a satisfactory understanding of the approximation properties of deep NNs for these classes.

Super convergence for approximation in L p
The above results were limited to approximation in $C(\Omega)$, $\Omega=[0,1]^d$, and the Sobolev spaces $W^s(L_\infty(\Omega))$. What happens when the approximation takes place in $L_p(\Omega)$, $1\le p<\infty$, and what happens for general Besov spaces that compactly embed into $L_p$? We show in this section that we can obtain super convergence results in this case as well, by using results from the theory of interpolation spaces.
Proof: This is proved by using the K-functionals of interpolation theory. To keep the presentation simple and to just show the ideas of how this is done, we limit ourselves to proving one result of the above form when $d=1$ and $s=1$, with the approximation taking place in $L_\infty$. Instead of Besov balls, we use the unit balls $K_\tau := U(W^1(L_\tau(\Omega)))$, $1\le\tau\le\infty$, of the Sobolev spaces. After presenting this example, we give in Remark 8.4 an outline of the proof of the general result stated in the theorem.
We know the following two estimates:
$$E(K_1,\Upsilon^{3,n}(\mathrm{ReLU};1,1))_{L_\infty(\Omega)} \le Cn^{-1}, \qquad E(K_\infty,\Sigma_n)_{L_\infty(\Omega)} \le Cn^{-2}, \tag{8.51}$$
where the first was given in §7.1.1 (the approximant in that theorem can be viewed as an element of $\Upsilon^{3,n}(\mathrm{ReLU};1,1)$) and the second is the super convergence result of Yarotsky, see Theorem 8.11, with $\Sigma_n := \Upsilon^{11,16n+1}(\mathrm{ReLU};d,1)$. From interpolation between the pair $W^1(L_1(\Omega))$ and $W^1(L_\infty(\Omega))$ (this is where K-functionals are used, see (DeVore & Scherer 1979)), we know that whenever $f\in K_\tau$, for any $t>0$ there is a function $g\in W^1(L_\infty(\Omega))$ such that
$$\|f-g\|_{W^1(L_1(\Omega))} + t\|g\|_{W^1(L_\infty(\Omega))} \le M t^{1-1/\tau}, \tag{8.52}$$
with $M$ an absolute constant. We take $t=1/n$ in going further. Now, let $S$ approximate $f-g$ in $L_\infty(\Omega)$ with the accuracy of the first statement in (8.51), and let $T$ approximate $g$ with the accuracy of the second statement. Then $S+T\in\Upsilon^{11,17n+1}$ and
$$\|f-(S+T)\|_{L_\infty(\Omega)} \le Cn^{-1}\|f-g\|_{W^1(L_1(\Omega))} + Cn^{-2}\|g\|_{W^1(L_\infty(\Omega))} \le CMn^{-2+1/\tau}. \tag{8.53}$$
In this case $\tau^*=1$, so this is the desired inequality. Moreover, since $\|f-(S+T)\|_{L_p(\Omega)} \le \|f-(S+T)\|_{L_\infty(\Omega)}$, we also have the analogous bound in $L_p(\Omega)$.

Remark 8.4. We outline the changes necessary to prove the general case in the statement of the theorem. Now we want to measure the approximation error in $L_p(\Omega)$, $1\le p<\infty$, not just $C(\Omega)$. Of course, the error of approximation in $L_p(\Omega)$ of a function $f$ is smaller than that in $C(\Omega)$. We use analogues of (8.51) for approximation in $L_p(\Omega)$ and two Besov balls. The first is the estimate where we use Theorem 8.10 to get the approximation rate $[\log_2 n]^{\beta_0} n^{-s/d}$. Here, we can choose $\tau_0>\tau^*$ so that we are as close to the Sobolev embedding line as we want (but not on it). The second inequality is the super convergence result for $K_1 = U(Z_1)$, $Z_1 = C^s(\Omega)$. For this, we use the generalization of Theorem 8.11, as given in (Lu et al. 2020), which gives the super approximation rate $[\log_2 n]^{\beta_1} n^{-2s/d}$. We now interpolate between $Z_0$ and $Z_1$ to obtain the theorem for approximation in the fixed $L_p(\Omega)$ space. The reason we have the given restriction on $\theta$ is that we cannot take $Z_0$ directly on the Sobolev embedding line. Figure 8.7 may be useful to the reader in understanding this theorem.
Figure 8.7: Why we get general super convergence by using interpolation theory. All error rates $E$ are modulo powers of logarithms when $s>1$.

A summary of known approximation rates for classical smoothness spaces
Let us summarize what we know about the optimal approximation rates when approximating functions from Besov (and Sobolev) model classes using the outputs $\Sigma_n$ of deep neural networks, $\Sigma_n := \Upsilon^{W_0,Cn}(\mathrm{ReLU};d,1)$, where $W_0$ is fixed, large enough, and depending only on $d$, and $C$ depends on $d$ and the model class. Given a value of $p$ with $1\le p\le\infty$, recall the notation
$$E_n(K)_{L_p(\Omega)} := E(K,\Sigma_n)_{L_p(\Omega)} = \sup_{f\in K}\,\inf_{S\in\Sigma_n}\|f-S\|_{L_p(\Omega)}. \tag{8.54}$$
We want to address what we know regarding the following problem.
Problem 6: For each model class $K$ which is the unit ball of a Besov space $B^s_q(L_\tau(\Omega))$ lying above the Sobolev embedding line for $L_p(\Omega)$, determine asymptotically matching upper and lower bounds for $E_n(K)_{L_p(\Omega)}$, $n\ge1$.
Even for the most favorable case $p=\infty$, we only have a satisfactory answer to this question when $0<s\le1$, in which case the optimal rate is $n^{-2s/d}$, $n\ge1$. The above results on super convergence provide the upper bounds. The lower bounds follow from the derivation of lower bounds on approximation rates using VC dimension, given in §5.9. Going further with the case $p=\infty$, the above results only provide a complete description of approximation rates when $s\le1$, because of the appearance of a logarithm in the extension of Yarotsky's results given in (Lu et al. 2020).
When we move to the case $p<\infty$, the situation is even less clear. First, Theorem 8.12 does give a super rate. However, we have no corresponding lower bounds that come close to matching this rate, because we cannot use VC dimension theory for $L_p$ approximation. In summary, for all Besov spaces that compactly embed into $L_p(\Omega)$, we obtain error bounds for approximation in $L_p(\Omega)$ strictly better than those of classical methods. What is missing vis-à-vis Problem 6 is a determination of the best bounds, and a method for proving lower bounds on approximation rates in $L_p(\Omega)$, $p\neq\infty$.

Novel model classes
While the performance of NN approximation on the classical smoothness spaces is an intriguing question that deserves a full and complete answer, we must stress the fact that such an answer will not provide an explanation for the success and popularity of NNs in their current domains of application, especially in deep learning. Indeed, the problems addressed via deep learning typically have the feature that the functions to be captured are very high dimensional, that is, the input dimension d is very large. Since all of the classical model classes built on smoothness have large entropy and suffer the curse of dimensionality as d gets large, they are not appropriate model classes for such learning problems. This amplifies the need to uncover new model classes that do not suffer the curse of dimensionality, that are well approximated by outputs of NNs, and are a good match for the targeted application. We must say that little is formally known in terms of rigorously defining new model classes in high dimension, showing that they have reasonable entropy bounds, and then analyzing their approximation properties by NNs. However, several ideas have emerged as to how such model classes may be defined. We mention some of those ideas here with the intention to outline a road map of how to possibly proceed with defining model classes in high dimensions.
8.10.1. Comments on the curse of dimensionality
First, let us say a few words about the curse of dimensionality. One frequently hears the claim that a certain numerical method 'breaks the curse of dimensionality'. There are two components to such a statement. The first is that the numerical problem under study is such that it can be solved in high dimensions without suffering adversely from dimensionality. The second is that a particular numerical method has been found that actually does the job.
In the setting of numerical methods for function approximation, the first statement has to do with the model class assumption on f , or the model class information that can be derived about f from the context of the problem. For example, when solving a PDE numerically, the model class information is usually given by a regularity theorem for the solution to the PDE. In other words, it is the model class K that determines whether or not the problem is solvable by a numerical method that avoids the curse of dimensionality.
Heuristically, it is thought that the crucial factor in whether or not a given model class $K$ suffers from the curse of dimensionality is its Kolmogorov entropy in the metric where the error is to be measured, see §5.2 for the definition of this entropy and the entropy numbers $\varepsilon_n(K)_X$. There is not always a clear-cut mathematical proof that entropy is indeed the deciding factor. This lack of clarity stems from our vagueness in describing what is an allowable numerical method. This brings us back to the use of space filling manifolds in approximation. We have already noted that such manifolds have the capacity to approximate arbitrarily well. But are they a fair method of approximation? Implementing such a manifold numerically as an approximation tool requires an inordinate amount of computation. So really, the computational time to implement the numerical method is an issue. This is well known in the numerical analysis community, but seems not to be treated sufficiently well in the learning community. The latter would involve statements about how many steps of a descent algorithm are necessary to guarantee a prescribed accuracy.
We have touched on this subject in §5.6, where we have introduced stable methods of approximation. The introduction of stability was made precisely to quantify when a numerical method could be implemented within a reasonable computational budget. Under the imposition of stability in manifold approximation, we have shown that indeed the entropy of K governs optimal approximation rates.
Regarding the second factor, the question is whether we can put forward a concrete numerical scheme which can approximate the target function with a computational budget which does not grow inordinately with the dimensionality d. In this sense, it is not only an issue of how well we can approximate a given f using a specific tool Σ := (Σ n ) n≥1 , but whether we can find an approximant within a reasonable computational budget.

Model classes in high dimension
With these remarks in hand, our quest is to find appropriate model classes for high dimensional functions which have reasonable entropy when $d$ is large and yet match the intended applications. In this context, it is allowable for the entropy of the model class to grow polynomially with $d$, but not exponentially.
The search for appropriate high dimensional model classes has carried on independently of deep learning or NN approximation, since it has always been a driving issue whenever we are dealing with high dimensional approximation. We next mention some of the ideas that have emerged over the recent decades on how to possibly define high dimensional model classes and how these ideas intersect with NN approximation.
Model classes built on sparsity: The idea of using sparsity to describe high dimensional model classes appeared largely in the context of signal/image processing. The simplest example is the following. Assume $\{\varphi_j\}_{j\ge1}$, with $\|\varphi_j\|_X = 1$, is an unconditional basis for a Banach space $X$ of functions of $d$ variables. So, every $f\in X$ has a unique representation
$$f = \sum_{j\ge1}\lambda_j(f)\varphi_j, \tag{8.55}$$
where the $\lambda_j$ are linear functionals on $X$ and the convergence in (8.55) is absolute. Here, the reader may assume that $X$ is an $L_p$ space to fix ideas. The space $X$ defines the norm in which we will measure performance (error of approximation). Given any $q\le1$, let $K_q$ consist of all functions $f\in X$ such that
$$\sum_{j\ge1}|\lambda_j(f)|^q \le 1. \tag{8.56}$$
If one wishes to approximate functions from $K_q$, the most natural candidate is $n$-term approximation using the basis $(\varphi_j)_{j\ge1}$. Let $\Sigma_n$ be the (nonlinear) set consisting of all functions $S = \sum_{j\in\Lambda} a_j\varphi_j$, $\#(\Lambda)\le n$. It is a simple exercise to show that
$$E(K_q,\Sigma_n)_X \le C_q n^{-1/q+1}, \qquad n\ge1. \tag{8.57}$$
Note that the Besov model classes take a form similar to (8.56) because of their characterization by atomic decompositions using splines or wavelets, see §4.3.1. There are numerous generalizations of this notion of sparsity. For example, one can replace the unconditional basis by a more general set $\mathcal D$ of functions which form a frame or a dictionary. Even though they give approximation rates that do not depend on the number of variables $d$, model classes built on sparsity are not necessarily immune to the curse of dimensionality, because the basis or dictionary is infinite. To avoid this, one has to impose other conditions on the sequence of coefficients $(\lambda_j(f))_{j\ge1}$ that allow one to truncate the sum to a finite set of indices when seeking an $n$-term approximation. This is often done by placing mild decay assumptions on these coefficients. The other central issue is whether the model class built on sparsity matches the intended application.
That is, there should be some justification that the sparsity class is a natural assumption in the application area.
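The rate (8.57) is usually derived from a Stechkin-type tail bound, $\sum_{j>n}\lambda_j^* \le \|\lambda\|_{\ell_q}\,n^{1-1/q}$, where $\lambda^*$ denotes the decreasing rearrangement of the coefficients. A quick numerical illustration, with an invented coefficient sequence:

```python
import numpy as np

q = 0.5                                            # sparsity parameter, q <= 1
lam = np.arange(1, 200001, dtype=float)**(-2.2)    # illustrative (lam_j) in l_q
lam_q = np.sum(lam**q)**(1.0/q)                    # its l_q quasi-norm

# the best n-term approximation keeps the n largest coefficients; since
# ||phi_j||_X = 1, the error is at most the sum of the discarded coefficients
decreasing = np.sort(lam)[::-1]
tails = np.cumsum(decreasing[::-1])[::-1]          # tails[n] = sum_{j > n} lam_j*

bounds_hold = all(tails[n] <= lam_q * n**(1 - 1.0/q) for n in (10, 100, 1000))
```

For $q=1/2$ the bound is $C_q n^{-1}$, and the observed tails decay even faster because the chosen sequence lies strictly inside $K_q$.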
We have already seen an example of using sparsity in terms of a dictionary in discussing NN approximation when we introduced the Barron class. The Barron class appears as a natural model class when using shallow neural networks as an approximation tool. The appealing feature of the Barron class is that its definition was not made in terms of a dictionary but rather in terms of classical notions such as the Fourier transform; only later was it shown to be a sparsity class for a suitable dictionary of waveforms. Generalizations of Barron classes to deeper networks are given in (E et al. 2019).
Model classes built on composition: Since NNs are built on the composition of functions, it is natural to try to define model classes based on such compositions. The basic idea is that the model class should consist of functions $f$ with the representation $f = g_1\circ g_2\circ\cdots\circ g_m$, where the $g_k$, $k=1,\dots,m$, are simple component functions. This approach is studied, for example, in (Mhaskar & Poggio 2019, Shen et al. 2019, Schmidt-Hieber 2020). The key question in such an approach is what assumptions should be placed on the component functions. One expects to build the model class in a hierarchical fashion by showing that when $g_1$ and $g_2$ are well approximated, then so is their composition. Let us consider for a moment the simple setting of approximation in the univariate uniform norm $\|\cdot\|_{C(\Omega)}$, $\Omega=[0,1]$. Given $g_1,g_2$ and approximants $\hat g_1$ and $\hat g_2$, the simplest inequality for how well $\hat g_1\circ\hat g_2$ approximates $g_1\circ g_2$ is
$$\|g_1\circ g_2 - \hat g_1\circ\hat g_2\|_{C(\Omega)} \le \|g_1-\hat g_1\|_{C(\Omega)} + \|\hat g_1\|_{\mathrm{Lip}\,1}\,\|g_2-\hat g_2\|_{C(\Omega)}, \tag{8.58}$$
which points to the observation that formulations of such model classes will probably involve mixed norms.
Model classes built on self-similarity: Let us continue with the last example of the composition $g_1 \circ g_2$. If $g_2$ is a CPwL function (as is the case for outputs of ReLU NNs), then as the input variable $t$ traverses $[0,1]$, the composition traces out scaled copies of $g_1$ or parts of it. For example, if $g_2$ is the sawtooth function $H^{\circ L}$ of Figure 3.2, then we trace out multiple copies of $g_1$. The composition is, therefore, a self-similar function. This self-similarity is prevalent in outputs of deep NNs and has been used to show that certain functions, such as the Weierstrass nowhere differentiable function, are well approximated by outputs of deep NNs. There are even classes of functions, generated by dynamical systems, which are efficiently approximated by outputs of deep NNs. So, it is natural to try to build model classes using self-similarity or fractal-like structures, and then show that their members are well approximated by deep NNs. Examples of such univariate function classes, including the so-called Takagi class, have been given in the literature. In higher dimensions, it is shown in (Dym, Sober & Daubechies 2019) that the characteristic functions χ S of certain fractal sets are also efficiently approximated by the outputs of deep networks. This may relate to the success of deep learning in classification problems.
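This self-similarity is easy to see computationally. A minimal sketch (our own illustration, with $g_1 = \cos$ as a hypothetical outer function): the hat function $H$, itself the output of a small ReLU network, composed with itself $L$ times yields a sawtooth with $2^L$ linear pieces, and $g_1 \circ H^{\circ L}$ reproduces a rescaled copy of $g_1$ on each piece:

```python
import numpy as np

H = lambda t: 1.0 - np.abs(2.0 * t - 1.0)   # hat function, a ReLU-network output

def H_comp(t, L):
    """L-fold composition H∘H∘...∘H: a sawtooth with 2^L linear pieces."""
    for _ in range(L):
        t = H(t)
    return t

L = 4
t = np.linspace(0.0, 2.0 ** (-L), 101)      # the first monotone piece
# On [0, 2^-L] the sawtooth is t -> 2^L t, so g1 ∘ H_comp traces a rescaled
# copy of g1 there; subsequent pieces give alternately reflected copies.
g1 = np.cos
assert np.allclose(g1(H_comp(t, L)), g1(2.0 ** L * t))
```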
Model classes built on dimension reduction: A common high dimensional model class with reasonable entropy is the set of functions with anisotropic smoothness. These functions depend non-democratically on their variables, that is, certain variables are more important than others, see e.g. (DeVore, Petrova & Wojtaszczyk 2011). This is a dominant theme in numerical methods for PDEs, where notions of hyperbolic smoothness classes arise and numerical methods are built on sparse grids or tensor structures.
Another prominent example is a model class viewed as a low dimensional manifold in a high dimensional ambient space. Since our approximation tool is itself a parameterized manifold, these model classes seem like a good fit for NN approximation. This is related to the viewpoint that the NN output is an adaptive partition/filter design, as expressed in (Balestriero & Baraniuk 2020).

Stable approximation
Up to this point, we have mainly been interested in how well we can approximate a target function f by the elements of the sets Υ W,L (ReLU; d, 1). The results that we have obtained do not usually provide an actual procedure that could be implemented numerically. In this section, we discuss in more detail issues surrounding the construction of numerical approximation procedures and whether we can guarantee their stability. This section builds on the general discussion in §5 which the reader needs to keep in view.
Here, we measure error in the norm of X = L p (Ω), Ω = [0, 1] d , 1 ≤ p ≤ ∞. We let Σ n , n ≥ 1, be the ReLU sets Υ W,L (ReLU; d, 1) with the number of parameters n(W, L) ≍ n. As usual, the two main examples that we have in mind are, first, when W = n and L = 1 and, secondly, when W = W 0 is fixed (depending on d) and L = n. Let K be a model class in the chosen L p (Ω).
We have mentioned before that any approximation method is described by two mappings a n : K → R n and M n : R n → Σ n , where a n chooses the parameters of the network for a given f ∈ K, and M n describes how the neural network takes a vector y ∈ R n of parameters and assigns the output M n (y) ∈ Σ n . Thus, the approximation to f is the function A n (f ) = M n (a n (f )). Notice that once we have decided to use a NN with a specific architecture as the method of approximation, the mapping M n is fixed and we do not get to choose it.
We now wish to understand two main issues:
• Stability Issue 1: How does imposing stability restrictions on the mappings a n and M n affect the approximation rates we can obtain?
• Stability Issue 2: How can we construct stable numerical algorithms for approximation?
We have already discussed Stability Issue 1 in some detail, see §5.5. We have seen that imposing stability limits the achievable approximation rates for NN approximation of a model class K in the sense that the decay rate cannot be better than the entropy numbers ε n (K) X of K. Of course, this does not say we can necessarily achieve (with NNs) an approximation rate equivalent to ε n (K) X . However, this does give a benchmark for optimal performance. This leads us to the following problem.
Problem 7: What are the optimal stable approximation rates for classical model classes such as Sobolev and Besov balls when using Σ := (Σ n ) n≥1 with Σ n := Υ W 0 ,Cn (ReLU; d, 1) as the approximation tool? In other words, we want matching upper and lower bounds for stable approximation of these model classes. A more modest question would be to replace stability by simply asking for continuity of these mappings.
Consider, for example, approximation in L p (Ω), Ω = [0, 1] d , of the Besov balls B s q (L τ (Ω)) that embed into L p (Ω). The entropy of such a ball is known and gives the benchmark rate O(n −s/d ): no stable method of approximation can achieve a better rate. However, we have not provided stable mappings for NNs that achieve this approximation rate. A similar situation holds when we assume only continuity of these maps.

Stability of M n
As we have noted, when using NN approximation, the mapping M n is determined by the architecture of the NN. In this section, we discuss the stability of this mapping. We always take M n to be the natural mapping which identifies the output S ∈ Υ W,L (ReLU; d, 1) with the parameters that are the entries of the matrices and bias vectors of the NN, see §2.1. We identify these parameters with a point in R n in such a way that the parameters at a given layer appear before those of the next layer, and the ordering within each hidden layer is always done in the same way.
It is easy to see that the mapping M n : R n → C(Ω), Ω = [0, 1] d , is continuous. In fact, as we shall now show, it is a Lipschitz map on any bounded set of parameters, that is, M n is locally Lipschitz. To describe this, we need to specify a norm on R n . We take this norm to be the ℓ ∞ (R n ) norm, that is, ‖y‖ ∞ := max 1≤i≤n |y i |. This choice is not optimal for obtaining the best constants in the estimates, but it will simplify the exposition that follows.
Theorem 9.1. If B is any finite ball in ℓ ∞ (R n ), then M n : B → C(Ω) is a Lipschitz mapping, that is,
$$\|M_n(y) - M_n(y')\|_{C(\Omega)} \le C\,\|y - y'\|_\infty, \qquad y, y' \in B, \qquad (9.1)$$
with the constant C depending only on B, W, L, and d.
Sketch of Proof: We will be a bit brutal and not search for the best constant in (9.1). In what follows in this proof, C denotes a constant depending only on B, W, L, and d, and may change from line to line. For y, y′ ∈ B we wish to bound ‖M n (y) − M n (y′)‖ C(Ω) by δ := ‖y − y′‖ ∞ . For a vector valued continuous function g, we use ‖g‖ to denote its C(Ω) norm, which is the maximum of the C(Ω) norms of its components.
We denote by η (j) , j = 1, . . . , L, the vector valued function of x ∈ R d computed at level j by the network with parameters y, and by η′ (j) the corresponding vector of functions computed with parameters y′. Setting η (0) (x) := x, we can write
$$\eta^{(j+1)} = \mathrm{ReLU}\big(A_j\,\eta^{(j)} + b^{(j)}\big), \qquad j = 0, \ldots, L-1,$$
where A j is the matrix determined by y to go from level j to level j + 1, and b (j) is the bias vector. Similarly, A′ j and b′ (j) correspond to the parameters y′.
Since y, y′ ∈ B, all entries of the A j , A′ j , b (j) , b′ (j) are bounded. Likewise, the norms of A j , A′ j as mappings from ℓ ∞ to ℓ ∞ are bounded. Also, we have
$$\|A_j - A'_j\| \le C\delta, \qquad \|b^{(j)} - b'^{(j)}\| \le \delta.$$
Using these bounds and the fact that ReLU(·) is a Lip 1 function, one derives
$$\|\eta^{(j+1)} - \eta'^{(j+1)}\| \le \|A_j\|\,\|\eta^{(j)} - \eta'^{(j)}\| + \|A_j - A'_j\|\,\|\eta'^{(j)}\| + \delta.$$
One then proves by induction that ‖η (j) ‖ ≤ C and ‖η (j) − η′ (j) ‖ ≤ Cδ for j = 0, 1, . . . , L. The final step is
$$\|M_n(y) - M_n(y')\|_{C(\Omega)} = \big\|(A_L\,\eta^{(L)} + b^{(L)}) - (A'_L\,\eta'^{(L)} + b'^{(L)})\big\| \le C\delta,$$
which gives the theorem.
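The local Lipschitz property of M n can be observed directly. The sketch below (with hypothetical sizes d = 2, W = 8, L = 2, chosen for illustration only) perturbs all weights and biases of a random ReLU network by at most δ and checks that the sup-norm change of the output, estimated on a random grid in Ω = [0,1]², is at most Cδ for a moderate constant C:

```python
import numpy as np

rng = np.random.default_rng(1)
d, W, delta = 2, 8, 1e-4

def forward(x, params):
    """The realization map M_n: parameters -> output function, at a point x."""
    (A0, b0), (A1, b1), (A2, b2) = params
    h = np.maximum(A0 @ x + b0, 0.0)   # first hidden layer (ReLU)
    h = np.maximum(A1 @ h + b1, 0.0)   # second hidden layer (ReLU)
    return (A2 @ h + b2)[0]            # affine output layer

params = [(rng.uniform(-1, 1, (W, d)), rng.uniform(-1, 1, W)),
          (rng.uniform(-1, 1, (W, W)), rng.uniform(-1, 1, W)),
          (rng.uniform(-1, 1, (1, W)), rng.uniform(-1, 1, 1))]

# Perturb every weight and bias by at most delta.
perturbed = [(A + rng.uniform(-delta, delta, A.shape),
              b + rng.uniform(-delta, delta, b.shape)) for A, b in params]

# Estimate the sup-norm change over Omega = [0,1]^2 on a random grid.
grid = rng.uniform(0, 1, (1000, d))
diff = max(abs(forward(x, params) - forward(x, perturbed)) for x in grid)
assert diff <= 2000 * delta   # local Lipschitz: change is at most C * delta
```

The constant 2000 is a crude hand-computed bound for this architecture with weights in [−1, 1], in the spirit of the proof sketch above.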
Remark 9.1. A closer look at the above estimates shows that the Lipschitz constant for M n can be controlled if we take B as a small ball around the origin. The size of the ball is chosen so that each of the matrices A j , A j have small norm. To do this, the required size of the ball gets smaller as W gets larger.

Stability of a n
With the above analysis of M n in hand, we see that the stability of a NN approximation method rests on the properties of the parameter selection a n . It is of interest to understand whether the most common methods of parameter selection, based on gradient descent, provide any stability. We discuss this issue in §11.3.1. For now, we limit ourselves to recalling our discussion on how imposing stability limits approximation rates, and on when stable methods are known. For the discussion that follows, we limit ourselves to approximation using Σ n = Υ W 0 ,n (ReLU; d, 1) with W 0 fixed. Many of the same issues we raise concerning stability appear in approximation using shallow networks.
Let us first observe that the parameter selection procedures that generate the super rates of convergence for Besov and Sobolev classes cannot be continuous because of (5.5). If we require that the mappings a n are only continuous and consider approximation in L p (Ω), Ω = [0, 1] d , then we can never attain rates of approximation better than O(n −s/d ) for the unit ball of any Besov space B s q (L τ (Ω)) that embeds compactly into L p (Ω). The only cases where we know that we can actually attain this rate are when τ ≥ p. In these cases, there are linear spaces, such as FEM spaces, contained in Σ n that provide this rate, and the approximation can be done by a linear operator. So the following problem is unsolved except for very special cases.
Problem 8: Consider the approximation of the unit ball of a Besov space B s q (L τ (Ω)) compactly embedded in L p (Ω) using the manifold Σ n . Give matching upper and lower bounds for the approximation rate in the case a n and M n are Lipschitz mappings. Similarly, determine upper and lower bounds when the parameter selection mapping a n is continuous.
A question closely related to stability is whether one can approximate well under the very modest restriction that a n is bounded. Recall that boundedness helps us with M n as well (see the above discussion). The issue of what approximation rates are possible when one imposes boundedness on a n has been studied in detail in the literature. The motivation there was different from ours, in that the interest was in NN approximation from the viewpoint of encoding. However, there is an intimate connection with stability, as we have just discussed.

Approximation from data
Thus far, we have limited ourselves to understanding the approximation power of neural networks. The approximation rates we have obtained assumed full access to the target function f . This scenario does not match the typical application of NN approximation to the tasks of learning. In problems of learning, the only information we have is data observations of f . Such data observations alone do not allow any rigorous quantitative guarantee of how well f can be recovered, that is, how accurately the behavior of f at new points can be predicted. What is needed for the latter is additional information about f , which we have referred to as model class information. The model class information is an assumption about f that is often not provable but based more on heuristics about the application area.
Learning from data is a vast area of research that cannot be covered in any detail in this exposition. So, we limit ourselves to pointing out some aspects of this problem and how they interface with the theory of NN approximation that we have discussed so far. Obviously, any performance guarantees derived in the learning setting must necessarily be worse than those for approximation, where full information about f is assumed. Thus, an important issue is to quantify this loss in performance.
The most common setting for the learning problem is a stochastic one, where it is assumed that the data is given by random draws from an underlying probability distribution. However, it is useful to consider the deterministic setting as well since it sheds some light on the stochastic formulation and the type of results that we can expect.

Deterministic learning; optimal recovery
In this section, we wish to learn a function f which is an element of a Banach space X. Our goal is to recover f from some finite set of data observations. We assume that the data observations are in the form of bounded linearly independent linear functionals applied to f . Thus, our data takes the form (λ 1 (f ), . . . , λ m (f )) ∈ R m , λ j ∈ X * , j = 1, . . . , m, where X * is the dual space of X. As we have pointed out numerous times, to give quantitative results on how well f can be recovered requires more information about f which we call model class information, i.e., information of the form f ∈ K, where K is a compact set in X. When we inject the model class assumption that f ∈ K, we have the question of how accurately we can recover f from the two pieces of information, the data and the model class.
We shall present the functional analytic view of this problem which is known as optimal recovery. It will turn out that the optimal recovery problem is not always amenable to a simple numerical method for the recovery of f . Nevertheless, this viewpoint will be useful in motivating specific numerical methods and analyzing how well they do when compared with the optimal solution.

Optimal recovery in a Hilbert space
We shall restrict our development here to the most popular setting where X = H is a Hilbert space. The reader interested in the more general Banach space setting can consult (DeVore et al. 2013). In the Hilbert space setting, each of the functionals λ j has a representation λ j (f ) = ⟨f, ω j ⟩, ω j ∈ H, j = 1, . . . , m, which is referred to as the Riesz representation of λ j . The functions ω j span an m dimensional subspace W := span{ω 1 , . . . , ω m } of H. We can assume without loss of generality that the ω j 's are an orthonormal system. From the given data, we can find the projection w := P W f of f onto W . We think of w as the given data. Now, let us assume in addition that f is in a certain model class K, and ask what is the best approximation (with error measured in the norm of H) that we can give to f based on this information, i.e., the data and the model class information. One may think that the best we can do is to take P W f as the approximation. However, this is not the case, since the information that f ∈ K allows us to say something about the projection of f onto the orthogonal complement W ⊥ of W .
Indeed, the model class information allows us to give a best approximation to f from the available information (model class and data w) as follows. Let
$$K_w := \{g \in K : P_W g = w\}.$$
Then, the membership of f in K w is the totality of information we have about f . The best approximation to f is now given by the center of the set K w . Namely, let B := B(K w ) be the smallest ball in H which contains K w . This ball is referred to as the Chebyshev ball, its center b w ∈ H is called the Chebyshev center, and its radius R w is the Chebyshev radius. The best approximation we can give to f is to take b w as the approximation, and the error that will ensue is R w . The function b w is the optimal recovery and R w is its error of optimal recovery.
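As a toy numerical illustration (entirely our own, with a small finite point set standing in for K w ): the Chebyshev center minimizes the worst-case distance to the members of the set, so for a few points in R² it can be found by direct minimax optimization:

```python
import numpy as np
from scipy.optimize import minimize

# A finite stand-in for K_w: each point is a candidate for the unknown f.
candidates = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])

# The Chebyshev center minimizes the worst-case distance to the candidates;
# for this symmetric set the true center is (1, 1) with radius sqrt(2).
radius_at = lambda c: np.max(np.linalg.norm(candidates - c, axis=1))
res = minimize(radius_at, x0=np.array([0.5, 0.3]), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
center, radius = res.x, res.fun
```

Nelder-Mead is used here because the objective, a maximum of norms, is convex but not smooth.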
Let us reflect a bit on the above optimal solution. Every function in K w is a possibility for f . From the information presented to us (model class plus data), we do not know which of these functions is the desired f . So, we do the best we can to approximate all of the possible f 's, which turns out to be the Chebyshev center. Each g ∈ K w (the possibilities for approximants of f ) is of the form w + η, where η is in the null space N = W ⊥ . So, in essence, we are trying to find the η ∈ W ⊥ that we can add to w so that the sum w + η ∈ K.
Remark 10.1. Notice that if we find any η ∈ N such that w + η is in K, then we have essentially solved the problem, since $\hat f := w + \eta$ approximates f to accuracy at worst 2R w . Such an $\hat f$ is called a near best solution.
The above description of optimal recovery, despite being elegant and optimal, is not very useful for constructing a numerical procedure, since the Chebyshev ball is difficult to find numerically. Also, in practice we are often not sure what the appropriate model class K is in a given setting. However, optimal recovery is still a good guide for the development of numerical procedures.
There are two standard approaches to developing numerical algorithms for optimal recovery. The first one is to numerically generate a recovery through least squares minimization with a constraint that enforces the model class assumption. We will not engage this approach here, but simply mention that several elegant results show that for certain model classes these optimization problems have exact solutions in NN spaces, especially the ones with a single hidden layer. We refer the reader to (Unser 2020), (Parhi & Nowak 2018), (Ongie, Willett, Soudry & Srebro 2019), (Savarese, Evron, Soudry & Srebro 2019) for the most recent results using this approach.
The second approach, which is more closely tied to approximation, is to replace K by a simpler setK which is less complex than K, and yet accurate. One then solves the optimal recovery problem on the simpler surrogate model classK. We discuss this approach in the following two sections.

Optimal recovery by linear space surrogates
The usual approach to finding a surrogate $\hat K$ for K is to approximate K by a linear space of dimension n, or more generally, a nonlinear manifold Σ n , with n the number of parameters needed for its description. If we know that Σ n approximates K to accuracy ε n (here is where our error estimates for approximation are useful), we then can replace K by
$$\hat K := \{h \in H : \operatorname{dist}(h, \Sigma_n)_H \le \varepsilon_n\}. \qquad (10.2)$$
Clearly, $K \subset \hat K$. Usually, we also have some knowledge of the norm ‖f‖ H for functions f ∈ K, and this can be used to trim the set $\hat K$ even further. Once a surrogate $\hat K$ has been chosen, we solve the optimal recovery problem for $\hat K$ in place of K by using Chebyshev balls for $\hat K_w$ as described above. As we shall now see, we can often solve the optimal recovery problem for the surrogate exactly by a numerical procedure.
We assume for the time being that $\hat K$ is given by (10.2) with Σ n a linear space of dimension n < m. In this case, the problem is a much simpler recovery problem than the one for K, and optimal recovery has an exact solution that we now describe, see (Binev, Cohen, Dahmen, DeVore, Petrova & Wojtaszczyk 2017). Let us define H w := {h ∈ H : P W h = w}, that is, H w is the set of all functions in H which satisfy the data. Since f ∈ K, $K \subset \hat K$, and P W f = w, we see that $\hat K_w := H_w \cap \hat K$ is non-empty. The center of the Chebyshev ball $B(\hat K_w)$ for $\hat K_w$ is the point $u^*(w) \in H_w$ which is closest to Σ n , that is,
$$u^*(w) := \operatorname*{argmin}_{h \in H_w} \operatorname{dist}(h, \Sigma_n)_H.$$
The function $u^*(w)$ is found as follows. One solves the least squares problem
$$v^*(w) := \operatorname*{argmin}_{v \in \Sigma_n} \|P_W v - w\|_H,$$
and then $u^*(w) = w + P_{W^\perp} v^*(w)$, where W ⊥ is the orthogonal complement of W in H (the null space of P W ). One can also compute the Chebyshev radius $\hat R_w$ of $B(\hat K_w)$ explicitly; we refer to (Binev et al. 2017) for the precise formula.
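This recovery algorithm is simple to implement. Below is a sketch in a finite-dimensional stand-in H = R^N (all dimensions and data are hypothetical): given the data w = P W f, it solves the least squares problem for v*(w), forms u*(w) = w + P W⊥ v*(w), and checks that u* satisfies the data exactly and, when f is indeed near Σ n , recovers f far better than the raw data w:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, n = 50, 20, 5          # ambient dim, number of data functionals, dim Sigma_n

Omega = np.linalg.qr(rng.standard_normal((N, m)))[0]   # orthonormal omega_j's
Phi = rng.standard_normal((N, n))                      # columns span Sigma_n

P_W = Omega @ Omega.T                                  # projection onto W
f = Phi @ rng.standard_normal(n) + 0.01 * rng.standard_normal(N)  # f near Sigma_n
w = P_W @ f                                            # the observed data

# v*(w) minimizes ||P_W v - w|| over Sigma_n: a plain least squares problem.
coef, *_ = np.linalg.lstsq(P_W @ Phi, w, rcond=None)
v_star = Phi @ coef
u_star = w + (v_star - P_W @ v_star)                   # u* = w + P_{W_perp} v*

assert np.allclose(P_W @ u_star, w)                    # u* matches the data
assert np.linalg.norm(u_star - f) < 0.1 * np.linalg.norm(w - f)
```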

Using Neural Networks for data fitting
The typical setting for supervised learning is to find an approximation of an unknown function f , given a training data set of its point values
$$(x^{(i)}, f(x^{(i)})), \quad x^{(i)} \in \mathbb{R}^d, \; f(x^{(i)}) \in \mathbb{R}, \quad i = 1, \ldots, m. \qquad (11.1)$$
We refer to the points x (i) , i = 1, . . . , m, as the data sites. Thus, the data observation functionals are point evaluations (delta functionals). In many applications, the dimension d is very large. For example, in classification problems for images, d is the number of pixels in the images, typically somewhere in the range of 10³ to 10⁶, and for videos it is even higher. The learning problem is then to numerically produce from this data a function $\hat f$ that is in some sense a good predictor of f on new unseen draws x ∈ R d . In the preceding section, we described a systematic approach to learning from data, called optimal recovery. It begins with two vital requirements: (i) a known model class K to which f is assumed to belong, and (ii) a specific norm or metric in which the recovery of f by $\hat f$ is measured. In the optimal recovery formulation of the problem, a solid theory exists to describe the optimal solution via the Chebyshev ball. The deficiency in this approach is that the construction of numerical algorithms to generate a surrogate $\hat f$ may be a significant computational challenge.
Optimal recovery is not the viewpoint taken in the general literature on learning. Rather, in the learning community, the data fitting task is formulated in a stochastic setting, where one assumes that the data comes from random draws of the data sites x (i) with respect to a probability distribution, and the f (x (i) )'s are noisy observations of some unknown function f . Performance is then evaluated on new draws of data in the sense of probability or expectation of accuracy on these draws. This is commonly referred to as generalization error. Note that in this setting, there is no model class assumption on the function f giving rise to the data, and so there can be no provable bound for the generalization error. What is done in practice is to give an empirical bound based on checking performance on a lot of new (random) draws which are referred to as test data.
Traditionally, model class assumptions on the unknown function f played a dominant role in the classical formulation and proof of a priori performance guarantees, see (Bousquet et al. 2005). However, as noted in the previous paragraph, in the now dominant field of deep learning, where neural network approximation is an important technique, one deviates from the classical setting of model class assumptions. Our goal in the sections that follow is to understand what role approximation using neural networks plays in this new setting.

Deep learning
Deep learning is characterized by its ability to successfully treat very high dimensional problems, where one begins with inordinately large data sets and employs intensive computation for generating surrogates. Evidence of its success in handling high dimensional problems is provided only by empirical verification that the numerically created surrogate performs well on new draws of x. A priori guarantees of performance are generally lacking. In fact, performance is typically not formulated under model class assumptions, which in turn prevents such a priori analysis. The lack of a specific model class assumption is probably due, at least in part, to the high dimensionality d, since in this case it is often unclear what appropriate model classes should be. Note, however, that since the data observations are point evaluations, a minimal assumption is that f lies in a Reproducing Kernel Hilbert Space (RKHS), although the specific RKHS is not known or postulated.
Another important feature of deep learning is its use of overparameterization in the search for a surrogate. This runs counter to classical learning theory, which warns against overfitting the data because overfitting leads to fitting the noise.

Possible model class assumption in high dimension
Before turning to the overparameterized setting, we wish to make a few remarks on possible viable model class assumptions that could be used towards providing a priori guarantees in deep learning. One valid viewpoint is that the functions we are trying to recover do possess some special properties; we just do not know what they are.
The fact that neural networks are used quite successfully suggests that the functions we are trying to learn are well approximated by neural networks. If this is the case, then a natural model class assumption would be that f is in an approximation class A r ((Σ n ) n≥1 , X), which we recall consists of the functions f for which
$$\operatorname{dist}(f, \Sigma_n)_X \le M n^{-r}, \qquad n \ge 1, \qquad (11.2)$$
where again there is the question of the appropriate space X in which to measure the error. Here, (Σ n ) n≥1 would be the family of sets output by the chosen NNs and n would represent the number of their parameters. This underlines the importance of understanding the approximation performance of neural networks in a rate/distortion sense, and, in particular, which functions are well approximated by neural network outputs.

Overparameterization
We turn now to learning from data using overparameterized models. When searching for an approximation to f from a set of outputs of a neural network with a given architecture, say Σ n = Υ W,L (σ; d, 1), it is usually the case in practice that the number of trainable parameters, that is, the number n = n(W, L) of weights and biases, exceeds the number m of data sites x (i) :
$$\#\{\text{data points}\} = m \ll n = \#\{\text{parameters}\}.$$
In other words, neural networks are usually overparameterized. This means that there are generally infinitely many choices of the parameter vector θ (of network weights and biases) so that the network with these parameters outputs a function S(·; θ) that fits (interpolates) the data, that is S(x (i) ; θ) = f (x (i) ), i = 1, . . . , m.
Characterizing exactly which interpolant is chosen by the numerical method is at the heart of learning via overparameterized neural networks. In this section, we want to understand how this selection is done in practice and whether the selection has an analytic interpretation. In particular, there is the question of whether the numerical method itself is in a certain sense specifying a model class assumption. If so, it would be important to unravel what this hidden assumption is.

Selecting the interpolant by gradient descent
The standard way of selecting an approximant $\hat f$ to f in the practice of overparameterized deep learning using neural networks is to begin with a random starting guess θ (0) for the parameters, thereby specifying the first guess S(·; θ (0) ) for a surrogate. Successive approximations S(·; θ (k) ), k = 1, 2, . . . , are then generated by applying gradient descent (or stochastic gradient descent) to finding the minimum of a loss L, which usually takes the form of an empirical risk, that is,
$$\mathcal{L}(\theta) := \sum_{i=1}^{m} \big(f(x^{(i)}) - S(x^{(i)}; \theta)\big)^2. \qquad (11.3)$$
If the step sizes are appropriately chosen in the descent algorithm, then this procedure seems to work well in practice, in that S(·; θ (k) ) with k large is an approximation to f which generalizes well. Here, θ (k) is the parameter output by the gradient descent algorithm at the k-th step. The above method for selecting a surrogate does not employ the traditional remedy for working with approximation methods that have the capacity to overfit the data, which is to add a regularizer, such as an ℓ 1 or ℓ 2 penalty on the parameter vector, to the iteration. The effect of incorporating such regularizers is studied, for example, in (Savarese et al. 2019, Ongie et al. 2019, Parhi & Nowak 2018). The main conclusion of these papers is that one can view the addition of such a constraint as a model class assumption. However, it is important to note that adding such a regularizer is not usually done in NN practice. While a weak regularization is sometimes employed, empirical evidence seems to indicate that it is not necessary for good generalization performance, see (Zhang, Bengio, Hardt, Recht & Vinyals 2017).
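A minimal sketch of this procedure (our own toy example: a single hidden layer, m = 5 data sites, W = 64 so that the 3W parameters far exceed m, hand-written gradients of a normalized variant of (11.3), plain gradient descent from a random start) shows the overparameterized network driving the residuals to near-interpolation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Overparameterized one-hidden-layer ReLU net: 3*W parameters >> m data points.
m, W, lr = 5, 64, 0.01
x = np.linspace(0.0, 1.0, m)
y = np.sin(3 * x)                         # observed values f(x^{(i)})

a = rng.standard_normal(W) / np.sqrt(W)   # outer weights
u = rng.standard_normal(W)                # inner weights
b = rng.standard_normal(W)                # biases

def model(t):
    return np.maximum(np.outer(t, u) + b, 0.0) @ a

# Plain gradient descent on the empirical risk (normalized by m, which
# only rescales the learning rate relative to (11.3)).
for _ in range(100_000):
    act = np.outer(x, u) + b
    relu = np.maximum(act, 0.0)
    r = relu @ a - y                      # residuals on the data
    ind = (act > 0.0) * a                 # chain-rule factor through the ReLU
    a -= lr * relu.T @ r / m
    u -= lr * (ind * r[:, None] * x[:, None]).sum(axis=0) / m
    b -= lr * (ind * r[:, None]).sum(axis=0) / m

assert np.max(np.abs(model(x) - y)) < 0.05   # near-interpolation of the data
```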
A number of attempts have been made to understand why descent algorithms, employed to train overparameterized neural networks, provide a surrogate that generalizes well, see (Jacot, Gabriel & Hongler 2018), (Dziugaite & Roy 2017), (Arora, Ge, Neyshabur & Zhang 2018), (Bartlett, Foster & Telgarsky 2017). However, the resulting a priori performance guarantees are often vacuous in practice in the sense that the probability of misclassification of a new sample is bounded from above by a number greater than one. Of course, no such guarantee can hold in the absence of a model class assumption on the underlying function f which provided the data.
On the other hand, some heuristic explanations have been put forward to explain the success of this approach. One of the most popular is that the descent algorithm itself provides a form of implicit regularization that biases learning towards selecting parameter values θ * that correspond in some sense to low complexity functions S(·; θ * ). The idea is that the starting guess S(·; θ (0) ) has relatively low complexity with high probability. Then, since the model is overparameterized, there are many values of θ for which S(·; θ) interpolates the data. In particular, there is often such a value θ * near θ (0) . Since gradient descent is essentially a greedy local search, it is reasonable to expect that it will converge to such a θ * that is near θ (0) .
These heuristics would match a model class assumption that f itself is well approximated by the outputs of neural networks depending on relatively few parameters, that is, that f is in a model class A r with a large value of r. Or, more generally, that f is well approximated by networks depending on many parameters, but with some additional constraints on the size or complexity of these parameters. The purpose of the next section is to provide some support for this idea in the simple case of overparameterized regression.

Gradient descent for linear regression
As we have seen, the outputs of a neural network form a complicated nonlinear family which is difficult to analyze. It could therefore be useful to understand what the above numerical approach based on gradient descent yields in the simpler case of linear regression. We briefly describe this in the present section. We seek to model a data set {(x (i) , f (x (i) ))}, x (i) ∈ R d , f (x (i) ) ∈ R, i = 1, . . . , m, by using a function from a linear space V n = span{φ j , j = 1, . . . , n}.
The key assumption we make is that the model is overparameterized, meaning that m < n. If A := (a i,j ) is the m × n matrix with entries a i,j := φ j (x (i) ), i = 1, . . . , m, j = 1, . . . , n, then the coefficients θ = (θ j ) n j=1 of any interpolant
$$S(\cdot, \theta) = \sum_{j=1}^{n} \theta_j \varphi_j(\cdot) \in V_n \qquad (11.4)$$
to the data satisfy the underdetermined system of equations
$$A\theta = y, \qquad y := \big(f(x^{(1)}), \ldots, f(x^{(m)})\big) \in \mathbb{R}^m. \qquad (11.5)$$
The standard way of choosing a solution to (11.5) is to take the Moore–Penrose pseudoinverse solution θ * ∈ R n , which we recall is the solution of minimum ℓ 2 (R n ) norm. Let W ⊥ denote the null space of A, which consists of all θ ∈ R n that are solutions to (11.5) with the zero vector on the right side. Further, we denote by W the orthogonal complement of W ⊥ in R n . Note that θ * ∈ W , since the minimum norm solution has no component in the null space W ⊥ .
Gradient descent on the least squares objective ‖Aθ − y‖², started at θ (0) , leaves the null space component P W ⊥ (θ (0) ) untouched and converges to the interpolant $\hat\theta = \theta^* + P_{W^\perp}(\theta^{(0)})$. In summary, we find that optimization by gradient descent from a random initialization has at least two important effects. First, the choice of initialization determines the value of the component P W ⊥ (θ (0) ) not "seen" by the data. Its norm is precisely the distance between θ * and $\hat\theta$, which suggests that it is important to properly initialize the optimization. Second, the gradient descent is greedy, leaving P W ⊥ (θ (0) ) unchanged during the optimization. This can be viewed as a form of implicit regularization, since at least it does not increase this component. In addition, it implies an implicit model class assumption, namely that the function f underlying the data {(x (i) , f (x (i) ))} is of low complexity in the sense that it is well approximated by V n .
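Both effects are straightforward to verify numerically. In the sketch below (all dimensions hypothetical), gradient descent on the least squares loss, started at θ (0) , interpolates the data and lands at the minimum norm solution θ * plus the null space component of θ (0) :

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 20                              # overparameterized: m < n

A = rng.standard_normal((m, n))           # A_ij = phi_j(x^{(i)})
y = rng.standard_normal(m)

theta_star = np.linalg.pinv(A) @ y        # minimum l2-norm interpolant

# Gradient descent on ||A theta - y||^2 / 2 from a random initialization.
theta0 = rng.standard_normal(n)
theta = theta0.copy()
for _ in range(50_000):
    theta -= 1e-3 * A.T @ (A @ theta - y)

# The limit is theta* plus the component of theta0 the data never "sees".
P_null = np.eye(n) - np.linalg.pinv(A) @ A    # projector onto null(A)
assert np.allclose(A @ theta, y, atol=1e-6)
assert np.allclose(theta, theta_star + P_null @ theta0, atol=1e-6)
```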

Gradient descent selection for neural networks
The above discussion does not carry over directly to overparameterized data fitting with neural networks, because the set of NN outputs is not a linear space. However, a recent line of work, see (Jacot et al. 2018), (Du, Zhai, Poczos & Singh 2019), (Allen-Zhu, Li & Song 2019), (Du, Lee, Li, Wang & Zhai 2019), (Liu, Zhu & Belkin 2020), has shown that for sufficiently wide networks such considerations are still approximately valid. In short, as we sketch immediately below, a number of rigorous results show that, as W → ∞, gradient descent on the mean squared error loss L using neural networks Υ W,L (σ; d, 1) can be recast as overparameterized regression in an RKHS H σ,L determined by σ and L. The reproducing kernel of H σ,L is called the neural tangent kernel and is fixed throughout training in the limit W → ∞.
To explain this point, suppose we are given a dataset as in (11.1). Let us fix $L$ and solve the learning problem for this dataset using a class of neural networks $\Upsilon_{W,L}(\sigma; d, 1)$ in which $W$ is large. Starting from a random guess $\theta^{(0)}$, the trajectory of gradient descent on the loss $\mathcal{L}$, see (11.3), for the network parameters is given by
$$\theta^{(k+1)} = \theta^{(k)} - \lambda \nabla_\theta \mathcal{L}(\theta^{(k)}), \qquad k \geq 0, \qquad (11.9)$$
where $\lambda > 0$ is the step size. Varying $W$ changes the number of components of $\theta$. It is convenient to introduce the functions $v_i = v_i(\theta) := S(x^{(i)}; \theta)$, $i = 1, \dots, m$, which record the values of $S$ on the data set. A simple calculus exercise (Taylor's formula) shows that the trajectory of the $v_i$ induced by (11.9) is
$$v_i(\theta^{(k+1)}) = v_i(\theta^{(k)}) - \lambda \sum_{j=1}^m \bigl\langle \nabla_\theta v_i(\theta^{(k)}), \nabla_\theta v_j(\theta^{(k)}) \bigr\rangle \, \frac{\partial \mathcal{L}}{\partial v_j}(\theta^{(k)}) + O(\lambda^2),$$
so that, to leading order, the values $v_i$ evolve by a kernel gradient descent whose kernel is formed from the inner products of the parameter gradients.

In our earlier treatment of stability, see §5.5, we assumed full access to the target function $f$ in the formulation of what stability meant and what constituted optimal performance of a stable recovery when using nonlinear manifolds. Recall that the optimal recovery rate on a model class $K$ was given by the stable widths $\delta_n^*(K)_X$, and these were connected to the entropy of $K$. Let us denote by $D := \{(x^{(i)}, f(x^{(i)}))\}$, $x^{(i)} \in \mathbb{R}^d$, $f(x^{(i)}) \in \mathbb{R}$, $i = 1, \dots, m$, the data provided to us. So $D$ is a collection of $m$ points in $\mathbb{R}^{d+1}$, and $D$ itself can be viewed as a point in $\mathbb{R}^{(d+1)m}$. We let $\Sigma_n = \Upsilon_{W,L} := \Upsilon_{W,L}(\mathrm{ReLU}; d, 1)$ be the output set of the neural network architecture that has been chosen. Here, $n$ is the total number of parameters used to describe the elements of $\Sigma_n$, that is, $n = n(W, L)$. We denote by $a_n : D \mapsto \theta(D)$ the mapping of the data into the parameters $\theta(D)$ chosen by the numerical algorithm which, for the time being, we assume is based on gradient descent. Then $A_n(D) = M_n(a_n(D)) \in \Sigma_n$ is the output of the algorithm and the learned surrogate.
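Returning to the linearized dynamics above: to leading order in $\lambda$, the values $v_i$ evolve by a linear iteration driven by the kernel matrix. A minimal sketch, with a synthetic positive definite matrix standing in for the NTK of a trained network (the matrix and data here are illustrative, not taken from the text), shows this iteration driving the $v_i$ to the interpolation values $f(x^{(i)})$:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
B = rng.standard_normal((m, m))
K = B @ B.T / m + 0.5 * np.eye(m)    # synthetic PD stand-in for the NTK matrix
y = rng.standard_normal(m)           # target values f(x^(i))

v = np.zeros(m)                      # initial network values on the data
lr = 1.0 / np.linalg.eigvalsh(K).max()
for _ in range(2000):                # v <- v - lr * K (v - y): kernel gradient descent
    v -= lr * K @ (v - y)

print(np.allclose(v, y, atol=1e-8))  # v converges to the data since K is PD
```

The error $v - y$ contracts at rate $1 - \lambda \mu$ in the eigendirection of $K$ with eigenvalue $\mu$, so positive definiteness of the kernel matrix is exactly what guarantees interpolation of the data in the wide limit.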
• Question 1: What are the regularity properties of $A_n$? Is it continuous or perhaps even smoother? Of course, the answer will depend on the step size restrictions imposed during the steps of gradient descent and, in addition, on the stopping criteria for the iterations. Recall that we know from §9.1 that $M_n$ is locally Lipschitz, that is, on each bounded set $B$ in $\mathbb{R}^n$ it is Lipschitz with Lipschitz constant $\gamma_B$. This leads us to ask the next question.

• Question 2: On which compact sets in $\mathbb{R}^n$ does $M_n$ have a reasonable Lipschitz constant? Some information about this question can be extracted from our discussion in §9.1, but the analysis there was quite crude. Given an answer to Question 2, we would like $a_n$ to map into such a ball, which leads us to the next question.

• Question 3: What can be said about the range of $a_n$ as it relates to the initial parameter guess and subsequent step size restrictions?
Our next questions center on whether $A_n(D)$ is a good surrogate. Although model classes do not appear in the construction of $A_n$, there is a belief that $A_n(D)$ provides a good surrogate for the target function $f$ that gave rise to the data. If this is indeed the case, then this statement needs an analytic formulation. One possible answer is that $A_n$ is good for a universal collection of model classes. To try to formulate this, let us now introduce a model class $K$ into the picture, where $K \subset X$ is a compact subset of $X$. We take the view that $K$ exists but is unknown to us. Given such a model class $K$, the datasets given to us are now of the form $D = D(f)$, $f \in K$, where the $f(x^{(i)})$ are the observed values at the data sites $x^{(i)}$, $i = 1, \dots, m$. We can further add in variability of the data sites by introducing $\mathcal{X} := (x^{(i)})_{i=1}^m$. In this way, we can view the data provided to us as depending on both the selection of sites and the choice of $f \in K$, and write $D(\mathcal{X}, f)$. One can then revisit Questions 1-3 in this setting.
We can now view $A_n$ as a map $A_n : \mathcal{X} \times K \to \Sigma_n$ and treat it as a random variable. This would allow us to measure its performance in expectation or with high probability. At this point, there would be no need to require that the mapping $a_n$ be given by gradient descent, but rather to put gradient descent into competition with more general mappings. This would lead to various notions of optimal performance similar to those considered in Information Based Complexity, see e.g. (Traub & Wozniakowski 1980). One of these is
$$E_{m,n}(K)_X := \inf_{A_n \in \mathcal{A}} \sup_{f \in K} \| f - A_n(\mathcal{X}, f) \|_X, \qquad (11.12)$$
where the infimum is taken over a class $\mathcal{A}$ of algorithms $A_n$, perhaps imposing some stability on $A_n$. Another meaningful measure of optimality would involve expected performance over random draws of $\mathcal{X}$. Whatever measure of performance is chosen, one can introduce a corresponding concept of width. Now, the width $\delta_{m,n}(K)_X$ for a model class $K$ would depend on both $m$ and $n$, and on the properties imposed on the algorithms in $\mathcal{A}$, such as being Lipschitz mappings. With such a width in hand, one can ask for lower and upper bounds on these widths.