An optimal $L^2$ autoconvolution inequality

Abstract. Let $\mathcal{F}$ denote the set of functions $f \colon [-1/2,1/2] \to \mathbb{R}_{\geq 0}$ such that $\int f = 1$. We determine the value of $\inf_{f \in \mathcal{F}} \| f \ast f \|_2^2$ up to a $4 \cdot 10^{-6}$ error, thereby making progress on a problem asked by Ben Green. Furthermore, we prove that a unique minimizer exists. As a corollary, we obtain improvements on the maximum size of $B_h[g]$ sets for $(g,h) \in \{ (2,2),(3,2),(4,2),(1,3),(1,4)\}$.


Introduction
Let g, h, N be positive integers. A subset A ⊂ [N] = {1, 2, ..., N} is a B_h[g] set if, for every x ∈ Z, there are at most g representations of the form x = a_1 + a_2 + ⋯ + a_h with a_1, ..., a_h ∈ A, where two representations are considered the same if they are permutations of each other. As a shorthand, we let B_h = B_h[1]. Note that B_2 sets are the well-known Sidon sets. Let R_h[g](N) denote the largest size of a subset A ⊂ [N] such that A is a B_h[g] set. By counting the number of ordered h-tuples of elements of A, we have the simple bound C(|A|+h−1, h) ≤ ghN, which implies R_h[g](N) ≤ (gh·h!·N)^{1/h}. On the other hand, Bose and Chowla showed that there exist B_h sets of size N^{1/h}(1 + o(1)), where we use o(1) to denote a term going to zero as N → ∞ [2]. This lower bound has been generalized to more pairs (g, h) by several authors [3, 4, 11]. Recently, the bound R_h[g](N) ≥ (Ng)^{1/h}(1 − o(1)) for all N, g ≥ 1 and h ≥ 2 was obtained in [10]. In general, estimating the constant σ_h(g) = lim_{N→∞} R_h[g](N)/(gN)^{1/h} is an open problem. In fact, the only case for which the above limit is known to exist is that of the classical Sidon sets, where we have σ_2(1) = 1. Henceforth, we will understand upper and lower bounds on σ_h(g) to be estimates on the lim sup and lim inf, respectively.
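To make the counting definition concrete, here is a small sketch (not from the paper; the set {1, 2, 5, 11, 22} is a standard example of a Sidon set) that counts representations up to permutation:

```python
from collections import Counter
from itertools import combinations_with_replacement

def representation_counts(A, h):
    """Count, for each x, the representations x = a_1 + ... + a_h with
    a_i in A, identifying representations that are permutations of each
    other (i.e., counting multisets of size h)."""
    return Counter(sum(c) for c in combinations_with_replacement(sorted(A), h))

def is_Bh_g_set(A, h, g):
    """A is a B_h[g] set if every x has at most g such representations."""
    return max(representation_counts(A, h).values()) <= g

# {1, 2, 5, 11, 22} is a Sidon (B_2) set: all pairwise sums are distinct.
print(is_Bh_g_set({1, 2, 5, 11, 22}, h=2, g=1))   # True
# {1, 2, 3} is not: 1 + 3 = 2 + 2 = 4 has two representations.
print(is_Bh_g_set({1, 2, 3}, h=2, g=1))           # False
```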
Several improved upper bounds for σ_h(g) have been obtained by various authors; for references to many of them, as well as an excellent resource on B_h[g] sets in general, see [16]. In this work, we improve the upper bounds on σ_h(g) for h = 2 and 2 ≤ g ≤ 4, as well as for g = 1 and h = 3, 4. No improvement on σ_h(g) for g = 1 and h = 3, 4 has been made since [7] in 2001. The most recent improvements on estimates for σ_2(g) which are the best for 2 ≤ g ≤ 4 are due to Habsieger and Plagne [9]; for g ≥ 5, better recent upper bounds are known. Interestingly, the key to improving upper bounds on σ_2(g) is to better estimate the 2-norm of an autoconvolution for small g and the infinity norm of an autoconvolution for large g. In the case of the infinity norm, the best-known lower bound is due to Cloninger and Steinerberger [6], and the best-known upper bound is due to Matolcsi and Vinuesa [15]; it is believed that the upper bound is closer to the truth. The method of Cloninger and Steinerberger is computational and is limited by a nonconvex optimization program. Throughout, we denote by F the family of nonnegative functions f ∈ L^1(−1/2, 1/2) such that ∫ f = 1. For 1 ≤ p ≤ ∞, we define μ_p = inf_{f ∈ F} ‖f * f‖_p; for p = 2, the previously best-known lower bound is due to Martin and O'Bryant [14] and the best-known upper bound is due to Green [7]. Our main theorem is the following, improving both the upper and lower bounds for μ_2.
Theorem 1.1 The infimum of the L^2-norm of an autoconvolution f * f for f ∈ F can be bounded as follows: 0.574636066 ≤ μ_2^2 ≤ 0.574642912.

E. P. White
Figure 1: A close approximation to the minimizer.
We are also able to prove that there exists a unique minimizer f ∈ F of the L^2-norm of an autoconvolution. Our method produces arbitrarily close approximations to the minimizer. The function f ∈ F with the smallest computed L^2-norm of its autoconvolution is shown in Figure 1, together with its autoconvolution f * f and the function π f(x)√(1/4 − x^2). Notably, π f(x)√(1/4 − x^2) takes values in [0.99, 1.02] for |x| ≤ 0.499. A singularity of strength 1/√x at the boundary of the domain [−1/2, 1/2] creates an autoconvolution that neither vanishes nor blows up at the boundary, as demonstrated by f * f. Similar functions were studied by Barnard and Steinerberger in their work on convolution inequalities [1].
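As an illustration of how close the arcsine profile f(x) = 1/(π√(1/4 − x^2)) comes, one can evaluate ‖f * f‖_2^2 numerically. The sketch below is not the paper's computation: it assumes the normalization F̂(m) = (1/2)∫_{−1}^{1} F(x) e^{−iπmx} dx for the zero-extension F to [−1, 1], for which Parseval gives ‖f * f‖_2^2 = 8 Σ_m |F̂(m)|^4. The uniform density, whose value is exactly 2/3, serves as a sanity check; the arcsine profile improves on it while remaining above the lower bound of Theorem 1.1.

```python
import numpy as np

def autoconv_sq_norm(F_hat):
    """‖f*f‖_2^2 on [-1,1] from the coefficients F̂(m), m = 0..M, of a
    real even f (zero-extended to [-1,1]), with the normalization
    F̂(m) = (1/2) ∫ F(x) e^{-i pi m x} dx, via Parseval: 8 Σ_m |F̂(m)|^4."""
    return 8.0 * (F_hat[0]**4 + 2.0 * np.sum(F_hat[1:]**4))

M = 800
m = np.arange(M + 1)

# Sanity check: the uniform f = 1 on [-1/2,1/2] has F̂(0) = 1/2 and
# F̂(m) = sin(pi m/2)/(pi m); here ‖f*f‖_2^2 = 2/3 exactly.
F_unif = np.empty(M + 1)
F_unif[0] = 0.5
F_unif[1:] = np.sin(np.pi * m[1:] / 2) / (np.pi * m[1:])
val_unif = autoconv_sq_norm(F_unif)

# Arcsine candidate f(x) = 1/(pi sqrt(1/4 - x^2)). Substituting
# x = (1/2) sin t turns F̂(m) into (1/(2 pi)) ∫ cos((pi m/2) sin t) dt
# over [-pi/2, pi/2], a smooth integrand; use the trapezoid rule.
t = np.linspace(-np.pi / 2, np.pi / 2, 8001)
w = np.full_like(t, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5
F_arc = np.cos(np.outer(m, np.sin(t)) * np.pi / 2) @ w / (2 * np.pi)
val_arc = autoconv_sq_norm(F_arc)

print(val_unif, val_arc)
```

The gap between the arcsine value and 0.5746... shows why the minimizer's boundary behavior, and not just its overall shape, matters for the L^2 norm.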
Combining our new bounds on μ_2^2 with methods of Green [7, Theorems 15, 17, and 24] gives the following corollary.

Corollary 1.2 The following asymptotic bounds on B_h[g] sets hold, where 0.574636066 is the lower bound on μ_2^2 stated in Theorem 1.1.
Corollary 1.2 improves on the previous best upper bounds for σ_h(g) for h = 2 and 2 ≤ g ≤ 4, as well as for g = 1 and h = 3, 4. One of the main theorems proved by Green in [7] gives a bound on the additive energy of a discrete function on [N]. We show that our bound in the continuous case applies to the discrete one, and so the bound of Theorem 1.1 gives another improved estimate.

Corollary 1.3 Let H:
Then, for all sufficiently large N, we have the corresponding bound.

The methods of Habsieger, Plagne, and Yu [9, 18], of Cloninger and Steinerberger [6], and of Martin and O'Bryant are all limited by long computation times. In contrast, the key to our improvement is a convex quadratic optimization program whose optimum value is shown to converge to μ_2^2. The strategy of using Fourier analysis to produce a convex program for bounds on a convolution-type inequality was also employed recently by the author to improve bounds on Erdős' minimum overlap problem in [17]. We hope that our methods may also be useful in obtaining estimates for the infimum of an autoconvolution with respect to other p-norms.

Existence and uniqueness of the optimizer
In this section, we prove the existence and uniqueness of the solution of the following optimization problem (2.1). We remark that (2.2) defines the family F seen in the introduction. For all f ∈ L^1(R), we define the Fourier transform on R by f̂(y) = ∫_R f(x) e^{−2πixy} dx. For any f as in (2.2), the Fourier transform of f * f is f̂^2, and so by Parseval's identity, ‖f * f‖_2^2 = ∫_R |f̂(y)|^4 dy. The following proposition proves the existence and uniqueness of an optimizer in F for (2.1) using the "direct method in the calculus of variations." A similar method is used to show the existence of optimizers to autocorrelation inequalities in [12].
Proposition There exists a unique extremizing function f ∈ F for the optimization problem (2.1).
Proof Since L^1 is separable, we can apply the sequential Banach–Alaoglu theorem to conclude the existence of f ∈ L^1(−1/2, 1/2) and g ∈ L^∞(R) such that the stated convergences hold, where we possibly passed to a subsequence of {f_n}. For all h ∈ L^1(R), by the definition of convergence in the weak-* topology, the corresponding pairings converge; hence g = f̂. Note that for all y ∈ R we have |e^{−2πixy}| ≤ 1, and so by Fatou's lemma the infimum value is attained. We conclude that f ∈ F is an extremizing function. For uniqueness, suppose that f and g are both extremizers. Then, by Minkowski's inequality, the average (f + g)/2 would otherwise beat the infimum, so Minkowski's inequality must hold with equality, implying f and g are linearly dependent. Since f and g have the same average value, we conclude that f = g, and so the extremizing function is unique. ∎ Note that the uniqueness of the optimizer implies that it must be even. Throughout, we will denote the unique optimizer by f_◇ ∈ F.
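The displayed steps of the uniqueness argument are omitted in this copy; using the identity ‖f * f‖_2 = ‖f̂‖_4^2 from this section, one plausible reconstruction of the Minkowski step (ours, not the paper's display) is:

```latex
% Sketch of the uniqueness step (a reconstruction, not the paper's display).
% If f and g are both extremizers, consider h = (f+g)/2 \in \mathcal{F}:
\begin{align*}
\|h * h\|_2^{1/2} = \big\|\hat{h}\big\|_4
  = \Big\|\tfrac{\hat{f}+\hat{g}}{2}\Big\|_4
  \le \tfrac{1}{2}\big\|\hat{f}\big\|_4 + \tfrac{1}{2}\big\|\hat{g}\big\|_4
  = \mu_2^{1/2},
\end{align*}
% so h is also an extremizer, Minkowski's inequality holds with equality,
% and \hat{f}, \hat{g} (hence f, g) are linearly dependent; since
% \int f = \int g = 1, this forces f = g.
```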

Useful identities
For ease of notation, we will always use lowercase letters f, g to denote functions on [−1/2, 1/2], or period-1 functions, with Fourier coefficients f̂(k) = ∫_{−1/2}^{1/2} f(x) e^{−2πikx} dx. We will use uppercase letters F, G to denote functions on [−1, 1], or period-2 functions, with Fourier coefficients F̂(m) taken with respect to the characters e^{πimx}. This is an abuse of the notation "ˆ", but which of the two transforms is meant will be made clear by the letter case of the function. Let f ∈ F, and let F(x) be the extension of f(x) to a function on [−1, 1] defined by setting F = f on [−1/2, 1/2] and F = 0 elsewhere. Calculating the relationship between F̂ and f̂ and applying Parseval's theorem, we obtain (3.2).

Lemma 3.1 For all f ∈ F, we have the identity obtained from (3.2) by rewriting the odd coefficients F̂(m).

Proof Since f̂(0) = 1, the coefficients F̂(m) for odd m ∈ Z can be expressed in terms of the f̂(k); substituting into (3.2) gives the result. ∎

We are unable to analytically determine the f̂(k) such that ‖f * f‖_2 is minimized. In the following section, we will use Lemma 3.1 together with a convex program to provide upper bounds on μ_2, as well as an assignment of the f̂(k) that is very close to optimal. The following lemma suggests a method of obtaining strong lower bounds from a good f ∈ F with small ‖f * f‖_2; that is, good lower bounds can be found from good upper bound constructions.
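The displays of this section are elided in this copy, but the Parseval-type identity underlying Lemma 3.1 can be checked numerically on a toy density. The sketch below assumes our normalization F̂(m) = (1/2)∫_{−1}^{1} F(x) e^{−iπmx} dx (which may differ from the paper's by a constant) and compares direct quadrature of ‖f * f‖_2^2 with 8 Σ_m |F̂(m)|^4 for f(x) = 1 + cos(2πx):

```python
import numpy as np

# Toy density f(x) = 1 + cos(2 pi x) on [-1/2, 1/2]: nonnegative, mass 1.
N = 4000
x = (np.arange(N) + 0.5) / N - 0.5          # midpoints of [-1/2, 1/2]
dx = 1.0 / N
f = 1 + np.cos(2 * np.pi * x)

# Direct side: (f*f)(t) on [-1, 1] by discrete convolution, then ∫ (f*f)^2.
conv = np.convolve(f, f) * dx
lhs = np.sum(conv**2) * dx

# Fourier side: F is f extended by zero to [-1, 1]; with the normalization
# F̂(m) = (1/2) ∫ F(x) e^{-i pi m x} dx (real here since f is even),
# Parseval gives ‖f*f‖_2^2 = 8 Σ_m |F̂(m)|^4, summed over all integers m.
M = 200
m = np.arange(M + 1)
F_hat = 0.5 * (np.cos(np.pi * np.outer(m, x)) * f) @ np.full(N, dx)
rhs = 8.0 * (F_hat[0]**4 + 2.0 * np.sum(F_hat[1:]**4))

print(lhs, rhs)
```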

Lemma 3.2 Let f , g be periodic real functions with period 1, such that
Then, (3.3) holds.

Proof By Plancherel's theorem, the left-hand side of (3.3) can be expressed in terms of the Fourier coefficients F̂ and Ĝ. Since Ĝ(0) = 0, applying Hölder's inequality, we conclude (3.3). ∎

Suppose that f(x) leads to an F(x) that is close to optimal for (3.3). We hypothesize that, for some C ∈ R, the function defined by ĝ(k) = C f̂(k)^3 for k ≠ 0 and ĝ(0) = 2 will create a G(x) that is also close to optimal for (3.3). We use this idea to produce good lower bounds for μ_2 in the following section.
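The mechanism here is ℓ^4–ℓ^{4/3} duality: any test sequence certifies a lower bound on an ℓ^4 norm, and the bound is tight exactly for the cubed sequence, which is what motivates the choice ĝ(k) = C f̂(k)^3. A generic numerical sketch with illustrative vectors (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def holder_lower_bound(a, b):
    """Certified lower bound on ||a||_4 from a test vector b, via Hölder:
    <a, b> <= ||a||_4 * ||b||_{4/3}, hence ||a||_4 >= <a,b> / ||b||_{4/3}."""
    return np.dot(a, b) / np.sum(np.abs(b) ** (4 / 3)) ** (3 / 4)

a = 1.0 / (1.0 + np.arange(50.0)) ** 2     # a decaying "coefficient" vector
true_norm = np.sum(a ** 4) ** 0.25

# Any b gives a valid lower bound ...
b_random = rng.standard_normal(50)
lb_random = holder_lower_bound(a, b_random)

# ... and b = a**3 (the extremal choice in Hölder) is exactly tight:
lb_cubed = holder_lower_bound(a, a ** 3)

print(lb_random <= true_norm, abs(lb_cubed - true_norm))
```

This is why a near-optimal upper bound construction f automatically suggests a near-optimal dual certificate g for the lower bound.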

Quantitative results
In this section, we describe a convex program used to approximate the optimal solution of (2.1) with finitely many variables. Our primal program takes as input R, T ∈ N. For any R, T ∈ N, let O(R, T) be the optimum of the program. We remark that the reason for the "redundant" variables {w_k, x_k}_{k=1}^{T} and {y_m, z_m}_{m=1}^{R} is to demonstrate that the program is easily implemented as a quadratically constrained linear program. For any T ∈ N, let F_T ⊂ F be the subset of functions of degree at most T in their Fourier series expansion, i.e., f ∈ F_T implies f̂(k) = 0 for |k| > T.
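For contrast with the convex program, the raw problem (2.1) can be attacked directly by discretizing f and running a local search. The sketch below is a plain projected-gradient descent on the nonconvex discretized objective (our illustration, not the paper's method); it starts from the uniform density, whose objective value is 2/3 up to discretization, and decreases from there:

```python
import numpy as np

# Discretize f >= 0 on [-1/2, 1/2] with n cells, mass 1, and locally
# minimize ||f*f||_2^2 by projected gradient descent. This is a plain
# nonconvex local search -- unlike the convex program in the text -- but
# it illustrates the objective that O(R, T) approximates.
n = 200
dx = 1.0 / n

def objective(a):
    c = np.convolve(a, a)                  # samples of (f*f) on [-1, 1]
    return dx**3 * np.sum(c**2)

def gradient(a):
    c = np.convolve(a, a)                  # d/da_j of dx^3 * sum(c^2)
    return 4 * dx**3 * np.correlate(c, a, mode="valid")

def project(a):
    """Euclidean projection onto {a >= 0, sum(a) * dx = 1} (a simplex)."""
    s = np.sort(a)[::-1]
    cs = np.cumsum(s) - n                  # target total mass: sum(a) = n
    k = np.arange(1, n + 1)
    rho = np.max(k[s - cs / k > 0])
    return np.maximum(a - cs[rho - 1] / rho, 0.0)

a = np.ones(n)                             # start from the uniform density
val = objective(a)                         # = 2/3 up to discretization
eta = 1.0
for _ in range(300):
    trial = project(a - eta * gradient(a))
    tval = objective(trial)
    if tval < val:
        a, val = trial, tval
    else:
        eta *= 0.5                         # backtrack on the step size
print(val)
```

Because the objective is nonconvex, this only reports whatever local minimum it finds; the paper's convexification is what makes the resulting bounds rigorous and convergent.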
Proof Fix arbitrary R, T ∈ N. The left inequality of (4.2) follows immediately from Lemma 3.1. Fix b ∈ N, let f_◇ ∈ F be the extremizer, and define, for all 0 < ε < 1/4, a smoothed version h_ε of it. Note that since ε < 1/4, we can consider h_ε as a function on (−1/2, 1/2) with mass 1. If 1 + bε ≤ π, then (4.3) and (4.4) together prove the right inequality of (4.2). For all m ≥ R + 1 and 1 ≤ k ≤ T, we have (2m − 1)^2 − 4k^2 ≥ 2m^2. We can bound the inner sum by Hölder's inequality, and substituting this estimate into (4.5) completes the proof. ∎

As a consequence of Proposition 4.1, we see that the optimum of our program converges to μ_2^2 for the right choice of input, thereby giving good upper and lower bounds for μ_2^2.

Computational results
Proposition 4.1 suggests that R/T should be large to produce the best estimates of μ_2^2 by O(R, T). In contrast, we found the best performance of the convex program when T/R is large. Our best data come from using our convex program with R = 5,000 and T = 40,000. We used IBM's CPLEX software on a personal computer to determine the optimal solution; the full assignment of {f̂_k}_{k=1}^{40,000} is available upon request, and the first 20 values of f̂_k are displayed in Table 1.
Here, we have O(5,000, 40,000) = 0.574643014. By Proposition 4.1, we obtain the estimates 0.573848267 ≤ μ_2^2 ≤ 0.575437762. Using more careful calculation and Lemma 3.2, we produce substantially better estimates below with the same data. We remark that the optimal functions created by the convex program appear to converge to a function with asymptotes at x = ±1/2 of order 1/√x. For the remainder of this section, let T = 40,000 and R = 5,000. With f̂_k the solution partially stated above, put f_P(x) = 1 + Σ_{0≠|k|≤T} f̂_{|k|} e^{2πikx}. Also, let F_P be the extension of f_P to [−1, 1], defined to be zero outside of [−1/2, 1/2]. The functions f_P and f_P * f_P are shown in Figure 1. In the following two subsections, we calculate upper and lower bounds for μ_2^2, thereby proving Theorem 1.1. We exported our computed solution {f̂_k}_{k=1}^{40,000} to MATLAB and used "Variable-Precision Arithmetic" operations, with the default of 32 significant digits, to avoid floating-point rounding errors on the order of the precision stated in our theorem. In the calculation of our upper and lower bounds, we will use the following quantities related to {f̂_k}_{k=1}^{40,000}:
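The variable-precision step can be mimicked with Python's decimal module (a sketch of the idea, not the paper's MATLAB code; the 1/k coefficient decay below is purely illustrative): accumulate a fourth-power sum at 32 significant digits and compare it with double precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 32                 # 32 significant digits, as in the text

# Hypothetical coefficient magnitudes decaying like 1/k (illustrative only).
coeffs = [1.0 / k for k in range(1, 100001)]

float_sum = sum(c**4 for c in coeffs)
exact_sum = sum(Decimal(c)**4 for c in coeffs)

# The double-precision result agrees closely with the 32-digit one here,
# confirming rounding error sits far below the 4e-6 precision of Theorem 1.1.
print(float_sum, exact_sum)
```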

Computing an upper bound
We want to estimate ‖f_P * f_P‖_2^2 from above. We will take advantage of the fact that the Fourier coefficients F̂_P(k) decay quickly. From (3.1), we see that F̂_P(2m) = 0 for all |m| ≥ T + 1, and the coefficients F̂_P(m) for odd |m| ≥ 4T admit a good explicit bound. From (3.2), for all N ≥ 2T, we then obtain a truncated estimate of ‖f_P * f_P‖_2^2 with an explicit tail bound. The choice N = 10^7 gives the estimate ‖f_P * f_P‖_2^2 ≤ 0.574642912.

Computing a lower bound
We use Lemma 3.2 to compute a good lower bound. To do this, we need to find a good choice of g(x) on [−1/2, 1/2]. As per the discussion following Lemma 3.2, a good choice g_P may have the Fourier coefficients ĝ_P(0) = 2 and ĝ_P(m) = α f̂_P(m)^3 for m ∈ Z \ {0}.
We need to accurately bound Σ_{m≠0} |Ĝ_P(m)|^{4/3} from above. We can proceed similarly to the upper bound calculation above, using the decay of the Fourier coefficients.

Table 1: First values of {f̂_k} for the almost optimal f(x)