1 Introduction
In practice, L-functions appear as generating functions encapsulating information about various objects, such as Galois representations, elliptic curves, arithmetic functions, modular forms, Maass forms, etc. Studying L-functions is therefore of utmost importance in number theory at large. Two of their attached data carry critical information: their zeros, which govern the distributional behavior of underlying objects; and their central values, which are related to invariants such as the class number of a field extension. We refer to [Reference Iwaniec and Sarnak12] and references therein for further insight.
1.1 Distribution of zeros
The spacings of zeros of families of L-functions are well-understood: they are distributed according to a universal law, independent of the exact family under consideration, as proven by Rudnick and Sarnak [Reference Rudnick and Sarnak17]. This recovers the behavior of spacings between eigenangles of the classical groups of random matrices. However, the distribution of low-lying zeros, i.e. those located near the real axis, attached to reasonable families of L-functions does depend upon the specific setting under consideration. See [Reference Sarnak, Shin and Templier19] for a discussion in a general setting.
More precisely, let
$L(s, f)$
be an L-function attached to an arithmetic object f. Consider its nontrivial zeros written in the form
$\rho _f = \frac {1}{2}+i\gamma _f$
, where
$\gamma _f$
is a priori a complex number. There is a notion of analytic conductor
$c(f)$
of f quantifying the number of zeros of
$L(s,f)$
in a given region, more precisely such that the number of zeros with real and imaginary parts between
$0$
and
$1$
is approximately
$\log (c(f)) / 2\pi $
; we renormalize the mean spacing of the zeros to
$1$
by setting
$\tilde {\gamma }_f = {\log (c(f))} \gamma _f / {2\pi }$
. Let h be an even Schwartz function on
$\mathbb {R}$
whose Fourier transform is compactly supported; in particular, h admits an analytic continuation to all of
$\mathbb {C}$
. The one-level density attached to f is defined by
$$ \begin{align} D(f, h) := \sum_{\gamma_f} h\left( \tilde{\gamma}_f \right), \end{align} $$
where the sum runs over the nontrivial zeros of $L(s,f)$, counted with multiplicity.
The analogy with the behavior of small eigenangles of random matrices led Katz and Sarnak to formulate the so-called density conjecture, claiming the same universality for the types of symmetry of families$^\dagger$ of L-functions as those arising for classical groups of random matrices.
Conjecture 1.1 (Katz-Sarnak)
Let
$\mathcal {F}$
be a family
$^\dagger $
of L-functions and
$\mathcal {F}_X$
a finite truncation increasing to
$\mathcal {F}$
when X grows. Then, there is one classical group G among
$\mathrm {U}$
,
$\mathrm {SO(even)}$
,
$\mathrm {SO(odd)}$
,
$\mathrm {O}$
or
$\mathrm {Sp}$
such that for every even Schwartz function
$h(x)$
on
$\mathbb {R}$
with compactly supported Fourier transform,
$$ \begin{align} \frac{1}{|\mathcal{F}_X|} \sum_{f \in \mathcal{F}_X} D(f, h) \xrightarrow[X\to\infty]{} \int_{\mathbb{R}} h(x) W_G(x)dx, \end{align} $$
where
$W_G(x)$
is the distribution function modeling the behavior of the eigenangles of the corresponding group of random matrices, explicitly
$W_{\mathrm {U}}(x) = 1$
and
$$ \begin{align} \begin{array}{rclcrcl} W_{\mathrm{O}}(x)\!\!\!\!\!\! & = \!\!\!\!\!\!& \displaystyle 1 + \frac{1}{2}\delta_0(x), & \qquad & W_{\mathrm{SO(even)}}(x)\!\!\!\!\!\! & = \!\!\!\!\!\!&\displaystyle 1+ \frac{\sin 2\pi x}{2 \pi x}, \\[1em] W_{\mathrm{SO(odd)}}(x)\!\!\!\!\!\! & = \!\!\!\!\!\!& \displaystyle 1 - \frac{\sin 2\pi x}{2\pi x} + \delta_0(x), & \qquad & W_{\mathrm{Sp}}(x)\!\!\!\!\!\! & =\!\!\!\!\!\! &\displaystyle 1 - \frac{\sin 2\pi x}{2\pi x}. \end{array} \end{align} $$
The family
$\mathcal {F}$
is then said to have the type of symmetry of G.
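For orientation, note that with the normalization $\widehat{h}(\xi) = \int_{\mathbb{R}} h(x) e^{-2\pi i x \xi}\, dx$ (so that $\int_{\mathbb{R}} h = \widehat{h}(0)$), the orthogonal density gives
$$ \begin{align*} \int_{\mathbb{R}} h(x) W_{\mathrm{O}}(x)\, dx = \int_{\mathbb{R}} h(x)\, dx + \frac{1}{2} h(0) = \widehat{h}(0) + \frac{1}{2} h(0), \end{align*} $$
which is exactly the limiting value appearing in Theorem 1.3 below.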
Various results toward this conjecture have been established in the past two decades, following the seminal paper of Iwaniec, Luo and Sarnak [Reference Iwaniec, Luo and Sarnak11]; we refer to [Reference Sarnak, Shin and Templier19] for a general discussion and various references.
1.2 Distribution of central values
The distribution of central values of L-functions is also finely understood, and the Keating–Snaith conjecture predicts that the logarithmic central values
$\log L(\tfrac 12, f)$
are asymptotically distributed according to a normal distribution, with explicit mean and variance depending on the family.
Conjecture 1.2 (Keating–Snaith)
Let
$\mathcal {F}$
be a reasonable family of L-functions in the sense of Sarnak, and
$\mathcal {F}_X$
a finite truncation increasing to
$\mathcal {F}$
when X grows. There is a mean
$M_{\mathcal {F}}$
and a variance
$V_{\mathcal {F}}$
such that for any real numbers
$\alpha < \beta $
,
$$ \begin{align} \frac{1}{|\mathcal{F}_X|}\left| \left\{ f \in \mathcal{F}_X \ : \ \frac{\log L(\tfrac12, f) - M_{\mathcal{F}}}{V_{\mathcal{F}}^{1/2}} \in (\alpha, \beta) \right\} \right| \xrightarrow[X\to\infty]{} \frac{1}{\sqrt{2\pi}} \int_\alpha^\beta e^{-x^2/2} dx. \end{align} $$
In particular, the family of the logarithmic central values
$\log L(\tfrac 12, f)$
equidistributes asymptotically with respect to a normal distribution.
Remark 1 Conjecturally, the central value is always nonnegative, as can be seen by assuming the generalized Riemann hypothesis and using a continuity argument on the real line. In general, if
$\log L(\tfrac12,f)$
is not defined (that is, when the central value vanishes), the condition in the above density is considered not to be satisfied. In certain cases, e.g. the case of modular forms we will be considering, nonnegativity can be obtained directly by applying Waldspurger's formula or the Kohnen–Zagier formula.
1.3 Relation between both conjectures
Radziwiłł and Soundararajan [Reference Radziwiłł and Soundararajan16] claimed a general principle that any restricted result toward Conjecture 1.1 can be refined to show that most such L-values have the typical distribution predicted by Conjecture 1.2. They instantiated this technique in the case of quadratic twists of a given elliptic curve and suggested the wide applicability of this approach, in particular, in the case of modular forms building on the pioneering work of Iwaniec, Luo and Sarnak [Reference Iwaniec, Luo and Sarnak11]. This article shows that this principle indeed holds and provides the proof in the case of modular forms in the level aspect.
More precisely, for integers
$k \geqslant 2$
and
$q \geqslant 1$
, let
$H_k(q)$
be an orthogonal basis of Hecke eigenforms of weight k and level q, which is a basis of the space of newforms
$S_k^{\mathrm {new}}(q)$
, normalized so that their first Fourier coefficients are
$a_f(1) = 1$
. We let
$c(f) = k^2q$
be the analytic conductor [Reference Iwaniec and Sarnak12] of f. Introduce for a general sequence
$(a_f)_{f \in H_k(q)}$
, the harmonic average
$$ \begin{align} \sideset{}{^h}\sum_{f \in H_k(q)} a_f := \frac{\Gamma(k-1)}{(4\pi)^{k-1}} \sum_{f \in H_k(q)} \frac{a_f}{\|f\|^2} \end{align} $$
which includes the suitable weights in order to apply the Petersson trace formula. In this setting, the seminal work of Iwaniec, Luo and Sarnak [Reference Iwaniec, Luo and Sarnak11] as well as the recent achievement of Baluyot, Chandee and Li [Reference Baluyot, Chandee and Li1] obtain the following restricted statement toward Conjecture 1.1.
Theorem 1.3 (Iwaniec, Luo, Sarnak & Baluyot, Chandee, Li)
Assume the generalized Riemann hypothesis. For any smooth function
$\Psi $
compactly supported and any Schwartz function h such that its Fourier transform
$\widehat {h}$
is supported in
$(-4, 4)$
, we have
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} D(f, h) \xrightarrow[Q \to \infty]{} \int_{\mathbb{R}} W_{\mathrm{O}} h = \widehat{h}(0) + \frac{1}{2}h(0), \end{align} $$
where
$W_{\mathrm {O}} = 1 + \tfrac 12\delta _0$
is the orthogonal density and
$N(Q)$
is the weighted cardinality of the family,
$$ \begin{align} N(Q) := \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} 1. \end{align} $$
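For orientation (a heuristic that is consistent with the normalization used in Section 4): by the Petersson trace formula recalled in Proposition 2.3 below, the full harmonic weight at level q is $\Delta_q(1,1) = 1 + O_k(q^{-3/2+\varepsilon})$, and the newforms carry a positive proportion of it on average, so that one expects
$$ \begin{align*} \sideset{}{^h}\sum_{f \in H_k(q)} 1 \asymp 1 \qquad \text{and hence} \qquad N(Q) \asymp Q. \end{align*} $$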
Building on this result and exploiting the methodology outlined by Radziwiłł and Soundararajan, we prove the following statement toward Conjecture 1.2.
Theorem 1.4 For any
$q \geqslant 1$
and
$k \geqslant 2$
, let
$H_k(q)$
be an orthogonal basis of Hecke eigenforms of level q, weight k, and trivial nebentypus, which is also a basis of the space of newforms
$S_k^{\mathrm {new}}(q)$
, normalized so that their first Fourier coefficients are
$a_f(1) = 1$
. Assume the generalized Riemann hypothesis.
For any smooth function
$\Psi $
compactly supported and for any real numbers
$\alpha < \beta $
, we have
$$ \begin{align*} &\frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q}\right) \left| \left\{ f \in H_k(q) \ : \ \tfrac{\log L\left(\tfrac12,f\right) + \tfrac12\log\log c(f)}{\sqrt{\log \log c(f)}} \in (\alpha, \beta) \right\} \right|\\&\quad\geqslant \frac{5}{8} \frac{1}{\sqrt{2\pi}} \int_\alpha^\beta e^{-x^2/2} dx + o(1). \end{align*} $$
This result is in phase with Conjecture 1.2 with
$M_{\mathcal {F}} = -\tfrac 12 \log \log c(f)$
and
$V_{\mathcal {F}} = {\log \log c(f)}$
.
Remark 2 Following the ideas of [Reference Radziwiłł and Soundararajan16], this article confirms the principle that Theorem 1.3 implies Theorem 1.4. Since Theorem 1.3 is conditional on the generalized Riemann hypothesis, we also assume it in Theorem 1.4.
The assumption of the generalized Riemann hypothesis in the works [Reference Baluyot, Chandee and Li1, Reference Iwaniec, Luo and Sarnak11] is essentially a technical choice. Using methods such as density theorems and trace formulas, it would be possible to circumvent the GRH assumption (see, for instance, [Reference Devin, Fiorilli and Södergren5]). However, the support allowed for the Fourier transform in Theorem 1.3 would be smaller, and therefore the lower bound obtained in Theorem 1.4 would be weaker. Bombieri and Hejhal [Reference Bombieri and Hejhal2] proved the central limit theorem for logarithms of central values in the wide generality of the Selberg class, assuming strong properties of the involved L-functions (Ramanujan conjecture, orthogonality relations, density theorems and the generalized Riemann hypothesis). Hough [Reference Hough8] similarly proved that the generalized Riemann hypothesis and the density conjecture together imply the central limit theorem.
Remark 3 In the case of a modular form with sign
$\varepsilon _f = -1$
in the functional equation, its associated central L-value vanishes, so that in such a case there is no hope of obtaining a lower bound with a constant
$1$
toward the Keating–Snaith conjecture (which hides such issues in the notion of “reasonable” family). Using the sieving technique from [Reference Iwaniec, Luo and Sarnak11], it is possible to isolate the modular forms having positive sign in the functional equation, and the same approach as the one presented here would yield a constant
$13/16$
toward Conjecture 1.2.
1.4 Strategy of proof and structure of this article
In Section 2, we recall the needed definitions on modular forms and L-functions. In particular, explicit formulas relate central values of L-functions to sums of modular coefficients over primes,
$$ \begin{align} \log L(\tfrac12, f) \leadsto \sum_{p \leqslant x} \frac{a_f(p)}{\sqrt{p}} - \frac{1}{2}\log \log x, \end{align} $$
so that the claimed mean is already displayed and most of the study reduces to understanding the distribution of the above sums over primes, denoted
$P(f,x)$
, as well as the error in the explicit formula, which can be expressed as a sum over zeros of L-functions. In order to study the distribution of the sums over primes
$P(f,x)$
, we appeal to the moment method and examine the behavior of their powers
$P(f,x)^k$
. Regrouping equal primes together, we are led to consider sums of the form
$$ \begin{align} \sum_{\substack{p_1, \ldots, p_\ell \\ p_i \neq p_j }} \frac{a_f(p_1)^{\alpha_1} \cdots a_f(p_\ell)^{\alpha_\ell}}{p_1^{\alpha_1/2} \cdots p_\ell^{\alpha_\ell/2}}, \end{align} $$
where
$p_i$
are prime numbers and
$\alpha _i$
are positive integers. The study of these sums constitutes the heart of this article and requires a fine understanding of sums of Fourier coefficients of modular forms. In Section 3, the contribution of the terms with
$\alpha _i = 2$
is shown to be the dominant term, and to match the corresponding moment of the normal distribution, i.e., the main term in Theorem 3.1. We inductively reduce the study of the other sums of the form (1.9) to the case where the only powers arising are
$\alpha _i = 1$
; these are then inductively shown to contribute as an error term by using the harmonic average and trace formulas, a strategy implemented in Section 4. Section 5 concludes the proof by showing that the extra terms arising in the explicit formula, in the guise of a sum over zeros, are negligible except for a small proportion of modular forms. This is where the limited results toward the distribution of low-lying zeros are used, and it is the origin of the loss in Theorem 1.4 compared to the Keating–Snaith conjecture.
Remark 4 Radziwiłł and Soundararajan [Reference Radziwiłł and Soundararajan16] outline a general strategy to prove such results. In their specific case of quadratic twists of elliptic curves, they rely on the Poisson summation formula to estimate character sums – we use instead trace formulas, which also amount to an orthogonality relation – as well as the complete multiplicativity of characters – we use instead Hecke relations.
2 Odds and ends
2.1 Modular L-functions
We start by recalling the needed theory of modular forms, referring to [Reference Iwaniec9] for a detailed account. Let
$S_k(q)$
be the space of holomorphic cusp forms of weight k, level q, and trivial nebentypus. A cusp form
$f \in S_k(q)$
has an attached L-function defined by
$$ \begin{align} L(s,f) = \sum_{n = 1}^\infty \frac{a_f(n)}{n^s}, \end{align} $$
where the
$a_f(n)$
are its Fourier coefficients, defined by the Fourier expansion
where
$e(z):=\exp (2\pi iz)$
. The modular forms are arithmetically normalized, i.e., we assume that
$a_f(1) = 1$
. In this normalization, Deligne’s bound states that
$a_f(n) \ll d(n) \ll n^\varepsilon $
, where
$d(n)$
denotes the divisor function. In particular, the Dirichlet series (2.1) converges for all
$\mathrm {Re}(s)> 1$
. The degree two L-function
$L(s,f)$
can be completed by explicit gamma factors [Reference Iwaniec and Kowalski10, Section 5.11] so that we have the functional equation
where
$\varepsilon _f \in \{\pm 1\}$
is the root number of f.
If
$f \in S_k(q)$
is an eigenfunction of all the Hecke operators
$T_n$
for
$(n,q)=1$
, we say that f is a Hecke eigenform. If it moreover lies in the orthogonal complement of the space of the oldforms, i.e., those of the form
$f(z) = g(dz)$
for a certain
$g \in S_k(q/d)$
where
$d \mid q$
then we say that f is a newform, in which case it is an eigenform for the Hecke operators
$T_n$
for all
$n \geqslant 1$
. Let
$H_k(q) \subset S_k(q)$
be an orthogonal basis of the space of newforms consisting of Hecke eigenforms f. For
$f \in H_k(q)$
, we have the Euler product
$$ \begin{align} L(s, f) = \prod_{p} \left( 1 - \frac{\alpha_f(p)}{p^{s}} \right)^{-1} \left( 1 - \frac{\beta_f(p)}{p^{s}} \right)^{-1}, \end{align} $$
where the product is over prime numbers p, and
$\alpha _f(p), \beta _f(p) \in \mathbb {C}$
are called the spectral parameters of f at p. This expression encapsulates the Hecke relations satisfied by the coefficients. By taking the logarithmic derivative of this expression, we obtain
$$ \begin{align} -\frac{L'}{L}(s,f) = \sum_{n \geqslant 1} \frac{\Lambda_f(n)}{n^s}, \end{align} $$
where
$\Lambda _f(n) = (\alpha _f(p)^\nu + \beta _f(p)^\nu ) \log (p)$ if $n = p^\nu$
is a prime power, and
$\Lambda _f(n) = 0$
otherwise.
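For later use, let us record the standard relations satisfied by the spectral parameters for $p \nmid q$, which follow from the Euler product and the Hecke relations: $\alpha_f(p) + \beta_f(p) = a_f(p)$ and $\alpha_f(p)\beta_f(p) = 1$, so that
$$ \begin{align*} \Lambda_f(p) = a_f(p) \log(p) \qquad \text{and} \qquad \Lambda_f(p^2) = \big( a_f(p)^2 - 2 \big) \log(p) = \big( a_f(p^2) - 1 \big) \log(p). \end{align*} $$
These identities are used implicitly when passing from the Weil explicit formula to Proposition 2.1 below, and again in Section 3.2.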
We assume the generalized Riemann hypothesis for the symmetric square L-functions $L(s, \mathrm {sym}^2 f)$ throughout the article.
2.2 Explicit formula for sums over zeros
We have the celebrated Weil explicit formula, proven, for instance, in [Reference Iwaniec, Luo and Sarnak11, (4.11)], relating sums over zeros of L-functions to sums over primes of their spectral parameters. For any smooth function h with compactly supported Fourier transform, we have
$$ \begin{align} D(f,h) = \widehat{h}(0) - \frac{2}{\log c(f)} \sum_{p} \sum_{\nu \geqslant 1} (\alpha_f(p)^\nu + \beta_f(p)^\nu) \frac{\log p}{p^{\nu/2}} \widehat{h}\left( \frac{\nu \log p}{\log c(f)}\right) + O\left( \frac{1}{\log c(f)}\right). \end{align} $$
Using the relations between coefficients and spectral parameters when
$p \nmid q$
, and the Deligne bounds on
$a_f(n)$
, we obtain that the terms
$\nu \geqslant 3$
contribute no more than the displayed error term, so that we deduce as in [Reference Iwaniec, Luo and Sarnak11, Lemma 4.1] the following expansion of the one-level density.
Proposition 2.1 (Explicit formula for sums over zeros)
We have, for all
$f \in H_k(q)$
and all smooth functions h,
$$ \begin{align} D(f, h) = \widehat{h}(0) + \tfrac12h(0) + P^{(1)}(f, h) + P^{(2)}(f, h) + O\left(\frac{\log\log c(f)}{\log c(f)}\right), \end{align} $$
where, for
$\nu \geqslant 1$
, we let
$$ \begin{align} P^{(\nu)}(f,h) = \frac{2}{\log c(f)} \sum_{p\nmid q} a_f(p^\nu) \frac{\log p}{p^{\nu/2}} \widehat{h}\left( \frac{\nu \log p}{\log c(f)}\right), \end{align} $$
where the sum runs over prime numbers p not dividing q.
Remark 5 The stated result from [Reference Iwaniec, Luo and Sarnak11, Lemma 4.1] displays the contribution of the squares of primes, i.e., the term
$P^{(2)}(f, h)$
. This term can be included in the error term under the generalized Riemann hypothesis for $L(s, \mathrm {sym}^2 f)$, which we assume for other – but similar – purposes (see [Reference Iwaniec, Luo and Sarnak11, (4.23)]).
2.3 Explicit formula for central values
The connection between central values of L-functions, sums over primes and sums over zeros dates back to Selberg, and can be found in [Reference Radziwiłł and Soundararajan16, Proposition 1] in the case of quadratic Dirichlet characters. The proof carries over mutatis mutandis.
Proposition 2.2 (Explicit formula for central values)
Assume that
$L(\tfrac 12, f)$
is nonzero. We have, for all
$x \leqslant c(f)$
,
$$ \begin{align} \log L(\tfrac12, f) = P(f, x) - \tfrac12 \log \log x + O\Big( \frac{\log c(f)}{\log x} + \sum_{\gamma_f} \log(1 + (\gamma_f \log x)^{-2}) \Big), \end{align} $$
where we defined the sum over primes
$$ \begin{align} P(f, x)=\sum_{\substack{p<x \\ p\nmid q}} \frac{a_f(p)}{p^{1/2}}. \end{align} $$
Note that the term
$-\tfrac 12\log \log x$
is the expected mean of the logarithmic central values as predicted by Conjecture 1.2 and stated in Theorem 1.4. This property reduces the study of central L-values to the study of their distribution around the mean, which is governed by the above sum over primes (studied in Theorem 3.1) and by the sum over zeros in the error term (studied in Section 5).
2.4 Trace formulas
We introduce in this section quasi-orthogonality statements which will be central to understanding harmonic averages of coefficients. Recall that we denote by
$H_k(q)$
an orthogonal basis of Hecke newforms of level q and introduce
$B_k(q)$
an orthogonal basis of the full space $S_k(q)$ consisting of Hecke eigenforms.
2.4.1 Petersson trace formula
Consider the averages
We then have the following quasi-orthogonality statement [Reference Iwaniec, Luo and Sarnak11, Proposition 2.1].
Proposition 2.3 (Petersson trace formula)
For
$m, n, q \geqslant 1$
, we have
$$ \begin{align} \Delta_q(m, n) = \delta_{m, n} + 2\pi i^{-k} \sum_{\substack{c \geqslant 1 \\ q \mid c}} \frac{S(m, n, c)}{c} J_{k-1}\left( \frac{4\pi \sqrt{mn}}{c}\right) \end{align} $$
where
$\delta _{m, n}$
is the Kronecker delta symbol,
$J_{k-1}$
is the J-Bessel function of order
$k-1$
, and
$S(m, n, c)$
is the
$\mathrm {GL}(2)$
Kloosterman sum, defined by
$$ \begin{align} S(m, n, c) := \sum_{a \in (\mathbb{Z}/c\mathbb{Z})^\times} e\left( \frac{am + \bar{a}n}{c}\right), \end{align} $$
where
$\bar {a}$
denotes the inverse of a in
$(\mathbb {Z}/c\mathbb {Z})^\times $
.
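For orientation, the off-diagonal term is small for fixed m and n: a rough estimate, using the Weil bound $|S(m,n,c)| \leqslant d(c)\, (m,n,c)^{1/2} c^{1/2}$ together with $J_{k-1}(x) \ll_k x^{k-1}$, gives
$$ \begin{align*} \sum_{\substack{c \geqslant 1 \\ q \mid c}} \frac{|S(m, n, c)|}{c} \left| J_{k-1}\left( \frac{4\pi \sqrt{mn}}{c}\right) \right| \ll_{k, m, n, \varepsilon} \sum_{\substack{c \geqslant 1 \\ q \mid c}} c^{-k+\frac12+\varepsilon} \ll q^{-\frac32+\varepsilon} \end{align*} $$
for $k \geqslant 2$, so that $\Delta_q(m, n) = \delta_{m,n} + O_{k,m,n,\varepsilon}(q^{-3/2+\varepsilon})$; this quasi-orthogonality is what makes the harmonic averages tractable.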
Define the average of coefficients over the newforms,
We have the following sieving result that relates averages over
$B_k(q)$
and over
$H_k(q)$
, in other words allowing one to sieve out the oldforms [Reference Baluyot, Chandee and Li1, Lemma 2.3].
Lemma 2.4 Suppose that
$m, n , q$
are positive integers with
$(mn,q)=1$
, and let
${q = q_1q_2}$
, where
$q_1$
is the largest factor of q satisfying
$p \mid q_1 \Leftrightarrow p^2 \mid q$
for all primes p. Then, we have
$$ \begin{align} \Delta^\star_q(m, n) = \sum_{\substack{q = L_1L_2d \\ L_1 \mid q_1 \\ L_2 \mid q_2}} \frac{\mu(L_1L_2)}{L_1L_2} \prod_{\substack{p \mid L_1 \\ p^2 \nmid d}} \left( 1-p^{-2}\right)^{-1} \sum_{e \mid L_2^\infty} \frac{\Delta_d(me^2, n)}{e}. \end{align} $$
Remark 6 Note that, because of the Möbius function
$\mu (L_1L_2)$
, we necessarily have
$(L_2, d)=1$
and
$(e, d) = 1$
, which will be of much use later.
2.4.2 Kuznetsov trace formula
The spectral theory of automorphic forms is explained in various references such as [Reference Iwaniec and Kowalski10, Chapter 15]. We introduce the notations from [Reference Baluyot, Chandee and Li1, Lemma 3.1] to describe the three types of elements of the spectrum.
-
(Modular forms) Let
$B_\ell (q)$
be an orthogonal basis of the space of holomorphic cusp forms of weight
$\ell $
and level q, whose dimension is denoted
$\theta _\ell (q)$
. We can write
$f_{j, \ell }$
for the elements of
$B_\ell (q)$
and introduce their Fourier coefficients through the Fourier expansion (2.16)
$$ \begin{align} f_{j, \ell}(z) = \sum_{n \geqslant 1} \psi_{j, \ell}(n) (4\pi n)^{\ell / 2} e(nz). \end{align} $$
We say that f is a Hecke eigenform if it is an eigenfunction of all the Hecke operators
$T_n$
for
$(n,q)=1$
. Note that for
$(n,q)=1$
, we have
$a_{j, \ell }(n)\, \psi _{j, \ell }(1) = \sqrt {n}\,\psi _{j, \ell }(n)$, where $a_{j, \ell }(n)$ denotes the $T_n$-eigenvalue of $f_{j, \ell }$
. -
(Maass forms) Let
$\lambda _j = \tfrac 14 + \kappa _j^2$
be the eigenvalues of the hyperbolic Laplacian counted with multiplicities and in increasing order, in the space of cusp forms on
$L^2(\Gamma _0(q) \backslash \mathbb {H})$
. By convention, choose the sign of
$\kappa _j$
such that
$\kappa _j \geqslant 0$
when
$\lambda _j \geqslant \tfrac 14$
and
$i \kappa _j> 0$
when
$\lambda _j < \tfrac 14$
. For each positive
$\lambda _j$
, choose corresponding eigenvectors
$u_j$
in such a way that the family
$(u_j)_j$
forms an orthonormal basis of the corresponding eigenspace, and define the associated Fourier coefficients
$\rho _j(m)$
by the Fourier expansion (2.17)
$$ \begin{align} u_j(z) = \sum_{m\neq 0} \rho_j(m) W_{0, i\kappa_j}(4\pi|m| y)e(mx), \end{align} $$
where
$W_{0,it}(y) := (y/\pi )^{1/2}K_{it}(y/2)$
is a Whittaker function, and
$K_{it}$
is the modified Bessel function of the second kind. We call u a Hecke eigenform if it is an eigenfunction of all the Hecke operators
$T_n$
for
$(n,q)=1$
, and we then denote by
$\lambda _u(n)$
the Hecke eigenvalue of u for
$T_n$
. Writing
$\rho _u(n)$
as the Fourier coefficient, we have
$\lambda _u(n) \rho _u(1) = \sqrt {n}\rho _u(n)$
when
$(n,q)=1$
. When u is a newform, this holds for all
$n \neq 0$
instead. -
(Eisenstein series) Let
$\mathfrak {c}$
be a cusp for
$\Gamma _0(q)$
. We define
$\varphi _{\mathfrak {c}}(m,t)$
to be the m-th Fourier coefficient of the real-analytic Eisenstein series at
$\tfrac 12 + it$
, i.e., by the Fourier expansion (2.18)
$$ \begin{align} E_{\mathfrak{c}}(z, \tfrac12 + it) = \delta_{\mathfrak{c} = \infty} y^{\tfrac12 + it} + \varphi_{\mathfrak{c}}(0,t) y^{\tfrac12 - it} + \sum_{m \neq 0} \varphi_{\mathfrak{c}}(m,t) W_{0,it}(4\pi|m|y)e(mx). \end{align} $$
Proposition 2.5 (Kuznetsov trace formula)
For
$\phi : (0, \infty ) \to \mathbb {C}$
a smooth and compactly supported function, and
$m, n, q \geqslant 1$
, we have
$$ \begin{align} \sum_{\substack{c \geqslant 1 \\ q \mid c}} \frac{S(m, n, c)}{c} \phi\left( \frac{4\pi \sqrt{mn}}{c}\right) & = \sum_{\substack{\ell \in 2\mathbb{N}_+ \\ 1 \leqslant j \leqslant \theta_\ell(q)}} (\ell-1)! \sqrt{mn}\ \overline{\psi_{j, \ell}}(m) \psi_{j, \ell}(n) \phi_h(\ell) \end{align} $$
$$ \begin{align} & \qquad + \sum_j \frac{\overline{\rho_j}(m) \rho_j(n) \sqrt{mn}}{\cosh \pi \kappa_j} \phi_+(\kappa_j) \end{align} $$
$$ \begin{align} & \qquad + \frac{1}{4\pi} \sum_{\mathfrak{c}} \int_{-\infty}^{+\infty} \frac{\sqrt{mn}}{\cosh \pi t} \overline{\varphi_{\mathfrak{c}}}(m, t) \varphi_{\mathfrak{c}}(n, t) \phi_+(t) dt, \end{align} $$
where the Bessel transform $\phi_+$ is defined by
$$ \begin{align} \phi_+(r) & := \frac{2\pi i}{\mathrm{sinh}(\pi r)} \int_0^\infty (J_{2ir}(\xi) - J_{-2ir}(\xi)) \phi(\xi) \frac{d\xi}{\xi} \end{align} $$
where
$J_\ell $
is the J-Bessel function of the first kind.
3 Moments
By the explicit formula (2.9), the critical quantities to understand in order to control the distribution of the central values are the sums over primes $P(f,x)$, and these will be investigated by means of the moment method as in [Reference Radziwiłł and Soundararajan16]. The following result is the fundamental tool to understand their distribution, and is an analog of [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Theorem 3.1].
Theorem 3.1 (Moment property)
We have, for all
$\ell \geqslant 0$
,
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^\ell = (M_\ell + o(1)) (\log \log(x))^{\ell/2} \end{align} $$
where we introduced the
$\ell $
-th Gaussian moment
$$ \begin{align} M_\ell := \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} x^\ell e^{-x^2/2}\, dx, \end{align} $$
which vanishes for odd $\ell$ and equals $\ell! / \big( (\ell/2)!\, 2^{\ell/2} \big)$ for even $\ell$.
Remark 7 On average over the family, Theorem 3.1 states that the moments of
$P(f,x)$
, i.e., essentially the moments of
$\log L(\tfrac 12,f) + \tfrac 12 \log \log x$
by the explicit formula (2.9), match the moments of the normal distribution, hence justifying the shape of Conjecture 1.2 and of Theorem 1.4.
The remainder of this section, as well as the following one, constitutes the proof of this result and of two corollaries.
3.1 Sums over primes of coefficients
We follow the strategy of [Reference Radziwiłł and Soundararajan16, Proposition 3] using the tools developed in [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Proposition 4.1], adapting it to the specific sum over primes
$P(f,x)$
arising in the explicit formula. After expanding the power
$P(f,x)^\ell $
in
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f, x)^\ell \end{align} $$
and gathering together primes that are equal, we are reduced to studying sums of the type
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_\ell \leqslant x \\ p_i \nmid q \\ p_i \neq p_j}} \frac{a_f(p_1)^{\alpha_1} \cdots a_f(p_\ell)^{\alpha_\ell}}{p_1^{\alpha_1/2} \cdots p_\ell^{\alpha_\ell/2}}, \end{align} $$
where
$\alpha _i$
are positive integers. Inspired by the above expression of the expanded moment, introduce the notation, for any integer
$\alpha \geqslant 1$
and prime number p,
$$ \begin{align} F(p, \alpha) := \frac{a_f(p)^\alpha}{p^{\alpha/2}}. \end{align} $$
By the expansion (3.4), it is sufficient to study sums of products of such
$F(p, \alpha )$
. We state in this section some first estimates for these quantities. Informally, the sum for higher powers
$\alpha \geqslant 3$
will contribute negligibly, the contribution of the sum for powers
$\alpha =2$
will admit a precise asymptotic by means of the Rankin–Selberg method and will determine the effective distribution in Theorem 1.4, and the sum for powers
$\alpha =1$
will be studied using the Perron formula and bounds on L-functions.
Lemma 3.2 (Large parts)
We have, for all
$\alpha \geqslant 3$
and uniformly on
$f \in H_k(q)$
,
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p \nmid q}} F(p,\alpha) \ll 1. \end{align} $$
Proof Using Deligne’s bound
$|a_f(p)| \leqslant 2$
, the result follows since the sum is dominated by $\sum_p p^{-3/2}$, which converges.
Lemma 3.3 (
$2$
-parts)
We have
$$ \begin{align} \sum_{\substack{p\leqslant x \\ p \nmid q}} F(p,2) \ll \log\log(x), \end{align} $$
where the implied constant is absolute.
Proof The bound (3.7) is immediate by Deligne’s bound
$|a_f(p)| \leqslant 2$
and using Mertens’ estimate
$$ \begin{align*}\sum_{p\leqslant x} \frac1p \ll \log\log(x).\\[-42pt]\end{align*} $$
Lemma 3.4 (
$1$
-parts)
We have, for all
$n \geqslant 1$
and all
$x \leqslant q$
,
$$ \begin{align} \sum_{\substack{p_1, \ldots, p_n \leqslant x \\ p_i \nmid q \\ p_i \neq p_j}} \prod_{i=1}^n F(p_i, 1) \ll \log(q)^{n + \varepsilon}. \end{align} $$
Proof The result [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Lemma 2.12] reads
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p \nmid q}} b_f(p) \ll \log(x)^{1+\varepsilon}\log(q), \qquad \text{where } b_f(p) := \frac{a_f(p)\log(p)}{p^{1/2}}, \end{align} $$
and by partial summation, we, therefore, deduce
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p \nmid q}} \frac{a_f(p)}{p^{1/2}} = \sum_{\substack{p \leqslant x \\ p \nmid q}} \frac{b_f(p)}{\log p} \ll \sum_{\substack{p \leqslant x \\ p \nmid q}} \Bigg| \sum_{\substack{p' \leqslant p \\ p' \nmid q}} b_f(p')\Bigg| \frac{1}{p\log^2 p} \ll \log(q)^{1+\varepsilon} \sum_{\substack{p \leqslant x \\ p \nmid q}} \frac{1}{p\log p} \ll \log(q)^{1+\varepsilon} \end{align} $$
giving the desired result for
$n=1$
. We finish the proof inductively for
$n \geqslant 1$
, adding back the missing primes in order to get a genuine product, which incurs an extra contribution made of higher powers, and is therefore of negligible size by the two lemmas above.
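To make the completion step explicit, here is the identity used for $n = 2$ (and repeatedly in Section 3.3): removing the condition $p_1 \neq p_2$ creates a diagonal term made of higher powers,
$$ \begin{align*} \sum_{\substack{p_1, p_2 \leqslant x \\ p_i \nmid q \\ p_1 \neq p_2}} F(p_1, 1) F(p_2, 1) = \Bigg( \sum_{\substack{p \leqslant x \\ p \nmid q}} F(p, 1) \Bigg)^{2} - \sum_{\substack{p \leqslant x \\ p \nmid q}} F(p, 2), \end{align*} $$
and the diagonal term is $O(\log\log(x))$ by Lemma 3.3, hence negligible compared with the bound $\log(q)^{2+\varepsilon}$.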
Remark 8 The proof of [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Lemma 2.12] boils down to using the Perron formula to relate the sought sum to
$L'/L$
, on which we have bounds that are enough for the result. Note that these “rough” bounds on the
$1$
-parts and
$2$
-parts will not be sufficient to bound the whole sum over the family, since the expected main term in Theorem 3.1 is of size
$\log \log x$
while the above bounds are about
$\log x$
and
$\log \log x$
. The harmonic average (in the guise of trace formulas) or finer properties of L-functions will have to be fully exploited in order to get enough cancellations. These bounds will, however, be sufficient to address number of cases and remains fundamental in the proofs.
The next paragraphs of this section are devoted to proving finer estimates for the $2$-parts (see Proposition 3.5) and to preparing the stage for estimating the $1$-parts.
3.2 A Rankin bound
We need a genuine asymptotic for the $2$-part, since it will ultimately contribute the main term. We have the following statement, which is the standard Rankin bound with emphasis on the uniformity of the error term in the form f – and where we use the generalized Riemann hypothesis.
Proposition 3.5 For all
$f \in H_k(q)$
, for all
$x < c(f)$
, and assuming that
$L(s,\mathrm {sym}^2 f)$
has no zeros in the rectangle
$\{ z \ : \ \sigma _0 \leqslant \mathrm {Re}(z) \leqslant 1, \ |\mathrm {Im}(z) - t| \leqslant 3\}$
for a certain
$1/2 \leqslant \sigma _0 < 1$
, we have
$$ \begin{align} \sum_{\substack{p < x \\ p \nmid q}} \frac{\lambda_f(p)^2 }{p} = \log\log x + O(\log \log \log c(f)), \end{align} $$
where the implied constant is absolute.
The remainder of Section 3.2 is dedicated to the proof of this result. We start by adding back the missing primes and use Deligne’s bound
$\lambda (p) \ll 1$
to control the incurred error. We need the following estimate.
Lemma 3.6 For all
$x \leqslant q$
, we have
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p\mid q}} \frac{1}{p} \ll \log\log\log(q). \end{align} $$
Proof Let
$\omega (q)$
be the number of distinct prime divisors of q, and write the prime factorization of q as
$q=p_1^{\alpha _1}p_2^{\alpha _2} \cdots p_k^{\alpha _k}$
, where each
$p_n$
is a prime factor of q ordered so that
$p_1 < p_2 < \cdots < p_k$
and
$k=\omega (q)$
. Then
$$ \begin{align} \log{q} = \sum_{n=1}^k \alpha_n \log{p_n} \geqslant k\log2 = (\log2)\omega(q) \end{align} $$
so that
$\omega (q) \ll \log q$
. Since
$p_n \asymp n\log n$
by a classical estimate dating back to Chebyshev – we more precisely know that
$p_n \sim n \left (\log {n} + \log \log {n} - 1 \right )$
by Dusart [Reference Dusart6] – we deduce that $p_{\omega (q)} \ll \omega (q) \log \omega (q) \ll \log (q) \log \log (q)$. We therefore deduce
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p\mid q}} \dfrac1p \ll \sum_{n=1}^{\omega(q)} \dfrac1{p_n} \sim \log\log(p_{\omega(q)}) \ll \log\log\log(q) \end{align} $$
by Mertens estimate on the sum over reciprocals of primes.
The problem therefore reduces to estimating the complete sum over primes
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p \nmid q}} \frac{\lambda(p)^2 }{p} \rightsquigarrow \sum_{\substack{p \leqslant x}} \frac{\lambda(p)^2 }{p}. \end{align} $$
We use the Hecke relation
$\lambda (p)^2 = \lambda (p^2) + 1$
, and use Mertens estimate
$$ \begin{align} \sum_{p \leqslant x} \frac{1}{p} = \log \log x + O(1), \end{align} $$
with an absolute error term. Therefore, for
$x \leqslant q$
, we have
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p \nmid q}} \frac{\lambda(p)^2}{p} = \sum_{\substack{p \leqslant x}} \frac{\lambda(p^2) }{p} + \log\log x + O(\log\log\log(q)), \end{align} $$
which already displays the main term in Theorem 1.4, so that we are reduced to studying the sum of coefficients
$\lambda _f(p^2)$
. Such a sum may be completed into the sum of coefficients of squares of prime powers at the cost of a uniformly bounded error term, and the sum thus obtained has size determined by
$L(1, \mathrm {sym}^2 f)$
via the Perron formula (see [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Lemma 2.11]). It is therefore sufficient to bound L-values, and we follow the same strategy as in [Reference Cogdell and Michel4, Corollary 4.4] to do so. We state these bounds for the logarithm $\log L(s, f)$ of a general L-function.
Lemma 3.7 Let
$s = \sigma + it$
with
$\sigma> \tfrac 12$
and
$|t| \leqslant 3c(f)$
. Let
$\sigma _0 \in (1/2, \sigma )$
. Suppose
$L(s,f)$
has no zeros in the rectangle
$\{ z \ : \ \sigma _0 \leqslant \mathrm {Re}(z) \leqslant 1, \ |\mathrm {Im}(z) - t| \leqslant 3\}$
Then, we have, uniformly in f,
$$ \begin{align} \log L(s, f) \ll \frac{\log c(f)}{\sigma - \sigma_0}. \end{align} $$
Proof For
$\sigma> 2$
, we have
$\log L(s,f) \ll 1$
uniformly in f, by Deligne’s bound
$a_f(n) \ll 1$
and by absolute convergence.
Assume
$\sigma \leqslant 2$
and follow the strategy of Granville–Soundararajan [Reference Granville and Soundararajan7]; they prove the analogous result for Dirichlet L-functions, but the argument is general, as used for instance in [Reference Cogdell and Michel4, Lemma 2.3], where it is stated without proof. Consider circles centered at $2+it$; the one of radius $r := 2-\sigma$ passes through s. On the larger circle of radius
$R := 2-\sigma _0$
, we have for all z on the circle,
$$ \begin{align} \mathrm{Re}\, \log L(z, f) = \log|L(z, f)| \ll \log c(f). \end{align} $$
This follows from convexity bounds [Reference Steuding20, Lemma 6.7 and Theorem 6.8], which state that L-values in the critical strip are bounded by
$L(z,f) \ll (|z|^2 c(f)^2)^{1-\sigma }$
for all
$\sigma \in (0,1)$
, uniformly in f. We get the bound (3.20), for
$\sigma \in (1/2, 1)$
, the worst case being
$\sigma = 1/2$
.
For s in the rectangle
$\{ z \ : \ \sigma _0 \leqslant \mathrm {Re}(z) \leqslant 1, \ |\mathrm {Im}(z) - t| \leqslant 3\}$
, we use the Borel–Carathéodory theorem to obtain the desired bound.
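For the reader's convenience, the version of the Borel–Carathéodory theorem used above states that if g is holomorphic on the disc $|z - z_0| \leqslant R$ with $\mathrm{Re}\, g(z) \leqslant M$ there, then for every z with $|z - z_0| \leqslant r < R$,
$$ \begin{align*} |g(z)| \leqslant \frac{2r}{R - r} M + \frac{R + r}{R - r} |g(z_0)|. \end{align*} $$
Applied to $g = \log L(\cdot, f)$ centered at $z_0 = 2 + it$, with $M \ll \log c(f)$ coming from the convexity bound on the larger circle and $|g(z_0)| \ll 1$, this yields a bound of the shape $\log L(s, f) \ll \log c(f) / (\sigma - \sigma_0)$, uniformly in f.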
Assuming strong zero-free regions, typically implied by the generalized Riemann hypothesis, we can approximate L-functions by short sums of coefficients, with an error term depending on the chosen length. The following lemma makes it precise.
Lemma 3.8 Let
$s= \sigma + it$
with
$\sigma> 1/2$
and
$|t| \leqslant 2c(f)$
. Let
$y \geqslant 2$
be a real parameter and
$\sigma _0 \in (1/2, \sigma )$
. Suppose there are no zeros of
$L(z, f)$
in the rectangle
$\{z \ : \ \sigma _0 \leqslant \mathrm {Re}(z) \leqslant 1, |\mathrm {Im}(z) - t| \leqslant y+ 3 \}$
. Let
$\sigma _1 = \min (\tfrac 12(\sigma +\sigma _0), \sigma _0 + 1/\log (y))$
. We then have
$$ \begin{align} \log L(s,f) = \sum_{n=2}^y \frac{\Lambda(n) a_f(n)}{n^s \log n} + O\left( \frac{\log c(f)}{(\sigma_1 - \sigma_0)^2} y^{\sigma_1-\sigma} \right), \end{align} $$
where the implied constant is independent of f.
Proof By the truncated Perron formula given in [Reference Montgomery and Vaughan13, Corollary 5.3] or [Reference Murty14, Example 4.4.15], we can express the above short sum as a vertical integral involving $\log L$. For
$c = 1-\sigma + 1/\log y$
, since
$L(s,f)$
is entire, we have
$$ \begin{align} \frac{1}{2\pi i} \int_{c-iy}^{c+iy} \log L(s+w, f) \frac{y^w}{w} dw & = \sum_{n = 2}^y \frac{\Lambda(n) a_f(n)}{n^s \log n} + O\left( \frac{1}{y} \sum_{n \geqslant 1} \frac{y^c}{n^{\sigma + c}} \frac{1}{|\log(y/n)|} \right) \end{align} $$
$$ \begin{align} & = \sum_{n = 2}^y \frac{\Lambda(n) a_f(n)}{n^s \log n} + O\left( y^{-\sigma} \log y \right). \end{align} $$
We can, on the other hand, move the integration line from
$\mathrm {Re} = c$
to
$\mathrm {Re} = \sigma _1 - \sigma < 0$
. We assumed precisely that there are no zeros of $L(\cdot, f)$ in the rectangle thus crossed, so that we pick up no singularities except at $w = 0$, where the integrand has a simple pole with residue $\log L(s,f)$. The remaining integrals after picking up this residue are taken over the three segments $[c \pm iy, \sigma_1 - \sigma \pm iy]$ and $[\sigma_1 - \sigma - iy, \sigma_1 - \sigma + iy]$
. By the previous Lemma 3.7 and the assumptions on
$\mathrm {Re}(w) \geqslant \sigma _1-\sigma $
, these integrals are bounded by
$$ \begin{align} \ll \int \frac{\log c(f)}{\sigma + \mathrm{Re}(w) - \sigma_0} \frac{|y^w|}{|w|}\, |dw| \ll \frac{\log c(f)}{(\sigma_1 - \sigma_0)^2} y^{\sigma_1 - \sigma}. \end{align} $$
This ends the proof of the lemma.
We can now instantiate the above lemma with suitable choices of variables to obtain the desired bound:
Lemma 3.9 Let
$\eta> 2(\log c(f))^{-1}$
. Assume that
$L(s,f)$
has no zeros in the rectangle
$\mathrm {Re}(s) \in [1-\eta , 1]$
and
$|\mathrm {Im}(s)| \leqslant \log ^{10/\eta }\! c(f)$
– this is, for instance, true with any
$0<\eta <1/2$
when assuming the Generalized Riemann Hypothesis. Then
$$ \begin{align} \log L(s, f) \ll \log\log\log c(f), \end{align} $$
uniformly for
$\mathrm {Re}(s) \geqslant 1-1/\log \log c(f)$
and
$|\mathrm {Im}(s)| \leqslant \log ^{10}c(f)$
.
Proof Apply the above with
$\sigma _0 = 1-\eta $
,
$\sigma = \mathrm {Re}(s) \geqslant 1-1/\log \log c(f)$
, and
$\sigma - \sigma _0 \geqslant \sigma - \sigma _1 \geqslant \eta /2$
. Choose the value
$y = \log ^{10/\eta }\! c(f)$
. Use Deligne’s bounds
$|\Lambda _f(p^\alpha )| \ll \log p$
to remove higher powers of primes in the sum, corresponding to a bounded contribution, and get
$$ \begin{align} |\log L(s,f)| \leqslant \left| \sum_{p=2}^y \frac{\Lambda(p) a_f(p)}{p^s \log p} \right| + O(1) \ll \sum_{p=2}^{\log^{10/\eta}\! c(f)} \frac{1}{p^{1-1/\log\log c(f)}} \ll \log\log\log c(f). \end{align} $$
This is the claimed result.
We obtain Proposition 3.5 by using the Perron formula to relate the sum of
$\lambda _f(p^2)$
to
$L(1, \mathrm {sym}^2 f)$
, and Lemma 3.9 for
$\mathrm {sym}^2 f$
at
$s=1$
to bound this L-value. By averaging Proposition 3.5 over the family, we obtain the following estimate for the
$2$
-parts.
Corollary 3.10 Assume the Generalized Riemann Hypothesis for
$L(s, \mathrm {sym}^2 f)$
. For all
$Q^\delta \ll x \ll Q$
, we have
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p \leqslant x \\ p\nmid q}} F(p,2) = \log\log x + O(\log\log\log x), \end{align} $$
where the implied constant is absolute.
3.3 Case splitting and reduction to powers one
Recalling the definition of
$F(p,\alpha)$
, the expression (3.4) splits into sums of the type
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_\ell \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^\ell F(p_i, \alpha_i) \end{align} $$
so that it is sufficient to study these. We split into different cases according to the number of conspiring primes (i.e., the size of the powers
$\alpha _i$
), and use the lemmas established in Sections 3.1 and 3.2 to treat each part. Define the following cases (an illustration for the fourth moment is given right after the list):
- (Case A) each power $\alpha _i$ is $2$;
- (Case B) each power is at least $2$, and at least one is larger;
- (Case C) at least one power is $1$, but not all;
- (Case D) each power is $1$.
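For instance, for the fourth moment $P(f,x)^4$, gathering equal primes produces the exponent patterns
$$ \begin{align*} (1,1,1,1) \ \text{(Case D)}, \quad (2,1,1) \ \text{(Case C)}, \quad (3,1) \ \text{(Case C)}, \quad (2,2) \ \text{(Case A)}, \quad (4) \ \text{(Case B)}, \end{align*} $$
and only the pattern $(2,2)$ will contribute to the main term.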
To prove Theorem 3.1, we proceed by induction on the number of terms
$\ell $
in the product. The remainder of Section 3.3 is dedicated to treating Cases A and B and to reducing Case C to Case D, which will be addressed in Section 4.
Case A: Each power is
$2$
By the estimate of Proposition 3.5 (see also Corollary 3.10), we have
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p\nmid q}} F(p,2) = \sum_{\substack{p \leqslant x \\ p\nmid q}} \frac{a_f(p)^2}{p} = \log\log(x) + O(\log\log\log c(f)), \end{align} $$
which corresponds to the situation
$\ell = 1$
in Case A. Let
$\ell \geqslant 1$
and assume inductively that for all
$l=1,2,\ldots ,\ell $
, we have
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_l \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^l F(p_i,2) = (\log\log(x))^l + o((\log\log(x))^l). \end{align} $$
We will prove the corresponding property for
$\ell + 1$
. Adding back the missing primes in order to complete one of the sums over the primes
$ p = p_{\ell +1}$
, we get
$$ \begin{align} & \sum_{\substack{p_1, \ldots, p_{\ell+1} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell+1} F(p_i,2) = \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i,2) \sum_{\substack{p\leqslant x \\ p\nmid q \\ p\neq p_i,\, 1\leqslant i\leqslant\ell}} F(p,2) \notag\\ & \qquad = \Bigg(\sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i,2)\Bigg) \Bigg(\sum_{\substack{p\leqslant x \\ p\nmid q}} F(p,2)\Bigg) - \ell \Bigg(\sum_{\substack{p_2, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{\substack{i=2}}^{\ell} F(p_i,2)\Bigg)\Bigg(\sum_{\substack{p\leqslant x \\ p\nmid q}} F(p,4)\Bigg).\notag\\ \end{align} $$
By Lemma 3.2, the sum including
$F(p,4)$
in (3.32) is uniformly bounded, so that the rightmost term in (3.32) falls into the induction hypothesis and is bounded by
$(\log \log (x))^\ell = o((\log \log (x))^{\ell +1})$
. In the first term of (3.32), we average over the family and use the Hölder inequality as in [Reference Cheek, Gilman, Jaber, Miller and Tomé3], as well as the induction hypothesis, to conclude that it is equivalent to
$(\log \log (x))^{\ell + 1}$
, proving the case for
$\ell + 1$
and finishing the proof by induction. Thus
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i,2) \sim (\log\log(x))^{\ell} \end{align} $$
for any
$\ell \in \mathbb {N}^\star $
by induction.
It remains to estimate the contribution of these terms corresponding to the powers $\alpha _i = 2$ in the $\ell$-th moment of $P(f,x)$, for $\ell$ even. The $\ell$ primes can be paired into $\ell /2$ such squares, giving the contribution of $(\log \log (x))^{\ell /2}$ in Theorem 3.1. The number of such pairings can be obtained as follows: select $\ell /2$ primes, then pair each of them with one of the $\ell /2$ remaining primes (which gives $(\ell /2)!$ possibilities), and notice that every pairing has been counted $2^{\ell /2}$ times, since each of the $\ell /2$ pairs can be swapped, so that the total number of such contributions falling into Case A is
$$ \begin{align} \binom{\ell}{\ell/2} \frac{(\ell/2)!}{2^{\ell/2}} = \frac{\ell!}{(\ell/2)! 2^{\ell/2}} = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} x^\ell e^{-x^2/2} dx, \end{align} $$
and we exactly recover the $\ell$-th moment of the normal law, justifying the main contribution in Theorem 3.1.
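As a quick check of this count, for the fourth moment ($\ell = 4$) it gives
$$ \begin{align*} \binom{4}{2} \frac{2!}{2^{2}} = 6 \cdot \frac{2}{4} = 3 = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} x^4 e^{-x^2/2}\, dx, \end{align*} $$
corresponding to the three pairings $\{p_1 p_2 \mid p_3 p_4\}$, $\{p_1 p_3 \mid p_2 p_4\}$ and $\{p_1 p_4 \mid p_2 p_3\}$, in agreement with the fourth Gaussian moment $M_4 = 3$.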
Case B: Each power is at least
$2$
, at least one being larger than
$2$
By Lemma 3.2, we have
$$ \begin{align} \sum_{\substack{p \leqslant x \\ p\nmid q}} F(p,\alpha) = O(1) \end{align} $$
whenever
$\alpha \geqslant 3$
, the underlying constant being absolute. Equation (3.35) immediately implies that this holds on average over the family, in particular,
$$ \begin{align*}\frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p\leqslant x \\ p\nmid q}} F(p,\alpha) = o\left(\log\log(x)\right).\end{align*} $$
Let
$\ell \geqslant 1$
and assume inductively that for all
$l=1,2,\ldots ,\ell $
, we have
$$ \begin{align} \sum_{\substack{p_1, \ldots, p_l \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^l F(p_i, \alpha_i) = o((\log\log(x))^l), \end{align} $$
where
$\alpha _i\geqslant 2$
for all
$i=1,2,\ldots ,l$
, and there is at least one
$j\in \{1,2,\ldots ,l\}$
such that
$\alpha _j\geqslant 3$
. We address the case of
$\ell +1$
factors. Reordering such that at least one
$j\in \{1,2,\ldots ,\ell \}$
satisfies
$\alpha _j\geqslant 3$
, and adding the missing primes in order to complete the sum over primes
$p_{\ell + 1}$
, we can write
$$ \begin{align} & \sum_{\substack{p_1, \ldots, p_{\ell+1} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell+1} F(p_i, \alpha_i) = \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i, \alpha_i) \sum_{\substack{p_{\ell+1} \leqslant x \\ p_{\ell+1} \nmid q \\ p_{\ell+1} \neq p_i,\, 1\leqslant i\leqslant\ell}} F(p_{\ell+1},\alpha_{\ell+1}), \notag\\ &\qquad = \Bigg(\sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i, \alpha_i)\Bigg) \Bigg(\sum_{\substack{p_{\ell+1} \leqslant x \\ p_{\ell+1} \nmid q}} F(p_{\ell+1} ,\alpha_{\ell+1})\Bigg) \end{align} $$
$$ \begin{align} &- \sum_{m=1}^\ell \Bigg(\sum_{\substack{p_1, \ldots, p_{m-1}, p_{m+1}, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{\substack{i=1 \\ i\neq m}}^{\ell} F(p_i, \alpha_i)\Bigg)\Bigg(\sum_{\substack{p_{\ell+1} \leqslant x \\ p_{\ell+1} \nmid q}} F(p_{\ell+1},\alpha_{\ell+1}+\alpha_m)\Bigg). \end{align} $$
Examining (3.38), since $\alpha _{\ell +1} + \alpha _m \geqslant 3$, we can use the bound given by Lemma 3.2 to conclude that the corresponding sum is uniformly bounded; the induction hypothesis (or the already-treated Case A, when $m = j$ is the only index with $\alpha _j \geqslant 3$) then applies to the other factor in (3.38) and allows us to conclude. Examining (3.37), since $\alpha _{\ell +1} \geqslant 2$, the sum over $p_{\ell +1}$ is uniformly bounded by $\log \log x$ by Lemma 3.3; the induction hypothesis therefore applies to the first factor in (3.37) and allows us to conclude that it is $o((\log \log (x))^{\ell})$, so that the whole expression is $o((\log \log (x))^{\ell +1})$. This concludes the induction for Case B, and we have
$$ \begin{align*} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i, \alpha_i) = o\left((\log\log(x))^{\ell}\right) \end{align*} $$
for any
$\ell \in \mathbb {N}^\star $
.
Case C: At least one power is
$1$
, but not all
We now reduce Case C to Case D by appealing to induction. Let
$\ell \geqslant 2$
. We assume inductively that, for all
$l=2,\ldots ,\ell $
such that at least one power
$\alpha _i \geqslant 2$
, we have
$$ \begin{align} \begin{aligned} &\frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_l \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^l F(p_i, \alpha_i) \\ &\quad\ll \frac{(\log\log(x))^{l-n_l}}{N(Q)} \Bigg|\sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_{n_l} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{n_l} F(p_i,1)\Bigg|, \end{aligned} \end{align} $$
where
$n_l := \#\{j=1,2,\ldots ,l \mid \alpha _j=1\}$
and we ordered the
$\alpha _i$
’s so that
$\alpha _1 = \alpha _2 = \cdots = \alpha _{n_l} = 1$
and
$\alpha _{n_l+1}, \alpha _{n_l+2}, \ldots , \alpha _l \geqslant 2$
. Adding the missing primes to complete the sum over
$p_{\ell +1}$
, we have
$$ \begin{align} & \sum_{\substack{p_1, \ldots, p_{\ell+1} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell+1} F(p_i, \alpha_i) = \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i, \alpha_i) \sum_{\substack{p_{\ell+1}\leqslant x \\ p_{\ell+1} \nmid q \\ p_{\ell+1} \neq p_i,\, 1\leqslant i\leqslant\ell}} F(p_{\ell+1},\alpha_{\ell+1}) \notag\\ &\qquad = \Bigg(\sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^{\ell} F(p_i, \alpha_i)\Bigg) \Bigg(\sum_{\substack{p_{\ell+1}\leqslant x \\ p_{\ell+1}\nmid q}} F(p_{\ell+1},\alpha_{\ell+1})\Bigg) \end{align} $$
$$ \begin{align} &- \sum_{m=1}^{\ell} \Bigg(\sum_{\substack{p_1, \ldots, p_{m-1}, p_{m+1}, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{\substack{i=1 \\ i\neq m}}^{\ell} F(p_i, \alpha_i)\Bigg) \Bigg(\sum_{\substack{p_{\ell+1}\leqslant x \\ p_{\ell+1}\nmid q}} F(p_{\ell+1},\alpha_{\ell+1}+\alpha_m)\Bigg). \end{align} $$
The sum of
$F(p_{\ell +1},\alpha _{\ell +1})$
in (3.40) is uniformly dominated by
$\log \log (x)$
by Lemma 3.3 since
$\alpha _{\ell +1} \geqslant 2$
. The sum of
$F(p_{\ell +1},\alpha _{\ell +1}+\alpha _m)$
in (3.41) is uniformly bounded by Lemma 3.2 since
$\alpha _{\ell +1} + \alpha _m \geqslant 3$
. The multiple sums of
$F(p_i, \alpha _i)$
in (3.40) and (3.41) either contain a term
$\alpha _i\geqslant 2$
, in which case we appeal to the induction hypothesis, or only contain terms
$\alpha _i=1$
, which is Case D. This completes the induction reducing Case C to Case D.
4 Case D and consequences
The above sections reduced the proof of Theorem 3.1 to the proof of case D, where each power is
$\alpha _i = 1$
. We treat this case and deduce important consequences from this result.
4.1 First moment of coefficients
This case is the most difficult and requires making use of the harmonic sum over the family, i.e., trace formulas. We have to prove the following bound on the $1$-parts to show that they contribute only an error term.
Proposition 4.1 For all
$\ell \geqslant 1$
, we have
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \prod_{i=1}^\ell F(p_i, 1) = o\left((\log \log(x))^{\ell/2}\right). \end{align} $$
The techniques used in [Reference Radziwiłł and Soundararajan16] for quadratic twists of elliptic curves do not apply in our case, mainly because they rely on the complete multiplicativity of the characters and could use the Poisson summation formula for character sums. We closely follow the strategy of [Reference Baluyot, Chandee and Li1], who proved the analogous result with different weights for a single prime, and extend it inductively as in [Reference Cheek, Gilman, Jaber, Miller and Tomé3]. The rest of this section is dedicated to the proof of this result.
Proof Using Hecke multiplicativity for coefficients, we rewrite
$a_f(p_1) \cdots a_f(p_\ell ) = a_f(p_1 \cdots p_\ell )$
since the primes are distinct. We are therefore reduced to studying the sum
$$ \begin{align} \Sigma := \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \sum_{\substack{p_1, \ldots, p_{\ell} \leqslant x \\ p_i\nmid q \\ p_i \neq p_j}} \frac{a_f(p_1 \cdots p_\ell)}{\sqrt{p_1 \cdots p_\ell}}. \end{align} $$
The harmonic sum has to be exploited by means of trace formulas. However, trace formulas, viz., the Petersson trace formula here, can be used for sums over all the modular forms of a given weight and level, not only over newforms as in (4.1). It is therefore necessary to add back the missing oldforms in the sum. Using Lemma 2.4 to do so and swapping summations in (4.1), we obtain
$$ \begin{align*} &\Sigma = \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sum_{\substack{p_1, \ldots, p_\ell \leqslant x \\ p_i \neq p_j\\ p_i \nmid q}} \frac{1}{\sqrt{p_1 \cdots p_\ell}}\\&\qquad\times \sum_{\substack{q = L_1L_2d \\ L_1 \mid q_1 \\ L_2 \mid q_2}} \frac{\mu(L_1L_2)}{L_1L_2} \prod_{\substack{p \mid L_1 \\ p^2 \nmid d}} (1-p^{-2})^{-1} \sum_{e \mid L_2^\infty} \frac{\Delta_d(e^2, p_1 \cdots p_\ell)}{e}. \end{align*} $$
We truncate the summations over
$L_1, L_2$
and over e. Consider the tail of the sum, where $L_1L_2> L_0$: using the trivial estimate given in Lemma 3.4, together with the fact that $N(Q) \asymp Q$ by standard dimension formulas, we bound this tail of $\Sigma$ by
$$ \begin{align} & \frac{1}{Q} \sum_{L_1L_2> L_0} \frac{1}{L_1L_2} \Psi\left( \frac{L_1L_2d}{Q}\right) \sum_{e \mid L_2^\infty} \frac{\tau(e^2)}{e} \left|\ \ \sideset{}{^h}\sum_{f \in B_k(q)} \sum_{p_1, \ldots,p_\ell \leqslant x} \frac{a_f(p_1 \cdots p_\ell)}{\sqrt{p_1 \cdots p_\ell}} \right| \end{align} $$
$$ \begin{align} & \quad \ll \frac{\log Q}{Q} \sum_{q \ll Q/L_0} \frac{\tau(L_2)}{L_1L_2} \Psi\left( \frac{L_1L_2d}{Q}\right) \ll \frac{\log Q}{Q} \sum_{q \ll Q/L_0} \log(Q)^\ell \ll \frac{\log(Q)^{\ell + 1}}{L_0}, \end{align} $$
so that we get an error term of constant size for any
$L_0> \log (Q)^{\ell + 1}$
. We can roughly bound the sum over the range $L_1L_2 < L_0$ but $e>E$ and get an error of constant size as soon as E is at least a suitable power of $\log (Q)$. We can therefore restrict the sums from now on to these ranges
$L_1L_2 < L_0$
and
$e < E$
for E a certain power of
$\log (Q)$
, as in [Reference Baluyot, Chandee and Li1]. As in the above sections, we can add the sum over primes dividing the level, i.e.,
$p_i \mid q$
, at a cost of $\log \log \log q$, thereby getting a saving even without exploiting the summation over the family. We are therefore reduced to dealing with
$\Sigma = \Sigma _2 + O(1)$
where
$$ \begin{align} &\Sigma_2 := \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sum_{p_i \neq p_j} \frac{1}{\sqrt{p_1 \cdots p_\ell}}\\&\qquad\times \sum_{\substack{q = L_1L_2d \\ L_1L_2 < L_0 \\ L_1 \mid q_1 \\ L_2 \mid q_2}} \frac{\mu(L_1L_2)}{L_1L_2} \prod_{\substack{p \mid L_1 \\ p^2 \nmid d}} (1-p^{-2})^{-1} \sum_{\substack{e \mid L_2^\infty \\ e < E}} \frac{\Delta_d(e^2, p_1 \cdots p_\ell)}{e}. \nonumber\end{align} $$
We now perform a more precise arithmetic parametrization of the summation, following [Reference Baluyot, Chandee and Li1]. Recalling that
$q = L_1L_2d$
, we replace the conditions
$L_i \mid q_i$
by
$L_1 \mid d$
,
$(L_2, d)=1$
, and
$d = L_1m$
as in [Reference Baluyot, Chandee and Li1, Remark following Lemma 2.3]. This in particular implies
$$ \begin{align} \prod_{\substack{p \mid L_1 \\ p^2 \nmid d}} (1-p^{-2})^{-1} = \prod_{p \mid L_1} (1-p^{-2})^{-1} \sum_{r \mid (L_1, m)} \frac{\mu(r)}{r^2}. \end{align} $$
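To justify this identity, note that the factor $\mu(L_1L_2)$ in Lemma 2.4 forces $L_1$ to be squarefree; since $d = L_1 m$, for $p \mid L_1$ the condition $p^2 \nmid d$ amounts to $p \nmid m$, so that
$$ \begin{align*} \prod_{\substack{p \mid L_1 \\ p^2 \nmid d}} (1-p^{-2})^{-1} = \prod_{p \mid L_1} (1-p^{-2})^{-1} \prod_{p \mid (L_1, m)} (1-p^{-2}) = \prod_{p \mid L_1} (1-p^{-2})^{-1} \sum_{r \mid (L_1, m)} \frac{\mu(r)}{r^2}, \end{align*} $$
using $\sum_{r \mid n} \mu(r) r^{-2} = \prod_{p \mid n} (1 - p^{-2})$ in the last step.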
We therefore have
$q = L_1^2L_2m$
where
$(m, L_2)=1$
, and write
$m = rn$
since
$r \mid m$
. We altogether have
$$ \begin{align} \Sigma_2 & = \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sum_{p_i \neq p_j} \frac{1}{\sqrt{p_1 \cdots p_\ell}} \sum_{\substack{q = L_1^2L_2rn \\ L_1L_2 < L_0 \\ r \mid L_1 \\ (L_1rn, L_2) = 1}} \frac{\mu(L_1L_2)}{L_1L_2} \end{align} $$
$$ \begin{align} & \qquad \times \prod_{p \mid L_1} (1-p^{-2})^{-1} \, \frac{\mu(r)}{r^2} \sum_{\substack{e \mid L_2^\infty \\ e < E}} \frac{\Delta_d(e^2, p_1 \cdots p_\ell)}{e}. \end{align} $$
By the Petersson trace formula from Proposition 2.3, noting that
$e^2 \neq p_1 \cdots p_\ell $
since the primes are assumed to be all different, we get
$$ \begin{align*} \Delta_d(e^2, p_1 \cdots p_\ell) = 2\pi i^{-k} \sum_{c \geqslant 1} \frac{S(e^2, p_1 \cdots p_\ell, cL_1 r n)}{cL_1 r n}J_{k-1}\left( \frac{4\pi \sqrt{e^2p_1 \cdots p_\ell}}{cL_1 r n}\right). \end{align*} $$
Using Möbius inversion to detect the condition
$(L_2, n)=1$
, we can rephrase
$\Sigma _2$
as
$$ \begin{align*} \Sigma_2 & = \frac{2\pi i^{-k} }{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q}\right) \sum_{\substack{L_1L_2 < L_0 \\ (L_1, L_2)=1}} \frac{\mu(L_1L_2)}{L_1L_2} \prod_{p \mid L_1} (1-p^{-2})^{-1} \sum_{\substack{p_1, \ldots, p_\ell \\ p_i \neq p_j}} \frac{1}{\sqrt{p_1 \cdots p_\ell}} \sum_{\substack{e \mid L_2^\infty \\ e < E}} \frac{1}{e} \end{align*} $$
$$ \begin{align*} & \quad \times \sum_{c \geqslant 1} \sum_{n \geqslant 1} \sum_{d \mid L_2} \mu(d) \sum_{r \mid L_1} \frac{\mu(r)}{r^2} \frac{S(e^2, p_1 \cdots p_\ell, cL_1 dnr)}{cL_1 nrd} \Psi\left( \frac{L_1^2L_2 nrd}{Q}\right) J_{k-1} \left( \frac{4\pi\sqrt{e^2 p_1 \cdots p_\ell}}{cL_1nrd} \right). \end{align*} $$
Writing $\mathfrak {m} := cL_1dnr$, which runs over the multiples of $cL_1rd$, we get that the sum over n is
$$ \begin{align} \sum_{\mathfrak{m} \equiv 0 (cL_1 d r)} \frac{S(e^2, p_1 \cdots p_\ell, \mathfrak{m})}{\mathfrak{m}} f\left( \frac{4\pi\sqrt{e^2p_1 \cdots p_\ell}}{\mathfrak{m}}\right), \end{align} $$
where we introduced
$$ \begin{align} f(\xi) := \Psi\left(\frac{2\pi \sqrt{e^2p_1 \cdots p_\ell} L_1L_2}{cQ\xi}\right) J_{k-1}(\xi). \end{align} $$
We cut the sums over $p_i$ smoothly into dyadic blocks of size $p_i \asymp P_i$, using a smooth partition of unity V such that $\sum_{P_i \,\mathrm{dyadic}} V(p_i/P_i) = 1$ for all $i \in \{1, \ldots, \ell\}$. We then recognize the function as
$$ \begin{align} f(\xi) = H\left(\xi, \frac{p_1 \cdots p_\ell}{P_1 \cdots P_\ell}\right) J_{k-1}(\xi), \end{align} $$
where
$$ \begin{align} H(\xi, \lambda) := \Psi\left(\frac{X}{\xi}\sqrt{\lambda}\right) \quad \text{with} \quad X = \frac{4\pi L_1L_2\sqrt{P_1 \cdots P_\ell\, e^2}}{cQ}. \end{align} $$
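Indeed, with the dyadic parameters $P_i$ as above and $\lambda = (p_1 \cdots p_\ell)/(P_1 \cdots P_\ell)$, one checks that
$$ \begin{align*} \frac{X}{\xi}\sqrt{\lambda} = \frac{4\pi L_1 L_2 \sqrt{P_1 \cdots P_\ell\, e^2}}{cQ\xi} \sqrt{\frac{p_1 \cdots p_\ell}{P_1 \cdots P_\ell}} = \frac{4\pi L_1 L_2 \sqrt{e^2 p_1 \cdots p_\ell}}{cQ\xi}, \end{align*} $$
so that $H(\xi, \lambda) J_{k-1}(\xi)$ indeed recovers $f(\xi)$.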
We can then follow mutatis mutandis the proof of [Reference Baluyot, Chandee and Li1, Lemma 6.1] to deduce from the smoothness and the compact support of
$\Psi $
that the
$\mathbb {R}^2$
-Fourier transform of H is rapidly decaying, viz.,
$$ \begin{align} \widehat{H}(u, v) \ll_A \left((1+|u|)(1+|v|)\right)^{-A} \end{align} $$
for all $A \geqslant 1$, by repeated integration by parts. Moreover, since
$\Psi $
is compactly supported, we may insert a plateau function $W(\xi/X)$, equal to $1$ on the support of the above in $\xi$. By Fourier inversion, we then have
$$ \begin{align} f(\xi) = J_{k-1}(\xi) W\left( \frac{\xi}{X}\right) \iint_{\mathbb{R}^2} \widehat{H}(u,v)e(u\xi + v\tfrac{p}{P}) dudv. \end{align} $$
Inserting it in the above expression for
$\Sigma _2$
, we get
$$ \begin{align} \Sigma_2 &= \frac{2\pi i^{-k}}{N(Q)} \sum_{\substack{(L_1, L_2)=1 \\ L_1L_2 < L_0}} \frac{\mu(L_1L_2)}{L_1L_2} \prod_{p \mid L_1} (1-p^{-2})^{-1} \sum_{r \mid L_1} \frac{\mu(r)}{r^2} \sum_{d \mid L_2} \mu(d) \sum_{P_i, \mathrm{dyadic}} \end{align} $$
$$ \begin{align} & \quad \sum_{\substack{e \mid L_2^\infty \\ e < E}} \frac{1}{e} \iint_{\mathbb{R}^2} \widehat{H}(u, v) \sum_{c \geqslant 1} \sum_{p_i \neq p_j} \frac{1}{\sqrt{p_1 \cdots p_\ell}} \prod_{i=1}^\ell V\left(\frac{p_i}{P_i}\right) e\left( v\frac{p_i}{P_i}\right) S(u,p_1 \cdots p_\ell) dudv, \end{align} $$
where we let
$$ \begin{align} S(u,p) := \sum_{n \geqslant 1} \frac{S(e^2, p, cL_1rdn)}{cL_1rdn}h_u\left( \frac{4\pi\sqrt{pe^2}}{cL_1rdn}\right) \end{align} $$
with
$h_u(x) = W(x/X) J_{k-1}(x) e(u x)$
. This expression
$S(u,p)$
is exactly the arithmetic side of a Kuznetsov trace formula, with the new level
$cL_1rd$
. By the Kuznetsov trace formula from Proposition 2.5, the innermost sums can be rephrased as
$$ \begin{align} \sum_{c \geqslant 1} \sum_{p_i} \prod_{i=1}^\ell \frac{1}{\sqrt{p_i}} e\left( v\frac{p_i}{P_i}\right) V\left( \frac{p_i}{P_i}\right) \left[\mathcal{D}(c,\mathfrak{p},u) + \mathcal{C}(c,\mathfrak{p},u) + \mathcal{H}(c, \mathfrak{p}, u)\right], \end{align} $$
where
$\mathfrak {p} = p_1 \cdots p_\ell $
and
$\mathcal {D}$
,
$\mathcal {C}$
, and
$\mathcal {H}$
stand for the discrete, continuous, and holomorphic contributions from the spectral side, respectively. Recall that they are explicitly defined by
$$ \begin{align} \mathcal{D}(c, p, u) & = \sum_j \frac{\overline{\alpha_j}(e^2) \alpha_j(p) \sqrt{pe^2}}{\cosh \pi \kappa_j} h_+(\kappa_j), \end{align} $$
$$ \begin{align} \mathcal{C}(c, p, u) & = \frac{1}{\pi} \sum_{\mathfrak{c}} \int_{\mathbb{R}} \frac{\sqrt{pe^2}}{\cosh \pi t} \overline{\varphi_{\mathfrak{c}}}(e^2, t) \varphi_{\mathfrak{c}}(p, t) h_+(t) dt, \end{align} $$
$$ \begin{align} \mathcal{H}(c, p, u) & = \frac{1}{2\pi} \sum_{\substack{\ell \geqslant 2 \\ 1 \leqslant j \leqslant \theta_\ell(cL_1rd)}} (\ell-1)! \sqrt{pe^2} \ \overline{\psi_{j, \ell}}(e^2) \psi_{j, \ell}(p) h_h(\ell), \end{align} $$
where the precise notations are as in Section 2.4.2.
Proposition 4.2 With the above notations, we have
$$ \begin{align} \sum_{c \geqslant 1} \sum_{p \asymp P} \frac{1}{\sqrt{p}} e\left( v\frac{p}{P}\right) V\left(\frac{p}{P}\right) \mathcal{D}(c, p, u) & \ll Q^\varepsilon (1+|u|)^2 (1+|v|)^2 \frac{\sqrt{P}}{Q}, \end{align} $$
$$ \begin{align} \sum_{c \geqslant 1} \sum_{p \asymp P} \frac{1}{\sqrt{p}} e\left( v\frac{p}{P}\right) V\left(\frac{p}{P}\right) \mathcal{H}(c, p, u) & \ll Q^\varepsilon (1+|u|)^2 (1+|v|)^2 \frac{\sqrt{P}}{Q}, \end{align} $$
$$ \begin{align} \sum_{c \geqslant 1} \sum_{p \asymp P} \frac{1}{\sqrt{p}} e\left( v\frac{p}{P}\right) V\left(\frac{p}{P}\right) \mathcal{C}(c, p, u) & \ll Q^\varepsilon (1+|u|)^2 (1+|v|)^2 \left(\frac{\sqrt{P}}{Q} + P^{1/4+\varepsilon} \right). \end{align} $$
We obtain the analogous statement for the product by an immediate induction. The spectral aspects of the computations are exactly as in [Reference Baluyot, Chandee and Li1, Propositions 6.2 and 6.3], where they are extensively treated, and we therefore do not reproduce all the details. The main point allowing us to import the computations from loc. cit. is that the quantities involved only differ by logarithmic factors, while the bound ultimately obtained for $\Sigma_2$ therein displays a power saving in Q, see [Reference Baluyot, Chandee and Li1, End of Section 6]. Since $P \ll c(f) \leqslant Q$, Proposition 4.2 indeed implies Proposition 4.1.
We briefly explain how to bound the discrete and holomorphic parts, following [Reference Baluyot, Chandee and Li1, Proposition 6.2]. The discrete contribution (the holomorphic one is analogous, and easier since we have the Deligne bound) involves the sums
$$ \begin{align} \sum_{c \geqslant 1} \sum_p \frac{1}{\sqrt{p}} e\left(v\frac{p}{P}\right) \mathcal{D}(c, p, u) V\left( \frac{p}{P} \right)= \sum_{c \geqslant 1} \sum_j \frac{e \overline{\rho_j(e^2)}}{\cosh \pi \kappa_j} h_+(\kappa_j) \sum_p \frac{\sqrt{p} \rho_j(p)}{\sqrt{p}} e\left(v\frac{p}{P}\right) V\left(\frac{p}{P}\right). \end{align} $$
We start by bounding this innermost p-sum.
Lemma 4.3 We have
$$ \begin{align} \sum_p \frac{\sqrt{p} \rho_j(p)}{\sqrt{p}} e\left( \frac{vp}{P} \right) V\left( \frac{p}{P}\right) \ll (|\rho_j(1)| + |\rho_f(1)|) (cL_1rd)^\varepsilon \log(P)^\varepsilon (1+|v|)^2, \end{align} $$
where f denotes the newform underlying the (possibly old) form indexed by j (see [Reference Baluyot, Chandee and Li1]).
Proof By Mellin inversion applied to $W_v(x) := e(vx) V(x)$, after inserting a smooth plateau function $V_0$ equal to $1$ on the support of V, we have
$$ \begin{align} \sum_p \frac{\sqrt{p}\, \rho_j(p)}{\sqrt{p}} e\left( \frac{vp}{P}\right) V\left( \frac{p}{P}\right) = \frac{1}{2i\pi} \sum_p \frac{\sqrt{p}\, \alpha_j(p)}{\sqrt{p}} V_0\left( \frac{p}{P}\right) \int_{(0)} p^{-s} \tilde{W}_v(s) P^s\, ds. \end{align} $$
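Here $\tilde{W}_v$ denotes the Mellin transform of $W_v$; explicitly, the inversion formula used is the classical pair
$$ \begin{align*} \tilde{W}_v(s) = \int_0^\infty W_v(x)\, x^{s-1}\, dx, \qquad W_v(x) = \frac{1}{2i\pi} \int_{(0)} \tilde{W}_v(s)\, x^{-s}\, ds, \end{align*} $$
which is valid here since $W_v$ is smooth and compactly supported away from $0$, V being a dyadic cutoff.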
We have that $\tilde{W}_v(it) \ll_A ((1+|v|)/(1+|t|))^{A}$ for any fixed $A \geqslant 0$, by repeated integration by parts. Swapping the summation and the integration, we therefore get
$$ \begin{align} \frac{1}{2i\pi} \int_{(0)} P^s \tilde{W}_v(s) \sum_p \frac{\alpha_j(p) \sqrt{p}}{p^{1/2+it}} V_0\left(\frac{p}{P}\right) ds. \end{align} $$
By the “trivial bound” on the power one terms from Lemma 3.4, we have that
$$ \begin{align} \sum_p \frac{\alpha_j(p) \sqrt{p}}{p^{1/2+it}} V_0\left( \frac{p}{P}\right)& \ll |\rho_j(1)| \log(cL_1 rd) \log(X)^\varepsilon + |\rho_f(1)| (cL_1 rd)^\varepsilon. \end{align} $$
Therefore, applying the decay of $\tilde{W}_v(s)$ written above with $A=2$, which ensures the convergence of the vertical integral, we get that the whole sum over p is indeed
$$ \begin{align} \ll (|\rho_j(1)| + |\rho_f(1)|) (cL_1rd)^\varepsilon \log(P)^\varepsilon (1+|v|)^2, \end{align} $$
giving the claimed result.
Coming back to the whole discrete contribution and inserting the above bound, we get that
$$ \begin{align} \sum_{c \geqslant 1} \sum_p \frac{1}{\sqrt{p}} e\left(v\frac{p}{P}\right) V\left(\frac{p}{P}\right) \mathcal{D}(c, p, u) & = \sum_{c \geqslant 1} \sum_j \frac{e \overline{\rho_j(e^2)}}{\cosh \pi \kappa_j} h_+(\kappa_j) \sum_p \frac{\sqrt{p}\,\rho_j(p)}{\sqrt{p}} e\left( \frac{vp}{P}\right) V\left( \frac{p}{P}\right) \end{align} $$
$$ \begin{align} & \ll \sum_{c \geqslant 1} \min(X^{k-1}, X^{-1/2}) (cL_1 dr)^\varepsilon \log(P)^\varepsilon (1+|v|)^{2} e^{1+\varepsilon} \frac{1 + |\log X|}{F^{1-\varepsilon}} \end{align} $$
$$ \begin{align} & \qquad \times \sum_j \frac{|\rho_f(1)|(|\rho_j(1)| + |\rho_f(1)|)}{\cosh \pi \kappa_j} \left( \frac{F}{1 + \kappa_j}\right)^C,\end{align} $$
where we used the bound on
$h_+$
given in [Reference Baluyot, Chandee and Li1], with in particular
$F \asymp (1+|u|)(1+4\pi L_1L_2\sqrt {Pe^2}/cQ)$
, as well as the bound
$e\rho _j(e^2) \ll e^{1+\varepsilon } |\rho _f(1)|$
on the coefficients. We can now estimate the spectral sum according to the position of the spectral parameter $\kappa_j$ relative to F:
$$ \begin{align*} &\sum_j \frac{|\rho_f(1)|(|\rho_j(1)| + |\rho_f(1)|)}{\cosh \pi \kappa_j} \left( \frac{F}{1 + \kappa_j}\right)^C \\&\quad\ll \sum_{|\kappa_j| < F} \frac{|\rho_j(1)|^2}{\cosh \pi \kappa_j} \frac{1}{F^{1-\varepsilon}} + \sum_{|\kappa_j|> F} \frac{|\rho_j(1)|^2}{\cosh \pi \kappa_j} \frac{1}{F^{1-\varepsilon}} \left( \frac{F}{1+|\kappa_j|}\right)^{2+\varepsilon}. \end{align*} $$
The first sum is bounded by the spectral large sieve. We can then finish the proof as in [Reference Baluyot, Chandee and Li1], mutatis mutandis. It then remains to bound the continuous spectrum: this is done following exactly the same lines as in [Reference Baluyot, Chandee and Li1], using the induction as in [Reference Cheek, Gilman, Jaber, Miller and Tomé3], and simply replacing the “trivial bound” from [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Lemma 2.12] by the “trivial bound” from Lemma 3.4. This completes the proof of Proposition 4.1 and of Theorem 3.1.
4.2 Moment method and distribution
As in [Reference Radziwiłł and Soundararajan16], the determination of the moments in Theorem 3.1 essentially allows us to say that $P(f,x)$, hence the central value, mimics the behavior of a normal random variable, in accordance with the Keating–Snaith Conjecture 1.2. We encapsulate in the following statement the distributional consequence of this moment method.
Corollary 4.4 We have, for every sequence $(b_f)_{f \in H_k(q)}$,
$$ \begin{align} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{\substack{f \in H_k(q) \\ P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta)}} b_f = (M(\alpha, \beta) + o(1)) \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} b_f, \end{align} $$
where
$$ \begin{align} M(\alpha, \beta) := \frac{1}{\sqrt{2\pi}}\int_\alpha^\beta e^{-x^2/2} dx. \end{align} $$
Proof Theorem 3.1 shows that the $\ell$-th moment of $P(f,x)/\sqrt{\log\log x}$ asymptotically behaves as the $\ell$-th moment of the normal distribution, i.e., when x grows to infinity,
$$ \begin{align} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} \left( \frac{P(f,x)}{\sqrt{\log\log x}} \right)^\ell b_f \sim \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} x^\ell e^{-x^2/2} dx \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right)\sideset{}{^h}\sum_{f \in H_k(q)} b_f, \end{align} $$
for all
$\ell \geqslant 0$
, so we deduce that, for any polynomial
$R \in \mathbb {R}[X]$
,
$$ \begin{align} \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right) \sideset{}{^h}\sum_{f \in H_k(q)} R\left( \frac{P(f,x)}{\sqrt{\log\log x}} \right) b_f \sim \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} R(x) e^{-x^2/2} dx \sum_{q \geqslant 1} \Psi\left(\frac{q}{Q}\right)\sideset{}{^h}\sum_{f \in H_k(q)} b_f, \end{align} $$
and by approximating the characteristic function $\mathbf{1}_{(\alpha, \beta)}$ from above and below by polynomials R in $L^1$-norm with respect to the Gaussian measure, we deduce that (absorbing the smooth sum over levels and the harmonic sum over $f \in H_k(q)$ into the summation over $\mathcal{F}_Q$ to ease notation)
$$ \begin{align} \sum_{\substack{f \in \mathcal{F}_Q \\ P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta)}} b_f & = \sum_{f \in \mathcal{F}_Q} \mathbf{1}_{(\alpha, \beta)}\left( \frac{P(f,x)}{\sqrt{\log\log x}} \right) b_f \end{align} $$
$$ \begin{align} & \sim \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \mathbf{1}_{(\alpha, \beta)}(x) e^{-x^2/2} dx \sum_{f \in \mathcal{F}_Q} b_f = M(\alpha, \beta)\sum_{f \in \mathcal{F}_Q} b_f, \end{align} $$
as claimed.
4.3 Uncorrelation lemma
A similar result is needed when the sums are weighted by one-level densities, in analogy with the central result [Reference Radziwiłł and Soundararajan16, Proposition 3, second part].
Corollary 4.5 (Weighted moments property)
Assume the generalized Riemann hypothesis for the symmetric square L-functions $L(s, \mathrm{sym}^2 f)$. We have, for every smooth function h with compactly supported Fourier transform and every $\ell \geqslant 1$,
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^\ell D(f,h) = (M_\ell + o(1)) (\log\log(x))^{\ell/2} \int_{\mathbb{R}} W_{\mathrm{O}} h. \end{align} $$
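Here, as in Theorem 3.1, $M_\ell$ stands for the $\ell$-th moment of the standard Gaussian, namely
$$ \begin{align*} M_\ell = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} x^\ell e^{-x^2/2}\, dx = \begin{cases} \dfrac{\ell!}{2^{\ell/2} (\ell/2)!} & \text{if } \ell \text{ is even}, \\ 0 & \text{if } \ell \text{ is odd}. \end{cases} \end{align*} $$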
This corollary means that we can decouple the one-level density statement and the moment property, both exploiting trace formulas. In other words, one-level densities and sums over coefficients are uncorrelated.
Proof We have to study the sum
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^\ell D(f,h). \end{align} $$
The one-level density is understood by Proposition 2.1 and can be written as
$$ \begin{align} D(f, h) = \hat{h}(0) + \tfrac12 h(0) + P^{(1)}(f,h) + O\left( \frac{\log \log q}{\log q}\right), \end{align} $$
as proven for instance in [Reference Iwaniec, Luo and Sarnak11, (4.25)], as a consequence of the generalized Riemann hypothesis for $L(s, \mathrm{sym}^2 f)$, where the implied constant depends only upon the test function h. Note that the main term of this expression is $\hat{h}(0) + \tfrac12 h(0) = \int h W_{\mathrm{O}}$, the limiting one-level density; since it is independent of f and q, it can be pulled out of the sum, and Theorem 3.1 is therefore applicable as it stands, giving a contribution of
$$ \begin{align} \left(\hat{h}(0) + \tfrac12 h(0)\right) \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^\ell = (M_\ell + o(1)) (\log\log x)^{\ell/2} \int_{\mathbb{R}} W_{\mathrm{O}} h, \end{align} $$
which already accounts for the main term displayed in Corollary 4.5. The error term $O(\log\log q / \log q)$ contributes negligibly to the whole sum over the family. The remaining contribution is
$$ \begin{align} \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^\ell P^{(1)}(f, h), \end{align} $$
which, by applying the Cauchy–Schwarz inequality, is bounded by
$$ \begin{align} \left( \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P(f,x)^{2\ell} \right)^{1/2} \left( \frac{1}{N(Q)} \sum_{q \geqslant 1} \Psi\left( \frac{q}{Q} \right) \sideset{}{^h}\sum_{f \in H_k(q)} P^{(1)}(f, h)^2\right)^{1/2}. \end{align} $$
By Theorem 3.1, the first factor is bounded by $(\log\log(x))^\ell$, so that its square root is of the same size as the expected main term. The statement [Reference Cheek, Gilman, Jaber, Miller and Tomé3, Proposition 4.1], where the moments of the one-level density are studied, bounds the second factor by
$O(1/\log Q) = o(1)$
, proving that the whole contribution coming from
$P^{(1)}(f, h)$
is negligible, as claimed.
5 Proof of Theorem 1.4
With the above tools now at hand, we follow the strategy presented in [Reference Radziwiłł and Soundararajan16] in the case of quadratic twists of an elliptic curve. We show by an amplification process that there are not many small zeros, which will be used to prove that the sum over zeros in the explicit formula (2.9) only contributes an error term. The moment method will then allow us to select the values for which we are in the desired range, giving the result.
5.1 Amplification of small zeros
The following result, an analogue of [Reference Radziwiłł and Soundararajan16, Lemma 1], uses Theorem 3.1 to quantify the proportion of
$f \in H_k(q)$
such that
$P(f,x)$
falls into a specific range; and Corollary 4.5 to jointly quantify the proportion of
$f\in H_k(q)$
having not too many small zeros. Introduce the notation
$x = X^{1/\log \log \log X}$
for this section.
Proposition 5.1 The smooth averaged number of
$f\in H_k(q)$
such that
$P(f,x) / \sqrt {\log \log x} \in (\alpha , \beta )$
and such that there are no zeros with
$|\gamma _f| \leqslant (\log X \log \log X)^{-1}$
is at least
$$ \begin{align} \left(\tfrac{5}{8} + o(1)\right) M(\alpha, \beta)\, N(Q), \end{align} $$
where $M(\alpha, \beta)$ is the Gaussian measure of $(\alpha, \beta)$, as defined in (4.36).
Proof Choose for h the explicit Fejér kernel adapted to the maximal Fourier support $L=4$ allowed by the low-lying zero result given in Theorem 1.3, i.e., set
$$ \begin{align} h_0(x) = \left(\frac{\sin \pi x}{\pi x}\right)^2, \end{align} $$
which has Fourier transform supported in $(-1,1)$, and $h(x) = h_0(4x)$, so that $\hat{h}(y) = \tfrac14\hat{h}_0(y/4)$ is compactly supported in $(-4, 4)$. Let
$H = D(f,h)$
and
$\Psi = \Psi (q/Q)$
to lighten notation for the duration of the proof. We get from Corollary 4.4:
$$ \begin{align} \sum_{\substack{f \in H_k(q) \\ P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta)}} H\Psi& \sim M(\alpha, \beta)\sum_{f \in H_k(q)} H\Psi, \end{align} $$
and, by Corollary 4.5, we get
$$ \begin{align} \sum_{f \in H_k(q)} H\Psi \sim \int_{\mathbb{R}} W_{\mathrm{O}}h \sum_{f \in H_k(q)} \Psi = \frac34 \sum_{f \in H_k(q)} \Psi, \end{align} $$
because
$$ \begin{align} \int_{\mathbb{R}} W_{\mathrm{O}} h = \hat{h}(0) + \tfrac12 h(0) = \tfrac14 + \tfrac12 = \tfrac34 \end{align} $$
by the explicit choice of h (see [Reference Iwaniec, Luo and Sarnak11] for the proof of the optimality of this function in such a setting).
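This value can also be checked directly from the classical Fourier pair for the Fejér kernel chosen above:
$$ \begin{align*} \hat{h}_0(y) = \int_{\mathbb{R}} \left( \frac{\sin \pi x}{\pi x} \right)^2 e(-xy)\, dx = \max(0, 1 - |y|), \end{align*} $$
so that $h(0) = h_0(0) = 1$ and $\hat{h}(0) = \tfrac14 \hat{h}_0(0) = \tfrac14$, whence $\hat{h}(0) + \tfrac12 h(0) = \tfrac34$.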
We use a similar amplification argument to the one in [Reference Radziwiłł and Soundararajan16]. Rewrite the above sum as
$$ \begin{align} \sum_{(\alpha, \beta)} H \Psi = \sum_{\substack{(\alpha, \beta) \\ \exists}} H\Psi + \sum_{{\substack{(\alpha, \beta) \\ \nexists}}} H\Psi \end{align} $$
using
$\ell = (\log X \log \log X)^{-1}$
and the following notation:
$$ \begin{align} \sum_{\substack{(\alpha, \beta) \\ \exists}} H\Psi & = \sum_{\substack{f \in H_k(q) \\ \exists \ |\gamma_f| \leqslant \ell}} H\Psi \mathbf{1}_{P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta)}, \end{align} $$
$$ \begin{align} \sum_{{\substack{(\alpha, \beta) \\ \nexists}}} H\Psi & = \sum_{\substack{f \in H_k(q) \\ \nexists \ |\gamma_f| \leqslant \ell}} H\Psi \mathbf{1}_{P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta)}. \end{align} $$
The weights $h(\tilde{\gamma}_f)$ are nonnegative, since the function h we chose is nonnegative. If $L(s,f)$ has a zero $\gamma_f$ of size at most $\ell$, then $\tilde{\gamma}_f$ is of size at most $\log\log(X)^{-1}$, and its conjugate is also a zero of the same size. Since h is continuous with $h(0) = 1$, when X grows to infinity, both $h(\tilde{\gamma}_f)$ and $h(\tilde{\overline{\gamma}}_f)$ are at least $1-\varepsilon$, for any given $\varepsilon>0$. In particular, $H = D(f, h) \geqslant 2 - \varepsilon$ for such f, and we can therefore write
$$ \begin{align} &\sum_{(\alpha, \beta)} H \Psi = \sum_{\substack{(\alpha, \beta) \\ \exists}} H\Psi + \sum_{{\substack{(\alpha, \beta) \\ \nexists}}} H\Psi \\&\quad\geqslant (2-\varepsilon) \sum_{\substack{(\alpha, \beta) \\ \exists}} \Psi + \sum_{{\substack{(\alpha, \beta) \\ \nexists}}} H\Psi = (2-\varepsilon)\sum_{(\alpha, \beta)} \Psi + \sum_{{\substack{(\alpha, \beta) \\ \nexists}}} (H-2+\varepsilon)\Psi,\nonumber \end{align} $$
so that we get
$$ \begin{align} \sum_{(\alpha, \beta)} H\Psi - \sum_{\substack{(\alpha, \beta) \\ \nexists}} (H-2+\varepsilon)\Psi \geqslant (2-\varepsilon)\sum_{(\alpha, \beta)} \Psi. \end{align} $$
On the other hand, the above consequences of the moment method and of the limiting one-level density allow us to estimate the sums over all forms with restrictions on $P(f,x)$. More precisely, Corollary 4.4 implies
$$ \begin{align} \sum_{(\alpha, \beta)} \Psi \sim M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi, \end{align} $$
and combining Corollaries 4.4 and 4.5 as above gives
$$ \begin{align} \sum_{(\alpha, \beta)} H \Psi &\sim \frac{3}{4} M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi. \end{align} $$
We thus derive from (5.11) that
$$ \begin{align} \frac34 M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi - \sum_{\substack{(\alpha, \beta) \\ \nexists}} (H-2+\varepsilon)\Psi \geqslant (2-\varepsilon) M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi. \end{align} $$
Since
$0 \leqslant h \leqslant 1$
, we get
$$ \begin{align} (2-\varepsilon)\sum_{\substack{(\alpha, \beta) \\ \nexists}} \Psi \geqslant \sum_{\substack{(\alpha, \beta) \\ \nexists}} (2-H-\varepsilon)\Psi \geqslant \left(\tfrac54 - \varepsilon + o(1)\right) M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi, \end{align} $$
from which we obtain a lower bound for the smoothed number of $f \in H_k(q)$ having no zeros of size at most $\ell$, viz.,
$$ \begin{align} \sum_{\substack{(\alpha, \beta) \\ \nexists}} \Psi\left( \frac{q}{Q} \right) \geqslant (\tfrac58 - \varepsilon) M(\alpha, \beta) \sum_{f \in H_k(q)} \Psi\left( \frac{q}{Q} \right), \end{align} $$
for all
$\varepsilon>0$
, as desired.
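For the record, the constant $\tfrac58$ arises from the elementary computation
$$ \begin{align*} \frac{\tfrac54 - \varepsilon}{2 - \varepsilon} = \frac58 - \frac{3\varepsilon}{8(2 - \varepsilon)} \geqslant \frac58 - \varepsilon, \end{align*} $$
valid for every $0 < \varepsilon < 1$, after dividing the penultimate inequality by $2 - \varepsilon$.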
Remark 9 The constant
$5/8$
is exactly the one appearing in Theorem 1.4, and this is where we see that the strength of the results toward the density conjecture, i.e., the width of the allowed Fourier support, determines the quality of this lower bound. Note that this gives the same value as the method of [Reference Iwaniec, Luo and Sarnak11] for obtaining lower bounds for nonvanishing, as anticipated by [Reference Radziwiłł and Soundararajan16].
5.2 Few zeros contributing a lot
The following result quantifies how rare the $f \in H_k(q)$ are for which the contribution from the sum over zeros in the explicit formula (2.9) is large.
Proposition 5.2 For
$x \leqslant q \leqslant Q$
, the number of
$f \in H_k(q)$
such that
$$ \begin{align} \sum_{|\gamma_f| > (\log X \log\log X)^{-1}} \log\left(1 + \frac{1}{(\gamma_f \log x)^{2}}\right) \geqslant (\log\log\log X)^3 \end{align} $$
is asymptotically dominated by
$ X/\log \log \log X$
.
Proof The same proof as in [Reference Radziwiłł and Soundararajan16, Lemma 2] holds mutatis mutandis.
5.3 Conclusion
This closely follows the argument of [Reference Radziwiłł and Soundararajan16], now that all the corresponding estimates have been established. We write it here for the sake of completeness. Recall from Proposition 2.2, with
$x = c(f)$
, that
$$ \begin{align} \log L(\tfrac12,f) = P(f,x) - \tfrac12 \log\log(x) + O\left(\sum_{\gamma_f} \log(1 + (\gamma_f \log x)^{-2})\right). \end{align} $$
By Proposition 5.1, we may select f’s such that
$P(f,x)/\sqrt {\log \log X} \in (\alpha , \beta )$
and such that there are no small zeros, losing at most a proportion $\tfrac 38$ of the whole family, i.e.,
$$ \begin{align} \sum_{\substack{f \in H_k(q) \\ P(f,x)/\sqrt{\log\log x} \in (\alpha, \beta) \\ \nexists |\gamma_f| \leqslant (\log X \log\log X)^{-1}}} 1 \geqslant \tfrac58 M(\alpha, \beta) N(Q). \end{align} $$
By Proposition 5.2, we may remove f’s such that the sum over zeros larger than
$(\log X \log \log X)^{-1}$
contributes more than
$(\log \log \log (X))^3$
, since such f's are asymptotically negligible in number, while for the remaining ones this contribution is harmless, as the computation below shows.
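Indeed, with the choice $x = X^{1/\log\log\log X}$, we have
$$ \begin{align*} \log\log x = \log\log X - \log\log\log\log X \sim \log\log X, \end{align*} $$
so that $(\log\log\log X)^3 = o\left(\sqrt{\log\log x}\right)$ and the contribution of these zeros is negligible at the scale of $P(f,x)/\sqrt{\log\log x}$.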
The proportion of f such that $P(f, c(f))/\sqrt{\log\log c(f)}$ falls into $(\alpha, \beta)$ is therefore asymptotically larger than $\tfrac 58 M(\alpha, \beta)$, as claimed in the theorem, thereby ending the proof of Theorem 1.4.
Acknowledgements
We are grateful to Maksym Radziwiłł and Kannan Soundararajan for enlightening discussions. We also thank Steven J. Miller, Youness Lamzouri, Oleksiy Klurman and Pico Gilman for further comments. This work started when D. L. was visiting Kyushu University and ended when A. I. S. was visiting Université de Lille; we thank both institutions for the good working environment.


