On the Hasse principle for complete intersections

We prove the Hasse principle for a smooth projective variety $X\subset \mathbb {P}^{n-1}_\mathbb {Q}$ defined by a system of two cubic forms $F,G$ as long as $n\geq 39$. The main tool here is the development of a version of Kloosterman refinement for a smooth system of equations defined over $\mathbb {Q}$.

While the case of lower degree hypersurfaces (R = 1) has seen several breakthroughs in recent times, the case of general complete intersections has seen comparatively little success. In the case of a pair of quadrics over Q, Munshi introduced a version of the delta method which allows one to use Kloosterman refinement [15]. He combined this with Poisson summation to verify the Hasse principle when n ≥ 11, provided that the intersection is non-singular. Unfortunately, the techniques used there fail to generalise effectively beyond the case of two quadrics.
There have been two recent notable breakthroughs. Myerson [16], [17] improved the square dependence on R in Birch's result to a linear one. When d = 2 and 3, these results improve the lower bound to n − σ ≥ 8R and 25R respectively. This is a significant improvement when R is large. However, when R is small (say R = 2), they fail to improve upon Birch's bounds. Typically, one expects a better understanding of the distribution of rational points when d and R are relatively small. When R = 1, this is facilitated by an analytic technique called Kloosterman refinement, which allows one to use the Poisson summation formula in an effective way. A recent breakthrough was obtained in the second author's work [20], where a version of Farey dissection was developed for a system of two forms in the function field setting. Unfortunately, so far the method there does not extend to the Q setting.
The main purpose of this work is to provide a route to Kloosterman refinement for a system of forms over Q in the settings where Poisson summation does not work directly. In particular, the method here should improve upon the current results as long as the defining forms F and G of X are not two quadrics or a cubic and a quadric.
We now define the setting of this paper. Let F(x), G(x) ∈ Z[x_1, ..., x_n] be two homogeneous cubic forms in n variables with integer coefficients, and let X denote the smooth projective variety defined by their simultaneous zero locus. The long-standing result of Birch, which requires n ≥ 49, is yet to be improved in the current setting (a pair of cubics). In the case of a system of diagonal cubic forms, one can obtain significantly stronger results. In particular, Brüdern and Wooley [5], [6] proved that the Hasse principle is true for a smooth system of R diagonal cubic forms in n variables provided that n ≥ 6R + 1.
In this paper, we will use a combination of Kloosterman refinement and a two dimensional version of averaged van der Corput differencing to improve upon Birch's result. In particular, we aim to prove the following result:

Theorem 1.1. Let $X := X_{F,G} \subset \mathbb{P}^{n-1}_{\mathbb{Q}}$ be a smooth complete intersection variety defined by a system of two cubic forms F and G. Then X satisfies the Hasse principle provided that n ≥ 39.
To the best of the authors' knowledge, this is the first known improvement of Birch's result in this case. As is typical of the methods used here, with some more work the result can be extended to cover the case of singular varieties, as long as n − σ ≥ 40. However, here we will stick to the non-singular setting. The limit of the method here is n ≥ 38. Akin to the work [13] of Marmon and the second author, saving an extra variable would require substantially new technical input, which we will not attempt to obtain here.
For those familiar with circle method techniques, there are two key innovations here that facilitate Theorem 1.1. The first improvement comes from developing a two dimensional version of averaged van der Corput differencing. This, followed by Weyl differencing, could itself hand us Theorem 1.1 when n ≥ 43. Interestingly, averaged van der Corput differencing followed by pointwise Poisson summation fails to improve upon this. Our second innovation comes from combining the averaged van der Corput process with a version of Kloosterman refinement followed by Poisson summation. This combination saves us 4 extra variables. If one were to combine the averaged van der Corput process here with the method of Munshi [15], our rough calculations show that one would require n ≥ 42 variables. This is to be expected, as the method in [15] would result in the use of a larger total modulus (the parameter Q appearing in this paper). This is wasteful if one is dealing with forms in many variables, rendering the method less effective when dealing with complete intersections which are not defined by two quadrics.
We now give a more detailed outline of the key ideas. From now on, we will assume that X is a complete intersection of two cubics as before, further containing a non-singular adelic point. Given a smooth weight function ω ∈ $C_c^\infty(\mathbb{R}^n)$ and a large parameter 1 ≤ P, we define the following smooth counting function:
$$N(P) := \sum_{\substack{\mathbf{x} \in \mathbb{Z}^n \\ F(\mathbf{x}) = G(\mathbf{x}) = 0}} \omega(\mathbf{x}/P).$$
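To make the definition concrete, the following toy computation counts the points that N(P) is designed to measure. It is purely illustrative: it uses a sharp cutoff in place of the smooth weight ω, and the forms F = x_1^3 − x_2^3, G = x_2^3 − x_3^3 are hypothetical stand-ins (their intersection is not smooth), chosen only so that the count is easy to check by hand.

```python
# Toy illustration of the counting function N(P): count integer points
# x in the box |x_i| <= P lying on F(x) = G(x) = 0. The paper uses a
# smooth weight omega(x/P); a sharp cutoff is used here for simplicity,
# and the forms below are illustrative stand-ins.

def F(x1, x2, x3):
    return x1**3 - x2**3

def G(x1, x2, x3):
    return x2**3 - x3**3

def N(P):
    count = 0
    for x1 in range(-P, P + 1):
        for x2 in range(-P, P + 1):
            for x3 in range(-P, P + 1):
                if F(x1, x2, x3) == 0 and G(x1, x2, x3) == 0:
                    count += 1
    return count

# Over the integers, x^3 = y^3 forces x = y, so the solutions are exactly
# the diagonal x1 = x2 = x3, giving 2P + 1 points in the box.
print(N(5))  # 11
```

The interesting analytic question, of course, is the asymptotic growth of such counts for genuinely smooth systems, which is what Theorem 1.2 addresses.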
Our main tool in proving Theorem 1.1 is the asymptotic formula for N (P ) obtained in Theorem 1.2. Before stating it, let us define the weight function ω in the following way. We will choose ω to be a smooth weight function, centred at a non-singular point x 0 ∈ X(R) with the additional property that its support is a "small" region around x 0 . Upon recalling (1.2), it is easy to see that the existence of such a point is guaranteed by our earlier assumption that X has a non-singular adelic point. In particular, the point x 0 ∈ X(R) must have Rank(∇F (x 0 ), ∇G(x 0 )) = 2.
Using the homogeneity of F and G, we may further assume that |x_0| < 1. This condition is superficial, and is only assumed to simplify the implied constants appearing in our argument. Let γ ∈ $C_c^\infty(\mathbb{R}^n)$ denote a non-negative smooth function supported in the hypercube [−1, 1]^n. Given a parameter 0 < ρ < 1 to be suitably decided later, we define
(1.4) $\omega(\mathbf{x}) := \gamma(\rho^{-1}(\mathbf{x} - \mathbf{x}_0)).$
We are now set to state our main counting result, which directly implies Theorem 1.1.
Our main tool here will be provided by the circle method. It begins by writing the counting function N(P) as an integral of a suitable exponential sum:
(1.5) $N(P) = \int_{[0,1]^2} S(\boldsymbol{\alpha})\, d\boldsymbol{\alpha}, \quad \text{where} \quad S(\boldsymbol{\alpha}) := \sum_{\mathbf{x} \in \mathbb{Z}^n} \omega(\mathbf{x}/P)\, e\big(\alpha_1 F(\mathbf{x}) + \alpha_2 G(\mathbf{x})\big)$
denotes the corresponding exponential sum.
In the traditional circle method, the unit square I := [0, 1]^2 is split into major arcs M, which consist of the points in I which are "close" to a rational point a/q with a = (a_1, a_2) ∈ Z^2 and "small" denominator q, and minor arcs m = I \ M. The limitation of the process usually occurs while bounding the integral $\int_{\mathfrak{m}} S(\boldsymbol{\alpha})\, d\boldsymbol{\alpha}$.
When R = 1, Kloosterman's revolutionary idea [11] was to apply Farey dissection to partition [0, 1] and use it to bound the minor arc contribution. This allows us to treat the minor arcs in a similar way to the major arcs. Upon setting α := a/q + z and fixing the value of z, this idea essentially allows us to consider averages of the corresponding exponential sum of the form
$$\sum_{\substack{a \bmod q \\ (a,q) = 1}} S(a/q + z).$$
The extra average over a allows us to save an extra factor of size O(q 1/2 ), when q is sufficiently large and z relatively small.
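The square-root saving from averaging over a is classically visible in the complete Kloosterman sums $S(a, b; p) = \sum^{*}_{x \bmod p} e_p(ax + b\bar{x})$: the trivial bound is p − 1, while Weil's bound gives $2\sqrt{p}$. The sketch below (a hypothetical helper, not code from this paper) checks this numerically for one prime.

```python
import cmath
import math

def kloosterman(a, b, p):
    """Compute S(a, b; p) = sum over x in (Z/pZ)* of e((a*x + b*x^{-1})/p)."""
    total = 0.0 + 0.0j
    for x in range(1, p):
        xinv = pow(x, p - 2, p)  # inverse of x mod p, valid for p prime
        total += cmath.exp(2j * cmath.pi * (a * x + b * xinv) / p)
    return total

p = 101
K = kloosterman(1, 1, p)
# Weil's bound |S(1,1;p)| <= 2 sqrt(p) -- far below the trivial bound p - 1.
print(abs(K) <= 2 * math.sqrt(p))  # True
```

The sum is real (terms at x and −x are conjugate), and its size is of order $\sqrt{p}$ rather than p, which is exactly the kind of cancellation that Kloosterman refinement exploits.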
When R = 2, finding an analogue of Farey dissection which can be used to attain Kloosterman refinement over Q has proved to be a major problem. In [20], the second author managed to find such an analogue in the function field setting, but so far it is not known how to use these ideas when working over Q. The path to Kloosterman refinement in this paper will not focus on innovations to Farey dissection, and will instead focus on improving van der Corput differencing.
In the setting that we will discuss (a pair of cubics), the Poisson summation formula cannot be applied directly. More precisely, it is possible to apply Poisson summation, but the resulting bound is trivial, because the corresponding exponential integral bounds behave badly when the degrees of our forms become too large.
We therefore must use a differencing argument (such as van der Corput differencing) to bound |S(α)| by a sum involving polynomials of lower degree. To do this, one essentially starts by using Cauchy's inequality to bound $\sum^{*}_{a \bmod q} |S(a/q + z)|$ in terms of $\sum^{*}_{a \bmod q} |S(a/q + z)|^2$ (1.6). This leads us, for a fixed integer q and a fixed small z ∈ I, to consider averages of the form (1.7), where Q is a suitable parameter to be fixed later. This parameter Q arises from using a two dimensional version of the Dirichlet approximation theorem. We further develop a two dimensional version of the averaged van der Corput differencing used in [7], [8] and [13] to estimate the averages of |S(a/q + z)|^2 over z. This leads us to considering quadratic exponential sums for a system of differenced quadratic forms. The extra averaging over a in (1.7) leads to a saving of size O(q) in the estimation of $\sum^{*}_{a} |S(a/q + z)|^2$, and in light of the squaring technique used in (1.6), it overall saves us a factor of size O(q^{1/2}) when q is square-free. The methods developed here are versatile and can be readily adapted to deal with general complete intersections. When dealing with averages of squares of the corresponding exponential sums near rationals of the type (a_1, ..., a_R)/q, where q is square-free, we would be able to save a factor of size O(q^{R/4}) over the bounds coming from averaged van der Corput differencing combined with pointwise Poisson summation. To the best of the authors' knowledge, this is the first known version of Kloosterman refinement which generalises in this way over Q. This method could further be combined with other versions of Kloosterman refinement in contexts where a degree-lowering squaring technique is essential. For instance, in the function field setting, this method could potentially be combined with the method in the aforementioned work of the second author [20] to save a factor of size O(q^{(R−1)/4+1/2}) instead.
1.1. Acknowledgements. We would like to thank Tim Browning and Oscar Marmon for their help.

Background on a pair of quadrics
Exponential sums for a pair of quadrics will feature prominently in this work. Let Q_1(x), Q_2(x) be a pair of quadratic forms in n variables with integer coefficients, and consider the variety V defined by Q_1(x) = Q_2(x) = 0 for x ∈ Q^n. Let Sing_K(V) be the (projective) singular locus of V. When Q_1 and Q_2 intersect properly, namely, if V is of projective dimension n − 3, then we can express the singular locus of V as follows:
(2.1) $\mathrm{Sing}_K(V) = \{\mathbf{x} \in V : \mathrm{Rank}\big(\nabla Q_1(\mathbf{x}), \nabla Q_2(\mathbf{x})\big) \leq 1\}.$
We say that the intersection variety V of Q_1(x) and Q_2(x) is non-singular if dim Sing_K(V) = −1, and singular otherwise. It should be noted that (2.1) only truly encapsulates the set of singular points when Q_1, Q_2 have a proper intersection over K (that is, when the forms Q_1(x), Q_2(x) share no common factor over K). However, Sing_K(V) is still a well defined set with a well defined dimension even when Q_1 and Q_2 intersect improperly, and so we will also use this definition in that case.
We will now begin by noting a slight generalisation of [13, Lemma 4.1] in the context of two quadrics, which will be vital at various stages of this paper:

Lemma 2.1. Let Q_1, Q_2 be a pair of quadratic forms defining a complete intersection X = V(Q_1, Q_2). Let Π be a collection of primes with #Π = r ≥ 0, and define Π_a := {p ∈ Π | p > a} for every a ∈ N. Then there exist a constant c = c(n) and a set of primitive linearly independent vectors e_1, ..., e_n ∈ Z^n satisfying the following property for any integer 0 ≤ η ≤ n − 1, any subset ∅ ≠ I ⊂ {1, 2} and any υ ∈ {∞} ∪ Π_{2c}: the subspace Λ_η ⊂ $\mathbb{P}^{n-1}_{F_\upsilon}$ spanned by the images of e_1, ..., e_{n−η} is such that (2.2) and (2.3) hold. Here, given ∅ ≠ I ⊂ {1, 2}, we let X_I denote the complete intersection variety defined by the forms {F_i : i ∈ I}. Moreover, the basis vectors e_i can be chosen so that (2.4) holds for every i = 1, ..., n, and
(2.5) $L^{-n} \ll \det(e_1, \cdots, e_n) \ll L^n$
for some constant L = O_n(r + 1).
Proof. Note that the statement of this lemma is identical to that of [13, Lemma 4.1], except that the latter carries an additional assumption on a certain closed subscheme, which is equivalent to assuming that X_1 and X_2 intersect properly. Therefore, it is enough to consider the various cases in which we have an improper intersection. In each of these cases, a somewhat softer argument works.
In the trivial case when Q_1 = Q_2 = 0, any basis e_1, ..., e_n will work. When Q_2 = λQ_1, where λ ∈ K and Q_1 is a non-zero quadratic form, we may apply [13, Lemma 4.1] to the hypersurface X_1 alone to find a basis e_1, ..., e_n chosen such that (2.2) and (2.3) hold for I = {1}. This choice will clearly work for all I ⊂ {1, 2}.
We are left with the case Q_1 = L_1L_2, Q_2 = L_1L_3, where L_i = v_i · x and L_2 is not a scalar multiple of L_3. In this case, it is easy to check that the singular locus of X_1 ∩ X_2 is the hyperplane L_1 = 0. Here, we may apply [13, Lemma 4.1] to the single variety defined by the cubic form L_1L_2L_3 = 0. The basis that we get from this process will work here as well.
We are now ready to prove the following generalisation of [9, Proposition 2.1]. This will be particularly helpful when we are working with exponential sums of the form
$$\sum^{*}_{\mathbf{a} \bmod q} \sum_{\mathbf{x} \bmod q} e_q\big(a_1 Q_1(\mathbf{x}) + a_2 Q_2(\mathbf{x})\big),$$
where q is square-full, in Section 5.3. Here, as is standard, the * next to the sum denotes that the sum is over (a, q) = 1, and e_q(x) := exp(2πix/q).
Proposition 2.2. Let ν denote either a finite prime with ν ≫_n 1 or the infinite prime, let F_ν denote the corresponding finite field or Q respectively, and let s_ν := dim Sing_{F_ν}(V), where V is defined as above. Let M_1 and M_2 denote the integer matrices defining the forms Q_1 and Q_2 respectively. Then for every (a_1, a_2) ∈ F_ν^2 \ {(0, 0)}, the rank of the matrix a_1M_1 + a_2M_2 associated to the quadratic form a_1Q_1 + a_2Q_2 satisfies
$\mathrm{Rank}(a_1M_1 + a_2M_2) \geq n - s_\nu - 2.$
Moreover, there exists a set Γ = {λ_1, ..., λ_k} ⊂ F_ν such that, as long as a_1 ≠ λ_i a_2 for every 1 ≤ i ≤ k ≤ n and a_2 ≠ 0, then
$\mathrm{Rank}(a_1M_1 + a_2M_2) \geq n - s_\nu - 1.$

Proof. We firstly note that when s_ν(Q_1, Q_2) = −1, we recover the non-singular case; in particular, we must show that Rank(a_1M_1 + a_2M_2) ≥ n − 1 for every a ∈ F_ν^2 \ {(0, 0)}. Next, we note that Rank(a_1M_1 + a_2M_2) < n if and only if there is some j ∈ {1, ..., n} such that a_1λ_{1,j} + a_2λ_{2,j} = 0, which imposes the desired restriction on (a_1, a_2) provided that (λ_{1,j}, λ_{2,j}) ≠ (0, 0). However, if (λ_{1,j}, λ_{2,j}) = (0, 0), then it is easy to see from the definition of Q_i(x) that ∇Q_1(me_j) = ∇Q_2(me_j) = 0 for every m ∈ F_ν (provided ν > 2), where e_j is the j-th vector in the standard basis. This implies that me_j ∈ Sing(Q_1, Q_2), and so s_ν(Q_1, Q_2) ≥ 0, giving us a contradiction.
If s_ν(Q_1, Q_2) ≥ 0, we invoke Lemma 2.1. As long as ν ≫_n 1, we obtain a basis e_1, ..., e_n of F_ν^n such that the system of quadrics $\tilde{Q}_1, \tilde{Q}_2$ corresponding to the restriction of Q_1 and Q_2 to the subspace Λ_{n−s_ν−1} obeys (2.2)–(2.3). This defines a system of non-singular quadratic forms in n − s_ν − 1 variables, whose complete intersection is non-singular over F_ν as well. Now let $\tilde{M}_1$ and $\tilde{M}_2$ denote the matrices defining the forms $\tilde{Q}_1$ and $\tilde{Q}_2$ respectively. The proposition now follows upon noticing that $\mathrm{Rank}(a_1M_1 + a_2M_2) \geq \mathrm{Rank}(a_1\tilde{M}_1 + a_2\tilde{M}_2)$ for any pair (a_1, a_2) ∈ F_ν^2 \ {(0, 0)}, and further using our analysis of the non-singular case above.
One of the key bounds for exponential sums in this work will be provided by Weyl differencing. Typically, these bounds use a 'Birch-type' singular locus σ_K, as defined in (2.10), instead of the singular locus (2.1) used here. A relation between the two has been studied in [4]. A minor modification of [16, Lemma 1.1] readily provides us with the following result:

Lemma 2.3. Let F, G be non-constant forms of any degree, let K be a field, and let σ_K be defined as in (2.10). Then the corresponding bound holds for any (a_1, a_2) ∈ K^2 \ {(0, 0)}.
Our main exponential sum bound for square-full moduli q will be in terms of the size of the null set
$\mathrm{Null}_q(M) := \{\mathbf{x} \in (\mathbb{Z}/q\mathbb{Z})^n : M\mathbf{x} \equiv 0 \pmod{q}\}$
for a matrix M. The following three lemmas concern this set.
Lemma 2.4. For every u, v ∈ N and every M ∈ M_n(Z), we have
$\#\mathrm{Null}_{uv}(M) \leq \#\mathrm{Null}_u(M) \cdot \#\mathrm{Null}_v(M),$
with equality when (u, v) = 1.

Proof. It is easy to check that #Null_q(M) is a multiplicative function of q, so we will not prove the statement when (u, v) = 1. We will be brief when showing the inequality, as this is a standard Hensel Lemma type of argument. If x ∈ Null_{uv}(M), then we must have x ∈ Null_u(M) upon reduction modulo u. Hence, if we write x := y + uz, then y must be in Null_u(M). Now, fix y and assume that there are some z_1, z_2 (not necessarily distinct) such that y + uz_i ∈ Null_{uv}(M). Then M(y + uz_i) ≡ 0 mod uv. Therefore, upon letting z_2 := z_1 + z, we must have Muz ≡ 0 mod uv, that is, z ∈ Null_v(M). Hence, there can be at most #Null_v(M) values of z such that y + uz ∈ Null_{uv}(M) for any given y. We also have that y must be in Null_u(M). This gives us
$\#\mathrm{Null}_{uv}(M) \leq \#\mathrm{Null}_u(M) \cdot \#\mathrm{Null}_v(M),$
as required.
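Both parts of Lemma 2.4 can be checked directly for small moduli by brute force. The following sketch (hypothetical code, tiny parameters) verifies multiplicativity for coprime moduli and the Hensel-type inequality for a prime power.

```python
from itertools import product

def null_count(M, q):
    """#Null_q(M): number of x in (Z/qZ)^n with M x = 0 (mod q)."""
    n = len(M[0])
    count = 0
    for x in product(range(q), repeat=n):
        if all(sum(row[i] * x[i] for i in range(n)) % q == 0 for row in M):
            count += 1
    return count

M = [[2, 0], [0, 1]]
# Multiplicativity for coprime moduli: #Null_6 = #Null_2 * #Null_3.
print(null_count(M, 6) == null_count(M, 2) * null_count(M, 3))  # True
# Hensel-type inequality: #Null_4 <= #Null_2 * #Null_2.
print(null_count(M, 4) <= null_count(M, 2) ** 2)  # True
```

The inequality can be strict (here #Null_4 = 2 while #Null_2^2 = 4), which is why the lemma only claims an upper bound for non-coprime factorisations.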
In both Sections 5 and 6, we will need to bound #Null_p(M) for matrices of the form M(a) := a_1M_1 + a_2M_2, where M_1 and M_2 are symmetric matrices associated to some quadratic forms Q_1(x), Q_2(x). In Proposition 2.2, we noted that for most values of a, Rank_p(M(a)) ≥ n − s_p − 1, but that there are potentially a few lines of a's where Rank_p(M(a)) = n − s_p − 2. Naturally, a lower bound on the rank of a matrix yields an upper bound on the dimension of its nullspace (by the rank–nullity theorem), and so using Rank_p(M(a)) ≥ n − s_p − 2 in order to bound #Null_p(M(a)) for every a would be wasteful. This leads us to considering averages of #Null_p(M(a)), where a is allowed to vary. This is the topic of the next lemma.
Lemma 2.5. Let Q_1, Q_2 be quadratic forms in n variables, let q ∈ N, and let d = p_1 ⋯ p_t be squarefree (in other words, the p_i's are distinct primes) such that d | q. Furthermore, let M_1, M_2 be integer matrices defining Q_1 and Q_2 respectively, and let s_p = s_p(Q_1, Q_2) be as defined in (2.7) for K = F_p, p a prime. Then
$\sum^{*}_{\mathbf{a} \bmod q} \#\mathrm{Null}_d(a_1M_1 + a_2M_2) \ll_n q^2 \prod_{p \mid d} p^{s_p + 1}.$

Proof. We firstly note that, upon writing a = b + dc with b mod d and c mod q/d,
(2.14) $\sum^{*}_{\mathbf{a} \bmod q} \#\mathrm{Null}_d(a_1M_1 + a_2M_2) \leq (q/d)^2 \sum^{*}_{\mathbf{b} \bmod d} \#\mathrm{Null}_d(b_1M_1 + b_2M_2).$
For convenience, define
(2.15) $T(d) := \sum^{*}_{\mathbf{b} \bmod d} \#\mathrm{Null}_d(b_1M_1 + b_2M_2).$
Using the Chinese remainder theorem, it is easy to see that T(d) is a multiplicative function. In particular, we have $T(d) = \prod_{p \mid d} T(p)$. It is therefore sufficient to consider
(2.17) $T(p) = \sum^{*}_{\mathbf{b} \bmod p} \#\mathrm{Null}_p(b_1M_1 + b_2M_2),$
where p is a prime, and to show that
(2.16) $T(p) \ll_n p^{s_p + 3}.$
When p ≪_n 1, the bound (2.16) holds trivially. It is therefore enough to consider the case p ≫_n 1, where the implied constant is chosen as in the statement of Proposition 2.2. Proposition 2.2 now implies that, except for O_n(p) exceptional pairs (a_1, a_2), we have Rank(a_1M_1 + a_2M_2) ≥ n − s_p − 1. Moreover, for the exceptional pairs we still have Rank(a_1M_1 + a_2M_2) ≥ n − s_p − 2. Finally, we note that if M is an integer matrix of rank k over F_p, then it is easy to see that #{x ∈ F_p^n : Mx = 0} = p^{n−k}.
Applying these results to (2.17) gives us (2.16). Hence, by (2.14)–(2.15), we obtain the statement of Lemma 2.5.

During the process of bounding quadratic exponential sums, we will need to bound the size of the set N_{b,q}(M) defined in (2.18). The next lemma will help us to do this by letting us relate N_{b,q}(M) to Null_q(M).

Lemma 2.6. Let q ∈ N be even, let M ∈ M_n(Z/qZ), and let N_{b,q}(M) be defined as in (2.18). Then for every b ∈ {0, 1}^n, either N_{b,q}(M) = ∅ or there exists some y_b ∈ (Z/qZ)^n such that N_{b,q}(M) = y_b + Null_q(M).
Proof. If we assume that N_{b,q}(M) ≠ ∅, then there must be some y ∈ N_{b,q}(M). By the definition of Null_q(M), if y_0 ∈ Null_q(M), then y + y_0 ∈ N_{b,q}(M). Hence
(2.19) $\mathbf{y} + \mathrm{Null}_q(M) \subseteq N_{\mathbf{b},q}(M).$
Likewise, we note that if y_1, y_2 ∈ N_{b,q}(M), then y_1 − y_2 ∈ Null_q(M), and so #N_{b,q}(M) ≤ #Null_q(M). Combining this with (2.19) gives us N_{b,q}(M) = y + Null_q(M), which is the result we desire.
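The coset structure in Lemma 2.6 is easy to observe computationally: whenever an inhomogeneous system Mx ≡ c (mod q) is solvable, its solution set is a translate of Null_q(M). The snippet below is a hypothetical illustration, with a generic right-hand side c standing in for the elided definition (2.18).

```python
from itertools import product

def solutions(M, c, q):
    """All x in (Z/qZ)^n with M x = c (mod q)."""
    n = len(M[0])
    sols = []
    for x in product(range(q), repeat=n):
        if all(sum(M[r][i] * x[i] for i in range(n)) % q == c[r] % q
               for r in range(len(M))):
            sols.append(x)
    return sols

M, q = [[2, 0], [0, 1]], 4
null_set = set(solutions(M, [0, 0], q))  # Null_q(M)
sol_set = set(solutions(M, [2, 1], q))   # a solvable inhomogeneous system
y = next(iter(sol_set))                  # any particular solution
coset = {tuple((y[i] + z[i]) % q for i in range(2)) for z in null_set}
print(sol_set == coset)  # True
```

In particular every non-empty solution set has exactly #Null_q(M) elements, which is the fact used later when counting solutions of the differenced quadratic systems.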

Initial setup
In this section we will start with some initial considerations which will help us to properly set up the circle method and state our main results, which will be used to prove Theorem 1.2. As stated before, the Hardy–Littlewood circle method transforms the task of proving Theorem 1.2 into establishing an asymptotic formula for
(3.1) $N(P) = \int_{[0,1]^2} S(\boldsymbol{\alpha})\, d\boldsymbol{\alpha}.$
Here S(α) is the exponential sum as defined in (1.5), and C_X denotes a product of local densities. We will start by splitting the box [0, 1]^2 into a set of major arcs and minor arcs as follows. For any pair (α_1, α_2), we can use a two dimensional version of Dirichlet's approximation theorem to find a simultaneous approximation (a_1/q, a_2/q). In particular, upon taking Q = P^{3/2}, there exist a = (a_1, a_2) ∈ Z^2 and q ∈ N such that (a_1, a_2, q) = 1, q ≤ Q, and
(3.2) $|\alpha_i - a_i/q| \leq 1/(qQ^{1/2}), \quad i = 1, 2.$
We can therefore write α = a/q + z for some z with |z| := max{|z_1|, |z_2|} ≤ 1/(qQ^{1/2}). The choice Q = P^{3/2} arises from our final optimisation of various bounds; we explain this in detail in Section 9.3.1. Now let 0 < ∆ < 1 be some small parameter, also to be chosen later, and define
$M_{q,\mathbf{a}}(\Delta) := \{(\alpha_1, \alpha_2) \bmod 1 : |\alpha_i - a_i/q| \leq P^{-3+\Delta},\ i = 1, 2\}.$
We then define the set of major arcs to be
(3.4) $\mathfrak{M} := \bigcup_{q \leq P^{\Delta}} \bigcup^{*}_{\mathbf{a} \bmod q} M_{q,\mathbf{a}}(\Delta).$
This union of sets is disjoint if P^{−2∆} ≥ 2P^{−3+∆}, namely when ∆ < 1 and P is sufficiently large. Moreover, it is easy to check that P^{−3+∆} < 1/(qQ^{1/2}) for any q ≤ P^∆, provided that P^∆Q^{1/2} < P^{3−∆}. This is certainly true for our final choice Q = P^{3/2}, since we assumed ∆ < 1, and so each set M_{q,a} is contained in the corresponding range from (3.2). Therefore, the major arcs give the following contribution to the integral in (3.1):
$S_{\mathfrak{M}} := \sum_{q \leq P^{\Delta}} \sum^{*}_{\mathbf{a} \bmod q} \int_{|\mathbf{z}| \leq P^{-3+\Delta}} S_{\mathbf{a}}(q, \mathbf{z})\, d\mathbf{z},$
where
(3.6) $S_{\mathbf{a}}(q, \mathbf{z}) := S(\mathbf{a}/q + \mathbf{z}).$
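The two dimensional Dirichlet approximation step can be made concrete with a brute-force search: by pigeonhole on the pairs of fractional parts ({qα_1}, {qα_2}) for q ≤ Q, there is always some q ≤ Q with ‖qα_i‖ ≤ ⌊Q^{1/2}⌋^{−1} for i = 1, 2, giving |α_i − a_i/q| ≤ 1/(q⌊Q^{1/2}⌋). A hypothetical sketch (the function name and parameters are illustrative):

```python
import math

def dirichlet_2d(alpha, Q):
    """Find q <= Q and integers (a1, a2) with
    |alpha_i - a_i/q| <= 1/(q * floor(sqrt(Q))) for i = 1, 2."""
    m = math.isqrt(Q)  # floor(Q^{1/2})
    for q in range(1, Q + 1):
        a = [round(q * al) for al in alpha]
        if all(abs(al - ai / q) <= 1 / (q * m) for al, ai in zip(alpha, a)):
            return q, a
    return None  # never reached: Dirichlet's theorem guarantees success

alpha = [math.pi % 1, math.e % 1]
q, a = dirichlet_2d(alpha, Q=100)
print(all(abs(al - ai / q) <= 1 / (q * math.isqrt(100))
          for al, ai in zip(alpha, a)))  # True
```

Note that the quality of the simultaneous approximation is only Q^{1/2} per coordinate (rather than Q in one dimension), which is precisely why the parameter Q^{1/2} appears in the ranges above.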
Hence, upon further bringing the average over a inside the integral in (3.1), we can bound the minor arcs contribution S_m by the corresponding averages of $\sum^{*}_{\mathbf{a} \bmod q} |S_{\mathbf{a}}(q, \mathbf{z})|$ over the complementary ranges of q and z. Our techniques for dealing with the major arcs contribution are standard, and give the following:

Lemma 3.1. Provided that ∆ ∈ (0, 1/7), we have
$S_{\mathfrak{M}} = C_X P^{n-6} + O(P^{n-6-\delta})$
for some δ > 0.

The proof of this lemma, along with the proof of convergence of the singular series, will be established in Section 10.
The majority of our effort will be spent in bounding the minor arcs contribution. In order to state the proposition we aim to prove for the minor arcs, we need to further specify our choice of weight function and the point on which it will be centred. Let x_0 be a fixed point satisfying |x_0| < 1 and (3.12). Without loss of generality, we may assume that (3.13) holds for some 0 < C < 1, possibly depending on x_0. We will also slightly expand our definition of the test function ω, assuming it to be supported in a box x_0 + (−ρ, ρ)^n for a small parameter ρ > 0 to be chosen in due course. Moreover, we ask that the following bound holds for its derivatives:
(3.14) $\max\Big\{\Big|\tfrac{\partial^{j_1 + \cdots + j_n}}{\partial x_1^{j_1} \cdots \partial x_n^{j_n}} \omega(\mathbf{x})\Big| : \mathbf{x} \in \mathbb{R}^n,\ j_1 + \cdots + j_n = j\Big\} \ll_{j,n} 1$
for every j ≥ 0. A satisfactory bound for the minor arcs will be produced by the following proposition, which we aim to prove:

Proposition 3.2. Let F, G be a system of two cubic forms with a smooth intersection satisfying n ≥ 39, and let ω ∈ $C_c^\infty(\mathbf{x}_0 + (-\rho, \rho)^n)$ satisfy (3.14), where x_0 satisfies (3.13). Then there exist some δ = δ(∆) > 0 and some ρ_0 > 0 such that for any 0 < ∆ < 1/7 and for any 0 < ρ < ρ_0, we have
$S_{\mathfrak{m}} = O_{n,\rho,\Delta,\|F\|,\|G\|}(P^{n-6-\delta}).$
Here, given a polynomial F, we let ‖F‖ denote the maximum of the absolute values of its coefficients.
A major part of the rest of this work will be dedicated to proving Proposition 3.2, which will ultimately be achieved in Section 9. Before we move on, it will be desirable to obtain a consequence of our choice of ω and x_0, akin to the conditions [13, (2.15)–(2.16)]. This will be our aim in Lemma 3.3 below, which will be useful in setting up a two dimensional van der Corput differencing argument in Section 4, and in particular in the proof of Lemma 4.3. We choose the vectors e_1 and e_2 to be an orthonormal basis for the two dimensional vector space spanned by {∇F(x_0), ∇G(x_0)}, chosen in the following way:
(3.15) $\mathbf{e}_1 := \nabla F(\mathbf{x}_0)/\|\nabla F(\mathbf{x}_0)\|, \qquad \mathbf{e}_2 := (\nabla G(\mathbf{x}_0) - \gamma \mathbf{e}_1)/\gamma_1,$
where γ = ∇G(x_0) · e_1 and γ_1 = ‖∇G(x_0) − γe_1‖ is non-zero by (3.13); that is, e_2 is obtained via the Gram–Schmidt procedure.

Lemma 3.3. There exist constants M_1, M_2 > 0 and some ρ_0 > 0, with M_1 and M_2 independent of ρ provided that ρ ≤ ρ_0, such that the bounds (3.16) and (3.17) hold for every x ∈ Supp(Pω).

Proof. A key ingredient in the proof will be the following bound, which is an easy consequence of the Mean Value Theorem: given any x ∈ Supp(Pω), we have (3.18). Let us first prove that the conditions for ∇F(x) in (3.16)–(3.17) are met. The key here are the conditions (3.12) and (3.13). Clearly, using (3.18) we have (3.19) for some M_{F,1} > 0 which is independent of ρ, provided that ρ is chosen to be small enough. Similarly, we may also ensure (3.20). In both of these estimates, the implied constants depend only on ‖F‖, ‖G‖ and n; this will be a feature of all implied constants appearing in this proof. On the other hand, since ∇F(x_0) = ‖∇F(x_0)‖e_1 is orthogonal to e_2, we find by (3.18) that there is some M_{F,2} > 0 independent of ρ such that the corresponding bound holds. To deal with the inequalities concerning G, we use (3.13), which hands us a constant 0 < C < 1 satisfying (3.21). Therefore, for any x ∈ Supp(Pω), by (3.18) and (3.21) we obtain the analogous estimates for ∇G(x). Hence (since ‖∇G(x_0)‖ > 0 is a constant), provided that the support parameter ρ is sufficiently small, we may choose some 0 < C′ < 1 independent of ρ such that (3.22) holds. Thus, for any x ∈ Supp(Pω), we conclude that (3.16) is true, where we have used (3.21) to bound γ by C‖∇G(x_0)‖, as well as (3.22) and (3.19). Finally, (3.22) also hands us the corresponding lower bound for any x ∈ Supp(Pω). Therefore, upon setting M_{2,G} := C′‖∇G(x_0)‖ and choosing M_2 accordingly, we are now able to verify (3.17). Furthermore, there is some ρ_0 > 0 such that M_1 and M_2 are independent of ρ provided that ρ ≤ ρ_0. This concludes the proof of the lemma.

Van der Corput differencing
In this section, we will use van der Corput differencing to bound S_a(q, z) by a quadratic exponential sum. We will introduce the topic by beginning with the simpler pointwise van der Corput differencing, before generalising the differencing arguments used in [20] to attain a bound which also takes advantage of averaging over both z integrals. In both cases, we will innovate on the standard differencing approach in order to introduce a path to attaining Kloosterman refinement.

4.1.
Pointwise van der Corput. For convenience, we will set
(4.1) $S_{\mathbf{a}}(q, \mathbf{z}) = \sum_{\mathbf{x} \in \mathbb{Z}^n} \omega(\mathbf{x}/P)\, e\big((a_1/q + z_1)F(\mathbf{x}) + (a_2/q + z_2)G(\mathbf{x})\big),$
where F and G are cubic forms. Since x is summed over all of Z^n, we can replace x with x + h for any h ∈ Z^n, giving
(4.2) $S(q, \mathbf{z}) = (\#\mathcal{H})^{-1} \sum^{*}_{\mathbf{a} \bmod q} \Big| \sum_{\mathbf{h} \in \mathcal{H}} \sum_{\mathbf{x} \in \mathbb{Z}^n} \omega\big((\mathbf{x}+\mathbf{h})/P\big)\, e\big((a_1/q + z_1)F(\mathbf{x}+\mathbf{h}) + (a_2/q + z_2)G(\mathbf{x}+\mathbf{h})\big) \Big|,$
where S(q, z) is as defined in (3.9). Here H ⊂ Z^n is a set of lattice points which we may choose freely. In the case of pointwise van der Corput differencing, we can just take H to be the set of lattice points h such that |h| < H, for some 1 ≤ H ≪ P which we may choose freely. However, we will not specify this in the arguments that follow, since we will need a different choice of H when we come to averaged van der Corput differencing later. Applying the Cauchy–Schwarz inequality to (4.2) bounds |S(q, z)|^2 by an averaged sum over the differences h = h_1 − h_2 with h_1, h_2 ∈ H. The key difference between this and the standard van der Corput differencing process is the introduction of the a sum in the Cauchy–Schwarz step. In particular, this enables us to bring the a sum inside the bracket in the final step, which in turn gives us a path to Kloosterman refinement. We still need to write S(q, z) in terms of a quadratic exponential sum, however, so we will come back to Kloosterman refinement later. Set y := x + h_2 and h = h_1 − h_2, and recall that we defined ω to be a real weight function. Therefore, after setting
(4.4) $F_{\mathbf{h}}(\mathbf{y}) := F(\mathbf{y} + \mathbf{h}) - F(\mathbf{y}), \qquad G_{\mathbf{h}}(\mathbf{y}) := G(\mathbf{y} + \mathbf{h}) - G(\mathbf{y}),$
we let T_h(q, z) denote the corresponding exponential sum (4.5) for the system of quadratic polynomials F_h and G_h. Note that the top form of F_h is precisely (1.8). Finally, by noting that N(h) ≤ #H ≪ H^n, we arrive at our pointwise van der Corput bound. This bound will be useful to us when t := |z| is small, say of size P^{−3−∆}, since it is wasteful to use averaged van der Corput differencing in this case. We will now set up averaged van der Corput differencing, which will be key in proving Proposition 3.2.
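The degree-lowering at the heart of (4.4) is simply that, for a cubic F, the differenced polynomial F_h(y) = F(y + h) − F(y) is quadratic in y. The sketch below (a hypothetical toy cubic, not one of the paper's forms) verifies this by checking that all third finite differences of F_h in y vanish, while F itself has a non-vanishing third difference.

```python
# For a cubic F, the differenced polynomial F_h(y) := F(y + h) - F(y)
# has degree at most 2 in y, so every third finite difference of F_h
# along a coordinate direction vanishes identically.

def F(y):
    y1, y2, y3 = y
    return y1**3 + 2 * y1 * y2 * y3 - 5 * y2**2 * y3  # a toy cubic form

h = (1, -2, 3)

def F_h(y):
    return F(tuple(yi + hi for yi, hi in zip(y, h))) - F(y)

def third_difference(g, y, i):
    """Third finite difference of g at y in the i-th coordinate direction."""
    def shift(y, k):
        return tuple(yj + (k if j == i else 0) for j, yj in enumerate(y))
    return g(shift(y, 3)) - 3 * g(shift(y, 2)) + 3 * g(shift(y, 1)) - g(y)

print(all(third_difference(F_h, (x1, x2, x3), i) == 0
          for x1 in range(-2, 3) for x2 in range(-2, 3)
          for x3 in range(-2, 3) for i in range(3)))  # True
```

This is exactly why differencing trades the cubic exponential sum for a quadratic one, to which Poisson summation can then be applied effectively.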

4.2.
Averaged van der Corput. Throughout this section, x_0 will denote a fixed point satisfying (3.12) and (3.13). Likewise, F and G will be cubic polynomials whose leading forms satisfy (3.16) and (3.17) for a fixed orthonormal pair of vectors e_1, e_2 (see (3.15)). Let
(4.6) $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$
denote an extended orthonormal basis of R^n. We will begin our effort to bound the sum
(4.7) $\sum_{q \leq Q} \int S(q, \mathbf{z})\, d\mathbf{z},$
where S(q, z) = $\sum^{*}_{\mathbf{a} \bmod q} |S_{\mathbf{a}}(q, \mathbf{z})|$ is as defined in (3.9). As in the previous section, let 1 ≤ H ≪ P be a parameter to be chosen later. Typically, H will be chosen to be a small power of P, so it is safe to further assume H log P ≪ P. Also, let ε > 0 be an arbitrarily small absolute constant to be chosen at the end. Note that the implied constants will be allowed to depend on the choice of ε after it is introduced into our bounds. As is standard (see [20], for example), we start by splitting the integral over z above into a sum over O(P^ε) dyadic intervals of the form [t, 2t], where P^{−3−∆} ≤ t ≤ 1/(qQ^{1/2}). For now, we will assume the dyadic parameter t to be fixed. Analogous to [7] and [13, Sec. 3], for a fixed value of P^{−3−∆} < t < 1/(qQ^{1/2}), we choose two sets T_1, T_2, each of cardinality O(1 + tHP^2), such that every point of the dyadic range is within O((HP^2)^{−1}) of some element of T := T_1 × T_2. Thus, an application of Cauchy–Schwarz further gives (4.8), where we have used the notation T := T_1 × T_2 in order to simplify matters. After an inspection of the right hand side of (4.8), it is easy to see that (4.9) holds, where the sum over t runs over O_ε(P^ε) choices satisfying
(4.11) $P^{-3-\Delta} \leq t \leq 1/(qQ^{1/2}).$
Note that the choice of the parameter H will ultimately depend on t. We are therefore first led to find a bound for |S(q, z)|^2 using van der Corput differencing. We may now use the same arguments as those from Section 4.1 to arrive at (4.12), where H ⊂ Z^n is a set of lattice points to be chosen later, and T_h(q, z) denotes the corresponding exponential sum (4.13) for the system of quadratic polynomials F_h and G_h (this is a restating of (4.4) and (4.5)).
Therefore, by (4.9), (4.10) and (4.12), we have shown the following:

Lemma 4.2. Define the quantity in (4.14). Then, for any 1 ≤ H ≤ P, any H ⊂ Z^n, and any t satisfying (4.11), we have the bound (4.15).

Since we intend to develop a two dimensional version of averaged van der Corput differencing, we intend to choose H to be a set of size O(P^2H^{n−2}) and then use averaging over z_1 and z_2 to show that for all but O((H log P)^n) of the h ∈ H, the value of the averaged integral M_q(τ, H) defined in (4.10) is negligible. This will enable us to 'win' an extra factor of P/H in our final estimate for (4.7) when compared with pointwise van der Corput differencing.
Our choice of H will be informed by the following lemma:

Lemma 4.3. For any h ∈ R^n, any 1 ≤ H ≤ P, any fixed τ and any N > 0, the contribution of h to M_q(τ, H) is O_N(P^{−N}), provided that h = $\sum_{i=1}^n h_i \mathbf{e}_i$ satisfies the condition (4.16), where L = log(P), {e_1, ..., e_n} denotes the basis chosen in (4.6), and the implied constants depend only on n, ‖F‖ and ‖G‖.
Proof. We start by rewriting the quantity under consideration as (4.17), where e_q(x) := e^{2πix/q}. We may separate the two integrals over z and integrate them, obtaining an expression J. We note that if either |F_h(y)| or |G_h(y)| is ≫ HP^2L, then trivially bounding everything in J from above gives the claimed decay. We will rewrite F_h in the following form, where the leading term is the constant part of F_h and H_F(y) is the Hessian of F evaluated at y. Now for h satisfying (4.18), we have (4.21), where the constant part is a cubic polynomial in h and the implied constants depend only on ‖F‖, ‖G‖ and n. Since H, |h_1|, |h_2| < P, we may simplify (4.21) to (4.22). We also write h = h_1e_1 + ... + h_ne_n and invoke (3.16) and (3.17) to further get that for all y ∈ Supp(Pω) the corresponding lower bounds hold, and so we obtain (4.23) by (4.22). For now, let us focus on the case |h_2| ≪ ρ^{−1/2}|h_1|. In this case, we must have that h_1 satisfies (4.18). Furthermore, upon choosing c_1 ≤ ρ^2 and using (4.18), we may simplify (4.23) to obtain (4.24), provided that ρ is chosen to be sufficiently small with respect to M_1. It now remains to study the case |h_1| ≪ ρ^{1/2}|h_2|. In this case, we instead have that h_2 must satisfy the bound in (4.18). We now apply the same process used to obtain (4.22) to G_h(y), obtaining an analogous expansion, where the implied constants again depend only on n, ‖F‖ and ‖G‖. Combining this with (4.24) and applying (3.16)–(3.17) gives (4.25). We now aim to simplify (4.25). Using the assumption that |h_1| ≪ ρ^{1/2}|h_2|, the fact that |h_2| must obey (4.18) in this case, and setting c_2 ≤ ρ, the desired bound follows as long as ρ is chosen small enough.
The lemma above leads to the following natural choice for H:
(4.26) $H := B_P \cap \mathbb{Z}^n,$
where c_1 and c_2 are the implied constants arising in (4.16). Essentially, H is chosen to be the collection of lattice points inside a fixed n dimensional cuboid B_P, centred at the origin, with volume Vol(B_P) = c_1c_2P^2H^{n−2}. The sides of the cuboid are in the directions of the basis vectors {e_1, ..., e_n}. We now claim that
(4.27) $\#H = c_1c_2P^2H^{n-2}(1 + o(1)).$
This follows very easily from the following asymptotic formula for a general cuboid B with side lengths l_1, ..., l_n. It is easy to see that
$\#(B \cap \mathbb{Z}^n) = \mathrm{Vol}(B)\Big(1 + O\Big(\sum_{i=1}^n l_i^{-1}\Big)\Big),$
where the error comes from estimating the n − 1 dimensional boundary of B. In our case l_1 = c_1P, l_2 = c_2P and l_i = H for i ≥ 3, which leads to (4.27).

Lemma 4.4. For any 1 ≤ H ≤ P, any N ≥ 1, and any t > 0 such that (4.11) holds, we have the following bound, where the maximum over z is taken over the set indicated below.

Proof. Let H be as in (4.26). We use the decomposition H = H̃ ∪ (H \ H̃), where H̃ denotes the set of h ∈ H failing the condition (4.16); by construction, #H̃ ≪ (HL)^n. Furthermore, note that for any fixed h, N(h) as defined in (4.3) satisfies the bound N(h) ≤ #H. Therefore, by Lemma 4.3, and further combining with the bounds q ≤ Q ≤ P^{3/2} and #T ≪ (1 + tHP^2)^2 ≪ P^6 (which arise from the crude bounds t ≤ 1 and 1 ≤ H ≤ P), we may bound the contribution U(q, z) from the sum over h ∈ H \ H̃ in (4.15) by O_N(P^{−N}), as N is allowed to be arbitrarily large. Therefore, combining this with Lemma 4.2, we get (4.30). Further note that for a fixed τ and for any z satisfying |z − τ| ≥ L/(HP^2), we have the decay estimate (4.31) for the function in the integrand. Thus, in the same vein as before, using the bound (4.31) in (4.30), we may conclude. The lemma now follows after using (4.27) to estimate #H, using the estimate #T = O((1 + tHP^2)^2), and using (4.8), which allows us to take the maximum over all possible z appearing in the expression.
Since $H$ is arbitrary, we may re-label $HL$ as $H$ at the expense of a factor of size at most $O_\varepsilon(P^\varepsilon)$, and we can now conclude the following Lemma 4.5. For any $1 \leq H \ll P$, any $0 < \varepsilon < 1$, any $t$ satisfying (4.11) and any $N \geq 1$ we have the stated bound, where the maximum over $z$ is taken over the set

Quadratic Exponential Sums: Initial Consideration
The differencing technique used in Section 4 leads us to consider quadratic exponential sums $T_h(q, z)$ (see (4.13)) for a family of differenced quadratic forms $F_h$ and $G_h$. Throughout this section, let $q$ denote an arbitrary but fixed integer. Our main goal here is to estimate quadratic sums corresponding to a general system of quadratic polynomials $F, G$, formed with the weight $\omega(y/P)e((a_1/q + z_1)F(y) + (a_2/q + z_2)G(y))$.
Here $F$ and $G$ denote a system of quadratic polynomials with integer coefficients and $\omega$ denotes a compactly supported function on $\mathbb{R}^n$. Let us denote the leading quadratic parts of $F$ and $G$ by $F^{(0)}$ and $G^{(0)}$ respectively. We further assume that the quadratic forms $F^{(0)}$ and $G^{(0)}$ are defined by integer matrices $M_1$ and $M_2$ respectively. We will later apply the estimates in this section by setting $F = F_h$ and $G = G_h$. Given a (finite or infinite) prime $p$, we denote by $s_p$ the corresponding singular dimension, where, given a set of forms $F_1, \ldots, F_R$, $s_p(F_1, \ldots, F_R)$ denotes the dimension of the singular locus of the projective complete intersection variety defined by the simultaneous zero locus of the forms $F_1, \ldots, F_R$. When $n \geq 2$, given an integer $q$, we define $D(q)$ accordingly. On the other hand, when $n = 1$, we define $D(q)$ in terms of the content, where, given a polynomial $F$, $\operatorname{Cont}(F)$ is the gcd of all its coefficients.
As is standard, we begin by applying Poisson summation to $T(q, z)$. This will allow us to separate the sum over $a$ and the integral over $z$ into an exponential sum and an exponential integral respectively. In particular, applying Poisson summation gives us the following: Proof. The proof of Lemma 5.1 is standard and can be obtained by slightly modifying the corresponding argument in [3]. We now apply Poisson summation to the second sum (and use the substitution $x = u + qv$). As a result, we trivially have the corresponding pointwise bound. The treatment of the exponential integral is standard. In particular, we can use the following lemma to bound $I(z; q^{-1}m)$: Let $V := 1 + qP^{\varepsilon-1}\max\{1, HP^2|z|\}^{1/2}$, $\varepsilon > 0$, and $N \in \mathbb{N}$. The proof of this is almost identical to the proofs of [2, Lemmas 6.5-6.6], and so we will not detail it here. In particular, the only thing in the proofs that needs to be tweaked in order to verify Lemma 5.2 is that $\Theta$ in [2, equation (6.11)] must be replaced accordingly. We also note that we use $|\nabla F_z(y) - m| \leq V$ instead of $Pq^{-1}|\nabla F_z(y) - m| \leq Pq^{-1}V$, since we are using slightly different notation.
The latter bound enables us to handle the tail of the sum over $m$. Let $V := qP^{-1}\max\{1, HP^2|z|\}$. By trivially bounding $|S(q; m)|$ by $q^n$, and setting $N \geq n + 2$, the tail is easily handled by the second half of Lemma 5.2. Now by the first half of Lemma 5.2 (setting $N \geq n + 4$), we obtain the main term, where $m_0(y) := \nabla F_z(y)$. Hence, we have the following: Proposition 5.3. Let $|z| = \max\{|z_1|, |z_2|\}$. Then for any $q \in \mathbb{N}$, the stated bound holds for some $m_0(y)$. Our attention now turns to finding a suitable bound for $|S(q; m)|$. As is standard when dealing with exponential sum bounds, we will take advantage of the multiplicative property of $S(q; m)$ and decompose $q$ into its square-free, square, and cube-full components, so that we can use better bounds in the former two cases (in particular, we will make use of the sum over $a$ to improve our bounds in those cases). Indeed, we may use a lemma of Hooley [10, Lemma 3.2] to get the following result, where $r\bar{r} + s\bar{s} = 1$.
The above lemma is proved using a very standard argument akin to [3, Lemma 10] and [13, Lemma 4.5], and therefore we will skip its proof here. Our treatment of bounds for the quadratic exponential sums will vary depending on whether $q$ is square-free, a square or cube-full. Since the exponential sums satisfy the multiplicativity relation (5.9), it is natural to set $q = b_1b_2q_3$, where $(b_1, c_1) = (b_2, c_2) = (q_3, c_3) = 1$ for some constants $c_1, c_2, c_3$. Finding suitable bounds for the size of these three exponential sums will be the topic of the rest of this section.
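The decomposition $q = b_1b_2q_3$ into square-free, cube-free square, and cube-full parts is determined by the prime factorisation of $q$. A minimal sketch (the function names are ours, not the paper's):

```python
# Decompose q = b1 * b2 * q3: b1 collects primes of exponent 1 (square-free
# part), b2 primes of exponent 2 (cube-free square part), q3 primes of
# exponent >= 3 (cube-full part).
def prime_factorisation(q):
    f, d = {}, 2
    while d * d <= q:
        while q % d == 0:
            f[d] = f.get(d, 0) + 1
            q //= d
        d += 1
    if q > 1:
        f[q] = f.get(q, 0) + 1
    return f

def split_parts(q):
    """Return (b1, b2, q3) with q = b1 * b2 * q3 as above."""
    b1 = b2 = q3 = 1
    for p, e in prime_factorisation(q).items():
        if e == 1:
            b1 *= p
        elif e == 2:
            b2 *= p * p
        else:
            q3 *= p ** e
    return b1, b2, q3
```

For example $q = 2520 = 2^3 \cdot 3^2 \cdot 5 \cdot 7$ splits as $b_1 = 35$, $b_2 = 9$, $q_3 = 8$; the three parts are pairwise coprime, which is what makes the multiplicativity relation (5.9) applicable.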

5.1.
Square-free Exponential Sums. In this section, we will briefly consider the quadratic exponential sums $S(b_1; m)$ when $q = b_1$ is square-free. This case is extensively studied in [13, Section 5], where bounds are obtained for exponential sums for a general system of polynomials $F$ and $G$. Using the multiplicativity of the exponential sum in (5.9), it is enough to consider the sums $S(p, m)$ where $p$ is a prime. We may rewrite $S(p,m)$ in terms of the sums $\Sigma_1 := \sum_{a_2=1}^{p}\sum_{u \bmod p} e_p(a_1F(u) + a_2G(u) + m\cdot u)$ and $\Sigma_4 := \sum_{u \bmod p} e_p(m\cdot u)$.
Here the notation $\Sigma_1$ and $\Sigma_4$ is chosen to match the corresponding sums in [13, Section 5]. Note that the argument in [13, Section 5] does not depend on the degree of the forms $F$ and $G$. In fact, our exponential sums are more "natural" than the ones which appear in [13], and as a result only the sums $\Sigma_1$ and $\Sigma_4$ appear in our analysis. We may now use the results in [13, Section 5] directly here, as they do indeed bound the sums $\Sigma_1$ and $\Sigma_4$ as well, but only in the case where $F$ and $G$ intersect properly over $\mathbb{F}_p$. When $n \geq 2$, we may use [13, Prop 5.2, Lemma 5.4] to get Proposition 5.5. Let $F, G \in \mathbb{Z}[x_1, \cdots, x_n]$ be quadratic polynomials such that $s_\infty(F^{(0)}, G^{(0)}) = -1$, and let $b_1$ be a square-free number; then the stated bound holds for every $m \in \mathbb{Z}^n$. Furthermore $\Phi$ has the following properties: (1) $\Phi$ is homogeneous.
(3) $\log\|\Phi\| \ll_n \log\|F\| + \log\|G\|$. Proof. To begin, since $s_\infty(F, G) = -1$, we may use an explicit description of the dual variety, as seen via a $\mathbb{Q}$-version of [20, Lemma 4.2], to see that the polynomial defining the dual variety of the intersection variety of $F, G$ satisfies the four conditions for $\Phi$ in the statement of this proposition. Hence, we may let $\Phi$ be this polynomial. This allows us to improve the first assertion of [13, Proposition 5.2], which we automatically get for every $v$ not on the dual variety of $F, G$. A major difference here with [13] is that both of our forms $F$, $G$ vary as $h$ varies, and this forces us to consider the case when $F$ and $G$ intersect improperly in greater detail. In the case where $n > 1$ and $F, G$ intersect improperly over $\mathbb{F}_p$, there are three cases to consider: $F \equiv 0$ (or $G \equiv 0$); $F \equiv \lambda G$ for some $\lambda \in \mathbb{F}_p^*$; and $F \equiv L_1L_2$, $G \equiv L_1L_3$ for some hyperplanes $L_i$ such that $L_i \not\equiv 0$, $L_2 \not\equiv \lambda L_3$. For the first two cases, we note that by the definition of $\operatorname{Sing}_p(F, G)$ (see (2.1)), $\operatorname{Rank}(\nabla F(x), \nabla G(x)) < 2$ is automatically true when $F \equiv 0$, $G \equiv 0$, or $F \equiv \lambda G$. In particular, we see that $\{x \in \mathbb{F}_p^n : F(x) = G(x) = 0\}$ is a subset of the affine singular locus of $F, G$ (over $\mathbb{F}_p$). Hence by (5.15), when $F$, $G$ intersect improperly, the required bound holds provided that $n \geq 2$. Therefore, we may conclude the bound for a general $p$ (irrespective of whether or not the intersection is proper), where $C$ is some constant. Finally, by Lemma 5.4, the divisor factor does not contribute more than $O(P^\varepsilon)$. To see this, we note that $\log d(b_1) \ll \log(b_1)/\log\log(b_1)$, and hence $d(b_1) \ll_\varepsilon b_1^\varepsilon$. Hence, we may conclude that Proposition 5.5 is true. We will bound the $C(n)$ term in future lemmas by $b_1$ without further comment.
We must also consider the case $n = 1$. Here it is sufficient for us to use a weaker bound than [13, Lemma 5.5]. We will show the following: Proposition 5.6. Let $F, G \in \mathbb{Z}[x]$ be quadratic polynomials and let $b_1$ be a square-free integer; then the stated bound holds. Proof. The proof of Proposition 5.6 is almost trivial. We start by applying Lemma 5.4 so that we may consider $S(p; cm)$ for some $p \nmid c$. We trivially have $|\Sigma_4| \leq p$. Hence, by (5.4), and noting that $(p, \operatorname{Cont}(F), \operatorname{Cont}(G)) \leq (p, \operatorname{Cont}(F^{(0)}), \operatorname{Cont}(G^{(0)}))$, we conclude that $|S(b_1; m)| \ll b_1^{2+\varepsilon}D(b_1)$ for any $m \in \mathbb{Z}$.
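The square-root cancellation driving the square-free bounds above can be illustrated by the classical quadratic Gauss sum (this is a standard toy example, not the paper's sum $S(q;m)$): for an odd prime $p$, $\big|\sum_{u \bmod p} e_p(u^2)\big| = \sqrt{p}$, a saving of $\sqrt{p}$ over the trivial bound $p$.

```python
import cmath
import math

# Quadratic Gauss sum g(p) = sum over u mod p of e^(2*pi*i*u^2/p).
# For an odd prime p, |g(p)| = sqrt(p): square-root cancellation, the same
# phenomenon exploited in the square-free exponential sum bounds.
def gauss_sum(p):
    return sum(cmath.exp(2j * math.pi * (u * u % p) / p) for u in range(p))
```

The trivial bound would be $|g(p)| \leq p$; the exact evaluation shows the sum of $p$ unit vectors collapses to length $\sqrt{p}$.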

5.2.
Square-full Bound. In this section, we will derive the bound which will be used when $q$ is square-full. In this case, we give up on saving over the sum over $a$, and instead start with the trivial bound, where $F, G$ are quadratic polynomials and $S(a, q; m)$ denotes the inner sum. For a fixed value of $a$, the exponential sum $S(a, q; m)$ is a standard quadratic exponential sum with leading quadratic part defined by a matrix with invariant factors $\lambda_1 \mid \lambda_2 \mid \cdots \mid \lambda_n$. Since the forms $F^{(0)}$ and $G^{(0)}$ are assumed to be arbitrary for now, the corresponding bound is easy to conclude. Remark 5.7. Recall that we aim to finally substitute $F = F_h$ and $G = G_h$. Note that the extra factor appearing on the right hand side of (5.20) is a generalisation of the factor $D(b_1)^{1/2}$ appearing in Proposition 5.5. A drawback of van der Corput differencing is that although one starts with a nice pair of forms $F$ and $G$, one ends up with exponential sums of differenced polynomials $F_h$ and $G_h$, which can be highly singular modulo $q$. If $q = p$ for some prime $p$, and if the singular locus $s_p$ as defined in (5.2) is large, then this gives restrictions on the vector $h \bmod p$. When $s_p$ is small, the extra factors appearing can be compensated by the corresponding bounds on the sum over $h$. However, in the case at hand, when $q = p^\ell$ for a large $\ell$, we cannot rule out the possibility that for many $h$ there may exist a large $q$ such that the factor $\prod_{i=1}^n \lambda_{q,i}^{1/2}$ is as large as $q^{n/2}$. This complication arises partly due to the simplicity of the quadratic exponential sums appearing. Later, however, we will need to average the sums over various $|m - m_0| \leq V$. We will aim to salvage some of this loss by gaining a congruence condition on $m$ instead, and saving from the sum over $m$. This idea has partly featured already in Vishe's work [20, Lemma 6.4]. However, in [20], the author is dealing with fixed $F$ and $G$, which is not the case here.
Our main goal here is to prove the following result: Proposition 5.8. Let $a \in \mathbb{Z}^2$ and $q \in \mathbb{N}$ be such that $(a, q) = 1$, let $m \in \mathbb{Z}^n$, and let $F, G$ be quadratic polynomials. Here, $T$ is the matrix appearing in the Smith normal form of $M$ in (5.19), the $\lambda_{q,i}$ are as in (5.21), and, given a vector $v$, $(v)_i$ denotes its $i$-th component. Proof.
To estimate $|S(a, q; m)|$, we begin by working with its square. We then change the order of summation by setting $x = y + z$, writing $m' = m + b$. The "2" appearing in $\delta_{2M}(z)$ gives rise to some minor technical difficulties in the case when $q$ is even. Therefore, we will start by considering the case when $q$ is odd.

5.2.1.
Case: $q$ odd. In this case, $\delta_{2M}(z) = 1$ if and only if $Mz \equiv 0 \bmod q$, and so we may replace $\delta_{2M}(z)$ in (5.24) by $\delta_M(z)$. Furthermore, we note that $Mz \equiv 0 \bmod q$ implies that $z^tMz \equiv 0 \bmod q$. Hence (5.24) simplifies accordingly. Now, $M$ has a Smith normal form over $\mathbb{Z}$ as in (5.19), $\operatorname{Smith}(M) := SMT$, for some matrices $S, T \in \operatorname{SL}_n(\mathbb{Z})$. In particular, the matrices $S$ and $T$ are invertible over $\mathbb{Z}/q\mathbb{Z}$ for any $q \in \mathbb{N}$. We will now rewrite our sum in terms of $\operatorname{Smith}(M)$: on using the substitution $z \to T^{-1}z$, (5.24) becomes a sum over $\operatorname{Null}_q(SMT)$, where $T^t$ is the transpose of $T$. We now turn our attention to the structure of $\operatorname{Null}_q(SMT)$. Since $S$ and $T$ are defined to be the unique matrices (up to units) such that $SMT = \operatorname{Smith}(M)$, it is quite easy to determine precisely when $z \in \operatorname{Null}_q(SMT)$: we have $SMTz \equiv 0 \bmod q$ if and only if (5.29) $q \mid \lambda_{q,i}z_i$ for every $i \in \{1, \cdots, n\}$. Hence by (5.21) and (5.28)-(5.29), we obtain the required count. Finally, it is easy to check that $\#\operatorname{Null}_q(SMT) = \#\operatorname{Null}_q(M)$, since $S$ and $T$ are both invertible over $\mathbb{Z}/q\mathbb{Z}$, and therefore in this case we establish the bound, which clearly suffices.
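The counting identity underlying this step can be checked directly in small cases. The following sketch (with a hypothetical $2\times 2$ matrix; the invariant-factor formula used is the standard one for nonsingular $2\times 2$ integer matrices) verifies that $\#\operatorname{Null}_q(M) = \prod_i \gcd(\lambda_i, q)$, where $\lambda_1 \mid \lambda_2$ are the invariant factors from the Smith normal form:

```python
from math import gcd
from itertools import product

def null_count(M, q):
    """Brute-force #Null_q(M) = #{z mod q : M z = 0 (mod q)}."""
    n = len(M)
    return sum(
        1
        for z in product(range(q), repeat=n)
        if all(sum(M[i][j] * z[j] for j in range(n)) % q == 0 for i in range(n))
    )

def invariant_factors_2x2(M):
    """Invariant factors of a nonsingular 2x2 integer matrix:
    lambda_1 = gcd of all entries, lambda_1 * lambda_2 = |det M|."""
    (a, b), (c, d) = M
    l1 = gcd(gcd(a, b), gcd(c, d))
    return l1, abs(a * d - b * c) // l1
```

Since $S$ and $T$ are invertible modulo $q$, the brute-force count for $M$ agrees with the diagonal count $\prod_i \gcd(\lambda_i, q)$ for $\operatorname{Smith}(M)$.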

5.2.2.
Case: $q$ even. We now turn to the case where $q$ is even. In this case, the above argument needs to be modified, since the condition $\delta_{2M}(z)$ cannot be directly replaced by $\delta_M(z)$ in (5.24). Instead, we note that $\delta_{2M}(z) \neq 0$ if and only if $Mz \equiv 0 \bmod q/2$. In particular, there must be some $c \in \{0, 1\}^n$ such that the corresponding shifted condition holds. We now wish to write $N_{c,q}$ in terms of $\operatorname{Null}_q(M)$, as this will enable us to use the arguments discussed in the odd case. To do this, we invoke Lemma 2.6 to see that either $N_{c,q} = \emptyset$ or there exists some $y_c \in (\mathbb{Z}/q\mathbb{Z})^n$ such that $N_{c,q} = y_c + \operatorname{Null}_q(M)$.
Hence $|S(a, q; m)|^2 = q^n \sum_{\substack{c \in \{0,1\}^n \\ N_{c,q}(M) \neq \emptyset}}\; \sum_{z \in y_c + \operatorname{Null}_q(M)} \cdots$. Finally, we note that $Mz \equiv 0 \bmod q$ since $z \in \operatorname{Null}_q(M)$, and so by (5.34) we have the following: this is precisely (5.26) with an extra factor of $2^n$ and some absolute value signs around the sum (which are irrelevant). We may therefore repeat the arguments in the $q$ odd case which follow from (5.26) to establish Proposition 5.8.

5.2.3.
Special Case: $n = 1$. We will now briefly consider the case when $n = 1$, as we will need to deal with this case separately later. The arguments used above are still valid in this case, but the bound that we get is simpler, due to the matrix $M$ becoming an integer. In particular, Proposition 5.8 becomes Proposition 5.9. Let $a \in \mathbb{Z}^2$ and $q \in \mathbb{N}$ be such that $(a, q) = 1$, let $m \in \mathbb{Z}$, and let $F, G \in \mathbb{Z}[x]$ be quadratic polynomials. We will use Propositions 5.8 and 5.9 directly in our future treatment of the cube-full part $S(q_3, m)$ (see (5.11)) in order to get additional saving over the $m$ sum. For the perfect square part $b_2$, however, we will derive a slightly weaker bound from this, which will be used to get saving over the $h$ sum later on in the argument.

5.3.
Cube-free Square Exponential Sums. In this section, we will assume that $q = b_2$, or equivalently that $q$ is a cube-free square. In this case, we will give up on the potential saving we could attain via the $m$ sum from the $\Delta_q(m')$ term in Proposition 5.8, and instead bound $\#\operatorname{Null}_q(M(a))^{1/2}$ in terms of the singular locus of $F, G$, where $M(a)$ is defined as in (5.22). In this special case, we will need to obtain a pointwise saving over the sum over $a$ in order for our bound to be useful. We will start with the case when $n \geq 2$. Upon letting $b_2 = c^2$, and by Proposition 5.8, Lemmas 2.4-2.5, and (5.16), we obtain the required bound. When $n = 1$, we have $M(a) = a_1d_F + a_2d_G$ for some constants $d_F, d_G$, and the same type of argument applies, by Proposition 5.9. We may then use the multiplicativity relation in Lemma 5.4; combining this with (5.37) gives us the following:

Quadratic Exponential Sums: Finalisation
In this section, we will combine all of the bounds we have found in Section 5 to reach our final estimate for $T(q, z)$, starting from the bound handed to us by Proposition 5.3. In the last section, we focused on getting bounds for individual exponential sums $|S(q; m)|$; we now begin by considering averages of exponential sums. Throughout, let $m_0$ be an arbitrary but fixed vector in $\mathbb{Z}^n$ and let $b(a) = b$ be defined as in (5.22). For $n \geq 2$: by Lemma 5.4 and Propositions 5.5, 5.8, and 5.10, there are some constants $c_1, c_2, c_3$ such that $(b_1, c_1) = (b_2, c_2) = (q_3, c_3) = 1$, where $M(a)$ is as in (5.17), since we can "divide through" by $c_3$, as $(c_3, q_3) = 1$ (in particular $(c_3, \lambda) = 1$ for any divisor $\lambda$ of $q_3$).
The first and most difficult task for this section is to bound $B(b_1, q_3, V; m_0)$. This will be quite a delicate task, since we need to save over the $m$ sum in two different ways simultaneously. The following lemma will provide our main estimate for this sum. Proof. We begin by noting that, by the definition of $\Delta_{T,q_3}$ (5.23), the relevant inequality clearly holds for any $x \in \mathbb{Z}^n$. Therefore, since we are looking for an upper bound of $B(b_1, q_3, V; m_0)$, we may replace $\Delta_{T,q_3}(m + b')$ in (6.3) with $\Delta_{T,c}(m + b')$. Furthermore, since all elements of our sum are non-negative, we may extend the sum in (6.3) if we wish; in particular, the resulting bound must hold. We have extended the sum up to $\tilde{V}$ so that we can consider complete sums modulo $c$, as this will make it easier to acquire saving from $\Delta_{T,c}$ later. To this end, let $m := m_0 + m_1 + cm_2$, where $m_1 \in (\mathbb{Z}/c\mathbb{Z})^n$ and $|m_2| \leq \tilde{V}/c$. Applying this decomposition on the right-hand side of (6.5), the upshot of reordering our sum in this way is that we have managed to separate $\Delta_{T,c}(m_0 + m_1 + b')$ and $(\Phi(m_0 + m_1 + cm_2), b_1)^{1/2}$. In particular, we can treat $m_1$ as fixed for now, and since $m_0$ and $c$ are also fixed, we may focus on acquiring saving from the sum over $m_2$. We observe that $(\Phi_{c,m_0,m_1}(m_2), b_1)$ must be equal to some divisor of $b_1$, so we will decompose the $m_2$ sum over these divisors. We now aim to use [3, Lemma 4] to bound the right hand side. Since $\Phi$ is homogeneous with $\operatorname{Cont}(\Phi) = 1$ by Proposition 5.5, and since $c$ and $d$ are co-prime, we have that for every prime $p$ dividing $d$, $\Phi(m_0 + m_1 + cx)$ is a non-trivial polynomial, and therefore the corresponding variety is of dimension $n - 1$. Therefore, we may now use [3, Lemma 4] to conclude the required bound. Substituting this back into (6.8) will in turn enable us to find a suitable bound for $B(b_1, q_3, V; m_0)$.
By (6.7) and (6.9), we obtain our intermediate estimate. In order to find the bound we desire for $B(b_1, q_3, V; m_0)$, we will need to turn our attention to the complete sum of $\Delta_{T,c}(x + l)$ over $x$ modulo $c$, for some fixed $l \in \mathbb{Z}^n$. Our bound here will be independent of the choice of the vector $l$. This sum is much easier to handle, since we have a complete sum at hand. It is easy to check from the definition of $\Delta_{T,c}(x + l)$ (and the fact that $\det(T^t) = 1$) that this complete sum evaluates suitably. Therefore, by (6.10), we obtain the required bound.
Recall that our ultimate goal was to find a suitable bound for $|T(q, z)|$. Upon noting that the above treatment of $\sum_{|m-m_0| \leq V}|S(q; m)|$ works for any value of $y \in P\operatorname{Supp}(\omega)$, we may now substitute the bound in Lemma 6.2 into (6.1) to get a bound for $T(q, z)$. If $q$ is sufficiently small ($q < P^2$, say), then the right hand term dominates over 1 for every $n \geq 1$. Therefore, we finally reach our bound for $|T(q, z)|$, where $q_3 = c^2\tilde{q}_3$ is as defined in Lemma 6.1. Note that we may instead use the weaker bound $c \leq b$, where $n$ is the number of variables of $F, G$, $b_3$ is the 4th power-free cube part of $q$, and $q_i$ is the $i$-th power-full part of $q$.
The bound for the $n = 1$ case is much simpler to derive than in the $n > 1$ case. By Lemma 5.4 and Propositions 5.6, 5.9, and 5.10, we have our starting bound. The sum over $a$ contributes trivially at most $q_3^2V$. As for the other part of the sum, upon recalling that $q_3 = \tilde{q}_3c^2$, we may argue exactly as in the proof of Proposition 5.10. Combining this with (6.14) gives the following result.
Lemma 6.4. Let $q = b_1b_2q_3 \in \mathbb{N}$, $m \in \mathbb{Z}$, and let $q_3 := c^2\tilde{q}_3$ be defined as in Lemma 6.1. Then the stated bound holds for every $\varepsilon > 0$. Finally, we may combine this lemma with (6.1) to get our final bound for $|T(q, z)|$ in the $n = 1$ case: Proposition 6.5. For every $q < P^2$, every $z$, and every $\varepsilon > 0$, if $n = 1$, we have the stated bound, where $n$ is the number of variables of $F, G$, $b_3$ is the 4th power-free cube part of $q$, and $q_4$ is the 4th power-full part of $q$.

Finalisation of the Poisson bound
In this section, we will finalise our main bounds coming from Poisson summation. For a fixed value of $t$, Lemmas 4.1 and 4.5 reduce our task to bounding a sum of the quadratic exponential sums defined in (4.13). We may therefore apply our bounds for quadratic exponential sums in Propositions 6.3 and 6.5 to estimate these. Now that $h$ is allowed to vary, we will let $F_h^{(0)}$ and $G_h^{(0)}$ denote the leading quadratic parts of $F_h$ and $G_h$ respectively. We recall that $q = b_1b_2q_3$, where $q_3$ is the cube-full part of $q$, and $b_1$, $b_2$ are the square-free and cube-free square parts of $q$. Since we are fixing $q$ for now, $b_1$, $b_2$, and $q_3$ are also fixed. Recall that we may write $b_i = b_{i,0}b_{i,1}\cdots b_{i,n}$, $q_3 = q_{3,0}q_{3,1}\cdots q_{3,n}$, where the $b_{i,j}$ and $q_{3,j}$ now depend on $h$. We see that for any fixed $q$, there are at most $O(q^\varepsilon) = O(P^\varepsilon)$ possible choices for $c = (b_{1,0}, \cdots, b_{1,n}, b_{2,0}, \cdots, b_{2,n}, q_{3,0}, \cdots, q_{3,n})$, since there are only at most $O(q^\varepsilon)$ partitions of $q$ into multiplicative factors. Therefore, using the triangle inequality, we may restrict to some particular $c$, with $c(h) := (b_{1,0}(h), \cdots, q_{3,n}(h))$. We can then decompose this sum further by grouping those $h$ with $s_\infty(h) = s$.
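The $O(q^\varepsilon)$ count of multiplicative partitions invoked here is the $k$-dimensional divisor function $d_k(q)$: the number of ways to write $q$ as an ordered product of $k$ factors. A brute-force sketch (the recursion is ours, purely illustrative):

```python
# d_k(q): number of ordered factorisations q = f_1 * ... * f_k into positive
# integers. It is multiplicative and O(q^eps), which is what bounds the
# number of choices for the tuple c in the text.
def ordered_factorisations(q, k):
    if k == 1:
        return 1
    return sum(
        ordered_factorisations(q // d, k - 1)
        for d in range(1, q + 1)
        if q % d == 0
    )
```

For instance $d_3(12) = 18$ (e.g. $12 = 1\cdot 2\cdot 6 = 2\cdot 6\cdot 1 = 3\cdot 4\cdot 1 = \ldots$), and for a prime $p$, $d_2(p) = 2$.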
Here, given $\nu$ either a prime or $\infty$, we define $s_\nu(h)$ accordingly. We now aim to estimate the size of $H_s$. To get a bound on $\#H_s$, we will start by constructing a set which contains $H_s$ and is easier to work with. Upon defining $[h]_p$ to be the reduction modulo $p$ of a point $h \in \mathbb{Z}^n$, we obtain the required containment. In order to bound this larger set, we will need the following lemma, which is analogous to [13, Lemma 8.2]. Proof. We prove this result for any pair of forms instead of two cubics, as it does not change the argument. We can use [14, Lemma 3(ii)] to conclude that $\dim(V_{\nu,i}) \leq \min\{n, n + \sigma + 1 - i\}$ when $\nu$ is a prime. Therefore we only need to check $V_{\infty,i}$. We will use a slight modification of the argument used in [3, Lemma 1] in order to show that $\dim(V_{\infty,i}) \leq \min\{n, n + \sigma + 1 - i\}$: let $U$ be the relevant incidence variety for $F, G$ homogeneous forms of degree 3, and let $D := \{(x, y) \in \mathbb{A}^{2n}_{\mathbb{Q}} \mid x = y\}$. Then by the Affine Dimension Theorem, we obtain the corresponding bound. Next, we note that, by Euler's identity and (7.6), we have $\dim(U \cap D) = \sigma + 1$, and so (7.7) $\dim(U) \leq n + \sigma + 1$.
We can now use (7.5) and the argument found in [3, Section 7] to get an upper bound for $\#H_s$. For convenience, recall that $\sum_{h \in H}T_h(q, z) \ll P^\varepsilon\sum_{s=-1}^{n-1}U_s$ by (7.2). We will use (7.8) to bound $U_s$ later, but for now we need to find a bound on $|T_h(q, z)|$. To do this, we will need to apply the hyperplane intersections lemma, namely Lemma 2.1, and then apply the bounds found in Propositions 6.3 and 6.5.
Let $\eta$ be chosen so as to maximise the expression in (7.8). Let $\Pi$ be the set of primes $p \mid q$, so that $r = \omega(q)$. We may now invoke Lemma 2.1 to find a lattice $\Lambda_\eta$ of rank $n - \eta$ and a basis $e_1, \cdots, e_{n-\eta}$ for $\Lambda_\eta$ such that for every $t \in \mathbb{Z}^n$, the resulting polynomials $\tilde{F}_{h,t}$, $\tilde{G}_{h,t}$ satisfy the required conditions for every $\nu \in \{\infty\} \cup \Pi_{cr}$. We also note that $\deg(\tilde{F}_{h,t}) = \deg(\tilde{G}_{h,t}) = 2$ (this is necessary in order to be able to use the bounds from the previous section). In order to apply the bounds found in the previous section, we must first fix our choice of basis $\{e_1, \cdots, e_n\}$, and so we will use the same process as earlier when we fixed $(b_{1,0}, \cdots, q_{3,n})$: we recall that the $L$ used in (2.4) is of size $L = O(r + 1) = O(\log(q))$. Therefore there are at most $O(\log(q)^n)$ choices of basis satisfying (2.4), and so by (7.9) and the triangle inequality, there is one such choice for which the bound holds, where the dash denotes that the sum is taken over the vectors $h$ in the original sum for which (7.10) holds for our chosen basis $\{e_1, \cdots, e_n\}$. For such $h$, we can now separate the $x$ sum defining $T_h(q, z)$ into cosets $t + \Lambda_\eta$ of $\Lambda_\eta$, where $t$ runs over some subset $T_\eta \subset \mathbb{Z}^n$. All that is left to do is to use Proposition 6.3 (or Proposition 6.5 for $\eta = n - 1$) on each coset, and to determine the size of $T_\eta$, as this bounds the number of cosets that we have. We claim that if $\Lambda_\eta$ is chosen according to Lemma 2.1, then $\#T_\eta = O(P^\eta)$. Indeed, consider $x$ in terms of our basis $e_1, \cdots, e_n$, i.e. write $x = \sum_i u_ie_i$. Now, if $\pi_i$ denotes the projection onto the orthogonal complement of the subspace spanned by the vectors $e_j$, $j \neq i$, we may compare $\Lambda \subset \mathbb{Z}^n$, the full-dimensional lattice spanned by $e_1, \cdots, e_n$, with the lattice $\Lambda_i$ spanned by the $e_j$, $j \neq i$. Now by (2.4) and (2.5), we must have $|u_i| \ll P$, since we need $\|x\| \ll P$. Hence, since $\Lambda_\eta = \langle e_1, \cdots, e_{n-\eta}\rangle$, we may conclude that $t$ is of the form $t = \sum_{i=n-\eta+1}^{n}\lambda_ie_i$ with $|\lambda_i| \ll P$.
We now choose $T_\eta$ to be the collection of such $t$, leading us to conclude that $\#T_\eta = O(P^\eta)$.
In order to complete the hyperplane intersections step, we will now define new weight functions $\tilde\omega_{h,t}(y_1, \cdots, y_{n-\eta})$ in $n - \eta$ variables. We now need to verify that $T_{h,t}(q, z)$ and $\tilde\omega_{h,t}$ satisfy the various properties that we assumed in order to acquire the results of the previous sections. Firstly, we refer to the proof of Proposition 2 of [3] to see that $\tilde\omega_{h,t} \in \mathcal{W}_{n-\eta}$ for $t \ll P$. We also see that $\|\tilde{G}_{h,t}\| \ll P/L \ll P^\varepsilon H$, and similarly for $\tilde{F}_{h,t}$. Next, we note that $\eta \geq s + 1$, and so we automatically have $s_\infty(\tilde{F}_{h,t}, \tilde{G}_{h,t}) = -1$. This covers all conditions that we have needed in the previous sections on exponential sums. Therefore, by (7.11), (7.13), and (7.8), recalling (7.2) and (7.9), we will be able to attain our final bound for $\sum_{h \in H}T_h(q, z)$ if we can find a bound for $T_{h,t}(q, z)$ via (7.14). We may use Propositions 6.3 and 6.5 to bound $T_{h,t}(q, z)$ from above when $\eta < n - 1$ and $\eta = n - 1$ respectively. When $\eta = n$, we may proceed by a much simpler argument to bound $T_{h,t}(q, z)$: we have the trivial bound, and by Lemma 7.1 (with $\nu = \infty$, $i = n$) the corresponding count. Returning to $\eta \leq n - 1$: by (7.10), we may use the proof of Proposition 2 from [3] to conclude the required bounds for every $t \in T_\eta$ when $\eta < n - 1$. When $\eta = n - 1$, $(p, \operatorname{Cont}(\tilde{F}_{h,t}), \operatorname{Cont}(\tilde{G}_{h,t})) = p$ if and only if $p$ divides both $\tilde{F}_{h,t}$ and $\tilde{G}_{h,t}$ or $p \ll P^\varepsilon$; in particular, $p \mid b_{1,n}b_{2,n}^{1/2}\tilde{q}_{3,n}$ or $p \ll q^\varepsilon$, and so we again obtain the required bound. Therefore, by (7.14), (7.15), Propositions 6.3 and 6.5, and (7.16), we may conclude the following: Proposition 7.2. Let $q < P^2$, and let the relevant quantities be defined for $\eta \in \{0, \cdots, n - 2\}$, and let

Weyl Differencing
In this section, we will derive several auxiliary bounds using Weyl differencing, which will serve as complementary bounds to the more powerful ones coming from van der Corput differencing and Poisson summation. We will need a bound which uses Weyl differencing twice, as well as two bounds which come from applying variations of van der Corput differencing once, followed by a single application of Weyl differencing on the resulting quadratic exponential sum. For the former: the topic of performing Weyl differencing repeatedly on a system of forms has already been covered extensively by Lee in the context of function fields [12]. The Weyl differencing arguments used in his paper do not rely on being in a function field setting, and so we may freely invoke the results in [12, Section 3]. In particular, upon setting $d = 3$ and $R = 2$, an application of [12, Lemma 3.7] gives us the required bound, where $F^{(0)}$, $G^{(0)}$ are defined to be the top forms of $F$ and $G$ respectively. However, we may use Lemma 2.3 to conclude that $\sigma \leq \sigma(F^{(0)}, G^{(0)}) + 1$. Hence, we arrive at the following: Proposition 8.1 (Weyl/Weyl). Let $F$, $G$ be cubic polynomials with $\sigma(F^{(0)}, G^{(0)}) = \sigma$. We now aim to bound the exponential sum that we get after performing van der Corput differencing once. In this case, $F$ and $G$ are quadratic polynomials with $\|F^{(0)}\|, \|G^{(0)}\| \ll H$ for some $1 \leq H \leq P$. The aforementioned work of Lee has also kept explicit the dependence on $H$ throughout the Weyl differencing process. In particular, the following is a direct consequence of [12, Equation (3.20)]: Proposition 8.2 (van der Corput/Weyl). Let $F$, $G$ be quadratic polynomials and let $\sigma := \sigma(F^{(0)}, G^{(0)})$. We refer readers not familiar with the function field version to the first author's PhD thesis [18, Section 6] for a detailed proof of this result.

Minor Arcs Estimate
In this section, we will combine all of the approaches we have been developing throughout this paper to finally prove Proposition 3.2. In particular, we aim to show that, provided that $F, G$ intersect smoothly and $n \geq 39$, we have $S_m = O(P^{n-6-\delta})$ for some $\delta > 0$. To achieve this, we will split the $q$ sum of $S_m$ into square-free, cube-free square, 4th power-free cube, and 4th power-full parts ($b_1$, $b_2$, $b_3$, $q_4$ respectively), and further split these sums into $O(P^\varepsilon)$ dyadic ranges. In particular, we will be focusing on the sum $D_P(R, t, \mathbf{R})$, where $\mathbf{R} := (R_1, R_2, R_3)$ (the latter is apparent from the definition of $D_P(R, t, \mathbf{R})$, but it will be helpful to be able to reference this later). From the definition of $S_m$, we need only consider $D_P(R, t, \mathbf{R})$ in the stated ranges. Now, upon bounding $S_a(q, z)$ trivially for $t \leq P^{-5}$, our aim in this section becomes to show that $D_P(R, t, \mathbf{R}) \ll P^{n-6-\delta}$ for some $\delta > 0$, as this is sufficient to bound our minor arcs by $P^{n-6-\delta}$ via (9.4). Note that this is equivalent to proving the corresponding exponent bound for some $\delta > 0$ and for $P$ sufficiently large (so that the implied constant in (9.4) becomes negligible). Finally, as mentioned in Section 3, we will choose $Q \ll P^{3/2}$ from this point onwards (this choice will be explained in Section 9.3.1). With this last bit of setup, we are now ready to start the process of bounding $S_m$. We will do this by applying a total of five different bounds, based on different combinations of van der Corput differencing, Weyl differencing, and Poisson summation, to bound $D_P(R, t, \mathbf{R})$ for different ranges of $\mathbf{R}$ and $t$. In each range, we will take the minimum of all available bounds. To do so by hand for all possible values of $\mathbf{R}$ and $t$ is incredibly complicated. Therefore, instead of the tedious process of manually comparing and simplifying these bounds, a route which is traditionally taken, we take the idea of automating this process, as in [13], one step further.
We will directly feed these bounds to the existing Min-Max algorithm in Mathematica and obtain an explicit minimum value of our bounds on the minor arcs. We have also verified this value using an open source algorithm [19] designed by the first author. In its current form, this algorithm is significantly less efficient than the inbuilt one in Mathematica, but it allowed the authors to double-check the bounds coming from this inbuilt function. Throughout this section, we will use the following lemma, in which $b_i$ is the $i$-th power, $(i+1)$-th power-free part of $q$, and $q_{k+1}$ is the $(k+1)$-th power-full part of $q$; the stated bound then holds for every $a_1, \cdots, a_{k+1} \geq 0$.
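The automated comparison can be sketched in miniature as follows. This toy version (with entirely made-up linear exponent bounds, not the paper's (9.13)-(9.14)) mimics the Min-Max computation: at each parameter point the smallest available exponent bound applies, and the minor-arc exponent is the worst case of that minimum over the parameter ranges.

```python
import itertools

def worst_case_exponent(bounds, grids):
    """max over the parameter grid of min over the available bounds."""
    best = float("-inf")
    for point in itertools.product(*grids):
        best = max(best, min(b(*point) for b in bounds))
    return best

# Two hypothetical piecewise-linear exponent bounds in parameters (phi, tau),
# standing in for the van der Corput/Poisson and Weyl bounds.
bound_a = lambda phi, tau: 1 + 2 * phi - tau
bound_b = lambda phi, tau: 3 - phi + tau
grid = [[i / 10 for i in range(11)], [i / 10 for i in range(11)]]
exponent = worst_case_exponent([bound_a, bound_b], grid)
```

Since the bounds are piecewise linear in the exponents, the maximum of their minimum is attained either at a boundary of the ranges or where two bounds cross, which is what makes the automated search over a grid (or a linear-programming formulation) reliable.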
The proof of this lemma is standard and similar to [3, Lemma 20], so we omit it here. This lemma enables us to get away with using slightly worse exponential sum bounds for the perfect square and cube-full parts of $q$ (close inspection of the bounds found in Section 5 will show that our bounds in these cases are indeed worse). We have stated Lemma 9.1 in this level of generality because it will be useful for us when considering the singular series on the major arcs. We will spend the remainder of this section finding our final bounds for the minor arcs in the case when $F$, $G$ are non-singular. 9.1. Averaged van der Corput/Poisson. In this section, we will find a bound for $B_P(\phi, \tau, \boldsymbol{\phi}) := \log_P(D_P(R, t, \mathbf{R}))$ by combining the improved averaged van der Corput differencing process with Poisson summation. We will aim to show that $B_P(\phi, \tau, \boldsymbol{\phi}) \leq n - 6 - \delta$ for some $\delta > 0$, provided that $n$ is sufficiently large. By Proposition 4.5, we have a bound in which $|z| \ll \max\{(HP^2)^{-1}, t\}$. By Proposition 7.2 we have (9.10), with
$$H(q) := \max\{P^{10/(n-2)+\varepsilon},\, P^{2/(n+2)+\varepsilon}q^{6/(n+2)}\} \tag{9.11}$$
$$V(q, |z|) := 1 + qP^{\varepsilon-1}\max\{1, HP^2|z|\}. \tag{9.12}$$
Our choice of $H$ is informed by the fact that $H$ needs to be large enough so that the contribution from the $\eta = n$ term (or equivalently the term 1 in (9.8)) is satisfactory. We note that $V(q, |z|) \ll V(q, t)$ in the range of $z$ that we have. Hence (assuming $N$ is chosen sufficiently large), we obtain (9.13)-(9.14). For the most part, we will continue to use $H$, $V$, and $Y_i$ instead of $H(R)$, $V(R, t)$ and $Y_i(R, R_1, R_3, R_4, t)$, to avoid making the algebra more complicated than it already is. The final assertion is by Lemma 9.1.
We will start by simplifying the right-most bracket. Lemma 9.2. For every $R, R_1, R_3, R_4, t$ satisfying (9.2), the stated simplification holds. Proof. For this proof, we will introduce an auxiliary sequence $Y'_\eta$ and prove that it has the following three properties: (1) $Y_\eta \ll Y'_\eta$; (2) $Y'_0 + Y'_n \ll 1 + Y_0$; (3) $\sum_{\eta=0}^{n}Y'_\eta$ is a sum of three geometric series. Verifying these three facts is sufficient to complete the proof, since properties 1 and 3 control the full sum by its extreme terms, and property 2 implies that $(Y'_0 + Y'_n) \ll (1 + Y_0)$. For property 1, we note that the term outside of the bracket of $Y_\eta$ is equal to the analogous term in $Y'_\eta$. It therefore suffices to bound each term in the bracket of $Y_\eta$ from above by a term in $Y'_\eta$: we clearly have $V^{n-\eta} \leq R_1^{\eta/n}V^{n-\eta}$ when $\eta \in \{1, \cdots, n - 2\}$, and $R_1^{1/2}V \leq R_1^{(n-1)/n}V$ for every $n \geq 1$. The third terms of $Y_\eta$ and $Y'_\eta$ coincide with each other for every $\eta \in \{1, \cdots, n - 1\}$.
As for the middle term, it is bounded by the third term of $Y'_\eta$. Hence we have $Y_\eta \ll Y'_\eta$. Property 2 is trivial, so we will move on to verifying property 3. Again, we will go term by term: let $Y'_{\eta,1}$ denote the contribution of the first term. If we similarly define $Y'_{\eta,2}$ and $Y'_{\eta,3}$ in the obvious way, then we see that each of these is a geometric series in $\eta$. Hence we may represent $\sum_\eta Y'_\eta$ as a sum of three geometric series, as required. This completes the proof.
Hence, if we let $X_1, X_2$ be as in (9.16)-(9.17), then we may now apply Lemma 9.3 and (9.15) to bound $D_P(R, t, \mathbf{R})$ as in (9.18). Finally, note that $D_P(R, t, \mathbf{R}) \ll P^{n-6-\delta}$ for some $\delta > 0$ if $\log_P(D_P(R, t, \mathbf{R})) \leq n - 6 - \delta$ (provided $P$ is chosen large enough), and so it is sensible to consider bounding $B_P(\phi, \tau, \boldsymbol{\phi}) := \log_P(D_P(R, t, \mathbf{R}))$. By (9.18), and upon letting $R := P^\phi$, $R_i := P^{\phi_i}$, $t := P^\tau$, we have
$$B_P(\phi, \tau, \boldsymbol{\phi}) \leq \max\{\phi,\, \log_P(X_1),\, \log_P(X_2)\} + \log_P(C), \quad (9.19)$$
where $C$ is the implied constant in (9.18). If $P$ is made sufficiently large, $\log_P(C)$ can be absorbed into $\varepsilon$. Hence (recalling (9.11)-(9.12) and (9.16)-(9.17)), if we set the relevant exponents (for some small $\varepsilon > 0$ that we may choose freely), then (9.19) gives us the following:

Lemma 9.4. Let $n$ be fixed. Then $B_{AV/P}(\phi, \tau, \phi_3, \phi_4)$ is a continuous, piecewise linear function, and for every $\varepsilon > 0$, there is a sufficiently large $P$ such that the corresponding bound holds.

The naming convention used is meant to make it easier to parse the algorithm's input. For example, $\tau_{brac}$ and $\hat{H}$ correspond to Tau_bracket and H_Poisson respectively in the algorithm's code.

9.1.1. The Limiting Case. In this subsection, we will briefly illustrate why we should expect the condition $n \geq 39$ to appear in Proposition 3.2 (or equivalently, why we should expect $D_P(R, t, \mathbf{R}) \ll P^{n-6-\delta}$ to hold for $n \geq 39$). In general, we expect the limiting condition on $n$ to be determined by the so-called "generic case" for $(R, t, \mathbf{R})$: the case where $R$ is as large as possible and square-free, and $t$ is as large as possible. In this case, we expect the averaged van der Corput/Poisson bound to dominate over the other bounds, since it is our main bound. We will therefore pinpoint which component of (9.18) dominates and then solve this part by hand. When we do this, we will see that the condition $n \geq 39$ arises naturally.
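The passage from $D_P \ll P^{n-6-\delta}$ to a linear condition on $B_P$ uses only the base-$P$ logarithm. A minimal numeric sketch of how the implied constant is absorbed into $\varepsilon$ (the constant $C$ and exponent $a$ below are illustrative stand-ins, not values from the paper):

```python
import math

# Hypothetical bound D_P = C * P**a; we work with log_P(D_P) = a + log_P(C),
# computed in log space to avoid floating-point overflow for huge P.
def B(C, a, P):
    """log_P(C * P**a) = a + log(C)/log(P)."""
    return (math.log(C) + a * math.log(P)) / math.log(P)

C, a = 1000.0, 33.0
excesses = [B(C, a, P) - a for P in (10.0, 1e6, 1e12)]
# The excess log_P(C) is positive and strictly decreasing in P ...
assert excesses[0] > excesses[1] > excesses[2] > 0
# ... and choosing P >= C**(1/eps) forces log_P(C) <= eps, so the implied
# constant is absorbed into the epsilon in the exponent.
eps = 0.01
P_big = C ** (1 / eps)  # = 1e300, still a finite float
assert abs(B(C, a, P_big) - (a + eps)) < 1e-9
```

The same absorption is used repeatedly below whenever a bound of the shape $CP^{a}$ is converted into the exponent inequality $a + \varepsilon$.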
9.2. Pointwise van der Corput/Poisson. Next, we will find a bound for $B_P(\phi, \tau, \boldsymbol{\phi})$ by combining the improved pointwise van der Corput differencing process with Poisson summation. This time, we may assume $|z| \asymp t$. By Propositions 4.1 and 7.2, the fact that the $Y_i$'s form geometric series, and Lemmas 9.2-9.3 (using the same values for $Y$, $V$, $H$), we obtain the bound (9.29), where the $X_i$'s are defined as in (9.16)-(9.17). Taking logs and recalling the definitions (9.20)-(9.23) gives us the corresponding linear bound, where $C$ is the implied constant in (9.29). Hence, we arrive at the following:

Lemma 9.5. Let $n$ be fixed and set $B_P(\phi, \tau, \boldsymbol{\phi}) := \log_P D_P(R, t, \mathbf{R})$. Then $B_{PV/P}(\phi, \tau, \phi_3, \phi_4)$ is a continuous, piecewise linear function, and for every $\varepsilon > 0$, there is a sufficiently large $P$ such that the corresponding bound holds.

9.3. Averaged van der Corput/Weyl. We will now find a bound for $B_P(\phi, \tau, \boldsymbol{\phi})$ using the averaged van der Corput differencing process discussed in Section 4, followed by one Weyl differencing step as in Section 8. By Proposition 4.5 (upon choosing $N$ to be sufficiently large), we obtain (9.30). We may now use Proposition 8.2 and (9.1)-(9.2) to bound $T_h(q, z)$ as in (9.31). In this subsection, we will choose
$$H := \max\{R^{1/6},\, (RtP^2)^{1/5}\}. \quad (9.32)$$
$H$ is chosen so as to simplify the bounds here, as will be evident from our subsequent results. Note that $H = (RtP^2)^{1/5}$ when $t \geq (HP^2)^{-1}$, and $H = R^{1/6}$ when $t \leq (HP^2)^{-1}$. This is convenient for us, since considering these two cases for $t$ separately is natural due to the min bracket in (9.31). Before we substitute (9.31) back into (9.30), we will simplify this expression significantly using the following lemma: Using the fact that $H \ll P^{1/4}$ by (9.33), and $Q = P^{3/2}$ and $Rt \ll Q^{-1/2}$ by the assumptions in the lemma, we see that
$$HR^3t^3P^2 \ll P^{1/4} Q^{-3/2} P^2 = P^{9/4}(P^{-3/2})^{3/2} = 1,$$
as required. Finally, we consider the remaining term; this one has a few more steps. Recall that we are trying to show the dominance of the right-most term for every $t$ and $R$. By our choice of $H$ and the facts that $t \ll (RQ^{1/2})^{-1}$ and $R \leq Q$, we see that the required inequality indeed holds. Hence, for our choices of $H$ and $Q$, we have shown that $H^2R^{-1}\min\{1, (HtP^2)^{-1}\} = H/(RtP^2)$ dominates over all other terms in the expression for every $R \leq Q = P^{3/2}$ and $(HP^2)^{-1} \leq t \leq (RQ^{1/2})^{-1}$.
A similar set of arguments can be used in the case that $t < (HP^2)^{-1}$. In this case, we have $H = R^{1/6}$. Again going from left to right in the bracket of (9.31): the first comparison holds since $R \leq Q = P^{3/2}$. Next, the second comparison is again true by our assumptions from the lemma; here we used the fact that $Rt \leq Q^{-1/2}$, since $t \leq (RQ^{1/2})^{-1}$. Finally, the last comparison is also true since $R \leq Q = P^{3/2}$. Hence, we have shown that $H^2R^{-1}\min\{1, (HtP^2)^{-1}\} = H^2R^{-1}$ dominates over all other terms in the expression for every $R \leq Q = P^{3/2}$ and $t \leq (HP^2)^{-1}$. This completes the proof of the lemma.
We could now substitute the results from Lemma 9.6 into (9.30) directly, but the expression is rather complicated, so we will instead focus on the $h$-sum inside the integral for now. Our treatment of it will be analogous to the proof of the $h$-sum bound in Section 7, but it will be a much simpler process this time around. The reason for our choice of $H$ will also become apparent as we deal with this sum. We aim to show the following:

Lemma 9.7. Let $q \asymp R \leq Q$, $Q = P^{3/2}$, $|z| \asymp t \leq (qQ^{1/2})^{-1}$, and let $H$ be defined as in (9.32). Then
$$\sum_{|h| \leq H} |T_h(q, z)| \ll_n R^2 P^{n+\varepsilon} H.$$
Substituting the result of Lemma 9.7 back into (9.30) gives $D_P(R, t, \mathbf{R}) \ll P^{n-1+\varepsilon}$ times a sum over $q$ subject to (9.1). Finally, we split the $R$-sum into its cube-free and cube-full components and use Lemma 9.1, obtaining (9.35). Therefore, upon setting $R := P^\phi$, $R_i := P^{\phi_i}$, $t := P^\tau$ (recall (9.32)), we obtain a linear bound in the exponents, where $C$ is the implied constant in (9.35). Hence, if $P$ is chosen to be sufficiently large, we may absorb $\log_P(C)$ into $\varepsilon$, giving us the following:

Lemma 9.8. Let $n$ be fixed, and define $B_{AV/W}$ in terms of $\mathrm{Weyl}(\phi, \tau) + 2\,\tau_{brac}(\phi, \tau)$.
Then $B_{AV/W}(\phi, \tau, \phi_3, \phi_4)$ is a continuous, piecewise linear function, and for every $\varepsilon > 0$, there is a sufficiently large $P$ such that the corresponding bound holds.

9.3.1. Explaining the Choice of Q. As an aside, we will briefly explain our choice of $Q = P^{3/2}$, as promised in Section 3. We see in the proof of Lemma 9.6 that the optimal choice for $Q$ is $P^{3/2}$: if we choose any other value for $Q$, then we cannot simplify the Weyl bound to such a large extent. We would normally optimise our choice of $Q$ based on our main bound, which in this case is the averaged van der Corput/Poisson bound. This value for $Q$ turns out to be $P^{(4n+12)/(3(n-2))}$, which is the choice of $Q$ that guarantees $HP^2|z| \leq 1$ for every $z$ (optimising our $V$ term), where $H$ and $V$ are defined as in (9.11)-(9.12). In the range of $n$ that we are considering, this value is largest when $n = 39$, giving us $Q \asymp P^{1.5135\cdots}$, which is very close to the optimal choice for the van der Corput/Weyl bounds. In the end, the authors chose $Q = P^{3/2}$ because it is simpler and it makes the van der Corput/Weyl bounds significantly easier to work with. Most importantly, this choice does not cause any issues for our Poisson bounds, since it is "almost" optimal.
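A quick numeric sanity check on the near-optimal exponent for $Q$. The closed form $(4n+12)/(3(n-2))$ is our reading of the optimisation described above (an assumption, not a formula quoted verbatim in the surviving text); it should reproduce the stated $Q \asymp P^{1.5135\cdots}$ at $n = 39$ and decrease in $n$, so that $n = 39$ gives the largest value in the range $n \geq 39$:

```python
from fractions import Fraction

# Hypothesised Poisson-optimal exponent for Q = P^lambda(n); the closed
# form below is an assumption inferred from the quoted value P^{1.5135...}.
def q_exponent(n):
    return Fraction(4 * n + 12, 3 * (n - 2))

e39 = q_exponent(39)
assert e39 == Fraction(56, 37)            # 168/111 in lowest terms
assert abs(float(e39) - 1.5135) < 1e-3    # matches the quoted 1.5135...
# Decreasing in n, so the maximum over n >= 39 occurs at n = 39 ...
assert all(q_exponent(n) > q_exponent(n + 1) for n in range(39, 100))
# ... and it sits just above the simpler choice Q = P^{3/2}.
assert float(e39) > 1.5
```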
9.6. Proof of Proposition 3.2. Recall that our ultimate goal is to show that $S_m \ll P^{n-6-\delta}$ for some $\delta > 0$, for every $n \geq 39$. This is equivalent to having $\log_P(S_m) < n - 6$.
We assume that $\rho$ is chosen sufficiently small to facilitate the averaged van der Corput differencing bounds. We may now use all of the previous subsections to bound $\log_P(S_m)$ by a continuous, piecewise linear function in three variables. By (9.4), we have
$$\log_P(S_m) \leq \log_P(c_1) + \varepsilon + \max\{B_P(\phi, \tau, \boldsymbol{\phi}),\, n - 7\},$$
where the maximum is taken over $\phi, \boldsymbol{\phi}, \tau$ satisfying (9.2)-(9.3) with $t > P^{-5}$, and where $c_1$ is the implied constant. We clearly have $\log_P(c_1) + \varepsilon + n - 7 \leq n - 6 - \varepsilon$ for sufficiently large $P$, so we will assume that this is the case. Hence, by Lemmas 9.4, 9.5 and 9.8, we have
$$\log_P(S_m) \leq \varepsilon + \max_{(\phi, \tau, \phi_3, \phi_4) \in D_1 \cup D_2} \min\big\{B_{AV/P}(\phi, \tau, \phi_3, \phi_4),\, B_{PV/P}(\phi, \tau, \phi_3, \phi_4),\, B_{AV/W}(\phi, \tau, \phi_3, \phi_4)\big\}. \quad (9.41)$$
Since $D_1$ and $D_2$ are convex polytopes, and the function by which we have bounded $\log_P(S_m)$ is continuous and piecewise linear for every $n \in \mathbb{N}$, each region on which this function is linear is a convex polytope. It is well known that the extremum of such a function must be attained at a vertex of one of these polytopes. Therefore, one may numerically compute the exact maximum in (9.41). We compute this maximum in two different ways and check that both values coincide. The first way is to use an inbuilt Min-Max function in Mathematica that compares the bounds; this algorithm can be found in Appendix A. An executable version of the code can also be found on the first author's GitHub page [19].
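Both computations rest on the same reduction: a piecewise linear function is linear on finitely many convex polytopes, and its extremum is attained at a vertex of one of them. A toy illustration in Python (the minimum of two hypothetical affine pieces over the unit cube, not the actual bounds $B_{AV/P}$, $B_{PV/P}$, $B_{AV/W}$ or the domains $D_1$, $D_2$):

```python
import itertools, random

# Maximise f = min(g, h) with g, h affine over [0,1]^3.  The linearity
# regions are the two polytopes cut out by the plane {g = h}, so it
# suffices to check the cube's vertices together with the points where
# that plane crosses the cube's edges.
def g(p): return 2 * p[0] - p[1] + 0.5   # illustrative affine pieces
def h(p): return p[1] + p[2]
def f(p): return min(g(p), h(p))

verts = [list(v) for v in itertools.product((0.0, 1.0), repeat=3)]
candidates = list(verts)
for a, b in itertools.combinations(verts, 2):
    if sum(x != y for x, y in zip(a, b)) != 1:
        continue                          # not an edge of the cube
    da, db = g(a) - h(a), g(b) - h(b)
    if da * db < 0:                       # g - h changes sign along the edge
        t = da / (da - db)
        candidates.append([x + t * (y - x) for x, y in zip(a, b)])

best = max(f(p) for p in candidates)      # exact maximum, here 1.75

# A coarse random search over the cube never beats the vertex maximum.
random.seed(0)
sampled = max(f([random.random() for _ in range(3)]) for _ in range(20000))
assert sampled <= best + 1e-9
```

The actual verification replaces this toy $f$ by the minimum of the three piecewise linear bounds over $D_1 \cup D_2$ and enumerates the vertices of the corresponding linearity regions.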
We have also verified this using an open-source, Python-based algorithm (this can also be found in [19]).

10. Major Arcs
Finally, we will complete the proof of Theorems 1.1-1.2. Let $S$ and $J$ denote the singular series and singular integral respectively, provided that the corresponding limits exist. In what follows, let $\sigma$ denote the dimension of the singular locus of the complete intersection $X$. For our application here we only need to establish the $\sigma = -1$ case; however, a general version is equally straightforward. We will start by showing the following:

Lemma 10.1. Assume that $n - \sigma \geq 34$ and that $S$ is absolutely convergent. Then, provided that $\Delta \in (0, 1/7)$, we have $S_M = S J P^{n-6} + O_\phi(P^{n-6-\delta})$.
Following the proof found in [3], the first step towards proving this lemma is to show that
$$S(\alpha) = q^{-n} P^n S_{a,q}\, I(zP^3) + O(P^{n-1+2\Delta}), \quad (10.1)$$
where, for $t \in \mathbb{R}^2$,
$$I(t) := \int_{\mathbb{R}^n} \omega(x)\, e(t_1 F(x) + t_2 G(x))\, dx.$$
In order to achieve this, we need to separate the dependence of $S(\alpha)$ on $a$ from its dependence on $z$. Write $x = u + qv$, where $u$ runs over a complete set of residues modulo $q$, and recall that $\alpha = a/q + z$. Then
$$S(\alpha) = \sum_{u \bmod q} e_q(a_1 F(u) + a_2 G(u)) \sum_{v \in \mathbb{Z}^n} \Phi_u(v),$$
where
$$\Phi_u(v) = \omega\Big(\frac{u + qv}{P}\Big)\, e\big(z_1 F(u + qv) + z_2 G(u + qv)\big).$$
In order to make $a$ and $z$ independent of each other, we will replace our $v$-sum with a crude integral estimate which has no dependence on $u$. In particular, we can use the fact that $\Phi_u(v + x) = \Phi_u(v) + O(\max_{y \in [0,1]^n} |\nabla \Phi_u(v + y)|)$ for any $x \in [0,1]^n$ to conclude that
$$\sum_{v \in \mathbb{Z}^n} \Phi_u(v) - \int_{\mathbb{R}^n} \Phi_u(v)\, dv \ll P^n q^{-n}\big(q/P + q|z|P^2\big) = P^{n-1}q^{1-n} + |z|P^{n+2}q^{1-n},$$
since $S$ is an $n$-dimensional cube with sides of order $1 + P/q \leq 2P/q$. Hence, on setting $Px = u + qv$, we arrive at the following expression for $\sum_v \Phi_u(v)$:
$$\sum_{v \in \mathbb{Z}^n} \Phi_u(v) = \frac{P^n}{q^n} \int_{\mathbb{R}^n} \omega(x)\, e(z_1 P^3 F(x) + z_2 P^3 G(x))\, dx + O\big(P^{n-1}q^{1-n} + |z|P^{n+2}q^{1-n}\big).$$
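The sum-to-integral step above is the standard observation that replacing a slowly varying sum by an integral costs at most the volume of the support times the supremum of the gradient. A one-dimensional numeric sanity check (a Gaussian stand-in for $\Phi_u$, with a hypothetical scale parameter $P$, not the actual weight $\omega$):

```python
import math

# 1-D analogue of the v-sum -> integral replacement: for slowly varying f,
# |sum_{v in Z} f(v) - \int_R f(x) dx| is at most sup|f'| per unit cell,
# summed over the effective support.  Stand-in: f(x) = exp(-(x/P)^2).
P = 50.0
f = lambda x: math.exp(-(x / P) ** 2)

total = sum(f(v) for v in range(-2000, 2001))   # tails beyond are negligible
integral = P * math.sqrt(math.pi)               # \int_R exp(-(x/P)^2) dx

# sup|f'| = sqrt(2/e)/P, and the effective support has length O(P), so the
# crude error budget is O(1); the true error here is vastly smaller.
sup_deriv = math.sqrt(2 / math.e) / P
assert abs(total - integral) < sup_deriv * (10 * P)
```

In the text the same budget, taken over an $n$-dimensional cube of side $\asymp P/q$ with gradient $\ll q/P + q|z|P^2$, produces exactly the error term $P^{n-1}q^{1-n} + |z|P^{n+2}q^{1-n}$.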
We will prove that this assumption holds in the next section. We now aim to show that we can replace $J(P^\Delta)$ with $J$. In order to do this, we need $J$ to exist, and $|J - J(P^\Delta)|$ to be sufficiently small. Now, it is easy to see that
$$J - J(R) = \int_{|t| \geq R} I(t)\, dt,$$
and so this motivates us to find a bound for the size of $I(t)$. We will show the following:

Lemma 10.2. Let $\sigma := \dim \mathrm{Sing}_{\mathbb{C}}(X_F, X_G)$.
Hence for such $\alpha$, we may set $t = \alpha P^3$ and combine these estimates to get
$$I(t) \ll |t|^{(\sigma+1-n)/16} P^{\varepsilon} + |t| P^{-1}, \quad \text{when } 1 < |t| < P^2.$$
Finally, we note that this is true for every $P \geq 1$, and that $I(t)$ does not depend on $P$ at all. Hence we can choose $P = |t|^{(16+n-\sigma-1)/16}$ to reach our second estimate of $I(t)$.
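The choice of $P$ here simply balances the two terms of the first estimate; explicitly:

```latex
% With P = |t|^{(16+n-\sigma-1)/16}, the second term in
% I(t) << |t|^{(\sigma+1-n)/16} P^{\varepsilon} + |t| P^{-1} becomes
|t|\, P^{-1}
  = |t|^{\,1 - \frac{16+n-\sigma-1}{16}}
  = |t|^{\frac{\sigma+1-n}{16}},
% matching the first term up to the harmless factor P^{\varepsilon} = |t|^{O(\varepsilon)}.
```

So both terms contribute the same power of $|t|$, which is the second estimate of $I(t)$.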
For $n - \sigma \geq 34$, this shows that $J$ is absolutely convergent. Finally, replacing $J(P^\Delta)$ by $J$ in (10.5) gives us
$$S_M = S J P^{n-6} + O_\phi\big(P^{n-7+7\Delta} + P^{n-6-\Delta\phi} + P^{n-6-\Delta/16+\varepsilon}\big),$$
which is permissible for Lemma 10.1, provided that $\Delta \in (0, 1/7)$, $\phi > 0$, and $\varepsilon > 0$ is taken to be sufficiently small. To see that $S$ converges for $n - \sigma \geq 35$, we will again adopt the approach of Browning and Heath-Brown in [3]. We start by noting that $q^{-n} \sum^{*}_{a \bmod q} |S_{a,q}|$ is a multiplicative function of $q$, and so it follows that $S$ is absolutely convergent if and only if $\prod_p (1 + \sum_{k=1}^{\infty} a_p(k))$ is, where
$$a_p(k) := p^{-kn} \sum^{*}_{a \bmod p^k} |S_{a,p^k}|.$$
But by taking logs, this is equivalent to the convergence of $\sum_p \sum_{k=1}^{\infty} a_p(k)$. Now, by Proposition 8.1 with $a = 0$, $q = p^k$, $|z| < P^{-3+\Delta}$, $\omega = \chi$, we have that
$$a_p(k) \ll p^{k(2 + (\sigma+1)/16 - n/16) + \varepsilon} \quad (10.6)$$
for any $k \geq 1$, and so this enables us to establish that $S$ converges absolutely provided that $n - \sigma \geq 50$. We can use (10.6) far more effectively than this if we are more careful. We will assume that $n - \sigma \geq 35$ from now on. Then by (10.6), the sum $\sum_p \sum_{k \geq 16} a_p(k)$ converges, assuming $\varepsilon > 0$ is sufficiently small. We now need to show that $\sum_p \sum_{1 \leq k \leq 15} a_p(k)$ also converges. For $2 \leq k \leq 15$, we will use [3, Lemma 25], which shows that
$$S_{a,p^k} \ll_k p^{(k-1)n + s_p(a_1 F + a_2 G) + 1}.$$
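The "taking logs" step uses only the elementary bounds $x/(1+x) \leq \log(1+x) \leq x$ for $x \geq 0$ (a standard fact about infinite products, not anything specific to this paper); since each $a_p(k) \geq 0$:

```latex
\prod_p \Big(1 + \sum_{k=1}^{\infty} a_p(k)\Big) < \infty
\iff \sum_p \log\Big(1 + \sum_{k=1}^{\infty} a_p(k)\Big) < \infty
\iff \sum_p \sum_{k=1}^{\infty} a_p(k) < \infty .
```

For the last equivalence, $\log(1+x) \leq x$ gives one direction, and $\log(1+x) \geq x/2$ for $0 \leq x \leq 1$ gives the other once the inner sums are at most $1$, which (10.6) guarantees for all large $p$.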