1. Introduction
Let
${{\mathcal{S}}}_n$
denote the symmetric group on a set of
$n$
elements. Recall that each permutation can be written uniquely as a product of disjoint cycles (see, for example, [Reference Bóna1] or [Reference Stanley24]). For a permutation
$\pi \in {{\mathcal{S}}}_n$
, a fixed set of
$\pi$
is a subset of
$\{1,\ldots , n\}$
that is fixed by
$\pi$
. A fixed set corresponds to a divisor of
$\pi$
, that is, a product of some subset of the disjoint cycles in
$\pi$
(we include both the empty set and the whole set
$\{1,\ldots , n\}$
as fixed sets). Fixed sets play the same role for permutations as divisors do for integers. The existence of fixed sets of a given size has applications to various questions in combinatorial group theory, such as the generation of
${{\mathcal{S}}}_n$
by random permutations and the distribution of transitive subgroups of
${{\mathcal{S}}}_n$
. See, for example, [Reference Cameron and Kantor2–Reference Eberhard, Ford and Koukoulopoulos7, Reference Ford, Green and Koukoulopoulos15, Reference Łuczak and Pyber18, Reference Pemantle, Peres and Rivin20, Reference Weingartner26]. For more information and references, one can refer to [Reference Ford14].
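As a concrete computational illustration (ours, not part of the paper; permutations are 0-indexed on $\{0,\ldots , n-1\}$ and all function names are ours), the fixed sets of a permutation are exactly the unions of subsets of its disjoint cycles, so they can be enumerated directly from the cycle decomposition:

```python
from itertools import combinations

def cycles(perm):
    # Cycle decomposition of a permutation of {0, ..., n-1},
    # given as a list with perm[i] = image of i.
    seen, out = set(), []
    for start in range(len(perm)):
        if start not in seen:
            cyc, x = [], start
            while x not in seen:
                seen.add(x)
                cyc.append(x)
                x = perm[x]
            out.append(tuple(cyc))
    return out

def fixed_sets(perm):
    # Every fixed set (divisor) is a union of a subset of the cycles,
    # so there are exactly 2^(number of cycles) of them.
    cs = cycles(perm)
    return [frozenset(x for c in combo for x in c)
            for r in range(len(cs) + 1)
            for combo in combinations(cs, r)]
```

For instance, the permutation with cycles $(0\,1)$, $(2)$, $(3\,4)$ has $2^3=8$ fixed sets, of sizes $0,1,2,2,3,3,4,5$.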
Let
$k, n$
be integers with
$1\leqslant k\leqslant n/2$
. We consider a random permutation
$\pi \in {{\mathcal{S}}}_n$
, each permutation occurring with probability
$1/n!$
. A basic problem is to estimate
$i(n, k)$
, the probability that a random permutation
$\pi \in {{\mathcal{S}}}_n$
has a fixed set of size
$k$
. Here and throughout, we write
$f\ll g$
or
$f=O(g)$
to mean that there is a positive constant
$C$
such that
$|f|\leqslant Cg$
holds on the domain of
$f$
. The notation
$f\asymp g$
means that both
$f\ll g$
and
$g\ll f$
hold. Unless indicated by subscripts, the implied constant
$C$
is absolute – independent of all parameters. For example,
$\log x\ll _\epsilon x^{\epsilon }$
for each fixed
$\epsilon \gt 0$
and all
$x\geqslant 1$
. The upper bound
$i(n, k)\ll k^{-1/100}$
was first established by Łuczak and Pyber [Reference Łuczak and Pyber18]. Fifteen years later, Diaconis, Fulman, and Guralnick [Reference Diaconis, Fulman and Guralnick3] obtained the lower bound
$\lim _{n\rightarrow \infty }i(n, k)\gg \log k/k$
. A sharper estimate was subsequently proved by Pemantle, Peres, and Rivin [Reference Pemantle, Peres and Rivin20], who showed that
$\lim _{n\rightarrow \infty }i(n, k)=k^{-{{\mathcal{E}}}+o(1)}$
, where
\begin{align} {{\mathcal{E}}}=1-\frac {1+\log \log 2}{\log 2}=0.086071\ldots \end{align}
is referred to as the Erdős-Tenenbaum-Ford constant by some authors [Reference Luca and Pomerance17, Reference Pollack and Pomerance22]. The authors of [Reference Pemantle, Peres and Rivin20] also pointed out a surprising connection between the problem of estimating
$i(n, k)$
and the problem of estimating the proportion of integers
$n\leqslant x$
with a divisor in a given dyadic interval
$(y, 2y]$
which was solved up to a constant factor by Ford in [Reference Ford10, Reference Ford, De Koninck, Granville and Luca11]. This in turn relates to the classical Erdős multiplication table problem ([Reference Erdős8, Reference Erdős9, Reference Tenenbaum25]), which asks for an estimate of the number
$A(N)$
of distinct products of the form
$ab$
with
$a\leqslant N, b\leqslant N$
We remark that the same connection was independently noted by Diaconis and Soundararajan [Reference Soundararajan23, p. 14]. Building on this remarkable connection, Eberhard, Ford, and Green [Reference Eberhard, Ford and Green5] formulated a parallel theory of permutations, and ultimately proved that
\begin{align} i(n, k)\asymp \frac {k^{-{{\mathcal{E}}}}}{(1+\log k)^{3/2}} \end{align}
holds uniformly for
$1\leqslant k\leqslant n/2$
.
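Numerically (an illustrative aside of ours, using the standard closed form $\mathcal{E}=1-(1+\log \log 2)/\log 2$), the constant $\mathcal{E}$ and the threshold $1-1/\log 4$ appearing later evaluate as follows:

```python
import math

# Erdos-Tenenbaum-Ford constant: E = 1 - (1 + log log 2) / log 2
E = 1 - (1 + math.log(math.log(2))) / math.log(2)

# threshold 1 - 1/log 4 separating the two cases of Theorem 1.1
delta_crit = 1 - 1 / math.log(4)
```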
In the web seminar on analytic and probabilistic number theory held on December 3, 2020, Ford [Reference Ford12] proposed a more general problem aimed at estimating the proportion of permutations
$\pi \in {{\mathcal{S}}}_n$
for which there exist two divisors
$\pi _1$
and
$\pi _2$
with prescribed properties such that
$\pi =\pi _1\pi _2$
. Let
$r$
be a positive integer. Motivated by the work on rough integers of Ford [Reference Ford13], a natural set of permutations to be considered is
\begin{align} {{\mathcal{S}}}_n^{(r)}=\big \{\pi \in {{\mathcal{S}}}_n\,:\, \pi\ \text{has no cycle of length less than}\ r\big \}. \end{align}
By analogy with rough integers in [Reference Ford13], the permutations in
${{\mathcal{S}}}_n^{(r)}$
are called
$(r-1)$
-rough permutations. It is evident that
${{\mathcal{S}}}_n^{(1)}$
is precisely the symmetric group
${{\mathcal{S}}}_n$
of degree
$n$
. Also note that
${{\mathcal{S}}}_n^{(2)}$
coincides with the set of all derangements, i.e., permutations in
${{\mathcal{S}}}_n$
without fixed points. The asymptotic behaviour of the size of
${{\mathcal{S}}}_n^{(r)}$
has received considerable attention (see, e.g., [Reference Ford14, Reference Granville16, Reference Petuchovas21]). In [Reference Ford13], Ford determined the number of integers
$n\leqslant x$
that have a divisor in
$(y, 2y]$
and no prime factor
$\leqslant w$
uniformly in
$x, y, w$
with
$4\leqslant y\leqslant \sqrt {x}$
and
$4\leqslant w\leqslant y/8$
. By analogy with Ford’s work in [Reference Ford13], we estimate the proportion
${{\mathcal{R}}}_r(n, k)$
of those permutations
$\pi \in {{\mathcal{S}}}_n$
having a fixed set of size
$k$
and having no cycle factors of length less than
$r$
. In this note, we determine the exact order of magnitude of
${{\mathcal{R}}}_r(n, k)$
uniformly for all
$n, k, r$
with
$2\leqslant r\leqslant k\leqslant n/2$
.
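For small $n$, one can estimate $\mathcal{R}_r(n, k)$ by brute-force Monte Carlo, checking roughness via the minimum cycle length and the fixed-set condition via a subset-sum over cycle lengths. A sketch (ours, illustrative only; 0-indexed permutations, all names ours):

```python
import random

def cycle_lengths(perm):
    # Lengths of the disjoint cycles of a permutation of {0, ..., n-1}.
    seen, lens = set(), []
    for s in range(len(perm)):
        if s not in seen:
            x, length = s, 0
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            lens.append(length)
    return lens

def has_fixed_set_of_size(lens, k):
    # A fixed set of size k exists iff some subset of the cycle
    # lengths sums to k (a subset-sum computation).
    reach = {0}
    for length in lens:
        reach |= {s + length for s in reach if s + length <= k}
    return k in reach

def estimate_R(n, k, r, trials=2000, seed=1):
    # Monte Carlo estimate of R_r(n, k).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        lens = cycle_lengths(perm)
        if min(lens) >= r and has_fixed_set_of_size(lens, k):
            hits += 1
    return hits / trials
```

Such a simulation is only feasible for modest $n$, but it makes the two regimes of the theorem visible empirically.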
Theorem 1.1.
Let
$n, k, r$
be integers with
$2\leqslant r\leqslant k\leqslant n/2$
and let
$\delta =\frac {\log r}{\log k}$
. Then we have
\begin{align*} \mathcal{R}_r(n, k)\asymp \begin{cases} \frac {1}{r^2},& \text{if}\ 1-1/\log 4\leqslant \delta \leqslant 1,\\ \delta B(r, k)k^{-{{\mathcal{E}}}+\frac {\log (1-\delta )}{\log 2}}, &\text{if} \ 0\lt \delta \lt 1-1/\log 4, \end{cases} \end{align*}
where
${\mathcal{E}}$
is defined as in (
1.1
) and
One can check that
$\delta B(r, k)k^{-{{\mathcal{E}}}+(\log (1-\delta ))/\log 2}$
is of the same order of magnitude as
$\frac {1}{r^2}$
if
$\delta$
approaches
$1-1/\log 4$
from the left, and
$\delta B(r, k)k^{-{{\mathcal{E}}}+(\log (1-\delta ))/\log 2}$
is of the same order of magnitude as
$k^{-{{\mathcal{E}}}}(1+\log k)^{-3/2}(\log r)(r^{-1/\log 2})$
if
$\log r\leqslant \sqrt {\log k}$
. In particular, if
$r=O(1)$
, then
${{\mathcal{R}}}_r(n, k)\asymp _r i(n, k)$
.
For
$r=1$
,
${{\mathcal{R}}}_r(n, k)$
reduces to the probability
$i(n, k)$
that a random permutation
$\pi \in {{\mathcal{S}}}_n$
has a fixed set of size
$k$
, which was determined by Eberhard, Ford, and Green [Reference Eberhard, Ford and Green5]. Our Theorem 1.1 generalizes their result. Notably, for
$r=2$
, we have
${{\mathcal{R}}}_2(n, k)\asymp i(n, k)$
; this corresponds to the probability that a random permutation is a derangement with a fixed set of size
$k\ (k\geqslant 2)$
.
Given the strength of the analogy with [Reference Ford13, Theorem 1.1], one might hope to deduce Theorem 1.1 quickly by directly using transference ideas. Unfortunately, such a direct approach does not seem feasible. Instead, our proof of Theorem 1.1 combines the ideas of Ford [Reference Ford13] with those of Eberhard, Ford, and Green [Reference Eberhard, Ford and Green5].
Suppose
$n=2k$
. We say that a permutation
$\sigma \in {{\mathcal{S}}}_n$
is perfectly balanced if it has a divisor of size
$k$
(see [Reference Ford12]). In this case,
$\sigma$
is the product of two divisors, each of size
$k$
. The enumeration of perfectly balanced rough permutations is the direct analogue, in the context of permutations, of the restricted multiplication table problem for rough integers studied by Ford (see [Reference Ford13, Corollary 1.4]). By applying Theorem 1.1, one can readily determine the probability that a uniformly random permutation in
${{\mathcal{S}}}_{2k}$
has no cycle of length less than
$r$
and is perfectly balanced.
Corollary 1.2.
Let
$k, r$
be integers with
$2\leqslant r\leqslant k$
. If
$\log r\geqslant (1-1/\log 4)\log k$
, then we have
Otherwise, we have
where
${{\mathcal{S}}}_{2k}^{(r)}$
and
$B(r, k)$
are defined as in (
1.3
) and (
1.4
) respectively.
2. Preliminaries
2.1 Notation
For convenience, we first give some notation:
-
• Let
$\mathbb{N}$
be the set of positive integers, and let
${\mathbb{N}}_0={\mathbb{N}}\cup \{0\}$
. For any positive integers
$i, j$
with
$i\leqslant j$
, let
$[i, j]=\{m\in {\mathbb{N}}\,:\, i\leqslant m\leqslant j\}$
and
$[j]=\{m\in {\mathbb{N}}\,:\, 1\leqslant m\leqslant j\}$
. -
•
$C_j(\pi )$
is the number of cycles of length
$j$
in the permutation
$\pi$
, and
$C_I(\pi )$
is the number of cycles whose lengths lie in the set
$I$
. -
•
$\beta | \pi$
means that
$\beta$
is a fixed set or divisor of the permutation
$\pi$
, i.e., a product of some subset of the cycles of
$\pi$
. -
•
$|\beta |$
is the size of
$\beta$
. -
• For any positive integer
$t$
with
$t\geqslant r$
, let
\begin{equation*}{\boldsymbol{{\mathcal{C}}}}_{r, t}=\{(c_1,\ldots ,c_r,\ldots , c_t)\in {\mathbb{N}}_0^t: c_1=\cdots =c_{r-1}=0\}.\end{equation*}
-
•
$H_m=1+1/2+\cdots +1/m$
is the
$m$
-th harmonic sum. -
•
$\unicode {x1D7D9}(S)$
is the indicator function of statement
$S$
;
$\unicode {x1D7D9}(S)=1$
if
$S$
is true, and
$\unicode {x1D7D9}(S)=0$
if
$S$
is false. -
•
${\mathbb{P}}(A)$
is the probability of the event
$A$
. -
•
${\mathbb{E}}(X)$
is the expectation of the random variable
$X$
. -
•
${{\mathcal{C}}}_i^{(n)}$
is the set of cycles of length
$i$
in
${{\mathcal{S}}}_n$
. -
• For a vector
${\mathbf{c}}=(c_1,\ldots , c_k)$
of nonnegative integers, let
\begin{align*} {\mathscr{L}}\,({\mathbf{c}})=\{m_1+2m_2+\cdots +k m_k: 0\leqslant m_j\leqslant c_j\ \text{for}\ j=1, 2,\ldots , k\}\ \text{and}\ S({\mathbf{c}})= \max {\mathscr{L}}\,({\mathbf{c}}). \end{align*}
-
• Let
${\mathbf{X}}_{r, k}$
denote the random vector
$(X_r,\ldots , X_k)$
, where
$X_r,\ldots , X_k$
are independent Poisson random variables with parameters
$1/r,\ldots , 1/k$
, respectively. Furthermore, we write
2.2 Preliminary lemmas
We first record the inequalities
Moreover, we state Stirling’s formula as follows.
Stirling’s formula. For all sufficiently large positive integers
$n$
,
\begin{align*} n!=\sqrt {2\pi n}\left (\frac {n}{{\mathrm{e}}}\right )^n\left (1+O\left (\frac {1}{n}\right )\right ), \end{align*}
or equivalently,
\begin{align*} \log n!=n\log n-n+\frac {1}{2}\log (2\pi n)+O\left (\frac {1}{n}\right ). \end{align*}
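A quick numerical sanity check of Stirling's formula (our illustration, not part of the paper; the relative error of the leading term is of order $1/(12n)$):

```python
import math

def stirling(n):
    # Leading term of Stirling's formula: sqrt(2*pi*n) * (n/e)^n.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def relative_error(n):
    # Relative error of the leading term against the exact factorial.
    return abs(stirling(n) - math.factorial(n)) / math.factorial(n)
```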
The following result is an analogue for permutations of a basic lemma from sieve theory; see, for example, [Reference Eberhard, Ford and Green5] or [Reference Granville16].
Lemma 2.1.
Suppose that
$m, n$
are integers with
$1\leqslant m\leqslant n$
. Let
$\pi \in {{\mathcal{S}}}_n$
be chosen uniformly at random. Then
We also need standard bounds on the Poisson distribution; see, for example, [Reference Norton19, Section 4]. For ease of use, we employ the following simplified version due to Ford [Reference Ford13, Lemma 2.4].
Lemma 2.2.
Uniformly for
$h\leqslant m\leqslant x$
,
\begin{equation*} \sum _{h\leqslant k\leqslant m}\frac {x^k}{k!}\asymp \min \left(\sqrt {x}, \frac {x}{x-m}, m-h+1\right)\frac {x^m}{m!}. \end{equation*}
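The two-sided estimate in Lemma 2.2 is easy to sanity-check numerically (our illustration; function names are ours, and the ratio bounds below simply reflect that the implied constants are modest for small parameters):

```python
import math

def partial_sum(h, m, x):
    # Left-hand side: sum of x^k / k! over h <= k <= m.
    return sum(x**k / math.factorial(k) for k in range(h, m + 1))

def ford_bound(h, m, x):
    # Right-hand side: min(sqrt(x), x/(x-m), m-h+1) * x^m / m!,
    # with x/(x-m) read as +infinity when x = m.
    cap = x / (x - m) if x > m else math.inf
    return min(math.sqrt(x), cap, m - h + 1) * x**m / math.factorial(m)
```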
The following lemma is Cauchy’s classical formula. One may refer to [Reference Ford14, Theorem 1.2] or [Reference Stanley24, Proposition 1.3.2].
Lemma 2.3.
Suppose that
$\pi \in {{\mathcal{S}}}_n$
is chosen uniformly at random. If
$m_1+2m_2+\cdots +nm_n=n$
, then
\begin{equation*} {\mathbb{P}} \big (C_1(\pi )=m_1,\ldots , C_n(\pi )=m_n\big )=\prod _{j=1}^n \frac {(1/j)^{m_j}}{m_j!}. \end{equation*}
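Cauchy's formula can be verified exhaustively for small $n$ (an illustrative check of ours, with 0-indexed permutations):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(perm):
    # counts[j-1] = number of cycles of length j in perm.
    seen, counts = set(), [0] * len(perm)
    for s in range(len(perm)):
        if s not in seen:
            x, length = s, 0
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            counts[length - 1] += 1
    return tuple(counts)

def cauchy_probability(m):
    # prod over j of (1/j)^{m_j} / m_j!
    p = 1.0
    for j, mj in enumerate(m, start=1):
        p *= (1 / j) ** mj / factorial(mj)
    return p

def check(n):
    # Compare empirical frequencies over all of S_n with Cauchy's formula.
    freq = Counter(cycle_type(p) for p in permutations(range(n)))
    return all(abs(c / factorial(n) - cauchy_probability(m)) < 1e-12
               for m, c in freq.items())
```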
We will also use the following estimate of Eberhard, Ford, and Green [Reference Eberhard, Ford and Green5], which applies to permutations whose cycle structure is constrained only at small lengths.
Lemma 2.4.
Let
$1\leqslant m\lt n$
and
$c_1,\ldots , c_m$
be nonnegative integers satisfying
Suppose that
$\pi \in {{\mathcal{S}}}_n$
is chosen uniformly at random. Then
Applying Lemmas 2.3 and 2.4, we obtain a crude lower bound for
${{\mathcal{R}}}_r(n, k)$
.
Lemma 2.5.
For any positive integers
$r, k, n$
with
$1\leqslant r\leqslant k\leqslant n/2$
, we have
\begin{align*} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{k^2}. \end{align*}
Proof. If
$n=2k$
, then we obtain by Lemma 2.3 that
If
$n\gt 2k$
, we then have by Lemma 2.4 that
as desired.
By using Lemma 2.1, we derive a crude upper bound for
${{\mathcal{R}}}_r(n, k)$
as follows.
Lemma 2.6.
For any positive integers
$r, k, n$
with
$2\leqslant r\leqslant k\leqslant n/2$
, we have
\begin{align*} {{\mathcal{R}}}_r(n, k)\leqslant \frac {1}{r^2}. \end{align*}
Proof. For convenience, let
\begin{align*} {{\mathcal{S}}}_n^{(r)}(k)=\big \{\pi \in {{\mathcal{S}}}_n^{(r)}\,:\, \pi\ \text{has a fixed set of size}\ k\big \}, \end{align*}
where
${{\mathcal{S}}}_n^{(r)}$
is defined as in (1.3). We then obtain by Lemma 2.1 that
\begin{align*} \begin{split} \big |{{\mathcal{S}}}_n^{(r)}(k)\big |&\leqslant \sum _{I\subseteq [n]\atop |I|=k}\big |\{ \pi _1\in {{\mathcal{S}}}_{I}\,:\, C_{[r-1]}(\pi _1)=0\}\big |\big |\{ \pi _2\in {{\mathcal{S}}}_{[n]\setminus I}: C_{[r-1]}(\pi _2)=0\}\big |\\ &\leqslant \sum _{I\subseteq [n]\atop |I|=k} \frac {k!}{r}\frac {(n-k)!}{r}=\frac {n!}{r^2}, \end{split} \end{align*}
where
${{\mathcal{S}}}_I$
denotes the symmetric group consisting of all permutations of
$I$
, and for
$j=1, 2$
, the symbol
$C_{[r-1]}(\pi _j)$
counts the number of cycles of
$\pi _j$
whose lengths belong to the set
$[r-1]=\{1,\ldots , r-1\}$
. Consequently, we derive that
\begin{align*} {{\mathcal{R}}}_r(n, k)=\frac {\big |{{\mathcal{S}}}_n^{(r)}(k)\big |}{n!}\leqslant \frac {1}{r^2}, \end{align*}
as required.
Moreover, we also need the cycle lemma from combinatorics. See [Reference Ford, De Koninck, Granville and Luca11, Lemma 2.4], for example.
Lemma 2.7.
For positive real numbers
$x_1,\ldots , x_t$
with product
$A$
, let
$x_{t+i}=x_i$
for
$i\geqslant 1$
. Then
\begin{equation*} \frac {1}{\max (1, A)} \leqslant \sum _{j=0}^{t-1}\left( \sum _{h=1}^t x_{1+j}\cdots x_{h+j}\right)^{-1}\leqslant \frac {1}{\min (1, A)}. \end{equation*}
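The cycle lemma is likewise easy to test numerically (our sketch; we draw random positive $x_i$ and check both inequalities up to floating-point tolerance):

```python
import math, random

def middle_sum(xs):
    # sum over j of 1 / (sum_{h=1}^t x_{1+j} x_{2+j} ... x_{h+j}),
    # with indices read cyclically modulo t.
    t = len(xs)
    total = 0.0
    for j in range(t):
        partial, prod = 0.0, 1.0
        for h in range(t):
            prod *= xs[(j + h) % t]
            partial += prod
        total += 1.0 / partial
    return total

def check_cycle_lemma(xs):
    # Verify 1/max(1, A) <= middle_sum <= 1/min(1, A) where A = prod(xs).
    A = math.prod(xs)
    mid = middle_sum(xs)
    return 1 / max(1, A) - 1e-9 <= mid <= 1 / min(1, A) + 1e-9
```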
Finally, we refer to a result derived by Ford [Reference Ford10, Lemma 4.9] for the volume of a specific complex region in
${\mathbb{R}}^t$
with
$t\in {\mathbb{N}}$
.
Lemma 2.8.
Let
$M$
be a sufficiently large integer. Suppose
$v\geqslant 1$
,
$10M\leqslant t\leqslant 100(v-1)$
,
$s\geqslant M/2+1$
and
$0\leqslant t-v\leqslant s-M/3-1$
. Let
$Y_t(s, v)$
be the set of
$\boldsymbol{\xi }=(\xi _1,\ldots , \xi _t)\in {\mathbb{R}}^t$
satisfying the following:
-
(i)
$0\leqslant \xi _1\leqslant \cdots \leqslant \xi _t\leqslant 1$
; -
(ii) For
$1\leqslant i\leqslant \sqrt {t-M}$
,
$\xi _{M+i^2}\gt i/v$
and
$\xi _{t+1-(M+i^2)}\lt 1-i/v$
; -
(iii)
$\sum _{j=1}^t 2^{j-v\xi _j}\leqslant 2^s$
.
Then we have
3. A global-to-local principle
For a random permutation
$\pi \in {{\mathcal{S}}}_n$
, let
$A$
be the event that
$\pi$
has no cycles of length less than
$r$
, and let
$B$
be the event that
$\pi$
has a fixed set of size
$k$
. Let
${\mathbf{c}}(\pi )=(C_1(\pi ), C_2(\pi ),\ldots , C_k(\pi ))$
denote the vector of cycle type only listing the number of cycles of length
$1, 2,\ldots , k$
, respectively, in
$\pi$
. The event that a random permutation
$\pi$
has a fixed set of size
$k$
is equivalent to the condition that
$k$
lies in the set
${\mathscr{L}}\,({\mathbf{c}}(\pi ))$
. Note then that
${{\mathcal{R}}}_r(n, k)={\mathbb{P}}(AB)={\mathbb{P}}(A){\mathbb{P}}(B\mid A)$
and
${\mathbb{P}}(B\mid A)={\mathbb{P}}\big (k\in {\mathscr{L}}\,({\mathbf{c}}(\pi ))\mid \pi \in {{\mathcal{S}}}_n^{(r)}\big )$
, where
${\mathbb{P}}(B\mid A)$
is the conditional probability of
$B$
relative to
$A$
. By Lemma 2.1, we derive that
${\mathbb{P}}(A)={\mathbb{P}}(\pi \in {{\mathcal{S}}}_n^{(r)})$
is of the same order as
$1/r$
. So it is sufficient to estimate the probability
${\mathbb{P}}\big (k\in {\mathscr{L}}\,({\mathbf{c}}(\pi ))\mid \pi \in {{\mathcal{S}}}_n^{(r)}\big )$
. Instead of estimating
${\mathbb{P}}\big (k\in {\mathscr{L}}\,({\mathbf{c}}(\pi ))\mid \pi \in {{\mathcal{S}}}_n^{(r)}\big )$
directly, however, we apply a global-to-local principle used in [Reference Eberhard, Ford and Green5, Reference Ford10, Reference Ford13] to reduce the estimate of
${{\mathcal{R}}}_r(n, k)$
to studying the average size of
${\mathscr{L}}\,({\mathbf{X}}_{r, k})$
. In fact, we prove the following result.
Proposition 3.1.
For any integers
$n, k, r$
with
$1\leqslant k\leqslant n/2$
and
$1\leqslant r\leqslant 0.04k$
, we have
\begin{align*} {{\mathcal{R}}}_r(n, k)\asymp \frac {1}{rk}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|. \end{align*}
To prove Proposition 3.1, we need the following result due to Eberhard, Ford, and Green [Reference Eberhard, Ford and Green5].
Lemma 3.2.
Let
$k\in {\mathbb{N}}, {\mathbf{c}}=(c_1,\ldots , c_k), {\mathbf{c}}'=(c_1',\ldots , c_k')$
with
${\mathbf{c}}, {\mathbf{c}}'\in {\mathbb{N}}_0^k$
. Suppose that
$I\subset [k]$
,
$c_i'=0$
for
$i\in I$
and
$c_i'=c_i$
for
$i\in [k]\setminus I$
. Then
\begin{align*} |{\mathscr{L}}\,({\mathbf{c}}')|\leqslant |{\mathscr{L}}\,({\mathbf{c}})|\leqslant |{\mathscr{L}}\,({\mathbf{c}}')|\prod _{i\in I}(1+c_i). \end{align*}
First, it is easy to see that for any positive integer
$t$
with
$t\geqslant r$
, there is a natural bijection from
${\mathbb{N}}_0^{t-r+1}= \{(c_r,\ldots , c_t): c_i\in {\mathbb{N}}_0\ \text{for}\ r\leqslant i\leqslant t\}$
to
${\boldsymbol{{\mathcal{C}}}}_{r, t}$
by mapping
${\mathbf{c}}'=(c_r,\ldots , c_t)\in {\mathbb{N}}_0^{t-r+1}$
to
${\mathbf{c}}=(0,\ldots , 0, c_r,\ldots , c_t)\in {{\boldsymbol{{\mathcal{C}}}_{r, t}}}$
. Moreover, for
${\mathbf{c}}'=(c_r,\ldots , c_t)\in {\mathbb{N}}_0^{t-r+1}$
, we always have
${\mathscr{L}}\,({\mathbf{c}})={\mathscr{L}}\,({\mathbf{c}}')$
with
${\mathbf{c}}=(0,\ldots , 0, c_r,\ldots , c_t)\in {{\boldsymbol{{\mathcal{C}}}_{r, t}}}$
. So we obtain by the independence of
$X_r,\ldots , X_t$
that
\begin{align} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|=\sum _{{\mathbf{c}}=(c_1,\ldots , c_t)\in {\boldsymbol{{\mathcal{C}}}}_{r, t}}|{\mathscr{L}}\,({\mathbf{c}})|\prod _{i=r}^t \frac {(1/i)^{c_i}}{c_i!}{{\mathrm{e}}}^{-1/i}. \end{align}
For
$1\leqslant r\leqslant l'\leqslant l$
, we establish the connection between
${\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l})|$
and
${\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|$
as follows.
Lemma 3.3.
Suppose that
$1\leqslant r\leqslant l'\leqslant l$
. Then
\begin{align*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|\leqslant {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l})|\leqslant \frac {l+1}{l'+1}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|. \end{align*}
Proof. By (3.1) and Lemma 3.2, we have
\begin{align*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l})|&=\sum _{{\mathbf{c}}=(c_1,\ldots , c_l)\in {\boldsymbol{{\mathcal{C}}}_{r, l}}}|{\mathscr{L}}\,({\mathbf{c}})|{\mathbb{P}}(X_r=c_r,\ldots , X_l=c_l)\\ &\geqslant \sum _{{\mathbf{c}}=(c_1,\ldots , c_l)\in {\boldsymbol{{\mathcal{C}}}_{r, l}}}|{\mathscr{L}}\,((c_1,\ldots , c_{l'}))|{\mathbb{P}}(X_r=c_r,\ldots , X_l=c_l)\\ &=\sum _{{\mathbf{c}}=(c_1,\ldots , c_{l'})\in {\boldsymbol{{\mathcal{C}}}_{r, l'}}}|{\mathscr{L}}\,({\mathbf{c}})|{\mathbb{P}}(X_r=c_r,\ldots , X_{l'}=c_{l'})={\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|. \end{align*}
By Lemma 3.2, we have
$|{\mathscr{L}}\,({\mathbf{X}}_{r, l})|\leqslant (1+X_{l'+1})\cdots (1+X_{l})|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|$
. Since
${\mathbb{E}} X_i=\frac {1}{i}$
for
$r\leqslant i\leqslant l$
, one can deduce by the independence of
$X_r,\ldots , X_l$
that
\begin{equation*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l})|\leqslant \left(\prod _{i=l'+1}^l\big ( 1+{\mathbb{E}} X_i\big )\right){\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})| =\frac {l+1}{l'+1}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, l'})|. \end{equation*}
This completes the proof of Lemma 3.3.
Lemma 3.4.
For any
$r, j, k$
with
$1\leqslant r\leqslant j\leqslant k$
, we have
\begin{align*} {\mathbb{E}}\big (|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|X_j\big )\leqslant \frac {3}{j}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|. \end{align*}
Suppose that
$j_1,\ldots , j_h\in [r, k]$
are distinct integers and that
$a_1,\ldots , a_h$
are positive integers. Then
\begin{align*} {\mathbb{E}}\Big (|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|X_{j_1}^{a_1}\cdots X_{j_h}^{a_h}\Big )\ll _{a_1,\ldots , a_h} \frac {1}{j_1\cdots j_h}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|. \end{align*}
Proof. Define
${\mathbf{X}}_{r, k}'$
by putting
$X_{j_1}'=\cdots =X_{j_h}'=0$
and
$X_j'=X_j$
for all other
$j\in [r, k]$
. By Lemma 3.2, we have
\begin{align*} |{\mathscr{L}}\,({\mathbf{X}}_{r, k})|X_{j_1}^{a_1}\cdots X_{j_h}^{a_h}&\leqslant |{\mathscr{L}}\,({\mathbf{X}}_{r, k}')|(1+X_{j_1})\cdots (1+X_{j_h})X_{j_1}^{a_1}\cdots X_{j_h}^{a_h}\\ &=|{\mathscr{L}}\,({\mathbf{X}}_{r, k}')|\prod _{i=1}^h(X_{j_i}^{a_i}+X_{j_i}^{a_i+1}). \end{align*}
Note that
${\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k}')|\leqslant {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|$
. Then by independence of
$X_r,\ldots , X_k$
, we have
\begin{equation*} {\mathbb{E}}\Big (|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|X_{j_1}^{a_1}\cdots X_{j_h}^{a_h}\Big )\leqslant {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k}')|\prod _{i=1}^h \big({\mathbb{E}} X_{j_i}^{a_i}+{\mathbb{E}} X_{j_i}^{a_i+1}\big)\leqslant {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|\prod _{i=1}^h \big({\mathbb{E}} X_{j_i}^{a_i}+{\mathbb{E}} X_{j_i}^{a_i+1}\big). \end{equation*}
When
$h=a_1=1$
and
$j_1=j$
, we have
\begin{align*} {\mathbb{E}} X_j+{\mathbb{E}} X_j^2=\frac {1}{j}+\left (\frac {1}{j}+\frac {1}{j^2}\right )\leqslant \frac {3}{j}. \end{align*}
So we have
\begin{align*} {\mathbb{E}}\big (|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|X_j\big )\leqslant \frac {3}{j}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|. \end{align*}
Since
$X_j$
is a Poisson random variable with parameter
$1/j$
, we have that
${\mathbb{E}} \binom{X_j}{m}=\frac {(1/j)^m}{m!}$
for
$m\in {\mathbb{N}}_0$
. Note that
$X_j^i$
is an integral linear combination of
$\binom{X_j}{1},\ldots , \binom{X_j}{i}$
. Hence we obtain that
${\mathbb{E}} X_j^i\ll _i 1/j$
for
$i\in {\mathbb{N}}$
. So we have
\begin{equation*}\prod _{i=1}^h \Big ({\mathbb{E}} X_{j_i}^{a_i}+{\mathbb{E}} X_{j_i}^{a_i+1}\Big )\ll _{a_1,\ldots , a_h} \frac {1}{j_1\cdots j_h}.\end{equation*}
This concludes the proof of Lemma 3.4.
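The quantity $\mathbb{E}|\mathscr{L}\,(\mathbf{X}_{r, k})|$ at the heart of this reduction can be simulated directly (an illustrative sketch of ours; Poisson variates are drawn by Knuth's method, which is adequate here since every parameter $1/i\leqslant 1$):

```python
import math, random

def poisson(lam, rng):
    # Knuth's method for a Poisson variate; fine for small lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def L_size(c, r):
    # |L(c)| for c = (c_r, ..., c_k): the number of distinct sums
    # sum_j j*m_j with 0 <= m_j <= c_j.
    reach = {0}
    for idx, ci in enumerate(c):
        j = r + idx
        for _ in range(ci):
            reach |= {s + j for s in reach}
    return len(reach)

def mean_poisson_L(r, k, trials=3000, seed=0):
    # Monte Carlo estimate of E|L(X_{r,k})| with X_i ~ Poisson(1/i).
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        c = [poisson(1 / i, rng) for i in range(r, k + 1)]
        total += L_size(c, r)
    return total / trials
```

For $r=k$ the answer is exact: $|\mathscr{L}\,(\mathbf{X}_{k,k})|=1+X_k$, so the mean is $1+1/k$, which the simulation should reproduce.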
We are now in a position to prove Proposition 3.1.
Proof of Proposition 3.1. We begin with the lower bound. If
$k\lt 125$
, then by Lemma 2.5, we have
${{\mathcal{R}}}_r(n, k)\gg 1/k^2\gg \frac {1}{rk}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|$
. So we only need to consider the case that
$k\geqslant 125$
in the following. When
$k\geqslant 125$
, let
$t=\lfloor k/20\rfloor$
, so that
$t\geqslant 6$
and
$r\leqslant 0.04k\leqslant t$
. Consider the permutations
$\pi =\alpha \tau _1\tau _2 \beta \in {{\mathcal{S}}}_n$
, where
$\tau _1, \tau _2$
are cycles,
$|\alpha |\leqslant 4t\lt |\tau _1|\lt |\tau _2|\lt 16t$
,
$\alpha$
is of cycle type
${\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}}_{r, t}$
, all cycles in
$\beta$
have length at least
$16t$
, and
$\alpha$
has a fixed set of size
$k-|\tau _1|-|\tau _2|$
. Then by the size restrictions on
$\alpha , \tau _1, \tau _2$
, we obtain that
$k-|\tau _1|-|\tau _2|\in {\mathscr{L}}\,({\mathbf{c}})$
and
$|\tau _1|+|\tau _2|\leqslant k$
. So we derive from
$n\geqslant 2k$
that
$n-|\alpha |-|\tau _1|-|\tau _2|\geqslant \frac {4}{5}k\geqslant 16t$
. Fix
${\mathbf{c}}=(c_1,\ldots , c_r,\ldots , c_t)\in {\boldsymbol{{\mathcal{C}}}}_{r, t}$
with
$S({\mathbf{c}})\leqslant 4t$
, and choose
$l_1, l_2$
with
$4t\lt l_1\lt l_2\lt 16t$
such that
$k-l_1-l_2\in {\mathscr{L}}\,({\mathbf{c}})$
. By Lemma 2.4, the probability that a random permutation
$\pi \in {{\mathcal{S}}}_n$
has
$c_i$
cycles of length
$i\ (1\leqslant i\leqslant t)$
, one cycle each of length
$l_1, l_2$
, and no other cycles of length less than
$16t$
is
Now we have
${\mathscr{L}}\,({\mathbf{c}})\subset \{0\}\cup [r, 4t]$
. Note that
$t=\lfloor k/20\rfloor$
implies that
$20t\leqslant k\lt 20(t+1)$
. Hence, for any
$l_1$
satisfying
$4t+20\leqslant l_1\leqslant 8t-1$
and any
$u\in {\mathscr{L}}\,({\mathbf{c}})$
, the unique
$l_2$
with
$k-l_1-l_2=u$
satisfies
$l_2\leqslant k-l_1\lt 20(t+1)-l_1\leqslant 20(t+1)-(4t+20)=16t$
and
$l_2\geqslant k-l_1-4t\geqslant 20t-(8t-1)-4t=8t+1\gt l_1$
. Since
\begin{equation*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|\unicode {x1D7D9}\big (S({\mathbf{X}}_{r, t})\leqslant 4t\big )={{\mathrm{e}}}^{H_{r-1}-H_t} \sum _{{\mathbf{c}} \in {\boldsymbol{{\mathcal{C}}}_{r, t}}\atop S({\mathbf{c}})\leqslant 4t}|{\mathscr{L}}\,({\mathbf{c}})|\prod _{i=r}^t \frac {(1/i)^{c_i}}{c_i!} \end{equation*}
and
$8t-1-(4t+20)+1=4t-20\geqslant 0.5 t$
, we derive from (2.1) that
\begin{align} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{t^2}\sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}}_{r, t}\atop S({\mathbf{c}})\leqslant 4t}\frac {|{\mathscr{L}}\,({\mathbf{c}})|}{\prod _{i=r}^tc_i!i^{c_i}}\gg \frac {1}{rt}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|\unicode {x1D7D9}\big (S({\mathbf{X}}_{r, t})\leqslant 4t\big )\geqslant \frac {1}{rt}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|\left( 1-\frac {S({\mathbf{X}}_{r, t})}{4t}\right). \end{align}
By Lemma 3.4, we have
\begin{equation*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|S({\mathbf{X}}_{r, t})=\sum _{j=r}^t{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|jX_j\leqslant 3t {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|. \end{equation*}
It then follows from (3.2) and Lemma 3.3 that
\begin{align*} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{rt}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, t})|\gg \frac {1}{rk}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|, \end{align*}
as desired.
For the upper bound, we first impose a total ordering on the set of all cycles
$\bigcup _{i=r}^n {{\mathcal{C}}}_i^{(n)}$
, ordering them first by length and then arbitrarily among cycles of a given length. Let
$\pi \in {{\mathcal{S}}}_n^{(r)}$
have a divisor of size
$k$
. Let
$k_1=k$
and
$k_2=n-k$
. Then
$\pi =\pi _1\pi _2$
, where
$|\pi _1|=k_1$
and
$|\pi _2|=k_2$
. For some
$j\in \{1, 2\}$
, the largest cycle in
$\pi$
, with respect to our total ordering, lies in
$\pi _{3-j}$
. Let
$\sigma$
be the largest cycle in
$\pi _j$
, and note that
$|\sigma |\leqslant \min (k_1, k_2)=k$
. Write
$\pi =\alpha \sigma \beta$
, where
$\alpha$
is the product of all cycles dividing
$\pi$
which are smaller than
$\sigma$
and have length
$\geqslant r$
, and
$\beta$
is the product of all cycles which are larger than
$\sigma$
. In particular,
$|\beta |\geqslant |\sigma |$
since
$\beta$
contains the largest cycle in
$\pi$
, and thus
\begin{align} n-|\alpha |-\ell \geqslant \ell . \end{align}
By definition of
$\sigma$
and
$\alpha$
,
$\alpha$
has a divisor of size
$k_j-|\sigma |$
. Suppose
$|\sigma |=\ell , C_j(\alpha )=c_j$
for
$j\leqslant \ell$
. Then the cycle type of
$\alpha$
is
${\mathbf{c}}=(c_1, c_2, \ldots , c_\ell )\in {{\boldsymbol{\mathcal{C}}}_{r, \ell }}$
and
$k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})$
. Hence by using Lemma 2.3, the number of possible pairs
$\alpha , \sigma$
is at most
\begin{equation*} \binom{n}{|\alpha |}|\alpha |!\left (\prod _{i=1}^\ell \frac {(1/i)^{c_i}}{c_i!}\right ) \binom{n-|\alpha |}{\ell } (\ell -1)! =\frac {n!}{\ell (n-|\alpha |-\ell )!}\prod _{i=r}^\ell \frac {(1/i)^{c_i}}{c_i!}. \end{equation*}
Given
$\alpha$
and
$\sigma$
, it follows from Lemma 2.1 and (3.3) that the number of choices for
$\beta$
is at most
$(n-|\alpha |-\ell )!/\ell$
. So we have
\begin{align} {{\mathcal{R}}}_r(n, k)\leqslant \sum _{j=1}^2\sum _{\ell =r}^k \frac {1}{\ell ^2}\sum _{{\mathbf{c}}\in {{\boldsymbol{{\mathcal{C}}}_{r, \ell }}}\atop k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})}\prod _{i=r}^\ell \frac {(1/i)^{c_i}}{c_i!}\leqslant \sum _{j=1}^2\sum _{{\mathbf{c}}\in {{\boldsymbol{{\mathcal{C}}}_{r, k}}}}\prod _{i=r}^k \frac {(1/i)^{c_i}}{c_i!} \sum _{m({\mathbf{c}})\leqslant \ell \leqslant k \atop k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})}\frac {1}{\ell ^2} \end{align}
where
$m({\mathbf{c}})=\max \{\max \{i\,:\, c_i\gt 0\}, 1\}$
. From
$k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})$
, we derive that
$k_j-\ell \leqslant S({\mathbf{c}})$
and
$\ell \geqslant k_j-S({\mathbf{c}})$
. So we have
$\ell \geqslant \max \{m({\mathbf{c}}), k_j-S({\mathbf{c}})\}$
for
${\mathbf{c}}\in {{\boldsymbol{{\mathcal{C}}}_{r, k}}}$
. Note that the number of
$\ell$
satisfying
$k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})$
is at most
$|{\mathscr{L}}\,({\mathbf{c}})|$
. Thus for any given
${\mathbf{c}} \in {{\boldsymbol{{\mathcal{C}}}_{r, k}}}$
, we have
\begin{align} \sum _{m({\mathbf{c}})\leqslant \ell \leqslant k \atop k_j-\ell \in {\mathscr{L}}\,({\mathbf{c}})}\frac {1}{\ell ^2}\leqslant \frac {|{\mathscr{L}}\,({\mathbf{c}})|}{\max \{m({\mathbf{c}}), k_j-S({\mathbf{c}})\}^2}\leqslant \frac {|{\mathscr{L}}\,({\mathbf{c}})|}{\max \{m({\mathbf{c}}), k-S({\mathbf{c}})\}^2}. \end{align}
Since
\begin{align*} {\mathbb{E}}\left (\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|}{\max \{m({\mathbf{X}}_{r, k}), k-S({\mathbf{X}}_{r, k})\}^2}\right ) &=\sum _{{\mathbf{c}}\in {{\boldsymbol{{\mathcal{C}}}_{r, k}}}}\frac {|{\mathscr{L}}\,({\mathbf{c}})|}{\max \{m({\mathbf{c}}), k-S({\mathbf{c}})\}^2}\left (\prod _{i=r}^k \frac {(1/i)^{c_i}}{c_i!}{{\mathrm{e}}}^{-1/i}\right )\\ &={{\mathrm{e}}}^{H_{r-1}-H_k}\sum _{{\mathbf{c}}\in {{\boldsymbol{{\mathcal{C}}}_{r, k}}}}\frac {|{\mathscr{L}}\,({\mathbf{c}})|}{\max \{m({\mathbf{c}}), k-S({\mathbf{c}})\}^2}\left (\prod _{i=r}^k \frac {(1/i)^{c_i}}{c_i!}\right ), \end{align*}
it follows from (2.1), (3.4), and (3.5) that
\begin{align} {{\mathcal{R}}}_r(n, k)\ll \frac {k}{r}\,{\mathbb{E}}\left (\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|}{\max \{m({\mathbf{X}}_{r, k}), k-S({\mathbf{X}}_{r, k})\}^2}\right ). \end{align}
As in [Reference Eberhard, Ford and Green5], using the inequality
$ \frac {1}{\max (m, k-S)^2}\leqslant \frac {4}{k^2}(1+\frac {S^2}{m^2}),$
one can derive from (3.6) that
\begin{align} {{\mathcal{R}}}_r(n, k)\ll \frac {1}{rk}\left ({\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|+{\mathbb{E}}\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|S({\mathbf{X}}_{r, k})^2}{m({\mathbf{X}}_{r, k})^2}\right ). \end{align}
Since
$\unicode {x1D7D9}\big (X_m\geqslant 1\big )\leqslant X_m$
, it follows from (2.1) and conditioning on
$m=m({\mathbf{X}}_{r, k})$
that
\begin{align*} \begin{split} &{\mathbb{E}}\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|S({\mathbf{X}}_{r, k})^2}{m({\mathbf{X}}_{r, k})^2}\\ &=\sum _{m=r}^k \frac {1}{m^2} \sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}, c_m\geqslant 1\atop c_{m+1}=\cdots =c_k=0}|{\mathscr{L}}\,({\mathbf{c}})|S({\mathbf{c}})^2 {\mathbb{P}}\big ({\mathbf{X}}_{r, m}=(c_r,\ldots , c_m)\big )\prod _{t=m+1}^k{{\mathbb{P}}\big (X_t=c_t\big )}\\ &=\sum _{m=r}^k \frac {1}{m^2}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, m})|S({\mathbf{X}}_{r, m})^2 \unicode {x1D7D9}\big (X_m\geqslant 1\big ){{\mathrm{e}}}^{H_m-H_k}\\ &\leqslant \frac {{{\mathrm{e}}}}{k}\sum _{m=r}^k \frac {1}{m}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, m})|S({\mathbf{X}}_{r, m})^2 X_m. \end{split} \end{align*}
Hence we deduce by expanding
$S({\mathbf{X}}_{r, m})^2=(rX_r+\cdots +mX_m)^2$
that
\begin{align} {\mathbb{E}}\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|S({\mathbf{X}}_{r, k})^2}{m({\mathbf{X}}_{r, k})^2}\ll \frac {1}{k}\sum _{m=r}^k \frac {1}{m}\sum _{i_1, i_2=r}^m i_1i_2{\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, m})|X_{i_1}X_{i_2}X_m. \end{align}
Using Lemma 3.4, we bound the inner summand as follows.
\begin{align} i_1i_2{\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, m})|X_{i_1}X_{i_2}X_m{\large \ll } \begin{cases} \frac {1}{m}{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, m})|, &\text{if}\ r\leqslant i_1\ne i_2\lt m,\\ m{\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, m})|, &\text{if}\ i_1=i_2=m,\\ {\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, m})|, &\text{otherwise.} \end{cases} \end{align}
By Lemma 3.3, we obtain that
${\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, m})|\leqslant {\mathbb{E}} |{\mathscr{L}}\,({\mathbf{X}}_{r, k})|$
for
$r\leqslant m\leqslant k$
. It then follows from (3.8) and (3.9) that
\begin{align*} {\mathbb{E}}\frac {|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|S({\mathbf{X}}_{r, k})^2}{m({\mathbf{X}}_{r, k})^2}\ll {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|. \end{align*}
So we derive from (3.7) that
\begin{align*} {{\mathcal{R}}}_r(n, k)\ll \frac {1}{rk}\,{\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|, \end{align*}
as desired. The proof of Proposition 3.1 is complete.
4. Proof of Theorem 1.1
Let
$r_0$
be a sufficiently large integer, and let
$\varepsilon \gt 0$
be a sufficiently small constant. If either
$k\leqslant r_0$
, or
$k\gt r_0$
and
$r\gt \varepsilon k$
, then we obtain the desired result in Theorem 1.1 by using Lemmas 2.5 and 2.6 directly. In the following, we only need to deal with the case that
$k\gt r_0$
and
$r\leqslant \varepsilon k$
. In this case, we can use the global-to-local principle – Proposition 3.1.
Taking
$t=k$
in (3.1), we derive by (2.1) that
\begin{align*} {\mathbb{E}}|{\mathscr{L}}\,({\mathbf{X}}_{r, k})|=\!\sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}}|{\mathscr{L}}\,({\mathbf{c}})|\prod _{i=r}^k \frac {(1/i)^{c_i}}{c_i!} {{\mathrm{e}}}^{-(\frac {1}{r}+\cdots +\frac {1}{k})}={{\mathrm{e}}}^{H_{r-1}-H_k}\!\sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}}\frac {|{\mathscr{L}}\,({\mathbf{c}})|} {\prod _{i=r}^k c_i!i^{c_i}}\asymp \frac {r}{k}\!\sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}}\frac {|{\mathscr{L}}\,({\mathbf{c}})|} {\prod _{i=r}^k c_i!i^{c_i}}. \end{align*}
Hence by Proposition 3.1, we have
\begin{align} {{\mathcal{R}}}_r(n, k)\asymp \frac {1}{k^2}\sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}}\frac {|{\mathscr{L}}\,({\mathbf{c}})|} {\prod _{i=r}^k c_i!i^{c_i}}. \end{align}
As in [Reference Eberhard, Ford and Green5], for all
${\mathbf{c}}=(0,\ldots , 0, c_r,\ldots , c_k)\in {\boldsymbol{{\mathcal{C}}}_{r, k}}$
with
$t=c_r+\cdots +c_k$
, we have
\begin{align} \sum _{{\mathbf{c}}\in {\boldsymbol{{\mathcal{C}}}_{r, k}}\atop c_r+\cdots +c_k=t}\frac {|{\mathscr{L}}\,({\mathbf{c}})|} {\prod _{i=r}^k c_i!i^{c_i}}=\frac {1}{t!}\sum _{a_1,\ldots , a_t=r}^k \frac {|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|}{a_1\cdots a_t}, \end{align}
where
${{\mathbf{a}}}=(a_1,\ldots , a_t)$
and
\begin{align} {\mathscr{L}}^{\,\,*}({{\mathbf{a}}})=\left\{\sum _{i\in I}a_i\,:\, I\subseteq [t]\right\}. \end{align}
For convenience, let
4.1 The lower bound in Theorem 1.1
For a vector
${\mathbf{b}}=(b_1,\ldots , b_{J_2})$
of nonnegative integers, let
$\mathscr{A}\,({\mathbf{b}})$
be the set of permutations
$\pi \in {{\mathcal{S}}}_n$
composed of exactly
$b_j$
cycle factors with length lying in
$[2^{j-1}, 2^j-1]$
for
$1\leqslant j\leqslant J_2$
.
Suppose that
$b_1,\ldots , b_{J_2}$
are arbitrary nonnegative integers with sum
$t$
and there are exactly
$b_j$
of the
$a_i$
in each interval
$[2^{j-1}, 2^j-1]$
, where
$ b_j=0\ \text{for}\ j\lt J_1$
. Writing
\begin{equation*}{\mathscr{D}}({\mathbf{b}})=\prod _{j=J_1}^{J_2}\{2^{j-1},\ldots , 2^j-1\}^{b_j},\end{equation*}
we then have by (4.2) and (4.3) that
\begin{align} \frac {1}{t!}\sum _{a_1,\ldots , a_t=2^{J_1-1}}^{2^{J_2}-1}\frac {|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|}{a_1\cdots a_t} =\sum _{b_{J_1}+\cdots +b_{J_2}=t}\frac {1}{b_{J_1}!\cdots b_{J_2}!} \sum _{{\mathbf{d}}\in {\mathscr{D}} ({\mathbf{b}})}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|}{d_1\cdots d_t}. \end{align}
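Identity (4.5) regroups the ordered sum over tuples according to how many coordinates fall in each dyadic block, which is a routine multinomial regrouping valid for any symmetric summand (such as $|{\mathscr{L}}^{\,\,*}(\cdot)|$, which depends only on the multiset of parts). The following Python sketch verifies the regrouping numerically on a tiny case ($t=2$, blocks $\{2,3\}$ and $\{4,\ldots,7\}$; all names are illustrative).

```python
from itertools import product
from math import factorial, isclose

def Lstar_size(a):
    # |L*(a)|: number of distinct subset sums of the tuple a
    sums = {0}
    for x in a:
        sums |= {s + x for s in sums}
    return len(sums)

def lhs(t, lo, hi, g):
    # (1/t!) * sum over ordered tuples a in {lo,...,hi}^t of g(a)/(a_1*...*a_t)
    total = 0.0
    for a in product(range(lo, hi + 1), repeat=t):
        w = 1.0
        for x in a:
            w /= x
        total += g(a) * w
    return total / factorial(t)

def rhs(t, blocks, g):
    # regrouped sum: choose how many coordinates fall in each block,
    # weight by 1/(b_1! * ... * b_m!), then sum over block-ordered tuples d
    total = 0.0
    for b in product(range(t + 1), repeat=len(blocks)):
        if sum(b) != t:
            continue
        coef = 1.0
        for bj in b:
            coef /= factorial(bj)
        ranges = []
        for blk, bj in zip(blocks, b):
            ranges.extend([blk] * bj)
        for d in product(*ranges):
            w = coef
            for x in d:
                w /= x
            total += g(d) * w
    return total

# t = 2 over {2,...,7}, split into the dyadic blocks {2,3} and {4,...,7}
assert isclose(lhs(2, 2, 7, Lstar_size),
               rhs(2, [range(2, 4), range(4, 8)], Lstar_size), rel_tol=1e-9)
```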
Combining (4.1), (4.2), and (4.5), we derive that
\begin{align} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{k^2}\sum _t \sum _{b_{J_1}+\cdots +b_{J_2}=t}\frac {1}{b_{J_1}!\cdots b_{J_2}!}\sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|}{d_1\cdots d_t}. \end{align}
We now estimate the sum
$\sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|}{d_1\cdots d_t}$
as follows.
Lemma 4.1.
Let
$t$
be a positive integer. Then for any vector
${\mathbf{b}}=(b_1,\ldots , b_{J_2})$
of nonnegative integers such that
$b_j=0$
for
$j\lt J_1$
and
$b_{J_1}+\cdots +b_{J_2}=t$
, we have
\begin{equation*} \sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|}{d_1\cdots d_t}\gg \frac {(2\log 2)^t}{\max \Big (\sum _{i=J_1}^{J_2} 2^{b_{J_1}+\cdots +b_i-i}, 1\Big )}. \end{equation*}
Proof. As in the proof of Lemma 4.1 in [Reference Eberhard, Ford and Green5], for a given
$\ell \in {\mathbb{N}}$
, let
$R({\mathbf{d}}, \ell )$
be the number of
$I\subseteq [t]$
with
$\ell =\sum _{i\in I}d_i$
. Also, define
$\lambda _i=\sum _{j=2^{i-1}}^{2^i-1}\frac {1}{j}$
for
$J_1\leqslant i\leqslant J_2$
. One then has
$\log 2\leqslant \lambda _i\leqslant 1$
for
$J_1\leqslant i\leqslant J_2$
. Since
$\sum _{\ell }R({\mathbf{d}}, \ell )=2^t$
, we obtain by the Cauchy–Schwarz inequality that
\begin{align} \begin{split} 2^{2t}\prod _{j=J_1}^{J_2}\lambda _j^{2b_j}&=\bigg(\sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {1}{d_1\cdots d_t}\sum _{\ell } R({\mathbf{d}}, \ell )\bigg)^2\\ &\leqslant \bigg( \sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {\big (\sum _{\ell \in {\mathscr{L}}^{\,\,*}({\mathbf{d}})}R({\mathbf{d}}, \ell )^2\big )^{\frac {1}{2}}} {\sqrt {d_1\cdots d_t}}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|^{\frac {1}{2}}}{\sqrt {d_1\cdots d_t}}\bigg)^2\\ &\leqslant \bigg( \sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}}), \ell }\frac {R({\mathbf{d}}, \ell )^2}{d_1\cdots d_t}\bigg)\bigg(\sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}})}\frac {|{\mathscr{L}}^{\,\,*}({\mathbf{d}})|}{d_1\cdots d_t}\bigg). \end{split} \end{align}
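Two ingredients in this step are elementary and easy to check numerically: $\lambda_i=\sum_{j=2^{i-1}}^{2^i-1}\frac{1}{j}$ always lies in $[\log 2, 1]$ (compare with $\int_{2^{i-1}}^{2^i}\frac{dx}{x}=\log 2$, with $\lambda_1=1$ attaining the upper bound), and the representation counts satisfy $\sum_\ell R({\mathbf{d}},\ell)=2^t$ because every $I\subseteq[t]$ contributes to exactly one $\ell$. A quick Python sanity check (illustrative names only):

```python
from math import log
from collections import Counter

def lam(i):
    # lambda_i = sum of 1/j over the dyadic block 2^(i-1) <= j <= 2^i - 1
    return sum(1.0 / j for j in range(2 ** (i - 1), 2 ** i))

for i in range(1, 12):
    assert log(2) <= lam(i) <= 1.0  # lambda_1 = 1 attains the upper bound

def rep_counts(d):
    # R(d, l): number of subsets I of [t] with sum_{i in I} d_i = l
    counts = Counter({0: 1})
    for x in d:
        new = Counter(counts)
        for s, c in counts.items():
            new[s + x] += c
        counts = new
    return counts

R = rep_counts((2, 2, 3, 5))
assert sum(R.values()) == 2 ** 4  # each of the 2^t subsets lands on exactly one l
```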
Applying the argument on pages 6725–6726 in [Reference Eberhard, Ford and Green5], one can derive that
\begin{align*} \prod _{j=J_1}^{J_2}\lambda _j^{-b_j}\sum _{{\mathbf{d}}\in {\mathscr{D}}({\mathbf{b}}), \ell }\frac {R({\mathbf{d}}, \ell )^2}{d_1\cdots d_t} &\ll 2^t+2^t\sum _{i=J_1}^{J_2} 2^{b_{J_1}+\cdots +b_i-i}\ll 2^t\max \bigg(\sum _{i=J_1}^{J_2} 2^{b_{J_1}+\cdots +b_i-i}, 1\bigg). \end{align*}
Comparing with (4.7), and using again the fact that
$\lambda _i\geqslant \log 2$
, we obtain the desired result.
If
$k\gt r_0$
,
$r\leqslant \varepsilon k$
and
$r\leqslant \sqrt {r_0}$
, then
$J_2-J_1+1\asymp J_2$
. Consider those vectors
${\mathbf{b}}=(b_1,\ldots , b_{J_2})$
of nonnegative integers such that
$b_j=0$
for
$j\lt J_1$
and
$b_{J_1}+\cdots +b_{J_2}=t_0$
. Setting
$x_i=2^{b_{J_1+i-1}-1}$
, we then have
$x_1\cdots x_{t_0}=1$
. Hence, by Lemma 2.7, we obtain that
Using the multinomial theorem and Stirling’s formula, we deduce from (4.4) that
\begin{align*} \sum _{b_{J_1}+\cdots +b_{J_2}=t_0}\frac {1}{b_{J_1}!\cdots b_{J_2}!\sum _{i=J_1}^{J_2}2^{b_{J_1}+\cdots +b_i-i}} =\frac {t_0^{t_0}}{t_0!}\frac {2^{J_1-1}}{t_0}\asymp \frac {t_0^{t_0}{{\mathrm{e}}}^{t_0}}{\sqrt {2\pi t_0}t_0^{t_0}} \frac {2^{J_1-1}}{t_0}\asymp \frac {k^{\frac {1}{\log 2}}}{(\log k)^{3/2}}. \end{align*}
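The Stirling step above only uses $t_0!\asymp\sqrt{2\pi t_0}\,(t_0/{{\mathrm{e}}})^{t_0}$, so that $t_0^{t_0}/t_0!\asymp {{\mathrm{e}}}^{t_0}/\sqrt{2\pi t_0}$. The ratio of $t_0!$ to its Stirling approximation is $1+\frac{1}{12t_0}+O(t_0^{-2})$; a quick numerical check (sketch, illustrative names):

```python
from math import factorial, sqrt, pi, e

def stirling_ratio(t):
    # factorial(t) divided by its Stirling approximation sqrt(2*pi*t) * (t/e)**t
    return factorial(t) / (sqrt(2 * pi * t) * (t / e) ** t)

for t in (5, 20, 60):
    r = stirling_ratio(t)
    assert 1.0 < r < 1.02  # ratio = 1 + 1/(12t) + O(1/t^2), slightly above 1
```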
It then follows from Lemma 4.1 and (4.6) that
\begin{align*} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{k^2}(2\log 2)^{t_0} \frac {k^{\frac {1}{\log 2}}}{(\log k)^{3/2}}\gg \frac {1}{k^{{\mathcal{E}}} (1+\log k)^{3/2}} \asymp i(n, k) \end{align*}
as desired.
In the following, we assume that
$k\gt r_0$
and
$\sqrt {r_0}\lt r\leqslant \varepsilon k$
.
Take
such that
$M$
is a sufficiently large constant because
$r_0$
is. Choose
$\varepsilon$
such that
Let
$\mathscr{B}_t$
be the set of vectors
${\mathbf{b}}=(b_1,\ldots , b_{J_1},\ldots , b_{J_2})$
which satisfy the following:
(a) $b_1=\cdots =b_{J_1-1}=0$;
(b) $b_{J_1}+\cdots +b_{J_2}=t$;
(c) $\sum _{j=J_1}^{J_2} 2^{b_{J_1}+\cdots +b_{j}-j}\leqslant 2^{-M}$;
(d) $b_{J_1+i-1}\leqslant M+i^2\quad (i\geqslant 1)$;
(e) $b_{J_2-i+1}\leqslant M+i^2\quad (i\geqslant 1)$.
We then obtain by Lemma 4.1 that
\begin{align} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{k^2}\sum _{t_1\leqslant t\leqslant t_2}\sum _{{\mathbf{b}}\in \mathscr{B}_t}\frac {(\log 4)^t}{b_{J_1}!\cdots b_{J_2}!} \end{align}
for any two positive integers
$t_1, t_2$
with
$t_1\leqslant t_2$
. Take
By (4.9), we have
$t_2\geqslant 100M$
. Choose
$t_1$
such that
Now let
Setting
for
$i\geqslant 1$
, we obtain by (c) that
\begin{equation*} \sum _{i=1}^{t_0} 2^{-i+g_1+\cdots +g_i}\leqslant 2^{J_1-1}\cdot 2^{-M}=2^{s+1}, \end{equation*}
where
$t_0$
is defined as in (4.4).
By (d) and (e) in the definition of
$\mathscr{B}_t$
,
$g_i\leqslant M+i^2$
and
$g_{t_0+1-i}\leqslant M+i^2$
for every
$i\geqslant 1$
. Applying the argument at the top of page 419 of [Reference Ford10], we deduce that
for
$t_1\leqslant t\leqslant t_2$
, where
$Y_t(s, t_0)$
is defined as in Lemma 2.8.
Since
$r_0$
is sufficiently large (implying that
$M$
is sufficiently large) and
$\varepsilon$
is sufficiently small, we obtain by (4.4), (4.9), (4.11), (4.12), and (4.13) that
\begin{align*} t_0&=J_2-J_1+1\geqslant 200M\geqslant 1,\\ 10M&\leqslant t_1\leqslant t_2\leqslant (\log 4)(J_2-J_1)=(\log 4)(t_0-1)\leqslant 100(t_0-1),\\ s&\geqslant \log r-M\geqslant M/2+1,\\ 0&\leqslant t_2-t_0\leqslant (J_2-2M)-(J_2-J_1+1)=s-M+3\leqslant s-M/3-1. \end{align*}
Therefore, it follows from (4.10), (4.14), and Lemma 2.8 that
\begin{align} {{\mathcal{R}}}_r(n, k)\gg \frac {1}{k^2}\sum _{t=t_1}^{t_2} \frac {(t_0\log 4)^t}{t!} \bigg(\frac {t-t_0+1}{t+1}\bigg). \end{align}
We now consider the following two cases.
Case 1.
$\delta \geqslant 1-1/\log 4$
. In this case, we have
$\log r\geqslant (1-1/\log 4)\log k$
. By (4.11), we have
Take
It is easy to check that
$t_1\geqslant 10M$
. With these choices and (4.4), we obtain for
$t_1\leqslant t\leqslant t_2$
that
Applying Lemma 2.2 to the sum in (4.15), we obtain
as desired.
Case 2.
$0\lt \delta \lt 1-1/\log 4$
. By (4.11), we have
We now divide the proof of Case 2 into the following two subcases.
Subcase 2.1.
$\frac {1}{10}\lt \delta \lt 1-1/\log 4$
. In this subcase, we take
so that
$t-t_0+1\asymp t\asymp \log k$
for
$t_1\leqslant t\leqslant t_2$
. Thus for
$t_1\leqslant t\leqslant t_2$
, we have
Hence, recalling the definition of
${\mathcal{E}}$
and
$B(r, k)$
(i.e., (1.1) and (1.4)), we obtain by applying Lemma 2.2 and Stirling’s formula to the sum in (4.15) that
\begin{align*} {{\mathcal{R}}}_r(n, k)&\gg k^{-2}\frac {(t_0\log 4)^{t_2}}{t_2!}\min \left( (t_0\log 4)^{1/2}, \frac {t_0\log 4}{t_0\log 4-t_2}, t_2-t_1+1\right)\\ &\gg k^{-{{\mathcal{E}}}+\log (1-\delta )/\log 2}(\log k)^{-1/2}\min \Big ( (\log k)^{1/2}, \frac {1-\delta }{1-\delta -1/\log 4}+O(1/\log k)\Big )\\ &\gg k^{-{{\mathcal{E}}}+\log (1-\delta )/\log 2}(\log k)^{-1/2}(\log k)^{1/2}\min \Big ( 1, \frac {1}{(1-\delta )\log 4-1}(\log k)^{-1/2}\Big )\\ &\gg \delta B(r, k)k^{-{{\mathcal{E}}}+\log (1-\delta )/\log 2} \end{align*}
as required.
Subcase 2.2.
$0\lt \delta \lt \frac {1}{10}$
. In this case, we take
We now have
Applying Stirling’s formula to the sum in (4.15), we obtain that
as desired. This completes the proof of the lower bound in Theorem 1.1.
4.2 The upper bound in Theorem 1.1
By the discussion in the first paragraph of this section, we only need to deal with the case that
$k\gt r_0$
and
$r\leqslant \varepsilon k$
. If
$\delta =\frac {\log r}{\log k}\geqslant (1-1/\log 4)$
, then by Lemma 2.6, one can get
which is exactly the desired upper bound in Theorem 1.1.
From now on, we assume that
We divide our proof into the following two cases.
Case 1.
$1/10\lt \delta \lt 1-1/\log 4$
. By the definition of
${\mathscr{L}}^{\,\,*}({{\mathbf{a}}})$
(see (4.3)), we have
It then follows from (4.1), (4.2), and (4.16) that
\begin{align} \begin{split} {{\mathcal{R}}}_r(n, k)&\ll \frac {1}{k^2} \sum _{t\geqslant 1}\frac {1}{t!}\sum _{a_1,\ldots , a_t=r}^k \frac {|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|}{a_1\cdots a_t}\\ &\leqslant \frac {1}{k^2}\sum _{t\leqslant J_2}\frac {1}{t!}\sum _{a_1,\ldots , a_t=r}^k \frac {2^t}{a_1\cdots a_t}+\frac {1}{k^2}\sum _{t\gt J_2}\frac {1}{t!}\sum _{a_1,\ldots , a_t=r}^k \frac {a_1+\cdots +a_t+1}{a_1\cdots a_t}\\ &\ll \frac {1}{k^2}\sum _{t\leqslant J_2} \frac {(2\log \frac {k}{r})^t}{t!}+ \frac {1}{k^2}\sum _{t\gt J_2}\frac {t(k-r+1)(\log \frac {k}{r})^{t-1}}{t!}. \end{split} \end{align}
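The splitting above rests on two elementary bounds for the subset-sum set: $|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|\leqslant 2^t$ (at most one sum per subset $I\subseteq[t]$) and $|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|\leqslant a_1+\cdots +a_t+1$ (every subset sum lies in $\{0,1,\ldots, a_1+\cdots +a_t\}$). A small numerical check (illustrative):

```python
def subset_sums(a):
    # L*(a): the set of all subset sums of the tuple a
    sums = {0}
    for x in a:
        sums |= {s + x for s in sums}
    return sums

for a in [(2, 3), (4, 4, 4), (1, 2, 4, 8), (5, 7, 11)]:
    L = subset_sums(a)
    assert len(L) <= 2 ** len(a)   # one sum per subset, collisions possible
    assert len(L) <= sum(a) + 1    # all sums lie in {0, 1, ..., sum(a)}
```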
Since
$J_2=\lfloor \frac {\log k}{\log 2}\rfloor$
and
$1/10\leqslant \delta =\frac {\log r}{\log k}$
, we have
$ \sqrt {2}\log \frac {k}{r}\leqslant J_2\leqslant 2\log \frac {k}{r}$
. So we derive from Lemma 2.2 and Stirling’s formula that
\begin{align} \begin{split} \frac {1}{k^2}\sum _{t\leqslant J_2} \frac {(2\log \frac {k}{r})^t}{t!} &\ll \frac {1}{k^2} \frac {(2\log \frac {k}{r})^{J_2}}{J_2!}\min \Bigg( \sqrt {2\log \frac {k}{r}}, \frac {2\log \frac {k}{r}}{2\log \frac {k}{r}-J_2}, J_2\Bigg)\\ &\ll k^{-{{\mathcal{E}}}+\frac {\log (1-\delta )}{\log 2}} \min \Big (1, (\log k)^{-\frac {1}{2}}\big ((1-\delta )\log 4-1\big )^{-1}\Big ). \end{split} \end{align}
Since
$J_2\gt \sqrt {2}\log \frac {k}{r}$
, we obtain by Stirling’s formula that
\begin{align} \frac {1}{k^2}\sum _{t\gt J_2}\frac {(k-r+1)t(\log \frac {k}{r})^{t-1}}{t!} \ll \frac {k}{k^2}\sum _{t\geqslant J_2} \frac {(\log \frac {k}{r})^{t}}{t!} \ll \frac {1}{k}\frac {(\log \frac {k}{r})^{J_2}}{J_2!}\ll k^{-{{\mathcal{E}}}+\frac {\log (1-\delta )}{\log 2}}(\log k)^{-\frac {1}{2}}. \end{align}
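The middle estimate here is the standard tail bound for the exponential series: since $J_2\gt\sqrt2\log\frac kr$, the ratio of consecutive terms is at most $\log\frac kr/J_2\lt 1/\sqrt2$, so a geometric comparison gives $\sum_{t\geqslant T} x^t/t!\leqslant \frac{x^T}{T!}\cdot\frac{1}{1-x/T}\ll \frac{x^T}{T!}$ whenever $x/T$ is bounded below $1$. A numerical illustration (sketch, illustrative parameter values):

```python
from math import factorial

def exp_tail(x, T, terms=120):
    # sum_{t = T}^{T + terms - 1} x^t / t!, computed iteratively so no huge
    # factorials are materialized; terms is large enough that the rest is negligible
    term = x ** T / factorial(T)
    total = 0.0
    for t in range(T, T + terms):
        total += term
        term *= x / (t + 1)
    return total

x, T = 6.0, 10                                  # x / T = 0.6 < 1
first = x ** T / factorial(T)
assert exp_tail(x, T) <= first / (1 - x / T)    # tail <= first term / (1 - x/T)
assert exp_tail(x, T) >= first                  # tail is at least its first term
```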
So we derive from (4.17), (4.18), and (4.19) that
as required in this case.
Case 2.
$0\lt \delta \leqslant 1/10$
. For any positive integer
$t$
, let
\begin{align} G_t= \sum _{a_1,\ldots , a_t=r}^k \frac {|{\mathscr{L}}^{\,\,*}({{\mathbf{a}}})|}{a_1\cdots a_t}, \end{align}
where
${\mathscr{L}}^{\,\,*}({{\mathbf{a}}})$
is defined as in (4.3). Then by (4.1) and (4.2), we have
In this case, we bound
$G_t$
in a manner similar to that in [Reference Eberhard, Ford and Green5]. Let
$\widetilde {a_1}, \widetilde {a_2}, \ldots , \widetilde {a_t}$
be the increasing rearrangement of the sequence
$\{a_i\}_{i=1}^t$
, so that
$\widetilde {a_1}\leqslant \widetilde {a_2}\leqslant \cdots \leqslant \widetilde {a_t}$
. For any
$j$
satisfying
$0\leqslant j\leqslant t$
, we have
\begin{align*} {\mathscr{L}}^{\,\,*}({{\mathbf{a}}})\subseteq \left\{ m+\sum _{i\in I}\widetilde {a_i}: r\leqslant m\leqslant \sum _{i=1}^j \widetilde {a_i}, I\subseteq \{j+1,\ldots , t\} \right\}, \end{align*}
which implies that
where
It is reasonable to expect that
\begin{align} \sum _{a_1,\ldots , a_t=r}^k \frac {F({{\mathbf{a}}})}{a_1\cdots a_t} \thicksim \int _r^k \cdots \int _r^k \frac {F({{\mathbf{x}}})}{x_1\cdots x_t} d{{\mathbf{x}}} =\big (\log \frac {k}{r}\big )^t \int _0^1\cdots \int _0^1 F(r{{\mathrm{e}}}^{\xi _1\log \frac {k}{r}},\ldots , r{{\mathrm{e}}}^{\xi _t\log \frac {k}{r}})d{\boldsymbol \xi }. \end{align}
We may also prove an approximate version of (4.24) as in [Reference Eberhard, Ford and Green5].
Lemma 4.2.
Suppose that
$k, r$
are sufficiently large integers, with
$\log r\leqslant (1-1/\log 4)\log k$
. Let
Then for any
$t\geqslant 1$
, we have
where
with
Proof. As in [Reference Eberhard, Ford and Green5], define the product sets
\begin{equation*} R({{\mathbf{a}}})=\prod _{i=1}^t[\exp (H_{a_i-1}), \exp (H_{a_i})]. \end{equation*}
Then we obtain by (4.22) that
\begin{align*} G_t\leqslant \sum _{a_1,\ldots , a_t=r}^k \frac {F({{\mathbf{a}}})}{a_1\cdots a_t}=\sum _{a_1,\ldots , a_t=r}^k F({{\mathbf{a}}})\int _{R({{\mathbf{a}}})}\frac {d{{\mathbf{x}}}}{x_1\cdots x_t}. \end{align*}
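The boxes $R({{\mathbf{a}}})$ are chosen so that each factor carries logarithmic measure exactly $1/a_i$: indeed $\int_{\exp(H_{a-1})}^{\exp(H_a)}\frac{dx}{x}=H_a-H_{a-1}=\frac1a$, and the intervals for $a=r,\ldots,k$ tile $[\exp(H_{r-1}),\exp(H_k)]$. This is what converts the weighted sum into the integral that follows. A quick numerical check (illustrative values):

```python
from math import exp, log

def H(n):
    # n-th harmonic number, with H_0 = 0
    return sum(1.0 / j for j in range(1, n + 1))

r, k = 3, 12
total = 0.0
for a in range(r, k + 1):
    lo, hi = exp(H(a - 1)), exp(H(a))
    # logarithmic measure of one box side: integral of dx/x over [lo, hi] = 1/a
    assert abs((log(hi) - log(lo)) - 1.0 / a) < 1e-9
    total += log(hi) - log(lo)
# the adjacent intervals tile [exp(H_{r-1}), exp(H_k)] in logarithmic measure
assert abs(total - (H(k) - H(r - 1))) < 1e-9
```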
For
${{\mathbf{x}}}\in R({{\mathbf{a}}})$
, we write
$\widetilde {x_1}\leqslant \widetilde {x_2}\leqslant \cdots \leqslant \widetilde {x_t}$
for the non-decreasing rearrangement of the components of the vector
${\mathbf{x}}$
. We then derive from (2.1) that
So we have
for all
${{\mathbf{x}}}\in R({{\mathbf{a}}})$
. It follows that
\begin{equation*} \sum _{a_1,\ldots , a_t=r}^k F({{\mathbf{a}}}) \int _{R({{\mathbf{a}}})}\frac {d{{\mathbf{x}}}}{x_1\cdots x_t} \leqslant \sum _{a_1,\ldots , a_t=r}^k \int _{R({{\mathbf{a}}})}\frac {F({{\mathbf{x}}})}{x_1\cdots x_t}d{{\mathbf{x}}} =\int _{\exp (H_{r-1})}^{\exp (H_k)}\cdots \int _{\exp (H_{r-1})}^{\exp (H_k)}\frac {F({{\mathbf{x}}})}{x_1\cdots x_t}d{{\mathbf{x}}}. \end{equation*}
We then obtain by making the change of variables
$x_i=r{{\mathrm{e}}}^{\xi _i(H_k-H_{r-1})}$
that
\begin{align*} G_t &\leqslant (2(H_k-H_{r-1}))^t t! \int _{\Omega _t} \min _{0\leqslant j\leqslant t}2^{-j} \Big ( r{{\mathrm{e}}}^{\xi _1(H_k-H_{r-1})}+\cdots + r{{\mathrm{e}}}^{\xi _j(H_k-H_{r-1})}+1\Big )d{\boldsymbol \xi }\\ &\ll \big (2\log \frac {k}{r}\big )^t t! \int _{\Omega _t} \min _{0\leqslant j\leqslant t}2^{-j} \Big (r\big (\frac {k}{r}\big )^{\xi _1}+\cdots +r\big (\frac {k}{r}\big )^{\xi _j}+1\Big )\textrm {d}{\boldsymbol \xi }\\ &\ll \big (2\log \frac {k}{r}\big )^t t! U_t(v_0, u_0) \end{align*}
as desired.
To bound
$U_t(v_0, u_0)$
, we apply the following estimate established by Ford [Reference Ford13].
Lemma 4.3.
Suppose that
$t, u, v$
are integers satisfying
$1\leqslant t\leqslant 10v$
and
$u\geqslant 1$
. Then
Since
$v_0=\lfloor \frac {\log k-\log r}{\log 2}\rfloor$
, we have
If
$0\lt \delta \leqslant 1/10$
, then by (4.4) and (4.25), we have
$J_2=v_0+u_0+O(1)$
and
$8J_2\leqslant 10v_0$
. Hence
$2\log \frac {k}{r}=2(\log k-\log r)\geqslant 2\times 0.9\log k\gt {1.2J_2}$
. Note that
$\frac {\log k-\log r}{J_2}\lt \frac {5}{7}$
. Hence, using Lemmas 4.2 and 4.3, we obtain by (4.25) that
\begin{equation} \begin{split} \sum _{1\leqslant t\leqslant 8J_2}\frac {G_t}{t!} &\ll \sum _{1\leqslant t\lt J_2}\left(2\log \frac {k}{r}\right)^t U_t(v_0, u_0)+\sum _{J_2\leqslant t\leqslant 8J_2}\left(2\log \frac {k}{r}\right)^t U_t(v_0, u_0)\\ &\ll \sum _{1\leqslant t\lt J_2}\frac {u_0(1+(t-J_2)^2)}{(t+1)!(2^{t-J_2}+1)}\left(2\log \frac {k}{r}\right)^t\\ &\quad +\sum _{J_2\leqslant t\leqslant 8J_2}\frac {u_0(1+(t-J_2)^2)}{(t+1)!(2^{t-J_2}+1)}\left(2\log \frac {k}{r}\right)^t\\ &\ll u_0 \frac {\big(2\log \frac {k}{r}\big)^{J_2-1}}{J_2!}+u_0\sum _{l=0}^{7J_2}\frac {1+l^2}{(J_2+1+l)! 2^l}\left(2\log \frac {k}{r}\right)^{J_2+l}\\ &\ll u_0 \frac {(2\log \frac {k}{r})^{J_2-1}}{J_2!}+ u_0\frac {(2\log \frac {k}{r})^{J_2}}{(J_2+1)!}\\ &\ll (\log r)\frac {(2\log \frac {k}{r})^{J_2}}{(J_2+1)!}. \end{split} \end{equation}
If
$t\gt 8J_2$
, one can infer from (4.26) that
$U_t(v_0, u_0)\leqslant \int _{\Omega _t} 1 d{\boldsymbol \xi }=\frac {1}{t!}$
. Moreover, one can derive from
$2\log \frac {k}{r}\leqslant 2\log k$
and
$J_2=\lfloor \frac {\log k}{\log 2}\rfloor$
that
$\frac {2\log \frac {k}{r}}{8J_2}\leqslant 1/4$
. Hence we have by Lemma 4.3 that
\begin{align} \sum _{t\gt 8J_2}\frac {G_t}{t!}\ll \sum _{t\gt 8J_2}\frac {\big (2\log \frac {k}{r}\big )^t }{t!} \ll \frac {(2\log \frac {k}{r})^{8J_2}}{(8J_2)!}\ll \frac {(2\log \frac {k}{r})^{J_2}}{(J_2+1)!}. \end{align}
If
$0\lt \delta \leqslant \frac {1}{10}$
, then by (1.4) we have
$B(r, k)\asymp (\log k)^{-1/2}$
. So we derive from (4.21), (4.27), (4.28), and Stirling’s formula that
as desired if
$0\lt \delta \leqslant \frac {1}{10}$
.
Acknowledgements
The author would like to thank the two anonymous referees for their careful reading and valuable suggestions on the manuscript. Their constructive comments have been instrumental in improving the presentation and overall readability of this work.
Funding statement
This research was supported partially by National Natural Science Foundation of China under Grant No. 12371333.