
THE NUMBER OF PARTS IN A RANDOM t-REGULAR PARTITION

Published online by Cambridge University Press:  16 October 2025

TAPAS BHOWMIK
Affiliation:
Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA. e-mail: tbhowmik@email.sc.edu
WEI-LUN TSAI*
Affiliation:
Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA

Abstract

For any integer $t \geq 2$, we prove a local limit theorem (LLT) with an explicit convergence rate for the number of parts in a uniformly chosen t-regular partition. When $t = 2$, this recovers the LLT for partitions into distinct parts, as previously established in the work of Szekeres [‘Asymptotic distributions of the number and size of parts in unequal partitions’, Bull. Aust. Math. Soc. 36 (1987), 89–97].

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Australian Mathematical Publishing Association Inc.

1 Introduction and statement of results

A partition $\lambda$ of size n is a nonincreasing sequence of positive integers $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_k)$ whose entries, called parts, sum to n. For any integer $t \geq 2$, a t-regular partition of a positive integer n is a partition in which each part appears fewer than t times. Let $p_t(n)$ denote the number of such partitions of n. These partitions have been extensively studied and are connected to a wide range of problems in combinatorics and number theory. For example, when $t = 2$, this gives the partitions into distinct parts, also called distinct partitions. This special case has rich arithmetic significance. Indeed, Ono [11] provided explicit recursions that relate distinct partitions to special values of L-functions associated to elliptic curves. More recently, Ballantine et al. [2] analysed hook length biases arising in comparisons between distinct and odd partitions. For general $t\geq 2$, Hagis [7] applied modular transformations and exponential sum estimates to obtain a Rademacher-type formula for $p_t(n)$, from which an asymptotic formula follows directly.
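As an elementary illustration (the code is ours, not from the paper), $p_t(n)$ can be tabulated by multiplying out the product $\prod_{j\geq 1}(1+z^j+\cdots+z^{(t-1)j})$ truncated at degree n, where the factor for j records how many times the part j is used:

```python
def pt_counts(t, N):
    """Coefficients p_t(0), ..., p_t(N) of prod_{j>=1} (1 + z^j + ... + z^{(t-1)j})."""
    coeffs = [0] * (N + 1)
    coeffs[0] = 1  # the empty partition
    for j in range(1, N + 1):
        new = coeffs[:]  # r = 0: the part j is not used
        for r in range(1, t):  # the part j is used exactly r < t times
            shift = r * j
            if shift > N:
                break
            for n in range(shift, N + 1):
                new[n] += coeffs[n - shift]
        coeffs = new
    return coeffs

# pt_counts(2, 6) == [1, 1, 1, 2, 2, 3, 4]: partitions into distinct parts
# pt_counts(3, 3) == [1, 1, 2, 2]: for n = 3, the partition 1+1+1 is excluded
```

The check `pt_counts(3, 3)` confirms that $1+1+1$ is not 3-regular, while $3$ and $2+1$ are.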

From a probabilistic perspective, Erdős and Lehner [5] were the first to study the distribution of the number of parts in distinct partitions. They proved that the associated random variable is asymptotically normal (that is, it satisfies a central limit theorem (CLT)). A local limit theorem (LLT) for this case was later established by Szekeres [14] using a more delicate analysis [13]. See also the work of Hwang [8] and Mutafchiev [10] for further generalisations developed along different directions. For $t \geq 3$, a CLT was obtained by Ralaivaosaona [12]. In this paper, we strengthen and extend these results by proving an LLT for all $t\geq 2$. To keep the paper self-contained, our unified analysis also provides an alternative proof of the CLT.

To study the limiting distribution of the number of parts in t-regular partitions in a systematic way, we make use of the bivariate generating function

(1.1) $$ \begin{align} G_t(w,z)=\sum_{n=0}^\infty \sum_{m=0}^\infty p_t(m,n)\, w^m z^n=\prod_{j\geq 1} (1+wz^j+w^2z^{2j}+\cdots +w^{t-1}z^{(t-1)j}), \end{align} $$

where $p_t(m,n)$ counts the number of t-regular partitions of n with exactly m parts. For every $n\in \mathbb{Z}_{\geq 1}$, under the associated uniform measure, we consider the random variable $\mathrm{Y}_t(n)$ given by

$$ \begin{align*} \mathbb{P}( \mathrm Y_t(n)=m):=\frac{p_t(m,n)}{p_t(n)}. \end{align*} $$
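The joint counts $p_t(m,n)$ can be tabulated the same way, now also tracking the number of parts; the two-variable sketch below (again ours, purely illustrative) multiplies out the factors of (1.1):

```python
def pt_mn_table(t, N):
    """table[n][m] = p_t(m, n): t-regular partitions of n with exactly m parts."""
    table = [[0] * (N + 1) for _ in range(N + 1)]
    table[0][0] = 1
    for j in range(1, N + 1):
        new = [row[:] for row in table]  # r = 0 term of the j-th factor
        for r in range(1, t):  # the term w^r z^{rj}: r copies of the part j
            dn = r * j
            if dn > N:
                break
            for n in range(dn, N + 1):
                for m in range(r, N + 1):
                    new[n][m] += table[n - dn][m - r]
        table = new
    return table
```

For example, with $t=2$ and $n=5$, the row `pt_mn_table(2, 5)[5]` records $p_2(1,5)=1$ (the partition $5$), $p_2(2,5)=2$ ($4+1$ and $3+2$) and $p_2(3,5)=0$, and its sum recovers $p_2(5)=3$.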

The following theorem [5, 12] illustrates the limiting behaviour of $\mathrm{Y}_t(n)$ in distribution.

Theorem 1.1. For a fixed $t\in \mathbb{Z}_{\geq 2}$, the sequence $\{\mathrm{Y}_t(n)\}$ is asymptotically normal, that is,

$$ \begin{align*} \mathrm Y_t(n)\sim \frac{\sqrt{n}\,\log t}{C}+ \sqrt{K} n^{1/4}\cdot \mathcal{N}(0,1), \end{align*} $$

where

(1.2) $$ \begin{align} C=C_t:=\frac{\pi\sqrt{t-1}}{\sqrt{6t}}, \quad K=K_t:= \frac{t-1}{2C}-\frac{(\log t)^2}{2 C^3}. \end{align} $$

More precisely, for every $x\in \mathbb{R}$,

(1.3) $$ \begin{align} \lim_{n\rightarrow\infty}\mathbb{P}\bigg(\mathrm Y_t(n)\leq \frac{\sqrt{n}\log t}{C}+\sqrt{K} n^{1/4}\, x\bigg)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-{u^2}/{2}}\, du=:\Phi(x). \end{align} $$
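A direct numerical illustration of (1.3) (our own experiment; $t=2$ and $n=400$ are arbitrary choices): tabulating $p_2(m,400)$ exactly and comparing the resulting distribution function with $\Phi$ at the predicted centre and one standard deviation above it.

```python
import math

t, n = 2, 400
C = math.pi * math.sqrt(t - 1) / math.sqrt(6 * t)
K = (t - 1) / (2 * C) - math.log(t) ** 2 / (2 * C ** 3)

# p_2(m, 400): at most 27 distinct parts can sum to 400, so capping m at 30 is exact
mmax = 30
table = [[0] * (mmax + 1) for _ in range(n + 1)]
table[0][0] = 1
for j in range(1, n + 1):              # multiply by (1 + w z^j), in place
    for k in range(n, j - 1, -1):
        for m in range(1, mmax + 1):
            table[k][m] += table[k - j][m - 1]

total = sum(table[n])                  # p_2(400)
mean = math.sqrt(n) * math.log(t) / C  # predicted centre of Y_2(400)
sigma = math.sqrt(K) * n ** 0.25       # predicted standard deviation

def cdf(x):
    """Empirical P(Y_2(400) <= x) from the exact counts."""
    return sum(table[n][m] for m in range(0, int(x) + 1)) / total
```

Already at $n=400$ one finds `cdf(mean)` close to $\Phi(0)=0.5$ and `cdf(mean + sigma)` close to $\Phi(1)\approx 0.841$.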

Example 1.2 (4-regular partitions).

For $t=4$ and $n=1000$, we compute

$$ \begin{align*} \sum_{m\geq 0}p_4(m,1000)w^m&=w + 500w^2+83333w^3+\cdots \\ &\quad +841211289w^{73}+8936481w^{74}+24502w^{75}. \end{align*} $$

In Figure 1, we plot the coefficients $p_4(m,1000)$ and give a table which illustrates an approximation of the left-hand side of (1.3), denoted by $\mathbb{P}_{1000}(x)$, showing that $\mathbb{P}_{1000}(x) \approx \Phi(x)$.
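The first three coefficients displayed above can be confirmed cheaply: states with at most three parts only ever receive contributions from states with at most three parts, so one may truncate the bivariate product (1.1) at $w^3$. The sketch below (our own, purely illustrative) does this.

```python
def p4_upto_3_parts(N):
    """table[n][m] = p_4(m, n) for m <= 3, from prod_j (1 + w z^j + w^2 z^{2j} + w^3 z^{3j})."""
    mmax = 3
    table = [[0] * (mmax + 1) for _ in range(N + 1)]
    table[0][0] = 1
    for j in range(1, N + 1):
        new = [row[:] for row in table]
        for r in range(1, 4):          # r copies of the part j contribute w^r z^{rj}
            dn = r * j
            if dn > N:
                break
            for n in range(dn, N + 1):
                for m in range(r, mmax + 1):
                    new[n][m] += table[n - dn][m - r]
        table = new
    return table

T = p4_upto_3_parts(1000)  # coefficients of w^m z^1000 for m <= 3
```

This reproduces $w + 500w^2 + 83333w^3 + \cdots$; note that $p_4(3,1000)$ agrees with the classical count of partitions of n into exactly three parts (the nearest integer to $n^2/12$), since three parts can never repeat four times.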

Figure 1. $p_4(m,1000)$ and asymptotics for the cumulative distribution for $n=1000$.

Next, we provide an asymptotic formula for $p_t(m,n)$ when m varies within a range depending on $n$, which gives local insight into how the mass function approximates the normal curve (compare Figure 1).

Theorem 1.3. Fix $t\in \mathbb{Z}_{\geq 2}$, and define C and K as in (1.2). Assume the positive integers m and n are such that

(1.4) $$ \begin{align} \rho_{m,n}=\rho:=m-\frac{\sqrt{n}\,\log t}{C}=\mathcal{O}_t (n^{{5}/{18}}). \end{align} $$

Then, as $n\rightarrow \infty$,

$$ \begin{align*} p_t(m,n)=\frac{\sqrt{2C}}{4\pi n\sqrt{K t}}\exp\bigg( 2C\sqrt{n}-\frac{\rho_{m,n}^2}{2K\sqrt{n}}\bigg) (1+\mathcal{O}_t ( n^{-{1}/{10}})). \end{align*} $$
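Theorem 1.3 can also be tested directly against exact counts. The sketch below (our own experiment; $t=2$ and $n=400$ are arbitrary choices) tabulates $p_2(m,400)$ and compares it with the right-hand side at the central value of m. The tolerance is deliberately loose, since the implied constant in the $\mathcal{O}_t(n^{-1/10})$ error term is not made explicit.

```python
import math

t, n = 2, 400
C = math.pi * math.sqrt(t - 1) / math.sqrt(6 * t)
K = (t - 1) / (2 * C) - math.log(t) ** 2 / (2 * C ** 3)

# exact p_2(m, 400) from the product (1.1); at most 27 distinct parts can sum to 400
mmax = 30
table = [[0] * (mmax + 1) for _ in range(n + 1)]
table[0][0] = 1
for j in range(1, n + 1):              # multiply by (1 + w z^j), in place
    for k in range(n, j - 1, -1):
        for m in range(1, mmax + 1):
            table[k][m] += table[k - j][m - 1]

m = round(math.sqrt(n) * math.log(t) / C)   # a central value of m, so rho is small
rho = m - math.sqrt(n) * math.log(t) / C
approx = (math.sqrt(2 * C) / (4 * math.pi * n * math.sqrt(K * t))
          * math.exp(2 * C * math.sqrt(n) - rho ** 2 / (2 * K * math.sqrt(n))))
ratio = table[n][m] / approx
```

Even at this modest n the ratio of exact to predicted values is already close to $1$.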

As an immediate application of Theorem 1.3, we obtain an LLT for $\mathrm{Y}_t(n)$.

Corollary 1.4. Fix $t\in \mathbb{Z}_{\geq 2}$ and let X be an arbitrary bounded subset of $\mathbb{R}$. Then, as $n\rightarrow \infty$,

$$ \begin{align*} \sup_{x\in X}\bigg |\sqrt{K} n^{1/4}\, \mathbb{P}\bigg( \mathrm Y_t(n)= \bigg\lfloor\frac{\sqrt{n}\log t}{C}+\sqrt{K} n^{1/4}\, x\bigg\rfloor\bigg)-\frac{1}{\sqrt{2\pi}}\, e^{-{x^2}/{2}}\bigg|=\mathcal{O}_t (n^{-{1}/{10}}). \end{align*} $$

This paper is organised as follows. In Section 2, we recall several standard properties of the dilogarithm function and derive some key estimates using a variant of the Euler–Maclaurin summation formula. In Section 3, we first obtain a certain asymptotic formula by applying the saddle-point method, and use it to determine the limiting distribution through the moment generating function and a continuity theorem from probability theory. In Section 4, we implement the saddle-point method in two variables to prove Theorem 1.3, which directly implies Corollary 1.4.

2 Preliminaries

2.1 The dilogarithm function and a related identity

We now mention some properties of the dilogarithm function and refer to [1, 16] for details. The series expansion of the dilogarithm function for $|z|<1$ is

$$ \begin{align*} \mathrm{Li}_2(z)=\sum_{n=1}^\infty \frac{z^n}{n^2}. \end{align*} $$

The integral representation,

$$ \begin{align*} \mathrm{Li}_2(z)=-\int_{0}^z\frac{\log(1-u)}{u}\, du, \end{align*} $$

implies that $\mathrm {Li}_2(z)$ can be defined for every $z\in \mathbb {C}$ . The dilogarithm function has a logarithmic branch point at $z=1$ , where it is continuous and satisfies $\mathrm {Li}_2(1)=\pi ^2/6$ . A standard choice for the branch cut is along the positive real line starting from $1$ , which corresponds to the principal branch of the logarithm, so that $\mathrm {Li}_2(z)$ is analytic in the cut plane $\mathbb {C}\setminus [1,\infty )$ .

Lemma 2.1. Fix $t\in \mathbb {Z}_{\geq 2}$ and define C as in (1.2). Then,

$$ \begin{align*} \int_0^\infty \log (1+e^{-x}+e^{-2x}+\cdots+ e^{-(t-1)x})\, dx=C^2. \end{align*} $$

Proof. By the change of variable $v=e^{-x}$ , the left-hand side becomes

$$ \begin{align*} \int_0^1 \log (1+v+v^2+\cdots+ v^{t-1})\,\frac{dv}{v}& = \lim_{\delta\rightarrow 1^-} \bigg( \int_0^\delta \frac{\log(1-v^t)}{v}\, dv-\int_0^\delta \frac{\log(1-v)}{v}\, dv\bigg)\\ &=\lim_{\delta\rightarrow 1^-} \bigg( \mathrm{Li}_2(\delta)-\frac{\mathrm{Li}_2(\delta^t)}{t} \bigg)=C^2, \end{align*} $$

where we use the continuity and the special value of $\mathrm{Li}_2(z)$ at $z=1$.
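Lemma 2.1 is easy to confirm numerically (a sketch of our own, applying a simple midpoint rule to the integral obtained in the proof after the substitution $v=e^{-x}$):

```python
import math

def lemma21_integral(t, steps=200000):
    """Midpoint-rule value of int_0^1 log(1 + v + ... + v^{t-1}) dv / v."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        v = (i + 0.5) * h
        total += math.log(sum(v ** k for k in range(t))) / v
    return total * h

def C_squared(t):
    """C_t^2 = pi^2 (t-1) / (6t), from (1.2)."""
    return math.pi ** 2 * (t - 1) / (6 * t)
```

For instance, `lemma21_integral(2)` matches $C_2^2=\pi^2/12$ to several decimal places (the integrand extends continuously to $v=0$ with value $1$, so the midpoint rule converges quickly).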

2.2 The Euler–Maclaurin summation formula and some auxiliary estimates

To study the asymptotic behaviour of various functions represented by infinite products, it is convenient to take the logarithm and analyse the associated infinite series with the help of the Euler–Maclaurin summation formula. A function $f:(0,\infty )\rightarrow \mathbb{C}$ has rapid decay at infinity if there exists some $\epsilon>0$ such that $f(x)=\mathcal{O}(x^{-1-\epsilon})$ as $x\rightarrow \infty$. In [15], Zagier showed that the classical Euler–Maclaurin summation formula can be applied to derive an asymptotic expansion of the form

(2.1) $$ \begin{align} \sum_{j\geq 0} f((j+\varrho)\beta)= \frac{1}{\beta}\int_0^\infty f(x)\, dx-\sum_{k=0}^{N-1}\frac{B_{k+1}(\varrho)f^{(k)}(0)}{(k+1)!}\beta^k+\mathcal{O}_N(\beta^N), \end{align} $$

as $\beta \rightarrow 0^+$, where $B_l(x)$ are the Bernoulli polynomials, $\varrho \in \mathbb{R}^+$, $N\in \mathbb{Z}_{\geq 1}$ and $f:(0,\infty )\rightarrow \mathbb{C}$ is a smooth function such that f and all of its derivatives have rapid decay at infinity. For an analogue of (2.1) in the case of complex variables, we refer to [3, Theorems 1.2 and 1.3].
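For a concrete instance of (2.1) (the test function is our choice), take $f(x)=e^{-x}$ and $\varrho=1$: the left-hand side is $1/(e^{\beta}-1)$, while the right-hand side with $N=1$ is $\beta^{-1}-B_1(1)f(0)=\beta^{-1}-\tfrac12$, and the discrepancy is indeed $\mathcal{O}(\beta)$ (in fact $\beta/12+\mathcal{O}(\beta^3)$, from the $N=2$ term).

```python
import math

def em_lhs(beta, terms=200000):
    """sum_{j>=0} f((j+1) beta) for f(x) = exp(-x), truncated deep in the tail."""
    return sum(math.exp(-(j + 1) * beta) for j in range(terms))

beta = 0.01
# remainder after subtracting the N = 1 right-hand side of (2.1)
discrepancy = em_lhs(beta) - (1 / beta - 0.5)
```

Here `discrepancy` is about $8.3\times 10^{-4}\approx \beta/12$, as predicted.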

We now derive some auxiliary estimates for the proof of our main results.

Lemma 2.2. For a fixed $t\in \mathbb{Z}_{\geq 2}$ and an arbitrary positive real number w, as $\beta \rightarrow 0^+$,

(2.2) $$ \begin{align} \sum_{j\geq 1} \log (1+we^{-j\beta}+\cdots+w^{t-1}e^{-{(t-1)}j\beta}) &=\frac{B(w)}{\beta} -\frac{1}{2}\log (1+w+\cdots+w^{t-1}) +\mathcal{O}_t(\beta), \end{align} $$
(2.3) $$ \begin{align} \sum_{j\geq 1} \bigg( \frac{wj}{e^{j\beta}-w}-\frac{tw^tj}{e^{jt\beta}-w^t}\bigg) &=\frac{B(w)}{\beta^2} +\mathcal{O}_t(1), \end{align} $$
(2.4) $$ \begin{align} \sum_{j\geq 1} \bigg( \frac{wj^2 e^{j\beta}}{(e^{j\beta}-w)^2}-\frac{w^tj^2t^2 e^{jt\beta}}{(e^{jt\beta}-w^t)^2}\bigg) &=\frac{2B(w)}{\beta^3}+\mathcal{O}_t(1), \end{align} $$

where

(2.5) $$ \begin{align} B(w):=\int_0^\infty \log (1+we^{-x}+w^2e^{-2x}+\cdots +w^{t-1}e^{-(t-1)x})\, dx.\end{align} $$

Proof. Note that the left-hand side of (2.2) can be written as $\sum _{j\geq 0} f_w((j+1)\beta )$ , where

$$ \begin{align*} f_w(x):=\log (1+we^{-x}+w^2e^{-2x}+\cdots+w^{t-1}e^{-(t-1)x}) \end{align*} $$

for $x>0$ , and $f_w(0)=\log (1+w+w^2+\cdots +w^{t-1})$ . Clearly, the function $f_w(x)$ is infinitely differentiable at zero and all of its derivatives have rapid decay at infinity. In particular,

(2.6) $$ \begin{align} f_w^{(1)}(x)=-\frac{w e^{-x}+2w^2e^{-2x}+\cdots +(t-1)w^{t-1}e^{-(t-1)x}}{1+we^{-x}+w^2 e^{-2x}+\cdots +w^{t-1}e^{-(t-1)x}}=\frac{tw^t}{e^{tx}-w^t}-\frac{w}{e^x-w}. \end{align} $$

Applying (2.1) with $N=1$ ,

$$ \begin{align*} \sum_{j\geq 0} f_w( (j+1)\beta) =\frac{1}{\beta}\int_0^\infty f_w(x)\, dx -B_1(1)f_w(0)+\mathcal{O}_t(\beta), \end{align*} $$

which implies the estimate in (2.2).

To prove (2.3), note that from (2.6), the left-hand side can be written as ${\beta ^{-1}}\sum _{j\geq 0} g_w ((j+1)\beta ),$ where $g_w(x):=-xf_w^{(1)}(x)$ . Then, the estimate follows by using (2.1) with $N=1$ and the fact that $g_w(0)=0$ .

For (2.4), we write the left-hand side as ${\beta ^{-2}}\sum _{j\geq 0} h_w ((j+1)\beta )$ , where $h_w(x):=x^2f_w^{(2)}(x)$ . From $\lim _{x\rightarrow \infty } x^2f_w^{(1)}(x)=\lim _{x\rightarrow \infty } xf_w(x)=0$ , we have

$$ \begin{align*} \int_0^\infty h_w(x)\, dx=-2\int_0^\infty xf_w^{(1)}(x)\, dx=2\int_0^\infty f_w(x)\, dx. \end{align*} $$

Thus, the claimed estimate follows from (2.1) with $N=2$ and the fact that $h_w(0)=h_w^{(1)}(0)=0$ .
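As a numerical sanity check of (2.2) (the parameters $t=2$, $w=0.9$, $\beta=0.005$ are our arbitrary choices), note that for $t=2$ the integral (2.5) evaluates to $B(w)=-\mathrm{Li}_2(-w)$, which the sketch below computes from the alternating series:

```python
import math

t, w, beta = 2, 0.9, 0.005

# left-hand side of (2.2) for t = 2: sum_{j>=1} log(1 + w e^{-j beta}), tail truncated
lhs = sum(math.log(1 + w * math.exp(-j * beta)) for j in range(1, 20000))

# B(w) = int_0^infty log(1 + w e^{-x}) dx = -Li_2(-w), via the alternating series
B = sum((-1) ** (k + 1) * w ** k / k ** 2 for k in range(1, 2000))

# main terms of the right-hand side of (2.2)
rhs = B / beta - 0.5 * math.log(1 + w)
```

The difference `lhs - rhs` is a small multiple of $\beta$, consistent with the stated $\mathcal{O}_t(\beta)$ error.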

Lemma 2.3. With the same assumptions as in Lemma 2.2, as $\beta \rightarrow 0^+$ ,

(2.7) $$ \begin{align} \sum_{j\geq 1}\bigg( \frac{w}{e^{j\beta}-w}-\frac{t w^t}{e^{jt\beta}-w^t}\bigg) & =\frac{1}{\beta}\log (1+w+w^2+\cdots+w^{t-1}) +\mathcal{O}_t(1), \end{align} $$
(2.8) $$ \begin{align} \sum_{j\geq 1}\bigg( \frac{w e^{j\beta}}{(e^{j\beta}-w)^2}-\frac{t^2 w^t e^{jt\beta}}{(e^{jt\beta}-w^t)^2}\bigg) & =\frac{1}{\beta}\bigg( \frac{w+2w^2+\cdots+ (t-1)w^{t-1}}{1+w+w^2+\cdots+w^{t-1}} \bigg) +\mathcal{O}_t(1),\end{align} $$
(2.9) $$ \begin{align} \sum_{j\geq 1}\bigg( \frac{jw e^{j\beta}}{(e^{j\beta}-w)^2}-\frac{t^2 w^t j e^{jt\beta}}{(e^{jt\beta}-w^t)^2}\bigg) & =\frac{1}{\beta^2}\log (1+w+w^2+\cdots+w^{t-1}) +\mathcal{O}_t(1). \end{align} $$

Proof. Using (2.6), we can write the left-hand side of (2.7) as $\sum _{j\geq 0}\mathfrak {f}_w ((j+1)\beta )$ , where

$$ \begin{align*} \mathfrak{f}_w(x):=\frac{w e^{-x}+2w^2e^{-2x}+\cdots +(t-1)w^{t-1}e^{-(t-1)x}}{1+we^{-x}+w^2 e^{-2x}+\cdots +w^{t-1}e^{-(t-1)x}}. \end{align*} $$

Note that $ \int _0^\infty \mathfrak {f}_w(x)\, dx=\log (1+w+w^2+\cdots +w^{t-1}), $ and also that $\mathfrak {f}_w (x)$ and its derivatives have rapid decay at infinity. Thus, the claimed estimate in (2.7) follows after using (2.1) with $N=1$ . For the remaining estimates, we again use (2.6) to express the left-hand sides of the equations as $\sum _{j\geq 0}\mathfrak {g}_w( (j+1)\beta )$ and ${\beta ^{-1}}\sum _{j\geq 0}\mathfrak {h}_w( (j+1)\beta )$ , where $\mathfrak {g}_w(x):=-\mathfrak {f}_w^{(1)}(x)$ and $\mathfrak {h}_w(x):=-x\mathfrak {f}_w^{(1)}(x)$ . Then, the claims in (2.8) and (2.9) follow after applying (2.1) with $N=1$ and $N=2$ , respectively.

If w belongs to a bounded subset of positive reals, then the error terms for the estimates in Lemmas 2.2 and 2.3 are uniform with respect to w. In particular, we can allow w to vary in a neighbourhood of $1$ as $\beta \rightarrow 0^+$ . We will require these estimates in a hybrid fashion, where $w=e^{-\alpha }$ for some real number $\alpha \rightarrow 0$ and $\beta \rightarrow 0^+$ at the same time. The main terms of the estimates in Lemmas 2.2 and 2.3 can be simplified using the following expansions:

(2.10) $$ \begin{align} B(w) = C^2-\alpha\log t+\frac{t-1}{4}\alpha^2+\mathcal{O}_t(\alpha^3), \end{align} $$
(2.11) $$ \begin{align} \log (1+w+w^2+\cdots+w^{t-1}) =\log t-\frac{t-1}{2}\alpha+\mathcal{O}_t(\alpha^2), \end{align} $$
(2.12) $$ \begin{align} \frac{w+2w^2+\cdots+ (t-1)w^{t-1}}{1+w+w^2+\cdots+w^{t-1}} =\frac{t-1}{2}-\frac{t^2-1}{12}\alpha+\mathcal{O}_t(\alpha^2). \end{align} $$

The expansions in (2.11) and (2.12) follow from an elementary computation. For (2.10), we put $w=e^{-\alpha}$ in (2.5) and compute the Taylor expansion at $\alpha=0$. In particular, the constant term $C^2$ follows from Lemma 2.1.
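These expansions are easy to check numerically (the parameters $t=3$, $\alpha=10^{-3}$ are arbitrary choices of ours): with $w=e^{-\alpha}$, the left-hand side of (2.12) tends to $(t-1)/2$ as $\alpha\to 0$ with first-order correction $-(t^2-1)\alpha/12$, and the left-hand side of (2.11) behaves as $\log t-(t-1)\alpha/2$.

```python
import math

t, alpha = 3, 1e-3
w = math.exp(-alpha)

num = sum(k * w ** k for k in range(1, t))   # w + 2w^2 + ... + (t-1) w^{t-1}
den = sum(w ** k for k in range(t))          # 1 + w + ... + w^{t-1}

ratio = num / den                            # left-hand side of (2.12)
logden = math.log(den)                       # left-hand side of (2.11)
```

At $\alpha=10^{-3}$ both quantities already agree with the stated main terms to within $\mathcal{O}(\alpha^2)$.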

2.3 A continuity theorem from probability theory

A classical result for determining the limiting distribution of a sequence of random variables $\{X_n\}$, not necessarily defined on the same probability space, is Lévy’s continuity theorem, which relates the associated sequence of characteristic functions with convergence in distribution. Later, Curtiss derived a corresponding result in terms of moment generating functions (MGFs), which is more convenient in many applications when the corresponding MGFs exist (see [4] for the definition of the MGF and related results). We will apply a refined version of Curtiss’s result, which allows us to simplify our analysis.

Theorem 2.4 [9, Theorem 2].

Let $ a < b$ , and let $M(X_n;r)$ and $M(X;r)$ be MGFs of the random variables $X_n$ and X, respectively. If

$$ \begin{align*} \lim_{n\rightarrow\infty} M(X_n;r)=M(X;r) \quad\mbox{for } a<r<b, \end{align*} $$

then the sequence $\{X_n\}$ converges to X in distribution.

3 Proof of Theorem 1.1

We rewrite the generating function from (1.1) as

(3.1) $$ \begin{align} G_t(w,z)=\sum_{n\geq 0} P_{t;n}(w)\, z^n=\prod_{j\geq 1} (1+wz^j+w^2z^{2j}+\cdots+w^{t-1}z^{(t-1)j}), \end{align} $$

where $P_{t;0}(w)=1$ and, for every $n\geq 1$ ,

$$ \begin{align*} P_{t;n}(w):=\sum_{m\geq 0}p_t(m,n)w^m. \end{align*} $$

To identify the distribution of $\mathrm {Y}_t(n)$ , we will apply Theorem 2.4 to a normalised version of $\mathrm {Y}_t(n).$ Hence, we require an asymptotic formula for $P_{t;n}(w)$ as w tends to $1$ at a certain rate with respect to n.

Proposition 3.1. For an arbitrary $u\in \mathbb{R}_{\geq 0}$, let $w_n:= \exp (-u/n^{{1}/{4}})$. Then, as $n\rightarrow \infty$,

$$ \begin{align*} P_{t;n}(w_n)=\frac{\sqrt{C}}{2\sqrt{\pi t} \,n^{3/4}}\exp\bigg( 2C\sqrt{n}-\frac{n^{1/4}u\log t}{C}+\frac{K u^2}{2}\bigg) (1+\mathcal{O}_t (n^{-{1}/{7}})). \end{align*} $$

Remark. Although the choice of the quantity $n^{1/4}$ in $w_n$ might not be obvious at this point, it is essentially the magnitude of the standard deviation of $\mathrm{Y}_t(n)$. While we could treat $w_n$ merely as a sequence converging to $1$ and identify the right rate later from a suitable condition, this explicit choice simplifies the expressions for the error terms in our analysis. One can apply Wright’s circle method to obtain an asymptotic formula for the variance of $\mathrm{Y}_t(n)$.

Proof of Proposition 3.1.

Applying Cauchy’s theorem to (3.1), for $0<z_0<1$ ,

(3.2) $$ \begin{align} P_{t;n}(w_n)=\frac{1}{2\pi }\int_{-\pi}^\pi \exp (g_t(w_n;z_0e^{i\theta})) \, d\theta , \end{align} $$

where $g_t(w_n;z):=\operatorname {\mathrm {Log}}( z^{-n}G_t(w_n,z))$ for $0<|z|<1$ . Here, $\operatorname {\mathrm {Log}} {(\cdot )}$ denotes the principal value of the logarithm, with the imaginary part belonging to $(-\pi ,\pi ]$ . We apply the saddle-point method to estimate the integral in (3.2). We first need to determine ${z_0=e^{-\beta _n}}$ such that $g_t^{(1)}(w_n;z_0)=0$ . Since

$$ \begin{align*} g_t^{(1)}(w_n;z)=\bigg[ \sum_{j\geq 1}\bigg(\frac{jw_n z^j }{1-w_nz^j}-\frac{jtw_n^t z^{tj}}{1-w_n^tz^{tj}}\bigg)-n\bigg]\frac{1}{z}, \end{align*} $$

the condition $g_t^{(1)}(w_n;e^{-\beta _n})=0$ yields the equation

$$ \begin{align*} \sum_{j\geq 1}\bigg( \frac{jw_n}{e^{j\beta_n}-w_n}-\frac{jtw_n^t}{e^{jt\beta_n}-w_n^t}\bigg) =n. \end{align*} $$

Substituting the estimate from Lemma 2.2 and then solving for $\beta _n$ , we obtain

(3.3) $$ \begin{align} \beta_n=\frac{\sqrt{B(u;n)}}{\sqrt{n}} (1+\mathcal{O}_t (n^{-1})), \end{align} $$

where $B(u;n):=B(w_n)$ and, by the expansion (2.10),

$$ \begin{align*} B(u;n)=C^2-\frac{u\log t}{n^{1/4}}+\frac{(t-1)u^2}{4\sqrt{n}}+\mathcal{O}_t(n^{-{3}/{4}}). \end{align*} $$
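When $u=0$ (so $w_n=1$), $B(0;n)=C^2$ by Lemma 2.1, and (3.3) predicts $\beta_n\approx C/\sqrt{n}$. This is easy to verify numerically (the sketch is ours; $t=2$ and $n=10^6$ are arbitrary choices):

```python
import math

t, n = 2, 10 ** 6
C = math.pi * math.sqrt(t - 1) / math.sqrt(6 * t)
beta = C / math.sqrt(n)          # predicted saddle point for w_n = 1

# left-hand side of the saddle-point equation with w_n = 1
s, j = 0.0, 1
while j * beta < 50.0:           # terms beyond j * beta = 50 are negligible
    s += j / math.expm1(j * beta) - t * j / math.expm1(t * j * beta)
    j += 1
```

The sum `s` reproduces n up to an additive $\mathcal{O}_t(1)$ error, in line with (2.3).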

Next, we estimate $g_t^{(2)}(w_n;z_0)$ , applying Lemma 2.2 together with $\beta _n$ from (3.3),

(3.4) $$ \begin{align} g_t^{(2)}(w_n;e^{-\beta_n})&=e^{2\beta_n}\sum_{j\geq 1}\bigg(\frac{j^2 w_n e^{j\beta_n}}{( e^{j\beta_n}-w_n)^2}-\frac{j^2t^2w_n^t e^{jt\beta_n} }{( e^{jt\beta_n}-w_n^t)^2} \bigg)-e^{\beta_n}g_t^{(1)}(w_n;e^{-\beta_n})\nonumber \\ &=e^{2\beta_n}\bigg(\frac{2B(u;n)}{\beta_n^3}+\mathcal{O}_t( 1)\bigg)\nonumber \\ &=e^{2\beta_n}\bigg( \frac{2n^{3/2}}{\sqrt{B(u;n)}} (1+\mathcal{O}_t (n^{-1}))+\mathcal{O}_t(1)\bigg)\nonumber \\ &=\frac{2n^{3/2}}{C} (1+\mathcal{O}_t (n^{-{1}/{4}})). \end{align} $$

By a similar argument, $ g_t^{(3)}(w_n;e^{-\beta _n})=\mathcal {O}_t(n^2). $ Then, we split the integral in (3.2) as

$$ \begin{align*}P_{t;n}(w_n)=I_1+I_2,\end{align*} $$

where

$$ \begin{align*} I_1:=\frac{1}{2\pi}\int_{|\theta|\leq n^{-5/7}}\exp\{ g_t (w_n;e^{-\beta_n+i\theta})\}\, d\theta,\quad I_2:=\frac{1}{2\pi}\int_{|\theta|>n^{-5/7}} \exp\{ g_t (w_n;e^{-\beta_n+i\theta})\}\, d\theta. \end{align*} $$

To estimate $I_1$ , we consider the Taylor expansion of $g_t(w_n;z)$ around $z_0=e^{-\beta _n}$ ,

$$ \begin{align*} g_t(w_n;z)=g_t(w_n;e^{-\beta_n})+\tfrac{1}{2}g_t^{(2)}(w_n;e^{-\beta_n})(z-e^{-\beta_n})^2+\mathcal{O}_t (n^2 (z-e^{-\beta_n})^3). \end{align*} $$

For $z=e^{-\beta _n+i\theta }$ with $|\theta |\leq n^{-5/7}$ , by applying (3.3),

$$ \begin{align*} z-e^{-\beta_n}= (1+O(\beta_n)) (i\theta+O(n^{-{10}/{7}}) ) =i\theta+\mathcal{O}_t (n^{-{17}/{14}}) \end{align*} $$

and hence,

$$ \begin{align*} g_t (w_n;e^{-\beta_n+i\theta}) =g_t(w_n;e^{-\beta_n})-\frac{\theta^2}{2}\,g_t^{(2)}(w_n;e^{-\beta_n})+\mathcal{O}_t (n^{-{1}/{7}}). \end{align*} $$

Thus,

(3.5) $$ \begin{align} I_1=\frac{\exp (g_t(w_n;e^{-\beta_n}))} {2\pi}\bigg[\int_{-n^{-5/7}}^{n^{-5/7}} \exp\bigg(-\frac{\theta^2}{2}\,g_t^{(2)}(w_n;e^{-\beta_n})\bigg)\, d\theta\bigg] (1+\mathcal{O}_t (n^{-{1}/{7}}) ). \end{align} $$

We now show that the integral in $I_1$ can be taken from $-\infty $ to $\infty $ with a negligible error. More precisely, the contribution from the tails tends to zero sub-exponentially with respect to n, as we have the estimate

(3.6) $$ \begin{align} \int_{n^{-5/7}}^\infty \exp\bigg( -\frac{\theta^2}{2}\,g_t^{(2)}(w_n;e^{-\beta_n})\bigg)\,d\theta &\leq \int_{n^{-5/7}}^\infty \exp\bigg( -\frac{\theta}{2} n^{-{5}/{7}}g_t^{(2)}(w_n;e^{-\beta_n})\bigg)\,d\theta \nonumber\\ &= \frac{2}{n^{-{5}/{7}}g_t^{(2)}(w_n;e^{-\beta_n})}\exp\bigg( -\frac{1}{2}n^{-{10}/{7}}g_t^{(2)} (w_n;e^{-\beta_n})\bigg) \nonumber\\ &\leq \exp\bigg( -\frac{1}{C}\,n^{{1}/{14}}\bigg), \end{align} $$

and the same bound for the integral from $-\infty $ to $-n^{-5/7}$ because the integrand is an even function. Computing the Gaussian integral and applying the estimate from (3.4),

(3.7) $$ \begin{align} \int_{-\infty}^\infty \exp\bigg( -\frac{\theta^2}{2}\,g_t^{(2)}(w_n;e^{-\beta_n})\bigg)\,d\theta=\sqrt{\frac{2\pi}{g_t^{(2)}(w_n;e^{-\beta_n})}}=\frac{\sqrt{C\pi}}{n^{3/4}} (1+\mathcal{O}_t (n^{-{1}/{4}})). \end{align} $$

Rewriting the integral in (3.5), and substituting the evaluations from (3.6) and (3.7),

$$ \begin{align*} I_1=\sqrt{\frac{C}{4\pi}}\cdot\frac{\exp (g_t(w_n;e^{-\beta_n}))}{ n^{3/4}} (1+\mathcal{O}_{t} (n^{-{1}/{7}}) ). \end{align*} $$

Next, we show that $I_2$ does not contribute to the asymptotic formula of $P_{t;n}(w_n)$ . To estimate $I_2$ , we express the absolute value of the integrand as

$$ \begin{align*} |\!\exp (g_t(w_n;e^{-\beta_n+i\theta}))|=\exp (g_t(w_n;e^{-\beta_n})) \bigg|\frac{G_t (w_n,e^{-\beta_n+i\theta})}{G_t (w_n,e^{-\beta_n})}\bigg|. \end{align*} $$

By some algebraic manipulations,

$$ \begin{align*} |G_t (w_n,e^{-\beta_n+i\theta}) |^2= \prod_{j\geq 1} \bigg[\frac{(1-w_n^te^{-jt\beta_n})^2+2w_n^te^{-jt\beta_n} (1-\cos{jt\theta})}{(1-w_n e^{-j\beta_n})^2+2w_n e^{-j\beta_n} (1-\cos{j\theta})}\bigg], \end{align*} $$

and hence,

$$ \begin{align*} \bigg|\frac{G_t (w_n,e^{-\beta_n+i\theta})}{G_t (w_n,e^{-\beta_n})}\bigg|^2=\prod_{j\geq 1}\left[ \frac{1+\dfrac{2w_n^te^{-jt\beta_n}}{(1-w_n^te^{-jt\beta_n})^2} (1-\cos{jt\theta})}{1+\dfrac{2w_n e^{-j\beta_n}}{(1-w_n e^{-j\beta_n})^2} (1-\cos{j\theta})}\right]. \end{align*} $$

Since $w_n\leq 1$ , for every $j\geq 1$ ,

$$ \begin{align*} \frac{2w_n^t}{ (1-w_n^te^{-jt\beta_n})^2}\leq \frac{2w_n}{(1-w_n e^{-jt\beta_n})^2}, \end{align*} $$

and so, we can write

$$ \begin{align*} \bigg|\frac{G_t (w_n,e^{-\beta_n+i\theta})}{G_t (w_n,e^{-\beta_n})}\bigg|^2 &\leq \prod_{\substack{j\geq 1\\ t\nmid j}} \bigg[1+\frac{2w_n e^{-j\beta_n}}{(1-w_n e^{-j\beta_n})^2}( 1-\cos{j\theta})\bigg]^{-1}\\ &\leq \prod_{\substack{ \sqrt{n}\leq j\leq 2\sqrt{n}\\ t\nmid j }} \bigg[1+\frac{2w_n e^{-j\beta_n}}{(1-w_n e^{-j\beta_n})^2} (1-\cos{j\theta})\bigg]^{-1}, \end{align*} $$

where the last inequality follows because each factor of the infinite product is bounded above by $1$ . Note that $w_n\rightarrow 1^-$ and $\beta _n\sim {C}/{\sqrt {n}}$ as $n\rightarrow \infty $ . Thus, for all sufficiently large n and $\sqrt {n}\leq j\leq 2\sqrt {n}$ , there is a positive constant $\kappa $ such that

$$ \begin{align*} \frac{2w_n e^{-j\beta_n}}{( 1-w_n e^{-j\beta_n})^2}\geq \kappa, \end{align*} $$

uniformly in n and j. Whenever $n^{-5/7}<|\theta |\leq \pi $ , this yields

(3.8) $$ \begin{align} \bigg|\frac{G_t (w_n,e^{-\beta_n+i\theta})}{G_t (w_n,e^{-\beta_n})}\bigg|^2 &\leq \prod_{\substack{ \sqrt{n}\leq j\leq 2\sqrt{n}\\ t\nmid j }} [1+\kappa (1-\cos{j\theta})]^{-1}\leq \exp (-2n^{\delta}) \end{align} $$

for some explicit constant $\delta>0$. The final inequality in (3.8) follows by a standard argument (see, for example, [6, pages 430–431]), and so we do not repeat it here. Applying (3.8), for sufficiently large n,

$$ \begin{align*} |I_2|\leq \frac{\exp (g_t ( w_n;e^{-\beta_n}))}{2\pi}\int_{n^{-5/7}<|\theta|\leq \pi} \bigg|\frac{G_t (w_n,e^{-\beta_n+i\theta})}{G_t (w_n,e^{-\beta_n})}\bigg|\, d\theta\leq \exp (g_t ( w_n;e^{-\beta_n})-n^{\delta}). \end{align*} $$

Combining the asymptotics of $I_1$ and $I_2$ , we obtain

(3.9) $$ \begin{align} P_{t;n}(w_n)=\sqrt{\frac{C}{4\pi}}\cdot\frac{\exp (g_t(w_n;e^{-\beta_n}))}{ n^{3/4}} (1+\mathcal{O}_t (n^{-{1}/{7}}) ). \end{align} $$

Then, we estimate $g_t(w_n;e^{-\beta _n})$ by applying Lemma 2.2 and (3.3),

$$ \begin{align*} g_t(w_n;e^{-\beta_n})&=\sum_{j\geq 1}\log (1+w_ne^{-j\beta_n}+w_n^2e^{-2j\beta_n}+\cdots+w_n^{t-1}e^{-(t-1)j\beta_n}) +n\beta_n\\ &=\frac{B(u;n)}{\beta_n}-\frac{1}{2}\log t+n\beta_n +\mathcal{O}_t (n^{-{1}/{4}})\\ &=2\sqrt{n B(u;n)} (1+\mathcal{O}_t( n^{-1})) -\frac{1}{2}\log t+\mathcal{O}_t(n^{-{1}/{4}})\\ &=2C\sqrt{n}-\frac{n^{1/4}u\log t}{C}+\frac{K u^2}{2}-\frac{1}{2}\log t+\mathcal{O}_t( n^{-{1}/{4}}). \end{align*} $$

Finally, we substitute this estimate of $g_t(w_n;z_0)$ in (3.9) to get the desired asymptotic formula.

Proof of Theorem 1.1.

For every $n\geq 1$ , we normalise the random variable $\mathrm {Y}_{t}(n)$ by

$$ \begin{align*} Z_t(n):=\frac{\mathrm{Y}_t(n)-{\sqrt{n}(\log t)}/{C}}{\sqrt{K}n^{1/4}}. \end{align*} $$

For arbitrary $r\in \mathbb {R}_{\geq 0}$ , we compute the moment generating function

$$ \begin{align*} M (Z_t(n);-r) & =\sum_{m\geq 0}\frac{p_t(m,n)}{p_t(n)}\exp\bigg( -\frac{mr}{\sqrt{K}n^{1/4}}+\frac{n^{1/4}r\log t}{\sqrt{K}C}\bigg) \\ & =\frac{P_{t;n}(w_n)}{P_{t;n}(1)}\exp\bigg(\frac{n^{1/4}r\log t}{\sqrt{K}C}\bigg), \end{align*} $$

where $w_n:=\exp (-r/\sqrt {K} n^{1/4})$ for all $n\geq 1$ . Taking $u=r/\sqrt {K}$ in Proposition 3.1, we directly get an asymptotic formula for $P_{t;n}(w_n)$ . In particular, for $u=0$ ,

(3.10) $$ \begin{align} P_{t;n}(1)=\frac{\sqrt{C}}{2\sqrt{\pi t} \,n^{3/4}}\, \exp (2C\sqrt{n}) (1+\mathcal{O}_t ( n^{-{1}/{7}})). \end{align} $$

Substituting these asymptotic expansions and then letting $n\rightarrow \infty $ , we obtain

$$ \begin{align*} \lim_{n\rightarrow\infty }M(Z_t(n);-r)=e^{r^2/2}. \end{align*} $$

Hence, Theorem 2.4 implies that $\{Z_t(n)\}$ converges in distribution to $\mathcal {N}(0,1).$

4 Proofs of Theorem 1.3 and Corollary 1.4

Proof of Theorem 1.3.

For arbitrary $(\alpha ,\beta )\in \mathbb {R}\times \mathbb {R}_{>0}$ , we let

$$ \begin{align*} \mathscr{D}:=\{w=w_0e^{i\phi}: w_0=e^{-\alpha}, -\pi<\phi\leq \pi \},\quad \mathscr{D'}:=\{z=z_0e^{i\theta}: z_0=e^{-\beta}, -\pi<\theta\leq \pi \}. \end{align*} $$

Then, applying Cauchy’s theorem to (3.1),

(4.1) $$ \begin{align} p_t(m,n)=\frac{1}{(2\pi i)^2}\int_{\mathscr{D'}}\int_{\mathscr{D}}\frac{G_t(w,z)}{w^{m+1}z^{n+1}}\, dw\, dz=\frac{1}{4\pi^2}\int_{-\pi}^\pi\int_{-\pi}^\pi \exp( g_t(w,z))\, d\phi\, d\theta, \end{align} $$

where $g_t(w,z):=\operatorname {\mathrm {Log}} G_t(w,z)-m\operatorname {\mathrm {Log}} w-n\operatorname {\mathrm {Log}} z$ . To estimate (4.1) via the saddle-point method, we need to choose $\alpha $ and $\beta $ such that

$$ \begin{align*}\frac{\partial}{\partial w}g_t(w,z)|_{(w_0,z_0)}=\frac{\partial}{\partial z}g_t(w,z)|_{(w_0,z_0)}=0\end{align*} $$

up to suitable error. These conditions yield the saddle-point equations

(4.2) $$ \begin{align} \sum_{j\geq 1}\bigg(\frac{w_0}{e^{j\beta}-w_0}-\frac{tw_0^t}{e^{jt\beta}-w_0^t}\bigg)=m,\quad \sum_{j\geq 1}\bigg(\frac{jw_0}{e^{j\beta}-w_0}-\frac{jtw_0^t}{e^{jt\beta}-w_0^t}\bigg)=n. \end{align} $$

For m and n satisfying (1.4) and $\rho =m-{\sqrt {n}\,\log t}/{C}$ , we choose $\alpha $ and $\beta $ depending on n by

$$ \begin{align*} \alpha=\alpha_n:=-\frac{\rho}{K\sqrt{n}},\quad \beta=\beta_n:=\frac{C}{\sqrt{n}}+\frac{\rho \log t}{2CK n}, \end{align*} $$

where the constants C and K are the same as in Theorem 1.1. Applying Lemmas 2.2 and 2.3 together with (2.11) and (2.12), we see that with these values of $\alpha $ and $\beta $ , the saddle-point conditions (4.2) become

$$ \begin{align*} \sum_{j\geq 1}\bigg(\frac{w_0}{e^{j\beta}-w_0}-\frac{tw_0^t}{e^{jt\beta}-w_0^t}\bigg)&=\frac{1}{\beta}\bigg(\log t-\frac{t-1}{2}\alpha+\mathcal{O}_t(\alpha^2)\bigg)+\mathcal{O}_t(1)\nonumber\\ &=\bigg( \frac{\sqrt{n}}{C}-\frac{\rho\log t}{2C^3K}+\mathcal{O}_t (n^{1/18})\bigg) \bigg(\log t -\frac{t-1}{2}\alpha\bigg) +\mathcal{O}_t(n^{1/18}) \nonumber\\ &=\frac{\sqrt{n}\log t}{C}+\rho+\mathcal{O}_t (n^{1/18}) =m+\mathcal{O}_t (n^{1/18}), \end{align*} $$

and similarly,

$$ \begin{align*} \sum_{j\geq 1}\bigg(\frac{jw_0}{e^{j\beta}-w_0}-\frac{jtw_0^t}{e^{jt\beta}-w_0^t}\bigg)&=\frac{1}{\beta^2} (C^2-\alpha\log t +\mathcal{O}_t (\alpha^2))+\mathcal{O}_t(1)\nonumber\\ &=\bigg( \frac{n}{C^2}-\frac{\rho\sqrt{n}\log t}{C^4K}+\mathcal{O}_t (n^{5/9}) \bigg) \bigg( C^2+\frac{\rho \log t}{K \sqrt{n}}\bigg)+\mathcal{O}_t (n^{5/9})\nonumber\\ &=n+\mathcal{O}_t (n^{5/9}), \end{align*} $$

which are sufficient for our purposes. We break the double integral (4.1) into two parts and focus on the arcs near $\phi =0$ and $\theta =0$ ,

(4.3) $$ \begin{align} \{w=e^{-\alpha_n+i\phi}, |\phi|\leq n^{-{1}/{5}} \} \quad\mbox{and}\quad \{z=e^{-\beta_n+i\theta}: |\theta|<n^{-{5}/{7}} \}, \end{align} $$

as some estimates analogous to (3.8) ensure that the integrals over the complementary arcs have subexponentially small contribution compared with (4.3). As in the proof of Proposition 3.1, we now expand the integrand over (4.3) in a Taylor series expansion centred at $(w_0,z_0)$ . Recall that

$$ \begin{align*} \operatorname{\mathrm{Log}} G_t(w,z)=\sum_{j\geq 1}\operatorname{\mathrm{Log}} (1+wz^j+w^2z^{2j}+\cdots+w^{t-1}z^{(t-1)j}). \end{align*} $$

After computing the partial derivatives, we see that

$$ \begin{align*} g_t(w,z) & =\log G_t(w_0,z_0) - (\mathcal{A}_n\phi^2+2\mathcal{B}_n\phi\theta+\mathcal{C}_n\theta^2) \\ &\quad +i\phi \sum_{j\geq 1}\bigg( \frac{w_0}{e^{j\beta}-w_0}-\frac{tw_0^t}{e^{jt\beta}-w_0^t}\bigg)+i\theta \sum_{j\geq 1}\bigg(\frac{jw_0}{e^{j\beta}-w_0}-\frac{jtw_0^t}{e^{jt\beta}-w_0^t}\bigg) \\ &\quad +\mathcal{O}_t\bigg(\max\bigg\{ \frac{|\phi^3|}{\beta},\frac{|\phi^2\theta|}{\beta^2},\frac{|\phi\theta^2|}{\beta^3},\frac{|\theta^3|}{\beta^4}\bigg\}\bigg) +m\alpha-im\phi+n\beta-in\theta,\nonumber \end{align*} $$

where

$$ \begin{align*} \mathcal{A}_n&:= \frac{1}{2}\sum_{j\geq 1}\bigg(\frac{e^{j\beta}w_0}{(e^{j\beta}-w_0)^2}-\frac{t^2w_0^te^{jt\beta}}{(e^{jt\beta}-w_0^t)^2}\bigg),\quad \mathcal{B}_n:= \frac{1}{2}\sum_{j\geq 1}\bigg(\frac{je^{j\beta}w_0}{(e^{j\beta}-w_0)^2}-\frac{jt^2w_0^te^{jt\beta}}{(e^{jt\beta}-w_0^t)^2}\bigg) \end{align*} $$

and

$$ \begin{align*} \mathcal{C}_n:= \frac{1}{2}\sum_{j\geq 1}\bigg( \frac{j^2e^{j\beta}w_0}{ (e^{j\beta}-w_0)^2}-\frac{j^2t^2w_0^te^{jt\beta}}{ (e^{jt\beta}-w_0^t)^2}\bigg). \end{align*} $$

Applying the estimates from (2.2) and (2.7),

$$ \begin{align*} & i\phi\bigg[ \sum_{j\geq 1}\bigg(\frac{w_0}{e^{j\beta}-w_0}-\frac{tw_0^t}{e^{jt\beta}-w_0^t}\bigg)-m\bigg]=\mathcal{O}_t (|\phi|\,n^{1/18})=\mathcal{O}_t (n^{-{13}/{90}}),\nonumber\\ &i\theta\bigg[\sum_{j\geq 1}\bigg(\frac{jw_0}{e^{j\beta}-w_0}-\frac{jtw_0^t}{e^{jt\beta}-w_0^t}\bigg)-n\bigg]=\mathcal{O}_t (|\theta|\,n^{5/9}) =\mathcal{O}_t (n^{-{10}/{63}}).\nonumber \end{align*} $$

Also note that

$$ \begin{align*} \frac{|\phi^3|}{\beta}=\mathcal{O}_t (n^{-{1}/{10}}),\quad \frac{|\phi^2\theta|}{\beta^2}=\mathcal{O}_t (n^{-{4}/{35}}),\quad \frac{|\phi\theta^2|}{\beta^3}=\mathcal{O}_t (n^{-{9}/{70}}),\quad \frac{|\theta^3|}{\beta^4}=\mathcal{O}_t (n^{-{1}/{7}}). \end{align*} $$
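These four bounds are pure exponent arithmetic: on the truncated arcs, $|\phi|\leq n^{-1/5}$ , $|\theta|\leq n^{-5/7}$ and $\beta\asymp n^{-1/2}$ . A quick check of the exponents with exact rational arithmetic (a sketch for the reader, not part of the proof):

```python
from fractions import Fraction as F

# Exponents of n on the truncated arcs:
# beta ~ n^(-1/2), |phi| <= n^(-1/5), |theta| <= n^(-5/7).
beta, phi, theta = F(-1, 2), F(-1, 5), F(-5, 7)

# Exponent of n in each term of the Taylor remainder.
bounds = {
    "|phi^3|/beta":         3 * phi - beta,             # claimed -1/10
    "|phi^2 theta|/beta^2": 2 * phi + theta - 2 * beta, # claimed -4/35
    "|phi theta^2|/beta^3": phi + 2 * theta - 3 * beta, # claimed -9/70
    "|theta^3|/beta^4":     3 * theta - 4 * beta,       # claimed -1/7
}
for name, e in bounds.items():
    print(f"{name} = O(n^({e}))")
```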

Substituting these estimates into the expansion of $g_t(w,z)$ and combining with (4.1),

$$ \begin{align*} p_t(m,n)& =\frac{\exp (g_t(w_0,z_0))}{4\pi^2} \int_{-n^{-5/7}}^{n^{-5/7}}\int_{-n^{-1/5}}^{n^{-1/5}} \exp\{ -( \mathcal{A}_n\phi^2+2\mathcal{B}_n\phi\theta+\mathcal{C}_n\theta^2)\} \,d\phi \,d\theta \\ &\quad \times (1+\mathcal{O}_t (n^{-{1}/{10}})). \end{align*} $$

By Lemma 2.3 together with the expansions from (2.11) and (2.12),

$$ \begin{align*}\begin{aligned} \mathcal{A}_n=\frac{t-1}{4\beta}+\mathcal{O}_t (n^{5/18}),\quad \mathcal{B}_n=\frac{\log t}{2\beta^2}+\mathcal{O}_t(n^{14/18}), \quad \mathcal{C}_n=\frac{C^2}{\beta^3}+\mathcal{O}_t (n^{23/18}). \end{aligned}\end{align*} $$

For $|\phi|=n^{-1/5}$ and $|\theta|=n^{-5/7}$ , as $n\rightarrow\infty$ ,

$$ \begin{align*} \mathcal{A}_n\phi^2 \sim \frac{t-1}{4C}\,n^{1/10},\quad \mathcal{B}_n|\phi\theta|\sim \frac{\log t}{2C^2}\,n^{3/35},\quad \mathcal{C}_n\theta^2\sim \frac{1}{C}\,n^{1/14}. \end{align*} $$

Thus, the same argument as in (3.6) shows that the lower and upper limits of the double integral in $p_t(m,n)$ can be replaced by $-\infty $ and $+\infty $ , respectively, at the cost of an error term that is subexponentially small in n. We then evaluate the completed double integral via the formula

$$ \begin{align*} \int_{-\infty}^\infty\int_{-\infty}^\infty \exp\{ -( \mathcal{A}_n\phi^2+2\mathcal{B}_n\phi\theta+\mathcal{C}_n\theta^2)\}\, d\phi\, d\theta=\frac{\pi}{\sqrt{\mathcal{A}_n\mathcal{C}_n-\mathcal{B}_n^2}}, \end{align*} $$

where

$$ \begin{align*} \mathcal{A}_n \mathcal{C}_n-\mathcal{B}_n^2=\frac{1}{\beta^4}\bigg(\frac{(t-1)C^2}{4}-\frac{(\log t)^2}{4}\bigg)+\mathcal{O}_t (n^{16/9}) =\frac{K n^2}{2C} (1+\mathcal{O}_t (n^{-{2/9}}) ). \end{align*} $$
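The closed form used here is the standard bivariate Gaussian integral, valid whenever the quadratic form is positive definite, that is, $\mathcal{A}_n\mathcal{C}_n-\mathcal{B}_n^2>0$ . A numerical illustration with placeholder coefficients (not the $\mathcal{A}_n,\mathcal{B}_n,\mathcal{C}_n$ above):

```python
import math

def gaussian_double_integral(A, B, C, R=8.0, steps=400):
    """Midpoint-rule approximation of the integral of
    exp(-(A*phi^2 + 2*B*phi*theta + C*theta^2)) over [-R, R]^2."""
    h = 2 * R / steps
    total = 0.0
    for i in range(steps):
        phi = -R + (i + 0.5) * h
        for j in range(steps):
            theta = -R + (j + 0.5) * h
            total += math.exp(-(A * phi ** 2 + 2 * B * phi * theta + C * theta ** 2))
    return total * h * h

# Any positive-definite example (A*C - B^2 > 0) will do.
A, B, C = 2.0, 0.5, 1.0
closed_form = math.pi / math.sqrt(A * C - B * B)
approx = gaussian_double_integral(A, B, C)
print(approx, closed_form)
```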

Substituting these estimates in the expression for $p_t(m,n)$ above gives

(4.4) $$ \begin{align} p_t(m,n)=\frac{\sqrt{2C}}{4\pi n\sqrt{K}}\exp ({g_t(w_0,z_0)}) (1+\mathcal{O}_t (n^{-{1}/{10}})). \end{align} $$

Then, we estimate $g_t(w_0,z_0)$ by using Lemma 2.2 and (2.10) to give

$$ \begin{align*} g_t(w_0,z_0)&=\log G_t(e^{-\alpha},e^{-\beta})+m\alpha+n\beta\\ &=\frac{1}{\beta}\bigg(C^2-\alpha\log t+\frac{t-1}{4}\alpha^2\bigg) -\frac{1}{2}\log t+ \bigg(\rho+\frac{\sqrt{n}\log t}{C}\bigg) \alpha+n\beta+\mathcal{O}_t (n^{-{1}/{6}})\\ &=2C\sqrt{n}-\frac{\rho^2}{2K\sqrt{n}}-\frac{1}{2}\log t+\mathcal{O}_t (n^{-{1/6}}). \end{align*} $$

Substituting this in (4.4) yields the claimed asymptotic formula for $p_t(m,n)$ .

Proof of Corollary 1.4.

Recall the asymptotic formula for $p_t(n)$ from (3.10),

$$ \begin{align*} p_t(n)=\frac{\sqrt{C}}{2\sqrt{\pi t} \,n^{3/4}} \exp (2C\sqrt{n}) (1+\mathcal{O}_t (n^{-{1}/{7}})). \end{align*} $$

By Theorem 1.3,

(4.5) $$ \begin{align} \frac{p_t(m,n)}{p_t(n)}=\frac{1}{\sqrt{2\pi K}n^{1/4}}\exp \bigg(-\frac{\rho_{m,n}^2}{2K\sqrt{n}}\bigg) (1+\mathcal{O}_t (n^{-{1}/{10}})) \end{align} $$

for all m satisfying $\rho _{m,n}=m-{\sqrt {n}(\log t)}/{C}=\mathcal {O}_t(n^{5/18})$ . For a bounded set $X\subset \mathbb {R}$ and an arbitrary $x\in X$ , we take

$$ \begin{align*} m:=\bigg\lfloor\frac{\sqrt{n}\log t}{C}+ \sqrt{K} n^{1/4}x \bigg\rfloor. \end{align*} $$

Note that $ \rho _{m,n}^2=K\sqrt {n }\,x^2+\mathcal {O}_t( n^{1/4})$ uniformly for $x\in X$ . From (4.5),

$$ \begin{align*} \sup_{x\in X}\bigg|\sqrt{K} n^{1/4}\,\mathbb{P}\bigg(\mathrm{Y}_t(n)=\bigg\lfloor\frac{\sqrt{n}\log t}{C}+ \sqrt{K} n^{1/4}x \bigg\rfloor\bigg) -\frac{1}{\sqrt{2\pi}}\, e^{-{x^2}/{2}}\bigg|=\mathcal{O}_t (n^{-{1}/{10}}). \end{align*} $$

This completes the proof.
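The formulas above can be probed numerically. The sketch below computes $p_t(n)$ exactly from the generating function $\prod_{j\geq 1}(1+q^j+\cdots +q^{(t-1)j})$ and compares it with the asymptotic formula (3.10); the value $C=\pi\sqrt{(t-1)/(6t)}$ is assumed here, being the constant for which (3.10) reduces at $t=2$ to the classical asymptotic for partitions into distinct parts.

```python
import math

def p_t_counts(t, N):
    """Number of t-regular partitions of each n <= N, via the
    generating function prod_{j>=1} (1 + q^j + ... + q^{(t-1)j})."""
    coeffs = [1] + [0] * N
    for j in range(1, N + 1):
        new = coeffs[:]
        for mult in range(1, t):          # part j appears fewer than t times
            shift = mult * j
            for n in range(shift, N + 1):
                new[n] += coeffs[n - shift]
        coeffs = new
    return coeffs

t, N = 4, 2000
C = math.pi * math.sqrt((t - 1) / (6 * t))   # assumed value of the constant C
exact = p_t_counts(t, N)[N]
asym = math.sqrt(C) / (2 * math.sqrt(math.pi * t) * N ** 0.75) * math.exp(2 * C * math.sqrt(N))
print(exact / asym)
```

For $t=4$ and $n=2000$ , the printed ratio is close to $1$ , consistent with the relative error $\mathcal{O}_t(n^{-1/7})$ in (3.10).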

Acknowledgement

We thank the referee for a careful reading and for helpful comments that improved the paper.

Footnotes

The second author was partially supported by the AMS–Simons Travel Grant and the SEC Faculty Grant.

References

Andrews, G. E., Askey, R. and Roy, R., Special Functions, Encyclopedia of Mathematics and its Applications, 71 (Cambridge University Press, Cambridge, 1999), 102–106.
Ballantine, C., Burson, H., Craig, W., Folsom, A. and Wen, B., ‘Hook length biases and general linear partition inequalities’, Res. Math. Sci. 10(4) (2023), Article no. 41.
Bringmann, K., Jennings-Shaffer, C. and Mahlburg, K., ‘On a Tauberian theorem of Ingham and Euler–Maclaurin summation’, Ramanujan J. 61 (2023), 55–86.
Curtiss, J., ‘A note on the theory of moment generating functions’, Ann. Math. Stat. 13 (1942), 430–433.
Erdős, P. and Lehner, J., ‘The distribution of the number of summands in the partitions of a positive integer’, Duke Math. J. 8 (1941), 335–345.
Griffin, M., Ono, K. and Tsai, W.-L., ‘Distributions of hook lengths in integer partitions’, Proc. Amer. Math. Soc. Ser. B 11(38) (2024), 422–435.
Hagis, P. Jr., ‘Partitions with a restriction on the multiplicity of the summands’, Trans. Amer. Math. Soc. 155 (1971), 375–384.
Hwang, H.-K., ‘Limit theorems for the number of summands in integer partitions’, J. Combin. Theory Ser. A 96 (2001), 89–126.
Mukherjea, A., Rao, M. and Suen, S., ‘A note on moment generating functions’, Statist. Probab. Lett. 76 (2006), 1185–1189.
Mutafchiev, L. R., ‘On the maximal multiplicity of parts in a random integer partition’, Ramanujan J. 9(3) (2005), 305–316.
Ono, K., ‘Partitions into distinct parts and elliptic curves’, J. Combin. Theory Ser. A 82(2) (1998), 193–201.
Ralaivaosaona, D., ‘A phase transition in the distribution of the length of integer partitions’, Discrete Math. Theor. Comput. Sci. AQ (2012), 265–281.
Szekeres, G., ‘An asymptotic formula in the theory of partitions, II’, Q. J. Math. 4 (1953), 96–111.
Szekeres, G., ‘Asymptotic distributions of the number and size of parts in unequal partitions’, Bull. Aust. Math. Soc. 36 (1987), 89–97.
Zagier, D., ‘The Mellin transform and other useful analytic techniques’, in: Quantum Field Theory I: Basics in Mathematics and Physics (ed. Zeidler, E.) (Springer, Berlin–Heidelberg, 2006), 307–323.
Zagier, D., ‘The dilogarithm function’, in: Frontiers in Number Theory, Physics, and Geometry II (eds. Cartier, P., Moussa, P., Julia, B. and Vanhove, P.) (Springer, Berlin–Heidelberg, 2007), 3–65.
Figure 1 $p_4(m,1000)$ and asymptotics for the cumulative distribution for $n=1000$.