1 Introduction
Consider $\mathcal {X}\subset [0,1]$ and $f:\mathcal {X}\to [0,1]$ a topologically transitive piecewise expanding Markov map equipped with an ergodic invariant probability measure $\mu $. We want to study the cover times for points in the repeller $\Lambda $, that is, given $x\in \Lambda $, let
 $$ \begin{align*}\tau_r(x):=\inf\{k:\text{for all } y\in\Lambda, \text{there exists } j\le k: d(f^j(x),y)<r\}.\end{align*} $$
The first quantitative result on expected cover times was obtained for Brownian motions in [Reference MatthewsM]. It was generalised in the recent works [Reference Bárány, Jurga and KolossváryBJK, Reference Jurga and ToddJT] to chaos games associated to iterated function systems and to one-dimensional dynamical systems. In [Reference Bárány, Jurga and KolossváryBJK], an almost sure convergence for $-\log \tau _r/\log r$ was also demonstrated, assuming the invariant measure $\mu $ supported on the attractor of the iterated function system (IFS) satisfies rapid mixing conditions. All results suggest that the asymptotic behaviour of $\tau _r$ is crucially linked to the Minkowski dimensions: for each $r>0$, let $M_\mu (r):=\min _{x\in \mathrm {supp}(\mu )}\mu (B(x,r))$. The upper and lower Minkowski dimensions of $\mu $ are defined respectively by
 $$ \begin{align*} \overline{{\mathrm{dim}}}_M(\mu):=\limsup_{r\to0}\frac{\log M_\mu(r)}{\log r},\quad\underline{{\mathrm{dim}}}_M(\mu):=\liminf_{r\to0}\frac{\log M_\mu(r)}{\log r}. \end{align*} $$
We write ${\mathrm {dim}}_M(\mu )$ when the two quantities coincide. These dimension-like quantities reflect the decay rate of the minimal $\mu $-measure of balls at scale $r$, and they are closely related to the box-counting dimension of the ambient space (see [Reference Falconer, Fraser and KäemäkiFFK] for more details). In addition, the Minkowski dimensions of $\mu $ govern the asymptotic behaviour of hitting times associated to the balls which are most ‘unlikely’ to be visited at small scales. Our first result below gives an almost sure asymptotic growth rate of cover times in terms of $\overline {{\mathrm {dim}}}_M(\mu )$ and $\underline {{\mathrm {dim}}}_M(\mu )$.
Theorem 1.1. Let $(f,\mu )$ be a probability-preserving system where f is topologically transitive, Markov and piecewise expanding. If $\overline {\mathrm {dim}}_M(\mu )<\infty $, then for $\mu $-almost every (a.e.) x in the repeller,
 $$ \begin{align*} \limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\ge\overline{\mathrm{dim}}_M(\mu),\quad\liminf_{r\to0}\frac{\log \tau_r(x)}{-\log r}\ge\underline{{\mathrm{dim}}}_M(\mu). \end{align*} $$
If $(f,\mu )$ is exponentially $\psi $-mixing, then for $\mu $-a.e. $x\in \Lambda $, the inequalities above are improved to
 $$ \begin{align*} \limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}=\overline{\mathrm{dim}}_M(\mu),\quad\liminf_{r\to0}\frac{\log \tau_r(x)}{-\log r}=\underline{{\mathrm{dim}}}_M(\mu). \end{align*} $$
In particular, the hypothesis $\overline {\mathrm {dim}}_M(\mu )<\infty $ holds whenever the invariant measure in question is doubling.
Remark 1.2. We remark that systems with finite Minkowski dimensions, or at least $\overline {\mathrm {dim}}_M(\mu )<\infty $, are fairly common. In particular, if $\mu $ is doubling, that is, there exists a constant $D>0$ such that for all $x\in \mathrm {supp}(\mu )$ and $r>0$, $D\mu (B(x,r))\ge \mu (B(x,2r))>0$, then $\overline {\mathrm {dim}}_M(\mu )<\infty $. This can be seen from the following: for each $n\in \mathbb N$, let $x_n\in \mathrm {supp}(\mu )$ be such that $\mu (B(x_n,2^{-n}))=M_\mu (2^{-n})$. By the doubling property,
 $$ \begin{align*}M_\mu(2^{-n})&=\mu(B(x_n,2^{-n}))\ge D^{-1}\mu(B(x_n,2^{-n+1}))\\ &\ge D^{-1}M_\mu(2^{-n+1})=D^{-1}\mu(B(x_{n-1},2^{-n+1})).\end{align*} $$
Iterating this, one gets $M_\mu (2^{-n})\ge D^{-n+1}M_\mu (1/2)$. In other words,
 $$ \begin{align*} \frac{\log M_\mu(2^{-n})}{-n\log 2}\le \frac{-(n-1)\log D+\log M_\mu(1/2)}{-n\log 2}.\end{align*} $$
As for all $r>0$, there is a unique $n\in \mathbb N$ such that $2^{-n}<r\le 2^{-n+1}$ and $\lim _{n\to \infty }({\log 2^{-n}}/ {\log 2^{-n+1}})=1$,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log M_\mu(r)}{\log r}=\limsup_{n\to\infty}\frac{\log M_\mu(2^{-n})}{-n\log 2}\le \frac{\log D}{\log 2}<\infty.\end{align*} $$
However, the Minkowski dimensions are not always finite, owing to non-doubling behaviour or to more extreme decay of $M_\mu (r)$ (see Example 3.2). Hence, we need a new notion of dimension, invariant under rescaling (replacing $M_\mu (r)$ by $M_\mu (cr)$ for any $c>0$ does not change the limit), to capture such decay rates in $r$.
Definition 1.3. Define the upper and lower stretched Minkowski dimensions by
 $$ \begin{align*}\overline{{\mathrm{dim}}}_M^s(\mu):=\limsup_{r\to0}\frac{\log|\log M_\mu(r)|}{-\log r},\quad \underline{{\mathrm{dim}}}_M^s(\mu):=\liminf_{r\to0}\frac{\log |\log M_\mu(r)|}{-\log r}.\end{align*} $$
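As a quick illustration (ours, not from the text; the model decay rate $M_\mu(r)=e^{-r^{-s}}$ is an assumption), a measure whose minimal ball measure decays at a stretched-exponential rate has infinite Minkowski dimension but finite stretched Minkowski dimension:

```latex
% Model computation: suppose M_mu(r) = exp(-r^{-s}) for some s > 0.
\[
\frac{\log|\log M_\mu(r)|}{-\log r}
=\frac{\log r^{-s}}{-\log r}
=\frac{-s\log r}{-\log r}=s
\quad\Longrightarrow\quad {\mathrm{dim}}_M^s(\mu)=s,
\]
\[
\text{while}\quad
\frac{\log M_\mu(r)}{\log r}
=\frac{r^{-s}}{-\log r}\xrightarrow[r\to0]{}\infty
\quad\Longrightarrow\quad {\mathrm{dim}}_M(\mu)=\infty.
\]
```

Note also that replacing $r$ by $cr$ only shifts $\log|\log M_\mu(cr)|=-s\log r-s\log c$ by a constant, so the limits are unchanged, in line with the rescaling invariance discussed above.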
Those quantities should be of independent interest. Our second theorem below deals with almost sure cover times for systems in which $M_\mu (r)$ decays at stretched-exponential rates.
Theorem 1.4. Let $(f,\mu )$ be an ergodic probability-preserving system where f is topologically transitive, Markov and piecewise expanding. If $\overline {\mathrm {dim}}_M(\mu )=\infty $, but $0<\underline {\mathrm {dim}}_M^s(\mu ),\overline {\mathrm {dim}}_M^s(\mu )<\infty $, then for $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align}\liminf_{r\to0}\frac{\log \log\tau_r(x)}{-\log r}\ge\underline{{\mathrm{dim}}}_M^s(\mu), \quad \limsup_{r\to0}\frac{\log \log \tau_r(x)}{-\log r}\ge\overline{{\mathrm{dim}}}_M^s(\mu).\end{align} $$
If $(f,\mu )$ is exponentially $\psi $-mixing, then for $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align} \liminf_{r\to0}\frac{\log \log\tau_r(x)}{-\log r}=\underline{{\mathrm{dim}}}_M^s(\mu), \quad \limsup_{r\to0}\frac{\log \log\tau_r(x)}{-\log r}=\overline{{\mathrm{dim}}}_M^s(\mu). \end{align} $$
1.1 Layout of the paper
Basic definitions are introduced in §2 and we delay the proofs of the main theorems to §4. Several examples that satisfy Theorems 1.1 and 1.4 will be discussed in §3. In §5, we will also prove that for irrational rotations, which are known to have no mixing behaviour, Theorem 1.1 fails for almost every point when the rotations are of type $\eta $ (see Definition 5.1) for some $\eta>1$. Lastly, in §6, we show that similar results hold for flows under some natural conditions.
2 Setup
Let $\mathcal {A}$ be a finite or countable index set and $\mathcal {P}=\{P_a\}_{a\in \mathcal {A}}$ a collection of subintervals in $[0,1]$ with disjoint interiors covering $\mathcal {X}$. We say $f:\mathcal {X}\rightarrow [0,1]$ is a piecewise expanding Markov map if:
- (1) for any $a\in \mathcal A$, $f_a:=f|_{P_a}$ is continuous and injective, and $f(P_a)$ is a union of elements in $\mathcal P$;
- (2) there is a uniform constant $\gamma>1$ such that for all $a\in \mathcal A$, $|Df_a|\ge \gamma $.
The repeller of f, denoted by $\Lambda $, is the collection of points with all their forward iterates contained in $\mathcal P$, namely
 $$ \begin{align*}\Lambda:=\bigg\{x\in\mathcal{X}:f^k(x)\in\bigcup_{a\in\mathcal A}P_a\text{ for all }k\ge0 \bigg\}.\end{align*} $$
We study the dynamics of $f:\Lambda \to \Lambda $ together with an ergodic invariant probability measure $\mu $ supported on $\Lambda $. There is a shift system associated to f: let M be an $\mathcal {A}\times \mathcal {A}$ matrix such that $M_{ab}=1$ if $f(P_a)\cap P_b\neq \emptyset $ and 0 otherwise. Here, f is topologically transitive if for all $a,b\in \mathcal A$, there exists k such that $M^k_{ab}>0$. Let $\Sigma $ denote the space of all infinite admissible words, that is,
 $$ \begin{align*}\Sigma:=\{{x}=(x_0,x_1,\ldots)\in\mathcal A^{\mathbb N_0}:M_{x_k,x_{k+1}}=1\;\text{ for all } \,k\ge0\}.\end{align*} $$
A natural choice of metric on $\Sigma $ is $d_s(x,y):=2^{-\inf \{j\ge 0:\,x_j\neq y_j\}}$, and we define the projection map $\pi :\Sigma \to \Lambda $ by
 $$ \begin{align*}{x}=\pi(x_0,x_1,\ldots) \quad\text{if and only if } x\in\bigcap_{i=0}^{\infty}f^{-i}P_{x_i}.\end{align*} $$
The dynamics on $\Sigma $ is the left shift $\sigma :\Sigma \to \Sigma $ given by $\sigma (x_0,x_1,\ldots )=(x_1,x_2,\ldots )$; then $\pi $ defines a semi-conjugacy $f\circ \pi =\pi \circ \sigma $. The corresponding symbolic measure $\mu_\Sigma $ of $\mu $ is the lift of $\mu $ through $\pi $, that is, for all Borel-measurable sets $B\in \mathcal B([0,1])$, $\mu_\Sigma (\pi ^{-1}B)=\mu (B)$.
Denote $\mathcal {P}^n:=\bigvee _{j=0}^{n-1}f^{-j}\mathcal {P}$; each $P\in \mathcal P^n$ corresponds to an n-cylinder in $\Sigma $: let ${\Sigma _n\subseteq \mathcal A^n}$ denote all finite words of length n and, for any $\textbf {i}\in \Sigma _n$, the n-cylinder defined by $\textbf {i}$ is
 $$ \begin{align*}[\textbf{i}]=[i_0,\ldots,i_{n-1}]:=\{y\in\Sigma:y_j=i_j,\,j=0,\ldots,n-1\}.\end{align*} $$
Then, $\pi [i_0,i_1,\ldots ,i_{n-1}]=\bigcap _{j=0}^{n-1}f^{-j}P_{i_j}=:P_{\textbf {i}}$. The depth of a cylinder $[\textbf {i}]$ is the length of $\textbf {i}$.
Furthermore, $(f,\mu )$ is required to have the following mixing property.
Definition 2.1. Say $\mu $ is exponentially $\psi $-mixing if there are $C_1,\rho>0$ and a monotone decreasing function $\psi (k)\le C_1e^{-\rho k}$ for all $k\in \mathbb N$, such that the corresponding symbolic measure $\mu_\Sigma $ satisfies: for all $n,k\in \mathbb N$, $\textbf {i}\in \Sigma _n$ and $\textbf {j}\in \Sigma ^*=\bigcup _{l\ge 1}\Sigma _l$,
 $$ \begin{align*}|\mu_\Sigma([\textbf{i}]\cap\sigma^{-(n+k)}[\textbf{j}])-\mu_\Sigma([\textbf{i}])\mu_\Sigma([\textbf{j}])|\le\psi(k)\mu_\Sigma([\textbf{i}])\mu_\Sigma([\textbf{j}]).\end{align*} $$
3 Examples
Theorems 1.1 and 1.4 are applicable to the following systems.
Example 3.1. Finitely branched Gibbs–Markov maps: let f be a topologically transitive piecewise expanding Markov map with $\mathcal A$ finite. Here, f is said to be Gibbs–Markov if for some potential $\phi :\Sigma \to \mathbb R$ which is locally Hölder with respect to the symbolic metric $d_s$, there exist $G>0$ and $P\in \mathbb R$ such that for all $n\in \mathbb N$ and all $x=(x_0,x_1,\ldots )\in \Sigma $, the symbolic measure $\mu_\Sigma $ satisfies
 $$ \begin{align*}G^{-1}\le\frac{\mu_\Sigma([x_0,\ldots,x_{n-1}])}{\exp(-nP+\sum_{j=0}^{n-1}\phi(\sigma^jx))}\le G.\end{align*} $$
For maps of this kind, $|Df|$ is uniformly bounded; thus any ball at scale r can be approximated by finitely many cylinders of the same depth (see for example the proof of [Reference Jurga and ToddJT, Lemma 3.2]), and by the Gibbs property of the symbolic measure, the asymptotic decay rate converges, so ${\mathrm {dim}}_M(\mu )$ exists and is finite. Since Gibbs measures are exponentially $\psi $-mixing (see [Reference BowenBow, Proposition 1.14]), by Theorem 1.1, we have
 $$ \begin{align*}\lim_{r\to0}\frac{\log\tau_r(x)}{-\log r}={\mathrm{dim}}_M(\mu)\end{align*} $$
for $\mu $-a.e. x in the repeller of f.
In the next example, when $r\to 0$ at a polynomial rate, $M_\mu (r)$ decays exponentially; hence, $\overline {\mathrm {dim}}_M(\mu )$ is infinite and the stretched Minkowski dimensions are needed.
Example 3.2. Similar to [Reference Jurga and ToddJT, Example 7.4], consider the following class of infinitely full-branched maps: pick $\kappa>1$ and set $c=\zeta (\kappa )=\sum _{n\in \mathbb N}(1/{n^{\kappa }})$. Let $a_0=0$, $a_n=\sum _{j=1}^n(1/{cj^{\kappa }})$ and define f by
 $$ \begin{align*} \text{ for all }\,n\in\mathbb N,\quad f(x)=cn^\kappa(x-a_{n-1})\text{ for }x\in[a_{n-1},a_{n})=:P_{n}. \end{align*} $$
Then, f is an infinitely full-branched affine map and we can associate this map with a full-shift system on $\mathbb N$: $x=\pi (i_0,i_1,\ldots )$ if for all $j\ge 0$, $f^j(x)\in P_{i_j}$.
Let $\omega>1$ and construct the finite Bernoulli measure $\mu_\Sigma $ on the full shift with weights $\mu_\Sigma ([n])=\omega ^{-n}$ for $n\in \mathbb N$, so the push-forward measure $\mu =\mu_\Sigma \circ \pi ^{-1}$ has $\mu (P_n)=\omega ^{-n}$.
Proposition 3.3. For $(f,\mu)$ defined in the example above, $\overline {{\mathrm {dim}}}_M(\mu )=\infty $, but ${\mathrm {dim}}_M^s(\mu )=1/({\kappa -1})$.
Proof. For each $r>0$, the r-ball of minimum measure is found near $1$. In particular, along the sequence $r_n=({1}/{2c})\sum _{j\ge n}{j^{-\kappa }}\approx {1}/{2c(\kappa -1)n^{\kappa -1}}$, the ball that realises $M_\mu (r_n)$ is contained in $\bigcup _{j=n}^\infty P_j$; hence,
 $$ \begin{align*}\omega^{-n}\le M_\mu(r_n)\le \frac{\omega^{-n}}{1-\omega^{-1}}.\end{align*} $$
Therefore,
 $$ \begin{align*}\overline{{\mathrm{dim}}}_M(\mu)\ge\limsup_{n\to\infty}\frac{n\log \omega}{(\kappa-1)\log n} =\infty,\end{align*} $$
whereas for all n,
 $$ \begin{align*}\frac{\log n+\log(\log\omega+{\log(1-1/\omega)}/{n})}{(\kappa-1)\log n+\log(2c(\kappa-1))}&\le\frac{\log|\log M_\mu(r_n)|}{-\log r_n}\\&\le \frac{\log n+\log\log \omega}{(\kappa-1)\log n+\log (2c(\kappa-1))}.\end{align*} $$
As for all $r>0$, there is a unique $n\in \mathbb N$ such that $r_{n+1}\le r <r_n$, while $\lim _{n\to \infty }({\log r_{n+1}}/{\log r_n})=1$, we conclude that ${\mathrm {dim}}^{s}_M(\mu )={1}/({\kappa -1})$.
As in [Reference Jurga and ToddJT, Example 7.4], it is very difficult for the system to cover small neighbourhoods of 1, so Theorem 1.1 says $\limsup _{r\to 0}({\log \tau _r(x)}/{-\log r})\ge \overline {\mathrm {dim}}_M(\mu )=\infty $, but since the symbolic measure is Bernoulli and hence $\psi $-mixing, Theorem 1.4 asserts that
 $$ \begin{align*}\lim_{r\to0}\frac{\log\log\tau_r(x)}{-\log r}=\frac1{\kappa-1}\quad \mu\text{-almost everywhere}.\end{align*} $$
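A quick numerical sanity check of Proposition 3.3 (our illustration, not part of the proof; the parameters $\kappa=3$, $\omega=2$ and the truncation of $\zeta(\kappa)$ are arbitrary choices): using the lower bound $M_\mu(r_n)\ge\omega^{-n}$, the ratio $\log|\log M_\mu(r_n)|/(-\log r_n)$ climbs slowly towards $1/(\kappa-1)$.

```python
import math

# Illustrative parameters (assumptions, not from the paper): kappa > 1, omega > 1.
kappa, omega = 3.0, 2.0
c = sum(1.0 / j**kappa for j in range(1, 100001))  # zeta(kappa), truncated

def ratio(n: int) -> float:
    """log|log M_mu(r_n)| / (-log r_n), using the lower bound M_mu(r_n) >= omega^(-n)
    and the asymptotic r_n ~ 1/(2c(kappa-1)n^(kappa-1)) from the proof."""
    r_n = 1.0 / (2.0 * c * (kappa - 1.0) * n ** (kappa - 1.0))
    log_m = -n * math.log(omega)  # log of the lower bound for M_mu(r_n)
    return math.log(abs(log_m)) / (-math.log(r_n))

# The limit 1/(kappa-1) = 0.5 is approached only at a logarithmic speed.
for n in (10, 1000, 100000):
    print(n, ratio(n))
```

The slow convergence visible here reflects the $\log\log$ scaling in Theorem 1.4: corrections of size $\log\log\omega/\log n$ vanish only logarithmically.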
4 Proof of Theorem 1.4
The proofs in this section are adapted from those of [Reference Bárány, Jurga and KolossváryBJK, Propositions 3.1 and 3.2]. We will only demonstrate the proofs for Theorem 1.4, that is, for asymptotics determined by the stretched Minkowski dimensions; the proofs for Theorem 1.1 are obtained by replacing all stretched exponential sequences in the proofs below by exponential ones, for example, for a given constant $s\in \mathbb R$, $e^{\pm n^{s}}$ will be replaced by $2^{\pm ns}$.
Assuming the inequalities in (1.1), we first prove (1.2), which requires the exponentially $\psi $-mixing condition.
Remark 4.1. Assuming the conditions of Theorem 1.4, we will prove that the statements hold along the subsequence $r_n=n^{-1}$; note that for each $r>0$, there is a unique $n\in \mathbb N$ with $r_{n+1}<r\le r_n$, while $\lim _{n\to \infty }({\log r_{n+1}}/{\log r_n})=1$ (if $\overline {{\mathrm {dim}}}_M(\mu )$ or $\underline {{\mathrm {dim}}}_M(\mu )$ is finite, we choose $r_n=2^{-n}$ instead). Since $\log \tau _{r}(x)$ is increasing as $r\to 0$,
 $$ \begin{align*}\limsup_{n\to\infty}\frac{\log\log\tau_{r_n}(x)}{-\log r_n}=\limsup_{r\to0}\frac{\log\log\tau_r(x)}{-\log r},\end{align*} $$
and similarly for liminfs.
4.1 Proof of (1.2)
Proposition 4.2. Suppose $(f,\mu )$ is exponentially $\psi $-mixing and the upper stretched Minkowski dimension $\overline {\mathrm {dim}}_M^s(\mu )$ is finite. Then for $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align*}\limsup_{r\rightarrow0}\frac{\log \log\tau_{r}(x)}{-\log r}\le \overline{\mathrm{dim}}^s_M(\mu).\end{align*} $$
Proof. Let $\varepsilon>0$ and, for simplicity, denote $\overline \alpha :=\overline {{\mathrm {dim}}}^s_M(\mu )$.
For any finite k-word $\textbf {i}=x_0,\ldots ,x_{k-1}\in \Sigma _k$, let $\textbf {i}^-=x_0,\ldots ,x_{k-2}$, that is, $\textbf {i}$ with the last letter dropped. Recall that for each $\textbf {i}\in \Sigma ^*$, $P_{\textbf {i}}=\pi [\textbf {i}]$, and we define
 $$ \begin{align*}\mathcal W_r:=\{\textbf{i}\in\Sigma^*:\,\text{diam}(P_{\textbf{i}})\le r<\text{diam}(P_{\textbf{i}^-})\}.\end{align*} $$
By expansion, for each $n\in \mathbb N$, the lengths of the words in $\mathcal W_{n^{-1}}$ are bounded from above; hence, we can define
 $$ \begin{align*}L(n):=\frac{\log n}{\log\gamma}+1 \ge\max\{|\textbf{i}|:\textbf{i}\in \mathcal W_{n^{-1}}\}.\end{align*} $$
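The bound defining $L(n)$ can be checked in one line from the uniform expansion $|Df|\ge\gamma$ (a short verification of ours, using that $f^{|\mathbf i|}$ maps $P_{\mathbf i}$ into $[0,1]$, so $\operatorname{diam}(P_{\mathbf i})\le\gamma^{-|\mathbf i|}$):

```latex
\[
\mathbf i\in\mathcal W_{n^{-1}}
\ \Longrightarrow\
\frac1n<\operatorname{diam}(P_{\mathbf i^-})\le\gamma^{-(|\mathbf i|-1)}
\ \Longrightarrow\
|\mathbf i|-1<\frac{\log n}{\log\gamma}
\ \Longrightarrow\
|\mathbf i|<L(n).
\]
```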
Given $y\in [0,1]$ and $r>0$ such that $B(y,r)\subset \mathrm {supp}(\mu )$, define the corresponding symbolic balls by
 $$ \begin{align*}B_\Sigma(y,r):=\bigcup\{[\textbf{i}]:\textbf{i}\in\mathcal W_r,\ P_{\textbf{i}}\cap B(y,r)\neq\emptyset\}.\end{align*} $$
Note that if $x\in P_{\textbf {i}}$ for some $\textbf {i}\in \mathcal W_r$ with $P_{\textbf {i}}\cap B(y,r)\neq \emptyset $, then $d(x,y)\le r+\text {diam}(P_{\textbf {i}})\le 2r$; hence
 $$ \begin{align*}\pi^{-1}B(y,r)\subseteq B_\Sigma(y,r)\subseteq\pi^{-1}B(y,2r).\end{align*} $$
Let $\mathcal Q_{n}$ be a cover of $\Lambda $ with balls of radius $r_n=1/(2n)$. Denote the collection of their centres by $\mathcal Y_n$, with $\#\mathcal Q_n=\#\mathcal Y_n\le n$. Let $\tau (\mathcal Q_n,x)$ be the minimum time for the orbit of x to have visited each element of $\mathcal Q_n$ at least once,
 $$ \begin{align*}\tau(\mathcal Q_n,x):=\min\{k\in\mathbb N:\text{for all }Q\in\mathcal Q_n,\text{there exists }0\le j\le k: f^j(x)\in Q\}.\end{align*} $$
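For intuition (a sketch of ours, not the paper's argument), when the system mixes fast, the orbit visits the $n$ cells of $\mathcal Q_n$ roughly like i.i.d. uniform samples, so $\tau(\mathcal Q_n,x)$ behaves like a coupon-collector time of order $n\log n$; the i.i.d. sampling below is an assumption standing in for the orbit.

```python
import math
import random

# Coupon-collector sketch of the cover time tau(Q_n, x): replace the orbit by
# i.i.d. uniform cell visits (a modelling assumption for a fast-mixing system
# with dim_M(mu) = 1) and count steps until every cell has been seen.
def coupon_cover_time(n: int, rng: random.Random) -> int:
    seen = [False] * n
    unseen = n
    steps = 0
    while unseen > 0:
        cell = rng.randrange(n)  # one "visit" of the orbit to a cell of Q_n
        if not seen[cell]:
            seen[cell] = True
            unseen -= 1
        steps += 1
    return steps

rng = random.Random(0)
for n in (10, 100, 1000):
    avg = sum(coupon_cover_time(n, rng) for _ in range(20)) / 20
    # avg is close to n * H_n ~ n log n, so log(avg)/log(n) -> 1,
    # matching Theorem 1.1 for an absolutely continuous measure.
    print(n, avg)
```

The extra $\log n$ factor disappears in the exponent, which is why only the growth rate $\log\tau_{1/n}/\log n$ is pinned down by the dimension.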
Then, $\tau _{1/n}(x)\le \tau (\mathcal Q_n,x)$ for all n and all x, since for all $y\in \Lambda $, there are $Q\in \mathcal Q_n$ and $j\le \tau (\mathcal Q_n,x)$ such that $f^j(x)\in Q$ and $y\in Q$; hence, $d(f^j(x),y)\le 1/n.$ For each $k\in \mathbb N$, set $L'(k)=\lceil L(k)+({1}/{\rho })(k^{\overline \alpha +\varepsilon }+\log C_1)\rceil $, where $C_1$, $\rho $ were given in Definition 2.1 and $\lceil t\rceil $ denotes the least integer no smaller than t. We have
 $$ \begin{align} \begin{aligned} &\mu(x:\tau_{1/n}(x)>e^{n^{\overline\alpha+\varepsilon}}L'(4n))\le \mu(x:\tau(\mathcal Q_n,x)>e^{n^{\overline\alpha+\varepsilon}}L'(4n))\\ &\quad=\mu(x:\text{there exists } y\in\mathcal Y_n:f^j(x)\not\in B(y,{1}/{2n})\;\text{for all } j\le e^{n^{\overline\alpha+\varepsilon}}L'({4n}))\\ &\quad\le \mu(x:\text{there exists } y\in\mathcal Y_n:f^{jL'(4n)}(x)\not \in B(y,1/{2n})\;\text{for all } j\le e^{n^{\overline\alpha+\varepsilon}})\\ &\quad=\mu\bigg(\bigcup_{y\in\mathcal Y_n}\bigcap_{j=1}^{e^{n^{\overline\alpha+\varepsilon}}}(f^{-jL'(4n)}B(y,1/2n))^c\bigg)\le \sum_{y\in\mathcal Y_n}\mu\bigg(\bigcap_{j=1}^{e^{n^{\overline\alpha+\varepsilon}}}(f^{-jL'( 4n)} B(y,1/2n))^c\bigg). \end{aligned} \end{align} $$
 A cylinder $[\textbf {i}]$ in $\mathcal Q_n$ has depth at most $L(4n)$; then by our choice of $L'(4n)$ and the exponentially $\psi $-mixing property of $(f,\mu )$,
 $$ \begin{align} \mu\bigg(\bigcap_{j=1}^{e^{n^{\overline\alpha+\varepsilon}}}(f^{-jL'(4n)}[\textbf{i}])^c\bigg)\le (1+e^{-n^{\overline\alpha+\varepsilon}})^{e^{n^{\overline\alpha+\varepsilon}}}(1-\mu([\textbf{i}]))^{e^{n^{\overline\alpha+\varepsilon}}}. \end{align} $$
Similar calculations hold for $\mu (\bigcap _{j=1}^{e^{n^{\overline \alpha +\varepsilon }}}(f^{-jL'( 4n)} B(y,1/2n))^c)$ since the complement of $B(y,1/2n)$ can be written as a countable union of cylinders of depths no greater than $L(4n)$.
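To make the grid cover time $\tau(\mathcal Q_n,x)$ in the estimate above concrete, it can be simulated for a toy example. The following sketch is an illustration only, not part of the proof: the doubling map $f(x)=2x \bmod 1$ and the symbolic simulation of a Lebesgue-typical point (whose binary digits are i.i.d. fair bits) are assumptions of this example.

```python
import random

def grid_cover_time_doubling(n, seed=0, max_iter=10**6):
    """Cover time of Q_n = {[j/2^n, (j+1)/2^n)} for the doubling map
    f(x) = 2x mod 1, simulated symbolically: the Q_n-cell of f^k(x)
    is the window of binary digits k+1, ..., k+n of x."""
    rng = random.Random(seed)
    window = rng.getrandbits(n)              # cell of x itself
    unseen = set(range(2 ** n)) - {window}
    for k in range(1, max_iter):
        # shift the digit window left by one and append the next digit
        window = ((window << 1) & (2 ** n - 1)) | rng.getrandbits(1)
        unseen.discard(window)
        if not unseen:
            return k
    return None

t = grid_cover_time_doubling(4)
assert t is not None and t >= 2 ** 4 - 1   # at least one step per new cell
```

Since at most one new cell is visited per step, the cover time is always at least $2^n-1$; the simulation shows the coupon-collector-like overshoot beyond that trivial bound.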
 As $\mu (B(z,r))\ge M_\mu (r)$ for all $z$ and all $r>0$, it remains to bound $M_\mu (1/4n)$ from below.
By definition of $\overline \alpha $, for all $n$ large enough that $(\varepsilon /4)\log n\ge (\overline \alpha +\varepsilon /4)\log 4$, we have
 $$ \begin{align*} \log\bigg(-\log M_\mu\bigg(\frac1{4n}\bigg)\bigg)\le (\overline{\alpha}+\varepsilon/4)(\log 4n)\le (\overline\alpha+\varepsilon/2)\log n. \end{align*} $$
So for all $y\in \mathrm {supp}(\mu )$ and all $n$ large enough,
 $$ \begin{align*} \mu\bigg(B\bigg(y,\frac1{4n}\bigg)\bigg)\ge e^{-n^{\overline\alpha+\varepsilon/2}}\ge \frac{e^{n^{\varepsilon/2}}}{e^{n^{\overline\alpha+\varepsilon}}}. \end{align*} $$
As $(1+u/k)^{k}\to e^u$ as $k\to\infty$ for all $u\in \mathbb R$, combining (4.1) and (4.2), for some uniform constant $C_2>0$,
 $$ \begin{align*} &\mu(x:\tau_{1/n}(x)>e^{n^{\overline\alpha+\varepsilon}}L'(4n))\le (1+e^{-n^{\overline\alpha+\varepsilon}})^{e^{n^{\overline\alpha+\varepsilon}}}\sum_{y\in\mathcal{Y}_{n}}(1- e^{-n^{\overline\alpha+\varepsilon/2}})^{e^{n^{\overline\alpha+\varepsilon}}}\\ &\quad\le (1+e^{-n^{\overline\alpha+\varepsilon}})^{e^{n^{\overline\alpha+\varepsilon}}}n\bigg(1-\frac{e^{n^{\varepsilon/2}}}{e^{n^{\overline\alpha+\varepsilon}}}\bigg)^{e^{n^{\overline\alpha+\varepsilon}}}\le C_2\exp(\log n-{e^{n^{\varepsilon/2}}}). \end{align*} $$
The last term is clearly summable over $n$; then by Borel–Cantelli, for $\mu $-a.e. $x$ and all $n$ large enough, $\tau _{1/n}(x)\le e^{n^{\overline \alpha +\varepsilon }}L'(4n)$. Since $\log L'(4n)\approx (\overline \alpha +\varepsilon )\log n\ll n^{\overline \alpha +\varepsilon }$, we have for $\mu $-a.e. ${x\in \Lambda} $,
 $$ \begin{align*}\limsup_{n\to\infty}\frac{\log\log\tau_{1/n}(x)}{\log n}\le\limsup_{n\to\infty}\frac{\log\log (e^{n^{\overline\alpha+\varepsilon}}L'(4n))}{\log n}\le \overline{\alpha}+\varepsilon.\end{align*} $$
By Remark 4.1, this upper bound for the $\limsup $ holds for all sequences decreasing to 0, and as $\varepsilon>0$ was arbitrary, we can conclude that for $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log\log\tau_r(x)}{-\log r}=\limsup_{n\to\infty}\frac{\log\log\tau_{1/n}(x)}{\log n}\le \overline{\alpha}.\end{align*} $$
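As a quick numerical aside (an illustration only, not part of the argument), the elementary approximation $(1+u/k)^{k}\approx e^u$ for large $k$, used in the proof above to absorb the mixing factor, can be checked directly:

```python
from math import exp

# (1 + u/k)^k -> e^u as k -> infinity; the relative error is of
# order u^2 / (2k), so for k = 10^6 it is far below 10^-4.
for u in (-1.0, 0.5, 2.0):
    k = 10 ** 6
    approx = (1.0 + u / k) ** k
    assert abs(approx - exp(u)) < 1e-4 * exp(u)
```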
Proposition 4.3. Suppose $(f,\mu )$ is exponentially $\psi $-mixing and the lower stretched Minkowski dimension of $\mu $, $\underline {\mathrm {dim}}_M^s(\mu )$, is finite. Then for $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log\log \tau_r(x)}{-\log r}\le \underline{{\mathrm{dim}}}_M^s(\mu).\end{align*} $$
Proof. Again, for simplicity, denote $\underline \alpha :=\underline {{\mathrm {dim}}}^s_M(\mu )$. Let $\varepsilon>0$; by the definition of the liminf, there is a subsequence $\{n_k\}_k\to \infty $ such that for all $k$,
 $$ \begin{align*}\frac{\log (-\log M_\mu(1/n_k))}{\log n_k }\le \underline{\alpha}+\varepsilon,\end{align*} $$
then repeating the proof of Proposition 4.2 with $n$ replaced by $n_k$ everywhere, one gets that for $\mu $-a.e. $x$,
 $$ \begin{align*}\liminf_{k\to\infty}\frac{\log\log\tau_{1/n_k}(x)}{\log n_k}\le\underline{\alpha}+\varepsilon.\end{align*} $$
Again sending $\varepsilon \to 0$ and using the fact that the liminf over the entire sequence is no greater than the liminf along any subsequence, the proposition is proved.
4.2 Proof of the inequalities (1.1)
Proposition 4.4. For $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log\log \tau_{r}(x)}{-\log r}\ge\underline{\mathrm{dim}}^s_M(\mu).\end{align*} $$
Proof. We continue to use the notation $\underline \alpha =\underline {\mathrm {dim}}^s_M(\mu )$. Let $\varepsilon>0$ be arbitrary; by the definition of $\underline \alpha $, for all large $n$, there exists $y_n\in \mathrm {supp} (\mu )$ such that $\mu (B(y_n,1/n))\le e^{-n^{\underline \alpha -\varepsilon }}$. Let
 $$ \begin{align*}T(x,y,r):=\inf\{j\ge0:f^j(x)\in B(y,r)\},\end{align*} $$
so for all $n\in \mathbb N$ and all $x$, $\tau _{1/n}(x)\ge T(x,y_n,1/n)$. Then, by invariance,
 $$ \begin{align*} &\mu(x:\tau_{1/n}(x)< e^{n^{\underline\alpha-\varepsilon}}/n^2)\le \mu(x:T(x,y_n,1/n)< e^{n^{\underline\alpha-\varepsilon}}/n^2)\\ &\quad=\mu(x:\text{there exists }\,0\le j< e^{n^{\underline\alpha-\varepsilon}}/n^2:\,f^j(x)\in B(y_n,1/n))\\&\quad\le \sum_{j=0}^{ e^{n^{\underline\alpha-\varepsilon}}/n^2-1 }\mu(x:f^j(x)\in B(y_n,1/n))\\ &\quad=\sum_{j=0}^{e^{n^{\underline\alpha-\varepsilon}}/n^2-1}\mu\bigg(f^{-j}B\bigg(y_n,\frac1n\bigg)\bigg)\le\frac{ e^{n^{\underline\alpha-\varepsilon}}}{n^2} e^{-n^{\underline\alpha-\varepsilon}}=\frac1{n^2}, \end{align*} $$
which is summable. By Borel–Cantelli, since $2\log n\ll n^{\underline \alpha -\varepsilon }$, for $\mu $-a.e. $x$,
 $$ \begin{align*}\liminf_{n\to\infty}\frac{\log\log \tau_{1/n}(x)}{\log n}\ge\underline\alpha-\varepsilon.\end{align*} $$
Since $\varepsilon>0$ is arbitrarily small, the proposition is proved.
Similar to Propositions 4.2 and 4.3, we get the following proposition.
Proposition 4.5. For $\mu $-a.e. $x\in \Lambda $,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log\log \tau_r(x)}{-\log r}\ge\overline{{\mathrm{dim}}}_M^s(\mu).\end{align*} $$
Proof. Let $\varepsilon>0$; then by the definition of the limsup, there exists a subsequence $\{n_k\}_k\to \infty $ such that for all $k$,
 $$ \begin{align*}\frac{\log (-\log M_\mu(1/n_k))}{\log n_k}\ge\overline{\alpha}-\varepsilon.\end{align*} $$
Then, repeating the proof of Proposition 4.4 along $\{n_k\}_k$, one gets that for $\mu $-a.e. $x$:
-a.e. x: 
$$ \begin{align*}\limsup_{k\to\infty}\frac{\log\log\tau_{1/n_k}(x)}{\log n_k}\ge \overline{\alpha}-\varepsilon.\end{align*} $$
As $\varepsilon $ can be arbitrarily small,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log\log\tau_r(x)}{-\log r}\ge\limsup_{k\to\infty}\frac{\log\log\tau_{1/n_k}(x)}{\log n_k}\ge\overline{\alpha}.\end{align*} $$
5 Irrational rotations
 The proof of (1.2) requires an exponential $\psi $-mixing rate, which is a strong mixing condition, and it is natural to ask whether the asymptotic growth in Theorem 1.4 remains the same under weaker mixing conditions, for example, exponential $\phi $-mixing and $\alpha $-mixing, or even polynomial $\psi $-mixing. Although these questions are unresolved, in this section, we will show that the limsup and liminf of the asymptotic growth rate can differ if the system is not mixing at all.
 Let $\theta \in (0,1)$ be an irrational number and define $T(x)=T_\theta (x)=x+\theta \text { (mod 1)}$. Denote the one-dimensional Lebesgue measure on $[0,1)$ by $\mu $; then $(T,\mu )$ is an ergodic probability-preserving system with ${\mathrm {dim}}_M(\mu )=1$.
Definition 5.1. For a given irrational number $\theta $, the type of $T_\theta $ is given by the following number:
 $$ \begin{align*}\eta=\eta(\theta):=\sup\Big\{\beta:\liminf_{n\to\infty}n^{\beta}\|n\theta\|=0\Big\},\end{align*} $$
where for every $r\in \mathbb R$, $\|r\|$ denotes the distance from $r$ to the nearest integer.
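As a numerical illustration of this definition (an assumption-laden sketch: the golden-mean rotation is chosen as the example, and the threshold 0.2 is empirical rather than a theoretical constant), one can check that $n\|n\theta \|$ stays bounded away from zero for the golden mean, consistent with $\eta (\theta )=1$:

```python
from math import sqrt

def dist_to_int(r):
    """||r||: the distance from r to the nearest integer."""
    f = r % 1.0
    return min(f, 1.0 - f)

# The golden mean theta = (sqrt(5)-1)/2 is badly approximable:
# n * ||n*theta|| is bounded below (its liminf equals 1/sqrt(5) ~ 0.447),
# so liminf n^beta ||n theta|| = 0 only for beta <= 1, i.e. eta = 1.
theta = (sqrt(5) - 1) / 2
vals = [n * dist_to_int(n * theta) for n in range(1, 2000)]
assert min(vals) > 0.2
```

For a Liouville number, by contrast, $q\|q\theta \|$ dips arbitrarily close to 0 along the continued fraction denominators, which is what drives $\eta =\infty $.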
Remark 5.2. (See [Reference KhintchineK]) For every irrational $\theta \in (0,1)$, $\eta (\theta )\ge 1$, and $\eta (\theta )=1$ for almost every $\theta $, but there exist irrational numbers with $\eta (\theta )\in (1,\infty ]$, for example, the Liouville numbers.
 For any irrational number $\theta \in (0,1)$, there is a unique continued fraction expansion
 $$ \begin{align*}\theta=[a_1,a_2,\ldots]:=\frac{1}{a_1+\frac{1}{a_2+\cdots}},\end{align*} $$
where $a_i\ge 1$ for all $i\ge 1$. Set $p_0=0$ and $q_0=1$, and for $i\ge 1$, choose $p_i,q_i\in \mathbb N$ coprime such that
 $$ \begin{align*}\frac{p_i}{q_i}=[a_1,\ldots,a_i]=\frac1{a_1+\frac1{\cdots\frac1{a_i}}}.\end{align*} $$
Definition 5.3. The term $a_i$ is called the $i$th partial quotient and $p_i/q_i$ the $i$th convergent. In particular (see [Reference KhintchineK]),
 $$ \begin{align*}\eta(\theta)=\limsup_{n\to\infty}\frac{\log q_{n+1}}{\log q_n}.\end{align*} $$
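The convergents are computable via the standard recursions $p_i=a_ip_{i-1}+p_{i-2}$ and $q_i=a_iq_{i-1}+q_{i-2}$. The sketch below is an illustration only; the choice $a_i\equiv 1$, giving the golden mean, is an assumption of the example, for which $\log q_{i+1}/\log q_i\to 1$ in accordance with $\eta (\theta )=1$:

```python
def convergents(a):
    """Convergents p_i/q_i of [a_1, a_2, ...], via the recursions
    p_i = a_i*p_{i-1} + p_{i-2}, q_i = a_i*q_{i-1} + q_{i-2},
    with p_0 = 0, q_0 = 1."""
    p_prev, q_prev = 1, 0        # conventional p_{-1}, q_{-1}
    p, q = 0, 1                  # p_0, q_0
    out = []
    for ai in a:
        p, p_prev = ai * p + p_prev, p
        q, q_prev = ai * q + q_prev, q
        out.append((p, q))
    return out

# theta = [1, 1, 1, ...] is the golden mean; its denominators q_i are
# the Fibonacci numbers, so log q_{i+1} / log q_i -> 1.
qs = [q for _, q in convergents([1] * 10)]
assert qs == [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Larger partial quotients make $q_{i+1}$ jump relative to $q_i$, which is exactly how a large type $\eta $ arises.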
Theorem 5.4. For any irrational rotation $T_\theta $,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log \tau_r(x)}{-\log r}={\mathrm{dim}}_M(\mu)=1\le\eta(\theta)=\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r} \mu\text{-a.e.}\end{align*} $$
 By Remark 5.2, there exist irrational rotations for which the asymptotic cover time does not converge. The proof of this theorem relies on the algebraic properties of $\eta (\theta )$. For simplicity, we fix $\theta $ and write $\eta $ from now on.
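Theorem 5.4 can be probed numerically. The sketch below is an illustration only: the golden mean, the uniform partition, and the tolerance band are assumptions of the example. It estimates the growth exponent $\log \tau /\log m$ of the time an orbit of $T_\theta $ needs to meet every interval of a partition into $m$ pieces; for the golden mean, $\eta =1$ and the exponent is close to 1:

```python
from math import sqrt, log

def interval_cover_time(theta, m, x=0.0, max_iter=10**7):
    """Smallest k such that x, Tx, ..., T^k x (T = rotation by theta)
    has met every interval [j/m, (j+1)/m)."""
    unseen = set(range(m))
    y = x % 1.0
    for k in range(max_iter):
        unseen.discard(int(y * m))
        if not unseen:
            return k
        y = (y + theta) % 1.0
    return None

theta = (sqrt(5) - 1) / 2          # golden mean: eta(theta) = 1
for m in (2**6, 2**10):
    t = interval_cover_time(theta, m)
    assert t is not None
    # growth exponent log(tau)/log(m) should be near 1 since eta = 1
    assert 0.8 < log(t) / log(m) < 1.6
```

For a rotation of large type, long stretches of the orbit cluster near previously visited points, and the same exponent inflates along a subsequence of scales.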
Lemma 5.5. [Reference Kim and SeoKS, Fact 1, Lemma 7]
 For each $i\in \mathbb N$, the following statements hold:
- 
(a) $q_{i+2}=a_{i+2}q_{i+1}+q_i$ and $p_{i+2}=a_{i+2}p_{i+1}+p_i$;
- 
(b) $1/{(2q_{i+1})}\le 1/({q_{i+1}+q_i})<\|q_i\theta \|<1/q_{i+1}$ for $i\ge 1$;
- 
(c) if $0<j<q_{i+1}$, then $\|j\theta \|\ge \|q_i\theta \|$;
- 
(d) for $\varepsilon>0$, there exists a uniform $C_{\varepsilon }>0$ such that for all $j\in \mathbb N$, $j^{\eta +\varepsilon }\|j\theta \|>C_\varepsilon $.
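Items (b) and (c) can be spot-checked numerically. The sketch below works under assumptions: the partial quotients $a_i=i$ are an arbitrary example, and a deep convergent in exact rational arithmetic stands in for the irrational $\theta $ (the approximation error is far below the margins in the inequalities being tested).

```python
from fractions import Fraction

def dist_to_int(r):
    """||r|| for a rational r, computed exactly."""
    f = r % 1
    return min(f, 1 - f)

# Example partial quotients a_i = i; build q_1, ..., q_20 by the
# recursion of Lemma 5.5(a) and use the 20th convergent as an exact
# rational proxy for theta = [1, 2, 3, ...].
a = list(range(1, 21))
p_prev, q_prev, p, q = 1, 0, 0, 1
qs = []
for ai in a:
    p, p_prev = ai * p + p_prev, p
    q, q_prev = ai * q + q_prev, q
    qs.append(q)
theta = Fraction(p, q)

for i in range(1, 7):
    d = dist_to_int(qs[i - 1] * theta)        # ||q_i * theta||
    # (b): 1/(q_{i+1} + q_i) < ||q_i theta|| < 1/q_{i+1}
    assert Fraction(1, qs[i] + qs[i - 1]) < d < Fraction(1, qs[i])
    # (c): ||j theta|| >= ||q_i theta|| for all 0 < j < q_{i+1}
    assert all(dist_to_int(j * theta) >= d for j in range(1, qs[i]))
```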
The following propositions use results given in [Reference Kim and SeoKS, Propositions 6 and 10].
Proposition 5.6. For $\mu $-a.e. $x$,
 $$ \begin{align} \limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\ge \eta. \end{align} $$
Proof. First, it is easy to see that for all $r>0$ and all $x,y\in [0,1)$, by the nature of rotations, $\tau _r(x)=\tau _r(y)$. In particular, $\tau _r(x)=\tau _r(Tx)$; hence, the function $x\mapsto \limsup _{r\to 0}({\log \tau _r(x)}/{-\log r})$ is $T$-invariant and therefore constant $\mu $-almost everywhere by ergodicity of $\mu $.
 By [Reference Kim and SeoKS, Proposition 10], for almost every $x,y$,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log W_{B(y,r)}(x)}{-\log r}\ge \eta,\end{align*} $$
where $W_E(x):=\inf \{n\ge 1:T^nx\in E\}$ denotes the waiting time of $x$ before visiting $E$. Hence, there exists a set of strictly positive measure consisting of points that satisfy
 $$ \begin{align*}\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\ge \limsup_{r\to0}\frac{\log W_{B(y,r)}(x)}{-\log r}\ge \eta,\end{align*} $$
since for all $y\in [0,1)$, $\tau _r(x)\ge W_{B(y,r)}(x)$. As $\limsup _{r\to 0}({\log \tau _r(x)}/{-\log r})$ is $\mu $-almost everywhere constant, the inequality above holds for $\mu $-a.e. $x$ and hence the proposition is proved.
Proposition 5.7. For $\mu $-a.e. $x$,
 $$ \begin{align*}\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\le \eta.\end{align*} $$
Proof. Let $\mathcal Q_n:=\{[2^{-n}j,2^{-n}(j+1)):j=0,\ldots ,2^n-1\}$ and let $\tau ({\mathcal Q_n},x)$ be the minimum time for $x$ to have visited each element of $\mathcal Q_n$. Again, we have $\tau _{2^{-n+1}}(x)\le \tau (\mathcal Q_n,x)$ for all $x$. By Lemma 5.5(a) and (c), $\{\|q_i\theta \|\}_i$ is a decreasing sequence, and for each $n\in \mathbb N$, there exists a minimal $j$ such that $\|q_{j}\theta \|< 2^{-n}\le \|q_{j-1}\theta \|$; write $j=j_n$.
 By [Reference Kim and SeoKS, Proposition 6], for all $n$, $\mu (W_{[0,2^{-n})}>q_{j_n}+q_{j_n-1})=0$. Notice that for all $a,b\in [0,1)$,
 $$ \begin{align} \mu\{W_{[a,a+b)}(x)=k\}=\mu\{\{x:W_{[0,b)}(x)=k\}+a\}=\mu\{W_{[0,b)}(x)=k\}, \end{align} $$
as $\mu =\mathrm{Leb}$ is translation invariant. Then, by (5.2),
 $$ \begin{align*} \begin{split} &\mu\{\tau({\mathcal Q_n},x)>q_{j_n}+q_{j_n-1}\}=\mu\{x:\text{for all } Q\in\mathcal Q_{n}:\,W_Q(x)>q_{j_n}+q_{j_n-1}\}\\ &\quad=\mu\bigg(x:\bigcup_{Q\in\mathcal Q_{n}}\{W_Q(x)>q_{j_n-1}+q_{j_n}\}\bigg)\le \sum_{Q\in\mathcal Q_{n}}\mu(W_Q>q_{j_n-1}+q_{j_n})\\ &\quad=\sum_{j=0}^{2^n-1}\mu(W_{[2^{-n}j,2^{-n}(j+1))}>q_{j_n}+q_{j_n-1})=\sum_{j=0}^{2^n-1}\mu(W_{[0,2^{-n})}>q_{j_n}+q_{j_n-1})=0. \end{split} \end{align*} $$
Hence, by Borel–Cantelli, for all $n$ large enough, $\tau _{2^{-n+1}}(x)\le q_{j_n}+q_{j_n-1}$ for $\mu $-a.e. $x\in [0,1)$.
 Let $\varepsilon>0$; by Lemma 5.5(b) and (d), there exists $C_\varepsilon $ such that
 $$ \begin{align*} \log(q_{j_n}+q_{j_n-1})\le \log( 2q_{j_n})\le\log\frac2{\|q_{j_n}\theta\|}\le(\eta+\varepsilon)\log q_{j_n}+\log2-\log C_\varepsilon. \end{align*} $$
Again by Lemma 5.5 and our choice of $j_n$, for $\mu $-a.e. $x$ and all $n$ large enough,
 $$ \begin{align*} \log\tau_{2^{-n+1}}(x)&\le \log(q_{j_n}+q_{j_n-1})\lesssim(\eta+\varepsilon)\log q_{j_n}\\&\le-(\eta+\varepsilon)\log\|q_{j_n-1}\theta\|\le (\eta+\varepsilon)n\log 2, \end{align*} $$
where $a\lesssim b$ means $a\le b$ up to a uniform constant. Hence, $\limsup _{n\to \infty }(({\log \tau _{2^{-n}}(x)})/ {n\log 2})\le \eta +\varepsilon $ for $\mu $-a.e. $x$. Again, since for each $r>0$ there is a unique $n\in \mathbb N$ for which $2^{-n}<r\le 2^{-n+1}$, we can apply the subsequence trick again. As $\varepsilon>0$ is arbitrarily small, the proposition is proved.
Proposition 5.8. For $\mu $-a.e. $x\in [0,1)$,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log \tau_r(x)}{-\log r}= 1.\end{align*} $$
Proof. Let $\varepsilon>0$. Using the same argument as in the previous proof, namely that the cover time is greater than the hitting time of any ball at scale $r$, one gets, along the sequence $r_n=2^{-(n+1)}$, for all $[a-r_n,a+r_n)\subset [0,1)$,
 $$ \begin{align*} &\sum_{n\ge1}\mu(\tau_{r_n}(x)< 2^{n(1-\varepsilon)})\le \sum_{n\ge1}\mu(W_{[a-2^{-n-1},a+2^{-n-1})}(x)< 2^{n(1-\varepsilon)})\\ &\quad\le \sum_{n\ge1}\sum_{k=0}^{2^{n(1-\varepsilon)}}\mu(T^{-k}[a-2^{-n-1},a+2^{-n-1}))=\sum_{n\ge1}2^{n(1-\varepsilon)}2^{-n}=\sum_{n\ge1}2^{-\varepsilon n}<\infty. \end{align*} $$
Since for each $r$ there is a unique $n$ such that $r_n<r\le r_{n-1}$, while $\lim _n({\log r_n}/{\log r_{n-1}})=1$, by Borel–Cantelli,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log\tau_r(x)}{-\log r}=\liminf_{n\to\infty}\frac{\log \tau_{2^{-n}}(x)}{n\log 2}\ge1-\varepsilon,\end{align*} $$
and as $\varepsilon $ is arbitrarily small, the lower bound is proved.
 For the upper bound of the liminf, recall that $\tau ({\mathcal Q_n},x)\ge \tau _{2^{-n}}(x)$. We can repeat the proof of Proposition 5.7, except that this time, we choose $\{2^{-n_i}\}_i$ according to $\{q_i\}_{i\in \mathbb N}$: for each $i$, choose $n_i\in \mathbb N$ to be the smallest number such that
 $$ \begin{align*}\|q_{i+1}\theta\|< 2^{-n_i}\le \|q_{i}\theta\|.\end{align*} $$
Hence, as in Proposition 5.7,
 $$ \begin{align*} \mu(\tau({\mathcal Q_{n_i}},x)>q_{i+1}+q_i)\le \sum_{Q\in\mathcal Q_{n_i}}\mu(W_Q>q_{i+1}+q_i)=0. \end{align*} $$
Again by Lemma 5.5(b), $q_{i+1}+q_i\le 2q_{i+1}\le (2/{\|q_i\theta \|})<2^{n_i+1}$ by our choice of $n_i$, so $\lim _{i\to \infty }({\log (q_i+q_{i+1})}/{n_i\log 2})\le 1$; therefore, for $\mu $-a.e. $x$,
 $$ \begin{align*}\liminf_{r\to0}\frac{\log\tau_r(x)}{-\log r}\le \liminf_{i\to\infty}\frac{\log \tau_{2^{-n_i}}(x)}{n_i\log2}\le \liminf_{i\to\infty}\frac{\log \tau({\mathcal Q_{n_i}},x)}{n_i\log2}\le 1.\end{align*} $$
6 Cover time for flows
In this section, we prove results analogous to Theorem 1.1 regarding cover times for a class of flows similar to those discussed in [Reference Rousseau and ToddRT, §4].
 Let $\{f_t\}_{t\in \mathbb R}$ be a flow on a metric space $(\mathcal {X},d_{\mathcal {X}})$ preserving an ergodic measure $\nu $, that is, $\nu (f_t^{-1}A)=\nu (A)$ for every $t\ge 0$ and every measurable $A$. Define the cover time of $x$ at scale $r$ by
 $$ \begin{align*}\tau_r(x):=\inf\{T>0:\text{for all } y\in\Omega, \text{ there exists } t\le T:\, d(f_t(x),y)<r\}.\end{align*} $$
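For intuition, a toy sketch only: the linear flow on the two-torus below, the step size, and the grid test are assumptions of this example, and the flow is not one of the hyperbolic flows treated in this section. Still, $f_t(x,y)=(x+t,\,y+t\theta )$ on $\mathbb T^2$ has a Poincaré section $\{x=0\}$ whose return map is the rotation by $\theta $ with constant return time 1, and its cover time can be approximated directly:

```python
from math import sqrt

def torus_flow_cover_time(theta, r, max_time=10**4):
    """Approximate cover time for the linear flow
    f_t(x, y) = (x + t, y + t*theta) on the 2-torus, started at (0,0):
    the first time T (up to the step dt) at which the orbit has entered
    every cell of an r-mesh grid, used as a proxy for r-denseness."""
    m = int(1 / r) + 1
    unseen = {(i, j) for i in range(m) for j in range(m)}
    dt = r / 4                 # step small relative to the mesh
    x = y = t = 0.0
    while t < max_time:
        unseen.discard((int(x * m) % m, int(y * m) % m))
        if not unseen:
            return t
        x = (x + dt) % 1.0
        y = (y + dt * theta) % 1.0
        t += dt
    return None

theta = (sqrt(5) - 1) / 2
T = torus_flow_cover_time(theta, 0.05)
assert T is not None and T > 1.0   # many windings are needed
```

Each unit of flow time is one return to the section, so the flow cover time tracks the cover time of the induced rotation, which is the mechanism formalized by conditions (H1)–(H5) below.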
 We will assume the existence of a Poincaré section $Y\subset \mathcal {X}$, and let $R_ Y(x)$ denote the first hitting time to $Y$, that is, $R_Y(x):=\inf \{t>0:f_t(x)\in Y\}$, with $\overline R:=\int R_Y\,d\nu <\infty $. Define the Poincaré map $(Y,F,\mu )$, where $F=f_{R_Y}$ and $\mu $ is the induced measure on $Y$ given by $\mu =({1}/{\overline {R}})\nu |_Y$. Additionally, assume the following conditions are satisfied:
- 
(H1) ${\mathrm {dim}}_M(\mu )$ exists and is finite for $(F,\mu )$;
- 
(H2) $(Y,F,\mu )$ is Gibbs–Markov, so Theorem 1.1 is applicable for $\mu $-a.e. $y\in Y$;
- 
(H3) $\{f_t\}_t$ has bounded speed: there exists $K>0$ such that for all $t>0$, $d(f_s(x), f_{s+t} (x))<Kt$;
- 
(H4) $\{f_t\}_t$ is topologically mixing and there exists $T_1>0$ such that
 $$ \begin{align} \bigcup_{0<t\le T_1}f_t(Y)=\mathcal{X}; \end{align} $$
- 
(H5) there exists
 $$ \begin{align*}C_f&:=\sup\{\text{diam}(f_t(I))/\text{diam}(I):I\text{ an interval contained in } Y, 0<t\le T_1\}\\&\quad\,\in(0,\infty).\end{align*} $$
Remark 6.1. The last condition is satisfied when condition (H3) holds and the flow is, for example, Lipschitz, that is, there exists $L>0$ such that for all $x,y\in \mathcal {X}$,
 $$ \begin{align*}d_{\mathcal{X}}(f_t(x),f_t(y))\le L^td_{\mathcal{X}}(x,y).\end{align*} $$
Theorem 6.2. Let $(f_t,\nu )$ be a measure-preserving flow satisfying conditions (H1)–(H5) and $\underline {\mathrm {dim}}_{M}(\nu )>1$. Then for $\nu $-a.e. $x\in \Omega $,
 $$ \begin{align}\liminf_{r\to0}\frac{\log\tau_r(x)}{-\log r}\ge \underline{{\mathrm{dim}}}_M(\nu)-1.\end{align} $$
$$ \begin{align}\liminf_{r\to0}\frac{\log\tau_r(x)}{-\log r}\ge \underline{{\mathrm{dim}}}_M(\nu)-1.\end{align} $$
Furthermore, if 
 $\overline {\mathrm {dim}}_M(\nu )={\mathrm {dim}}_M(\mu )+1$
,
$\overline {\mathrm {dim}}_M(\nu )={\mathrm {dim}}_M(\mu )+1$
, 
 $$ \begin{align}\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\le\overline{\mathrm{dim}}_M(\mu)\hspace{2mm} \nu\text{-a.e}. \end{align} $$
$$ \begin{align}\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\le\overline{\mathrm{dim}}_M(\mu)\hspace{2mm} \nu\text{-a.e}. \end{align} $$
Proof of (6.2)
This proof is analogous to those of Proposition 4.3 and [Reference Rousseau and ToddRT, Theorem 4.1]. Fix some $y\in \Omega $ and $r>0$, and consider the random variable $$ \begin{align*}S_{T,r}(x):=\int_0^T\mathbf{1}_{B(y,r)}(f_t(x))\,dt.\end{align*} $$
Observe that by the bounded speed property, for all $T>r/K$, $$ \begin{align*}\{x:\text{there exists } 0\le t\le T \text{ such that }\,d(f_t(x),y)<r\}\subset\{S_{2T,2r}(x)>r/K\},\end{align*} $$
since if $d(f_s(x),y)<r$ for some $s$, then for all $t<r/K$, $d(f_{t+s}(x),y)<2r$. Also set $$ \begin{align*}T(x,y,r):=\inf\{t\ge0:f_t(x)\in B(y,r)\},\end{align*} $$
and note that for all $r>0$ and all $x,y$, $\tau _{r}(x)\ge T(x,y,r)$.
Let $\varepsilon>0$ be arbitrary. By the definition of $\underline \alpha $, for all large $n\in \mathbb N$, there exists $y_n\in \Omega $ such that $\nu (B(y_n,2^{-n}))\le 2^{-n(\underline \alpha -\varepsilon )}$. By Markov’s inequality, for some $\mathcal T_n>0$ to be decided later, $$ \begin{align*} &\nu(x:\tau_{2^{-n}}(x)< \mathcal T_n)\le \nu(x:T(x,y_n,2^{-n})< \mathcal T_n)\\&\quad=\nu(x:\text{there exists }0\le t< \mathcal T_n:\,f_t(x)\in B(y_n,2^{-n}))\\ &\quad\le \nu(x:S_{2\mathcal T_n,2^{-n+1}}(x)>2^{-n}/K)\le K2^n\int_0^{2\mathcal T_n}\int\mathbf{1}_{B(y_{n-1},2^{-n+1})}(f_t(x))\,d\nu(x)\,dt\\ &\quad\le K2^{n+1}\mathcal T_n\nu(B(y_{n-1},2^{-n+1}))\le 4K\mathcal T_n2^{-(n-1)(\underline{\alpha}-\varepsilon-1)}. \end{align*} $$
Choosing $\mathcal T_n=2^{(n-1)(\underline \alpha -\varepsilon -1)}/n^2$, the last term above is summable in $n$; hence, by Borel–Cantelli, for $\nu $-a.e. $x$, $$ \begin{align*}\liminf_{r\to0}\frac{\log \tau_r(x)}{-\log r}\ge\liminf_{n\to\infty}\frac{\log\mathcal T_n}{n\log2}=\underline\alpha-1-\varepsilon.\end{align*} $$
Since $\varepsilon>0$ was arbitrarily small, the lower bound is $\underline \alpha -1$, and by Remark 4.1, the proposition is proved.
Note that the proof of the lower bound is independent of the existence or mixing properties of the Poincaré map $(Y,F,\mu )$. For the upper bound, we first prove that the cover time of the Poincaré map $F$ in $Y$ is comparable to the cover time of the flow.
Lemma 6.3. Define $$ \begin{align*}\tau_r^F(x):=\min\{n\in\mathbb N_0:\text{for all } y\in Y,\text{there exists } 0\le j\le n:d(y,F^jx)<r \}.\end{align*} $$
There exists $\lambda =1/{C_f}$, for $C_f$ defined in condition (H5), such that $\tau _{r}(x)\le T_1+\sum _{j=0}^{\tau _{\lambda r}^F(x)}R_Y(F^jx).$
Proof. This is adapted from the proofs of [Reference Jurga and ToddJT, Lemma 6.4] and [Reference Rousseau and ToddRT, Theorem 2.1]. Here, $F$ is by assumption Gibbs–Markov, so one can find $\mathcal P(r)$, a natural partition of $Y$ using cylinder sets with respect to $F$, such that for each $P\in \mathcal P(r)$: (a) $ \text {diam}(P)\le r/C_f$; and (b) for all $0<t\le T_1$, $f_t(P)$ is connected. Suppose $\tau ^F_{r/C_f}(x)=k$; then the orbit $\{x,F(x),\ldots ,F^k(x)\}$ must have visited every element of $\mathcal P(r)$. By (6.1), for each $y\in \Omega $, there is $P\in \mathcal P(r)$ and $0<s\le T_1$ such that $y\in f_s(P)$ and, hence, there exists $j\le k$ such that $d(f_s(F^j(x)),y)\le C_f|P|<r.$ Then, set $\lambda =1/C_f$. The lemma is proved.
Proof of (6.3)
Now assume $\overline {\mathrm {dim}}_M(\nu )={\mathrm {dim}}_M(\mu )+1$. Let $\xi>0$ be arbitrary and define the sets $$ \begin{align*}U_{\xi,N}:=\{x\in Y:|R_n(x)-n\overline{R}|\le \xi n \text{ for all } n\ge N\},\end{align*} $$
where $R_n(x)=\sum _{j=0}^{n-1}R_Y(F^j(x))$. By ergodicity, $\lim _N\mu (U_{\xi ,N})=1$, so for $N$ large, $\nu (U_{\xi ,N})>0$; hence, by invariance, $$ \begin{align}\lim_{N\to\infty}\nu\bigg(\bigcup_{t=0}^{\xi N}f_{-t}(U_{\xi,N})\bigg)=1.\end{align} $$
Let $\varepsilon>0$ be arbitrary. By (6.4), one can pick $N^*$ such that for each $\nu $-typical $x\in \mathcal {X}$, there is some $t^*\le \xi N^*$ such that $f_{t^*}(x)\in Y$. By Theorem 1.1 applied to the Poincaré map and Lemma 6.3, for all sufficiently small $r>0$, we have the following two inequalities: $$ \begin{align*}\frac{\log \tau_{\lambda r}^F(f_{t^*}x)}{-\log\lambda r}\le {\mathrm{dim}}_M(\mu)+\varepsilon,\quad\frac{\log (\tau_{r}(x)-T_1)}{-\log r}\le \frac{\log ((\overline R+\xi)\tau_{\lambda r}^F(f_{t^*}x))}{-\log r}. \end{align*} $$
Then, as $\lambda , \overline R$ are constants and $\varepsilon $ is arbitrary, for $\nu $-a.e. $x$, $$ \begin{align*}\limsup_{r\to0}\frac{\log \tau_r(x)}{-\log r}\le {\mathrm{dim}}_M(\mu)=\overline{\mathrm{dim}}_M(\nu)-1.\end{align*} $$
6.1 Example: suspension semi-flows over topological Markov shifts
In this section, we give an example of a flow for which ${\mathrm {dim}}_M(\nu )={\mathrm {dim}}_M(\mu )+1$ is satisfied, so Theorem 6.2 is applicable.
Let $\mathcal {A}$ be a finite alphabet and $M$ an $\mathcal {A}\times \mathcal {A}$ matrix with $\{0,1\}$ entries. We will consider two-sided topological Markov shift systems $(\Sigma ,\sigma ,\phi ,\mu )$, where $\sigma $ is the usual left shift, $\phi $ is a Hölder potential and $\mu $ is the unique Gibbs measure with respect to $\phi $. We assume that ${\mathrm {dim}}_M(\mu )\in (0,\infty )$. The natural symbolic metric on $\Sigma $ is $d(x,y)=2^{-x\land y}$, where $$ \begin{align*}x\land y=\sup\{k\ge0:x_j=y_j \text{ for all } |j|<k\}.\end{align*} $$
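For concreteness, the separation time $x\land y$ and the metric $d$ can be sketched in code (a toy illustration, not from the paper: two-sided sequences are truncated to a finite window of indices $|j|\le W$ and stored as dictionaries):

```python
# Toy model of the symbolic metric: a two-sided sequence is truncated to
# the window |j| <= W and stored as a dict j -> symbol.
def sep(x, y, W):
    """x ^ y = sup{k >= 0 : x_j = y_j for all |j| < k}, capped at W + 1."""
    k = 0
    while k <= W and all(x[j] == y[j] for j in range(-k, k + 1)):
        k += 1
    return k

def d(x, y, W):
    """The symbolic metric d(x, y) = 2^(-(x ^ y))."""
    return 2.0 ** (-sep(x, y, W))

x = {j: 0 for j in range(-3, 4)}
y = dict(x)
y[2] = 1  # x and y now agree exactly on the central block |j| < 2
assert sep(x, y, 3) == 2 and d(x, y, 3) == 0.25
```

Two sequences are close precisely when they agree on a large central block, consistent with the fact that balls in $\Sigma$ are cylinder sets.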
An $n$-cylinder in this setting is given by $[x_{-(n-1)},\ldots ,x_0,\ldots ,x_{n-1}]:=\{y\in \Sigma : \,y_j=x_j \text{ for all } |j|<n\}$, and it is a well-known fact that balls in $\Sigma $ are precisely the cylinder sets. The left-shift map $\sigma $ is bi-Lipschitz with Lipschitz constant $L=2$. For a more detailed description of the shift space, see [Reference BowenBow, §1].
Let $\varphi \in L^1(\mu )$ be a positive Lipschitz function and define the space $$ \begin{align*}Y_\varphi:=\{(x,s)\in \Sigma\times \mathbb R_{\ge0}:0\le s\le \varphi(x)\}{/\sim},\end{align*} $$
where $(x,\varphi (x))\sim (\sigma (x),0)$ for all $x\in \Sigma $. The suspension flow $\Psi $ over $\sigma $ is the function that acts on $Y_\varphi $ by $$ \begin{align*}\Psi_t(x,s)=(\sigma^k(x),v),\end{align*} $$
where $k\ge 0$ and $0\le v<\varphi (\sigma ^k(x))$ are determined by $s+t=v+\sum _{j=0}^{k-1}\varphi (\sigma ^j(x)).$ The invariant measure $\nu $ for the flow $\Psi $ on $Y_\varphi $ satisfies the following: for every continuous $g:Y_\varphi \to \mathbb R$, $$ \begin{align} \int g\,d\nu=\frac1{\int_\Sigma\varphi \,d\mu}\int_\Sigma\int_0^{\varphi(x)}g(x,s)\,ds\,d\mu(x).\end{align} $$
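The pair $(k,v)$ in the definition of $\Psi_t$ can be computed by repeatedly subtracting roof values until the remaining time fits under the roof. A minimal sketch (illustrative only; the `shift` map and roof `phi` are abstract placeholder inputs supplied by the caller):

```python
def suspension_flow(x, s, t, shift, phi):
    """Evaluate Psi_t(x, s) = (sigma^k(x), v), where k, v are determined by
    s + t = v + sum_{j=0}^{k-1} phi(sigma^j(x)) and 0 <= v < phi(sigma^k(x))."""
    assert 0 <= s <= phi(x) and t >= 0
    u = s + t
    while u >= phi(x):   # peel off one full trip under the roof
        u -= phi(x)
        x = shift(x)     # apply sigma once per completed trip
    return x, u          # u is the height v in the fibre over sigma^k(x)

# Toy example: a fixed point of the shift with constant roof phi = 2;
# flowing for time 5 from height 0.5 wraps around twice, landing at height 1.5.
_, v = suspension_flow("a", 0.5, 5.0, lambda x: x, lambda x: 2.0)
assert abs(v - 1.5) < 1e-12
```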
The metric on $Y_\varphi $ is the Bowen–Walters distance $d_Y$ (see, for example, [Reference Bowen and WaltersBW]). Define another metric $d_\pi $ on $Y_\varphi $: for all $(x_i,t_i)_{i=1,2}\in Y_\varphi $, $$ \begin{align*}d_\pi((x_1,t_1),(x_2,t_2)):=\min\left\{\begin{aligned} &d(x_1,x_2)+|t_1-t_2|,\\ &d(\sigma x_1,x_2)+\varphi(x_1)-t_1+t_2,\\ &d(x_1,\sigma x_2)+\varphi(x_2)-t_2+t_1 \end{aligned}\right\},\end{align*} $$
and the following proposition says $d_\pi $ is comparable to the Bowen–Walters distance.
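The three branches of $d_\pi$ (staying in the fibre, or crossing the identification $(x,\varphi(x))\sim(\sigma(x),0)$ in either direction) can be illustrated with a small sketch (a toy example, not from the paper; the base metric, shift and roof below are placeholder inputs):

```python
def d_pi(p, q, d, sigma, phi):
    """The metric d_pi on Y_phi: the minimum over the three ways of joining
    (x1, t1) and (x2, t2) -- directly, or across the roof identification."""
    (x1, t1), (x2, t2) = p, q
    return min(
        d(x1, x2) + abs(t1 - t2),              # move within the fibre
        d(sigma(x1), x2) + phi(x1) - t1 + t2,  # flow up past the roof of x1
        d(x1, sigma(x2)) + phi(x2) - t2 + t1,  # flow up past the roof of x2
    )

# Toy base system: two symbols swapped by sigma, discrete metric, roof 1.
sigma = {"a": "b", "b": "a"}.get
d = lambda x, y: 0.0 if x == y else 1.0
phi = lambda x: 1.0

# Points just below and just above the identification (a, 1) ~ (b, 0)
# are close (distance 0.15), though the direct fibre route would cost 1.85.
assert abs(d_pi(("a", 0.9), ("b", 0.05), d, sigma, phi) - 0.15) < 1e-12
```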
Proposition 6.4. [Reference Barreira and SaussolBS, Proposition 17]
There exists $c=c_\pi $ such that $$ \begin{align*}c^{-1}d_\pi((x_1,t_1),(x_2,t_2))\le d_Y((x_1,t_1),(x_2,t_2))\le c\, d_\pi((x_1,t_1),(x_2,t_2)).\end{align*} $$
Then, the Minkowski dimension of the flow-invariant measure $\nu $ is given by the following.
Proposition 6.5. For $\mu $ the Gibbs measure with respect to $\phi $ on the two-sided subshift and $\nu $ the flow-invariant measure, ${\mathrm {dim}}_M(\nu )={\mathrm {dim}}_M(\mu )+1$.
Proof. The proof is based on the proof of [Reference Rousseau and ToddRT, Theorem 4.3] for correlation dimensions.
By Proposition 6.4, for all $r>0$, $$ \begin{align*}(B(x,r/2c)\times(s-r/2c,s+r/2c))\cap Y_\varphi\subset B_Y((x,s),r),\end{align*} $$
where $B_Y$ denotes the ball with respect to the metric $d_Y$. Then for all $(x,s)\in Y_\varphi $, putting $\overline \varphi =\int _\Sigma \varphi \,d\mu $, we have $$ \begin{align*}\nu(B_Y((x,s),r))\ge \nu\bigg(B(x,r/2c)\times\bigg(s-\frac{r}{2c},s+\frac{r}{2c}\bigg)\bigg),\end{align*} $$
$$ \begin{align*}\frac{\log\nu(B_Y((x,s),r))}{\log r}\le \frac{\log(({r}/{c\overline\varphi})\mu(B(x, r/{2c})))}{\log r}.\end{align*} $$
Hence, $\overline {{\mathrm {dim}}}_M(\nu )=\limsup _{r\to 0}\frac{\log \min _{(x,s)\in \mathrm {supp}(\nu )}\nu (B_Y((x,s),r))}{\log r}\le {\mathrm {dim}}_M(\mu )+1$.
For the lower bound, define $$ \begin{align*}B_1:=B(x,c r)\times(s-c r,s+c r),\quad B_2:=B(\sigma x,c r)\times[0,cr),\end{align*} $$
$$ \begin{align*}B_3:=\{(y,t):y\in B(\sigma^{-1}x,2cr)\text{ and } \varphi(y)-cr\le t\le \varphi(y)\}.\end{align*} $$
Then, as in the proof of [Reference Rousseau and ToddRT, Theorem 4.3], $B_Y((x,s),r)\subset (B_1\cup B_2\cup B_3)\cap Y_\varphi $.
For all $r>0$ and $(x,s)\in Y_\varphi $, by (6.5), and as $\mu $ is $\sigma $- and $\sigma ^{-1}$-invariant, $$ \begin{align*} &\nu(B_1\cap Y_\varphi)={2cr}\mu(B(x,cr))/{\overline\varphi},\quad \nu(B_2\cap Y_\varphi)\le cr\mu(B(x,cr))/\overline\varphi,\\ &\nu(B_3\cap Y_\varphi)\le cr\mu(\sigma^{-1}B(x,2cr))/\overline\varphi=cr\mu(B(x,2cr))/\overline\varphi. \end{align*} $$
Therefore, $$ \begin{align*}\nu(B_Y((x,s),r))\le\frac1{{\overline\varphi}}(3cr\mu(B(x,cr))+cr\mu(B(x,2cr))),\end{align*} $$
which is enough to conclude that $\underline {{\mathrm {dim}}}_M(\nu )\ge {\mathrm {dim}}_M(\mu )+1$.
Acknowledgements
I acknowledge the grant from the Chinese Scholarship Council. I am also thankful for various comments and help from my supervisor M. Todd, as well as other comments on §6 from J. Rousseau.