
A Self-Normalized Central Limit Theorem for Markov Random Walks

Published online by Cambridge University Press:  04 January 2016

Cheng-Der Fuh*
Affiliation:
National Central University
Tian-Xiao Pang*
Affiliation:
Zhejiang University
* Postal address: Graduate Institute of Statistics, National Central University, Jhongli, Taiwan. Email address: stcheng@stat.sinica.edu.tw
** Postal address: Department of Mathematics, Zhejiang University, Hangzhou 310027, P. R. China. Email address: txpang@zju.edu.cn

Abstract


Motivated by the study of the asymptotic normality of the least-squares estimator in the first-order autoregressive (AR(1)) model under possibly infinite variance, in this paper we investigate a self-normalized central limit theorem for Markov random walks. That is, let {X_n, n ≥ 0} be a Markov chain on a general state space X with transition probability P and invariant measure π. Suppose that an additive component S_n takes values on the real line ℝ and is adjoined to the chain such that {S_n, n ≥ 1} is a Markov random walk. Assume that S_n = ∑_{k=1}^n ξ_k, and that {ξ_n, n ≥ 1} is a nondegenerate and stationary sequence under π that belongs to the domain of attraction of the normal law with zero mean and possibly infinite variance. By making use of an asymptotic variance formula for S_n/√n, we prove a self-normalized central limit theorem for S_n under some regularity conditions. An essential idea in our proof is to bound the covariance of the Markov random walk via a sequence of weight functions, which plays a crucial role in determining the moment condition and dependence structure of the Markov random walk. As illustrations, we apply our results to the finite-state Markov chain, the AR(1) model, and the linear state space model.
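The role of the asymptotic variance formula can be illustrated with a small simulation. In the sketch below (not the paper's exact construction, and using a finite-variance AR(1) chain purely for illustration), the increments ξ_k are taken to be the states X_k of a stationary Gaussian AR(1) chain with coefficient ρ. Because the increments are dependent, the naive self-normalized ratio S_n/V_n, with V_n² = ∑ ξ_k², converges to a normal law with variance (1+ρ)/(1−ρ) rather than 1; rescaling by the long-run-variance correction recovers a standard normal limit, which is the kind of adjustment the asymptotic variance formula supplies.

```python
import numpy as np

def self_normalized_stat(n, rho, rng):
    """Return S_n / V_n for a stationary Gaussian AR(1) chain
    X_k = rho * X_{k-1} + eps_k, with xi_k = X_k (illustrative choice)."""
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0] / np.sqrt(1.0 - rho**2)  # draw X_0 from the stationary law
    for k in range(1, n):
        x[k] = rho * x[k - 1] + eps[k]
    s = x.sum()                      # S_n
    v = np.sqrt((x**2).sum())        # V_n, the self-normalizer
    return s / v

rng = np.random.default_rng(0)
rho = 0.5
stats = np.array([self_normalized_stat(2000, rho, rng) for _ in range(500)])

# Naive self-normalized ratio: limiting variance is (1+rho)/(1-rho), not 1.
naive_std = stats.std()

# Correcting by the long-run variance ratio restores a standard normal limit.
corrected = stats / np.sqrt((1 + rho) / (1 - rho))
print(naive_std, corrected.mean(), corrected.std())
```

For ρ = 0.5 the correction factor is √3, so the naive standard deviation hovers near 1.73 while the corrected statistic is approximately standard normal; for an i.i.d. sequence (ρ = 0) no correction is needed, which recovers the classical self-normalized CLT.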

Information

Type
General Applied Probability
Copyright
© Applied Probability Trust