Book contents
- Frontmatter
- Dedication
- Contents
- Notation
- Preface
- 1 Introduction
- 2 Limiting Spectral Distributions
- 3 CLT for Linear Spectral Statistics
- 4 The Generalised Variance and Multiple Correlation Coefficient
- 5 The T²-Statistic
- 6 Classification of Data
- 7 Testing the General Linear Hypothesis
- 8 Testing Independence of Sets of Variates
- 9 Testing Hypotheses of Equality of Covariance Matrices
- 10 Estimation of the Population Spectral Distribution
- 11 Large-Dimensional Spiked Population Models
- 12 Efficient Optimisation of a Large Financial Portfolio
- Appendix A Curvilinear Integrals
- Appendix B Eigenvalue Inequalities
- References
- Index
1 - Introduction
Summary
Large-Dimensional Data and New Asymptotic Statistics
In a multivariate analysis problem, we are given a sample x1, x2, …, xn of random observations of dimension p. Statistical methods such as principal component analysis have been developed since the beginning of the 20th century. When the observations are Gaussian, some exact (nonasymptotic) methods exist, such as Student's test, Fisher's test, and the analysis of variance. In most applications, however, the observations are at least in part non-Gaussian, so that exact results become hard to obtain and statistical methods are instead built on limit theorems for the relevant statistics.
Most of these asymptotic results are derived under the assumption that the data dimension p is fixed while the sample size n tends to infinity (large-sample theory). This theory was adopted by most practitioners until very recently, when they were faced with a new challenge: the analysis of large-dimensional data.
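To make the contrast between the two regimes concrete, here is a minimal simulation sketch (our own illustration, not taken from the book; it assumes numpy is available and uses i.i.d. standard Gaussian data, so every population covariance eigenvalue equals 1). It compares the extreme eigenvalues of the sample covariance matrix when p is fixed and n is large with the case where p grows proportionally to n.

```python
# Contrast of the two asymptotic regimes for the sample covariance matrix.
# Data are i.i.d. N(0, I_p): all population eigenvalues equal 1.
import numpy as np

rng = np.random.default_rng(0)

def extreme_sample_eigenvalues(p, n):
    """Largest and smallest eigenvalues of the sample covariance S = X X^T / n."""
    X = rng.standard_normal((p, n))
    eigs = np.linalg.eigvalsh(X @ X.T / n)   # ascending order
    return eigs[-1], eigs[0]

# Classical regime: p fixed, n large -- both extremes concentrate near 1.
print(extreme_sample_eigenvalues(p=5, n=5000))

# Large-dimensional regime: p/n = 1/2 -- the sample eigenvalues spread
# over roughly [(1 - sqrt(1/2))^2, (1 + sqrt(1/2))^2] ~ [0.09, 2.91],
# even though every population eigenvalue is 1.
print(extreme_sample_eigenvalues(p=500, n=1000))
```

In the classical regime both extreme eigenvalues are close to 1, while with p/n = 1/2 they stay spread over roughly [0.09, 2.91] no matter how large n is; this persistent spreading is exactly the kind of effect that fixed-p large-sample theory fails to capture.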
Large-dimensional data appear in various fields for different reasons. In finance, as a consequence of the widespread adoption of the Internet and of electronic commerce, supported by exponentially increasing computing power, online data from markets around the world accumulate at a rate of gigabytes per day. In genetic experiments such as microarrays, it has become possible to record the expression of several thousand genes from a single tissue. Table 1.1 displays some typical data dimensions and sample sizes. We can see from this table that the data dimension p is far from the “usual” situations where p is commonly less than 10. We refer to this new type of data as large-dimensional data.
It has long been observed that several well-known methods in multivariate analysis become inefficient, or even misleading, when the data dimension p is not as small as, say, a few tens. A seminal example was provided by Dempster in 1958, when he established the inefficiency of Hotelling's T² in such cases and proposed a remedy (named a non-exact test). At that time, however, no statistician was able to identify the fundamental reasons for such a breakdown of well-established methods.
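A hedged sketch of this breakdown (a simple simulation of our own, not Dempster's original construction): the one-sample statistic T² = n (x̄ − μ₀)ᵀ S⁻¹ (x̄ − μ₀) requires inverting the sample covariance S, which becomes ill-conditioned as p approaches n and singular once p ≥ n, so the test first loses power and then ceases to exist. Assuming numpy:

```python
# Degradation of Hotelling's T^2 as p approaches n.
# Data are i.i.d. N(0, I_p); we test the true mean mu0 = 0.
import numpy as np

rng = np.random.default_rng(1)

def hotelling_T2(X, mu0):
    """One-sample T^2 = n (xbar - mu0)' S^{-1} (xbar - mu0); rows of X are observations."""
    n, _ = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # sample covariance, rank <= n - 1
    return n * (xbar - mu0) @ np.linalg.solve(S, xbar - mu0)

n = 60
for p in (5, 40, 59):
    X = rng.standard_normal((n, p))
    S = np.cov(X, rowvar=False)
    print(p, np.linalg.cond(S), hotelling_T2(X, np.zeros(p)))
# The condition number of S explodes as p -> n; for p >= n, S is singular
# and T^2 is no longer defined, which is why a "non-exact" substitute is needed.
```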
Publisher: Cambridge University Press
Print publication year: 2015