
References

Published online by Cambridge University Press: 30 January 2026

Roman Vershynin
Affiliation: University of California, Irvine

Information

Type: Chapter
High-Dimensional Probability: An Introduction with Applications in Data Science, pp. 312–326
Publisher: Cambridge University Press
Print publication year: 2026

References

Abbe, E.. Community detection and stochastic block models: Recent developments. Journal of Machine Learning Research, 18(177):186, 2018.Google Scholar
Abbe, E., Bandeira, A. S., and Hall, G.. Exact recovery in the stochastic block model. IEEE Transactions on Information Theory, 62(1):471487, 2015.10.1109/TIT.2015.2490670CrossRefGoogle Scholar
Abdalla, P. and Mendelson, S.. Covariance estimation with direction dependence accuracy. Probability Theory and Related Fields, online early, doi:10.1007/s00440-025-01376-7, 39pp., 2025.Google Scholar
Abdalla, P. and Zhivotovskiy, N.. Covariance estimation: Optimal dimension-free guarantees for adversarial corruption and heavy tails. Journal of the European Mathematical Society, online first, doi:10.4171/JEMS/1505, 2024.CrossRefGoogle Scholar
Achlioptas, D. and McSherry, F.. Fast computation of low-rank matrix approximations. Journal of the ACM, 54(2):9-es, 2007.10.1145/1219092.1219097CrossRefGoogle Scholar
Adamczak, R.. A note on the Hanson–Wright inequality for random vectors with dependencies. Electronic Communications in Probability, 20(72):113, 2015.10.1214/ECP.v20-3829CrossRefGoogle Scholar
Adamczak, R., Latala, R., Litvak, A. E., Pajor, A., and Tomczak-Jaegermann, N.. Chevet type inequality and norms of submatrices. Preprint, arXiv:1107.4066, 2011.10.4064/sm210-1-3CrossRefGoogle Scholar
Adamczak, R., Latala, R., and Meller, R.. Hanson–Wright inequality in Banach spaces. Annales de l’Institut Henri Poincaré (B) – Probabilités et Statistiques, 56(4):23562376, 2020.Google Scholar
Adamczak, R., Prochno, J., Strzelecka, M., and Strzelecki, M.. Norms of structured random matrices. Mathematische Annalen, 388(4):34633527, 2024.10.1007/s00208-023-02599-6CrossRefGoogle ScholarPubMed
Adiprasito, K., Bárány, I., Mustafa, N. H., and Terpai, T.. Theorems of Carathéodory, Helly, and Tverberg without dimension. Discrete & Computational Geometry, 64(2):233258, 2020.10.1007/s00454-020-00172-5CrossRefGoogle Scholar
Adler, R. J. and Taylor, J. E.. Random Fields and Geometry. Springer, 2009.Google Scholar
Ahlswede, R. and Winter, A.. Strong converse for identification via quantum channels. IEEE Transactions on Information Theory, 48(3):569579, 2002.10.1109/18.985947CrossRefGoogle Scholar
Ai, A., Lapanowski, A., Plan, Y., and Vershynin, R.. One-bit compressed sensing with non-gaussian measurements. Linear Algebra and Its Applications, 441:222239, 2014.10.1016/j.laa.2013.04.002CrossRefGoogle Scholar
Albiac, F. and Kalton, N. J.. Topics in Banach Space Theory (Graduate Texts in Mathematics). Springer, 2006.Google Scholar
Alesker, S.. A remark on the Szarek–Talagrand theorem. Combinatorics, Probability and Computing, 6(2):139144, 1997.10.1017/S0963548396002866CrossRefGoogle Scholar
Alon, N. and Naor, A.. Approximating the cut-norm via Grothendieck’s inequality. Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pp. 7280. ACM, 2004.10.1145/1007352.1007371CrossRefGoogle Scholar
Alon, N. and Spencer, J. H.. The Probabilistic Method. Wiley, 2015.Google Scholar
Amelunxen, D., Lotz, M., McCoy, M. B., and Tropp, J. A.. Living on the edge: Phase transitions in convex programs with random data. Information and Inference: A Journal of the IMA, 3(3):224294, 2014.10.1093/imaiai/iau005CrossRefGoogle Scholar
Anandkumar, A., Ge, R., Hsu, D. J., Kakade, S. M., and Telgarsky, M.. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):27732832, 2014.Google Scholar
Anderson, G. W, Guionnet, A., and Zeitouni, O.. An Introduction to Random Matrices (Cambridge Studies in Advanced Mathematics 118). Cambridge University Press, 2010.Google Scholar
Artstein-Avidan, S., Giannopoulos, A., and Milman, V. D.. Asymptotic Geometric Analysis, Part II (Mathematical Surveys and Monographs 261). American Mathematical Society, 2021.Google Scholar
Bakry, D. and Ledoux, M.. Lévy–Gromov’s isoperimetric inequality for an infinite dimensional diffusion generator. Inventiones Mathematicae, 123(2):259281, 1996.Google Scholar
Ball, K.. An elementary introduction to modern convex geometry. Flavors of Geometry (Mathematical Sciences Research Institute Publications 31), pp. 158. Cambridge University Press, 1997.Google Scholar
Bandeira, A. S.. Ten lectures and forty-two open problems in the mathematics of data science. Lecture Notes, 2015.Google Scholar
Bandeira, A. S., Boedihardjo, M. T., and van Handel, R.. Matrix concentration inequalities and free probability. Inventiones Mathematicae, 234(1):419487, 2023.10.1007/s00222-023-01204-6CrossRefGoogle Scholar
Bandeira, A. S., Cipolloni, G., Schröder, D., and van Handel, R.. Matrix concentration inequalities and free probability II. Two-sided bounds and applications. Preprint, arXiv:2406.11453, 2024.Google Scholar
Bandeira, A. S. and van Handel, R.. Sharp nonasymptotic bounds on the norm of random matrices with independent entries. Annals of Probability, 44(4):24792506, 2016.10.1214/15-AOP1025CrossRefGoogle Scholar
Baraniuk, R., Foucart, S., Needell, D., Plan, Y., and Wootters, M.. One-bit compressive sensing of dictionary-sparse signals. Information and Inference: A Journal of the IMA, 7(1):83104, 2018.10.1093/imaiai/iax009CrossRefGoogle Scholar
Bárány, I.. Sylvester’s question: The probability that n points are in convex position. Annals of Probability, 27(4):20202034, 1999.10.1214/aop/1022677559CrossRefGoogle Scholar
Barthe, F. and Maurey, B.. Some remarks on isoperimetry of Gaussian type. Annales de l’Institut Henri Poincaré (B) – Probabilités et Statistiques, 36:419434, 2000.Google Scholar
Barthe, F. and Milman, E.. Transference principles for log-Sobolev and spectral-gap with applications to conservative spin systems. Communications in Mathematical Physics, 323(2):575625, 2013.10.1007/s00220-013-1782-2CrossRefGoogle Scholar
Bartlett, P. L. and Mendelson, S.. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov.):463482, 2002.Google Scholar
Belkin, M. and Sinha, K.. Polynomial learning of distribution families. 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pp. 103112. IEEE, 2010.10.1109/FOCS.2010.16CrossRefGoogle Scholar
Bellec, P. C.. Concentration of quadratic forms under a Bernstein moment assumption. Preprint, arXiv:1901.08736, 2019.Google Scholar
Bennett, G.. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):3345, 1962.10.1080/01621459.1962.10482149CrossRefGoogle Scholar
Bennett, G.. Upper bounds on the moments and probability inequalities for the sum of independent, bounded random variables. Biometrika, 52(3/4):559569, 1965.10.1093/biomet/52.3-4.559CrossRefGoogle Scholar
Bennett, G., Goodman, V., and Newman, C.. Norms of random matrices. Pacific Journal of Mathematics, 59(2):359365, 1975.10.2140/pjm.1975.59.359CrossRefGoogle Scholar
Bernstein, S. N.. On a modification of Chebyshev’s inequality and of the error formula of Laplace. Ann. Sci. Inst. Sav. Ukraine, Sect. Math., 1(5), 1924. Reprinted in: Math. Sbornik, 34(1):117, 1933.Google Scholar
Bernstein, S. N.. Theory of Probability. Gosizdat, Moscow, 1927.Google Scholar
Bernstein, S. N.. On certain modifications of Chebyshev’s inequality. Doklady Akademii Nauk SSSR, 17(6):275277, 1937.Google Scholar
Bhatia, R.. Matrix Analysis (Graduate Texts in Mathematics 169). Springer, 2013.Google Scholar
Billingsley, P.. Probability and Measure (Wiley Series in Probability and Statistics). Wiley, 2012.Google Scholar
Blum, A., Har-Peled, S., and Raichel, B.. Sparse approximation via generating point sets. ACM Transactions on Algorithms (TALG), 15(3):116, 2019.10.1145/3302249CrossRefGoogle Scholar
Blyth, C. R. and Pathak, P. K.. A note on easy proofs of Stirling’s theorem. American Mathematical Monthly, 93(5):376379, 1986.10.1080/00029890.1986.11971831CrossRefGoogle Scholar
Bobkov, S. G.. An isoperimetric inequality on the discrete cube, and an elementary proof of the isoperimetric inequality in Gauss space. Annals of Probability, 25(1):206214, 1997.Google Scholar
Bollobás, B.. Combinatorics: Set Systems, Hypergraphs, Families of Vectors, and Combinatorial Probability. Cambridge University Press, 1986.Google Scholar
Bollobás, B.. Random Graphs (Cambridge Studies in Advanced Mathematics 73). Cambridge University Press, 2001.10.1017/CBO9780511814068CrossRefGoogle Scholar
Bordenave, C., Lelarge, M., and Massoulié, L.. Non-backtracking spectrum of random graphs: Community detection and non-regular Ramanujan graphs. 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pp. 13471357. IEEE, 2015.Google Scholar
Borel, E.. Introduction Geometrique à Quelques Theories Physiques. Gauthier-Villars, 1914.Google Scholar
Borell, C.. The Brunn–Minkowski inequality in Gauss space. Inventiones Mathematicae, 30(2):207216, 1975.10.1007/BF01425510CrossRefGoogle Scholar
Borwein, J. M. and Lewis, A. S.. Convex Analysis and Nonlinear Optimization: Theory and Examples. Springer, 2005.Google Scholar
Boucheron, S., Lugosi, G., and Massart, P.. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.10.1093/acprof:oso/9780199535255.001.0001CrossRefGoogle Scholar
Boufounos, P. T. and Baraniuk, R. G.. 1-bit compressive sensing. 2008 42nd Annual Conference on Information Sciences and Systems, pp. 1621. IEEE, 2008.10.1109/CISS.2008.4558487CrossRefGoogle Scholar
Bourgain, J., Dirksen, S., and Nelson, J.. Toward a unified theory of sparse dimensionality reduction in Euclidean space. Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 499508. ACM, 2015.Google Scholar
Bourgain, J. and Tzafriri, L.. Invertibility of “large” submatrices with applications to the geometry of Banach spaces and harmonic analysis. Israel Journal of Mathematics, 57:137224, 1987.10.1007/BF02772174CrossRefGoogle Scholar
Bousquet, O., Boucheron, S., and Lugosi, G.. Introduction to statistical learning theory. Summer School on Machine Learning, pp. 169207. Springer, 2003.Google Scholar
Boutsidis, C., Drineas, P., and Magdon-Ismail, M.. Near-optimal column-based matrix reconstruction. SIAM Journal on Computing, 43(2):687717, 2014.10.1137/12086755XCrossRefGoogle Scholar
Boyd, S. and Vandenberghe, L.. Convex Optimization. Cambridge University Press, 2004.10.1017/CBO9780511804441CrossRefGoogle Scholar
Brailovskaya, T. and van Handel, R.. Universality and sharp matrix concentration inequalities. Geometric and Functional Analysis, 34(6):17341838, 2024.Google Scholar
Braverman, M., Makarychev, K., Makarychev, Y., and Naor, A.. The Grothendieck constant is strictly smaller than Krivine’s bound. Forum of Mathematics, Pi, 1:e4, 2013.10.1017/fmp.2013.4CrossRefGoogle Scholar
Brazitikos, S., Giannopoulos, A., Valettas, P., and Vritsiou, B.-H.. Geometry of Isotropic Convex Bodies (Mathematical Surveys and Monographs 196). American Mathematical Society, 2014.10.1090/surv/196CrossRefGoogle Scholar
Brieden, A., Gritzmann, P., Kannan, R., Klee, V., Lovász, L., and Simonovits, M.. Deterministic and randomized polynomial-time approximation of radii. Mathematika, 48(1–2):63105, 2001.10.1112/S0025579300014364CrossRefGoogle Scholar
Brooks, S., Gelman, A., Jones, G., and Meng, X. L.. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.10.1201/b10905CrossRefGoogle Scholar
Bubeck, S.. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3–4):231357, 2015.10.1561/2200000050CrossRefGoogle Scholar
Buchholz, A.. Operator Khintchine inequality in non-commutative probability. Mathematische Annalen, 319(1):116, 2001.10.1007/PL00004425CrossRefGoogle Scholar
Buchholz, A.. Optimal constants in Khintchine type inequalities for fermions, Rademachers and q-Gaussian operators. Bulletin of the Polish Academy of Sciences. Mathematics, 53(3):315321, 2005.10.4064/ba53-3-9CrossRefGoogle Scholar
Bühlmann, P. and van de Geer, S.. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, 2011.Google Scholar
Buldygin, V. V. and Kozachenko, Yu. V.. Subgaussian random variables. Ukrainian Mathematical Journal, 32:483489, 1980.10.1007/BF01087176CrossRefGoogle Scholar
Cai, T. T., Ren, Z., and Zhou, H. H.. Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation. Electronic Journal of Statistics, 10(1):159, 2016.Google Scholar
Candès, E. J.. The restricted isometry property and its implications for compressed sensing. Comptes Rendus. Mathematique, 346(9–10):589592, 2008.10.1016/j.crma.2008.03.014CrossRefGoogle Scholar
Candès, E. J. and Plan, Y.. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):23422359, 2011.10.1109/TIT.2011.2111771CrossRefGoogle Scholar
Candès, E. J. and Recht, B.. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111119, 2012.10.1145/2184319.2184343CrossRefGoogle Scholar
Candès, E. J. and Tao, T.. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):42034215, 2005.10.1109/TIT.2005.858979CrossRefGoogle Scholar
Candès, E. J. and Tao, T.. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):20532080, 2010.10.1109/TIT.2010.2044061CrossRefGoogle Scholar
Cantelli, F. P.. Sulla determinazione empirica delle leggi di probabilità. Giornale dell’Istituto Italiano degli Attuari, 4:421424, 1933.Google Scholar
Carl, B.. Inequalities of Bernstein–Jackson-type and the degree of compactness of operators in Banach spaces. Annales de l’Institut Fourier, 35(3):79118, 1985.10.5802/aif.1020CrossRefGoogle Scholar
Carl, B. and Pajor, A.. Gelfand numbers of operators with values in a Hilbert space. Inventiones Mathematicae, 94(3):479504, 1988.10.1007/BF01394273CrossRefGoogle Scholar
Casazza, P. G., Kutyniok, G., and Philipp, F.. Introduction to finite frame theory. Finite Frames: Theory and Applications, pp. 153. Springer, 2013.10.1007/978-0-8176-8373-3CrossRefGoogle Scholar
Chafaı̈, D., Guédon, O., Lecué, G., and Pajor, A.. Interactions Between Compressed Sensing Random Matrices and High Dimensional Geometry (Panoramas et Synthèses 37). Société Mathématique de France, 2012.Google Scholar
Chandrasekaran, V., Recht, B., Parrilo, P. A., and Willsky, A. S.. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805849, 2012.10.1007/s10208-012-9135-7CrossRefGoogle Scholar
Chen, J. and Yuan, M.. One-bit phase retrieval: Optimal rates and efficient algorithms. Preprint, arXiv:2405.04733, 2024.Google Scholar
Chen, R. Y., Gittens, A., and Tropp, J. A.. The masked sample covariance estimator: An analysis using matrix concentration inequalities. Information and Inference: A Journal of the IMA, 1(1):220, 2012.10.1093/imaiai/ias001CrossRefGoogle Scholar
Chernoff, H.. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Annals of Mathematical Statistics, 23(4):493507, 1952.10.1214/aoms/1177729330CrossRefGoogle Scholar
Chevet, S.. Séries de variables aléatoires gaussiennes à valeurs dans. Application aux produits d’espaces de Wiener abstraits. Séminaire Maurey–Schwartz 1977–78, Talk no. 19, 15pp., 1978.Google Scholar
Chewi, S., Niles-Weed, J., and Rigollet, P.. Statistical optimal transport. Preprint, arXiv:2407.18163, 2024.Google Scholar
Chin, P., Rao, A., and Vu, V.. Stochastic block model and community detection in sparse graphs: A spectral algorithm with optimal rate of recovery. Conference on Learning Theory (Proceedings of Machine Learning Research 40), pp. 391423. PMLR, 2015.Google Scholar
Cirel’son, B. S., Ibragimov, L. A., and Sudakov, V. N.. Norms of Gaussian sample functions. Proceedings of the 3rd Japan–USSR Symposium on Probability Theory (Lecture Notes in Mathematics 550), pp. 2041. Springer, 1976.10.1007/BFb0077482CrossRefGoogle Scholar
Cohen, A., Dahmen, W., and DeVore, R.. Compressed sensing and best k-term approximation. Journal of the American Mathematical Society, 22(1):211231, 2009.10.1090/S0894-0347-08-00610-3CrossRefGoogle Scholar
Cvetkovic, Z., Daubechies, I., and Logan, B. F.. Single-bit oversampled A/D conversion with exponential accuracy in the bit rate. IEEE Transactions on Information Theory, 53(11):39793989, 2007.10.1109/TIT.2007.907508CrossRefGoogle Scholar
Dafnis, N., Giannopoulos, A., and Tsolomitis, A.. Asymptotic shape of a random polytope in a convex body. Journal of Functional Analysis, 257(9):28202839, 2009.10.1016/j.jfa.2009.06.027CrossRefGoogle Scholar
Davenport, M. A., Duarte, M. F., Eldar, Y. C., and Kutyniok, G.. Introduction to compressed sensing. Compressed Sensing: Theory and Applications, pp. 164. Cambridge University Press, 2012.Google Scholar
Davenport, M. A., Plan, Y., van den Berg, E., and Wootters, M.. 1-bit matrix completion. Information and Inference: A Journal of the IMA, 3(3):189223, 2014.10.1093/imaiai/iau006CrossRefGoogle Scholar
Davenport, M. A. and Romberg, J.. An overview of low-rank matrix recovery from incomplete observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608622, 2016.10.1109/JSTSP.2016.2539100CrossRefGoogle Scholar
Davidson, K. R. and Szarek, S. J.. Local operator theory, random matrices and Banach spaces. Handbook of the Geometry of Banach Spaces, vol. 1, pp. 317366. Elsevier, 2001.Google Scholar
Davis, C. and Kahan, W. M.. The rotation of eigenvectors by a perturbation. III. SIAM Journal on Numerical Analysis, 7(1):146, 1970.10.1137/0707001CrossRefGoogle Scholar
de la Peña, V. H. and Giné, E.. Decoupling: From Dependence to Independence (Probability and Its Applications). Springer, 1999.10.1007/978-1-4612-0537-1CrossRefGoogle Scholar
de la Peña, V. H. and Montgomery-Smith, S. J.. Decoupling inequalities for the tail probabilities of multivariate U-statistics. Annals of Probability, 23(2):806816, 1995.Google Scholar
Deshpande, A. and Vempala, S.. Adaptive sampling and fast low-rank matrix approximation. International Workshop on Approximation Algorithms for Combinatorial Optimization, pp. 292303. Springer, 2006.Google Scholar
Devroye, L., Lerasle, M., Lugosi, G., and Oliveira, R. I.. Subgaussian mean estimators. Annals of Statistics, 44(6):26952725, 2016.10.1214/16-AOS1440CrossRefGoogle Scholar
Dhara, S., Mukherjee, D., and Ramanan, K.. On r-to-p norms of random matrices with nonnegative entries: Asymptotic normality and -bounds for the maximizer. Annals of Applied Probability, 34(6):50765115, 2024.10.1214/24-AAP2061CrossRefGoogle Scholar
Diaconis, P. and Freedman, D.. An elementary proof of Stirling’s formula. American Mathematical Monthly, 93(2):123125, 1986.10.1080/00029890.1986.11971767CrossRefGoogle Scholar
Diaconis, P. and Freedman, D.. A dozen de Finetti-style results in search of a theory. Annales de l’Institut Henri Poincaré (B) – Probabilités et Statistiques, 23(S2):397423, 1987.Google Scholar
Dirksen, S.. Tail bounds via generic chaining. Electronic Journal of Probability, 20:129, 2015.Google Scholar
Dirksen, S., Maly, J., and Rauhut, H.. Covariance estimation under one-bit quantization. Annals of Statistics, 50(6):35383562, 2022.10.1214/22-AOS2239CrossRefGoogle Scholar
Dirksen, S. and Mendelson, S.. Non-gaussian hyperplane tessellations and robust one-bit compressed sensing. Journal of the European Mathematical Society, 23(9):29132947, 2021.Google Scholar
Dirksen, S., Mendelson, S., and Stollenwerk, A.. Sharp estimates on random hyperplane tessellations. SIAM Journal on Mathematics of Data Science, 4(4):13961419, 2022.Google Scholar
Dirksen, S., Mendelson, S., and Stollenwerk, A.. Fast metric embedding into the Hamming cube. SIAM Journal on Computing, 53(2):315345, 2024.10.1137/22M1520220CrossRefGoogle Scholar
Donoho, D. L., Gavish, M., and Montanari, A.. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising. Proceedings of the National Academy of Sciences, 110(21):84058410, 2013.10.1073/pnas.1306110110CrossRefGoogle ScholarPubMed
Donoho, D. L., Javanmard, A., and Montanari, A.. Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing. IEEE Transactions on Information Theory, 59(11):74347464, 2013.10.1109/TIT.2013.2274513CrossRefGoogle Scholar
Donoho, D. L., Johnstone, I., and Montanari, A.. Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising. IEEE Transactions on Information Theory, 59(6):33963433, 2013.Google Scholar
Donoho, D. L., Maleki, A., and Montanari, A.. The noise-sensitivity phase transition in compressed sensing. IEEE Transactions on Information Theory, 57(10):69206941, 2011.10.1109/TIT.2011.2165823CrossRefGoogle Scholar
Donoho, D. L. and Tanner, J.. Counting faces of randomly projected polytopes when the projection radically lowers dimension. Journal of the American Mathematical Society, 22(1):153, 2009.10.1090/S0894-0347-08-00600-0CrossRefGoogle Scholar
Drineas, P. and Kannan, R.. Pass efficient algorithms for approximating large matrices. SODA ’03: Proceedings of the Fourteenth Annual ACM–SIAM Symposium on Discrete Algorithms, vol. 3, pp. 223232. ACM, 2003.Google Scholar
Dudley, R. M.. Central limit theorems for empirical measures. Annals of Probability, 6(6):899929, 1978.10.1214/aop/1176995384CrossRefGoogle Scholar
Dudley, R. M.. Uniform Central Limit Theorems, 2nd edn (Cambridge Studies in Advanced Mathematics 142). Cambridge University Press, 2014.10.1017/CBO9781139014830CrossRefGoogle Scholar
Durrett, R.. Probability: Theory and Examples (Cambridge Series in Statistical and Probabilistic Mathematics 49). Cambridge University Press, 2019.10.1017/9781108591034CrossRefGoogle Scholar
Dvoretzky, A.. A theorem on convex bodies and applications to Banach spaces. Proceedings of the National Academy of Sciences, 45(2):223226, 1959.10.1073/pnas.45.2.223CrossRefGoogle Scholar
Dvoretzky, A.. Some results on convex bodies and Banach spaces. Matematika, 8(1):73102, 1964.Google Scholar
Eisenstat, D. and Angluin, D.. The VC dimension of k-fold union. Information Processing Letters, 101(5):181184, 2007.10.1016/j.ipl.2006.10.004CrossRefGoogle Scholar
Eldridge, J., Belkin, M., and Wang, Y.. Unperturbed: Spectral analysis beyond Davis–Kahan. Algorithmic Learning Theory (Proceedings of Machine Learning Research 83), pp. 321358. PMLR, 2018.Google Scholar
Erdős, P.. On a lemma of Littlewood and Offord. Bulletin of the American Mathematical Society, 51:898902, 1945.10.1090/S0002-9904-1945-08454-7CrossRefGoogle Scholar
Erdős, P. and Rényi, A.. On the strength of connectedness of a random graph. Acta Mathematica Hungarica, 12(1):261267, 1961.Google Scholar
Feller, W.. An Introduction to Probability Theory and Its Applications, 3rd edn, vol. I. Wiley, 1968.Google Scholar
Fernique, X. M., Conze, J. P., Gani, J., and Fernique, X.. Regularité des Trajectoires des Fonctions Aléatoires Gaussiennes. Springer, 1975.10.1007/BFb0080190CrossRefGoogle Scholar
Folland, G. B.. A Course in Abstract Harmonic Analysis. CRC Press, 2016.10.1201/b19172CrossRefGoogle Scholar
Fortunato, S. and Hric, D.. Community detection in networks: A user guide. Physics Reports, 659:144, 2016.Google Scholar
Foucart, S. and Rauhut, H.. A Mathematical Introduction to Compressive Sensing (Applied and Numerical Harmonic Analysis). Springer, 2013.10.1007/978-0-8176-4948-7CrossRefGoogle Scholar
Frankl, P.. On the trace of finite sets. Journal of Combinatorial Theory, Series A, 34(1):4145, 1983.10.1016/0097-3165(83)90038-9CrossRefGoogle Scholar
Friedlander, M. P., Jeong, H., Plan, Y., and Yılmaz, Ö.. NBIHT: An efficient algorithm for 1-bit compressed sensing with optimal error decay rate. IEEE Transactions on Information Theory, 68(2):11571177, 2021.10.1109/TIT.2021.3124598CrossRefGoogle Scholar
Frieze, A. and Kannan, R.. Quick approximation to matrices and applications. Combinatorica, 19(2):175220, 1999.10.1007/s004930050052CrossRefGoogle Scholar
Frieze, A. and Karonski, M.. Introduction to Random Graphs. Cambridge University Press, 2016.Google Scholar
Yu, A.. Garnaev and E. D. Gluskin. On diameters of the Euclidean sphere. Doklady Akademii Nauk SSSR, 277(5):200204, 1984.Google Scholar
Gasull, A. and Utzet, F.. Approximating Mills ratio. Journal of Mathematical Analysis and Applications, 420(2):18321853, 2014.10.1016/j.jmaa.2014.05.034CrossRefGoogle Scholar
Giannopoulos, A. A. and Milman, V. D.. Euclidean structure in finite dimensional normed spaces. Handbook of the Geometry of Banach Spaces, vol. 1, pp. 707779. Elsevier, 2001.Google Scholar
Giné, E., Götze, F., and Mason, D. M.. When is the student t-statistic asymptotically standard normal? Annals of Probability, 25(3):15141531, 1997.10.1214/aop/1024404523CrossRefGoogle Scholar
Giné, E. and Nickl, R.. Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press, 2021.Google Scholar
Giraud, C.. Introduction to High-Dimensional Statistics. Chapman & Hall/CRC Press, 2021.10.1201/9781003158745CrossRefGoogle Scholar
Glivenko, V.. Sulla determinazione empirica delle leggi di probabilità. Giornale dell’Istituto Italiano degli Attuari, 4:9299, 1933.Google Scholar
Goemans, M. X. and Williamson, D. P.. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):11151145, 1995.10.1145/227683.227684CrossRefGoogle Scholar
Gordon, Y.. Some inequalities for Gaussian processes and applications. Israel Journal of Mathematics, 50:265289, 1985.10.1007/BF02759761CrossRefGoogle Scholar
Gordon, Y.. Elliptically contoured distributions. Probability Theory and Related Fields, 76(4):429438, 1987.10.1007/BF00960067CrossRefGoogle Scholar
Gordon, Y.. Gaussian processes and almost spherical sections of convex bodies. Annals of Probability, 16(1):180188, 1988.10.1214/aop/1176991893CrossRefGoogle Scholar
Gordon, Y.. On Milman’s inequality and random subspaces which escape through a mesh in . Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 1986–87, pp. 84106. Springer, 1988.10.1007/BFb0081737CrossRefGoogle Scholar
Gordon, Y.. Majorization of Gaussian processes and geometric applications. Probability Theory and Related Fields, 91(2):251267, 1992.10.1007/BF01291425CrossRefGoogle Scholar
Götze, F., Sambale, H., and Sinulis, A.. Concentration inequalities for polynomials in α-subexponential random variables. Electronic Journal of Probability, 26(95):122, 2021.10.1214/21-EJP606CrossRefGoogle Scholar
Goyal, N., Vempala, S., and Xiao, Y.. Fourier PCA and robust tensor decomposition. Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, pp. 584593. ACM, 2014.10.1145/2591796.2591875CrossRefGoogle Scholar
Gromov, M.. Paul Levy’s isoperimetric inequality. Report. IHES, 1980.Google Scholar
Gross, D.. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):15481566, 2011.10.1109/TIT.2011.2104999CrossRefGoogle Scholar
Grothendieck, A.. Résumé de la Théorie Métrique des Produits Tensoriels Topologiques, vol. 2. Sociedade de Matemática de São Paulo, 1956.Google Scholar
Guédon, O.. Concentration phenomena in high dimensional geometry. ESAIM: Proceedings, 44:4760, 2014.10.1051/proc/201444002CrossRefGoogle Scholar
Guédon, O. and Vershynin, R.. Community detection in sparse networks via Grothendieck’s inequality. Probability Theory and Related Fields, 165(3):10251049, 2016.10.1007/s00440-015-0659-zCrossRefGoogle Scholar
Haagerup, U.. The best constants in the Khintchine inequality. Studia Mathematica, 70(3):231283, 1981.10.4064/sm-70-3-231-283CrossRefGoogle Scholar
Hajek, B., Wu, Y., and Xu, J.. Achieving exact cluster recovery threshold via semidefinite programming. IEEE Transactions on Information Theory, 62(5):27882797, 2016.10.1109/TIT.2016.2546280CrossRefGoogle Scholar
Hanson, D. L. and Wright, F. T.. A bound on tail probabilities for quadratic forms in independent random variables. Annals of Mathematical Statistics, 42(3):10791083, 1971.10.1214/aoms/1177693335CrossRefGoogle Scholar
Harper, L. H.. Optimal numberings and isoperimetric problems on graphs. Journal of Combinatorial Theory, 1(3):385393, 1966.10.1016/S0021-9800(66)80059-5CrossRefGoogle Scholar
Hastie, T., Tibshirani, R., and Friedman, J.. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, corrected 2nd edn. Springer, 2017.Google Scholar
Hastie, T., Tibshirani, R., and Wainwright, M.. Statistical Learning with Sparsity (Monographs on Statistics and Applied Probability 143). Chapman & Hall/CRC Press, 2015.10.1201/b18401CrossRefGoogle Scholar
Haussler, D. and Long, P. M.. A generalization of Sauer’s lemma. Journal of Combinatorial Theory, Series A, 71(2):219240, 1995.10.1016/0097-3165(95)90001-2CrossRefGoogle Scholar
He, Y., Wang, K., and Zhu, Y.. Sparse Hanson–Wright inequalities with applications. Preprint, arXiv:2410.15652, 2024.Google Scholar
Hitczenko, P. and Kwapien, S.. On the Rademacher series. Probability in Banach Spaces, 9 (Progress in Probability 35), pp. 3136. Springer, 1994.Google Scholar
Hoeffding, W.. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):1330, 1963.10.1080/01621459.1963.10500830CrossRefGoogle Scholar
Hofmann, T., Schölkopf, B., and Smola, A. J.. Kernel methods in machine learning. Annals of Statistics, 36(3):11711220, 2008.10.1214/009053607000000677CrossRefGoogle Scholar
Holland, P. W., Laskey, K. B., and Leinhardt, S.. Stochastic blockmodels: First steps. Social Networks, 5(2):109137, 1983.10.1016/0378-8733(83)90021-7CrossRefGoogle Scholar
Hoory, S., Linial, N., and Wigderson, A.. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43(4):439561, 2006.10.1090/S0273-0979-06-01126-8CrossRefGoogle Scholar
Hsu, D. and Kakade, S. M.. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pp. 1120. ACM, 2013.10.1145/2422436.2422439CrossRefGoogle Scholar
Hsu, D., Kakade, S. M., and Zhang, T.. A tail inequality for quadratic forms of subgaussian random vectors. Electronic Communications in Probability, 17(52):16, 2012.10.1214/ECP.v17-2079CrossRefGoogle Scholar
Huang, H. and Tikhomirov, K.. On dimension-dependent concentration for convex Lipschitz functions in product spaces. Electronic Journal of Probability, 28:123, 2023.10.1214/23-EJP944CrossRefGoogle Scholar
Huang, J., Jiao, Y., Lu, X., and Zhu, L.. Robust decoding from 1-bit compressive sampling with ordinary and regularized least squares. SIAM Journal on Scientific Computing, 40(4):A2062A2086, 2018.10.1137/17M1154102CrossRefGoogle Scholar
Huffer, F. W.. Slepian’s inequality via the central limit theorem. Canadian Journal of Statistics/Revue Canadienne de Statistique, 14(4):367370, 1986.10.2307/3315195CrossRefGoogle Scholar
Hug, D.. Random polytopes. Stochastic Geometry, Spatial Statistics and Random Fields: Asymptotic Methods, pp. 205238. Springer, 2012.Google Scholar
Jacques, L., Laska, J. N., Boufounos, P. T., and Baraniuk, R. G.. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Transactions on Information Theory, 59(4):20822102, 2013.10.1109/TIT.2012.2234823CrossRefGoogle Scholar
James, G., Witten, D., Hastie, T., and Tibshirani, R.. An Introduction to Statistical Learning (Springer Texts in Statistics 112). Springer, 2013.10.1007/978-1-4614-7138-7CrossRefGoogle Scholar
Jameson, G. J. O.. A simple proof of Stirling’s formula for the gamma function. Mathematical Gazette, 99(544):6874, 2015.10.1017/mag.2014.9CrossRefGoogle Scholar
Janson, S.. Graphons, cut norm and distance, couplings and rearrangements. New York Journal of Mathematics Monographs, 4, 76pp., 2013.Google Scholar
Janson, S., Luczak, T., and Rucinski, A.. Random Graphs. Wiley, 2011.Google Scholar
Javanmard, A., Montanari, A., and Ricci-Tersenghi, F.. Phase transitions in semidefinite relaxations. Proceedings of the National Academy of Sciences, 113(16):E2218E2223, 2016.10.1073/pnas.1523097113CrossRefGoogle ScholarPubMed
Jeong, H., Li, X., Plan, Y., and Yilmaz, O.. Subgaussian matrices on sets: Optimal tail dependence and applications. Communications on Pure and Applied Mathematics, 75(8):17131754, 2022.10.1002/cpa.22024CrossRefGoogle Scholar
Jerrum, M., Valiant, L., and Vazirani, V.. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:186188, 1986.10.1016/0304-3975(86)90174-XCrossRefGoogle Scholar
Johnson, W. B., Lindenstrauss, J., and Schechtman, G.. Extensions of Lipschitz maps into Banach spaces. Israel Journal of Mathematics, 54(2):129138, 1986.10.1007/BF02764938CrossRefGoogle Scholar
Kahane, J.-P.. Propriétés locales des fonctions à séries de Fourier aléatoires. Studia Mathematica, 19(1):125, 1960.10.4064/sm-19-1-1-25CrossRefGoogle Scholar
Kahane, J.-P.. Une inégalité du type de Slepian et Gordon sur les processus gaussiens. Israel Journal of Mathematics, 55:109110, 1986.10.1007/BF02772698CrossRefGoogle Scholar
Kalai, A. T., Moitra, A., and Valiant, G.. Disentangling Gaussians. Communications of the ACM, 55(2):113120, 2012.10.1145/2076450.2076474CrossRefGoogle Scholar
Keshavan, R. H., Montanari, A., and Oh, S.. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):29802998, 2010.10.1109/TIT.2010.2046205CrossRefGoogle Scholar
Khintchine, A.. Über dyadische brüche. Mathematische Zeitschrift, 18(1):109116, 1923.10.1007/BF01192399CrossRefGoogle Scholar
Khot, S., Kindler, G., Mossel, E., and O’Donnell, R.. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319357, 2007.10.1137/S0097539705447372CrossRefGoogle Scholar
Khot, S. and Naor, A.. Grothendieck-type inequalities in combinatorial optimization. Communications on Pure and Applied Mathematics, 65(7):9921035, 2012.10.1002/cpa.21398CrossRefGoogle Scholar
Klartag, B.. A central limit theorem for convex sets. Inventiones Mathematicae, 168(1):91131, 2007.10.1007/s00222-006-0028-8CrossRefGoogle Scholar
Klartag, B. and Mendelson, S.. Empirical processes and random projections. Journal of Functional Analysis, 225(1):229245, 2005.10.1016/j.jfa.2004.10.009CrossRefGoogle Scholar
Klochkov, Y. and Zhivotovskiy, N.. Uniform Hanson–Wright type concentration inequalities for unbounded entries via the entropy method. Electronic Journal of Probability, 25(22):130, 2020.10.1214/20-EJP422CrossRefGoogle Scholar
Knudson, K., Saab, R., and Ward, R.. One-bit compressive sensing with norm estimation. IEEE Transactions on Information Theory, 62(5):27482758, 2016.10.1109/TIT.2016.2527637CrossRefGoogle Scholar
Koltchinskii, V. and Lounici, K.. Concentration inequalities and moment bounds for sample covariance operators. Bernoulli, 23(1):110133, 2017.10.3150/15-BEJ730CrossRefGoogle Scholar
Koltchinskii, V. and Mendelson, S.. Bounding the smallest singular value of a random matrix without concentration. International Mathematics Research Notices, 2015(23):1299113008, 2015.Google Scholar
König, H.. On the best constants in the Khintchine inequality for Steinhaus variables. Israel Journal of Mathematics, 203:2357, 2014.10.1007/s11856-013-0006-yCrossRefGoogle Scholar
Kovačević, J. and Chebira, A.. An introduction to frames. Foundations and Trends in Signal Processing, 2(1):194, 2008.10.1561/2000000006CrossRefGoogle Scholar
Krein, M. G., Krasnosel’skii, M. A., and Milman, D. P.. On the defect indices of linear operators in Banach space and on some geometric questions. Sbornik Trudov Instituta Matematiki Akademii Nauk Ukraini SSR, 11:97112, 1948.Google Scholar
Krivine, J.-L.. Constantes de Grothendieck et fonctions de type positif sur les sphères. Séminaire Maurey–Schwartz 1977–78, Talks no. 1 and 2, 17pp., 1978.10.1016/0001-8708(79)90017-3CrossRefGoogle Scholar
Kulkarni, S. and Harman, G.. An Elementary Introduction to Statistical Learning Theory. Wiley, 2011.10.1002/9781118023471CrossRefGoogle Scholar
Larsen, K. G. and Nelson, J.. Optimality of the Johnson–Lindenstrauss lemma. 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 633638. IEEE, 2017.10.1109/FOCS.2017.64CrossRefGoogle Scholar
Laska, J. N., Wen, Z., Yin, W., and Baraniuk, R. G.. Trust, but verify: Fast and accurate signal recovery from 1-bit compressive measurements. IEEE Transactions on Signal Processing, 59(11):52895301, 2011.10.1109/TSP.2011.2162324CrossRefGoogle Scholar
Latala, R.. On the spectral norm of Rademacher matrices. Preprint, arXiv:2405.13656, 2024.Google Scholar
Latala, R. and Strzelecka, M.. Chevet-type inequalities for subexponential Weibull variables and estimates for norms of random matrices. Electronic Journal of Probability, 29:119, 2024.10.1214/24-EJP1151CrossRefGoogle Scholar
Latala, R. and Strzelecka, M.. Operator norms of random matrices with iid entries. Journal of Functional Analysis, 288(3):110720, 2025.10.1016/j.jfa.2024.110720CrossRefGoogle Scholar
Latala, R. and Strzelecka, M.. Operator norms of Gaussian matrices. Preprint, arXiv:2502.02186, 2025.Google Scholar
Latala, R., van Handel, R., and Youssef, P.. The dimension-free structure of nonhomogeneous random matrices. Inventiones Mathematicae, 214:10311080, 2018.10.1007/s00222-018-0817-xCrossRefGoogle Scholar
Laurent, M. and Vallentin, F.. Semidefinite optimization. Lecture Notes, April 2016.Google Scholar
Le, C. M., Levina, E., and Vershynin, R.. Concentration and regularization of random graphs. Random Structures & Algorithms, 51(3):538561, 2017.CrossRefGoogle Scholar
Lecué, G. and Mendelson, S.. Regularization and the small-ball method I: Sparse recovery. Annals of Statistics 46(2):611641, 2018.10.1214/17-AOS1562CrossRefGoogle Scholar
Lecué, G. and Mendelson, S.. Regularization and the small-ball method II: Complexity dependent error rates. Journal of Machine Learning Research, 18(146):148, 2017.Google Scholar
Ledoux, M.. The Concentration of Measure Phenomenon. American Mathematical Society, 2001.Google Scholar
Ledoux, M. and Talagrand, M.. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 2013.Google Scholar
Levina, E. and Vershynin, R.. Partial estimation of covariance matrices. Probability Theory and Related Fields, 153(3):405419, 2012.10.1007/s00440-011-0349-4CrossRefGoogle Scholar
Li, S. and Schramm, T.. Spectral clustering in the Gaussian mixture block model. Preprint, arXiv:2305.00979, 2023.Google Scholar
Liaw, C., Mehrabian, A., Plan, Y., and Vershynin, R.. A simple tool for bounding the deviation of random matrices on geometric sets. Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 2014–2016, pp. 277299. Springer, 2017.10.1007/978-3-319-45282-1_18CrossRefGoogle Scholar
Liberty, E.. Simple and deterministic matrix sketching. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 581588. ACM, 2013.10.1145/2487575.2487623CrossRefGoogle Scholar
Lindenstrauss, J. and Pelczynski, A.. Absolutely summing operators in lp-spaces and their applications. Studia Mathematica, 29(3):275326, 1968.10.4064/sm-29-3-275-326CrossRefGoogle Scholar
Littlewood, J. E.. On bounded bilinear forms in an infinite number of variables. Quarterly Journal of Mathematics, os-1(1):164174, 1930.10.1093/qmath/os-1.1.164CrossRefGoogle Scholar
Littlewood, J. E. and Offord, A. C.. On the number of real roots of a random algebraic equation. III. Rec. Math. [Mat. Sbornik] NS, 12(54):277286, 1943.Google Scholar
Löwner, K.. Über monotone matrixfunktionen. Mathematische Zeitschrift, 38(1):177216, 1934.10.1007/BF01170633CrossRefGoogle Scholar
Lugosi, G. and Mendelson, S.. Mean estimation and regression under heavy-tailed distributions – a survey. Foundations of Computational Mathematics, 19(5):11451190, 2019.10.1007/s10208-019-09427-xCrossRefGoogle Scholar
Lust-Piquard, F.. Inégalites de Khintchine dans Cp (1 < p ∞). Comptes Rendus de l’Académie des Sciences – Série I, 303:289292, 1986.Google Scholar
Lust-Piquard, F. and Pisier, G.. Non commutative Khintchine and Paley inequalities. Arkiv för Matematik, 29:241260, 1991.10.1007/BF02384340CrossRefGoogle Scholar
Makovoz, Yu.. A simple proof of an inequality in the theory of n-widths. Constructive Theory of Functions (Varna, 1987), pp. 305308. Bulgarian Academy of Sciences, 1988.Google Scholar
Martinsson, P.-G. and Tropp, J. A.. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 29:403572, 2020.10.1017/S0962492920000021CrossRefGoogle Scholar
Matousek, J.. Geometric Discrepancy: An Illustrated Guide (Algorithms and Combinatorics 18). Springer, 2009.Google Scholar
Matousek, J.. Lectures on Discrete Geometry, vol. 212. Springer Science & Business Media, 2013.Google Scholar
Matsumoto, N. and Mazumdar, A.. Binary iterative hard thresholding converges with optimal number of measurements for 1-bit compressed sensing. Journal of the ACM, 71(5):164, 2024.CrossRefGoogle Scholar
Maurey, B.. Construction de suites symétriques. Comptes Rendus de l’Académie des Sciences, Paris – Série AB, 288(14):A679–A681, 1979.Google Scholar
McCoy, M. B. and Tropp, J. A.. From Steiner formulas for cones to concentration of intrinsic volumes. Discrete & Computational Geometry, 51:926963, 2014.10.1007/s00454-014-9595-4CrossRefGoogle Scholar
McSherry, F.. Spectral partitioning of random graphs. Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, pp. 529537. IEEE, 2001.10.1109/SFCS.2001.959929CrossRefGoogle Scholar
Meckes, E.. Projections of probability distributions: A measure-theoretic Dvoretzky theorem. Geometric Aspects of Functional Analysis: Israel Seminar 2006–2010, pp. 317326. Springer, 2012.10.1007/978-3-642-29849-3_18CrossRefGoogle Scholar
Mehta, M. L.. Random Matrices. Elsevier, 2004.Google Scholar
Mendelson, S.. A few notes on statistical learning theory. Advanced Lectures on Machine Learning (Lecture Notes in Computer Science 2600), pp. 140. Springer, 2003.10.1007/3-540-36434-XCrossRefGoogle Scholar
Mendelson, S.. A remark on the diameter of random sections of convex bodies. Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 2011–2013, pp. 395404. Springer, 2014.10.1007/978-3-319-09477-9_25CrossRefGoogle Scholar
Mendelson, S.. Learning without concentration. Journal of the ACM, 62(3):125, 2015.10.1145/2699439CrossRefGoogle Scholar
Mendelson, S.. Extending the scope of the small-ball method. Studia Mathematica, 256(2):147167, 2021.10.4064/sm190420-21-11CrossRefGoogle Scholar
Mendelson, S., Pajor, A., and Tomczak-Jaegermann, N.. Reconstruction and subgaussian operators in asymptotic geometric analysis. Geometric and Functional Analysis, 17(4):12481282, 2007.CrossRefGoogle Scholar
Mendelson, S. and Vershynin, R.. Entropy and the combinatorial dimension. Inventiones Mathematicae, 152(1):3755, 2003.10.1007/s00222-002-0266-3CrossRefGoogle Scholar
Mendelson, S. and Zhivotovskiy, N.. Robust covariance estimation under L4L2 norm equivalence. Annals of Statistics, 48(3):16481664, 2020.10.1214/19-AOS1862CrossRefGoogle Scholar
Meyer, C. D.. Matrix Analysis and Applied Linear Algebra. SIAM, 2023.Google Scholar
Mezzadri, F.. How to generate random matrices from the classical compact groups. Notices of the American Mathematical Society, 54(5): 592604, 2007.Google Scholar
Milman, V. D.. New proof of the theorem of A. Dvoretzky on intersections of convex bodies. Functional Analysis and Its Applications, 5(4):288295, 1971.10.1007/BF01086740CrossRefGoogle Scholar
Milman, V. D.. Geometrical inequalities and mixed volumes in the local theory of Banach spaces. Astérisque, 131:373400, 1985.Google Scholar
Milman, V. D.. Random subspaces of proportional dimension of finite dimensional normed spaces: Approach through the isoperimetric inequality. Banach Spaces: Proceedings of the Missouri Conference, pp. 106115. Springer, 1985.CrossRefGoogle Scholar
Milman, V. D.. A note on a low M*-estimate. Geometry of Banach Spaces (Proceedings of the Conference Held in Strobl, Austria 1989) (London Mathematical Society Lecture Note Series 158), pp. 219229. Cambridge University Press, 1990.Google Scholar
Milman, V. D.. Surprising geometric phenomena in high-dimensional convexity theory. European Congress of Mathematics: Budapest, July 22–26, 1996, vol. II, pp. 7391. Springer, 1998.10.1007/978-3-0348-8898-1_4CrossRefGoogle Scholar
Milman, V. D. and Schechtman, G.. Asymptotic Theory of Finite Dimensional Normed Spaces (Lecture Notes in Mathematics 1200). Springer, 1986.Google Scholar
Milman, V. D. and Schechtman, G.. Global vs. local asymptotic theories of finite dimensional normed spaces. Duke Mathematical Journal, 90:7393, 1997.10.1215/S0012-7094-97-09003-7CrossRefGoogle Scholar
Minsker, S.. On some extensions of Bernstein’s inequality for self-adjoint operators. Statistics & Probability Letters, 127:111119, 2017.CrossRefGoogle Scholar
Mitzenmacher, M. and Upfal, E.. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis. Cambridge University Press, 2017.Google Scholar
Moitra, A.. Algorithmic Aspects of Machine Learning. Cambridge University Press, 2018.10.1017/9781316882177CrossRefGoogle Scholar
Moitra, A. and Valiant, G.. Settling the polynomial learnability of mixtures of Gaussians. 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pp. 93102. IEEE, 2010.10.1109/FOCS.2010.15CrossRefGoogle Scholar
Montgomery-Smith, S. J.. The distribution of Rademacher sums. Proceedings of the American Mathematical Society, 109(2):517522, 1990.10.1090/S0002-9939-1990-1013975-0CrossRefGoogle Scholar
Mörters, P. and Peres, Y.. Brownian Motion (Cambridge Series in Statistical and Probabilistic Mathematics 30). Cambridge University Press, 2010.Google Scholar
Mossel, E., Neeman, J., and Sly, A.. Belief propagation, robust reconstruction and optimal recovery of block models. Conference on Learning Theory (Proceedings of Machine Learning Resarch 35), pp. 356370. PMLR, 2014.Google Scholar
Murray, R., Demmel, J., Mahoney, M. W., et al. Randomized numerical linear algebra: A perspective on the field with an eye to software. Preprint, arXiv:2302.11474, 2023.Google Scholar
Nemirovsky, A. and Yudin, D.. Problem Complexity and Method Efficiency in Optimization. Wiley Interscience, 1983.Google Scholar
Newman, M.. Networks: An Introduction. Oxford University Press, 2010.CrossRefGoogle Scholar
Nguyen, H. H. and Vu, V. H.. Small ball probability, inverse theorems, and applications. Erdős Centennial (Bolyai Society Mathematical Studies 25), pp. 409463. Springer, 2013.10.1007/978-3-642-39286-3_16CrossRefGoogle Scholar
Oliveira, R. I.. Concentration of the adjacency matrix and of the Laplacian in random graphs with independent edges. Preprint, arXiv:0911.0600, 2009.Google Scholar
Oliveira, R. I.. Sums of random Hermitian matrices and an inequality by Rudelson. Electronic Communications in Probability, 15:203212, 2010.10.1214/ECP.v15-1544CrossRefGoogle Scholar
Oliveira, R. I. and Rico, Z. F.. Improved covariance estimation: Optimal robustness and subgaussian guarantees under heavy tails. Annals of Statistics, 52(5):19531977, 2024.10.1214/24-AOS2407CrossRefGoogle Scholar
O’Rourke, S., Vu, V., and Wang, K.. Random perturbation of low rank matrices: Improving classical bounds. Linear Algebra and Its Applications, 540:2659, 2018.10.1016/j.laa.2017.11.014CrossRefGoogle Scholar
O’Rourke, S., Vu, V., and Wang, K.. Optimal subspace perturbation bounds under Gaussian noise. 2023 IEEE International Symposium on Information Theory (ISIT), pp. 26012606. IEEE, 2023.CrossRefGoogle Scholar
Ostrovskii, M. I.. Topologies on the set of all subspaces of a Banach space and related questions of Banach space geometry. Quaestiones Mathematicae, 17(3):259319, 1994.10.1080/16073606.1994.9631766CrossRefGoogle Scholar
Oymak, S. and Recht, B.. Near-optimal bounds for binary embeddings of arbitrary sets. Preprint, arXiv:1512.04433, 2015.Google Scholar
Oymak, S., Thrampoulidis, C., and Hassibi, B.. The squared-error of generalized LASSO: A precise analysis. 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 10021009. IEEE, 2013.10.1109/Allerton.2013.6736635CrossRefGoogle Scholar
Oymak, S. and Tropp, J. A.. Universality laws for randomized dimension reduction, with applications. Information and Inference: A Journal of the IMA, 7(3):337446, 2018.10.1093/imaiai/iax011CrossRefGoogle Scholar
Pajor, A.. Sous Espaces des Espaces de Banach. Editions Hermann, 1985.Google Scholar
Pajor, A. and Tomczak-Jaegermann, N.. Subspaces of small codimension of finite-dimensional Banach spaces. Proceedings of the American Mathematical Society, 97(4):637642, 1986.10.1090/S0002-9939-1986-0845980-8CrossRefGoogle Scholar
Petz, D.. A survey of certain trace inequalities. Banach Center Publications, 30(1):287298, 1994.10.4064/-30-1-287-298CrossRefGoogle Scholar
Pisier, G.. Remarques sur un résultat non publié de B. Maurey. Séminaire d’Analyse Fonctionnelle (Maurey–Schwartz) 1980–81, Talk no. 5, 12pp., 1981.Google Scholar
Pisier, G.. The Volume of Convex Bodies and Banach Space Geometry (Cambridge Tracts in Mathematics 94). Cambridge University Press, 1999.Google Scholar
Pisier, G.. Grothendieck’s theorem, past and present. Bulletin of the American Mathematical Society, 49(2):237323, 2012.10.1090/S0273-0979-2011-01348-9CrossRefGoogle Scholar
Plan, Y. and Vershynin, R.. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Transactions on Information Theory, 59(1):482494, 2012.10.1109/TIT.2012.2207945CrossRefGoogle Scholar
Plan, Y. and Vershynin, R.. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8):12751297, 2013.CrossRefGoogle Scholar
Plan, Y. and Vershynin, R.. Dimension reduction by random hyperplane tessellations. Discrete & Computational Geometry, 51(2):438461, 2014.10.1007/s00454-013-9561-6CrossRefGoogle Scholar
Plan, Y. and Vershynin, R.. Random matrices acting on sets: Independent columns. Preprint, arXiv:2502.16827, 2025.Google Scholar
Plan, Y., Vershynin, R., and Yudovina, E.. High-dimensional estimation with geometric constraints. Information and Inference: A Journal of the IMA, 6(1):140, 2017.Google Scholar
Pollard, D.. Empirical processes: Theory and applications. NSF-CBMS Regional Conference Series in Probability and Statistics, vol. 2, 86pp. IMS, 1990.10.1214/cbms/1462061091CrossRefGoogle Scholar
Polyanskiy, Y. and Wu, Y.. Information Theory: From Coding to Learning. Cambridge University Press, 2025.Google Scholar
Rauhut, H.. Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery (Radon Series on Computational and Applied Mathematics 9), pp. 192. De Gruyter, 2010.Google Scholar
Recht, B.. A simpler approach to matrix completion. Journal of Machine Learning Research, 12(12):34133430, 2011.Google Scholar
Rietz, R. E.. A proof of the Grothendieck inequality. Israel Journal of Mathematics, 19(3):271276, 1974.10.1007/BF02757725CrossRefGoogle Scholar
Rigollet, P.. High-dimensional statistics. Lecture Notes, 2010.Google Scholar
Rivasplata, O.. Subgaussian random variables: An expository note. Technical Report, University of Alberta, 2012.Google Scholar
Robbins, H.. A remark on Stirling’s formula. American Mathematical Monthly, 62:2629, 1955.Google Scholar
Rudelson, M.. Random vectors in the isotropic position. Journal of Functional Analysis, 164(1):6072, 1999.CrossRefGoogle Scholar
Rudelson, M. and Vershynin, R.. Combinatorics of random processes and sections of convex bodies. Annals of Mathematics, 164(2):603648, 2006.10.4007/annals.2006.164.603CrossRefGoogle Scholar
Rudelson, M. and Vershynin, R.. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54(4):21–es, 2007.10.1145/1255443.1255449CrossRefGoogle Scholar
Rudelson, M. and Vershynin, R.. On sparse reconstruction from Fourier and Gaussian measurements. Communications on Pure and Applied Mathematics, 61(8):10251045, 2008.10.1002/cpa.20227CrossRefGoogle Scholar
Rudelson, M. and Vershynin, R.. Non-asymptotic theory of random matrices: Extreme singular values. Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), pp. 15761602. World Scientific, 2010.Google Scholar
Rudelson, M. and Vershynin, R.. Hanson–Wright inequality and subgaussian concentration. Electronic Communications in Probability, 18:19, 2013.10.1214/ECP.v18-2865CrossRefGoogle Scholar
Sambale, H.. Some notes on concentration for α-subexponential random variables. High Dimensional Probability IX: The Ethereal Volume, pp. 167192. Springer, 2023.CrossRefGoogle Scholar
Sauer, N.. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145147, 1972.10.1016/0097-3165(72)90019-2CrossRefGoogle Scholar
Schechtman, G.. Two observations regarding embedding subsets of Euclidean spaces in normed spaces. Advances in Mathematics, 200(1):125135, 2006.10.1016/j.aim.2004.11.003CrossRefGoogle Scholar
Schechtman, G. and Zinn, J.. On the volume of the intersection of two balls. Proceedings of the American Mathematical Society, 110(1):217224, 1990.Google Scholar
Schneider, R. and Weil, W.. Stochastic and Integral Geometry, vol. 1. Springer, 2008.CrossRefGoogle Scholar
Schur, I.. Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen. Journal für die Reine und Angewandte Mathematik, 140:128, 1911.10.1515/crll.1911.140.1CrossRefGoogle Scholar
Seginer, Y.. The expected norm of random matrices. Combinatorics, Probability and Computing, 9(2):149166, 2000.CrossRefGoogle Scholar
Shelah, S.. A combinatorial problem: Stability and order for models and theories in infinitary languages. Pacific Journal of Mathematics, 41(1):247261, 1972.10.2140/pjm.1972.41.247CrossRefGoogle Scholar
Sheu, Y.-C. and Wang, T.-C.. Matrix deviation inequality for -norm. Random Matrices: Theory and Applications, 12(4):2350007, 2023.CrossRefGoogle Scholar
Shevtsova, I.. On the absolute constants in the Berry–Esseen type inequalities for identically distributed summands. Preprint, arXiv:1111.6554, 2011.Google Scholar
Simonovits, M.. How to compute the volume in high dimension? Mathematical Programming, 97:337374, 2003.10.1007/s10107-003-0447-xCrossRefGoogle Scholar
Slepian, D.. The one-sided barrier problem for Gaussian noise. Bell System Technical Journal, 41(2):463501, 1962.10.1002/j.1538-7305.1962.tb02419.xCrossRefGoogle Scholar
Slepian, D.. On the zeros of Gaussian noise. Time Series Analysis, pp. 104115. Wiley, 1963.Google Scholar
Stewart, G. W. and Sun, J.-G.. Matrix Perturbation Theory. Academic Press, 1990.Google Scholar
Stojnic, M.. Various thresholds for -optimization in compressed sensing. Preprint, arXiv:0907.3666, 2009.Google Scholar
Stojnic, M.. Regularly random duality. Preprint, arXiv:1303.7295, 2013.Google Scholar
Sudakov, V.. Gaussian random processes and measures of solid angles in Hilbert space. Doklady Akademii Nauk, 197(1):4345, 1971.Google Scholar
Sudakov, V.. Geometric Problems in the Theory of Infinite-Dimensional Probability Distributions. American Mathematical Society, 1979.Google Scholar
Szarek, S.. On the best constants in the Khinchin inequality. Studia Mathematica, 2(58):197208, 1976.10.4064/sm-58-2-197-208CrossRefGoogle Scholar
Szarek, S. J. and Talagrand, M. An “isomorphic” version of the Sauer–Shelah lemma and the Banach–Mazur distance to the cube. Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 1987–88, pp. 105–112. Springer, 1989.
Szarek, S. J. and Talagrand, M. On the convexified Sauer–Shelah theorem. Journal of Combinatorial Theory, Series B, 69(2):183–192, 1997.
Talagrand, M. A new look at independence. Annals of Probability, 24(1):1–34, 1996.
Talagrand, M. The Generic Chaining: Upper and Lower Bounds of Stochastic Processes. Springer, 2005.
Talagrand, M. Upper and Lower Bounds for Stochastic Processes: Modern Methods and Classical Problems. Springer, 2014.
Talagrand, M. Upper and Lower Bounds for Stochastic Processes: Decomposition Theorems. Springer, 2022.
Tao, T. and Vu, V. From the Littlewood–Offord problem to the circular law: Universality of the spectral distribution of random matrices. Bulletin of the American Mathematical Society, 46(3):377–396, 2009.
Tibshirani, R. Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 58(1):267–288, 1996.
Tikhomirov, K. Sample covariance matrices of heavy-tailed distributions. International Mathematics Research Notices, 2018(20):6254–6289, 2018.
Tomczak-Jaegermann, N. Banach–Mazur Distances and Finite-Dimensional Operator Ideals. Pitman, 1989.
Tropp, J. A. Norms of random submatrices and sparse approximation. Comptes Rendus. Mathématique, 346(23–24):1271–1274, 2008.
Tropp, J. A. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12:389–434, 2012.
Tropp, J. A. Convex recovery of a structured signal from independent random linear measurements. Sampling Theory, a Renaissance: Compressive Sensing and Other Developments, pp. 67–101. Birkhäuser, 2015.
Tropp, J. A. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1–2):1–230, 2015.
Tropp, J. A. and Webber, R. J. Randomized algorithms for low-rank matrix approximation: Design, analysis, and applications. Preprint, arXiv:2306.12418, 2023.
Tropp, J. A., Yurtsever, A., Udell, M., and Cevher, V. Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications, 38(4):1454–1485, 2017.
van de Geer, S. A. Applications of Empirical Process Theory. Cambridge University Press, 2000.
van der Vaart, A. W. and Wellner, J. A. Weak Convergence and Empirical Processes, with Applications to Statistics, 2nd edn (Springer Series in Statistics). Springer, 2023.
van Handel, R. Probability in high dimension. Lecture Notes (Princeton University), 2014.
van Handel, R. Structured random matrices. Convexity and Concentration (IMA Volumes in Mathematics and Its Applications 161), pp. 107–156. Springer, 2017.
van Handel, R. Chaining, interpolation, and convexity. Journal of the European Mathematical Society, 20(10):2413–2435, 2018.
van Handel, R. Chaining, interpolation and convexity II: The contraction principle. Annals of Probability, 46(3):1764–1805, 2018.
van Lint, J. H. Introduction to Coding Theory (Graduate Texts in Mathematics 86). Springer, 1998.
Vapnik, V. N. and Chervonenkis, A. Ya. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280, 1971.
Vempala, S. Geometric random walks: A survey. Combinatorial and Computational Geometry (Mathematical Sciences Research Institute Publications 52), pp. 573–612. Cambridge University Press, 2005.
Vershynin, R. Integer cells in convex sets. Advances in Mathematics, 197(1):248–273, 2005.
Vershynin, R. Golden–Thompson inequality. Lecture Notes, 2009.
Vershynin, R. A note on sums of independent random matrices after Ahlswede–Winter. Lecture Notes, 2009.
Vershynin, R. Introduction to the non-asymptotic analysis of random matrices. Compressed Sensing: Theory and Applications, pp. 210–268. Cambridge University Press, 2012.
Vershynin, R. Estimation in high dimensions: A geometric perspective. Sampling Theory, a Renaissance: Compressive Sensing and Other Developments, pp. 3–66. Birkhäuser, 2015.
Villani, C. Topics in Optimal Transportation (Graduate Studies in Mathematics 58). American Mathematical Society, 2021.
Vu, V. Singular vectors under random perturbation. Random Structures & Algorithms, 39(4):526–538, 2011.
Wainwright, M. J. High-Dimensional Statistics: A Non-Asymptotic Viewpoint (Cambridge Series in Statistical and Probabilistic Mathematics 48). Cambridge University Press, 2019.
Wang, K. Analysis of singular subspaces under random perturbations. Preprint, arXiv:2403.09170, 2024.
Wedin, P.-Å. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12:99–111, 1972.
Wedin, P.-Å. On angles between subspaces of a finite dimensional inner product space. Matrix Pencils: Proceedings of a Conference Held at Pite Havsbad, Sweden, March 22–24, 1982 (Lecture Notes in Mathematics 973), pp. 263–285. Springer, 2006.
Wendel, J. G. A problem in geometric probability. Mathematica Scandinavica, 11(1):109–111, 1962.
Wigderson, A. and Xiao, D. Derandomizing the Ahlswede–Winter matrix-valued Chernoff bound using pessimistic estimators, and applications. Theory of Computing, 4(1):53–76, 2008.
Wright, F. T. A bound on tail probabilities for quadratic forms in independent random variables whose distributions are not necessarily symmetric. Annals of Probability, 1(6):1068–1070, 1973.
Xu, C. and Jacques, L. Quantized compressive sensing with RIP matrices: The benefit of dithering. Information and Inference: A Journal of the IMA, 9(3):543–586, 2020.
Yu, Y., Wang, T., and Samworth, R. J. A useful variant of the Davis–Kahan theorem for statisticians. Biometrika, 102(2):315–323, 2015.
Zhang, A. Y. and Zhou, H. H. Minimax rates of community detection in stochastic block models. Annals of Statistics, 44(5):2252–2280, 2016.
Zhou, S. Sparse Hanson–Wright inequalities for subgaussian quadratic forms. Bernoulli, 25(3):1603–1639, 2019.
Zymnis, A., Boyd, S., and Candès, E. Compressed sensing with quantized measurements. IEEE Signal Processing Letters, 17(2):149–152, 2009.
