
Bibliography

Published online by Cambridge University Press:  28 September 2018

Ankur Moitra
Affiliation:
Massachusetts Institute of Technology


Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018

Access options

Get access to the full version of this content by using one of the access options below. (Log in options will check for institutional or personal access. Content may require purchase if you do not have access.)

Book purchase

Temporarily unavailable

References

Achlioptas, D. and McSherry, F. On spectral learning of mixtures of distributions. In COLT, pages 458–469, 2005.
Agarwal, A., Anandkumar, A., Jain, P., Netrapalli, P., and Tandon, R. Learning sparsely used overcomplete dictionaries via alternating minimization. arXiv:1310.7991, 2013.
Agarwal, A., Anandkumar, A., and Netrapalli, P. Exact recovery of sparsely used overcomplete dictionaries. arXiv:1309.1952, 2013.
Aharon, M. Overcomplete dictionaries for sparse representation of signals. PhD thesis, 2006.
Aharon, M., Elad, M., and Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., 54(11):4311–4322, 2006.
Ahlswede, R. and Winter, A. Strong converse for identification via quantum channels. IEEE Trans. Inf. Theory, 48(3):569–579, 2002.
Alon, N. Tools from higher algebra. In Handbook of Combinatorics, editors: Graham, R. L., Grötschel, M., and Lovász, L. Cambridge, MA: MIT Press, 1996, pages 1749–1783.
Anandkumar, A., Foster, D., Hsu, D., Kakade, S., and Liu, Y. A spectral algorithm for latent Dirichlet allocation. In NIPS, pages 926–934, 2012.
Anandkumar, A., Ge, R., Hsu, D., and Kakade, S. A tensor spectral approach to learning mixed membership community models. In COLT, pages 867–881, 2013.
Anandkumar, A., Hsu, D., and Kakade, S. A method of moments for hidden Markov models and multi-view mixture models. In COLT, pages 33.1–33.34, 2012.
Anderson, J., Belkin, M., Goyal, N., Rademacher, L., and Voss, J. The more the merrier: The blessing of dimensionality for learning large Gaussian mixtures. arXiv:1311.2891, 2013.
Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. In ICML, pages 280–288, 2013.
Arora, S., Ge, R., Kannan, R., and Moitra, A. Computing a nonnegative matrix factorization – provably. In STOC, pages 145–162, 2012.
Arora, S., Ge, R., and Moitra, A. Learning topic models – going beyond SVD. In FOCS, pages 1–10, 2012.
Arora, S., Ge, R., and Moitra, A. New algorithms for learning incoherent and overcomplete dictionaries. arXiv:1308.6273, 2013.
Arora, S., Ge, R., Ma, T., and Moitra, A. Simple, efficient, and neural algorithms for sparse coding. In COLT, pages 113–149, 2015.
Arora, S., Ge, R., Moitra, A., and Sachdeva, S. Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders. In NIPS, pages 2384–2392, 2012.
Arora, S., Ge, R., Sachdeva, S., and Schoenebeck, G. Finding overlapping communities in social networks: Towards a rigorous approach. In EC, 2012.
Arora, S. and Kannan, R. Learning mixtures of separated nonspherical Gaussians. Ann. Appl. Probab., 15(1A):69–92, 2005.
Balcan, M., Blum, A., and Gupta, A. Clustering under approximation stability. J. ACM, 60(2):1–34, 2013.
Balcan, M., Blum, A., and Srebro, N. On a theory of learning with similarity functions. Mach. Learn., 72(1–2):89–112, 2008.
Balcan, M., Borgs, C., Braverman, M., Chayes, J., and Teng, S.-H. Finding endogenously formed communities. In SODA, 2013.
Bandeira, A., Rigollet, P., and Weed, J. Optimal rates of estimation for multi-reference alignment. arXiv:1702.08546, 2017.
Barak, B., Hopkins, S., Kelner, J., Kothari, P., Moitra, A., and Potechin, A. A nearly tight sum-of-squares lower bound for the planted clique problem. In FOCS, pages 428–437, 2016.
Barak, B., Kelner, J., and Steurer, D. Dictionary learning and tensor decomposition via the sum-of-squares method. In STOC, pages 143–151, 2015.
Barak, B. and Moitra, A. Noisy tensor completion via the sum-of-squares hierarchy. In COLT, pages 417–445, 2016.
Belkin, M. and Sinha, K. Toward learning Gaussian mixtures with arbitrary separation. In COLT, pages 407–419, 2010.
Belkin, M. and Sinha, K. Polynomial learning of distribution families. In FOCS, pages 103–112, 2010.
Berthet, Q. and Rigollet, P. Complexity theoretic lower bounds for sparse principal component detection. In COLT, pages 1046–1066, 2013.
Bhaskara, A., Charikar, M., and Vijayaraghavan, A. Uniqueness of tensor decompositions with applications to polynomial identifiability. In COLT, pages 742–778, 2014.
Bhaskara, A., Charikar, M., Moitra, A., and Vijayaraghavan, A. Smoothed analysis of tensor decompositions. In STOC, pages 594–603, 2014.
Bilu, Y. and Linial, N. Are stable instances easy? Combinatorics, Probability and Computing, 21(5):643–660, 2012.
Bittorf, V., Recht, B., Re, C., and Tropp, J. Factoring nonnegative matrices with linear programs. In NIPS, 2012.
Blei, D. Introduction to probabilistic topic models. Commun. ACM, 55(4):77–84, 2012.
Blei, D. and Lafferty, J. A correlated topic model of science. Ann. Appl. Stat., 1(1):17–35, 2007.
Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
Blum, A., Kalai, A., and Wasserman, H. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50:506–519, 2003.
Blum, A. and Spencer, J. Coloring random and semi-random k-colorable graphs. J. Algorithms, 19(2):204–234, 1995.
Borgwardt, K. The Simplex Method: A Probabilistic Analysis. New York: Springer, 2012.
Brubaker, S. C. and Vempala, S. Isotropic PCA and affine-invariant clustering. In FOCS, pages 551–560, 2008.
Candes, E. and Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2008.
Candes, E., Romberg, J., and Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
Candes, E. and Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory, 51(12):4203–4215, 2005.
Candes, E., Li, X., Ma, Y., and Wright, J. Robust principal component analysis? J. ACM, 58(3):1–37, 2011.
Chandrasekaran, V. and Jordan, M. Computational and statistical tradeoffs via convex relaxation. Proc. Natl. Acad. Sci. U.S.A., 110(13):E1181–E1190, 2013.
Chandrasekaran, V., Recht, B., Parrilo, P., and Willsky, A. The convex geometry of linear inverse problems. Found. Comput. Math., 12(6):805–849, 2012.
Chang, J. Full reconstruction of Markov models on evolutionary trees: Identifiability and consistency. Math. Biosci., 137(1):51–73, 1996.
Chaudhuri, K. and Rao, S. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9–20, 2008.
Chaudhuri, K. and Rao, S. Beyond Gaussians: Spectral methods for learning mixtures of heavy-tailed distributions. In COLT, pages 21–32, 2008.
Chen, S., Donoho, D., and Saunders, M. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33–61, 1998.
Cohen, A., Dahmen, W., and DeVore, R. Compressed sensing and best k-term approximation. J. AMS, 22(1):211–231, 2009.
Cohen, J. and Rothblum, U. Nonnegative ranks, decompositions and factorizations of nonnegative matrices. Linear Algebra Appl., 190:149–168, 1993.
Comon, P. Independent component analysis: A new concept? Signal Processing, 36(3):287–314, 1994.
Dasgupta, A. Asymptotic Theory of Statistics and Probability. New York: Springer, 2008.
Dasgupta, A., Hopcroft, J., Kleinberg, J., and Sandler, M. On learning mixtures of heavy-tailed distributions. In FOCS, pages 491–500, 2005.
Dasgupta, S. Learning mixtures of Gaussians. In FOCS, pages 634–644, 1999.
Dasgupta, S. and Schulman, L. J. A two-round variant of EM for Gaussian mixtures. In UAI, pages 152–159, 2000.
Davis, G., Mallat, S., and Avellaneda, M. Greedy adaptive approximations. Constr. Approx., 13:57–98, 1997.
De Lathauwer, L., Castaing, J., and Cardoso, J. Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Trans. Signal Process., 55(6):2965–2973, 2007.
Deerwester, S., Dumais, S., Landauer, T., Furnas, G., and Harshman, R. Indexing by latent semantic analysis. J. Assoc. Inf. Sci. Technol., 41(6):391–407, 1990.
Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Series B Stat. Methodol., 39(1):1–38, 1977.
Donoho, D. and Elad, M. Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ1-minimization. Proc. Natl. Acad. Sci. U.S.A., 100(5):2197–2202, 2003.
Donoho, D. and Huo, X. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory, 47(7):2845–2862, 2001.
Donoho, D. and Stark, P. Uncertainty principles and signal recovery. SIAM J. Appl. Math., 49(3):906–931, 1989.
Donoho, D. and Stodden, V. When does nonnegative matrix factorization give the correct decomposition into parts? In NIPS, 2003.
Downey, R. and Fellows, M. Parameterized Complexity. New York: Springer, 2012.
Elad, M. Sparse and Redundant Representations. New York: Springer, 2010.
Engan, K., Aase, S., and Hakon-Husoy, J. Method of optimal directions for frame design. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 5:2443–2446, 1999.
Erdős, P., Steel, M., Székely, L., and Warnow, T. A few logs suffice to build (almost) all trees. I. Random Struct. Algorithms, 14:153–184, 1997.
Fazel, M. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
Feige, U. and Kilian, J. Heuristics for semirandom graph problems. J. Comput. Syst. Sci., 63(4):639–671, 2001.
Feige, U. and Krauthgamer, R. Finding and certifying a large hidden clique in a semirandom graph. Random Struct. Algorithms, 16(2):195–208, 2000.
Feldman, J., Servedio, R. A., and O'Donnell, R. PAC learning axis-aligned mixtures of Gaussians with no separation assumption. In COLT, pages 20–34, 2006.
Frieze, A., Jerrum, M., and Kannan, R. Learning linear transformations. In FOCS, pages 359–368, 1996.
Garnaev, A. and Gluskin, E. The widths of a Euclidean ball. Sov. Math. Dokl., 277(5):200–204, 1984.
Ge, R. and Ma, T. Decomposing overcomplete 3rd order tensors using sum-of-squares algorithms. In RANDOM, pages 829–849, 2015.
Gilbert, A., Muthukrishnan, S., and Strauss, M. Approximation of functions over redundant dictionaries using coherence. In SODA, pages 243–252, 2003.
Gillis, N. Robustness analysis of Hottopixx, a linear programming model for factoring nonnegative matrices. arXiv:1211.6687, 2012.
Goyal, N., Vempala, S., and Xiao, Y. Fourier PCA. In STOC, pages 584–593, 2014.
Gross, D. Recovering low-rank matrices from few coefficients in any basis. arXiv:0910.1879, 2009.
Gross, D., Liu, Y.-K., Flammia, S., Becker, S., and Eisert, J. Quantum state tomography via compressed sensing. Phys. Rev. Lett., 105(15):150401, 2010.
Guruswami, V., Lee, J., and Razborov, A. Almost Euclidean subspaces of ℓ1^N via expander codes. Combinatorica, 30(1):47–68, 2010.
Hardt, M. Understanding alternating minimization for matrix completion. In FOCS, pages 651–660, 2014.
Harshman, R. Foundations of the PARAFAC procedure: Model and conditions for an "explanatory" multi-mode factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970.
Håstad, J. Tensor rank is NP-complete. J. Algorithms, 11(4):644–654, 1990.
Hillar, C. and Lim, L.-H. Most tensor problems are NP-hard. arXiv:0911.1393v4, 2013.
Hofmann, T. Probabilistic latent semantic analysis. In UAI, pages 289–296, 1999.
Horn, R. and Johnson, C. Matrix Analysis. New York: Cambridge University Press, 1990.
Hsu, D. and Kakade, S. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In ITCS, pages 11–20, 2013.
Huber, P. J. Projection pursuit. Ann. Stat., 13:435–475, 1985.
Hummel, R. A. and Gidas, B. C. Zero crossings and the heat equation. Courant Institute of Mathematical Sciences, TR-111, 1984.
Impagliazzo, R. and Paturi, R. On the complexity of k-SAT. J. Comput. Syst. Sci., 62(2):367–375, 2001.
Jain, P., Netrapalli, P., and Sanghavi, S. Low rank matrix completion using alternating minimization. In STOC, pages 665–674, 2013.
Kalai, A. T., Moitra, A., and Valiant, G. Efficiently learning mixtures of two Gaussians. In STOC, pages 553–562, 2010.
Karp, R. Probabilistic analysis of some combinatorial search problems. In Algorithms and Complexity: New Directions and Recent Results. New York: Academic Press, 1976, pages 1–19.
Kashin, B. and Temlyakov, V. A remark on compressed sensing. Manuscript, 2007.
Khachiyan, L. On the complexity of approximating extremal determinants in matrices. J. Complexity, 11(1):138–153, 1995.
Koller, D. and Friedman, N. Probabilistic Graphical Models. Cambridge, MA: MIT Press, 2009.
Kruskal, J. Three-way arrays: Rank and uniqueness of trilinear decompositions with applications to arithmetic complexity and statistics. Linear Algebra Appl., 18(2):95–138, 1977.
Kumar, A., Sindhwani, V., and Kambadur, P. Fast conical hull algorithms for near-separable non-negative matrix factorization. In ICML, pages 231–239, 2013.
Lee, D. and Seung, H. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
Lee, D. and Seung, H. Algorithms for non-negative matrix factorization. In NIPS, pages 556–562, 2000.
Leurgans, S., Ross, R., and Abel, R. A decomposition for three-way arrays. SIAM J. Matrix Anal. Appl., 14(4):1064–1083, 1993.
Lewicki, M. and Sejnowski, T. Learning overcomplete representations. Neural Comput., 12:337–365, 2000.
Li, W. and McCallum, A. Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML, pages 633–640, 2007.
Lindsay, B. Mixture Models: Theory, Geometry and Applications. Hayward, CA: Institute for Mathematical Statistics, 1995.
Logan, B. F. Properties of high-pass signals. PhD thesis, Columbia University, 1965.
Lovász, L. and Saks, M. Communication complexity and combinatorial lattice theory. J. Comput. Syst. Sci., 47(2):322–349, 1993.
McSherry, F. Spectral partitioning of random graphs. In FOCS, pages 529–537, 2001.
Mallat, S. A Wavelet Tour of Signal Processing. New York: Academic Press, 1998.
Mallat, S. and Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process., 41(12):3397–3415, 1993.
Moitra, A. An almost optimal algorithm for computing nonnegative rank. In SODA, pages 1454–1464, 2013.
Moitra, A. Super-resolution, extremal functions and the condition number of Vandermonde matrices. In STOC, pages 821–830, 2015.
Moitra, A. and Valiant, G. Settling the polynomial learnability of mixtures of Gaussians. In FOCS, pages 93–102, 2010.
Mossel, E. and Roch, S. Learning nonsingular phylogenies and hidden Markov models. In STOC, pages 366–375, 2005.
Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course. New York: Springer, 2004.
Olshausen, B. and Field, D. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
Papadimitriou, C., Raghavan, P., Tamaki, H., and Vempala, S. Latent semantic indexing: A probabilistic analysis. J. Comput. Syst. Sci., 61(2):217–235, 2000.
Pati, Y., Rezaiifar, R., and Krishnaprasad, P. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Asilomar Conference on Signals, Systems, and Computers, pages 40–44, 1993.
Pearson, K. Contributions to the mathematical theory of evolution. Philos. Trans. Royal Soc. A, 185:71–110, 1894.
Rabani, Y., Schulman, L., and Swamy, C. Learning mixtures of arbitrary distributions over large discrete domains. In ITCS, pages 207–224, 2014.
Raz, R. Tensor-rank and lower bounds for arithmetic formulas. In STOC, pages 659–666, 2010.
Recht, B. A simpler approach to matrix completion. J. Mach. Learn. Res., 12:3413–3430, 2011.
Recht, B., Fazel, M., and Parrilo, P. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
Redner, R. A. and Walker, H. F. Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev., 26(2):195–239, 1984.
Renegar, J. On the computational complexity and geometry of the first-order theory of the reals. J. Symb. Comput., 13(1):255–352, 1991.
Rockafellar, R. T. Convex Analysis. Princeton, NJ: Princeton University Press, 1996.
Seidenberg, A. A new decision method for elementary algebra. Ann. Math., 60(2):365–374, 1954.
de Silva, V. and Lim, L.-H. Tensor rank and the ill-posedness of the best low rank approximation problem. SIAM J. Matrix Anal. Appl., 30(3):1084–1127, 2008.
Spielman, D. and Teng, S.-H. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.
Spielman, D., Wang, H., and Wright, J. Exact recovery of sparsely-used dictionaries. J. Mach. Learn. Res., 23:1–18, 2012.
Srebro, N. and Shraibman, A. Rank, trace-norm and max-norm. In COLT, pages 545–560, 2005.
Steel, M. Recovering a tree from the leaf colourations it generates under a Markov model. Appl. Math. Lett., 7:19–24, 1994.
Tarski, A. A Decision Method for Elementary Algebra and Geometry. Berkeley and Los Angeles: University of California Press, 1951.
Teicher, H. Identifiability of mixtures. Ann. Math. Stat., 31(1):244–248, 1961.
Tropp, J. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory, 50(10):2231–2242, 2004.
Tropp, J., Gilbert, A., Muthukrishnan, S., and Strauss, M. Improved sparse approximation over quasi-incoherent dictionaries. In IEEE International Conference on Image Processing, 1:37–40, 2003.
Valiant, L. A theory of the learnable. Commun. ACM, 27(11):1134–1142, 1984.
Vavasis, S. On the complexity of nonnegative matrix factorization. SIAM J. Optim., 20(3):1364–1377, 2009.
Vempala, S. and Xiao, Y. Structure from local optima: Learning subspace juntas via higher order PCA. arXiv:1108.3329, 2011.
Vempala, S. and Wang, G. A spectral algorithm for learning mixture models. J. Comput. Syst. Sci., 68(4):841–860, 2004.
Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
Wedin, P. Perturbation bounds in connection with singular value decompositions. BIT Numer. Math., 12:99–111, 1972.
Yannakakis, M. Expressing combinatorial optimization problems by linear programs. J. Comput. Syst. Sci., 43(3):441–466, 1991.


  • Bibliography
  • Ankur Moitra, Massachusetts Institute of Technology
  • Book: Algorithmic Aspects of Machine Learning
  • Online publication: 28 September 2018
  • Chapter DOI: https://doi.org/10.1017/9781316882177.010