
References

Published online by Cambridge University Press:  25 October 2017

Xian-Da Zhang
Affiliation: Tsinghua University, Beijing
Publisher: Cambridge University Press
Print publication year: 2017


[1] Abatzoglou, T.J., Mendel, J.M., Harada, G.A. The constrained total least squares technique and its applications to harmonic superresolution. IEEE Trans. Signal Processing, 39: 1070–1087, 1991.Google Scholar
[2] Abbott, D. The Biographical Dictionary of Scientists: Mathematicians. New York: P. Bedrick Books, 1986.
[3] Abraham, R., Marsden, J.E., Ratiu, T. Manifolds, Tensor Analysis, and Applications. New York: Addison-Wesley, 1983.
[4] Acar, E. The MATLAB CMTF Toolbox. 2014.
[5] Acar, E., Aykut-Bingol, C., Bingol, H., Bro, R., Yener, B. Multiway analysis of epilepsy tensors. Bioinformatics, 23: i10–i18, 2007.Google Scholar
[6] Acar, E., Camtepe, S.A., Krishnamoorthy, M., Yener, B. Modeling and multiway analysis of chatroom tensors. In Proc. IEEE International Conference on Intelligence and Security Informatics. Springer, 256–268, 2005.
[7] Acar, E., Yener, B. Unsupervised multiway data analysis: A literature survey. IEEE Trans. Knowledge and Data Engineering, 21(1): 6–20, 2009.Google Scholar
[8] Acar, R., Vogel, C.R. Analysis of bounded variation penalty methods for illposed problems. Inverse Problems, 10: 1217–1229, 1994.Google Scholar
[9] Adamyan, V.M., Arov, D.Z. A general solution of a problem in linear prediction of stationary processes. Theory Probab. Appl., 13: 394–407, 1968.Google Scholar
[10] Adib, A., Moreau, E., Aboutajdine, D. Source separation contrasts using a reference signal. IEEE Signal Processing Lett., 11(3): 312–315, 2004.Google Scholar
[11] Afriat, S.N. Orthogonal and oblique projectors and the characteristics of pairs of vector spaces. Math. Proc. Cambridge Philos. Soc., 53: 800–816, 1957.Google Scholar
[12] Alexander, S.T. Adaptive Signal Processing: Theory and Applications. New York: Springer, 1986.CrossRef
[13] Alter, O., Brown, P.O., Botstein, D. Generalized singular value decomposition for comparative analysis of genome-scale expression data sets of two different organisms. Proc. Nat. Aca. Sci., USA, 100(6): 3351–3356, 2003.Google Scholar
[14] Amari, S., Nagaoka, H. Methods of Information Geometry. New York: Oxford University Press, 2000.
[15] Andersson, C., Bro, R. The N-way toolbox for MATLAB.[Online]. Chemomet. Intell. Lab. Syst., 52(1): 1–4, 2000. Available at http://www.models.life.ku.dk/nwaytoolbox/(2000).Google Scholar
[16] Andrews, H., Hunt, B. Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[17] Aronszajn, N. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68: 800–816, 1950.Google Scholar
[18] Auslender, A. Asymptotic properties of the Fenchel dual functional and applications to decomposition problems. J. Optim. Theory Appl., 73(3): 427–449, 1992.Google Scholar
[19] Autonne, L. Sur les groupes linéaires, réels et orthogonaux. Bull. Soc. Math. France, 30: 121–133, 1902.Google Scholar
[20] Aybat, N.S., Iyengar, G. A first-order augmented Lagrangian method for compressed sensing. SIAM J. Optim., 22(2): 429–459, 2012.Google Scholar
[21] Bader, B.W., Kolda, T.G., et al. MATLAB Tensor Toolbox version 2.5. Available at http://www.sandia.gov/tgkolda/TensorToolbox/(2012).
[22] Banachiewicz, T. Zur Berechnung der Determinanten, wie auch der Inverse, und zur darauf basierten Auflösung der Systeme linearer Gleichungen. Acta Astronomica, Sér. C, 3: 41–67, 1937.Google Scholar
[23] Bapat, R. Nonnegative Matrices and Applications. Cambridge University Press, 1997.
[24] Barbarossa, S., Daddio, E., Galati, G. Comparison of optimum and linear prediction technique for clutter cancellation. Proc. IEE, Part F, 134: 277–282, 1987.Google Scholar
[25] Barnett, S. Matrices: Methods and Applications. Oxford: Clarendon Press, 1990.
[26] Barzilai, J., Borwein, J.M. Two-point step size gradient methods. IMA J. Numer. Anal. 8: 141–148, 1988.Google Scholar
[27] Basri, R., Jacobs, D. Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Machine Intell., 25(2): 218–233, 2003.Google Scholar
[28] Beal, M., Jojic, N., Attias, H. A graphical model for audiovisual object tracking. IEEE Trans. Pattern Anal. Machine Intell., 25(7): 828–836, 2003.Google Scholar
[29] Beck, A., Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1): 183–202, 2009.Google Scholar
[30] Beck, A., Eldar, Y.C. Structured total maximum likelihood: an alternative to structured total least squares. SIAM J. Matrix Anal. Appl., 31(5): 2623–2649, 2010.Google Scholar
[31] Behrens, R.T., Scharf, L.L. Signal processing applications of oblique projection operators. IEEE Trans. Signal Processing, 42(6): 1413–1424, 1994.Google Scholar
[32] Bellman, R. Introduction to Matrix Analysis, 2nd edn. McGraw-Hill, 1970.
[33] Belouchrani, A., Abed-Meraim, K., Cardoso, J.F., Moulines, E. A blind source separation technique using second-order statistics. IEEE Trans. Signal Processing, 45(2): 434–444, 1997.Google Scholar
[34] Beltrami, E. Sulle funzioni bilineari, Giornale di Matematiche ad Uso degli Studenti delle Università. 11: 98–106, 1873. An English translation by D. Boley is available as Technical Report 90–37, University of Minnesota, Department of Computer Science, 1990.
[35] Berberian, S.K. Linear Algebra. New York: Oxford University Press, 1992.
[36] Berge, J.M.F. ten. Convergence of PARAFAC preprocessing procedures and the Deming–Stephan method of iterative proportional fitting. In Multiway Data Analysis (Coppi, R., Bolasco, S. eds.). Amsterdam: Elsevier, 53–63, 1989.
[37] Berge, J.M.F. ten, Sidiropoulos, N.D. On uniqueness in CANDECOMP/PARAFAC. Psychometrika, 67: 399–409, 2002.Google Scholar
[38] Berger, C., Voltersen, M., Eckardt, R., Eberle, J. Multi-modal and multitemporal data fusion: outcome of the 2012 GRSS data fusion contest. IEEE J. Selected Topics Appl. Earth Observat. Remote Sensing, 6(3): 1324–1340, 2013.Google Scholar
[39] Berman, A., Plemmons, R. Nonnegative Matrices in the Mathematical Sciences. Philadelphia: SIAM, 1994.
[40] Berry, M.W., Browne, M., Langville, A.N., Pauca, V.P., Plemmons, R.J. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics & Data Analysis, 52: 155–173, 2007.Google Scholar
[41] Bertsekas, D.P. Multiplier methods: a survey. Automatica, 12: 133–145, 1976.Google Scholar
[42] Bertsekas, D.P. Nonlinear Programming, 2nd edn. Belmont: Athena Scientific, 1999.
[43] Bertsekas, D.P., Nedich, A., Ozdaglar, A. Convex Analysis and Optimization. Belmont: Athena Scientific, 2003.
[44] Bertsekas, D.P., Tseng, P. Partial proximal minimization algorithms for convex programming. SIAM J. Optimization, 4(3): 551–572, 1994.Google Scholar
[45] Bi, H., Jiang, C., Zhang, B., Wang, Z., Hong, W. Radar change imaging with undersampled data based on matrix completion and Bayesian compressive sensing. IEEE Geosci. Remote Sensing Lett., 12(7): 1546–1550, 2015.Google Scholar
[46] Bi, H., Zhang, B., Hong, W., Zhou, S. Matrix-completion-based airborne tomographic SAR inversion under missing data. IEEE Geosci. Remote Sensing Lett. 12(11): 2346–2350, 2015.Google Scholar
[47] Bienvenu, G., Kopp, L. Principe de la goniométrie passive adaptative. Proc. 7th Colloque GRETSI, Nice, France, 106/1–106/10, 1979.
[48] Bien, J., Tibshirani, R.J. Sparse estimation of a covariance matrix. Biometrika, 98(4): 807–820, 2011.Google Scholar
[49] Biessmann, F., Plis, S., Meinecke, F.C., Eichele, T., Müller, K.R. Analysis of multimodal neuroimaging data. IEEE Reviews in Biomedical Engineering, 4: 26–58, 2011.Google Scholar
[50] Bigoni, D. Tensor toolbox, 2015.
[51] Boot, J. Computation of the generalized inverse of singular or rectangular matrices. Amer. Math. Monthly, 70: 302–303, 1963.Google Scholar
[52] Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1): 1–122, 2010.Google Scholar
[53] Boyd, S., Vandenberghe, L. Convex Optimization. Cambridge, UK: Cambridge University Press, 2004.
[54] Boyd, S., Vandenberghe, L. Subgradients. Notes for EE364b, Stanford University, Winter 2006–2007, April 13, 2008.
[55] Brachat, J., Comon, P., Mourrain, B., Tsigaridas, E. Symmetric tensor decomposition. Linear Algebra Appl., 433(11-12): 1851–1872, 2010.Google Scholar
[56] Bramble, J., Pasciak, J. A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems. Math. Comput., 50(181): 1–17, 1988.Google Scholar
[57] Brandwood, D.H. A complex gradient operator and its application in adaptive array theory. Proc. IEE, 130: 11–16, 1983.Google Scholar
[58] Branham, R.L. Total least squares in astronomy. In Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling (Van Huffel S. ed.). Philadelphia, PA: SIAM, 1997.
[59] Bregman, L.M. The method of successive projection for finding a common point of convex sets. Soviet Math. Dokl., 6: 688–692, 1965.Google Scholar
[60] Brewer, J.W. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst., 25: 772–781, 1978.Google Scholar
[61] Bridges, T.J., Morris, P.J. Differential eigenvalue problems in which the parameters appear nonlinearly. J. Comput. Phys., 55: 437–460, 1984.Google Scholar
[62] Bro, R. PARAFAC: tutorial and applications. Chemometrics and Intelligent Laboratory Systems, 38: 149–171, 1997.Google Scholar
[63] Bro, R., de Jong, S. A fast non-negatively constrained least squares algorithm. J. Chemometrics, 11(5): 393–401, 1997.Google Scholar
[64] Bro, R., Harshman, R.A., Sidiropoulos, N.D. Modeling multi-way data with linearly dependent loadings. Technical Report 2005-176, KVL, 2005.
[65] Bro, R. Multiway analysis in the food industry: models, algorithms and applications, Doctoral dissertation, University of Amsterdam, 1998.
[66] Bro, R., Sidiropoulos, N.D. Least squares algorithms under unimodality and non-negativity constraints. J. Chemometrics, 12(4): 223–247, 1998.Google Scholar
[67] Brockwell, P.J., Davis, R.A. Time Series: Theory and Methods. New York: Springer-Verlag, 1987.
[68] Brookes, M. Matrix Reference Manual 2004. Available at http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html(2005).
[69] Bruckstein, A.M., Donoho, D.L., Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1): 34–81, 2009.Google Scholar
[70] Brunet, J.P., Tamayo, P., Golub, T.R., Mesirov, J.P. Metagenes and molecular pattern discovery using matrix factorization. Proc. Nat. Acad. Sci. USA, 101(12): 4164–4169, 2004.Google Scholar
[71] Bunse-Gerstner, A. An analysis of the HR algorithm for computing the eigenvalues of a matrix. Linear Alg. Applic., 35: 155–173, 1981.Google Scholar
[72] Byrd, R.H., Hribar, M.E., Nocedal, J. An interior point algorithm for large scale nonlinear programming. SIAM J. Optimization, 1999, 9(4): 877–900.Google Scholar
[73] Byrne, C., Censor, Y. Proximity function minimization using multiple Bregman projections, with applications to split feasibility and Kullback–Leibler distance minimization. Ann. Oper. Rese., 105: 77–98, 2001.Google Scholar
[74] Cadzow, J.A. Spectral estimation: an overdetermined rational model equation approach. Proc. IEEE, 70: 907–938, 1982.Google Scholar
[75] Cai, J.F., Candès, E.J., Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim., 20(4): 1956–1982, 2010.Google Scholar
[76] Cai, T.T., Wang, L., Xu, G. New bounds for restricted isometry constants. IEEE Trans. Inform. Theory, 56(9): 4388–4394, 2010.Google Scholar
[77] Candès, E.J., Li, X., Ma, Y., Wright, J. Robust principal component analysis? J. ACM, 58(3): 1–37, 2011.Google Scholar
[78] Candès, E.J., Plan, Y. Matrix completion with noise. Proc. IEEE, 98(6): 925–936, 2010.Google Scholar
[79] Candès, E.J., Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math., 9: 717–772, 2009.Google Scholar
[80] Candès, E.J., Romberg, J., Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math., 59: 1207–1233, 2005.Google Scholar
[81] Candès, E.J., Tao, T. Near optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inform. Theory, 52(12): 5406–5425, 2006.Google Scholar
[82] Cao, X., Wei, X., Han, Y., Lin, D. Robust face clustering via tensor decomposition. IEEE Trans. Cybernetics, 45(11): 2546–2557, 2015.Google Scholar
[83] Cardoso, J.F. On the performance of orthogonal source separation algorithms. In Proc. Europ. Assoc. Signal Processing 94, VII, Edinburgh, U.K., 776–779, 1994.
[84] Cardoso, J.F., Souloumiac, A. Blind beamforming for non-Gaussian signals. Proc. IEE, F, 140(6): 362–370, 1993.Google Scholar
[85] Cardoso, J.F., Souloumiac, A. Jacobi angles for simultaneous diagonalization. SIAM J. Matrix Anal. Appl., 17(1): 161–164, 1996.Google Scholar
[86] Carroll, C.W. The created response surface technique for optimizing nonlinear restrained systems. Oper. Res., 9: 169–184, 1961.Google Scholar
[87] Carroll, J.D., Chang, J. Analysis of individual differences in multidimensional scaling via an N-way generalization of Eckart–Young decomposition. Psychometrika, 35: 283–319, 1970.Google Scholar
[88] Cattell, R.B. Parallel proportional profiles and other principles for determining the choice of factors by rotation. Psychometrika, 9: 267–283, 1944.Google Scholar
[89] Chan, R.H., Ng, M.K. Conjugate gradient methods for Toeplitz systems. SIAM Review, 38(3): 427–482, 1996.Google Scholar
[90] Chandrasekaran, V., Sanghavi, S., Parrilo, P.A., Willsky, A.S. Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2): 572–596, 2011.
[91] Chang, C.I., Du, Q. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sensing, 42(3): 608–619, 2004.Google Scholar
[92] Chang, K.C., Pearson, K., Zhang, T. On eigenvalue problems of real symmetric tensors. J. Math. Anal. Appl., 350(1): 416–422, 2009.Google Scholar
[93] Chatelin, F. Eigenvalues of Matrices. New York: Wiley, 1993.
[94] Chen, B., Petropulu, A.P. Frequency domain blind MIMO system identification based on second- and higher order statistics. IEEE Trans. Signal Processing, 49(8): 1677–1688, 2001.Google Scholar
[95] Chen, S.S., Donoho, D.L., Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1): 33–61, 1998.Google Scholar
[96] Chen, S.S., Donoho, D.L., Saunders, M.A. Atomic decomposition by basis pursuit. SIAM Review, 43(1): 129–159, 2001.Google Scholar
[97] Chen, W., Chen, M., Zhou, J. Adaptively regularized constrained total least-squares image restoration. IEEE Trans. Image Processing, 9(4): 588–596, 2000.Google Scholar
[98] Chen, Y. Incoherence-optimal matrix completion. IEEE Trans. Inform. Theory, 61(5): 2909–2913, 2015.Google Scholar
[99] Chen, Y.L., Hsu, C.T., Liao, H.Y. M. Simultaneous tensor decomposition and completion using factor priors. IEEE Trans. Pattern Anal. Machine Intell., 36(3): 577–591, 2014.Google Scholar
[100] Chua, L.O. Dynamic nonlinear networks: state-of-the-art. IEEE Trans. Circuits Syst., 27: 1024–1044, 1980.Google Scholar
[101] Cichocki, A., Cruces, S., Amari, S.I. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy, 13: 134–170, 2011.Google Scholar
[102] Cichocki, A., Lee, H., Kim, Y.D., Choi, S. Non-negative matrix factorization with α-divergence. Pattern Recog. Lett., 29: 1433–1440, 2008.Google Scholar
[103] Cichocki, A., Mandic, D.P., Phan, A.H., Caiafa, C.F., Zhou, G., Zhao, Q., Lathauwer, L.D. Tensor decompositions for signal processing applications. IEEE Signal Processing Mag., 34(3): 145–163, 2015.Google Scholar
[104] Cichocki, A., Zdunek, R., Amari, S.I. Nonnegative matrix and tensor factorization. IEEE Signal Processing Mag., 25(1): 142–145, 2008.Google Scholar
[105] Cichocki, A., Zdunek, R., Amari, S.I. Csiszar's divergences for nonnegative matrix factorization: family of new algorithms. In Lecture Notes in Computer Science; Springer: Charleston, SC, USA, 3889: 32–39, 2006.Google Scholar
[106] Cichocki, A., Zdunek, R., Choi, S., Plemmons, R., Amari, S.I. Novel multi-layer nonnegative tensor factorization with sparsity constraints. Springer LNCS, 4432: 271–280, 2007.Google Scholar
[107] Cirrincione, G., Cirrincione, M., Herault, J., et al. The MCA EXIN neuron for the minor component analysis. IEEE Trans. Neural Networks, 13(1): 160–187, 2002.Google Scholar
[108] Clark, J.V., Zhou, N., Pister, K.S. J. Modified nodal analysis for MEMS with multi-energy domains. In Proc. Inter. Conf. on Modeling and Simulation of Microsystems, Semiconductors, Sensors and Actuators. San Diego, USA, 2000. Available at http://www-bsac.EECS.Berkely.EDU/-cfm/publication.html
[109] Combettes, P.L., Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, New York: Springer, 185–212, 2011.
[110] Comon, P., Golub, G., Lim, L.H., Mourrain, B. Symmetric tensors and symmetric tensor rank. SM Technical Report 06–02, Stanford University, 2006.
[111] Comon, P., Golub, G., Lim, L.H., Mourrain, B. Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl., 30(3): 1254–1279, 2008.Google Scholar
[112] Comon, P., Moreau, E. Blind MIMO equalization and joint-diagonalization criteria. Proc. 2001 IEEE Inter. Conf. on Acoustics, Speech, and Signal Processing (ICASSP ‘01), 5: 2749–2752, 2001.Google Scholar
[113] Correa, N.M., Adali, T., Li, Y.O., Calhoun, V.D. Canonical correlation analysis for data fusion and group inferences. IEEE Signal Processing Mag., 27(4): 39–50, 2010.Google Scholar
[114] Dai, Y.H., Fletcher, R. Projected Barzilai–Borwein methods for large-scale box-constrained quadratic programming. Num. Math., 100: 21–47, 2005.Google Scholar
[115] Dai, W., Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory, 55(5): 2230–2249, 2009.Google Scholar
[116] Davila, C.E. A subspace approach to estimation of autoregressive parameters from noisy measurements, IEEE Trans. Signal Processing, 46: 531–534, 1998.Google Scholar
[117] Davis, G. A fast algorithm for inversion of block Toeplitz matrix. Signal Processing, 43: 3022–3025, 1995.Google Scholar
[118] Davis, G.,Mallat, S., Avellaneda, M. Adaptive greedy approximation. J. Constr. Approx., 13(1): 57–98, 1997.Google Scholar
[119] Decell, H.P. An application of the Cayley–Hamilton theorem to generalized matrix inversion. SIAM Review, 7(4): 526–528, 1965.
[120] Delsarte, P., Genin, Y. The split Levinson algorithm. IEEE Trans. Acoust., Speech, Signal Processing, 34: 471–478, 1986.Google Scholar
[121] Delsarte, P., Genin, Y. On the splitting of classical algorithms in linear prediction theory. IEEE Trans. Acoust., Speech, Signal Processing, 35: 645–653, 1987.Google Scholar
[122] Dembo, R.S., Steihaug, T. Truncated-Newton algorithms for large-scale unconstrainted optimization. Math Progr., 26: 190–212, 1983.Google Scholar
[123] Demoment, G. Image reconstruction and restoration: overview of common estimation problems. IEEE Trans. Acoust., Speech, Signal Processing, 37(12): 2024–2036, 1989.Google Scholar
[124] Deerwester, S., Dumais, S., Landauer, T., Furnas, G., Harshman, R. Indexing by latent semantic analysis. J. Amer. Soc. Inf. Sci., 41(6): 391–407, 1990.Google Scholar
[125] Diamond, R. A note on the rotational superposition problem. Acta Crystallogr. Sect. A, 44: 211–216, 1988.Google Scholar
[126] Ding, W., Wei, Y. Generalized tensor eigenvalue problems. SIAM J. Matrix Anal. Appl., 36(3): 1073–1099, 2015.Google Scholar
[127] Doclo, S., Moonen, M. GSVD-based optimal filtering for single and multimicrophone speech enhancement. IEEE Trans. Signal Processing, 50(9): 2230–2244, 2002.Google Scholar
[128] Donoho, D.L. Compressed sensing. IEEE Trans. Inform. Theory, 52(4): 1289–1306, 2006.Google Scholar
[129] Donoho, D.L. For most large underdetermined systems of linear equations, the minimal l1 solution is also the sparsest solution. Commun. Pure Appl. Math., LIX: 797–829, 2006.Google Scholar
[130] Donoho, D.L., Elad, M. Optimally sparse representations in general (nonorthogonal) dictionaries via l1 minimization. Proc. Nat. Acad. Sci. USA, 100(5): 2197–2202, 2003.Google Scholar
[131] Donoho, D.L., Huo, X. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory, 47(7): 2845–2862, 2001.Google Scholar
[132] Donoho, D.L., Tsaig, Y. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Trans. Inform. Theory, 54(11): 4789–4812, 2008.Google Scholar
[133] Donoho, D.L., Tsaig, T., Drori, T., Starck, J.-L. Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit (StOMP). Stanford Univ., Palo Alto, CA, Stat. Dept. Tech. Rep. 2006-02, Mar. 2006.
[134] Drakakis, K., Rickard, S., de Fréin, R., Cichocki, A. Analysis of financial data using non-negative matrix factorization. International Mathematical Forum 3(38): 1853–1870, 2008.
[135] Drmac, Z. Accurate computation of the product-induced singular value decomposition with applications. SIAM J. Numer. Anal., 35(5): 1969–1994, 1998.Google Scholar
[136] Drmac, Z. A tangent algorithm for computing the generalized singular value decomposition. SIAM J. Numer. Anal., 35(5): 1804–1832, 1998.Google Scholar
[137] Drmac, Z. New accurate algorithms for singular value decomposition of matrix triplets. SIAM J. Matrix Anal. Appl., 21(3): 1026–1050, 2000.Google Scholar
[138] Duda, R.O., Hart, P.E. Pattern Classification and Scene Analysis. New York: Wiley, 1973.
[139] Duncan, W.J. Some devices for the solution of large sets of simultaneous linear equations. The London, Edinburgh, and Dublin Philosophical Magazine and J. Science, Seventh Series, 35: 660–670, 1944.Google Scholar
[140] Eckart, C., Young, G. The approximation of one matrix by another of lower rank. Psychometrika, 1: 211–218, 1936.Google Scholar
[141] Eckart, C., Young, G. A principal axis transformation for non-Hermitian matrices. Bull. Amer. Math. Soc., 45: 118–121, 1939.Google Scholar
[142] Edelman, A., Arias, T.A., Smith, S.T. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20(2): 303–353, 1998.Google Scholar
[143] Efron, B., Hastie, T., Johnstone, I., Tibshirani, R. Least angle regression. Ann. Statist., 32: 407–499, 2004.Google Scholar
[144] Efroymson, G., Steger, A., Steinberg, S. A matrix eigenvalue problem. SIAM Review, 22(1): 99–100, 1980.Google Scholar
[145] Eldar, Y.C., Oppenheim, A.V. MMSE whitening and subspace whitening. IEEE Trans. Inform. Theory, 49(7): 1846–1851, 2003.Google Scholar
[146] Eldar, Y.C., Oppenheim, A.V. Orthogonal and projected orthogonal matched filter detection. Signal Processing, 84: 677–693, 2004.Google Scholar
[147] Espig, M., Schuster, M., Killaitis, A., Waldren, N., et al. Tensor calculus library. Available at http://gitorious.org/tensorcalculus/(2012).
[148] Estienne, F., Matthijs, N., Massart, D.L., Ricoux, P., Leibovici, D. Multi-way modelling of high-dimensionality electroencephalographic data. Chemometrics Intell. Lab. Syst., 58(1): 59–72, 2001.Google Scholar
[149] Faddeev, D.K., Faddeeva, V.N. Computational Methods of Linear Algebra. San Francisco: W. H. Freeman and Co., 1963.
[150] Farina, A., Golino, G., Timmoneri, L. Comparison between LS and TLS in adaptive processing for radar systems. IEE P-Radar Sonar Nav., 150(1): 2–6, 2003.Google Scholar
[151] Fernando, K.V., Hammarling, S.J. A product induced singular value decomposition (PSVD) for two matrices and balanced realization. In Proc. Conf. on Linear Algebra in Signals, Systems and Controls, Society for Industrial and Applied Mathematics (SIAM). Philadelphia, PA, 128–140, 1988.
[152] Fiacco, A.V., McCormick, G.P. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: Wiley, 1968; or Classics Appl. Math. 4, SIAM, PA: Philadelphia, Reprint of the 1968 original, 1990.
[153] Field, D.J. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Amer. A, 4: 2370–2393, 1987.Google Scholar
[154] Figueiredo, M.A.T., Nowak, R.D., Wright, S.J. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process., 1: 586–597, 2007.Google Scholar
[155] Flanigan, F. Complex Variables: Harmonic and Analytic Functions, 2nd edn. New York: Dover Publications, 1983.
[156] Fletcher, R. Conjugate gradient methods for indefinite systems. In Proc. Dundee Conf. on Num. Anal. New York: Springer-Verlag, 73–89, 1975.
[157] Fletcher, R. Practical Methods of Optimization, 2nd edn. New York: John Wiley & Sons, 1987.
[158] Forsgren, A., Gill, P.E., Wright, M.H. Interior methods for nonlinear optimization. SIAM Review, 44: 525–597, 2002.Google Scholar
[159] Forsythe, G.E., Golub, G.H. On the stationary values of a second-degree polynomial on the unit sphere. J. Soc. Ind. Appl. Math., 13(4): 1050–1068, 1965.Google Scholar
[160] Foucart, S., Lai, M.J. Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1. Appl. Comput. Harmonic Anal., 26(3): 395–407, 2009.Google Scholar
[161] Frangi, A.F., Niessen, W.J., Viergever, M.A. Three-dimensional modeling for functional analysis of cardiac images: a review. IEEE Trans. Medical Imaging, 20(1): 2–5, 2001.Google Scholar
[162] Frankel, T. The Geometry of Physics: An Introduction (with corrections and additions). Cambridge: Cambridge University Press, 2001.
[163] Friedman, J., Hastie, T., Höfling, H., Tibshirani, R. Pathwise coordinate optimization. Ann. Appl. Statist., 1: 302–332, 2007.Google Scholar
[164] Friedlander, M.P., Hatz, K. Computing nonnegative tensor factorizations. Available at http://www.optimization-online.org/DBHTML/2006/10/1494.html(2006).
[165] Frobenius, G. Über Matrizen aus nicht negativen Elementen. Berlin: S.-B. Preuss Acad. Wiss. 456–477, 1912.
[166] Fuhrmann, D.R. An algorithm for subspace computation with applications in signal processing. SIAM J. Matrix Anal. Appl., 9: 213–220, 1988.Google Scholar
[167] Gabay, D., Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl., 2: 17–40, 1976.Google Scholar
[168] Galatsanos, N.P., Katsaggelos, A.K. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Image Processing, 1(3): 322–336, 1992.Google Scholar
[169] Gander, W., Golub, G.H., Von Matt U. A constrained eigenvalue problem. Linear Algebra Appl., 114–115: 815–839, 1989.
[170] Gandy, S., Recht, B., Yamada, I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Prob., 27(2): 25 010–25 028, 2011.Google Scholar
[171] Gantmacher, F.R. The Theory of Matrices. London: Chelsea Publishing, 1977.
[172] Gersho, A., Gray, R.M. Vector Quantization and Signal Compression. Boston: Kluwer Acad. Press, 1992.
[173] Gilbert, A.C., Muthukrishnan, M., Strauss, M.J. Approximation of functions over redundant dictionaries using coherence. Proc. 14th Annu. ACM-SIAM Symp. Discrete Algorithms, Jan. 2003.
[174] Gillies, A.W. On the classfication of matrix generalized inverse. SIAM Review, 12(4): 573–576, 1970.Google Scholar
[175] Gleser, L.J. Estimation in a multivariate “errors in variables” regression model: large sample results. Ann. Statist., 9: 24-44, 1981.Google Scholar
[176] Glowinski, R., Marrocco, A. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique, et Recherche Opérationnelle, 9: 41–76, 1975.Google Scholar
[177] Goldstein, T., Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci., 2(2): 323–343, 2009.Google Scholar
[178] Golub, G.H., Van Loan C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal., 17: 883–893, 1980.Google Scholar
[179] Golub, G.H., Van Loan C.F. Matrix Computations, 2nd edn. Baltimore: The Johns Hopkins University Press, 1989.
[180] Gonzaga, C.C., Karas, E.W. Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming. Math. Program., Ser. A, 138: 141–166, 2013.Google Scholar
[181] Gonzales, E.F., Zhang, Y. Accelerating the Lee–Seung algorithm for nonnegative matrix factorization. Technical report. Department of Computational and Applied Mathematics, Rice University, 2005.
[182] Gorodnitsky, I.F., Rao, B.D. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted norm minimization algorithm, IEEE Trans. Signal Process., 1997, 45: 600–616.Google Scholar
[183] Gourvenec, S., et al. CuBatch, 2005. Available at http://www.mathworks.com.
[184] Grassmann, H.G. Die Ausdehnungslehre. Berlin: Enslin, 1862.
[185] Gray, R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans Inform. Theory, 18(6): 267–271, 1972.Google Scholar
[186] Graybill, F.A. Matrices with Applications in Statistics. Belmont, CA: Wadsworth International Group, 1983.
[187] Graybill, F.A., Meyer, C.D., Painter, R.J. Note on the computation of the generalized inverse of a matrix. SIAM Review, 8(4): 522–524, 1966.Google Scholar
[188] Greville, T.N. E. Some applications of the pseudoinverse of a matrix. SIAM Review, 2: 15–22, 1960.Google Scholar
[189] Griffiths, J.W. Adaptive array processing: a tutorial. Proc. IEE, Part F, 130: 137–142, 1983.Google Scholar
[190] Grippo, L., Sciandrone, M. On the convergence of the block nonlinear Gauss–Seidel method under convex constraints. Operations Research Letters, 26: 127–136, 1999.Google Scholar
[191] Guan, N., Tao, D., Lou, Z., Yu, B. NeNMF: an optimal gradient method for nonnegative matrix factorization. IEEE Trans. Signal Processing, 60(6): 2082–2098, 2012.Google Scholar
[192] Guttman, L. Enlargement methods for computing the inverse matrix. Ann Math Statist, 17: 336–343, 1946.Google Scholar
[193] Haghighat, M., Abdel-Mottaleb, M., Alhalabi, W. Discriminant correlation analysis: real-time feature level fusion for multimodal biometric recognition. IEEE Trans. Inform. Forensics Security, 11(9): 1984–1996, 2016.Google Scholar
[194] Hale, E.T., Yin, W., Zhang, Y. Fixed-point continuation for l1-minimization: methodology and convergence. SIAM J. Optim., 19(3): 1107–1130, 2008.Google Scholar
[195] Halmos, P.R. Finite Dimensional Vector Spaces. New York: Springer-Verlag, 1974.
[196] Han, X., Wu, J., Wang, L., Chen, Y., Senhadji, L., Shu, H. Linear total variation approximate regularized nuclear norm optimization for matrix completion. Abstract Appl. Anal., ID 765782, 2014.
[197] Hachez, Y., Dooren, P.V. Elliptic and hyperbolic quadratic eigenvalue problems and associated distance problems. Linear Algebra Appl., 371: 31–44, 2003.Google Scholar
[198] Harshman, R.A. Foundation of the PARAFAC procedure: models and conditions for an “explanatory” multi-modal factor analysis. UCLA Working papers in Phonetics, 16: 1–84, 1970.Google Scholar
[199] Harshman, R.A. Parafac2: mathematical and technical notes. UCLA Working papers in Phonetics, 22: 30–44, 1972.Google Scholar
[200] Harshman, R.A., Hong, S., Lundy, M.E. Shifted factor analysis – Part i: models and properties. J. Chemometrics, 17(7): 363–378, 2003.Google Scholar
[201] Harshman, R.A., Lundy, M.E. Data preprocessing and the extended PARAFAC model. In Research Methods for Multimode Data Analysis (Law, H.G., Snyder, C.W., Hattie, J.A., McDonald, R.P. eds.). New York: Praeger, pp. 216–284, 1984.
[202] Hazan, T., Polak, T., Shashua, A. Sparse image coding using a 3d nonnegative tensor factorization. Technical Report, The Hebrew University, 2005.
[203] He, X., Cai, D., Niyogi, P. Tensor subspace analysis. Adv. Neural Informat. Process. Syst., 18: 499, 2006.Google Scholar
[204] Heeg, R.S., Geurts, B.J. Spatial instabilities of the incompressible attachment-line flow using sparse matrix Jacobi–Davidson techniques. Appl. Sci. Res., 59: 315–329, 1998.Google Scholar
[205] Helmke, U., Moore, J.B. Optimization and Dynamical Systems. London, UK: Springer-Verlag, 1994.
[206] Henderson, H.V., Searle, S.R. On deriving the inverse of a sum of matrices. SIAM Review, 23: 53–60, 1981.Google Scholar
[207] Henderson, H.V., Searle, S.R. The vec-permutation matrix, the vec operator and Kronecker products: a review. Linear Multilinear Alg., 9: 271–288, 1981.Google Scholar
[208] Herzog, R., Sachs, E. Preconditioned conjugate gradient method for optimal control problems with control and state constraints. SIAM. J. Matrix Anal. and Appl., 31(5): 2291–2317, 2010.Google Scholar
[209] Hestenes, M.R. Multiplier and gradient methods, J. Optim. Theory Appl., 4: 303–320, 1969.Google Scholar
[210] Hestenes, M.R., Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bureau of Standards, 49: 409–436, 1952.Google Scholar
[211] Hindi, H. A tutorial on convex optimization, in Proc. 2004 American Control Conf., Boston, Massachusetts, June 30–July 2, 3252–3265, 2004.
[212] Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys., 6: 164–189, 1927.Google Scholar
[213] Hitchcock, F.L. Multiple invariants and generalized rank of a p-way matrix or tensor. J. Math. Phys., 7: 39–79, 1927.Google Scholar
[214] Horn, R.A., Johnson, C.R. Matrix Analysis. Cambridge: Cambridge University Press, 1985.
[215] Horn, R.A., Johnson, C.R. Topics in Matrix Analysis. Cambridge: Cambridge University Press, 1991.
[216] Hotelling, H. Some new methods in matrix calculation. Ann. Math. Statist, 14: 1–34, 1943.Google Scholar
[217] Hotelling, H. Further points on matrix calculation and simultaneous equations. Ann. Math. Statist, 14: 440–441, 1943.Google Scholar
[218] Howland, P., Jeon, M., Park, H. Structure preserving dimension reduction for clustered text data based on the generalized singular value decomposition. SIAM J. Matrix Anal. Appl., 25(1): 165–179, 2003.Google Scholar
[219] Howland, P., Park, H. Generalizing discriminant analysis using the generalized singular value decomposition. IEEE Trans. Pattern Anal. Machine Intell., 26(8): 995–1006, 2004.Google Scholar
[220] Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Machine Learning Res., 5: 1457–1469, 2004.Google Scholar
[221] Hu, Y., Zhang, D., Ye, J., Li, X., He, X. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Machine Intell., 35(9): 2117–2130, 2013.Google Scholar
[222] Hu, S., Huang, Z.H., Ling, C., Qi, L. On determinants and eigenvalue theory of tensors. J. Symbolic Comput., 50: 508–531, 2013.Google Scholar
[223] Huang, B. Detection of abrupt changes of total least squares models and application in fault detection. IEEE Trans. Control Syst. Technol., 9(2): 357–367, 2001.Google Scholar
[224] Huang, L. Linear Algebra in System and Control Theory (in Chinese). Beijing: Science Press, 1984.
[225] Huber, P.J. Robust estimation of a location parameter. Ann. Math. Statist., 35: 73–101, 1964.Google Scholar
[226] Huffel, S.V., Vandewalle, J. The Total Least Squares Problems: Computational Aspects and Analysis. Frontiers in Applied Mathematics, vol. 9, Philadelphia: SIAM, 1991.CrossRef
[227] Hyland, D.C., Bernstein, D.S. The optimal projection equations for model reduction and the relationships among the methods of Wilson, Skelton and Moore. IEEE Trans. Automatic Control, 30: 1201–1211, 1985.Google Scholar
[228] Jain, P.K., Ahmad, K. Functional analysis, 2nd edn. New Age International, 1995.
[229] Jain, S.K., Gunawardena, A.D. Linear Algebra: An Interactive Approach. New York: Thomson Learning, 2003.
[230] Jennings, A., McKeown, J.J. Matrix Computations. New York: John Wiley & Sons, 1992.
[231] Johnson, D.H., Dudgeon, D.E. Array Signal Processing: Concepts and Techniques. Englewood Cliffs, NJ: PTR Prentice Hall, 1993.
[232] Johnson, L.W., Riess, R.D., Arnold, J.T. Introduction to Linear Algebra, 5th edn. New York: Prentice-Hall, 2000.
[233] Joho, M., Mathis, H. Joint diagonalization of correlation matrices by using Gradient methods with application to blind signal separation. In IEEE Proc. SAM, 273–277, 2002.
[234] Joho, M., Rahbar, K. Joint diagonalization of correlation matrices by using Newton methods with application to blind signal separation. In IEEE Proc. SAM, 403–407, 2002.
[235] Jolliffe, I. Principal Component Analysis. New York: Springer-Verlag, 1986.
[236] Jordan, C. Mémoire sur les formes bilinéaires. J. Math. Pures Appl., Deuxième Série, 19: 35–54, 1874.Google Scholar
[237] Kalogerias, D.S., Petropulu, A.P. Matrix completion in colocated MIMO radar: recoverability, bounds and theoretical guarantees. IEEE Trans. Signal Processing, 62(2): 309–321, 2014.Google Scholar
[238] Karahan, E., Rojas-Lopez, P.A., Bringas-Vega, M.L., Valdes-Hernandez, P.A., Valdes-Sosa, P.A. Tensor analysis and fusion of multimodal brain images. Proc. IEEE, 103(9): 1531–1559, 2015.Google Scholar
[239] Karmarkar, N. A new polynomial-time algorithm for linear programming. Combinatorica, 4(4): 373–395, 1984.Google Scholar
[240] Kato, T. A Short Introduction to Perturbation Theory for Linear Operators. New York: Springer-Verlag, 1982.
[241] Kay, S.M. Modern Spectral Estimation: Theory and Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[242] Kayalar, S., Weinert, H.L. Oblique projections: formulas, algorithms, error bounds. Math. Control, Signals, Syst., 2(1): 33–45, 1989.Google Scholar
[243] Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations. Frontiers in Applied Mathematics, vol. 16, Philadelphia, PA: SIAM, 1995.
[244] Khatri, C.G., Rao, C.R. Solutions to some functional equations and their applications to characterization of probability distributions. Sankhya: The Indian J. Stat., Series A, 30: 167–180, 1968.Google Scholar
[245] Kiers, H.A. L. Towards a standardized notation and terminology in multiway analysis. J. Chemometrics, 14: 105–122, 2000.Google Scholar
[246] Kim, J., Park, H. Fast nonnegative matrix factorization: an active-set-like method and comparisons. SIAM J. Sci. Comput., 33(6): 3261–3281, 2011.Google Scholar
[247] Kim, H., Park, H. Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis. Bioinformatics, 23(12): 1495–1502, 2007.Google Scholar
[248] Kim, S.J., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D. An interior-point method for large-scale l1-regularized least squares. IEEE J. Selec. Topics Signal Processing, 1(4): 606–617, 2007.Google Scholar
[249] Klein, J.D., Dickinson, B.W. A normalized ladder form of residual energy ratio algorithm for PARCOR estimation via projections. IEEE Trans. Automatic Control, 28: 943–952, 1983.Google Scholar
[250] Klema, V.C., Laub, A.J. The singular value decomposition: its computation and some applications. IEEE Trans. Automatic Control, 25: 164–176, 1980.Google Scholar
[251] Klemm, R. Adaptive airborne MTI: an auxiliary channel approach. Proc. IEE, Part F, 134: 269–276, 1987.Google Scholar
[252] Kofidis, E., Regalia, Ph. On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM J. Matrix Anal. Appl. 23: 863–884, 2002.Google Scholar
[253] Kolda, T.G. Orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl., 23(1): 243–255, 2001.Google Scholar
[254] Kolda, T.G. Multilinear operators for higher-order decompositions. Sandia Report SAND2006-2081, California, Apr. 2006.
[255] Kolda, T.G., Bader, B.W., Kenny, J.P. Higher-order web link analysis using multilinear algebra. Proc. 5th IEEE Inter. Conf. on Data Mining, 242–249, 2005.
[256] Kolda, T.G., Bader, B.W. The tophits model for higher-order web link analysis. In Workshop on Link Analysis, Counterterrorism and Security, 2006.
[257] Kolda, T.G., Bader, B.W. Tensor decompositions and applications. SIAM Review, 51(3): 455–500, 2009.Google Scholar
[258] Kolda, T.G., Mayo, J.R. Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl., 32(4): 1095–1124, 2011.Google Scholar
[259] Kolda, T.G., Mayo, J.R. An adaptive shifted power method for computing generalized tensor eigenpairs. SIAM J. Matrix Anal. Appl., 35(4): 1563–1581, 2014.Google Scholar
[260] Komzsik, L. Implicit computational solution of generalized quadratic eigenvalue problems. Finite Elemnets in Analysis and Design, 37: 799–810, 2001.Google Scholar
[261] Kressner, D., Tobler, C. htucker-A MATLAB toolbox for tensors in hierarchical Tucker format. MATHICSE, EPF Lausanne. Available at http://anchp.epfl.ch/htucker/(2012).
[262] Kreutz-Delgado, K. Real vector derivatives and gradients. Available at http://dsp.ucsd.edu/ kreutz/PEI05.html (2005).
[263] Kreutz-Delgado, K. The complex gradient operator and the CR-calculus. 2005. Available at http://dsp.ucsd.edu/ kreutz/PEI05.html (2005).
[264] Kreyszig, E. Advanced Engineering Mathematics, 7th edn. New York: John Wiley & Sons, Inc., 1993.
[265] Krishna, H., Morgera, S.D. The Levinson recurrence and fast algorithms for solving Toeplitz systems of linear equations. IEEE Trans. Acoust., Speech, Signal Processing, 35: 839–847, 1987.Google Scholar
[266] Kroonenberg, P. The three-mode company: a company devoted to creating three-mode software and promoting three-mode data analysis. Available at http://three-mode.leidenuniv.nl/(2000).
[267] Kruskal, J.B. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl., 18: 95–138, 1977.Google Scholar
[268] Kruskal, J.B. Rank, decomposition, uniqueness for 3-way and N-way arrays, in Multiway Data Analysis (Coppi and Bolasco eds.), North-Holland: Elsevier Science Publishers B.V. 7–18, 1989.
[269] Kumar, R. A fast algorithm for solving a Toeplitz system of equations. IEEE Trans Acoust., Speech, Signal Processing, 33: 254–267, 1985.Google Scholar
[270] Kumaresan, R. Rank reduction techniques and burst error-correction decoding in real/complex fields. In Proc. 19th Asilomar Conf. Circuits Syst. Comput., CA: Pacific Grove, 1985.
[271] Kumaresan, R., Tufts, D.W. Estimating the angle of arrival of multiple plane waves. IEEE Trans. Aerospace Electron Syst., 19: 134–139, 1983.Google Scholar
[272] Lahat, D., Adali, T., Jutten, C. Multimodal data fusion: an overview of methods, challenges, prospects. Proc. IEEE, 103(9): 1449–1477, 2015.Google Scholar
[273] Lancaster, P. Lambda-Matrices and Vibrating Systems. Oxford: Pergamon Press, 1966.
[274] Lancaster, P. Quadratic eigenvalue problems. Linear Alg. Appl., 150: 499–506, 1991.Google Scholar
[275] Lancaster, P., Tismenetsky, M. The Theory of Matrices with Applications, 2nd edn. New York: Academic, 1985.
[276] Landsberg, J.M. Tensors: Geometry and Applications. Graduate Studies in Mathematics, vol. 128, Providence RI: American Mathematical Society, 2011.
[277] Langville, A.N., Meyer, C.D., Albright, R., Cox, J., Duling, D. Algorithms, initializations, convergence for the nonnegative matrix factorization. SAS Technical Report, ArXiv: 1407.7299, 2014.
[278] Lasdon, L. Optimization Theory for Large Systems. New York: Macmillan, 1970.
[279] Lathauwer, L.D. A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. SIAM J. Matrix Anal. Appl., 28: 642–666, 2006.Google Scholar
[280] Lathauwer, L.D. Decompositions of a higher-order tensor in block terms – Part II: definitions and uniqueness. SIAM J. Matrix Anal. Appl., 30(3): 1033–1066, 2008.Google Scholar
[281] Lathauwer, L.D., Moor, B.D., Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl., 21: 1253–1278, 2000.Google Scholar
[282] Lathauwer, L.D., Nion, D. Decompositions of a higher-order tensor in block terms – Part III: alternating least squares algorithms. SIAM J. Matrix Anal. Appl., 30(3): 1067–1083, 2008.Google Scholar
[283] Laub, A.J., Heath, M.T., Paige, C.C., Ward, R.C. Computation of system balancing transformations and other applications of simultaneous diagonalization algorithms. IEEE Trans. Automatic Control, 32: 115–122, 1987.Google Scholar
[284] Lauritzen, S. Graphical Models. London: Oxford University Press, 1996.
[285] Lay, D.C. Linear Algebra and Its Applications, 2nd edn. New York: Addison-Wesley, 2000.
[286] Lee, D.D., Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature, 401: 788–791, 1999.Google Scholar
[287] Lee, D.D., Seung, H.S. Algorithms for non-negative matrix factorization. Advances in Neural Information Proc. 13 (Proc. of NIPS 2000), MIT Press, 13: 556–562, 2001.Google Scholar
[288] Lee, K., Bresler, Y. Admira: atomic decomposition for minimum rank approximation. IEEE Trans. Inform. Theory, 56(9): 4402–4416, 2010.Google Scholar
[289] Leonard, I.E. The matrix exponential. SIAM Review, 38(3): 507–512, 1996.Google Scholar
[290] Letexier, D., Bourennane, S., Blanc-Talon, J. Nonorthogonal tensor matricization for hyperspectral image filtering. IEEE Geosci. Remote Sensing. Lett., 5(1): 3-7, 2008.Google Scholar
[291] Levinson, N. The Wiener RMS (root-mean-square) error criterion in filter design and prediction, J. Math. Phys., 25: 261–278, 1947.Google Scholar
[292] Lewis, A.S. The mathematics of eigenvalue optimization. Math. Program., 97(1-2): 155–176, 2003.Google Scholar
[293] Li, N., Kindermannb, S., Navasca, C. Some convergence results on the regularized alternating least-squares method for tensor decomposition. Linear Algebra Appl., 438(2): 796–812, 2013.Google Scholar
[294] Li, G., Qi, L., Yu, G. The Z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory. Numer. Linear Algebra Appl., 20: 1001–1029, 2013.Google Scholar
[295] Li, X.L., Zhang, X.D. Non-orthogonal approximate joint diagonalization free of degenerate solution. IEEE Trans. Signal Processing, 55(5): 1803–1814, 2007.Google Scholar
[296] Lim, L.H. Singular values and eigenvalues of tensors: a variational approach. In Proc. IEEE Inter. Workshop on Computational Advances in Multi-Sensor Adaptive Processing, vol. 1: 129–132, 2005.
[297] Lin, C.J. Projected gradient methods for nonnegative matrix factorization. Neural Comput., 19(10): 2756–2779, 2007.Google Scholar
[298] Lin, Z., Chen, M., Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215, Nov. 2009.
[299] Liu, H., Daniel, L., Wong, N. Model reduction and simulation of nonlinear circuits via tensor decomposition. IEEE Trans. Computer-Aided Design of Integ. Circuits Syst., 34(7): 1059–1069, 2015.Google Scholar
[300] Liu, J., Musialski, P.,Wonka, P., Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Machine Intell., 35(1): 208–220, 2013.Google Scholar
[301] Liu, S., Trenkler, G. Hadamard, Khatri–Rao, Kronecker and other matrix products. Inter. J. Inform. Syst., 4(1): 160–177, 2008.Google Scholar
[302] Liu, X., Sidiropoulos, N.D. Cramer–Rao lower bounds for low-rank decomposition of multidimensional arrays, IEEE Trans. Signal Processing, 49(9): 2074–2086, 2001.Google Scholar
[303] Lorch, E.R. On a calculus of operators in reflexive vector spaces. Trans. Amer. Math. Soc., 45: 217–234, 1939.Google Scholar
[304] Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N. MPCA: multilinear principal component analysis of tensor objects. IEEE Trans. Neural Networks, 19(1): 18–39, 2008.Google Scholar
[305] Luenberger, D. An Introduction to Linear and Nonlinear Programming, 2nd edn. MA: Addison-Wesley, 1989.
[306] Luo, Y., Liu, T., Tao, D., Xu, C. Multiview matrix completion for multilabel image classification. IEEE Trans. Image Processing, 24(8): 2355–2369, 2015.Google Scholar
[307] Lütkepohl, H. Handbook of Matrices. New York: John Wiley & Sons, 1996.
[308] Lyantse, V.E. Some properties of idempotent operators. Teoret. i Prikl. Mat., 1: 16–22, 1958.Google Scholar
[309] MacDuffee, C.C. The Theory of Matrices. Berlin: Springer-Verlag, 1933.
[310] Magnus, J.R., Neudecker, H. The commutation matrix: some properties and applications. Ann. Statist., 7: 381–394, 1979.Google Scholar
[311] Magnus, J.R., Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics, revised edn. Chichester: Wiley, 1999.
[312] Mahalanobis, P.C. On the generalised distance in statistics. Proc. of the Natl. Inst. Sci. India, 2(1): 49–55, 1936.Google Scholar
[313] Mallat, S.G., Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Processing, 41(12): 3397–3415, 1993.Google Scholar
[314] Manolakis, D., Truslow, E., Pieper, M., Cooley, T., Brueggeman, M. Detection algorithms in hyperspectral imaging systems: an overview of practical algorithms. IEEE Signal Processing. Mag., 31(1): 24–33, 2014.Google Scholar
[315] Marshall, T.G. Coding of real-number sequences for error correction: a digital signal processing problem. IEEE J. Select. Areas Commun., 2(2): 381–392, 1984.
[316] Mateos, G., Bazerque, J.A., Giannakis, G.B. Distributed sparse linear regression. IEEE Trans. Signal Processing, 58(10): 5262–5276, 2010.Google Scholar
[317] Mathew, G., Reddy, V. Development and analysis of a neural network approach to Pisarenko's harmonic retrieval method. IEEE Trans. Signal Processing, 42: 663–667, 1994.Google Scholar
[318] Mathew, G., Reddy, V. Orthogonal eigensubspaces estimation using neural networks. IEEE Trans. Signal Processing, 42: 1803–1811, 1994.Google Scholar
[319] Matthews, K.R. Linear algebra notes. Department of Mathematics, University of Queensland, MP274, 1991.
[320] McLeod, K., Sermesant, M., Beerbaum, P., Pennec, X. Spatio-temporal tensor decomposition of a polyaffine motion model for a better analysis of pathological left ventricular dynamics. IEEE Trans. Medical Imaging, 34(7): 1562–1575, 2015.Google Scholar
[321] Meerbergen, K. Locking and restarting quadratic eigenvalue solvers. SIAM J. Sci. Comput., 22(5): 1814–1839, 2001.Google Scholar
[322] Meng, S., Huang, L.T., Wang, W.Q. Tensor decomposition and PCA jointed algorithm for hyperspectral image denoising. IEEE Signal Processing Lett., 13(7): 897–901, 2016.Google Scholar
[323] Mesarovic, V.Z., Galatsanos, N.P., Katsaggelos, K. Regularized constrained total least squares image restoration. IEEE Trans. Image Processing, 4(8): 1096–1108, 1995.Google Scholar
[324] Meyer, C. Matrix Analysis and Applied Linear Algebra. Philadelphia: SIAM, 2000.
[325] Michalewicz, Z., Dasgupta, D., Le Riche, R., Schoenauer, M. Evolutionary algorithms for constrained engineering problems. Computers & Industrial Eng. J., 30: 851–870, 1996.Google Scholar
[326] Micka, O.J., Weiss, A.J. Estimating frequencies of exponentials in noise using joint diagonalization. IEEE Trans. Signal Processing, 47(2): 341–348, 1999.Google Scholar
[327] Million, E. The Hadamard product. Available at http://buzzard.ups.edu/courses/2007spring/projects/million-paper.pdf(2007).
[328] Mirsky, L. Symmetric gauge functions and unitarily invariant norms. Quart J. Math. Oxford, 11: 50–59, 1960.Google Scholar
[329] Mohanty, N. Random Signal Estimation and Identification. New York: Van Nostrand Reinhold, 1986.
[330] Moore, E.H. General analysis, Part 1. Mem. Amer. Philos. Soc., 1: 1, 1935.
[331] Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace Hilbertien. Rep. Paris Acad. Sci., Series A, 255: 2897–2899, 1962.Google Scholar
[332] Moreau, E. A generalization of joint-diagonalization criteria for source separation. IEEE Trans. Signal Processing, 49(3): 530–541, 2001.Google Scholar
[333] Mørup, M. ERPWAVELAB, 2006.
[334] Mørup, M., Hansen, L.K., Herrmann, C.S., Parnas, J., Arnfred, S.M. Parallel factor analysis as an exploratory tool for wavelet transformed event-related EEG. NeuroImage, 29: 938–947, 2006.Google Scholar
[335] Murray, F.J. On complementary manifolds and projections in Lp and lp. Trans. Amer. Math. Soc, 43: 138–152, 1937.Google Scholar
[336] Nasrabadi, N.M. Hyperspectral target detection: an overview of current and future challenges. IEEE Signal Processing Magazine, 31(1): 34–44, 2014.Google Scholar
[337] Natarajan, B.K. Sparse approximate solutions to linear systems. SIAM J. Comput., 24: 227–234, 1995.Google Scholar
[338] Navasca, C., Lathauwer, L.D., Kindermann, S. Swamp reducing technique for tensor decomposition. In Proc. 16th European Signal Processing Conf., Lausanne, Switzerland, August 25–29, 2008.
[339] Neagoe, V.E. Inversion of the Van der Monde Matrix. IEEE Signal Processing Letters, 3: 119–120, 1996.Google Scholar
[340] Needell, D., Vershynin, R. Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math., 9(3): 317–334, 2009.Google Scholar
[341] Needell, D., Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Topics Signal Processing, 4(2): 310–316, 2009.Google Scholar
[342] Needell, D., Tropp, J.A. CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmonic Anal., 26(3): 301–321, 2009.Google Scholar
[343] Nesterov, Y. A method for solving a convex programming problem with rate of convergence O(1/k²). Soviet Math. Doklady, 269(3): 543–547, 1983.Google Scholar
[344] Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course. Boston MA: Kluwer Academic, 2004.CrossRef
[345] Nesterov, Y. Smooth minimization of nonsmooth functions. Math. Program. 103(1): 127–152, 2005.Google Scholar
[346] Nesterov, Y., Nemirovsky, A. A general approach to polynomial-time algorithms design for convex programming. Report of the Central Economical and Mathematical Institute, USSR Academy of Sciences, Moscow, 1988.
[347] Neumaier, A. Solving ill-conditioned and singular linear systems: a tutorial on regularization. SIAM Review, 40(3): 636–666, 1998.Google Scholar
[348] Ng, L., Solo, V. Error-in-variables modeling in optical flow estimation. IEEE Trans. Image Processing, 10(10): 1528–1540, 2001.Google Scholar
[349] Ni, G., Qi, L., Bai, M. Geometric measure of entanglement and U-eigenvalues of tensors. SIAM J. Matrix Anal. Appl., 35(1): 73–87, 2014.Google Scholar
[350] Nievergelt, Y. Total least squares: state-of-the-art regression in numerical analysis. SIAM Review, 36(2): 258–264, 1994.Google Scholar
[351] Nion, D., Sidiropoulos, N.D. Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar. IEEE Trans. Signal Processing, 58(11): 5693–5705, 2010.Google Scholar
[352] Noble, B., Daniel, J.W. Applied Linear Algebra, 3rd edn. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[353] Nocedal, J., Wright, S.J. Numerical Optimization. New York: Springer-Verlag, 1999.
[354] Nour-Omid, B., Parlett, B.N., Ericsson, T., Jensen, P.S. How to implement the spectral transformation. Math. Comput., 48: 663–673, 1987.Google Scholar
[355] Noutsos, D. Perron–Frobenius theory and some extensions. Department of Mathematics, University of Ioannina, May 2008.
[356] Ohlsson, H., Ljung, L., Boyd, S. Segmentation of ARX-models using sum-of-norms regularization. Automatica, 46: 1107–1111, 2010.Google Scholar
[357] Ohsmann, M. Fast cosine transform of Toeplitz matrices, algorithm and applications. IEEE Trans. Signal Processing, 41: 3057–3061, 1993.Google Scholar
[358] Oja, E. The nonlinear PCA learning rule in independent component analysis. Neurocomputing, 17: 25–45, 1997.Google Scholar
[359] Olson, L., Vandini, T. Eigenproblems from finite element analysis of fluid–structure interactions. Comput. Struct., 33: 679–687, 1989.Google Scholar
[360] Ortega, J.M., Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables. New York/London: Academic Press, 1970.
[361] Oseledets, I. TT-toolbox 2.2. Available at http://github.com/oseledets/TT-Toolbox/(2012).
[362] Oseledets, I., Saluev, T., Savostyanov, D.V., Dolgov, S.V. ttpy: python implementation of the tensor train, 2014.
[363] Osborne, M., Presnell, B., Turlach, B. A new approach to variable selection in least squares problems. IMA J. Numer. Anal., 20: 389–403, 2000.Google Scholar
[364] Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul., 4(2): 460–489, 2005.Google Scholar
[365] Ottersten, B., Asztely, D., Kristensson, M., Parkvall, S. A statistical approach to subspace based estimation with applications in telecommunications. In Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling (Van Huffel, S. ed.), Philadelphia, PA: SIAM, 1997.
[366] Paatero, P. A weighted non-negative least squares algorithm for three-way PARAFAC factor analysis. Chemometr. Intell. Lab. Syst., 38: 223–242, 1997.Google Scholar
[367] Paatero, P., Tapper, U. Positive matrix factorization: a non-negative factor model with optimal utilization of error estimates of data values. Environmetrics, 5: 111–126, 1994.Google Scholar
[368] Paatero, P., Tapper, U. Least squares formulation of robust non-negative factor analysis. Chemometr. Intell. Lab. Syst., 37: 23–35, 1997.Google Scholar
[369] Paige, C.C. Computing the generalized singular value decomposition. SIAM J. Sci. Statist. Comput., 7: 1126–1146, 1986.Google Scholar
[370] Paige, C.C., Saunders, M.A. Towards a generalized singular value decomposition. SIAM J. Numer. Anal., 18: 269–284, 1981.Google Scholar
[371] Pajunen, P., Karhunen, J. Least-squares methods for blind source separation based on nonlinear PCA. Int. J. Neural Syst., 8: 601–612, 1998.Google Scholar
[372] Papoulis, A. Probability, Random Variables and Stochastic Processes. New York: McGraw-Hill, 1991.
[373] Parikh, N., Boyd, S. Proximal algorithms. Found. Trends Optim., 1(3): 123–231, 2013.Google Scholar
[374] Parlett, B.N. The Rayleigh quotient iteration and some generalizations for nonnormal matrices. Math. Comput., 28(127): 679–693, 1974.Google Scholar
[375] Parlett, B.N. The Symmetric Eigenvalue Problem. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[376] Parra, L., Spence, C., Sajda, P., Ziehe, A., Muller, K. Unmixing hyperspectral data. In Advances in Neural Information Processing Systems, vol. 12. Cambridge, MA: MIT Press, 942–948, 2000.
[377] Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Proc. 27th Ann. Asilomar Conf. Signals Syst. Comput., vol. 1, 40–44, 1993.Google Scholar
[378] Pauca, V.P., Piper, J., Plemmons, R. Nonnegative matrix factorization for spectral data analysis. Linear Algebra Appl., 416(1): 29–47, 2006.Google Scholar
[379] Pauca, V.P., Shahnaz, F., Berry, M.W., Plemmons, R.J. Text mining using non-negative matrix factorizations. In Proc. 4th SIAM Inter. Conf. on Data Mining, Lake Buena Vista, Florida, April 22–24, 2004.
[380] Pavon, M. New results on the interpolation problem for continuous-time stationary-increments processes. SIAM J. Control Optim., 22: 133–142, 1984.Google Scholar
[381] Pearson, K. On lines and planes of closest fit to points in space. Phil. Mag, 559–572, 1901.
[382] Pease, M.C. Methods of Matrix Algebra. New York: Academic Press, 1965.
[383] Peng, C.Y., Zhang, X.D. On recursive oblique projectors. IEEE Signal Processing Lett., 12(6): 433–436, 2005.Google Scholar
[384] Penrose, R.A. A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51: 406–413, 1955.Google Scholar
[385] Perron, O. Zur Theorie der Matrices. Math. Ann., 64(2): 248–263, 1907.Google Scholar
[386] Petersen, K.B., Pedersen, M.S. The Matrix Cookbook. Available at http://matrixcookbook.com(2008).
[387] Pham, D.T. Joint approximate diagonalization of positive definite matrices. SIAM J. Matrix Anal. Appl., 22(4): 1136–1152, 2001.Google Scholar
[388] Phan, A.H., Tichavsky, P., Cichocki, A. TENSORBOX: a MATLAB package for tensor decomposition, LABSP, RIKEN, Japan. Available at http://www.bsp.brain.riken.jp/phan/tensorbox.php/(2012).
[389] Phan, A.H., Tichavsky, P., Cichocki, A. Low complexity damped Gauss– Newton algorithms for CANDECOMP/PARAFAC. SIAM J. Matrix Anal. Appl., 34: 126–147, 2013.Google Scholar
[390] Regalia, P.A., Mitra, S.K. Kronecker products, unitary matrices and signal processing applications. SIAM Review, 31(4): 586–613, 1989.Google Scholar
[391] Piegorsch, W.W., Casella, G. The early use of matrix diagonal increments in statistical problems. SIAM Review, 31: 428–434, 1989.Google Scholar
[392] Piegorsch, W.W., Casella, G. Erratum: inverting a sum of matrices. SIAM Review, 32: 470, 1990.Google Scholar
[393] Pintelon, R., Guillaume, P., Vandersteen, G., Rolain, Y. Analyses, development and applications of TLS algorithms in frequency domain system identification. In Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling (Van Huffel, S. ed.), Philadelphia, PA: SIAM, 1997.
[394] Pisarenko, V.F. The retrieval of harmonics from a covariance function. Geophys. J. Roy. Astron. Soc., 33: 347–366, 1973.Google Scholar
[395] Piziak, R., Odell, P.L. Full rank factorization of matrices. Math. Mag., 72(3): 193–202, 1999.Google Scholar
[396] Polyak, B.T. Introduction to Optimization. New York: Optimization Software Inc., 1987.
[397] Poularikas, A.D. The Handbook of Formulas and Tables for Signal Processing. New York: CRC Press, Springer, IEEE Press, 1999.
[398] Powell, M.J.D. A method for nonlinear constraints in minimization problems. In Optimization (Fletcher, R. ed.), New York: Academic Press, 283–298, 1969.
[399] Powell, M.J.D. On search directions for minimization algorithms. Math. Progr., 4: 193–201, 1973.Google Scholar
[400] Price, C. The matrix pseudoinverse and minimal variance estimates. SIAM Review, 6: 115–120, 1964.Google Scholar
[401] Pringle, R.M., Rayner, A.A. Generalized Inverse of Matrices with Applications to Statistics. London: Griffin, 1971.
[402] Puig, A.T., Wiesel, A., Fleury, G., Hero, A.O. Multidimensional shrinkage-thresholding operator and group LASSO penalties. IEEE Signal Processing Lett., 18(6): 343–346, 2011.Google Scholar
[403] Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput., 40: 1302–1324, 2005.Google Scholar
[404] Qi, L., Teo, K.L. Multivariate polynomial minimization and its application in signal processing. J. Global Optim., 26: 419–433, 2003.Google Scholar
[405] Qi, L., Wang, Y., Wu, E.X. D-eigenvalues of diffusion kurtosis tensors, J. Comput. Appl. Math., 221: 150–157, 2008.Google Scholar
[406] Qi, L., Xu, C., Xu, Y. Nonnegative tensor factorization, completely positive tensors, hierarchical elimination algorithm. SIAM J. Matrix Anal. Appl., 35(4): 1227–1241, 2014.Google Scholar
[407] Rado, R. Note on generalized inverse of matrices. Proc. Cambridge Philos. Soc., 52: 600–601, 1956.Google Scholar
[408] Ram, I., Elad, M., Cohen, I. Image processing using smooth ordering of its patches. IEEE Trans. Image Process., 22(7): 2764–2774, 2013.Google Scholar
[409] Rao, C.R. Estimation of heteroscedastic variances in linear models. J. Amer. Statist. Assoc., 65: 161–172, 1970.Google Scholar
[410] Rao, C.R., Mitra, S.K. Generalized Inverse of Matrices. New York: John Wiley & Sons, 1971.
[411] Rayleigh, L. The Theory of Sound, 2nd edn. New York: Macmillan, 1937.
[412] Recht, B., Fazel, M., Parrilo, P.A. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3): 471–501, 2010.Google Scholar
[413] Regalia, P.A., Mitra, S.K. Kronecker products, unitary matrices and signal processing applications. SIAM Review, 31(4): 586–613, 1989.Google Scholar
[414] Riba, J., Goldberg, J., Vazquez, G. Robust beamforming for interference rejection in mobile communications. IEEE Trans. Signal Processing, 45(1): 271–275, 1997.Google Scholar
[415] Robeva, E. Orthogonal decomposition of symmetric tensors. SIAM J. Matrix Anal. Appl., 37(1): 86–102, 2016.Google Scholar
[416] Roos, C. A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim., 16(1): 1110–1136, 2006.Google Scholar
[417] Roos, C., Terlaky, T., Vial, J. Ph. Theory and Algorithms for Linear Optimization: An Interior-Point Approach. Chichester, UK: John Wiley & Sons, 1997.
[418] Roth, V. The Generalized LASSO. IEEE Trans. Neural Networks, 15(1): 16–28, 2004.Google Scholar
[419] Roy, R., Kailath, T. ESPRIT – Estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust., Speech, Signal Processing, 37: 297–301, 1989.Google Scholar
[420] Rudin, L., Osher, S., Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D, 60: 259–268, 1992.Google Scholar
[421] Saad, Y. The Lanczos biorthogonalization algorithm and other oblique projection methods for solving large unsymmetric systems. SIAM J. Numer. Anal., 19: 485–506, 1982.Google Scholar
[422] Saad, Y. Numerical Methods for Large Eigenvalue Problems. New York: Manchester University Press, 1992.
[423] Salehi, H. On the alternating projections theorem and bivariate stationary stochastic processes. Trans. Amer. Math. Soc., 128: 121–134, 1967.Google Scholar
[424] Samson, C. A unified treatment of fast algorithms for identification. Inter. J. Control, 35: 909–934, 1982.Google Scholar
[425] Scales, L.E. Introduction to Non-linear Optimization. London: Macmillan, 1985.
[426] Schmidt, R.O. Multiple emitter location and signal parameter estimation. Proc. RADC Spectral Estimation Workshop, Rome, NY, 243–258, 1979.
[427] Schmidt, R.O. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propagat., 34: 276–280, 1986.Google Scholar
[428] Schott, J.R. Matrix Analysis for Statistics. New York: Wiley, 1997.
[429] Schoukens, J., Pintelon, R., Vandersteen, G., Guillaume, P. Frequency-domain system identification using nonparametric noise models estimated from a small number of data sets. Automatica, 33(6): 1073–1086, 1997.Google Scholar
[430] Schutz, B. Geometrical Methods of Mathematical Physics. Cambridge: Cambridge University Press, 1980.
[431] Schultz, T., Seidel, H.P. Estimating crossing fibers: a tensor decomposition approach. IEEE Trans. Visualization Comput. Graphics, 14: 1635–1642, 2008.Google Scholar
[432] Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.S. Convex optimization, game theory, variational inequality theory. IEEE Signal Processing Mag., 27(3): 35–49, 2010.Google Scholar
[433] Searle, S.R. Matrix Algebra Useful for Statistics. New York: John Wiley & Sons, 1982.
[434] Shahnaz, F., Berry, M.W., Pauca, V.P., Plemmons, R.J. Document clustering using nonnegative matrix factorization. Inf. Processing & Management, 42(2): 373–386, 2006.Google Scholar
[435] Shashua, A., Levin, A. Linear image coding for regression and classification using the tensor-rank principle. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2001.
[436] Shashua, A., Zass, R., Hazan, T. Multi-way clustering using super-symmetric nonnegative tensor factorization. In Proc. European Conf. on Computer Vision, Graz, Austria, May 2006.
[437] Shavitt, I., Bender, C.F., Pipano, A., Hosteny, R.P. The iterative calculation of several of the lowest or highest eigenvalues and corresponding eigenvectors of very large symmetric matrices. J. Comput. Phys., 11: 90–108, 1973.Google Scholar
[438] Sherman, J., Morrison, W.J. Adjustment of an inverse matrix corresponding to changes in the elements of a given column or a given row of the original matrix (abstract). Ann. Math. Statist., 20: 621, 1949.Google Scholar
[439] Sherman, J., Morrison, W.J. Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Statist., 21: 124–127, 1950.Google Scholar
[440] Shewchuk, J.R. An introduction to the conjugate gradient method without the agonizing pain. Available at http://quake-papers/painless-conjugate-gradient-pics.ps(1994).
[441] Shivappa, S.T., Trivedi, M.M., Rao, B.D. Audiovisual information fusion in human-computer interfaces and intelligent environments: a survey. Proc. IEEE, 98(10): 1692–1715, 2010.Google Scholar
[442] Sidiropoulos, N.D., Bro, R. On the uniqueness of multilinear decomposition of N-way arrays. J. Chemometrics, 14: 229–239, 2000.Google Scholar
[443] Sidiropoulos, N.D., Bro, R. On communication diversity for blind identifiability and the uniqueness of low-rank decomposition of N-way arrays. In Proc. Int. Conf. Acoust. Speech Signal Processing, vol. 5: 2449–2452, 2000.Google Scholar
[444] Sidiropoulos, N.D., Bro, R., Giannakis, G.B. Parallel factor analysis in sensor array processing. IEEE Trans. Signal Processing, 48(8): 2377–2388, 2000.Google Scholar
[445] Sidiropoulos, N.D., Liu, H. Identifiability results for blind beamforming in incoherent multipath with small delay spread. IEEE Trans. Signal Processing, 49(1): 228–236, 2001.Google Scholar
[446] Sidiropoulos, N.D., Papalexakis, E.E., Faloutsos, C. Parallel randomly compressed cubes: a scalable distributed architecture for big tensor decomposition. IEEE Signal Processing Mag., 33(9): 57–70, 2014.Google Scholar
[447] Silva, V., Lim, L.H. Tensor rank and the ill-posedness of the best low-rank approximation problem, SIAM J. Matrix Anal. Appl., 30(3): 1084–1127, 2008.Google Scholar
[448] Simon, J.C. Patterns and Operators: The Foundations and Data Representation. North Oxford Academic Publishers, 1986.
[449] Solomentsev, E.D. Euclidean space. Encyclopedia of Mathematics. Springer, 2011.
[450] Sorber, L., Barel, M.V., Lathauwer, L.D. Tensorlab v2.0. Available at http://www.tensorlab.net/(2014).
[451] Sorensen, M., Lathauwer, L.D. Blind signal separation via tensor decomposition with Vandermonde factor: canonical polyadic decomposition. IEEE Trans. Signal Processing, 61(22): 5507–5519, 2013.Google Scholar
[452] Spivak, M. A Comprehensive Introduction to Differential Geometry, 2nd edn. (five volumes). New York: Publish or Perish Press, 1979.
[453] Stein, D.W.J., Beaven, S.G., Hoff, L.E., Winter, E.M., Schaum, A.P., Stocker, A.D. Anomaly detection from hyperspectral imagery. IEEE Signal Processing Mag., 19(1): 58–69, 2002.Google Scholar
[454] Stewart, G.W. On the early history of the singular value decomposition. SIAM Review, 35(4): 551–566, 1993.Google Scholar
[455] Stewart, G.W., Sun, J.G. Matrix Perturbation Theory. New York: Academic Press, 1990.
[456] Stiefel, E. Richtungsfelder und Fernparallelismus in n-dimensionalen Mannigfaltigkeiten. Commentarii Math. Helvetici, 8: 305–353, 1935.Google Scholar
[457] Stoica, P., Sorelius, J., Cedervall, M., Söderström, T. Error-in-variables modeling: an instrumental variable approach. In Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling (Van Huffel, S. ed.). Philadelphia, PA: SIAM, 1997.
[458] Sun, J., Zeng, H., Liu, H., Lu, Y., Chen, Z. CubeSVD: a novel approach to personalized web search. In Proc. 14th Inter. Conf. on World Wide Web, 652–662, 2005.
[459] Sun, Y., Gao, J., Hong, X., Mishra, B., Yin, B. Heterogeneous tensor decomposition for clustering via manifold optimization. IEEE Trans. Pattern Anal. Machine Intell., 38(3): 476–489, 2016.Google Scholar
[460] Syau, Y.R. A note on convex functions. Inter. J. Math. & Math. Sci. 22(3): 525–534, 1999.Google Scholar
[461] Takeuchi, K., Yanai, H., Mukherjee, B.N. The Foundations of Multivariate Analysis. New York: Wiley, 1982.
[462] Tibshirani, R. Regression shrinkage and selection via the lasso. J.R. Statist. Soc. B, 58: 267–288, 1996.Google Scholar
[463] Tichavsky, P., Phan, A.H., Koldovsky, Z. Cramer–Rao-induced bounds for CANDECOMP/PARAFAC tensor decomposition. IEEE Trans. Signal Processing, 61(8): 1986–1997, 2013.Google Scholar
[464] Tikhonov, A. Solution of incorrectly formulated problems and the regularization method, Soviet Math. Dokl., 4: 1035–1038, 1963.Google Scholar
[465] Tikhonov, A., Arsenin, V. Solution of Ill-Posed Problems. New York: Wiley, 1977.
[466] Tisseur, F., Meerbergen, K. Quadratic eigenvalue problem. SIAM Review, 43(2): 235–286, 2001.Google Scholar
[467] Toeplitz, O. Zur Theorie der quadratischen und bilinearen Formen von unendlichvielen Veränderlichen. I Teil: Theorie der L-Formen. Math. Ann., 70: 351–376, 1911.Google Scholar
[468] Tomasi, G., Bro, R. A comparison of algorithms for fitting the PARAFAC model. Computat. Statist. Data Anal., 50(7): 1700–1734, 2006.Google Scholar
[469] Tou, J.T., Gonzalez, R.C. Pattern Recognition Principles. London: Addison-Wesley, 1974.
[470] Tropp, J.A. Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50(10): 2231–2242, 2004.Google Scholar
[471] Tropp, J.A. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory, 52(3): 1030–1051, 2006.Google Scholar
[472] Tropp, J.A., Wright, S.J. Computational methods for sparse solution of linear inverse problems. Proc. IEEE, 98(6): 948–958, 2010.Google Scholar
[473] Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Statist. Phys., 52: 479–487, 1988.Google Scholar
[474] Tsallis, C. The nonadditive entropy Sq and its applications in physics and elsewhere: some remarks. Entropy, 13: 1765–1804, 2011.Google Scholar
[475] Tucker, L.R. Implications of factor analysis of three-way matrices for measurement of change. In Problems in Measuring Change (Harris, C.W. ed.), Wisconsin: University of Wisconsin Press, 122–137, 1963.
[476] Tucker, L.R. The extension of factor analysis to three-dimensional matrices. In Contributions to Mathematical Psychology (Gulliksen H., Frederiksen N. eds.), New York: Holt, Rinehart & Winston, 109–127, 1964.
[477] Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika, 31: 279–311, 1966.Google Scholar
[478] Turk, M. Multimodal interaction: a review. Pattern Recog. Lett., 36(1): 189–195, 2014.Google Scholar
[479] van der Kloot, W.A., Kroonenberg, P.M. External analysis with three-mode principal component models. Psychometrika, 50: 479–494, 1985.Google Scholar
[480] van der Veen, A.J. Joint diagonalization via subspace fitting techniques. Proc. 2001 IEEE Inter. Conf. on Acoust., Speech, Signal Processing, vol. 5: 2773–2776, 2001.
[481] Van Huffel, S. (ed.). Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling. Philadelphia, PA: SIAM, 1997.
[482] Van Huffel, S. TLS applications in biomedical signal processing. In Recent Advances in Total Least Squares Techniques and Error-in-Variables Modeling (Van Huffel, S. ed.). Philadelphia, PA: SIAM, 1997.
[483] Van Loan, C.F. Generalizing the singular value decomposition. SIAM J. Numer. Anal., 13: 76–83, 1976.Google Scholar
[484] Van Loan, C.F. Matrix computations and signal processing. In Selected Topics in Signal Processing (Haykin, S. ed.). Englewood Cliffs: Prentice-Hall, 1989.
[485] van Overschee, P., De Moor, B. Subspace Identification for Linear Systems. Boston, MA: Kluwer, 1996.
[486] Van Huffel, S., Vandewalle, J. Analysis and properties of the generalized total least squares problem Ax = b when some or all columns in A are subject to error. SIAM J. Matrix Anal. Appl., 10: 294–315, 1989.Google Scholar
[487] Vandaele, P., Moonen, M. Two deterministic blind channel estimation algorithms based on oblique projections. Signal Processing, 80: 481–495, 2000.Google Scholar
[488] Vandenberghe, L. Lecture Notes for EE236C (Spring 2011-12), UCLA.
[489] Vanderbei, R.J. An interior-point algorithm for nonconvex nonlinear programming. Available at http://orfe.princeton.edu/rvdb/pdf/talks/level3/nl.pdf (2000)
[490] Vanderbei, R.J., Shanno, D.F. An interior-point algorithm for nonconvex nonlinear programming. Comput. Optim. Appl., 13(1-3): 231–252, 1999.
[491] Vasilescu, M.A.O., Terzopoulos, D. Multilinear analysis of image ensembles: TensorFaces. In Proc. European Conf. on Computer Vision, Copenhagen, Denmark, 447–460, 2002.
[492] Vasilescu, M.A.O., Terzopoulos D. Multilinear image analysis for facial recognition. In Proc. Inter. Conf. on Pattern Recognition, Quebec City, Canada, 2002.
[493] van der Veen, A.J. Algebraic methods for deterministic blind beamforming. Proc. IEEE, 86: 1987–2008, 1998.Google Scholar
[494] Veganzones, M.A., Cohen, J.E., Farias, R.C., Chanussot, J., Comon, P. Nonnegative tensor CP decomposition of hyperspectral data. IEEE Trans. Geosci. Remote Sensing, 54(5): 2577–2588, 2016.Google Scholar
[495] Viberg, M., Ottersten, B. Sensor array processing based on subspace fitting. IEEE Trans. Signal Processing, 39: 1110–1121, 1991.Google Scholar
[496] Vollgraf, R., Obermayer, K. Quadratic optimization for simultaneous matrix diagonalization. IEEE Trans. Signal Processing, 54(9): 3270–3278, 2006.Google Scholar
[497] Wächter, A., Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Progr. Ser. A, 106(1): 25–57, 2006.Google Scholar
[498] Wang, H. Coordinate descent algorithm for covariance graphical lasso. Stat. Comput., 24: 521–529, 2014.Google Scholar
[499] Wang, H., Ahuja, N. Compact representation of multidimensional data using tensor rank-one decomposition. In Proc. Inter. Conf. on Pattern Recognition, Vol. 1, 44–47, 2004.Google Scholar
[500] Wang, H., Amini, A. Cardiac motion and deformation recovery from MRI: a review. IEEE Trans. Medical Imaging, 31(2): 487–503, 2012.Google Scholar
[501] Wang, W., Sanei, S., Chambers, J.A. Penalty function-based joint diagonalization approach for convolutive blind separation of nonstationary sources. IEEE Trans. Signal Processing, 53(5): 1654–1669, 2005.Google Scholar
[502] Watkins, D.S. Understanding the QR algorithm. SIAM Review, 24(4): 427–440, 1982.Google Scholar
[503] Watson, G.A. Characterization of the subdifferential of some matrix norms, Linear Algebra Appl., 170: 33–45, 1992.
[504] Wax, M., Sheinvald, J. A least squares approach to joint diagonalization. IEEE Signal Processing Lett., 4(2): 52–53, 1997.Google Scholar
[505] Weiss, A.J., Friedlander, B. Array processing using joint diagonalization. Signal Processing, 50(3): 205–222, 1996.Google Scholar
[506] Welling, M., Weber, M. Positive tensor factorization. Pattern Recognition Lett., 22: 1255–1261, 2001.Google Scholar
[507] Whiteford, H.A., Degenhardt, L., et al. Global burden of disease attributable to mental and substance use disorders: findings from the Global Burden of Disease Study 2010. Lancet, 382(9904): 1575–1586, 2013.Google Scholar
[508] Wiesel, A., Eldar, Y.C., Beck, A. Maximum likelihood estimation in linear models with Gaussian model matrix, IEEE Signal Processing Lett., 13: 292–295, 2006.Google Scholar
[509] Wiesel, A., Eldar, Y.C., Yeredor, A. Linear regression with Gaussian model uncertainty: algorithms and bounds, IEEE Trans. Signal Processing, 56: 2194–2205, 2008.Google Scholar
[510] Wilkinson, J.H. The Algebraic Eigenvalue Problem. Oxford, UK: Clarendon Press, 1965.
[511] Wirtinger, W. Zur formalen Theorie der Funktionen von mehr komplexen Veränderlichen. Math. Ann., 97: 357–375, 1927.Google Scholar
[512] Wolf, J.K. Redundancy, the discrete Fourier transform, and impulse noise cancellation. IEEE Trans. Commun., 31: 458–461, 1983.Google Scholar
[513] Woodbury, M.A. Inverting modified matrices. Memorandum Report 42, Statistical Research Group, Princeton, 1950.
[514] Wright, J., Ganesh, A., Rao, S., Peng, Y., Ma, Y. Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization. In Proc. Adv. Neural Inf. Processing Syst., 87(4): 20:3–20:56, 2009.Google Scholar
[515] Xu, H., Caramanis, C., Mannor, S. Robust regression and Lasso. IEEE Trans. Inform. Theory, 56(7): 3561–3574, 2010.Google Scholar
[516] Xu, G., Cho, Y., Kailath, T. Application of fast subspace decomposition to signal processing and communication problems. IEEE Trans. Signal Processing, 42: 1453–1461, 1994.Google Scholar
[517] Xu, G., Kailath, T. A fast algorithm for signal subspace decomposition and its performance analysis. In Proc. Inter. Conf. of Acoust., Speech and Signal Processing, Toronto, Canada, 3069–3072, 1991.
[518] Xu, G., Kailath, T. Fast subspace decomposition. IEEE Trans. Signal Processing, 42: 539–551, 1994.Google Scholar
[519] Xu, G., Kailath, T. Fast estimation of principal eigenspace using Lanczos algorithm. SIAM J. Matrix Anal. Appl. 15(3): 974–994, 1994.
[520] Xu, L., Oja, E., Suen, C. Modified Hebbian learning for curve and surface fitting. Neural Networks, 5: 441–457, 1992.Google Scholar
[521] Yan, H., Paynabar, K., Shi, J. Image-based process monitoring using low-rank tensor decomposition. IEEE Trans. Automation Sci. Eng., 12(1): 216–227, 2015.Google Scholar
[522] Yang, B. Projection approximation subspace tracking. IEEE Trans. Signal Processing, 43: 95–107, 1995.Google Scholar
[523] Yang, B. An extension of the PASTd algorithm to both rank and subspace tracking. IEEE Signal Processing Lett., 2(9): 179–182, 1995.Google Scholar
[524] Yang, J., Yuan, X. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Math. Comput., 82: 301–329, 2013.Google Scholar
[525] Yang, J.F., Kaveh, M. Adaptive eigensubspace algorithms for direction or frequency estimation and tracking. IEEE Trans. Acoust., Speech, Signal Processing, 36: 241–251, 1988.Google Scholar
[526] Yang, X., Sarkar, T.K., Arvas, E. A survey of conjugate gradient algorithms for solution of extreme eigen-problems of a symmetric matrix. IEEE Trans. Acoust., Speech, Signal Processing, 37: 1550–1556, 1989.Google Scholar
[527] Yang, Y., Yang, Q. Further results for Perron–Frobenius theorem for nonnegative tensors. SIAM J. Matrix Anal. Appl., 31: 2517–2530, 2010.Google Scholar
[528] Yang, W., Gao, Y., Shi, Y., Cao, L. MRM-Lasso: a sparse multiview feature selection method via low-rank analysis. IEEE Trans. Neural Networks Learning Syst., 26(11): 2801–2815, 2015.Google Scholar
[529] Yeniay, O., Ankara, B. Penalty function methods for constrained optimization with genetic algorithms. Math. Comput. Appl., 10(1): 45–56, 2005.Google Scholar
[530] Yeredor, A. Non-orthogonal joint diagonalization in the least squares sense with application in blind source separation. IEEE Trans. Signal Processing, 50(7): 1545–1553, 2002.Google Scholar
[531] Yeredor, A. Time-delay estimation in mixtures. In Proc. 2003 IEEE Inter. Conf. on Acoustics, Speech, and Signal Processing, 5: 237–240, 2003.Google Scholar
[532] Yin, W., Osher, S., Goldfarb, D., Darbon, J. Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imaging Sci., 1(1): 143–168, 2008.Google Scholar
[533] Yokota, T., Zhao, Q., Cichocki, A. Smooth PARAFAC decomposition for tensor completion. IEEE Trans. Signal Processing, 64(20): 5423–5436, 2016.Google Scholar
[534] Youla, D.C. Generalized image restoration by the method of alternating projections. IEEE Trans. Circuits Syst., 25: 694–702, 1978.Google Scholar
[535] Yu, X., Tong, L. Joint channel and symbol estimation by oblique projections. IEEE Trans. Signal Processing, 49(12): 3074–3083, 2001.Google Scholar
[536] Yu, Y.L. Nesterov's optimal gradient method. Available at http://www.webdocs.cs.ualberta.ca/yaoliang/mytalks/NS.pdf(2009).
[537] Yuan, M., Lin, Y. Model selection and estimation in regression with grouped variables. J. Royal Statist. Soc., Series B, 68(1): 49–67, 2006.Google Scholar
[538] Zass, R., Shashua, A. Nonnegative sparse PCA. In Proc. Conf. on Neural Information Processing Systems, Vancouver, Canada, 2006.
[539] Zdunek, R., Cichocki, A. Nonnegative matrix factorization with constrained second-order optimization. Signal Processing, 87(8): 1904–1916, 2007.Google Scholar
[540] Zha, H. The restricted singular value decomposition of matrix triplets. SIAM J. Matrix Anal. Appl., 12: 172–194, 1991.Google Scholar
[541] Zhang, X., Wen, G., Dai, W. A tensor decomposition-based anomaly detection algorithm for hyperspectral image. IEEE Trans. Geosci. Remote Sensing, 54(10): 5801–5820, 2016.Google Scholar
[542] Zhang, X.D. Numerical computations of left and right pseudo inverse matrices (in Chinese). Kexue Tongbao, 7(2): 126, 1982.Google Scholar
[543] Zhang, X.D., Liang, Y.C. Prefiltering-based ESPRIT for estimating parameters of sinusoids in non-Gaussian ARMA noise. IEEE Trans. Signal Processing, 43: 349–353, 1995.Google Scholar
[544] Zhang, Z.Y. Nonnegative matrix factorization: models, algorithms and applications. Intell. Syst. Ref. Library, 24(6): 99–134, 2012.Google Scholar
[545] Zhou, G., Cichocki, A. TDALAB: tensor decomposition laboratory, LABSP, Wako-shi, Japan. Available at http://bsp.brain.riken.jp/TDALAB/(2013).
[546] Zhou, H. TensorReg toolbox for Matlab, 2013.
