
Bibliography

Published online by Cambridge University Press:  18 May 2017

Zhu Han, University of Houston
Mingyi Hong, Iowa State University
Dan Wang, Hong Kong Polytechnic University
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2017


References

[1] https://en.wikipedia.org/wiki/Andrew_File_System.
[2] https://en.wikipedia.org/wiki/Network_File_System.
[3] S. Ghemawat, H. Gobioff, and S.-T. Leung, “The Google file system,” in Proceedings of USENIX Symposium on Operating System Principles, Lake George, NY, USA, October 2003.
[4] G. Malewicz, M. H. Austern, A. J. C. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski, “Pregel: A system for large-scale graph processing,” in Proceedings of ACM Special Interest Group on Management of Data Conference, Indianapolis, IN, USA, June 2010.
[5] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein, “GraphLab: A new framework for parallel machine learning,” in Proceedings of Conference on Uncertainty in Artificial Intelligence, Catalina Island, USA, July 2010.
[6] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein, “Distributed GraphLab: A framework for machine learning and data mining in the cloud,” in Proceedings of International Conference on Very Large Data Bases, Istanbul, Turkey, August 2012.
[7] X. Wang, M. Hong, S. Ma, and Z.-Q. Luo, “Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers,” Pacific Journal of Optimization, vol. 11, no. 4, pp. 645–667, 2015.
[8] J. Tan, S. Meng, X. Meng, and L. Zhang, “Improving reduce task data locality for sequential MapReduce jobs,” in Proceedings of IEEE International Conference on Computer Communications, Turin, Italy, April 2013.
[9] M. Zaharia, K. Elmeleegy, D. Borthakur, S. Shenker, J. S. Sarma, and I. Stoica, “Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling,” in Proceedings of Eurosys Conference, Paris, France, April 2010.
[10] B. W. Lampson, “A scheduling philosophy for multiprocessing systems,” Communications of the ACM, vol. 11, no. 5, pp. 346–360, May 1968.
[11] A. S. Schulz, “Polytopes and scheduling,” PhD dissertation, Technical University of Berlin, 1996.
[12] H. Kasahara and S. Narita, “Practical multiprocessor scheduling algorithms for efficient parallel processing,” IEEE Transactions on Computers, vol. 33, no. 11, pp. 1023–1029, November 1984.
[13] T. L. Adam, K. M. Chandy, and J. R. Dickson, “A comparison of list schedules for parallel processing systems,” Communications of the ACM, vol. 17, no. 12, pp. 685–690, December 1974.
[14] M. Queyranne and A. S. Schulz, “Approximation bounds for a general class of precedence constrained parallel machine scheduling problems,” SIAM Journal on Computing, vol. 35, no. 5, pp. 1241–1253, March 2006.
[15] H. Chang, M. Kodialam, R. R. Kompella, T. V. Lakshman, M. Lee, and S. Mukherjee, “Scheduling in MapReduce-like systems for fast completion time,” in Proceedings of IEEE International Conference on Computer Communications, Shanghai, China, April 2011.
[16] F. Chen, M. Kodialam, and T. V. Lakshman, “Joint scheduling of processing and shuffle phases in MapReduce systems,” in Proceedings of IEEE International Conference on Computer Communications, Orlando, FL, USA, March 2012.
[17] Y. Yuan, D. Wang, and J. Liu, “Joint scheduling of MapReduce jobs with servers: Performance bounds and experiments,” in Proceedings of IEEE International Conference on Computer Communications, Toronto, Canada, April 2014.
[18] J. Dean and S. Ghemawat, “MapReduce: Simplified data processing on large clusters,” in Proceedings of USENIX Operating System Design and Implementation, San Francisco, CA, USA, December 2004.
[19] J. Lin, “The curse of Zipf and limits to parallelization: A look at the stragglers problem in MapReduce,” in The 7th Workshop on Large-Scale Distributed Systems for Information Retrieval, Boston, MA, USA, July 2009.
[20] B. Gufler, N. Augsten, A. Reiser, and A. Kemper, “Load balancing in MapReduce based on scalable cardinality estimates,” in Proceedings of IEEE International Conference on Data Engineering, Washington, DC, USA, April 2012.
[21] Y. Kwon, M. Balazinska, B. Howe, and J. Rolia, “A study of skew in MapReduce applications,” in Open Cirrus Summit 2011, Atlanta, GA, USA, October 2011.
[22] Y. Kwon, M. Balazinska, B. Howe, and J. Rolia, “SkewTune: Mitigating skew in MapReduce applications,” in Proceedings of ACM Special Interest Group on Management of Data Conference, Scottsdale, AZ, USA, May 2012.
[23] B. Wang, J. Jiang, and G. Yang, “ActCap: Accelerating MapReduce on heterogeneous clusters with capability-aware data placement,” in Proceedings of IEEE International Conference on Computer Communications, Hong Kong, China, April 2015.
[24] Y. Le, J. Liu, F. Ergun, and D. Wang, “Online load balancing for MapReduce with skewed data input,” in Proceedings of IEEE International Conference on Computer Communications, Toronto, Canada, April 2014.
[25] S. Ibrahim, H. Jin, L. Lu, S. Wu, B. He, and L. Qi, “LEEN: Locality/fairness-aware key partitioning for MapReduce in the cloud,” in IEEE Second International Conference on Cloud Computing Technology and Science, Washington, DC, USA, November 2010.
[26] B. Gufler, N. Augsten, A. Reiser, and A. Kemper, “Handling data skew in MapReduce,” in Proceedings of International Conference on Cloud Computing and Services Science, Noordwijkerhout, the Netherlands, May 2011.
[27] K. Ousterhout, A. Panda, J. Rosen, S. Venkataraman, R. Xin, S. Ratnasamy, S. Shenker, and I. Stoica, “The case for tiny tasks in compute clusters,” in Proceedings of USENIX Hot Topics in Operating Systems, Santa Ana Pueblo, NM, USA, May 2013.
[28] K. Ousterhout, P. Wendell, M. Zaharia, and I. Stoica, “Sparrow: Distributed, low latency scheduling,” in Proceedings of USENIX Symposium on Operating System Principles, Farmington, PA, USA, November 2013.
[29] P. Delgado, F. Dinu, A.-M. Kermarrec, and W. Zwaenepoel, “Hawk: Hybrid datacenter scheduling,” in Proceedings of USENIX Annual Technical Conference, Santa Clara, CA, USA, July 2015.
[30] Y. Yuan, H. Wang, D. Wang, and J. Liu, “On interference-aware provisioning for cloud-based big data processing,” in Proceedings of IEEE/ACM Symposium on Quality of Service, Montreal, Canada, June 2013.
[31] D. Xie, N. Ding, Y. C. Hu, and R. Kompella, “The only constant is change: Incorporating time-varying network reservations in data centers,” in Proceedings of the ACM Special Interest Group on Data Communications Annual Conference, Helsinki, Finland, August 2012.
[32] R. Shea, F. Wang, H. Wang, and J. Liu, “A deep investigation into network performance in virtual machine based cloud environments,” in Proceedings of IEEE International Conference on Computer Communications, Toronto, Canada, April 2014.
[33] C. Reiss, A. Tumanov, G. R. Ganger, R. H. Katz, and M. A. Kozuch, “Heterogeneity and dynamicity of clouds at scale: Google trace analysis,” in Proceedings of ACM Symposium on Cloud Computing, San Jose, CA, USA, October 2012.
[34] X. Ling, Y. Yuan, D. Wang, and J. Yang, “Tetris: Optimizing cloud resource usage unbalance with elastic VM,” in Proceedings of IEEE/ACM International Symposium on Quality of Service, Beijing, China, June 2016.
[35] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica, “Mesos: A platform for fine-grained resource sharing in the data center,” in Proceedings of USENIX Networked Systems Design and Implementation, Boston, MA, USA, March 2011.
[36] M. Schwarzkopf, A. Konwinski, M. Abd-El-Malek, and J. Wilkes, “Omega: Flexible, scalable schedulers for large compute clusters,” in Proceedings of ACM Eurosys Conference, Prague, Czech Republic, April 2013.
[37] A. Verma, L. Pedrosa, M. Korupolu, D. Oppenheimer, E. Tune, and J. Wilkes, “Large-scale cluster management at Google with Borg,” in Proceedings of ACM Eurosys Conference, Bordeaux, France, April 2015.
[38] E. Boutin, J. Ekanayake, W. Lin, B. Shi, J. Zhou, Z. Qian, M. Wu, and L. Zhou, “Apollo: Scalable and coordinated scheduling for cloud-scale computing,” in Proceedings of USENIX Annual Technical Conference, Broomfield, CO, USA, October 2014.
[39] A. Goder, A. Spiridonov, and Y. Wang, “Bistro: Scheduling data-parallel jobs against live production systems,” in Proceedings of USENIX Annual Technical Conference, Santa Clara, CA, USA, July 2015.
[40] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, and I. Stoica, “Dominant resource fairness: Fair allocation of multiple resource types,” in Proceedings of USENIX Networked Systems Design and Implementation, Boston, MA, USA, March 2011.
[41] G. Ananthanarayanan, C. Douglas, R. Ramakrishnan, S. Rao, and I. Stoica, “True elasticity in multi-tenant data-intensive compute clusters,” in Proceedings of ACM Symposium on Cloud Computing, San Jose, CA, USA, October 2012.
[42] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg, “Quincy: Fair scheduling for distributed computing clusters,” in Proceedings of USENIX Symposium on Operating System Principles, Big Sky, MT, USA, October 2009.
[43] K. Kambatla, A. Pathak, and H. Pucha, “Towards optimizing Hadoop provisioning in the cloud,” in Proceedings of USENIX Hot Topics in Cloud Computing, San Diego, CA, USA, June 2009.
[44] F. Tian and K. Chen, “Towards optimal resource provisioning for running MapReduce programs in public clouds,” in Proceedings of IEEE International Conference on Cloud Computing, Washington, DC, USA, July 2011.
[45] H. Herodotou, F. Dong, and S. Babu, “No one (cluster) size fits all: Automatic cluster sizing for data-intensive analytics,” in Proceedings of ACM Symposium on Cloud Computing, Cascais, Portugal, October 2011.
[46] L. Zhang, C. Wu, Z. Li, C. Guo, M. Chen, and F. C. Lau, “Moving big data to the cloud: An online cost-minimizing approach,” IEEE Journal on Selected Areas in Communications, Special Issue on Networking Challenges in Cloud Computing Systems and Applications, vol. 31, no. 12, pp. 2710–2721, December 2013.
[47] S.-H. Park, O. Simeone, O. Sahin, and S. Shamai, “Robust and efficient distributed compression for cloud radio access networks,” IEEE Transactions on Vehicular Technology, vol. 62, no. 2, pp. 692–703, February 2013.
[48] Y. Huangfu, J. Cao, H. Lu, and G. Liang, “MatrixMap: Programming abstraction and implementation of matrix computation for big data applications,” in Proceedings of IEEE International Conference on Parallel and Distributed Systems, Melbourne, Australia, December 2015.
[49] X. Meng, J. Bradley, B. Yavuz, E. Sparks, S. Venkataraman, D. Liu, J. Freeman, D. Tsai, M. Amde, S. Owen, D. Xin, R. Xin, M. J. Franklin, R. Zadeh, M. Zaharia, and A. Talwalkar, “MLlib: Machine learning in Apache Spark,” Journal of Machine Learning Research, vol. 17, no. 34, pp. 1–7, 2016.
[50] http://spark.apache.org/docs/latest/mllib-guide.html.
[51] D. P. Bertsekas, Nonlinear Programming, 2nd ed., Belmont, MA, USA: Athena Scientific, 1999.
[52] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Springer, 2004.
[53] Z.-Q. Luo and P. Tseng, “On the convergence of the coordinate descent method for convex differentiable minimization,” Journal of Optimization Theory and Applications, vol. 72, no. 1, pp. 7–35, 1992.
[54] Z.-Q. Luo and P. Tseng, “On the linear convergence of descent methods for convex essentially smooth minimization,” SIAM Journal on Control and Optimization, vol. 30, no. 2, pp. 408–425, 1992.
[55] M. Hong, M. Razaviyayn, Z.-Q. Luo, and J.-S. Pang, “A unified algorithmic framework for block-structured optimization involving big data,” IEEE Signal Processing Magazine, vol. 33, no. 1, pp. 57–77, 2016.
[56] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Belmont, MA, USA: Athena Scientific, 1996.
[57] F. Facchinei, S. Sagratella, and G. Scutari, “Flexible parallel algorithms for big data optimization,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2014, pp. 7208–7212.
[58] G. Scutari, F. Facchinei, P. Song, D. P. Palomar, and J.-S. Pang, “Decomposition by partial linearization: Parallel optimization of multi-agent systems,” IEEE Transactions on Signal Processing, vol. 63, no. 3, pp. 641–656, February 2014.
[59] M. Razaviyayn, “Successive convex approximation: Analysis and applications,” PhD thesis, University of Minnesota, 2014.
[60] M. Razaviyayn, M. Hong, Z.-Q. Luo, and J. S. Pang, “Parallel successive convex approximation for nonsmooth nonconvex optimization,” in Proceedings of the Neural Information Processing Systems, Montreal, Canada, December 2014, pp. 1440–1448.
[61] J. Ortega and W. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, 1970.
[62] C. Hildreth, “A quadratic programming procedure,” Naval Research Logistics Quarterly, vol. 4, no. 1, pp. 79–85, March 1957.
[63] J. Warga, “Minimizing certain convex functions,” Journal of the Society for Industrial and Applied Mathematics, vol. 11, no. 3, pp. 588–593, September 1963.
[64] A. Auslender, Optimisation: Méthodes numériques, Masson, 1976.
[65] R. Sargent and D. Sebastian, “On the convergence of sequential minimization algorithms,” Journal of Optimization Theory and Applications, vol. 12, no. 6, pp. 567–575, December 1973.
[66] M. J. D. Powell, “On search directions for minimization algorithms,” Mathematical Programming, vol. 4, no. 1, pp. 193–201, December 1973.
[67] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “A unified convergence analysis of block successive minimization methods for nonsmooth optimization,” SIAM Journal on Optimization, vol. 23, no. 2, pp. 1126–1153, June 2013.
[68] I. Necoara and A. Patrascu, “A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints,” Computational Optimization and Applications, vol. 57, no. 2, pp. 307–377, March 2014.
[69] P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” Journal of Optimization Theory and Applications, vol. 103, no. 9, pp. 475–494, June 2001.
[70] B. Chen, S. He, Z. Li, and S. Zhang, “Maximum block improvement and polynomial optimization,” SIAM Journal on Optimization, vol. 22, no. 1, pp. 87–107, January 2012.
[71] Z.-Q. Luo and P. Tseng, “Error bounds and convergence analysis of feasible descent methods: A general approach,” Annals of Operations Research, vol. 46, no. 1, pp. 157–178, February 1993.
[72] Y. Nesterov, “Efficiency of coordinate descent methods on huge-scale optimization problems,” SIAM Journal on Optimization, vol. 22, no. 2, pp. 341–362, April 2012.
[73] A. Beck and L. Tetruashvili, “On the convergence of block coordinate descent type methods,” SIAM Journal on Optimization, vol. 23, no. 4, pp. 2037–2060, 2013.
[74] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo, “Iteration complexity analysis of block coordinate descent methods,” preprint, 2013, arXiv:1310.6957.
[75] Z. Lu and L. Xiao, “Randomized block coordinate non-monotone gradient method for a class of nonlinear programming,” preprint, 2013, arXiv:1306.5918.
[76] P. Richtárik and M. Takáč, “Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function,” Mathematical Programming, vol. 144, no. 1–2, pp. 1–38, 2014.
[77] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society Series B, vol. 39, no. 1, pp. 1–38, 1977.
[78] A. L. Yuille and A. Rangarajan, “The concave-convex procedure,” Neural Computation, vol. 15, no. 4, pp. 915–936, April 2003.
[79] D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems, ed. T. K. Leen, T. G. Dietterich, and V. Tresp, MIT Press, 2001, pp. 556–562.
[80] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4331–4340, September 2011.
[81] M. R. Hestenes, “Multiplier and gradient methods,” Journal of Optimization Theory and Applications, vol. 4, no. 5, pp. 303–320, 1969.
[82] M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in Optimization, ed. R. Fletcher, New York: Academic Press, 1972, pp. 283–298.
[83] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, Philadelphia, PA, USA: SIAM, 1989.
[84] G. Chen and M. Teboulle, “Convergence analysis of a proximal-like minimization algorithm using Bregman functions,” SIAM Journal on Optimization, vol. 3, no. 3, pp. 538–543, 1993.
[85] T.-H. Chang, M. Hong, and X. Wang, “Multi-agent distributed optimization via inexact consensus ADMM,” IEEE Transactions on Signal Processing, vol. 63, no. 2, pp. 482–497, January 2015.
[86] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[87] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, New York: Springer-Verlag, 1984.
[88] B. He and X. Yuan, “On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method,” SIAM Journal on Numerical Analysis, vol. 50, no. 2, pp. 700–709, 2012.
[89] B. He and X. Yuan, “On nonergodic convergence rate of Douglas–Rachford alternating direction method of multipliers,” Numerische Mathematik, vol. 130, no. 3, pp. 567–577, 2015.
[90] D. Goldfarb, S. Ma, and K. Scheinberg, “Fast alternating linearization methods for minimizing the sum of two convex functions,” Mathematical Programming, vol. 141, no. 1–2, pp. 349–382, 2012.
[91] T. Goldstein, B. O'Donoghue, and S. Setzer, “Fast alternating direction optimization methods,” UCLA CAM Report 12-35, 2012.
[92] W. Deng and W. Yin, “On the global linear convergence of the alternating direction method of multipliers,” Rice University CAAM Technical Report TR12-14, 2012.
[93] J. Eckstein and D. Bertsekas, “An alternating direction method for linear programming,” Laboratory for Information and Decision Systems, MIT, http://hdl.handle.net/1721.1/3197, 1990.
[94] W. Deng and W. Yin, “On the global and linear convergence of the generalized alternating direction method of multipliers,” Journal of Scientific Computing, vol. 66, no. 3, pp. 889–916, 2016.
[95] M. Hong and Z.-Q. Luo, “On the linear convergence of the alternating direction method of multipliers,” accepted by Mathematical Programming Series A, 2016, arXiv:1208.3922.
[96] D. Boley, “Local linear convergence of the alternating direction method of multipliers on quadratic or linear programs,” SIAM Journal on Optimization, vol. 23, no. 4, pp. 2183–2207, 2013.
[97] C. Chen, B. He, X. Yuan, and Y. Ye, “The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent,” Mathematical Programming, vol. 155, no. 1–2, pp. 57–98, 2016.
[98] B. He, M. Tao, and X. Yuan, “Alternating direction method with Gaussian back substitution for separable convex programming,” SIAM Journal on Optimization, vol. 22, no. 2, pp. 313–340, 2012.
[99] B. He, H. Xu, and X. Yuan, “On the proximal Jacobian decomposition of ALM for multiple-block separable convex minimization problems and its relationship to ADMM,” Journal of Scientific Computing, vol. 66, no. 3, pp. 1204–1217, 2016.
[100] W. Deng, M. Lai, Z. Peng, and W. Yin, “Parallel multi-block ADMM with o(1/k) convergence,” preprint, 2014, arXiv:1312.3040.
[101] M. Hong, T.-H. Chang, X. Wang, M. Razaviyayn, S. Ma, and Z.-Q. Luo, “A block successive upper bound minimization method of multipliers for linearly constrained convex optimization,” preprint, 2013, arXiv:1401.7079.
[102] X. Gao and S. Zhang, “First-order algorithms for convex optimization with nonseparate objective and coupled constraints,” Optimization Online, 2015.
[103] Y. Zhang, “An alternating direction algorithm for nonnegative matrix factorization,” preprint, 2010.
[104] D. L. Sun and C. Fevotte, “Alternating direction method of multipliers for non-negative matrix factorization with the beta-divergence,” in IEEE International Conference on Acoustics, Speech and Signal Processing, May 2014, pp. 6201–6205.
[105] B. Ames and M. Hong, “Alternating direction method of multipliers for l1-penalized zero variance discriminant analysis and principal component analysis,” Computational Optimization and Applications, vol. 64, no. 3, pp. 725–754, 2016.
[106] B. Jiang, S. Ma, and S. Zhang, “Alternating direction method of multipliers for real and complex polynomial optimization models,” Optimization, vol. 63, no. 6, pp. 883–898, 2014.
[107] R. Zhang and J. T. Kwok, “Asynchronous distributed ADMM for consensus optimization,” in Proceedings of International Conference on Machine Learning, Beijing, China, June 2014, pp. 1701–1709.
[108] P. A. Forero, A. Cano, and G. B. Giannakis, “Distributed clustering using wireless sensor networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 707–724, August 2011.
[109] Z. Wen, C. Yang, X. Liu, and S. Marchesini, “Alternating direction methods for classical and ptychographic phase retrieval,” Inverse Problems, vol. 28, no. 11, pp. 1–18, 2012.
[110] M. Hong, Z.-Q. Luo, and M. Razaviyayn, “Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems,” SIAM Journal on Optimization, vol. 26, no. 1, pp. 337–364, 2016.
[111] W. P. Ziemer, Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation, Graduate Texts in Mathematics, New York, USA: Springer, 1989.
[112] S. Kullback, “The Kullback–Leibler distance,” The American Statistician, vol. 41, no. 4, pp. 340–341, 1987.
[113] S. Kullback, Information Theory and Statistics, Mineola, NY, USA: Dover Publications, 1997.
[114] D. Hosmer and S. Lemeshow, Applied Logistic Regression, vol. 354, Hoboken, NJ, USA: Wiley-Interscience, 2000.
[115] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra, “Efficient projections onto the l1-ball for learning in high dimensions,” in Proceedings of the International Conference on Machine Learning, New York, NY, USA: ACM, July 2008, pp. 272–279.
[116] A. Quattoni, X. Carreras, M. Collins, and T. Darrell, “An efficient projection for regularization,” in Proceedings of the Annual International Conference on Machine Learning, Montreal, QC, Canada, 2009, pp. 857–864.
[117] J. Liu and J. Ye, “Efficient Euclidean projections in linear time,” in Proceedings of Annual International Conference on Machine Learning, Montreal, QC, Canada, 2009, pp. 657–664.
[118] E. van den Berg and M. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 890–912, 2008.
[119] E. van den Berg, M. Schmidt, M. Friedlander, and K. Murphy, “Group sparsity via linear-time projection,” Optimization Online, 2008.
[120] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[121] P. Tseng and S. Yun, “A coordinate gradient descent method for nonsmooth separable minimization,” Mathematical Programming, vol. 117, no. 1, pp. 387–423, 2009.
[122] M. Razaviyayn, M. Hong, and Z. Luo, “A unified convergence analysis of coordinatewise successive minimization methods for nonsmooth optimization,” report, University of Minnesota, Twin Cities, 2012.
[123] X. Wei, Y. Yuan, and Q. Ling, “DOA estimation using a greedy block coordinate descent algorithm,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6382–6394, December 2012.
[124] D. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, May 1995.
[125] J. F. Cai, E. J. Candes, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2008.
[126] S. Ma, D. Goldfarb, and L. Chen, “Fixed point and Bregman iterative methods for matrix rank minimization,” Mathematical Programming Series A, vol. 128, no. 1–2, pp. 321–353, 2011.
[127] J. Cai and S. Osher, “Fast singular value thresholding without singular value decomposition,” UCLA CAM Report 10-24, 2010.
[128] L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
[129] J. Darbon and M. Sigelle, “Image restoration with discrete constrained total variation, Part I: Fast and exact optimization,” Journal of Mathematical Imaging and Vision, vol. 26, no. 3, pp. 261–276, 2006.
[130] D. Goldfarb and W. Yin, “Parametric maximum flow algorithms for fast total variation minimization,” SIAM Journal on Scientific Computing, vol. 31, no. 5, pp. 3712–3743, 2009.
[131] T. F. Chan, H. M. Zhou, and R. H. Chan, “Continuation method for total variation denoising problems,” Advanced Signal Processing Algorithms, vol. 2563, no. 1, pp. 314–325, 1995.
[132] A. Chambolle, “An algorithm for total variation minimization and applications,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1–2, pp. 89–97, 2004.
[133] A. Chambolle, “Total variation minimization and a class of binary MRF models,” in Energy Minimization Methods in Computer Vision and Pattern Recognition, Lecture Notes in Computer Science 3757, pp. 136–152, 2005.
[134] B. Wohlberg and P. Rodriguez, “An iteratively reweighted norm algorithm for minimization of total variation functionals,” IEEE Signal Processing Letters, vol. 14, no. 12, pp. 948–951, December 2007.
[135] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[136] T. Goldstein and S. Osher, “The split Bregman method for l1 regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
[137] Y.-L. Yu, “Better approximation and faster algorithm using the proximal average,” in Proceedings of the Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2013, pp. 458–466.
[138] Y.-L. Yu, “On decomposing the proximal map,” in Proceedings of the Neural Information Processing Systems, 2013, pp. 91–99.
[139] M. Figueiredo and R. Nowak, “An EM algorithm for wavelet-based image restoration,” IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 906–916, August 2003.
[140] C. De Mol and M. Defrise, “A note on wavelet-based inversion algorithms,” Contemporary Mathematics, vol. 313, pp. 85–96, 2002.
[141] J. Bect, L. Blanc-Feraud, G. Aubert, and A. Chambolle, “A unified variational framework for image restoration,” in European Conference on Computer Vision, Prague, Lecture Notes in Computer Science 3024, pp. 1–13, 2004.
[142] J. Douglas and H. H. Rachford, “On the numerical solution of the heat conduction problem in 2 and 3 space variables,” Transactions of the American Mathematical Society, vol. 82, pp. 421–439, 1956.
[143] D. H. Peaceman and H. H. Rachford, “The numerical solution of parabolic and elliptic differential equations,” SIAM Journal on Applied Mathematics, vol. 3, no. 1, pp. 28–41, 1955.
[144] D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite-element approximations,” Computers and Mathematics with Applications, vol. 2, no. 1, pp. 17–40, 1976.
[145] J. Eckstein and D. P. Bertsekas, “On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 1–3, pp. 293–318, 1992.
[146] M. Fortin and R. Glowinski, Augmented Lagrangian Methods, Amsterdam, New York, USA: North-Holland, 1983.
[147] P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Modeling and Simulation, vol. 4, no. 4, pp. 1168–1200, 2005.
[148] R. Rockafellar, Convex Analysis, Princeton, NJ, USA: Princeton University Press, 1970.
[149] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed MR imaging using total variation and wavelets,” in IEEE International Conference on Computer Vision and Pattern Recognition, pp. 1–8, September 2008.
[150] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, “Sparsity and smoothness via the fused lasso,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 1, pp. 91–108, 2005.
[151] Z.-Q. Luo and P. Tseng, “On the linear convergence of descent methods for convex essentially smooth minimization,” SIAM Journal on Control and Optimization, vol. 30, no. 2, pp. 408–425, 1992.
[152] E. T. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for l1-minimization: Methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
[153] Y. Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k²),” Soviet Mathematics Doklady, vol. 27, no. 2, pp. 372–376, 1983.
[154] Y. Nesterov, “Gradient methods for minimizing composite objective function,” CORE Discussion Paper 2007/76, www.optimization-online.org, 2007.
[155] P. Tseng, “On accelerated proximal gradient methods for convex-concave optimization,” submitted to SIAM Journal on Optimization, 2008.
[156] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
[157] S. Becker, J. Bobin, and E. Candès, “NESTA: A fast and accurate first-order method for sparse recovery,” SIAM Journal on Imaging Sciences, vol. 4, no. 1, pp. 1–39, 2011.
[158] W. Deng, W. Yin, and Y. Zhang, “Group sparse optimization by alternating direction method,” Rice University CAAM Technical Report TR11-06, 2011.
[159] B. Recht, M. Fazel, and P. Parrilo, “Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization,” SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[160] L. Bregman, “The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming,” USSR Computational Mathematics and Mathematical Physics, vol. 7, no. 3, pp. 200–217, 1967.
[161] W., Yin and S., Osher, “Error forgetting of Bregman iteration,” Journal of Scientific Computing, vol. 54, no. 2–3, pp. 684–695, 2013.Google Scholar
[162] S., Osher, M., Burger, D., Goldfarb, J., Xu, and W., Yin, “An iterative regularization method for total variation–based image restoration,” SIAM Journal on Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 460–489, 2005.Google Scholar
[163] J., Yang and Y., Zhang, “Alternating direction algorithms for problems in compressive sensing,” SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.Google Scholar
[164] E., Candès, X., Li, Y., Ma, and J., Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, pp. 1-37, 2011.Google Scholar
[165] B., Efron, T., Hastie, I., Johnstone, and R., Tibshirani, “Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.Google Scholar
[166] M., Best, “An algorithm for the solution of the parametric quadratic programming problem,” in Applied Mathematics and Parallel Computing, Springer, 1996, pp. 57–76.
[167] L., Ghaoui, V., Viallon, and T., Rabbani, “Safe feature elimination in sparse supervised learning,” preprint, 2010, arXiv:1009.4219.
[168] R., Tibshirani, J., Bien, J., Friedman, T., Hastie, N., Simon, J., Taylor, and R., Tibshirani, “Strong rules for discarding predictors in lasso-type problems,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 74, no. 2, pp. 245-266, 2012.Google Scholar
[169] S., Wright, R., Nowak, and M., Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479-2493, July 2009.Google Scholar
[170] Z., Wen, W., Yin, H., Zhang, and D., Goldfarb, “On the convergence of an active set method for l1-minimization,” Optimization Methods and Software, vol. 27, no. 6, pp. 1127-1146, 2012.Google Scholar
[171] J., Nocedal and S. J., Wright, Numerical Optimization, New York, USA: Springer- Verlag, 1999.
[172] J., Tropp and A., Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, December 2007.Google Scholar
[173] D., Donoho, Y., Tsaig, I., Drori, and J.-L., Starck, “Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit,” submitted to IEEE Transactions on Information Theory, 2006.Google Scholar
[174] D., Needell and R., Vershynin, “Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310-316, April 2010.Google Scholar
[175] W., Dai and O., Milenkovic, “Subspace pursuit for compressive sensing: Closing the gap between performance and complexity,” arXiv:0803.0811v1 [cs.NA], 2008.
[176] D., Needell and J., Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301-321, 2009.Google Scholar
[177] S., Foucart, “Hard thresholding pursuit: An algorithm for compressive sensing,” SIAM Journal on Numerical Analysis, vol. 49, no. 6, pp. 2543-2563, 2011.Google Scholar
[178] Y., Wang and W., Yin, “Sparse signal reconstruction via iterative support detection,” SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 462-491, 2010.Google Scholar
[179] T., Blumensath and M., Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265-274, 2009.Google Scholar
[180] T., Blumensath and M., Davies, “Normalized iterative hard thresholding: Guaranteed stability and performance,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 298-309, April 2010.Google Scholar
[181] W., Yin, S., Osher, D., Goldfarb, and J., Darbon, “Bregman iterative algorithms for l1-minimization with applications to compressed sensing,” SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143-168, 2008.Google Scholar
[182] S., Osher, Y., Mao, B., Dong, and W., Yin, “Fast linearized Bregman iteration for compressive sensing and sparse denoising,” Communications in Mathematical Sciences, vol. 8, no. 1, pp. 93-111, 2010.Google Scholar
[183] K., Toh and S., Yun, “An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems,” Pacific Journal of Optimization, vol. 6, no. 3, pp. 615-640, 2010.Google Scholar
[184] J., Yang and X., Yuan, “Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization,” Mathematics of Computation, vol. 82, no. 281, pp. 301-329, 2013.Google Scholar
[185] R., Keshavan, A., Montanari, and S., Oh, “Matrix completion from a few entries,” IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2980-2998, June 2010.Google Scholar
[186] Z., Wen, W., Yin, and Y., Zhang, “Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm,” Mathematical Programming Computation, vol. 4, no. 4, pp. 333-361, 2012.Google Scholar
[187] R., Baraniuk, V., Cevher, M., Duarte, and C., Hegde, “Model-based compressive sensing,” IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1982-2001, April 2010.Google Scholar
[188] D., Wang, Q., Zhang, and J., Liu, “Partial network coding: Theory and application in continuous sensor data collection,” in Proceedings of IEEE Workshop on Quality of Services, New Haven, CT, USA, June 2006.
[189] B., Chazelle, R., Rubinfeld, and L., Trevisan, “Approximating the minimum spanning tree weight in sublinear time,” SIAM Journal on Computing, vol. 34, no. 6, pp. 1370-1379, July 2005.Google Scholar
[190] Sublinear algorithm surveys, Online: http://people.csail.mit.edu/ronitt/sublinear.html.
[191] M., Mardani, G., Mateos, and G. B., Giannakis, “Subspace learning and imputation for streaming big data matrices and tensors,” IEEE Transactions on Signal Processing, vol. 63, no. 10, pp. 2663-2677, May 2015.Google Scholar
[192] L., Kuang, F., Hao, L. T., Yang, M., Lin, C., Luo, and G., Min, “A tensor-based approach for big data representation and dimensionality reduction,” IEEE Transactions on Emerging Topics in Computing, vol. 2, no. 3, pp. 280-291, September 2014.Google Scholar
[193] J., Li, Y., Yan, W., Duan, S., Song, and M. H., Lee, “Tensor decomposition of Toeplitz jacket matrices for big data processing,” in International Conference on Big Data and Smart Computing, Jeju Island, Korea, February 2015, pp. 11-14.
[194] N., Vervliet, O., Debals, L., Sorber, and L. D., Lathauwer, “Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis,” IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 71-79, September 2014.Google Scholar
[195] N. D., Sidiropoulos, E. E., Papalexakis, and C., Faloutsos, “Parallel randomly compressed cubes : A scalable distributed architecture for big tensor decomposition,” IEEE Signal Processing Magazine, vol. 31, no. 5, pp. 57-70, September 2014.Google Scholar
[196] T. G., Kolda and B. W., Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.Google Scholar
[197] J. D., Carroll and J.-J., Chang, “Analysis of individual differences in multidimensional scaling via an N-way generalization of ‘Eckart-Young’ decomposition,” Psychometrika, vol. 35, no. 3, pp. 283-319, 1970.Google Scholar
[198] R. A., Harshman, “PARAFAC2: Mathematical and technical notes,” UCLA Working Papers in Phonetics, vol. 22, pp. 30-44, 1972.Google Scholar
[199] J. D., Carroll, S., Pruzansky, and J. B., Kruskal, “CANDELINC: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters,” Psychometrika, vol. 45, no. 1, pp. 3-24, 1980.Google Scholar
[200] R. A., Harshman, “Models for analysis of asymmetrical relationships among N objects or stimuli,” in First Joint Meeting of the Psychometric Society and the Society for Mathematical Psychology, McMaster University, Hamilton, Ontario, vol. 5, 1978.
[201] R. A., Harshman and M. E., Lundy, “Uniqueness proof for a family of models sharing features of Tucker's three-mode factor analysis and PARAFAC/CANDECOMP,” Psychometrika, vol. 61, no. 1, pp. 133-154, 1996.Google Scholar
[202] A. H., Phan and A., Cichocki, “PARAFAC algorithms for large-scale problems,” Neurocomputing, vol. 74, no. 11, pp. 1970-1984, May 2011, http://dx.doi.org/10.1016/j.neucom.2010.06.030. Google Scholar
[203] A. L. F., De Almeida, G., Favier, and J. C., Mota, “The constrained block-PARAFAC decomposition,” in TRICAP Conference on Three-Way Methods in Chemistry and Psychology, Chania, Greece, June 2006.
[204] “COMFAC: Matlab code for LS fitting of the complex PARAFAC model in 3-D,” http://people.ece.umn.edu/~nikos/comfac.m.
[205] “Factoring tensors in the cloud: A tutorial on big tensor data analytics,” www.cs.cmu.edu/~epapalex/tutorials/icassp14.html.
[206] N. D., Sidiropoulos and A., Kyrillidis, “Multi-way compressed sensing for sparse low-rank tensors,” IEEE Signal Processing Letters, vol. 19, no. 11, pp. 757-760, August 2012.Google Scholar
[207] L., Chiantini and G., Ottaviani, “On generic identifiability of 3-tensors of small rank,” SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 3, pp. 1018-1037, 2012.Google Scholar
[208] J., Liu, P., Musialski, P., Wonka, and J., Ye, “Tensor completion for estimating missing values in visual data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208-220, January 2013.Google Scholar
[209] S., Gandy, B., Recht, and I., Yamada, “Tensor completion and low-n-rank tensor recovery via convex optimization,” Inverse Problems, vol. 27, no. 2, p. 025010, 2011.Google Scholar
[210] M., Signoretto, R. V., de Plas, B. D., Moor, and J. A. K., Suykens, “Tensor versus matrix completion: A comparison with application to spectral data,” IEEE Signal Processing Letters, vol. 18, no. 7, pp. 403-406, July 2011.Google Scholar
[211] D., Kressner, M., Steinlechner, and B., Vandereycken, “Low-rank tensor completion by Riemannian optimization,” BIT Numerical Mathematics, vol. 54, no. 2, pp. 447-468, 2014.Google Scholar
[212] Y., Xu, R., Hao, W., Yin, and Z., Su, “Parallel matrix factorization for low-rank tensor completion,” preprint, 2013, arXiv:1312.1254.
[213] M., Yuan and C.-H., Zhang, “On tensor completion via nuclear norm minimization,” Foundations of Computational Mathematics, vol. 16, no. 4, pp. 1031-1068, 2016.Google Scholar
[214] C. T., Zahn, “Graph-theoretical methods for detecting and describing gestalt clusters,” IEEE Transactions on Computers, vol. C-20, no. 1, pp. 68-86, January 1971.Google Scholar
[215] P., Mordohai and G., Medioni, “Tensor voting: A perceptual organization approach to computer vision and machine learning,” Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 2, no. 1, pp. 1-136, 2006.Google Scholar
[216] E., Franken, M., van Almsick, P., Rongen, L., Florack, and B., ter Haar Romeny, An Efficient Method for Tensor Voting Using Steerable Filters, Berlin, Heidelberg, Germany: Springer Berlin Heidelberg, 2006, pp. 228-240.
[217] W. T., Freeman and E. H., Adelson, “The design and use of steerable filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, September 1991.Google Scholar
[218] E., Pan, M., Pan, Z., Han, and V., Wright, “Mobile trace inference based on tensor voting,” in Proceedings of IEEE Global Communications Conference, Austin, TX, USA, December 2014, pp. 4891-4897.
[219] B. J., King, “Range data analysis by free-space modeling and tensor voting,” PhD dissertation, Troy, NY, USA, 2008.
[220] D., Nion, K. N., Mokios, N. D., Sidiropoulos, and A., Potamianos, “Batch and adaptive PARAFAC-based blind separation of convolutive speech mixtures,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1193-1207, August 2010.Google Scholar
[221] N. D., Sidiropoulos, G. B., Giannakis, and R., Bro, “Blind PARAFAC receivers for DS-CDMA systems,” IEEE Transactions on Signal Processing, vol. 48, no. 3, pp. 810-823, March 2000.Google Scholar
[222] N. D., Sidiropoulos, R., Bro, and G. B., Giannakis, “Parallel factor analysis in sensor array processing,” IEEE Transactions on Signal Processing, vol. 48, no. 8, pp. 2377-2388, August 2000.Google Scholar
[223] E. E., Papalexakis, C., Faloutsos, and N. D., Sidiropoulos, ParCube: Sparse Parallelizable Tensor Decompositions. Berlin, Heidelberg, Germany: Springer Berlin Heidelberg, 2012, pp. 521-536.
[224] R., Bro and N. D., Sidiropoulos, “Least squares algorithms under unimodality and nonnegativity constraints,” Journal of Chemometrics, vol. 12, no. 4, pp. 223-247, 1998.Google Scholar
[225] A., Cichocki, D., Mandic, L. D., Lathauwer, G., Zhou, Q., Zhao, C., Caiafa, and H. A., Phan, “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145-163, March 2015.Google Scholar
[226] L., Li and D., Boulware, “High-order tensor decomposition for large-scale data analysis,” in IEEE International Congress on Big Data, June 2015, pp. 665-668.Google Scholar
[227] F., Shang, Y., Liu, and J., Cheng, “Generalized higher-order tensor decomposition via parallel ADMM,” preprint, 2014, arXiv:1407.139.
[228] X., He, D., Cai, and P., Niyogi, “Tensor subspace analysis,” in Advances in Neural Information Processing Systems, vol. 18, pp. 499-506, 2005.Google Scholar
[229] Y., Xu and W., Yin, “A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758-1789, 2013.Google Scholar
[230] B., Romera-Paredes and M., Pontil, “A new convex relaxation for tensor completion,” preprint, 2013, arXiv:1307.4653.
[231] L., Yang, Z.-H., Huang, and Y.-F., Li, “A splitting augmented Lagrangian method for low multilinear-rank tensor recovery,” Asia-Pacific Journal of Operational Research, vol. 32, no. 1, p. 1540008, 2015.Google Scholar
[232] Y., Xu, “Alternating proximal gradient method for sparse nonnegative tucker decomposition,” Mathematical Programming Computation, vol. 7, no. 1, pp. 39-70, 2015.Google Scholar
[233] Y., Xu, “Block coordinate update method in tensor optimization,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758-1789, 2014.Google Scholar
[234] Y., Liu and F., Shang, “An efficient matrix factorization method for tensor completion,” IEEE Signal Processing Letters, vol. 20, no. 4, pp. 307-310, April 2013.Google Scholar
[235] R., Tomioka, T., Suzuki, K., Hayashi, and H., Kashima, “Statistical performance of convex tensor decomposition,” in Advances in Neural Information Processing Systems 24, 2011, pp. 972-980.
[236] Y., Liu, F., Shang, W., Fan, J., Cheng, and H., Cheng, “Generalized higher order orthogonal iteration for tensor learning and decomposition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 12, pp. 2551-2563, December 2016.Google Scholar
[237] A., Krishnamurthy and A., Singh, “Low-rank matrix and tensor completion via adaptive sampling,” in Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2013, pp. 836-844.
[238] Q., Li, A., Prater, L., Shen, and G., Tang, “Overcomplete tensor decomposition via convex optimization,” in IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, December 2015, pp. 53-56.
[239] D., Goldfarb and Z., Qin, “Robust low-rank tensor recovery: Models and algorithms,” SIAM Journal on Matrix Analysis and Applications, vol. 35, no. 1, pp. 225-253, 2014.Google Scholar
[240] J., Brachat, P., Comon, B., Mourrain, and E. P., Tsigaridas, “Symmetric tensor decomposition,” in European Signal Processing Conference, 2009, pp. 525-529.
[241] B., Ran, H., Tan, Y., Wu, and P. J., Jin, “Tensor based missing traffic data completion with spatial-temporal correlation,” Physica A: Statistical Mechanics and its Applications, vol. 446, pp. 54-63, 2016.Google Scholar
[242] J., Liu, P., Musialski, P., Wonka, and J., Ye, “Tensor completion for estimating missing values in visual data,” in IEEE International Conference on Computer Vision, September 2009, pp. 2114-2121.
[243] O. H. M., Padilla and J. G., Scott, “Tensor decomposition with generalized lasso penalties,” preprint, 2015, arXiv:1502.06930.
[244] B., Jiang, S., Ma, and S., Zhang, “Tensor principal component analysis via convex optimization,” Mathematical Programming, vol. 150, no. 2, pp. 423-457, 2015.Google Scholar
[245] L. T., Huang, H. C., So, Y., Chen, and W. Q., Wang, “Truncated nuclear norm minimization for tensor completion,” in IEEE Sensor Array and Multichannel Signal Processing Workshop, June 2014, pp. 417-420.
[246] M., Reisert and H., Burkhardt, “Efficient tensor voting with 3D tensorial harmonics,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2008, pp. 1-7.
[247] G., Guy and G., Medioni, “Inferring global perceptual contours from local features,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1993, pp. 786-787.
[248] J., Kang, I., Cohen, and G., Medioni, “Continuous multi-views tracking using tensor voting,” in Proceedings of Workshop on Motion and Video Computing, 2002, December 2002, pp. 181-186.
[249] P., Kornprobst and G., Medioni, “Tracking segmented objects using tensor voting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, June 2000, vol. 2.
[250] A., Narayanaswamy, Y., Wang, and B., Roysam, “3-D image pre-processing algorithms for improved automated tracing of neuronal arbors,” Neuroinformatics, vol. 9, no. 2, pp. 219-231, 2011.Google Scholar
[251] N., Anjum and A., Cavallaro, “Multifeature object trajectory clustering for video analysis,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1555-1564, November 2008.Google Scholar
[252] J. G., Lee, J., Han, and X., Li, “A unifying framework of mining trajectory patterns of various temporal tightness,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 6, pp. 1478-1490, June 2015.Google Scholar
[253] D., Mouillot and D., Viale, “Satellite tracking of a fin whale (Balaenoptera physalus) in the north-western Mediterranean Sea and fractal analysis of its trajectory,” Hydrobiologia, vol. 452, no. 1, pp. 163-171, 2001.Google Scholar
[254] D., Zhang and J. P. G., Sterbenz, “Robustness analysis of mobile ad hoc networks using human mobility traces,” in International Conference on Design of Reliable Communication Networks, Kansas City, MO, USA, March 2015, pp. 125-132.
[255] H., Liu and J., Li, “Unsupervised multi-target trajectory detection, learning and analysis in complicated environments,” in International Conference on Pattern Recognition, Tsukuba Science City, Japan, November 2012, pp. 3716-3720.
[256] J. G., Ko and J. H., Yoo, “Rectified trajectory analysis based abnormal loitering detection for video surveillance,” in Proceedings of International Conference on Artificial Intelligence, Modelling and Simulation, Kota Kinabalu, Malaysia, December 2013, pp. 289-293.
[257] “Deep learning,” Wikipedia, https://en.wikipedia.org/wiki/Deep_learning.
[258] A. G., Ivakhnenko and V. G., Lapa, “Cybernetic predicting devices,” DTIC Document, Tech. Rep., 1966.
[259] A. G., Ivakhnenko, “Polynomial theory of complex systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-1, no. 4, pp. 364-378, October 1971.Google Scholar
[260] K., Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193-202, 1980.Google Scholar
[261] J., Schmidhuber, “Learning complex, extended sequences using the principle of history compression,” Neural Computation, vol. 4, no. 2, pp. 234-242, March 1992.Google Scholar
[262] J., Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85-117, 2015.Google Scholar
[263] J., Schmidhuber, Habilitation thesis, Technische Universität München, 1993.
[264] G. E., Hinton, P., Dayan, B. J., Frey, and R. M., Neal, “The ‘wake-sleep’ algorithm for unsupervised neural networks,” Science, vol. 268, no. 5214, pp. 1158-1161, 1995.Google Scholar
[265] S., Hochreiter, “Untersuchungen zu dynamischen neuronalen Netzen” (Studies on dynamic neural networks), Diploma thesis, Technische Universität München, 1991.Google Scholar
[266] S., Hochreiter, Y., Bengio, P., Frasconi, and J., Schmidhuber, “Gradient flow in recurrent nets: The difficulty of learning long-term dependencies,” in A Field Guide to Dynamical Recurrent Networks, IEEE Press, 2001.
[267] J., Schmidhuber, “Deep Learning,” Scholarpedia, vol. 10, no. 11, p. 32832, 2015, revision no. 152272.Google Scholar
[268] R., Dechter, “Learning while searching in constraint-satisfaction problems,” in Proceedings of the 5th National Conference on Artificial Intelligence (AAAI-86), Philadelphia, PA, USA, August 1986.
[269] I., Aizenberg, N. N., Aizenberg, and J. P., Vandewalle, Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications, Springer Science & Business Media, 2013.
[270] G. E., Hinton, “Learning multiple layers of representation,” Trends in Cognitive Sciences, vol. 11, no. 10, pp. 428-434, 2007.Google Scholar
[271] S., Hochreiter and J., Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, November 1997.Google Scholar
[272] A., Graves, S., Fernández, F., Gomez, and J., Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of the 23rd International Conference on Machine Learning (ICML ’06), New York, NY, USA: ACM, 2006, pp. 369-376.
[273] H., Sak, A. W., Senior, and F., Beaufays, “Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition,” preprint, 2014, arXiv:1402.1128.
[274] X., Li and X., Wu, “Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4520-4524, April 2015.
[275] H., Zen and H., Sak, “Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, April 2015, pp. 4470-4474.
[276] G., Hinton et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, November 2012.Google Scholar
[277] L., Deng, G., Hinton, and B., Kingsbury, “New types of deep neural network learning for speech recognition and related applications: An overview,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 8599-8603.
[278] L., Deng, J., Li, J. T., Huang, K., Yao, D., Yu, F., Seide, M., Seltzer, G., Zweig, X., He, J., Williams, Y., Gong, and A., Acero, “Recent advances in deep learning for speech research at Microsoft,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 8604-8608.
[279] L., Deng, O., Abdel-Hamid, and D., Yu, “A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, pp. 6669-6673.
[280] T. N., Sainath, A.-R., Mohamed, B., Kingsbury, and B., Ramabhadran, “Deep convolutional neural networks for LVCSR,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, May 2013, pp. 8614-8618.
[281] “Slides on deep learning,” www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf.
[282] L., Deng and D., Yu, “Deep learning: Methods and applications,” Foundations and Trends in Signal Processing, vol. 7, no. 3-4, pp. 197-387, 2014.
[283] D., Yu and L., Deng, Automatic Speech Recognition: A Deep Learning Approach, Springer, 2015.
[284] “Deng receives prestigious IEEE technical achievement award,” http://blogs.technet.com/b/inside_microsoft_research/archive/2015/12/03/deng-receives-prestigious-ieee-technical-achievement-award.aspx.
[285] K.-S., Oh and K., Jung, “GPU implementation of neural networks,” Pattern Recognition, vol. 37, no. 6, pp. 1311-1314, 2004.Google Scholar
[286] K., Chellapilla, S., Puri, and P., Simard, “High performance convolutional neural networks for document processing,” in Tenth International Workshop on Frontiers in Handwriting Recognition, Suvisoft, 2006.
[287] D. C., Ciresan, U., Meier, L. M., Gambardella, and J., Schmidhuber, “Deep big simple neural nets excel on handwritten digit recognition,” preprint, 2010, arXiv:1003.0358.
[288] R., Raina, A., Madhavan, and A. Y., Ng, “Large-scale deep unsupervised learning using graphics processors,” in Proceedings of the Annual International Conference on Machine Learning. New York, NY, USA: ACM, 2009, pp. 873-880.
[289] Y., LeCun, Y., Bengio, and G., Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436-444, 2015.Google Scholar
[290] A., Krizhevsky, I., Sutskever, and G. E., Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, ed. F. Pereira et al., Curran Associates, Inc., 2012, pp. 1097-1105.
[291] I., Sutskever, O., Vinyals, and Q. V., Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, ed. F., Pereira et al., Curran Associates, Inc., 2014, pp. 3104-3112.
[292] K., Cho, B., van Merrienboer, Ç., Gülçehre, F., Bougares, H., Schwenk, and Y., Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” preprint, 2014, arXiv:1406.1078.
[293] G. E., Hinton, S., Osindero, and Y.-W., Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.Google Scholar
[294] Y., Bengio, P., Lamblin, D., Popovici, and H., Larochelle, “Greedy layer-wise training of deep networks,” in Advances in Neural Information Processing Systems, vol. 19, 2007, p. 153.Google Scholar
[295] Y., LeCun, B. E., Boser, J. S., Denker, D., Henderson, R. E., Howard, W. E., Hubbard, and L. D., Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in Neural Information Processing Systems, ed. D. S., Touretzky, Morgan-Kaufmann, 1990, pp. 396-404.
[296] Y., LeCun, L., Bottou, Y., Bengio, and P., Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998.Google Scholar
[297] Y., Bengio, R., Ducharme, P., Vincent, and C., Janvin, “A neural probabilistic language model,” Journal of Machine Learning Research, vol. 3, pp. 1137-1155, February 2003.Google Scholar
[298] F. A., Gers, N. N., Schraudolph, and J., Schmidhuber, “Learning precise timing with LSTM recurrent networks,” Journal of Machine Learning Research, vol. 3, pp. 115-143, August 2002.Google Scholar
[299] A., Graves, D., Eck, N., Beringer, and J., Schmidhuber, “Biologically plausible speech recognition with LSTM neural nets,” in International Workshop on Biologically Inspired Approaches to Advanced Information Technology, Berlin, Heidelberg: Springer, 2004, pp. 127-136.
[300] S., Fernández, A., Graves, and J., Schmidhuber, “An application of recurrent neural networks to discriminative keyword spotting,” in International Conference on Artificial Neural Networks, Berlin, Heidelberg: Springer, 2007, pp. 220-229.
[301] R., McMillan, “How Skype used AI to build its amazing new language translator,” Wired, December 2014.
[302] A. Y., Hannun, C., Case, J., Casper, B., Catanzaro, G., Diamos, E., Elsen, R., Prenger, S., Satheesh, S., Sengupta, A., Coates, and A. Y., Ng, “Deep speech: Scaling up end-to-end speech recognition,” preprint, 2014, arXiv:1412.5567.
[303] “Plenary speakers,” www.icassp2016.org/PlenarySpeakers.asp.
[304] L., Deng, “Achievements and challenges of deep learning,” Microsoft, 2015.
[305] D., Ciregan, U., Meier, and J., Schmidhuber, “Multi-column deep neural networks for image classification,” in IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 2012, pp. 3642-3649.
[306] O., Vinyals, A., Toshev, S., Bengio, and D., Erhan, “Show and tell: A neural image caption generator,” in IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, June 2015, pp. 3156-3164.
[307] H., Fang, S., Gupta, F., Iandola, R. K., Srivastava, L., Deng, P., Dollár, J., Gao, X., He, M., Mitchell, J. C., Platt, C. L., Zitnick, and G., Zweig, “From captions to visual concepts and back,” in IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, June 2015, pp. 1473-1482.
[308] R., Kiros, R., Salakhutdinov, and R. S., Zemel, “Unifying visual-semantic embeddings with multimodal neural language models,” preprint, 2014, arXiv:1411.2539.
[309] S.-h., Zhong, Y., Liu, and Y., Liu, “Bilinear deep learning for image classification,” in Proceedings of the 19th ACM International Conference on Multimedia, New York, NY, USA, 2011, pp. 343-352.
[310] “Nvidia demos a car computer trained with ‘deep learning’,” www.technologyreview.com/s/533936/ces-2015-nvidia-demos-a-car-computer-trained-with-deep-learning/.
[311] “What is a driverless car?,” www.wisegeek.com/what-is-a-driverless-car.htm.
[312] “Self-driving cars now legal in California,” www.cnn.com/2012/09/25/tech/innovation/self-driving-car-california/.
[313] S., Thrun, “Toward robotic cars,” Communications of the ACM, vol. 53, no. 4, pp. 99-106, 2010.Google Scholar
[314] S. K., Gehrig and F. J., Stein, “Dead reckoning and cartography using stereo vision for an autonomous car,” in Proceedings of International Conference on Intelligent Robots and Systems, Kyongju, Korea, 1999, vol. 3, pp. 1507-1512.
[315] “The beginning of the end of driving,” www.motortrend.com/news/the-beginning-of-the-end-of-driving/.
[316] “European roadmap smart systems for automated driving,” www.smart-systems-integration.org/public/documents/publications/EPoSS%20Roadmap_Smart%20Systems%20for%20Automated%20Driving_2015_V1.pdf.
[317] W., Zhu, J., Miao, J., Hu, and L., Qing, “Vehicle detection in driving simulation using extreme learning machine,” Neurocomputing, vol. 128, pp. 160-165, 2014.Google Scholar
[318] F. A., Gers and J., Schmidhuber, “LSTM recurrent networks learn simple context-free and context-sensitive languages,” IEEE Transactions on Neural Networks, vol. 12, no. 6, pp. 1333-1340, November 2001.Google Scholar
[319] R., Józefowicz, O., Vinyals, M., Schuster, N., Shazeer, and Y., Wu, “Exploring the limits of language modeling,” preprint, 2016, arXiv:1602.02410.
[320] D., Gillick, C., Brunk, O., Vinyals, and A., Subramanya, “Multilingual language processing from bytes,” preprint, 2015, arXiv:1512.00103.
[321] R., Socher, J., Bauer, C. D., Manning, and A. Y., Ng, “Parsing with compositional vector grammars,” in ACL, 2013.
[322] R., Socher, A., Perelygin, J. Y., Wu, J., Chuang, C. D., Manning, A. Y., Ng, and C., Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, October 2013, pp. 1631-1642.
[323] Y., Shen, X., He, J., Gao, L., Deng, and G., Mesnil, “A latent semantic model with convolutional-pooling structure for information retrieval,” in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, 2014, pp. 101-110.
[324] P.-S., Huang, X., He, J., Gao, L., Deng, A., Acero, and L., Heck, “Learning deep structured semantic models for web search using clickthrough data,” in Proceedings of the 22nd ACM International Conference on Conference on Information and KnowledgeManagement, San Francisco, CA, USA, October 2013, pp. 2333-2338.
[325] G., Mesnil, Y., Dauphin, K., Yao, Y., Bengio, L., Deng, D., Hakkani-Tur, X., He, L., Heck, G., Tur, D., Yu, and G., Zweig, “Using recurrent neural networks for slot filling in spoken language understanding,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 3, pp. 530-539, March 2015.Google Scholar
[326] J., Gao, X., He, S. W., tau Yih, and L., Deng, “Learning continuous phrase representations for translation modeling,” in ACL. Citeseer, 2014.
[327] J., Gao, P., Pantel, M., Gamon, X., He, and L., Deng, “Modeling interestingness with deep neural networks,” Tech. Rep., October 2014, www.microsoft.com/ en-us /research/publication/modeling-interestingness-with-deep-neural-networks/.
[328] X., He, J., Gao, and L., Deng, “Deep learning for natural language processing: Theory and practice (tutorial),” in Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, Shanghai, China, November 2014.
[329] “Merck molecular activity challenge,” www.kaggle.com/c/MerckActivity/details/winners.
[330] G. E., Dahl, N., Jaitly, and R., Salakhutdinov, “Multi-task neural networks for QSAR predictions,” preprint, 2014, arXiv:1406.1231.
[331] “Merck molecular activity challenge,” https://tripod.nih.gov/tox21/challenge/leaderboard.jsp.
[332] “NCATS announces Tox21 Data Challenge Winners,” https://tripod.nih.gov/tox21/challenge/leaderboard.jsp.
[333] I., Wallach, M., Dzamba, and A., Heifets, “AtomNet: A deep convolutional neural network for bioactivity prediction in structure-based drug discovery,” preprint, 2015, arXiv:1510.02855.
[334] Y., Tkachenko, “Autonomous CRM control via CLV approximation with deep reinforcement learning in discrete and continuous action space,” preprint, 2015, arXiv:1504.01840.
[335] A., van den Oord, S., Dieleman, and B., Schrauwen, “Deep content-based music recommendation,” in Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2013, pp. 2643–2651.
[336] A. M., Elkahky, Y., Song, and X., He, “A multi-view deep learning approach for cross domain user modeling in recommendation systems,” in Proceedings of the 24th International Conference on World Wide Web, ser. WWW ’15. New York, NY, USA: ACM, 2015, pp. 278–288, http://doi.acm.org/10.1145/2736277.2741667.
[337] D., Chicco, P., Sadowski, and P., Baldi, “Deep autoencoder neural networks for gene ontology annotation predictions,” in Proceedings of the 5th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, ACM, Newport Beach, CA, USA, September 2014, pp. 533–540.
[338] Y., Bengio, A., Courville, and P., Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, August 2013.
[339] G. E., Hinton, “Deep belief networks,” Scholarpedia, vol. 4, no. 5, p. 5947, 2009.
[340] M. A., Carreira-Perpiñán and G., Hinton, “On contrastive divergence learning,” in Artificial Intelligence and Statistics Conference, vol. 10, pp. 33–40, 2005.
[341] G. E., Hinton, A Practical Guide to Training Restricted Boltzmann Machines, Springer, 2012, pp. 599–619.
[342] G. E., Hinton, “Products of experts,” in Ninth International Conference on Artificial Neural Networks, vol. 1. Edinburgh, UK: IET, September 1999, pp. 1–6.
[343] G. E., Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Computation, vol. 14, no. 8, pp. 1771–1800, 2002.
[344] Y., Bengio, “Learning deep architectures for AI,” Foundations and Trends® in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[345] C., Szegedy, A., Toshev, and D., Erhan, “Deep neural networks for object detection,” in Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2013, pp. 2553–2561.
[346] H., Larochelle, D., Erhan, A., Courville, J., Bergstra, and Y., Bengio, “An empirical evaluation of deep architectures on problems with many factors of variation,” in Proceedings of the International Conference on Machine Learning, ACM, Corvallis, OR, USA, June 2007, pp. 473–480.
[347] A., Fischer and C., Igel, “Training restricted Boltzmann machines: An introduction,” Pattern Recognition, vol. 47, no. 1, pp. 25–39, 2014.
[348] Y., LeCun, B., Boser, J. S., Denker, D., Henderson, R. E., Howard, W., Hubbard, and L. D., Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, December 1989.
[349] J. J., Weng, N., Ahuja, and T. S., Huang, “Learning recognition and segmentation of 3-D objects from 2-D images,” in Proceedings of the Fourth International Conference on Computer Vision, Berlin, Germany, May 1993, pp. 121–128.
[350] “Convolutional neural network,” http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/.
[351] C., Szegedy, W., Liu, Y., Jia, P., Sermanet, S., Reed, D., Anguelov, D., Erhan, V., Vanhoucke, and A., Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition, June 2015, pp. 1–9.
[352] A., Krizhevsky, “Convolutional deep belief networks on CIFAR-10,” www.cs.toronto.edu/~kriz/conv-cifar10-aug2010.pdf, unpublished, 2010.
[353] H., Lee, R., Grosse, R., Ranganath, and A. Y., Ng, “Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations,” in Proceedings of the Annual International Conference on Machine Learning, ser. ICML ’09. New York, NY, USA: ACM, 2009, pp. 609–616.
[354] A., Graves, M., Liwicki, S., Fernández, R., Bertolami, H., Bunke, and J., Schmidhuber, “A novel connectionist system for unconstrained handwriting recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 855–868, May 2009.
[355] J., Bayer, D., Wierstra, J., Togelius, and J., Schmidhuber, “Evolving memory cell structures for sequence learning,” in International Conference on Artificial Neural Networks, Springer, 2009, pp. 755–764.
[356] S., Fernández, A., Graves, and J., Schmidhuber, “Sequence labelling in structured domains with hierarchical recurrent neural networks,” in Proceedings of the International Joint Conference on Artificial Intelligence, Hyderabad, India, January 2007, pp. 774–779.
[357] A., Graves and J., Schmidhuber, “Offline handwriting recognition with multidimensional recurrent neural networks,” in Advances in Neural Information Processing Systems 21, North Miami Beach, FL, USA: Curran Associates, Inc., 2009, pp. 545–552.
[358] B., Fan, L., Wang, F. K., Soong, and L., Xie, “Photo-real talking head with deep bidirectional LSTM,” in IEEE International Conference on Acoustics, Speech and Signal Processing, April 2015, pp. 4884–4888.
[359] “Google voice search: Faster and more accurate,” https://research.googleblog.com/2015/09/google-voice-search-faster-and-more.html.
[360] Cisco, “Cisco visual networking index: Global mobile data traffic forecast update 2015–2020,” White Paper, 2016.
[361] Apache Spark, “Apache Spark: Lightning-fast cluster computing,” 2016, http://spark.apache.org.
[362] O. D., Lara and M. A., Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1192–1209, 2013.
[363] G. M., Weiss and J. W., Lockhart, “The impact of personalization on smartphone-based activity recognition,” in AAAI Workshop on Activity Context Representation: Techniques and Languages, Palo Alto, CA, USA, 2012.
[364] C., Perera, A., Zaslavsky, P., Christen, and D., Georgakopoulos, “Context aware computing for the Internet of Things: A survey,” IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 414–454, 2014.
[365] P., Vincent, H., Larochelle, I., Lajoie, Y., Bengio, and P.-A., Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” The Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
[366] X., Wang, L., Gao, S., Mao, and S., Pandey, “DeepFi: Deep learning for indoor fingerprinting using channel state information,” in IEEE Wireless Communications and Networking Conference, March 2015, pp. 1666–1671.
[367] N. D., Lane and P., Georgiev, “Can deep learning revolutionize mobile sensing?” in Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, ACM, 2015, pp. 117–122.
[368] J., Ngiam, A., Khosla, M., Kim, J., Nam, H., Lee, and A. Y., Ng, “Multimodal deep learning,” in Proceedings of the International Conference on Machine Learning, 2011, pp. 689–696.
[369] J., Dean et al., “Large scale distributed deep networks,” in Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2012, pp. 1223–1231.
[370] K., Zhang and X.-w., Chen, “Large-scale deep belief nets with MapReduce,” IEEE Access, vol. 2, pp. 395–403, 2014.
[371] J. W., Lockhart, G. M., Weiss, J. C., Xue, S. T., Gallagher, A. B., Grosner, and T. T., Pulickal, “Design considerations for the WISDM smart phone-based sensor mining architecture,” in Proceedings of the 5th International Workshop on Knowledge Discovery from Sensor Data, ACM, 2011, pp. 25–33.
[372] L., von Ahn, B., Maurer, C., McMillen, D., Abraham, and M., Blum, “reCAPTCHA: Human-based character recognition via web security measures,” Science, vol. 321, no. 5895, pp. 1465–1468, 2008.
[373] P., Klemperer, Auctions: Theory and Practice, Princeton, NJ, USA: Princeton University Press, 2004.
[374] H., Abu-Ghazaleh and A. S., Alfa, “Application of mobility prediction in wireless networks using Markov renewal theory,” IEEE Transactions on Vehicular Technology, vol. 59, no. 2, pp. 788–802, February 2010.
[375] D., Katsaros and Y., Manolopoulos, “Prediction in wireless networks by Markov chains,” IEEE Wireless Communications, vol. 16, no. 2, pp. 56–64, April 2009.
[376] J.-K., Lee and J. C., Hou, “Modeling steady-state and transient behaviors of user mobility: Formulation, analysis, and application,” in Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing, ser. MobiHoc ’06. New York, NY, USA: ACM, 2006, pp. 85–96.
[377] B. P., Clarkson, “Life patterns: Structure from wearable sensors,” https://dspace.mit.edu/handle/1721.1/8030, 2002.
[378] N., Eagle and A. S., Pentland, “Eigenbehaviors: Identifying structure in routine,” Behavioral Ecology and Sociobiology, vol. 63, no. 7, pp. 1057–1066, 2009.
[379] W.-C., Peng and M.-S., Chen, “Mining user moving patterns for personal data allocation in a mobile computing system,” in Proceedings of the 2000 International Conference on Parallel Processing, August 2000, pp. 573–580.
[380] J., Chung, O., Paek, J., Lee, and K., Ryu, “Temporal pattern mining of moving objects for location-based service,” in Database and Expert Systems Applications, Springer, 2002, pp. 331–340.
[381] J., Reades, F., Calabrese, and C., Ratti, “Eigenplaces: Analysing cities using the space–time structure of the mobile phone network,” Environment and Planning B: Planning and Design, vol. 36, no. 5, pp. 824–836, 2009.
[382] F., Calabrese, J., Reades, and C., Ratti, “Eigenplaces: Segmenting space through digital signatures,” IEEE Pervasive Computing, vol. 9, no. 1, pp. 78–84, January 2010.
[383] I., Arel, D. C., Rose, and T. P., Karnowski, “Deep machine learning: A new frontier in artificial intelligence research [research frontier],” IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 13–18, November 2010.
[384] G. E., Hinton and R. R., Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
[385] Y. W., Teh and M. I., Jordan, “Hierarchical Bayesian nonparametric models with applications,” in Bayesian Nonparametrics: Principles and Practice, Cambridge, UK: Cambridge University Press, 2010.
[386] R., Thibaux and M. I., Jordan, “Hierarchical beta processes and the Indian buffet process,” in Artificial Intelligence and Statistics Conference, vol. 2, pp. 564–571, 2007.
[387] C. E., Rasmussen, “The infinite Gaussian mixture model,” in Advances in Neural Information Processing Systems, Cambridge, MA, USA: MIT Press, vol. 12, 2000, pp. 554–560.
[388] B., Chen, G., Polatkan, G., Sapiro, L., Carin, and D. B., Dunson, “The hierarchical beta process for convolutional factor analysis and deep learning,” in Proceedings of the International Conference on Machine Learning, New York, NY, USA: ACM, 2011, pp. 361–368.
[389] F., Wood, “A non-parametric Bayesian method for inferring hidden causes,” in Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, AUAI Press, 2006, pp. 536–543.
[390] D., Knowles and Z., Ghahramani, “Infinite sparse factor analysis and infinite independent components analysis,” in International Conference on Independent Component Analysis and Signal Separation, Springer, 2007, pp. 381–388.
[391] T. L., Griffiths and Z., Ghahramani, “The Indian buffet process: An introduction and review,” Journal of Machine Learning Research, vol. 12, pp. 1185–1224, July 2011.
[392] M. A., Carreira-Perpiñán and G., Hinton, “On contrastive divergence learning,” in Artificial Intelligence and Statistics Conference, vol. 10, pp. 33–40, 2005.
[393] E. J., Candès, J., Romberg, and T., Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, February 2006.
[394] E., Candes, J., Romberg, and T., Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[395] E. J., Candès and T., Tao, “Near optimal signal recovery from random projections: Universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, December 2006.
[396] D., Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[397] H., Nyquist, “Certain topics in telegraph transmission theory,” Transactions of the American Institute of Electrical Engineers, vol. 47, no. 2, pp. 617–644, April 1928.
[398] C., Shannon, “Communication in the presence of noise,” Proceedings of the Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, 1949.
[399] B. K., Natarajan, “Sparse approximate solutions to linear systems,” SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
[400] M., Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, 2010.
[401] J., Starck, F., Murtagh, and J., Fadili, Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, Cambridge, UK: Cambridge University Press, 2010.
[402] J., Starck, E., Candes, and D., Donoho, “The curvelet transform for image denoising,” IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, June 2002.
[403] B. A., Olshausen and D. J., Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature, vol. 381, no. 6583, pp. 607–609, 1996.
[404] K., Engan, S., Aase, and J., Husoy, “Multi-frame compression: Theory and design,” Signal Processing, vol. 80, no. 10, pp. 2121–2140, 2000.
[405] M., Aharon, M., Elad, and A., Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, November 2006.
[406] M., Yuan and Y., Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B, vol. 68, no. 1, pp. 49–67, April 2006.
[407] J., Chen and X., Huo, “Theoretical results on sparse representations of multiple-measurement vectors,” IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, December 2006.
[408] F., Bach, “Consistency of the group lasso and multiple kernel learning,” The Journal of Machine Learning Research, vol. 9, pp. 1179–1225, 2008.
[409] D., Malioutov, M., Cetin, and A., Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 3010–3022, August 2005.
[410] S., Cotter, B., Rao, K., Engan, and K., Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Transactions on Signal Processing, vol. 53, no. 7, pp. 2477–2488, July 2005.
[411] J., Meng, W., Yin, H., Li, E., Hossain, and Z., Han, “Collaborative spectrum sensing from sparse observations in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, Special Issue on Advances in Cognitive Radio Networking and Communications, vol. 29, no. 2, pp. 327–337, February 2011.
[412] M., Fazel, “Matrix rank minimization with applications,” PhD dissertation, Stanford University, 2002.
[413] E., Candes and B., Recht, “Exact matrix completion via convex optimization,” Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
[414] Z., Liu and L., Vandenberghe, “Interior-point method for nuclear norm approximation with application to system identification,” SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 1235–1256, 2009.
[415] A., So and Y., Ye, “Theory of semidefinite programming for sensor network localization,” Mathematical Programming, vol. 109, no. 2, pp. 367–384, 2007.
[416] C., Tomasi and T., Kanade, “Shape and motion from image streams under orthography: A factorization method,” International Journal of Computer Vision, vol. 9, no. 2, pp. 137–154, 1992.
[417] T., Morita and T., Kanade, “A sequential factorization method for recovering shape and motion from image streams,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, pp. 858–867, August 1997.
[418] D., Goldberg, D., Nichols, B., Oki, and D., Terry, “Using collaborative filtering to weave an information tapestry,” Communications of the ACM, vol. 35, no. 12, pp. 61–70, 1992.
[419] Y., Eldar and M., Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, November 2009.
[420] Y., Lu and M., Do, “Sampling signals from a union of subspaces,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 41–47, March 2008.
[421] E., Candes and J., Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.
[422] P., Feng and Y., Bresler, “Spectrum-blind minimum-rate sampling and reconstruction of multiband signals,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, May 1996, pp. 1688–1691.
[423] M., Vetterli, P., Marziliano, and T., Blu, “Sampling signals with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417–1428, June 2002.
[424] E., Candes and T., Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, December 2005.
[425] Y., Zhang, “Theory of compressive sensing via ℓ1-minimization: A non-RIP analysis and extensions,” Journal of the Operations Research Society of China, vol. 1, no. 1, pp. 79–105, 2013.
[426] E., Candes and Y., Plan, “A probabilistic and RIPless theory of compressed sensing,” IEEE Transactions on Information Theory, vol. 57, no. 11, pp. 7235–7254, November 2011.
[427] D., Donoho and X., Huo, “Uncertainty principles and ideal atomic decompositions,” IEEE Transactions on Information Theory, vol. 47, no. 7, pp. 2845–2862, November 2001.
[428] R., Gribonval and M., Nielsen, “Sparse representations in unions of bases,” IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3320–3325, December 2003.
[429] Y., Zhang, “A simple proof for recoverability of ℓ1-minimization: Go over or under?” Rice University CAAM Technical Report TR05-09, 2005.
[430] A., Cohen, W., Dahmen, and R. A., DeVore, “Compressed sensing and best k-term approximation,” Journal of the American Mathematical Society, vol. 22, no. 1, pp. 211–231, 2009.
[431] E., Candes, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique, vol. 346, no. 9–10, pp. 589–592, 2008.
[432] S., Foucart and M., Lai, “Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.
[433] S., Foucart, “A note on guaranteed sparse recovery via ℓ1-minimization,” Applied and Computational Harmonic Analysis, vol. 29, no. 1, pp. 97–103, July 2010.
[434] T., Cai, L., Wang, and G., Xu, “Shifting inequality and recovery of sparse signals,” IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1300–1308, March 2010.
[435] Q., Mo and S., Li, “New bounds on the restricted isometry constant δ2k,” Applied and Computational Harmonic Analysis, vol. 31, no. 3, pp. 460–468, 2011.
[436] M., Davenport, “Random observations on random observations: Sparse signal acquisition and processing,” PhD dissertation, Rice University, 2010.
[437] R., Baraniuk, M., Davenport, R., DeVore, and M., Wakin, “A simple proof of the restricted isometry property for random matrices,” Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.
[438] S., Mendelson, A., Pajor, and N., Tomczak-Jaegermann, “Uniform uncertainty principle for Bernoulli and subgaussian ensembles,” Constructive Approximation, vol. 28, no. 3, pp. 277–289, 2008.
[439] H., Rauhut, “Compressive sensing and structured random matrices,” Theoretical Foundations and Numerical Methods for Sparse Recovery, vol. 9, pp. 1–92, 2010.
[440] J., Bourgain, S., Dilworth, K., Ford, S., Konyagin, and D., Kutzarova, “Explicit constructions of RIP matrices and related problems,” Duke Mathematical Journal, vol. 159, no. 1, pp. 145–185, 2011.
[441] J., Haupt, L., Applebaum, and R., Nowak, “On the restricted isometry of deterministically subsampled Fourier matrices,” in Proceedings of the 44th Annual Conference on Information Sciences and Systems, Princeton, NJ, USA, March 2010, pp. 1–6.
[442] P., Indyk, “Explicit constructions for compressed sensing of sparse signals,” in Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ser. SODA ’08, San Francisco, CA, USA, January 2008, pp. 30–33.
[443] S., Vavasis, “Derivation of compressive sensing theorems from the spherical section property,” University of Waterloo, CO, vol. 769, 2009.
[444] B. S., Kashin, “Diameters of some finite-dimensional sets and classes of smooth functions,” Mathematics of the USSR-Izvestiya, vol. 11, p. 317, 1977.
[445] A., Garnaev and E. D., Gluskin, “The widths of a Euclidean ball,” Dokl. Akad. Nauk SSSR, vol. 277, no. 5, pp. 1048–1052, 1984.
[446] D., Du and F., Hwang, Combinatorial Group Testing and Its Applications, World Scientific, 2000.
[447] R., Berinde, A., Gilbert, P., Indyk, H., Karloff, and M., Strauss, “Combining geometry and combinatorics: A unified approach to sparse signal recovery,” in 46th Annual Allerton Conference on Communication, Control, and Computing, September 2008, pp. 798–805.
[448] A., Gilbert and P., Indyk, “Sparse recovery using sparse matrices,” Proceedings of the IEEE, vol. 98, no. 6, pp. 937–947, June 2010.
[449] A., Gilbert, M., Strauss, J., Tropp, and R., Vershynin, “One sketch for all: Fast algorithms for compressed sensing,” in Proceedings of the Annual ACM Symposium on Theory of Computing, 2007, pp. 237–246.
[450] A. C., Gilbert, Y., Li, E., Porat, and M. J., Strauss, “Approximate sparse recovery: Optimizing time and measurements,” SIAM Journal on Computing, vol. 41, no. 2, pp. 436–453, 2012.
[451] Y., Zhang, “When is missing data recoverable?” Rice University CAAM Technical Report TR06-15, 2006.
[452] http://dsp.rice.edu/cs.
[453] J. A., Tropp, J. N., Laska, M. F., Duarte, J. K., Romberg, and R. G., Baraniuk, “Beyond Nyquist: Efficient sampling of sparse bandlimited signals,” IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 520–544, January 2010.
[454] S., Kirolos, J., Laska, M., Wakin, M., Duarte, D., Baron, T., Ragheb, Y., Massoud, and R., Baraniuk, “Analog-to-information conversion via random demodulation,” in IEEE Dallas Circuits and Systems Workshop, Dallas, October 2006.
[455] J. N., Laska, S., Kirolos, M. F., Duarte, T. S., Ragheb, R. G., Baraniuk, and Y., Massoud, “Analog-to-information conversion via random demodulation,” in IEEE International Symposium on Circuits and Systems, ISCAS, New Orleans, LA, USA, May 2007.
[456] M., Mishali and Y. C., Eldar, “From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 375–391, April 2010.
[457] M., Mishali and Y. C., Eldar, “Expected RIP: Conditioning of the modulated wideband converter,” in 2009 IEEE Information Theory Workshop, Sicily, Italy, October 2009.
[458] M., Mishali, Y. C., Eldar, and J. A., Tropp, “Efficient sampling of sparse wideband analog signals,” in IEEE Convention of Electrical and Electronics Engineers, Israel, December 2008, pp. 290–294.
[459] M., Mishali, Y. C., Eldar, and A., Elron, “Xampling: Signal acquisition and processing in union of subspaces,” IEEE Transactions on Signal Processing, vol. 59, no. 10, pp. 4719–4734, October 2011.
[460] T., Michaeli and Y. C., Eldar, “Xampling at the rate of innovation,” IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1121–1133, March 2012.
[461] K., Gedalyahu and Y., Eldar, “Time-delay estimation from low-rate samples: A union of subspaces approach,” IEEE Transactions on Signal Processing, vol. 58, no. 6, pp. 3017–3031, June 2010.
[462] E., Matusiak and Y., Eldar, “Sub-Nyquist sampling of short pulses,” IEEE Transactions on Signal Processing, vol. 60, no. 3, pp. 1134–1148, March 2012.
[463] M., Mishali and Y. C., Eldar, “Xampling: Compressed sensing of analog signals,” in Compressed Sensing: Theory and Applications, Cambridge, UK: Cambridge University Press, 2012.
[464] M., Satyanarayanan, “Pervasive computing: Vision and challenges,” IEEE Personal Communications, vol. 8, no. 4, pp. 10–17, August 2001.
[465] R., Glidden, C., Bockorick, S., Cooper, C., Diorio, D., Dressler, V., Gutnik, C., Hagen, D., Hara, T., Hass, T., Humes, J., Hyde, R., Oliver, O., Onen, A., Pesavento, K., Sundstrom, and M., Thomas, “Design of ultra-low-cost UHF RFID tags for supply chain applications,” IEEE Communications Magazine, vol. 42, no. 8, pp. 140–151, August 2004.
[466] L., Mo, Y., He, Y., Liu, J., Zhao, S.-J., Tang, X.-Y., Li, and G., Dai, “Canopy closure estimates with GreenOrbs: Sustainable sensing in the forest,” in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, ser. SenSys ’09. New York, NY, USA: ACM, 2009, pp. 99–112.
[467] A., Goldsmith, Wireless Communications, Cambridge, UK: Cambridge University Press, 2005.
[468] T. S., Rappaport, Wireless Communications: Principles and Practice, 2nd ed., USA: Prentice Hall, 2001.
[469] S. S., Chen, D. L., Donoho, and M. A., Saunders, “Atomic decomposition by basis pursuit,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
[470] W. U., Bajwa, J., Haupt, A. M., Sayeed, and R., Nowak, “Compressed channel sensing: A new approach to estimating sparse multipath channels,” Proceedings of the IEEE, vol. 98, no. 6, pp. 1058–1076, June 2010.
[471] J. L., Paredes, G. R., Arce, and Z., Wang, “Ultra-wideband compressed sensing channel estimation,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 3, pp. 383–395, October 2007.
[472] C. R., Berger, Z., Wang, Z., Huang, and S., Zhou, “Application of compressive sensing to sparse channel estimation,” IEEE Communications Magazine, vol. 48, pp. 164–174, November 2010.
[473] P., Zhang, Z., Hu, R. C., Qiu, and B. M., Sadler, “A compressive sensing based ultra-wideband communication system,” in Proceedings of IEEE International Conference on Communications, 2009, pp. 1–5.
[474] J., Romberg, “Multiple channel estimation using spectrally random probes,” in Proceedings of SPIE Wavelets XIII, 2009.
[475] W. U., Bajwa, A., Sayeed, and R., Nowak, “Compressed sensing of wireless channels in time, frequency, and space,” in 42nd Asilomar Conference on Signals, Systems and Computers, October 2008, pp. 2048–2052.
[476] Z., Sahinoglu, S., Gezici, and I., Guvenc, Ultra-wideband Positioning Systems: Theoretical Limits, Ranging Algorithms and Protocols, Cambridge, UK: Cambridge University Press, 2011.
[477] G., Staple and K., Werbach, “The end of spectrum scarcity,” IEEE Spectrum, vol. 41, no. 3, pp. 48–52, March 2004.
[478] S., Haykin, “Cognitive radio: Brain-empowered wireless communications,” IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, February 2005.
[479] H., Kim and K. G., Shin, “Efficient discovery of spectrum opportunities with MAC-layer sensing in cognitive radio networks,” IEEE Transactions on Mobile Computing, vol. 7, no. 5, pp. 533–545, May 2008.
[480] Federal Communications Commission, “Longley-Rice methodology for evaluating TV coverage and interference,” Office of Engineering and Technology Bulletin, no. 69, 2004.
[481] S. M., Mishra, A., Sahai, and R., Brodersen, “Cooperative sensing among cognitive radios,” in Proceedings of IEEE International Conference on Communications, Istanbul, Turkey, June 2006, 1658-1663.
[482] A., Ghasemi and E. S., Sousa, “Collaborative spectrum sensing for opportunistic access in fading environments,” in Proceedings of IEEE International Symposium on New Frontiersin Dynamic Spectrum Access Networks, Baltimore, MD, USA, November 2005, pp. 131- 136.
[483] A., Ghasemi and E. S., Sousa, “Opportunistic spectrum access in fading channels through collaborative sensing,Journal of Communications, vol. 2, no. 2, 71-82, March 2007.Google Scholar
[484] W., Saad, Z., Han, M., Debbah, A., Hjørungnes, and T., Basar, “Coalitional games for distributed collaborative spectrum sensing in cognitive radio networks,” in Proceedings ofIEEE Conference on Computer Communications, Rio de Janeiro, Brazil, April 2009, 2114-2122.
[485] G., Ghurumuruhan and Y., Li, “Cooperative spectrum sensing in cognitive radio: Part I: Two user networks,” IEEE Transactions on Wireless Communications, vol. 6, no. 6, pp. 2204-2213, June 2007.
[486] G., Ghurumuruhan and Y., Li, “Cooperative spectrum sensing in cognitive radio: Part II: Multiuser networks,” IEEE Transactions on Wireless Communications, vol. 6, no. 6, pp. 2214-2222, June 2007.
[487] J., Unnikrishnan and V. V., Veeravalli, “Cooperative sensing for primary detection in cognitive radio,” IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 18-27, February 2008.
[488] S., Cui, Z., Quan, and A., Sayed, “Optimal linear cooperation for spectrum sensing in cognitive radio networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 28-40, February 2008.
[489] W., Zhang, C., Sun, and K. B., Letaief, “Cluster-based cooperative spectrum sensing in cognitive radio systems,” in Proceedings of International Conference on Communications, Glasgow, Scotland, June 2007, pp. 2511-2515.
[490] W., Zhang, C., Sun, and K. B., Letaief, “Cooperative spectrum sensing for cognitive radios under bandwidth constraints,” in Proceedings of IEEE Wireless Communications and Networking Conference, Hong Kong, China, February 2007, pp. 25-30.
[491] C. H., Lee and W., Wolf, “Energy efficient techniques for cooperative spectrum sensing in cognitive radios,” in Proceedings of IEEE Consumer Communications and Networking Conference, Las Vegas, NV, USA, January 2008, pp. 968-972.
[492] A., Plaza, J. A., Benediktsson, J., Boardman, J., Brazile, L., Bruzzone, G., Camps-Valls, J., Chanussot, M., Fauvel, P., Gamba, J., Gualtieri, M., Marconcini, J. C., Tilton, and G., Trianni, “Recent advances in techniques for hyperspectral image processing,” Remote Sensing of Environment, vol. 113, no. 1, pp. 110-122, 2009.
[493] J. M., Bioucas-Dias, A., Plaza, G., Camps-Valls, P., Scheunders, N. M., Nasrabadi, and J., Chanussot, “Hyperspectral remote sensing data analysis and future challenges,” IEEE Geoscience and Remote Sensing Magazine, vol. 1, no. 2, pp. 6-36, June 2013.
[494] E. J., Candès and M. B., Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, March 2008.
[495] R. M., Willett, M. F., Duarte, M. A., Davenport, and R. G., Baraniuk, “Sparsity and structure in hyperspectral imaging: Sensing, reconstruction, and target detection,” IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 116-126, January 2014.
[496] J. E., Fowler, “Compressive pushbroom and whiskbroom sensing for hyperspectral remote-sensing imaging,” in Proceedings of IEEE International Conference on Image Processing, Paris, France, October 2014, pp. 684-688.
[497] M. F., Duarte, M. A., Davenport, D., Takhar, J. N., Laska, T., Sun, K. E., Kelly, and R. G., Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, March 2008.
[498] C., Li, T., Sun, K. F., Kelly, and Y., Zhang, “A compressive sensing and unmixing scheme for hyperspectral data processing,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1200-1210, March 2012.
[499] M., Golbabaee and P., Vandergheynst, “Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery,” in Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing, Kyoto, Japan, March 2012, pp. 2741-2744.
[500] G., Martín, J. M., Bioucas-Dias, and A., Plaza, “HYCA: A new technique for hyperspectral compressive sensing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2819-2831, May 2015.
[501] H., Ren and C., Chang, “Automatic spectral target recognition in hyperspectral imagery,” IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1232-1249, October 2003.
[502] Y., Chen, N., Nasrabadi, and T., Tran, “Sparse representation for target detection in hyperspectral imagery,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 3, pp. 629-640, June 2011.
[503] Y., Chen, N. M., Nasrabadi, and T. D., Tran, “Simultaneous joint sparsity model for target detection in hyperspectral imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 4, pp. 676-680, July 2011.
[504] G., Mateos, J. A., Bazerque, and G. B., Giannakis, “Distributed sparse linear regression,” IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5262-5276, 2010.
[505] J. F. C., Mota, J. M. F., Xavier, P. M. Q., Aguiar, and M., Puschel, “Distributed basis pursuit,” IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1942-1956, April 2012.
[506] I., Foster, Y., Zhao, I., Raicu, and S., Lu, “Large-scale sparse logistic regression,” in Proceedings of ACM International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, June 2009, pp. 547-556.
[507] S. S., Ram, A., Nedic, and V. V., Veeravalli, “A new class of distributed optimization algorithms: Application to regression of distributed data,” Optimization Methods and Software, vol. 27, no. 1, pp. 71-88, 2012.
[508] D., Kempe, A., Dobra, and J., Gehrke, “Gossip-based computation of aggregate information,” in Proceedings of Annual IEEE Symposium on Foundations of Computer Science, Cambridge, MA, USA, October 2003, pp. 482-491.
[509] Q., Ling, Y., Xu, W., Yin, and Z., Wen, “Decentralized low-rank matrix completion,” in Proceedings of IEEE International Conference on Acoustic, Speech and Signal Processing, Kyoto, Japan, March 2012, pp. 2925-2928.
[510] A., Nedic and A., Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48-61, January 2009.
[511] A., Nedic and A., Ozdaglar, “Cooperative distributed multi-agent optimization,” in Convex Optimization in Signal Processing and Communications, Cambridge, UK: Cambridge University Press, 2009.
[512] A., Nedic, A., Ozdaglar, and P., Parrilo, “Constrained consensus and optimization in multiagent networks,” IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 922-938, April 2010.
[513] K., Srivastava and A., Nedic, “Distributed asynchronous constrained stochastic optimization,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 772-790, August 2011.
[514] J., Tsitsiklis, “Problems in decentralized decision making and computation,” PhD thesis, MIT, 1984.
[515] K., Yuan, Q., Ling, and W., Yin, “On the convergence of decentralized gradient descent,” SIAM Journal on Optimization, 2015.
[516] I., Chen, “Fast distributed first-order methods,” Master's thesis, MIT, 2012.
[517] K. I., Tsianos and M. G., Rabbat, “Distributed strongly convex optimization,” in 50th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, October 2012, pp. 593-600.
[518] D., Bertsekas and J., Tsitsiklis, Parallel and Distributed Computation: Numerical Methods (2nd ed.), Belmont, MA, USA: Athena Scientific, 1997.
[519] G. B., Giannakis, Q., Ling, G., Mateos, I. D., Schizas, and H., Zhu, “Proximal splitting methods in signal processing,” in Splitting Methods in Communication and Imaging, New York, USA: Springer, 2015.
[520] I., Schizas, A., Ribeiro, and G., Giannakis, “Consensus in ad hoc WSNs with noisy links - Part I: Distributed estimation of deterministic signals,” IEEE Transactions on Signal Processing, vol. 56, no. 1, pp. 350-364, 2008.
[521] W., Shi, Q., Ling, K., Yuan, G., Wu, and W., Yin, “On the linear convergence of the ADMM in decentralized consensus optimization,” IEEE Transactions on Signal Processing, vol. 62, no. 7, pp. 1750-1761, April 2014.
[522] Q., Ling, W., Shi, G., Wu, and A., Ribeiro, “DLM: Decentralized linearized alternating direction method of multipliers,” IEEE Transactions on Signal Processing, vol. 63, no. 15, pp. 4051-4064, August 2015.
[523] N., Parikh and S., Boyd, “Proximal algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 1-112, 2013.
[524] W., Shi, Q., Ling, G., Wu, and W., Yin, “EXTRA: An exact first-order algorithm for decentralized consensus optimization,” SIAM Journal on Optimization, vol. 25, no. 2, pp. 944-966, 2014.
[525] J. C., Duchi, A., Agarwal, and M. J., Wainwright, “Dual averaging for distributed optimization: Convergence analysis and network scaling,” IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 592-606, March 2012.
[526] Y., Nesterov, “Primal-dual subgradient methods for convex problems,” Mathematical Programming, vol. 120, no. 1, pp. 261-283, 2009.
[527] M., Hong, Z.-Q., Luo, and M., Razaviyayn, “On the convergence of alternating direction method of multipliers for a family of nonconvex problems,” in 40th International Conference on Acoustic, Speech and Signal Processing, Brisbane, Australia, April 2015.
[528] A., Nedic and A., Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Transactions on Automatic Control, vol. 60, no. 3, pp. 601-615, March 2015.
[529] E., Wei and A., Ozdaglar, “On the O(1/k) convergence of asynchronous distributed alternating direction method of multipliers,” in Global Conference on Signal and Information Processing, Austin, TX, USA, December 2013.
[530] D., Bertsekas, “Incremental gradient, subgradient, and proximal methods for convex optimization: A survey,” Optimization for Machine Learning, vol. 4, pp. 85-119, 2012.
[531] Z.-Q., Luo, “On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward networks,” Neural Computation, vol. 3, no. 2, pp. 226-245, 1991.
[532] D., Blatt, A. O., Hero, and H., Gauchman, “A convergent incremental gradient method with a constant step size,” SIAM Journal on Optimization, vol. 18, no. 1, pp. 29-51, 2007.
[533] M., Gurbuzbalaban, A., Ozdaglar, and P., Parrilo, “Convergence rate of incremental aggregated gradient algorithms,” IEEE Transactions on Signal Processing, preprint, 2015.
[534] N., Le Roux, M., Schmidt, and F., Bach, “A stochastic gradient method with an exponential convergence rate for strongly-convex optimization with finite training sets,” in Proceedings of the Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2012.
[535] A., Defazio, F., Bach, and S., Lacoste-Julien, “SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives,” in Proceedings of Annual Conference on Neural Information Processing Systems, Montreal, Canada, December 2014.
[536] R., Johnson and T., Zhang, “Accelerating stochastic gradient descent using predictive variance reduction,” in Proceedings of the Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, December 2013.
[537] L., Xiao and T., Zhang, “A proximal stochastic gradient method with progressive variance reduction,” SIAM Journal on Optimization, vol. 24, pp. 2057-2075, 2014.
[538] J., Konecny, Z., Qu, and P., Richtarik, “Semi-stochastic coordinate descent,” preprint, 2014, arXiv:1412.6293.
[539] S., Shalev-Shwartz and T., Zhang, “Stochastic dual coordinate ascent methods for regularized loss minimization,” Journal of Machine Learning Research, vol. 14, pp. 567-599, 2013.
[540] A., Agarwal and L., Bottou, “A lower bound for the optimization of finite sums,” in Proceedings of the International Conference on Machine Learning, Lille, France, June 2015.
[541] A., Nemirovsky and D., Yudin, “Problem complexity and method efficiency in optimization,” in Interscience Series in Discrete Mathematics, Wiley, 1983.
[542] G., Lan, “An optimal randomized incremental gradient method,” preprint, 2015, https://arxiv.org/abs/1507.02000.
[543] Y., Li and S., Osher, “Coordinate descent optimization for L1 minimization with applications to compressed sensing: A greedy algorithm,” Inverse Problems and Imaging, vol. 3, no. 3, pp. 487-503, 2009.
[544] L., Bottou and O., Bousquet, “The tradeoffs of large scale learning,” in Advances in Neural Information Processing Systems, Vancouver, Canada, December 2008.
[545] M., Zinkevich, M., Weimer, A., Smola, and L., Li, “Parallelized stochastic gradient descent,” in Advances in Neural Information Processing Systems, Vancouver, Canada, December 2010.
[546] F., Niu, B., Recht, C., Re, and S. J., Wright, “Hogwild: A lock-free approach to parallelizing stochastic gradient descent,” in Advances in Neural Information Processing Systems, Granada, Spain, December 2011.
[547] C. J., Hsieh, K. W., Chang, C. J., Lin, S. S., Keerthi, and S., Sundararajan, “A dual coordinate descent method for large-scale linear SVM,” in International Conference on Machine Learning, Helsinki, Finland, July 2008.
[548] S., Boyd, N., Parikh, E., Chu, B., Peleato, and J., Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, November 2010.
[549] R. M., Freund and P., Grigas, “New analysis and results for the Frank-Wolfe method,” preprint, 2014, http://arXiv.org/abs/1307.0873.
[550] S., Lacoste-Julien, M., Jaggi, M., Schmidt, and P., Pletscher, “Block-coordinate Frank-Wolfe optimization for structural SVMs,” in International Conference on Machine Learning, Atlanta, GA, USA, June 2013.
[551] M., Grant and S., Boyd, “CVX: Matlab software for disciplined convex programming, Version 2.1,” http://cvxr.com/cvx, March 2014.
[552] B., Chun, S., Ihm, P., Maniatis, M., Naik, and A., Patti, “CloneCloud: Elastic execution between mobile device and cloud,” in Conference on Computer Systems, Salzburg, Austria, April 2011.
[553] M., Chiang, S. H., Low, A. R., Calderbank, and J. C., Doyle, “Layering as optimization decomposition: A mathematical theory of network architectures,” Proceedings of the IEEE, vol. 95, no. 1, pp. 255-312, January 2007.
[554] S., Kosta, A., Aucinas, P., Hui, R., Mortier, and X., Zhang, “ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading,” in Proceedings of IEEE International Conference on Computer Communications, Orlando, FL, USA, March 2012.
[555] R., Bifulco, M., Brunner, R., Canonico, R., Hasselmeyer, and F., Mir, “Scalability of a mobile cloud management system,” in ACM Special Interest Group on Data Communications Workshop on Mobile Cloud Computing, Helsinki, Finland, August 2012.
[556] P., Wendell, J. W., Jiang, M. J., Freedman, and J., Rexford, “Donar: Decentralized server selection for cloud services,” in Proceedings of the ACM Special Interest Group on Data Communications, New Delhi, India, August 2010.
[557] Z., Zhang, M., Zhang, A., Greenberg, Y. C., Hu, R., Mahajan, and B., Christian, “Optimizing cost and performance in online service provider networks,” in Proceedings of USENIX Symposium on Networked Systems Design and Implementation, San Jose, CA, USA, April 2010.
[558] D. K., Goldenberg, L., Qiu, H., Xie, Y. R., Yang, and Y., Zhang, “Optimizing cost and performance for multihoming,” in Proceedings of ACM Special Interest Group on Data Communications Workshop on Mobile Cloud Computing, Portland, OR, USA, August 2002.
[559] H., Xu, C., Feng, and B., Li, “Temperature aware workload management in geo-distributed datacenters,” in Proceedings of USENIX International Conference on Autonomic Computing, San Jose, CA, USA, June 2013.
[560] J. W., Jiang, R., Zhang-Shen, J., Rexford, and M., Chiang, “Cooperative content distribution and traffic engineering in an ISP network,” in Proceedings of ACM Special Interest Group on Measurement and Evaluation, Seattle, WA, USA, June 2009.
[561] S., Narayana, J. W., Jiang, J., Rexford, and M., Chiang, “To coordinate or not to coordinate? Wide-area traffic management for data centers,” in Proceedings of ACM International Conference on emerging Networking Experiments and Technologies, Nice, France, December 2012.
[562] H., Xu and B., Li, “Joint request mapping and response routing for geo-distributed cloud services,” in Proceedings of IEEE International Conference on Computer Communications, Turin, Italy, April 2013.
[563] M., Satyanarayanan, P., Bahl, R., Caceres, and N., Davies, “The case for VM-based cloudlets in mobile computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, October 2009.
[564] N., McKeown, T., Anderson, H., Balakrishnan, G., Parulkar, L., Peterson, J., Rexford, S., Shenker, and J., Turner, “OpenFlow: Enabling innovation in campus networks,” in Proceedings of the ACM Special Interest Group on Data Communications, Seattle, WA, USA, August 2008.
[565] J., Mo and J., Walrand, “Fair end-to-end window-based congestion control,” IEEE/ACM Transactions on Networking, vol. 8, no. 6, pp. 556-567, 2000.
[566] F. P., Kelly, A., Maulloo, and D., Tan, “Rate control for communication networks: Shadow price, proportional fairness and stability,” Journal of the Operational Research Society, vol. 49, pp. 237-252, March 1998.
[567] L., Rao, X., Liu, L., Xie, and W., Liu, “Minimizing electricity cost: Optimization of distributed internet data centers in a multi-electricity-market environment,” in Proceedings of the IEEE International Conference on Computer Communications, San Diego, CA, USA, March 2010.
[568] A., Greenberg, J., Hamilton, D. A., Maltz, and P., Patel, “The cost of a cloud: Research problems in data center networks,” ACM Special Interest Group on Data Communications Computer Communication Review, vol. 39, no. 1, pp. 68-73, January 2009.
[569] X., Fan, W., Weber, and L. A., Barroso, “Power provisioning for a warehouse-sized computer,” in Proceedings of the ACM International Symposium on Computer Architecture, San Diego, CA, USA, June 2007.
[570] H. V., Madhyastha, T., Isdal, M., Piatek, C., Dixon, T., Anderson, A., Krishnamurthy, and A., Venkataramani, “iPlane: An information plane for distributed services,” in Proceedings of USENIX Symposium on Networked Systems Design and Implementation, Seattle, WA, USA, November 2006.
[571] “AT&T 2012 Sustainability Report: Water,” www.att.com/gen/landing-pages?pid=24188.
[572] “Water use hints at problems at Utah Data Center,” www.sltrib.com/.
[573] Office of the Governor of California, “Governor Brown declares drought state of emergency,” 2014, http://gov.ca.gov/news.php?id=18368.
[574] Z., Liu, Y., Chen, C., Bash, A., Wierman, D., Gmach, Z., Wang, M., Marwah, and C., Hyser, “Renewable and cooling aware workload management for sustainable data centers,” in Proceedings of Special Interest Group on Performance Evaluation, London, UK, June 2012.
[575] E., Frachtenberg, “Holistic datacenter design in the open compute project,” Computer, vol. 45, no. 7, pp. 83-85, 2012.
[576] D., Alger, Grow a Greener Data Center, New Jersey: Cisco Press, 2009.
[577] S., Ren, “Optimizing water efficiency in distributed data centers,” in Proceedings of International Conference on Cloud and Green Computing, Karlsruhe, Germany, September 2013.
[578] “Google data centers,” www.google.com/about/datacenters/.
[579] Uptime Institute, “Data center industry survey,” 2013, http://uptimeinstitute.com/2013-survey-results.
[580] eBay, http://tech.ebay.com/dashboard.
[581] “Prineville data center,” www.facebook.com/PrinevilleDataCenter/.
[582] K., Papagiannaki, N., Taft, Z., Zhang, and C., Diot, “Long-term forecasting of internet backbone traffic: Observations and initial methods,” in Proceedings of the IEEE International Conference on Computer Communications, San Francisco, CA, USA, March 2003.
[583] G., Wang, J., Wu, G., Zhou, and G., Li, “Collision-tolerant media access control for asynchronous users over frequency-selective channels,” IEEE Transactions on Wireless Communications, vol. 12, no. 10, pp. 5162-5171, 2013.
[584] L., Liu, S., Ren, and Z., Han, “Scalable workload management for water efficiency in data centers,” in IEEE Global Communications Conference, Austin, TX, USA, December 2014.
[585] C., Liang and F., Yu, “Wireless network virtualization: A survey, some research issues and challenges,” IEEE Communications Surveys & Tutorials, vol. 17, no. 1, pp. 358-380, 2015.
[586] A., Fischer, J., Botero, M., Till Beck, H., de Meer, and X., Hesselbach, “Virtual network embedding: A survey,” IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.
[587] A., Belbekkouche, M. M., Hasan, and A., Karmouch, “Resource discovery and allocation in network virtualization,” IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1114-1128, 2012.
[588] N., Feamster, L., Gao, and J., Rexford, “How to lease the internet in your spare time,” ACM Special Interest Group on Data Communications Computer Communication Review, vol. 37, no. 1, pp. 61-64, January 2007.
[589] N., Chowdhury and R., Boutaba, “Network virtualization: State of the art and research challenges,” IEEE Communications Magazine, vol. 47, no. 7, pp. 20-26, July 2009.
[590] H., Wen, P. K., Tiwary, and T., Le-Ngoc, Wireless Virtualization, ser. Springer Briefs in Computer Science. Springer, 2013.
[591] H., Wen, P., Tiwary, and T., Le-Ngoc, “Current trends and perspectives in wireless virtualization,” in International Conference on Selected Topics in Mobile and Wireless Networking, Montreal, Canada, August 2013, pp. 62-67.
[592] F., Fu and U., Kozat, “Stochastic game for wireless network virtualization,” IEEE/ACM Transactions on Networking, vol. 21, no. 1, pp. 84-97, February 2013.
[593] Q., Zhu and X., Zhang, “Game-theory based power and spectrum virtualization for maximizing spectrum efficiency over mobile cloud-computing wireless networks,” in Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, March 2015.
[594] M., Yang, Y., Li, J., Liu, D., Jin, J., Yuan, and L., Zeng, “Opportunistic spectrum sharing for wireless virtualization,” in IEEE Wireless Communications and Networking Conference, Istanbul, Turkey, April 2014, pp. 1803-1808.
[595] G., Liu, F., Yu, H., Ji, and V., Leung, “Distributed resource allocation in full-duplex relaying networks with wireless virtualization,” in IEEE Global Communications Conference, Austin, TX, USA, December 2014, pp. 4959-4964.
[596] R., Kokku, R., Mahindra, H., Zhang, and S., Rangarajan, “NVS: A substrate for virtualizing wireless resources in cellular networks,” IEEE/ACM Transactions on Networking, vol. 20, no. 5, pp. 1333-1346, October 2012.
[597] L., Xiao, M., Johansson, and S., Boyd, “Simultaneous routing and resource allocation via dual decomposition,” IEEE Transactions on Communications, vol. 52, no. 7, pp. 1136-1144, July 2004.
[598] Y., Chen, S., Zhang, S., Xu, and G., Li, “Fundamental trade-offs on green wireless networks,” IEEE Communications Magazine, vol. 49, no. 6, pp. 30-37, June 2011.
[599] E., Hossain, Z., Han, and H. V., Poor, Smart Grid Communications and Networking, Cambridge, UK: Cambridge University Press, 2012.
[600] U.S.-Canada Power System Outage Task Force, “Final report on the August 14, 2003 blackout in the United States and Canada: Causes and recommendations,” Tech. Rep., April 2004.
[601] S., Gorman, “Electricity grid in U.S. penetrated by spies,” The Wall Street Journal, April 2009.
[602] A., Abur and A. G., Exposito, Power System State Estimation: Theory and Implementation, New York, USA: Marcel Dekker, Inc., 2004.
[603] Y., Liu, M. K., Reiter, and P., Ning, “False data injection attacks against state estimation in electric power grids,” in Proceedings of 16th ACM Conference on Computer and Communications Security, Chicago, IL, USA, November 2009.
[604] J. J., Grainger and W. D., Stevenson, Jr., Power System Analysis, New York, USA: McGraw-Hill, 1994.
[605] L., Xie, Y., Mo, and B., Sinopoli, “False data injection attacks in electricity markets,” in Proceedings of IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, October 2010.
[606] M., Esmalifalak, Z., Han, and L., Song, “Effect of stealthy bad data injection on network congestion in market based power system,” in Proceedings of IEEE Wireless Communications and Networking Conference, Paris, France, April 2012.
[607] G., Dán and H., Sandberg, “Stealth attacks and protection schemes for state estimators in power systems,” in Proceedings of IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, October 2010.
[608] M., Esmalifalak, G., Shi, Z., Han, and L., Song, “Bad data injection attack and defense in electricity market using game theory study,” IEEE Transactions on Smart Grid, vol. 4, no. 1, pp. 160-169, March 2013.
[609] O., Kosut, L., Jia, R. J., Thomas, and L., Tong, “Malicious data attacks on the smart grid,” IEEE Transactions on Smart Grid, vol. 2, no. 4, pp. 645-658, December 2011.
[610] L., Liu, M., Esmalifalak, and Z., Han, “Detection of false data injection in power grid exploiting low rank and sparsity,” in IEEE International Conference on Smart Grid Communications, Budapest, Hungary, June 2013.
[611] T. T., Kim and H. V., Poor, “Strategic protection against data injection attacks on power grids,” IEEE Transactions on Smart Grid, vol. 2, no. 2, pp. 326-333, June 2011.
[612] S., Cui, Z., Han, S., Kar, T. T., Kim, H. V., Poor, and A., Tajer, “Coordinated data-injection attack and detection in the smart grid: A detailed look at enriching detection solutions,” IEEE Signal Processing Magazine, vol. 29, no. 5, pp. 106-115, September 2012.
[613] Y., Zhao, A., Goldsmith, and H. V., Poor, “Fundamental limits of cyber-physical security in smart power grids,” in Proceedings of IEEE 52nd Annual Conference on Decision and Control, Florence, Italy, December 2013.
[614] M., Shahidehpour, W. F., Tinney, and Y., Fu, “Impact of security on power system operation,” Proceedings of the IEEE, vol. 93, no. 11, pp. 2013-2025, November 2005.
[615] O., Alsac and B., Stott, “Optimal load flow with steady-state security,” IEEE Transactions on Power Apparatus and Systems, vol. 93, no. 3, pp. 745-751, May 1974.
[616] M. V. F., Pereira, A., Monticelli, and L. M. V. G., Pinto, “Security-constrained dispatch with corrective rescheduling,” in Proceedings of IFAC Symposium on Planning and Operation of Electric Energy System, Rio de Janeiro, Brazil, July 1985.
[617] A. J., Wood and B. F., Wollenberg, Power Generation Operation and Control, New York, USA: Wiley, 1996.
[618] A., Monticelli, M. V. F., Pereira, and S., Granville, “Security-constrained optimal power flow with post-contingency corrective rescheduling,” IEEE Transactions on Power Systems, vol. 2, no. 1, pp. 175-180, February 1987.
[619] F., Capitanescu, J. L. M., Ramos, P., Panciatici, D., Kirschen, A. M., Marcolini, L., Platbrood, and L., Wehenkel, “State-of-the-art, challenges, and future trends in security constrained optimal power flow,” Electric Power Systems Research, vol. 81, no. 8, pp. 1731-1741, August 2011.
[620] J., Martínez-Crespo, J., Usaola, and J. L., Fernández, “Security-constrained optimal generation scheduling in large-scale power systems,” IEEE Transactions on Power Systems, vol. 21, no. 1, pp. 321-332, February 2006.
[621] Y., Fu, M., Shahidehpour, and Z., Li, “AC contingency dispatch based on security-constrained unit commitment,” IEEE Transactions on Power Systems, vol. 21, no. 2, pp. 897-908, May 2006.
[622] F., Capitanescu and L., Wehenkel, “A new iterative approach to the corrective security-constrained optimal power flow problem,” IEEE Transactions on Power Systems, vol. 23, no. 4, pp. 1533-1541, November 2008.
[623] Y., Li and J. D., McCalley, “Decomposed SCOPF for improving efficiency,” IEEE Transactions on Power Systems, vol. 24, no. 1, pp. 494-495, February 2009.
[624] Z., Han, H., Li, and W., Yin, Compressive Sensing for Wireless Communication, Cambridge, UK: Cambridge University Press, 2012.
[625] E. J., Candès, X., Li, Y., Ma, and J., Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, pp. 1-37, May 2011.
[626] Z., Lin, M., Chen, L., Wu, and Y., Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” UIUC, Tech. Rep. UILU-ENG-09-2215, Urbana, IL, USA, 2009.
[627] J., Cai, E. J., Candès, and Z., Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956-1982, January 2010.
[628] Y., Shen, Z., Wen, and Y., Zhang, “Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization,” Rice CAAM, Tech. Rep. TR11-02, Houston, TX, USA, 2011.
[629] Z., Wen, W., Yin, and Y., Zhang, “Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm,” Rice CAAM, Tech. Rep. TR10-07, Houston, TX, USA, 2010.
[630] R. D., Zimmerman, C. E., Murillo-Sánchez, and R. J., Thomas, “MATPOWER: Steady-state operations, planning and analysis tools for power systems research and education,” IEEE Transactions on Power Systems, vol. 26, no. 1, pp. 12-19, February 2011.
[631] R., Baldick, B. H., Kim, C., Chase, and Y., Luo, “A fast distributed implementation of optimal power flow,” IEEE Transactions on Power Systems, vol. 14, no. 3, pp. 858-864, August 1999.
[632] M., Kraning, E., Chu, J., Lavaei, and S., Boyd, “Dynamic network energy management via proximal message passing,” Foundations and Trends in Optimization, vol. 1, no. 2, pp. 1-54, January 2014.
[633] W., Deng and W., Yin, “On the global and linear convergence of the generalized alternating direction method of multipliers,” Rice CAAM, Tech. Rep. TR12-14, Houston, TX, USA, 2012.
[634] J., Nocedal and S. J., Wright, Numerical Optimization (2nd ed.), New York, USA: Springer, 2006.
[635] M., Schatz, “CloudBurst: Highly sensitive read mapping with MapReduce,” Bioinformatics, vol. 25, no. 11, pp. 1363-1369, 2009.
[636] http://lintool.github.io/Cloud9/.
[637] J., Stamos and H., Young, “A symmetric fragment and replicate algorithm for distributed joins,” IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 12, pp. 1345-1354, December 1993.
[638] W., Yan and P., Larson, “Eager aggregation and lazy aggregation,” in Proceedings of International Conference on Very Large Data Bases, Zurich, Switzerland, September 1995.
[639] S., Ramakrishnan, G., Swart, and A., Urmanov, “Balancing reducer skew in MapReduce workloads using progressive sampling,” in Proceedings of ACM Symposium on Cloud Computing, San Jose, CA, USA, October 2012.
[640] M., Englert, D., Ozmen, and M., Westermann, “The power of reordering for online minimum makespan scheduling,” in Proceedings of IEEE Annual Symposium on Foundations of Computer Science, Philadelphia, PA, USA, 2008.
[641] J., Kleinberg and E., Tardos, Algorithm Design, Pearson Education India, 2006.
[642] J., Devore, Probability & Statistics for Engineering and the Sciences, Cengage Learning, 2012.
[643] “Wikipedia page-to-page link,” 2013, http://haselgrove.id.au/wikipedia.htm.
[644] W., Heinzelman, A., Chandrakasan, and H., Balakrishnan, “Energy-efficient communication protocol for wireless microsensor networks,” in Proceedings of Hawaii International Conference on System Sciences, Wailea Maui, HI, USA, January 2000.
[645] I. F., Akyildiz, W., Su, Y., Sankarasubramaniam, and E., Cayirci, “A survey on sensor networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102-114, August 2002.
[646] D., Estrin, R., Govindan, J., Heidemann, and S., Kumar, “Next century challenges: Scalable coordination in sensor networks,” in Proceedings of ACM Annual Conference on Mobile Computing and Networking, Seattle, WA, USA, August 1999.
[647] S., Madden, R., Szewczyk, M., Franklin, and W., Hong, “Supporting aggregate queries over ad-hoc wireless sensor networks,” in Proceedings of IEEE International Workshop on Mobile Computing Systems and Applications, Callicoon, NY, USA, June 2002.
[648] S., Madden, M., Franklin, J., Hellerstein, and W., Hong, “Tag: A tiny aggregation service for ad hoc sensor networks,” in Proceedings of USENIX Operating System Design and Implementation, Boston, MA, USA, December 2002.
[649] C., Intanagonwiwat, D., Estrin, R., Govindan, and J., Heidemann, “Impact of network density on data aggregation in wireless sensor networks,” in Proceedings of IEEE International Conference on Distributed Computing Systems, Vienna, Austria, July 2002.
[650] B., Krishnamachari, D., Estrin, and S., Wicker, “The impact of data aggregation in wireless sensor networks,” in Proceedings of IEEE International Conference on Distributed Computing Systems Workshop on Distributed Event-based System, Vienna, Austria, July 2002.
[651] J., Al-Karaki and A., Kamal, “Routing techniques in wireless sensor networks: A survey,” IEEE Wireless Communications, vol. 11, no. 6, pp. 6-28, December 2004.Google Scholar
[652] W., Heinzelman, J., Kulik, and H., Balakrishnan, “Adaptive protocols for information dissemination in wireless sensor networks,” in Proceedings of ACM Annual Conference on Mobile Computing and Networking, Seattle, WA, USA, August 1999.
[653] C., Intanagonwiwat, R., Govindan, and D., Estrin, “Directed diffusion: A scalable and robust communication paradigm for sensor networks,” in Proceedings of ACM Annual Conference on Mobile Computing and Networking, Boston, MA, USA, August 2000.
[654] D., Braginsky and D., Estrin, “Rumor routing algorithm for sensor networks,” in Proceedings of ACM Workshop on Wireless Sensor Networks and Applications, Atlanta, GA, USA, September 2002.
[655] M., Chu, H., Haussecker, and F., Zhao, “Scalable information-driven sensor querying and routing for ad hoc heterogeneous sensor networks,” International Journal of High Performance Computing Applications, vol. 16, no. 3, pp. 293-313, August 2002.Google Scholar
[656] N., Sadagopan, B., Krishnamachari, and A., Helmy, “the acquire mechanism mechanism for efficient querying in sensor networks,” in Proceedings of the IEEE International Workshop on Sensor Network Protocol and Applications, Seattle, WA, USA, May 2003.
[657] O., Younis and S., Fahmy, “Distributed clustering in ad-hoc sensor networks: A hybrid, energy-efficient approach,” in Proceedings of IEEE International Conference on Computer Communications, Hong Kong, China, March 2004.
[658] V., Kawadia and P., Kumar, “The power control and clustering in ad-hoc networks,” in Proceedings of IEEE International Conference on Computer Communications, San Francisco, CA, USA, March 2003.
[659] S., Banerjee and S., Khuller, “A clustering scheme for hierarchical control in multi-hop wireless networks,” in Proceedings of IEEE International Conference on Communications, Anchorage, AK, USA, April 2001.
[660] K., Yao, D., Estrin, and Y. H., Hu, eds., Special Issue on Sensor Networks, EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, 2004.
[661] M., Greenwald and S., Khanna, “Power-conserving computation of order-statistics over sensor networks,” in Proceedings of ACM the Symposium on Principles of Database Systems, Paris, France, June 2004.
[662] C., Buragohain, D., Agrawal, and S., Suri, “Power aware routing for sensor databases,” in Proceedings of IEEE International Conference on Communications, Miami, FL, USA, March 2005.
[663] W., Yu, W., Rhee, S., Boyd, and J. M., Cioffi, “Iterative water-filling for Gaussian vector multiple-access channels,” IEEE Transactions on Information Theory, vol. 50, no. 1, pp. 145-152, January 2004.Google Scholar
[664] A., Boukerche, R., Pazzi, and R., Araujo, “A fast and reliable protocol for wireless sensor networks in critical conditions monitoring applications,” in Proceedings of ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Venice, Italy, October 2004.
[665] S., Lindsey and C., Raghavendra, “Pegasis: Power-efficient gathering in sensor networks,” in Proceedings of IEEE Aerospace Conference, vol. 3, 2002.Google Scholar
[666] J., Wieselthier, G., Nguyen, and A., Ephremides, “On the construction of energy-efficient broadcast and multicast trees in wireless networks,” in Proceedings of IEEE International Conference on Computer Communications, Tel-Aviv, Israel, March 2000.
[667] N., Shrivastava, C., Buragohain, D., Agrawal, and S., Suri, “Medians and beyond: New aggregation techniques for sensor networks,” in Proceedings of ACM Conference on Embedded Networked Sensor Systems, Baltimore, MD, USA, November 2004.
[668] W., Hoeffding, “Probability inequalities for sums of bounded random variables,” Journal of the American Statistical Association, vol. 58, no. 301, pp. 13-30, 1963.Google Scholar

Save book to Kindle

To save this book to your Kindle, first ensure coreplatform@cambridge.org is added to your Approved Personal Document E-mail List under your Personal Document Settings on the Manage Your Content and Devices page of your Amazon account. Then enter the ‘name’ part of your Kindle email address below. Find out more about saving to your Kindle.

Note you can select to save to either the @free.kindle.com or @kindle.com variations. ‘@free.kindle.com’ emails are free but can only be saved to your device when it is connected to wi-fi. ‘@kindle.com’ emails can be delivered even when you are not connected to wi-fi, but note that service fees apply.

Find out more about the Kindle Personal Document Service.

  • Bibliography
  • Zhu Han, University of Houston, Mingyi Hong, Iowa State University, Dan Wang, Hong Kong Polytechnic University
  • Book: Signal Processing and Networking for Big Data Applications
  • Online publication: 18 May 2017
  • Chapter DOI: https://doi.org/10.1017/9781316408032.015
Available formats
×

Save book to Dropbox

To save content items to your account, please confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about saving content to Dropbox.

  • Bibliography
  • Zhu Han, University of Houston, Mingyi Hong, Iowa State University, Dan Wang, Hong Kong Polytechnic University
  • Book: Signal Processing and Networking for Big Data Applications
  • Online publication: 18 May 2017
  • Chapter DOI: https://doi.org/10.1017/9781316408032.015
Available formats
×

Save book to Google Drive

To save content items to your account, please confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your account. Find out more about saving content to Google Drive.

  • Bibliography
  • Zhu Han, University of Houston, Mingyi Hong, Iowa State University, Dan Wang, Hong Kong Polytechnic University
  • Book: Signal Processing and Networking for Big Data Applications
  • Online publication: 18 May 2017
  • Chapter DOI: https://doi.org/10.1017/9781316408032.015
Available formats
×