
3 - Optimization algorithms for big data with application in wireless networks

from Part I - Mathematical foundations

Published online by Cambridge University Press:  18 December 2015

Mingyi Hong (Iowa State University, USA)
Wei-Cheng Liao (University of Minnesota, USA)
Ruoyu Sun (University of Minnesota, USA)
Zhi-Quan Luo (University of Minnesota, USA)

Edited by Shuguang Cui (Texas A&M University), Alfred O. Hero, III (University of Michigan, Ann Arbor), Zhi-Quan Luo (University of Minnesota), and José M. F. Moura (Carnegie Mellon University, Pennsylvania)

Summary

This chapter proposes the use of modern first-order large-scale optimization techniques to manage a cloud-based, densely deployed next-generation wireless network. In the first part of the chapter we survey a few popular first-order methods for large-scale optimization, including the block coordinate descent (BCD) method, the block successive upper-bound minimization (BSUM) method, and the alternating direction method of multipliers (ADMM). In the second part of the chapter, we show that many difficult problems in managing large wireless networks can be solved efficiently and in a parallel manner by modern first-order optimization methods. Extensive numerical results are provided to demonstrate the benefits of the proposed approach.

Introduction

Motivation

The ever-increasing demand for rapid access to large amounts of data anytime, anywhere has been the driving force in the current development of next-generation wireless network infrastructure. It is projected that within 10 years, the wireless cellular network will offer up to 1000× the throughput of current 4G technology [1]. By that time the network should also be able to deliver a fiber-like user experience, with individual transmission rates of 10 Gb/s for data-intensive cloud-based applications.

Achieving this lofty goal requires revolutionary infrastructure and highly sophisticated resource management solutions. A promising network architecture to meet this requirement is the so-called cloud-based radio access network (RAN), where a large number of networked base stations (BSs) are deployed for wireless access, while powerful cloud centers are used at the back end to perform centralized network management [1–4]. Intuitively, a large number of networked access nodes, when intelligently provisioned, will offer significantly improved spectrum efficiency, real-time load balancing, and hotspot coverage. In practice, optimal network provisioning is extremely challenging, and its success depends on the smart joint design of backhaul provisioning, physical-layer transmit/receive schemes, BS/user cooperation, and so on.

This chapter proposes the use of modern first-order large-scale optimization techniques to manage a cloud-based, densely deployed next-generation wireless network. We show that many difficult problems in this domain can be solved efficiently and in a parallel manner by advanced optimization algorithms such as the block successive upper-bound minimization (BSUM) method and the alternating direction method of multipliers (ADMM).
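To give a concrete feel for ADMM before the survey, the following sketch applies it to a standard textbook problem, the lasso: minimize (1/2)||Ax − b||² + λ||z||₁ subject to x = z. This is a generic illustration, not the chapter's network-management formulation; the problem sizes, penalty parameter rho, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iters=300):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM on the
    split x = z, with scaled dual variable u."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # the x-update solves the same linear system every iteration,
    # so its matrix can be formed (or factored) once
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # prox of the l1 term
        u = u + x - z                                # scaled dual update
    return z

# Tiny demo on random data (illustrative sizes).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
z = admm_lasso(A, b, lam=0.5)
```

A useful sanity check: when λ ≥ ||Aᵀb||∞ the lasso solution is exactly zero, and the soft-threshold step produces exact zeros, which is why ADMM is popular for sparsity-inducing problems.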

The organization of the chapter

To begin with, we introduce a few well-known first-order optimization algorithms. Our focus is on algorithms suitable for solving problems with a certain block structure, where the optimization variables can be divided into (possibly overlapping) blocks.
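To make the block structure concrete, here is a minimal sketch of cyclic block coordinate descent on an unconstrained least-squares problem, where each block subproblem is minimized exactly while the other blocks are held fixed. The problem sizes, block partition, and iteration count are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def block_coordinate_descent(A, b, blocks, n_iters=100):
    """Cyclic BCD for min_x ||Ax - b||^2: each inner step exactly
    minimizes over one block of coordinates, holding the rest fixed."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for idx in blocks:
            A_i = A[:, idx]
            # residual with block i's current contribution removed
            r = b - A @ x + A_i @ x[idx]
            # exact block minimization: a small least-squares subproblem
            x[idx] = np.linalg.lstsq(A_i, r, rcond=None)[0]
    return x

# Illustrative problem: 6 variables split into two blocks of 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)
blocks = [np.arange(0, 3), np.arange(3, 6)]
x_bcd = block_coordinate_descent(A, b, blocks)
```

For this smooth convex problem with full-column-rank A, the cyclic sweeps converge to the ordinary least-squares solution; the appeal in large-scale settings is that each subproblem is far smaller than the original.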

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2016


References

[1] Huawei, “5G: A technology vision,” Huawei Technologies Inc., White paper, 2013.
[2] W.-C. Liao, M. Hong, H. Farmanbar, et al., “Min flow rate maximization for software defined radio access networks,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1282–1294, 2014.
[3] J. Andrews, “Seven ways that HetNets are a cellular paradigm shift,” IEEE Communications Magazine, vol. 51, no. 3, pp. 136–144, March 2013.
[4] S.-H. Park, O. Simeone, O. Sahin, and S. Shamai, “Joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks,” IEEE Transactions on Signal Processing, vol. 61, no. 22, pp. 5646–5658, November 2013.
[5] P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” Journal of Optimization Theory and Applications, vol. 103, no. 9, pp. 475–494, 2001.
[6] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Belmont, MA: Athena Scientific, 1996.
[7] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, 2nd edn, Belmont, MA: Athena Scientific, 1997.
[8] Z.-Q. Luo and P. Tseng, “Error bounds and convergence analysis of feasible descent methods: a general approach,” Annals of Operations Research, vol. 46–47, pp. 157–178, 1993.
[9] Z.-Q. Luo and P. Tseng, “On the convergence of the coordinate descent method for convex differentiable minimization,” Journal of Optimization Theory and Applications, vol. 72, no. 1, pp. 7–35, 1992.
[10] Z.-Q. Luo and P. Tseng, “On the linear convergence of descent methods for convex essentially smooth minimization,” SIAM Journal on Control and Optimization, vol. 30, no. 2, pp. 408–425, 1992.
[11] Y. Nesterov, “Efficiency of coordinate descent methods on huge-scale optimization problems,” SIAM Journal on Optimization, vol. 22, no. 2, pp. 341–362, 2012.
[12] P. Richtárik and M. Takáč, “Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function,” Mathematical Programming, pp. 1–38, 2012.
[13] S. Shalev-Shwartz and A. Tewari, “Stochastic methods for l1 regularized loss minimization,” Journal of Machine Learning Research, vol. 12, pp. 1865–1892, 2011.
[14] Z. Lu and X. Lin, “On the complexity analysis of randomized block-coordinate descent methods,” Mathematical Programming, 2013, accepted.
[15] A. Saha and A. Tewari, “On the nonasymptotic convergence of cyclic coordinate descent method,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 576–601, 2013.
[16] A. Beck and L. Tetruashvili, “On the convergence of block coordinate descent type methods,” SIAM Journal on Optimization, vol. 23, no. 4, pp. 2037–2060, 2013.
[17] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo, “Iteration complexity analysis of block coordinate descent methods,” preprint, 2013, available online arXiv:1310.6957.
[18] F. Facchinei, S. Sagratella, and G. Scutari, “Flexible parallel algorithms for big data optimization,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
[19] G. Scutari, F. Facchinei, P. Song, D. P. Palomar, and J.-S. Pang, “Decomposition by partial linearization: parallel optimization of multi-agent systems,” IEEE Transactions on Signal Processing, vol. 63, no. 3, pp. 641–656, 2014.
[20] M. J. D. Powell, “On search directions for minimization algorithms,” Mathematical Programming, vol. 4, pp. 193–201, 1973.
[21] M. V. Solodov, “On the convergence of constrained parallel variable distribution algorithms,” SIAM Journal on Optimization, vol. 8, no. 1, pp. 187–196, 1998.
[22] R. Glowinski and A. Marroco, “Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires,” Revue Française d'Automatique, Informatique et Recherche Opérationnelle, vol. 9, pp. 41–76, 1975.
[23] D. Gabay and B. Mercier, “A dual algorithm for the solution of nonlinear variational problems via finite element approximation,” Computers & Mathematics with Applications, vol. 2, pp. 17–40, 1976.
[24] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, 2011.
[25] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, “Bregman iterative algorithms for l1-minimization with applications to compressed sensing,” SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, March 2008.
[26] J. Yang, Y. Zhang, and W. Yin, “An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise,” SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.
[27] X. Zhang, M. Burger, and S. Osher, “A unified primal-dual algorithm framework based on Bregman iteration,” Journal of Scientific Computing, vol. 46, no. 1, pp. 20–46, 2011.
[28] K. Scheinberg, S. Ma, and D. Goldfarb, “Sparse inverse covariance selection via alternating linearization methods,” in Twenty-Fourth Annual Conference on Neural Information Processing Systems (NIPS), 2010.
[29] D. P. Bertsekas, Nonlinear Programming, 2nd edn, Belmont, MA: Athena Scientific, 1999.
[30] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[31] A. Nedić and A. Ozdaglar, “Cooperative distributed multi-agent optimization,” in Convex Optimization in Signal Processing and Communications, Cambridge University Press, 2009.
[32] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Belmont, MA: Academic Press, 1982.
[33] B. He and X. Yuan, “On the O(1/n) convergence rate of the Douglas–Rachford alternating direction method,” SIAM Journal on Numerical Analysis, vol. 50, no. 2, pp. 700–709, 2012.
[34] R. Monteiro and B. Svaiter, “Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers,” SIAM Journal on Optimization, vol. 23, no. 1, pp. 475–507, 2013.
[35] T. Goldstein, B. O'Donoghue, and S. Setzer, “Fast alternating direction optimization methods,” UCLA CAM technical report, 2012.
[36] D. Boley, “Linear convergence of ADMM on a model problem,” SIAM Journal on Optimization, vol. 23, pp. 2183–2207, 2013.
[37] W. Deng and W. Yin, “On the global linear convergence of alternating direction methods,” preprint, 2012.
[38] Z. Zhou, X. Li, J. Wright, E. Candès, and Y. Ma, “Stable principal component pursuit,” in Proceedings of the 2010 IEEE International Symposium on Information Theory, 2010.
[39] M. Hong and Z.-Q. Luo, “On the linear convergence of the alternating direction method of multipliers,” arXiv preprint arXiv:1208.3922, 2012.
[40] X. Wang, M. Hong, S. Ma, and Z.-Q. Luo, “Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers,” preprint, 2013.
[41] C. Chen, B. He, X. Yuan, and Y. Ye, “The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent,” preprint, 2013.
[42] B. He, M. Tao, and X. Yuan, “Alternating direction method with Gaussian back substitution for separable convex programming,” SIAM Journal on Optimization, vol. 22, pp. 313–340, 2012.
[43] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “A unified convergence analysis of block successive minimization methods for nonsmooth optimization,” SIAM Journal on Optimization, vol. 23, no. 2, pp. 1126–1153, 2013.
[44] P. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, ser. Springer Optimization and Its Applications, New York: Springer, 2011, pp. 185–212.
[45] C. Navasca, L. De Lathauwer, and S. Kindermann, “Swamp reducing technique for tensor decomposition,” in Proceedings of the 16th European Signal Processing Conference (EUSIPCO), August 2008.
[46] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4331–4340, 2011.
[47] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society, Series B, vol. 39, pp. 1–38, 1977.
[48] A. L. Yuille and A. Rangarajan, “The concave-convex procedure,” Neural Computation, vol. 15, no. 4, pp. 915–936, April 2003.
[49] D. Hunter and K. Lange, “Quantile regression via an MM algorithm,” Journal of Computational and Graphical Statistics, vol. 9, pp. 60–77, 2000.
[50] D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Neural Information Processing Systems (NIPS), 2000, pp. 556–562.
[51] B. R. Marks and G. P. Wright, “A general inner approximation algorithm for nonconvex mathematical programs,” Operations Research, vol. 26, pp. 681–683, July–August 1978.
[52] B. Chen, S. He, Z. Li, and S. Zhang, “Maximum block improvement and polynomial optimization,” SIAM Journal on Optimization, vol. 22, no. 1, pp. 87–107, 2012.
[53] M. Hong, Q. Li, and Y.-F. Liu, “Decomposition by successive convex approximation: a unifying approach for linear transceiver design in interfering heterogeneous networks,” manuscript, 2013, available online arXiv:1210.1507.
[54] S. S. Christensen, R. Agarwal, E. D. Carvalho, and J. M. Cioffi, “Weighted sum-rate maximization using weighted MMSE for MIMO-BC beamforming design,” IEEE Transactions on Wireless Communications, vol. 7, no. 12, pp. 4792–4799, 2008.
[55] M. Hong, R. Sun, H. Baligh, and Z.-Q. Luo, “Joint base station clustering and beamformer design for partial coordinated transmission in heterogeneous networks,” IEEE Journal on Selected Areas in Communications, vol. 31, no. 2, pp. 226–240, 2013.
[56] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “Linear transceiver design for a MIMO interfering broadcast channel achieving max-min fairness,” Signal Processing, vol. 93, no. 12, pp. 3327–3340, 2013.
[57] D. P. Bertsekas, P. Hosein, and P. Tseng, “Relaxation methods for network flow problems with convex arc costs,” SIAM Journal on Control and Optimization, vol. 25, no. 5, pp. 1219–1243, September 1987.
[58] Gurobi, “Gurobi optimizer reference manual,” 2013.
