
DISTRIBUTED PROXIMAL-GRADIENT METHOD FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS

Published online by Cambridge University Press:  18 November 2014

JUEYOU LI
Affiliation:
School of Mathematics, Chongqing Normal University, Chongqing 400047, PR China
School of SITE, Federation University Australia, VIC 3353, Australia email lijueyou@163.com
CHANGZHI WU
Affiliation:
Australasian Joint Research Centre for Building Information Modelling, School of Built Environment, Curtin University, WA 6102, Australia email c.wu@curtin.edu.au
ZHIYOU WU*
Affiliation:
School of Mathematics, Chongqing Normal University, Chongqing 400047, PR China email zywu@cqnu.edu.cn
QIANG LONG
Affiliation:
School of Science, Southwest University of Science and Technology, Sichuan 621010, PR China email longqiang@swust.edu.cn
XIANGYU WANG
Affiliation:
Australasian Joint Research Centre for Building Information Modelling, School of Built Environment, Curtin University, WA 6102, Australia
Department of Housing and Interior Design, Kyung Hee University, Seoul, Korea email x.wang@curtin.edu.au

Abstract


We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained problem into an unconstrained one via the exact penalty function method. The transformed problem has fewer variables and a simpler structure than those arising in existing distributed primal–dual subgradient methods for constrained distributed optimization. Exploiting this special structure, we then propose a distributed proximal-gradient algorithm over a time-varying network, and establish a convergence rate that depends on the number of iterations, the network topology and the number of agents. Although the transformed problem is inherently nonsmooth, our method still achieves a convergence rate of ${\mathcal{O}}(1/k)$ after $k$ iterations, faster than the ${\mathcal{O}}(1/\sqrt{k})$ rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of the proposed method.
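The iteration described in the abstract — each agent mixes its estimate with its neighbours', takes a gradient step on its local smooth objective, then applies the proximal operator of the nonsmooth exact-penalty term — can be sketched as follows. The quadratic local objectives, the constraint $x \geq 0$, the ring topology and the parameters (penalty weight `rho`, step size `alpha`) are all illustrative assumptions for this sketch, not the paper's actual algorithm parameters or experimental setup.

```python
import numpy as np

m, n = 5, 2   # number of agents, dimension of the decision variable

# Local objectives f_i(x) = 0.5 * ||x - c_i||^2 (illustrative choice).
# The average minimizer c_bar violates the constraint x >= 0, so the
# exact-penalty term is active at the optimum (which is x = 0 here).
C = np.array([[-1.0, -2.0],
              [-2.0,  0.5],
              [-3.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  0.0]])

rho, alpha = 2.0, 0.005   # penalty weight and constant step size (assumed)

def prox_h(v, t):
    """Proximal operator of h(x) = rho * sum_j max(0, -x_j), elementwise:
    leave nonnegative entries alone; shift negative entries up by t*rho,
    clipping at zero."""
    return np.where(v < 0.0, np.minimum(v + t * rho, 0.0), v)

# Doubly stochastic mixing matrix for a 5-agent ring (each agent averages
# itself and its two neighbours with equal weight 1/3).
W = np.zeros((m, m))
for i in range(m):
    for j in (i - 1, i, i + 1):
        W[i, j % m] = 1.0 / 3.0

X = np.zeros((m, n))   # row i holds agent i's current estimate
for _ in range(40000):
    V = W @ X - alpha * (X - C)   # consensus mixing + local gradient step
    X = prox_h(V, alpha)          # proximal step on the exact penalty

x_bar = X.mean(axis=0)   # agents should roughly agree, near the optimum 0
```

With a constant step size the agents settle within an $O(\alpha)$ neighbourhood of consensus at the constrained optimum; the paper's $\mathcal{O}(1/k)$ rate relies on the analysis of the averaged iterates, which this toy sketch does not reproduce.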

Information

Type
Research Article
Copyright
Copyright © 2014 Australian Mathematical Society