Distributed Optimization over General Directed Networks with Random Sleep Scheme
ISSN:1598-6446 eISSN:2005-4092 http://www.springer.com/12555
Zheng Wang, Lifeng Zheng*, and Huaqing Li*

Abstract: Distributed optimization aims at optimizing a global objective function, described as a sum of local objective functions, through local information processing and sharing. This paper studies the problem of distributed optimization over a network whose underlying graph is directed and strongly connected. Most existing distributed algorithms require each agent to observe the gradient of its local objective function at every iteration, which leads to heavy computational cost. A computation-efficient distributed optimization algorithm with a random sleep scheme is proposed, which employs a rescaling gradient technique to address the unbalancedness of the directed graph. The proposed algorithm allows each agent not only to locally allocate the weights on received information, but also to independently decide whether to perform a gradient observation at each iteration. Theoretical analysis verifies that the proposed algorithm seeks the optimal solution with probability one. Simulations demonstrate the effectiveness of the proposed algorithm, confirm the correctness of the theoretical analysis, and investigate the tradeoff between convergence performance and computation cost.

Keywords: Distributed convex optimization, multi-agent systems, random sleep scheme, row-stochastic matrix.
1. INTRODUCTION
Optimal control methodologies have witnessed growing research interest in dynamic system control, robotic control, and consensus control of networked systems [1–10]. In recent years, distributed consensus optimization for multi-agent systems has received much attention due to its wide applications in resource allocation [11, 12], sensor networks [13], and formation control [14]. One of the key problems is to minimize the sum of local objective functions while simultaneously reaching consensus among the agents. Early work on distributed optimization includes distributed (sub)gradient descent (DGD) [15] and the distributed dual-averaging method [16]. The DGD method simply sums a consensus part and a gradient part: each agent executes a consensus step followed by a gradient descent step along its local gradient direction to approach the global optimal solution. Extensions of this research have been developed for various scenarios, such as dynamic networks [17, 18] and heterogeneous local constraints [19]. Subsequently, many researchers have focused on improving the convergence performance of distributed algorithms. EXTRA [20] achieves geometric convergence by adopting symmetric weights. Recently, gradient tracking methods [21–23] have been intensively studied; by combining an inexact gradient method with a dynamic average consensus technique [24], they are proven to converge linearly. Related work includes stochastic networks [25] and uncoordinated stepsizes [26], etc. Aligning with the research
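To make the DGD update concrete, the following is a minimal sketch (not the paper's algorithm): each agent mixes neighbors' estimates with a doubly stochastic weight matrix `W` and then takes a local gradient step. The function name `dgd`, the stepsize `alpha`, and the quadratic test objectives are illustrative assumptions.

```python
import numpy as np

def dgd(W, grads, x0, alpha=0.05, iters=200):
    """One-dimensional DGD sketch (hypothetical helper, not from the paper).

    W     : (n, n) doubly stochastic mixing matrix
    grads : list of n callables, grads[i](x) = gradient of agent i's objective
    x0    : (n,) initial local estimates, one entry per agent
    """
    x = x0.copy()
    for _ in range(iters):
        mixed = W @ x                                    # consensus step
        g = np.array([grads[i](x[i]) for i in range(len(grads))])
        x = mixed - alpha * g                            # local gradient descent step
    return x
```

With a constant stepsize, DGD of this form converges only to a neighborhood of the optimum whose size scales with `alpha`, which is one motivation for the exact methods (EXTRA, gradient tracking) discussed above.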
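For contrast, a minimal sketch of the gradient tracking idea referenced in [21–23]: an auxiliary variable `y` runs dynamic average consensus on the local gradients, and each agent descends along this tracked direction, which removes the constant-stepsize bias of DGD. The function name and parameters are again illustrative, assuming a doubly stochastic `W` rather than the row-stochastic setting studied in this paper.

```python
import numpy as np

def gradient_tracking(W, grads, x0, alpha=0.1, iters=300):
    """One-dimensional gradient tracking sketch (illustrative, not the paper's algorithm)."""
    n = len(grads)
    x = x0.copy()
    g_old = np.array([grads[i](x[i]) for i in range(n)])
    y = g_old.copy()                       # tracker initialized at local gradients
    for _ in range(iters):
        x = W @ x - alpha * y              # descend along the tracked direction
        g_new = np.array([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g_old          # dynamic average consensus on gradients
        g_old = g_new
    return x
```

Because `y` tracks the average gradient, the iterates converge to the exact optimizer (linearly, for smooth strongly convex objectives), rather than to a stepsize-dependent neighborhood.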