Distributed Optimization Over Weight-Balanced Digraphs with Event-Triggered Communication


Abstract This paper extends event-triggered communication, previously studied for consensus problems in multi-agent systems, to distributed continuous-time convex optimization over weight-balanced digraphs. We address problems whose global objective function is the sum of local functions, each associated with one agent. We utilize the event-triggered communication technique to reduce the communication load while avoiding Zeno behavior. Based on a Lyapunov approach, we prove that the Zero-Gradient-Sum (ZGS) algorithm combined with event-triggered communication makes all agents' states converge to the optimal solution of the global objective function exponentially fast.

Keywords Distributed optimization · Multi-agent system · Event-triggered · Zero-gradient-sum algorithm · Consensus · Weight-balanced digraphs
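For orientation, the problem class and the flavor of the approach can be sketched as follows. This is the standard ZGS form from the continuous-time literature, combined with a generic last-broadcast-state mechanism purely for illustration; the notation x_i, f_i, a_{ij}, \mathcal{N}_i and the trigger shown here are assumptions, and the paper's exact protocol and trigger function appear in the later sections. Each agent i holds a local convex objective f_i, and the network cooperatively solves

\[
  \min_{x \in \mathbb{R}^m} \; f(x) \;=\; \sum_{i=1}^{N} f_i(x).
\]

In the standard ZGS construction, each agent starts at its own local minimizer and is steered by weighted disagreement with its neighbors, scaled by the inverse of its local Hessian; an event-triggered variant replaces continuously transmitted neighbor states with the last broadcast values \hat{x}_j:

\[
  \dot{x}_i(t) \;=\; -\bigl(\nabla^2 f_i(x_i(t))\bigr)^{-1} \sum_{j \in \mathcal{N}_i} a_{ij}\,\bigl(\hat{x}_i(t) - \hat{x}_j(t)\bigr),
  \qquad x_i(0) = \arg\min_{x} f_i(x),
\]

where \hat{x}_i is refreshed only when the local measurement error e_i(t) = \hat{x}_i(t) - x_i(t) grows beyond a designed threshold. The initialization at the local minimizers keeps the sum of local gradients at zero along the trajectories over a weight-balanced digraph, which is what the name "zero-gradient-sum" refers to.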



1 Introduction

Distributed optimization based on multi-agent networks has attracted considerable attention in recent years because of its wide applications in the real world, such as sensor networks [1], large-scale machine learning [2], and distributed parameter estimation [3]. These problems share a common feature: they can all be cast as
an optimization problem with a global objective function described as the sum of local functions, each of which is known only to one agent in the network. Agents need to cooperate to solve the problem.

Many algorithms have been proposed to tackle this kind of problem. In [4], the distributed subgradient algorithm incorporating a consensus-like protocol is first proposed to solve unconstrained problems (see also [5, 6]); a minimal discrete-time sketch of this scheme is given at the end of this section. A projection operator is applied to deal with constrained problems (see [7–9]), building on the work in [4]. These methods are all implemented in discrete time. Recently, a few works on continuous-time dynamics for distributed optimization have been presented in [10–13]. Such continuous-time algorithms can be analyzed via classical stability theory, such as the Lyapunov approach. In [11], the authors solve an equivalent form of the primal problem using saddle-point theory. The authors of [12] use sign functions to design the control law, so that all agents reach consensus in finite time and converge to the optimal solution together. The paper [13] designs an algorithm for a high-order continuous-time system. However, continuous communication between neighbors is needed in the realization of these continuous-time algorithms. This may result in a heavy communication burden, occupy considerable computation capacity, and also lead to
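As referenced above, the discrete-time consensus-plus-subgradient scheme of the kind proposed in [4] can be sketched as follows. The quadratic local objectives, the weight matrix W, the step size alpha, and every identifier in this snippet are illustrative assumptions; this is not the continuous-time, event-triggered algorithm studied in this paper.

    import numpy as np

    # Illustrative sketch (not this paper's algorithm): each agent mixes its
    # neighbors' estimates with doubly stochastic weights W and then takes a
    # subgradient step on its own local objective f_i(x) = (x - c_i)^2.
    def local_subgradient(x, c):
        # Subgradient (here simply the gradient) of f_i(x) = (x - c)^2.
        return 2.0 * (x - c)

    def distributed_subgradient(W, c, alpha=0.05, iters=500):
        n = len(c)
        x = np.zeros(n)                              # each agent's local estimate
        for _ in range(iters):
            x_mix = W @ x                            # consensus step over neighbors
            grads = np.array([local_subgradient(x_mix[i], c[i]) for i in range(n)])
            x = x_mix - alpha * grads                # local subgradient step
        return x

    # Four agents on a ring with a doubly stochastic weight matrix.
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    c = np.array([1.0, 2.0, 3.0, 4.0])
    print(distributed_subgradient(W, c))
    # Entries cluster near the global optimum 2.5 (the mean of the c_i);
    # exact convergence requires a diminishing step size, as analyzed in [4].

Such discrete-time iterations avoid continuous information exchange by construction, whereas the continuous-time algorithms discussed above are where the communication-load issue, and hence the event-triggered mechanism of this paper, arises.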