Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement

Mikko Lauri1 · Joni Pajarinen2,3 · Jan Peters3,4
© The Author(s) 2020
Abstract

Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable Markov decision process (Dec-POMDP). The Dec-POMDP is a principled model for co-operative decentralized multi-agent decision-making. An optimal solution of a Dec-POMDP is a set of local policies, one for each agent, which maximizes the expected sum of rewards over time. In contrast to most prior work on Dec-POMDPs, we set the reward as a non-linear function of the agents' state information, for example the negative Shannon entropy. We argue that such reward functions are well-suited for decentralized information gathering problems. We prove that if the reward function is convex, then the finite-horizon value function of the Dec-POMDP is also convex. We propose the first heuristic anytime algorithm for information gathering Dec-POMDPs, and empirically prove its effectiveness by solving discrete problems an order of magnitude larger than previous state-of-the-art. We also propose an extension to continuous-state problems with finite action and observation spaces by employing particle filtering. The effectiveness of the proposed algorithms is verified in domains such as decentralized target tracking, scientific survey planning, and signal source localization.

Keywords Planning under uncertainty · Decentralized POMDP · Information gathering · Active perception
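To make the abstract's notion of an information-based reward concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the negative Shannon entropy of a discrete belief, the example reward mentioned above: a sharply peaked belief, reflecting successful information gathering, scores higher than a diffuse one. The function name and the example beliefs are assumptions introduced here for illustration only.

```python
import numpy as np

def negative_entropy_reward(belief, eps=1e-12):
    """Negative Shannon entropy -H(b) of a discrete belief b over the hidden state.

    Illustrative sketch: `belief` stands for the agents' joint filtering
    distribution; higher values mean lower uncertainty about the state.
    """
    p = np.asarray(belief, dtype=float)
    p = p / p.sum()                              # normalize defensively
    return float(np.sum(p * np.log(p + eps)))    # -H(b) = sum_s b(s) log b(s)

# A peaked belief earns a higher (less negative) reward than a uniform one.
uniform = np.full(4, 0.25)
peaked = np.array([0.94, 0.02, 0.02, 0.02])
print(negative_entropy_reward(uniform))  # ≈ -1.39  (= -log 4)
print(negative_entropy_reward(peaked))   # ≈ -0.29
```

Because -H(b) is convex in the belief, a reward of this form falls within the class of convex reward functions for which the paper proves convexity of the finite-horizon value function.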
* Mikko Lauri [email protected]

1 Department of Informatics, University of Hamburg, Hamburg, Germany
2 Tampere University, Tampere, Finland
3 Intelligent Autonomous Systems, Technische Universität Darmstadt, Darmstadt, Germany
4 Max Planck Institute, Tübingen, Germany
1 Introduction

Autonomous agents and robots can be deployed in information gathering tasks in environments where human presence is either undesirable or infeasible. Examples include monitoring of deep-ocean conditions or space exploration. It may be desirable to deploy a team of agents due to the large scope of the task at hand, resulting in a multi-agent active information gathering task. Such a task typically has a definite duration, after which the agents, the data collected by the agents, or both, are recovered. For example, underwater survey vehicles may be recovered by a