A Bayesian Trust Inference Model for Human-Multi-Robot Teams

Maziar Fooladi Mahani · Longsheng Jiang · Yue Wang

Department of Mechanical Engineering, Clemson University, Clemson, SC 29634, USA

Corresponding author: Yue Wang, [email protected]
Maziar Fooladi Mahani: [email protected]
Longsheng Jiang: [email protected]

Accepted: 23 September 2020
© Springer Nature B.V. 2020

Abstract

In this paper, we develop a Bayesian inference model for the degree of human trust in multiple mobile robots. A linear model for robot performance in navigation and perception is first devised. We then propose a computational trust model for the human-multi-robot team based on a dynamic Bayesian network (DBN). In the trust DBN, the robot performance is the network input, while the human trust feedback to each individual robot and the human interventions are the outputs (observations). A categorical Boltzmann machine is used to capture the multinomial distributions that model the conditional dependencies of the DBN. We introduce the expectation-maximization (EM) algorithm for model learning and personalization. A factorial form of the EM algorithm is adopted for the multi-robot system, where each robot has its own latent trust state in the human's mind. Bayesian inference is conducted to find the trust states, i.e., the trust belief. Based on the inferred trust states, we further derive a formulation to predict human interventions for model validation. A simulated human-UAV collaborative search mission is conducted with humans in the loop. The experimental results show that the Bayesian trust inference model can infer the degrees of human trust in multiple mobile robots and predict human interventions with relatively high accuracy (72.2%). These findings confirm the effectiveness of DBNs in modeling human trust in multi-robot systems.

Keywords Human-multi-robot teams · Trust · Dynamic Bayesian network
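The abstract describes a Bayesian filtering loop over a discrete latent trust state, driven by robot performance and corrected by observed human behavior. The following minimal Python sketch illustrates one such filtering step for a single robot, which is the per-robot computation that a factorial formulation repeats across the team. The three trust levels, the hand-picked transition and intervention tables, and the function name filter_trust are illustrative assumptions only; in the paper these conditional distributions are modeled with a categorical Boltzmann machine and learned via EM, and the observations also include explicit trust feedback, which is omitted here for brevity.

import numpy as np

# Minimal sketch of per-robot trust-belief filtering in a discrete DBN.
# Assumptions (not from the paper): 3 trust levels and hand-picked
# conditional probability tables; the paper instead learns these
# conditionals with a categorical Boltzmann machine and EM.

N_TRUST = 3  # illustrative trust levels: low / medium / high

# P(trust_t | trust_{t-1}, performance_t): one row-stochastic transition
# matrix per binary performance outcome (0 = poor, 1 = good).
TRANS = np.array([
    [[0.70, 0.25, 0.05],   # poor performance pushes trust down
     [0.40, 0.50, 0.10],
     [0.10, 0.40, 0.50]],
    [[0.50, 0.40, 0.10],   # good performance pushes trust up
     [0.10, 0.60, 0.30],
     [0.05, 0.25, 0.70]],
])

# P(intervene | trust): lower trust makes human intervention more likely.
P_INTERVENE = np.array([0.8, 0.4, 0.1])

def filter_trust(belief, performance, intervened):
    """One Bayes-filter step: predict with the transition model for the
    observed performance, then correct with the intervention observation."""
    predicted = belief @ TRANS[performance]
    likelihood = P_INTERVENE if intervened else 1.0 - P_INTERVENE
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Example: start uncertain, observe good performance and no intervention.
belief = np.full(N_TRUST, 1.0 / N_TRUST)
belief = filter_trust(belief, performance=1, intervened=False)
print(belief)  # probability mass shifts toward higher trust

Running the step repeatedly over a mission trace yields the trust belief at each time; the intervention-prediction step in the paper then follows by pushing this belief through the observation model.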

1 Introduction

With recent advances in unmanned systems and artificial intelligence (AI) [8], the realization of advanced human-multi-robot teams (HMRTs), i.e., teams consisting of one human and at least two robots, intelligent agents, or (semi-)autonomous systems, seems within reach. However, achieving efficient, high-performance teamwork in HMRTs poses many challenges [30], mainly because the workload on a single human operator can easily become overwhelming when supervising multiple robots simultaneously [23]. Trust plays an essential role in human-robot interaction (HRI) and in user adoption of autonomy in an HMRT. By trusting some robots in the team more, and hence granting these more trusted robots greater autonomy, the human operator can focus on managing and assisting the less trusted robots. As a result, there has been considerable research on trust in human-robot teaming. For example, Mercado et al. investigated trust and operator performance in the context of human-agent teaming for multi-robot management [17]. The results indicated that operator performance and trust increased as a function of transparency.