Accurate and robust odometry by fusing monocular visual, inertial, and wheel encoder
REGULAR PAPER
Yuqian Niu1 · Jia Liu1 · Xia Wang1 · Wei Hao1 · Wenjie Li1 · Lijun Chen1

Received: 15 June 2020 / Accepted: 19 August 2020
© China Computer Federation (CCF) 2020
Abstract
Tracking the pose of a robot has been gaining importance in the field of robotics, e.g., paving the way for robot navigation. In recent years, monocular visual–inertial odometry (VIO) has been widely used for pose estimation due to its good performance and low cost. However, VIO cannot estimate the scale or orientation accurately when robots move along straight lines or circular arcs on the ground. To address this problem, in this paper we take the wheel encoder into account, which provides stable translation information with small accumulated errors and only momentary slippage errors. By jointly considering the kinematic constraints and the planar motion characteristics, we propose an odometry algorithm that tightly couples a monocular camera, an IMU, and wheel encoders to obtain robust and accurate pose sensing for mobile robots. The algorithm mainly consists of three steps. First, we present the wheel encoder preintegration theory and the noise propagation formula based on the kinematic model of the mobile robot, which is the basis of accurate estimation in back-end optimization. Second, we adopt a robust initialization method that obtains good initial values of the gyroscope bias and the visual scale in practice by making full use of the camera, IMU, and wheel encoder measurements. Third, we bound the computational complexity with a marginalization strategy that conditionally eliminates unnecessary measurements in the sliding window. We implement a prototype and conduct extensive experiments showing that our system achieves robust and accurate pose estimation, in terms of scale, orientation, and location, compared with the state of the art.

Keywords Multi-sensor fusion · Visual–inertial–wheel encoder odometry · State estimation · Localization · Robots
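The wheel encoder preintegration in the first step builds on the standard differential-drive kinematic model: per-interval wheel arc lengths yield a body-frame translation and a yaw increment, which are composed into a planar pose. The sketch below illustrates that kinematic model only; the function name and parameters are illustrative and not taken from the paper, and it omits the noise propagation the paper derives.

```python
import math

def integrate_wheel_odometry(ticks_left, ticks_right, ticks_per_rev,
                             wheel_radius, wheel_base,
                             pose=(0.0, 0.0, 0.0)):
    """Integrate per-interval encoder tick counts into a planar pose.

    pose is (x, y, theta); each element of ticks_left/ticks_right is the
    tick count accumulated over one sampling interval.
    """
    x, y, theta = pose
    circumference = 2.0 * math.pi * wheel_radius
    for tl, tr in zip(ticks_left, ticks_right):
        dl = circumference * tl / ticks_per_rev   # left wheel arc length
        dr = circumference * tr / ticks_per_rev   # right wheel arc length
        ds = 0.5 * (dl + dr)                      # body-frame translation
        dtheta = (dr - dl) / wheel_base           # yaw increment
        # midpoint integration of the unicycle model
        x += ds * math.cos(theta + 0.5 * dtheta)
        y += ds * math.sin(theta + 0.5 * dtheta)
        theta += dtheta
    return (x, y, theta)
```

Because this pose increment depends only on relative encoder readings, it can be preintegrated between camera keyframes, analogous to IMU preintegration, which is what makes a tightly coupled back-end optimization tractable.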
1 Introduction

Robot localization has been gaining importance in the field of robotics, spanning from robot navigation and three-dimensional reconstruction to simultaneous localization and mapping (SLAM). Visual–inertial odometry (VIO) is a common approach to robot localization. By fusing the measurements captured by a camera and an IMU, VIO makes the metric scale, together with the pitch and roll angles, observable, which (especially the scale) underlies tasks like SLAM and navigation. In addition, the small VIO sensor suite is easy to deploy on mobile robots, unmanned aerial vehicles, and handheld devices. In spite of these advantages, VIO requires generic three-dimensional motion along different directions, which is hard to satisfy in pr

* Jia Liu
  [email protected]
* Lijun Chen
  [email protected]

Yuqian Niu [email protected] · Xia Wang [email protected] · Wei Hao [email protected] · Wenjie Li [email protected]

1 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China