Visual localization under appearance change: filtering approaches
S.I.: DICTA 2019
Anh-Dzung Doan1 · Ian Reid1 · Yasir Latif1 · Tat-Jun Chin1 · Yu Liu1 · Shin-Fang Ch'ng1 · Thanh-Toan Do2

Received: 17 February 2020 / Accepted: 2 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
A major focus of current research on place recognition is visual localization for autonomous driving. In this scenario, because cameras operate continuously, it is realistic to expect videos, rather than the single-image queries used in other visual localization work, as input to visual localization algorithms. In this paper, we show that exploiting temporal continuity in the test sequence significantly improves visual localization, both qualitatively and quantitatively. Although intuitive, this idea has not been fully explored in recent work. To this end, we propose two filtering approaches that exploit the temporal smoothness of image sequences: (i) filtering in the discrete domain with a hidden Markov model, and (ii) filtering in the continuous domain with Monte Carlo-based visual localization. Our approaches rely on local features and an encoding technique to represent each image as a single vector. Experimental results on synthetic and real datasets show that the proposed methods outperform the state of the art (i.e., deep learning-based pose regression approaches) on the task of visual localization under significant appearance change. Our synthetic dataset and source code are publicly available (https://sites.google.com/view/g2d-software/home; https://github.com/dadung/Visual-Localization-Filtering).

Keywords: Visual localization · Place recognition · Autonomous driving · Robotics
1 Introduction
Correspondence: Anh-Dzung Doan, [email protected]

Yasir Latif [email protected]
Tat-Jun Chin [email protected]
Yu Liu [email protected]
Shin-Fang Ch'ng [email protected]
Thanh-Toan Do [email protected]
Ian Reid [email protected]

1 School of Computer Science, The University of Adelaide, Adelaide, Australia
2 Department of Computer Science, University of Liverpool, Liverpool, UK
To carry out higher-level tasks such as planning and navigation, a robot needs to maintain, at all times, an accurate estimate of its position and orientation with respect to the environment. When the robot uses an existing map to infer its 6 degree-of-freedom (DoF) pose, the problem is termed localization. When the map information consists of appearance (images) associated with different parts of the map, the problem is that of visual localization (VL). Image-based localization methods normally assume that the appearance of the environment remains unchanged from the time the map is generated to the time the robot needs to localize itself. However, as the operational time span of the robot increases, the appearance of the environment inevitably changes. This poses a great challenge for visual localization methods.
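To make the discrete-domain filtering idea concrete, the sketch below shows a generic HMM forward filter over a set of map places, in the spirit of approach (i) from the abstract. This is an illustrative toy, not the paper's implementation: the transition matrix, the retrieval-score likelihoods, and all numerical values here are hypothetical placeholders. The point it demonstrates is the key claim of the paper: when a single frame's observation is ambiguous, temporal smoothness carried through the belief resolves the ambiguity.

```python
import numpy as np

def forward_filter(transitions, likelihoods, prior):
    """HMM forward filtering over a discrete set of map places.

    transitions: (P, P) row-stochastic matrix, transitions[i, j] = P(next=j | cur=i),
                 encoding temporal smoothness (the camera tends to stay at or
                 move to an adjacent place between frames).
    likelihoods: (T, P) per-frame observation likelihoods, e.g. normalized
                 image-retrieval scores against the map images.
    prior:       (P,) initial belief over places.
    Returns a (T, P) array of filtered beliefs, one row per frame.
    """
    belief = prior.copy()
    out = []
    for obs in likelihoods:
        belief = obs * (transitions.T @ belief)  # predict step, then measurement update
        belief /= belief.sum()                   # renormalize to a distribution
        out.append(belief.copy())
    return np.array(out)

# Toy example: 3 places on a loop; the camera mostly advances one place per frame.
T = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.2, 0.8],
              [0.8, 0.0, 0.2]])
# Hypothetical retrieval scores for 3 frames; frame 2 is deliberately ambiguous.
L = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])
beliefs = forward_filter(T, L, np.ones(3) / 3)
print(beliefs[-1].argmax())  # filtering through time resolves the ambiguous frame
```

The continuous-domain counterpart (approach (ii)) follows the same predict/update recursion, but replaces the discrete belief vector with a set of weighted 6-DoF pose particles.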