Eliminating cross-camera bias for vehicle re-identification
Jinjia Peng1 · Guangqi Jiang1 · Dongyan Chen1 · Tongtong Zhao1 · Huibing Wang1 · Xianping Fu1,2

Received: 20 December 2019 / Revised: 14 July 2020 / Accepted: 24 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract Vehicle re-identification (reID) aims to recognize a target vehicle in large datasets captured from multiple cameras. It plays an important role in the automatic analysis of the growing volume of urban surveillance video and has become a hot topic in recent years. However, the appearance of vehicle images is easily affected by environmental factors such as varying illumination, backgrounds and viewpoints, which leads to a large bias between different cameras. To address this problem, this paper proposes a cross-camera adaptation framework (CCA), which smooths the bias by exploiting a common space between cameras for all samples. CCA first transfers images from multiple cameras into one camera style to reduce the impact of illumination and resolution, generating samples with similar distributions. Then, to eliminate the influence of background and focus on the valuable parts, we propose an attention alignment network (AANet) to learn powerful features for vehicle reID. Specifically, in AANet, a spatial transformer network with an attention module is introduced to locate a series of the most discriminative regions with high attention weights and suppress the background. Comprehensive experimental results demonstrate that the proposed CCA achieves excellent performance on the benchmark datasets VehicleID and VeRi-776.

Keywords Cross-camera · Attention alignment · Vehicle re-identification
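To make the attention-alignment idea concrete, the following is a minimal sketch, not the authors' implementation: a spatial-transformer branch predicts an affine warp of a backbone feature map, and a learned single-channel attention mask then re-weights spatial locations before pooling, so that high-attention regions dominate the embedding while background responses are suppressed. The module name, channel sizes, and localization-network layout are illustrative assumptions.

```python
# Illustrative sketch of a spatial transformer + spatial attention block,
# assuming backbone feature maps of shape (N, C, H, W) as input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAlignBlock(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        # Localization net: predicts a 2x3 affine matrix from pooled features.
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(in_channels * 16, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 6),
        )
        # Initialize to the identity transform so training starts stably.
        nn.init.zeros_(self.loc[-1].weight)
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # 1x1 conv producing a single-channel spatial attention map.
        self.att = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x):                        # x: (N, C, H, W)
        theta = self.loc(x).view(-1, 2, 3)       # affine parameters per sample
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        aligned = F.grid_sample(x, grid, align_corners=False)
        mask = torch.sigmoid(self.att(aligned))  # high weights on salient regions
        weighted = aligned * mask                # suppress background responses
        return F.adaptive_avg_pool2d(weighted, 1).flatten(1)  # (N, C) embedding

if __name__ == "__main__":
    feats = torch.randn(8, 256, 16, 16)          # e.g. backbone feature maps
    print(AttentionAlignBlock(256)(feats).shape) # torch.Size([8, 256])
```

In practice such a block would sit on top of a CNN backbone and feed a reID loss (e.g. identity classification or triplet loss); those choices are outside the scope of this sketch.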
1 Introduction

Research related to vehicles has attracted wide attention and made notable progress in the field of computer vision, for tasks such as vehicle detection [4, 10], tracking [5, 26] and classification [22, 29]. Different from the tasks above, the purpose of vehicle reID is to accurately
match the target vehicle captured from multiple non-overlapping cameras, which is of great significance to intelligent transportation. Meanwhile, the large amount of video and image data can be processed automatically by vehicle reID to extract meaningful information, which plays an important role in modern smart surveillance systems. With the development of deep learning, many excellent deep learning-based methods [6, 8, 11, 46] have been proposed for the vehicle reID task. However, there still exist many limitations for real-world application. Different from person reID [12, 34, 36], fine-grained classification [31, 35, 38] and other methods [30, 33, 37] that can extract rich features from images with various poses and colors, vehicles are generally rigid structures with solid colors and appearances.