Efficient Image Retrieval via Feature Fusion and Adaptive Weighting
1 School of Computer, Shenyang Aerospace University, Shenyang 110136, China
2 Key Laboratory of Liaoning General Aviation Academy, Shenyang 110136, China
3 School of Information, Liaoning University, Shenyang 110036, China
[email protected]
Abstract. In content-based image retrieval (CBIR), a single feature describes only one specific aspect of image content, so false positive matches are commonly returned among the candidate retrieval results, lowering precision and recall. Typically, the widely used SIFT feature depicts only the local gradient distribution within ROIs of gray-scale images; it lacks color information and tends to yield limited retrieval performance. To tackle these problems, we propose a feature fusion method that integrates multiple diverse image features to gain more complementary and helpful image information. Furthermore, to reflect the different discriminative powers of image features, we propose a dynamically updated Adaptive Weights Allocation Algorithm (AWAA), which rationally allocates fusion weights in proportion to each feature's contribution to matching. Extensive experiments on several benchmark datasets demonstrate that image retrieval with feature fusion and adaptive weighting yields more accurate and robust results than conventional retrieval schemes.
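The abstract only summarizes AWAA; as a minimal sketch of the general idea — fusing per-feature similarity scores with weights proportional to an estimate of each feature's contribution to matching — one might write the following. The function names and the top-k heuristic are illustrative assumptions, not the paper's exact update rule:

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Fuse per-feature similarity scores with normalized weights.

    score_lists: dict mapping feature name -> array of similarity
                 scores (one entry per database image), each in [0, 1].
    weights:     dict mapping feature name -> fusion weight.
    """
    total = sum(weights.values())
    fused = np.zeros_like(next(iter(score_lists.values())), dtype=float)
    for name, scores in score_lists.items():
        fused += (weights[name] / total) * np.asarray(scores, dtype=float)
    return fused

def adapt_weights(score_lists, top_k=5):
    """Illustrative re-weighting: each feature's weight is set to the
    mean of its top-k similarity scores, a rough proxy for how much
    that feature contributes to matching on the current query."""
    weights = {}
    for name, scores in score_lists.items():
        s = np.sort(np.asarray(scores, dtype=float))[::-1]
        weights[name] = float(s[:top_k].mean())
    return weights
```

A feature whose top-ranked matches score higher thus receives a larger share of the fused similarity, which is the spirit of allocating weights proportional to contribution.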
Keywords: Image retrieval · Feature fusion · Adaptive weighting · Color Names · BoW

1 Introduction
With the rapid growth of digital image data around us, conventional image retrieval methods based on keyword labelling are inadequate for large-scale retrieval tasks, for two main reasons: the heavy overhead of manual labelling, and matching deviations caused by the subjectivity of individual annotators. Content-based image retrieval technology therefore attracts an increasing number of application areas and market demands, accompanied by many unavoidable challenges.

So far, many state-of-the-art CBIR methods are based on the Bag-of-Words (BoW) model [5,8,10], which originates from the text retrieval field and describes an image by a bag vector of feature-descriptor words. First, local descriptors (e.g. SIFT [3]) are computed from ROIs with invariant or affine region detectors, yielding a large number of high-dimensional local feature vectors from image patches. Next, a codebook composed of many codewords is constructed with unsupervised clustering methods (k-means, approximate k-means, hierarchical k-means, etc.) [6]. Finally, the continuous feature space is subdivided into a discrete search space by quantizing against the pre-trained codebook: each descriptor is assigned to its nearest centroid, and the image is then depicted as a frequency vector of visual words.

X. Shi et al. © Springer Nature Singapore Pte Ltd. 2016. T. Tan et al. (Eds.): CCPR 2016, Part II, CCIS 663, pp. 259–273, 2016. DOI: 10.1007/978-981-10-3005-5_22
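The three-step BoW pipeline above (descriptor extraction, codebook construction, quantization to a word histogram) can be sketched as follows. The toy k-means with random initialization is an assumption for illustration; real systems feed in SIFT descriptors and use approximate or hierarchical k-means at scale:

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Train a k-means codebook of k visual words from an (n, d) array
    of local descriptors (e.g. SIFT vectors)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        # move each centroid to the mean of its assigned descriptors
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def bow_histogram(descriptors, codebook):
    """Quantize descriptors to their nearest visual words and return the
    normalized word-frequency vector that represents the image."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)
```

Images are then compared by the distance between their normalized histograms, turning the continuous descriptor space into a discrete, index-friendly search space.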