Findings on ranking evaluation functions for feature weighting in image retrieval
RESEARCH
Open Access
Sergio F da Silva1*, Letricia PS Avalhais1, Marcos A Batista2, Celia AZ Barcelos3 and Agma JM Traina1

*Correspondence: [email protected]. 1Federal University of Goiás, Catalão, 75604-020, Goiás, Brazil. Full list of author information is available at the end of the article.
Abstract

Background: Ranking optimization yields substantial benefits in many information retrieval and recommendation systems. However, ranking evaluation functions (REFs), which play a major role in many ranking optimization models, still need further investigation. A review of previous studies on REFs provides evidence that the choice of a proper REF is context sensitive.

Methods: In this study, we analyze a broad set of REFs for feature weighting aimed at increasing image retrieval effectiveness. Ten REFs are analyzed, including the most successful and representative ones from the literature. The REFs were embedded into a genetic algorithm (GA)-based relevance feedback (RF) model, called WLSP-C±, which improves image retrieval results by learning weights for image descriptors and image regions.

Results: Analyses of precision-recall curves on five real-world image data sets showed that a non-parameterized REF named F5, not considered in previous studies, outperformed the recommended REFs, which require parameter adjustment. We also provide a computational analysis of the GA-based RF model, showing that its cost is linear in the image data set cardinality.

Conclusions: REF F5 should be investigated in other contexts and problem scenarios centered on ranking optimization, since ranking optimization techniques rely heavily on the ranking quality measure.

Keywords: Rank learning; Ranking evaluation functions; Content-based image retrieval; Genetic algorithms
Background

Ranking optimization research has fostered widespread developments in information retrieval and recommendation systems [1-6]. Ranking optimization techniques can be grouped into three main classes: rank learning [2,4,5], rank aggregation (also known as data fusion) [7-10], and ranking (or list) diversification [1,11,12]. Rank learning relies on supervised queries, relevance feedback, or context information to build an adequate model for ranking items such as web pages and images. Rank aggregation is normally an unsupervised method that takes several multi-criteria ranks and combines them into a consensus rank. Ranking (or list) diversification, in turn, aims at balancing 'precision' and 'diversity' so that the result reflects a broad spectrum of user interests.
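As a concrete illustration of the rank-aggregation idea (not a technique proposed in this paper), the sketch below combines several criterion-specific ranked lists into a consensus rank by Borda counting; the function name borda_aggregate and the toy item identifiers are assumptions made only for the example.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine several ranked lists of item ids into one consensus rank.

    Each input ranking is an ordered list, best item first. An item earns
    (list_length - position) points per list, and the consensus rank orders
    items by total points. This is classic Borda-count aggregation, shown
    only to illustrate unsupervised rank aggregation in general.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Example: three criteria rank five images differently.
rank_by_color   = ["img3", "img1", "img5", "img2", "img4"]
rank_by_texture = ["img1", "img3", "img2", "img5", "img4"]
rank_by_shape   = ["img3", "img2", "img1", "img4", "img5"]
print(borda_aggregate([rank_by_color, rank_by_texture, rank_by_shape]))
# -> ['img3', 'img1', 'img2', 'img5', 'img4']
```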
Rank learning tasks are generally stated as optimization problems: finding the best model (or the best adjustment of a given model), under some representation, for ranking items. Given this general formulation, solutions of rank learning
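To make this optimization view concrete, the sketch below shows how a candidate feature-weight vector can be scored: the data set is ranked by a distance that uses the weights, and the resulting rank is evaluated by a ranking quality measure. This is only an illustration under assumed choices (weighted Euclidean distance, average precision as the REF, and the hypothetical helper names weighted_rank and fitness); it is not the WLSP-C± implementation described in this paper.

```python
import numpy as np

def weighted_rank(query, database, weights):
    """Rank data set items by weighted Euclidean distance to the query (most similar first)."""
    dist = np.sqrt((((database - query) ** 2) * weights).sum(axis=1))
    return np.argsort(dist)

def average_precision(ranked_items, relevant_items):
    """Average precision of a ranked list: one common non-parameterized ranking quality measure."""
    relevant = set(relevant_items)
    hits, total = 0, 0.0
    for position, item in enumerate(ranked_items, start=1):
        if int(item) in relevant:
            hits += 1
            total += hits / position
    return total / max(len(relevant), 1)

def fitness(weights, query, database, relevant_items):
    """Score a candidate weight vector by the REF value of the rank it induces.

    A GA-based relevance-feedback loop would evolve a population of such
    weight vectors, keeping those that maximize this score.
    """
    return average_precision(weighted_rank(query, database, weights), relevant_items)

# Toy run: 6 items with 4 features each; the user marked items 0 and 2 as relevant.
rng = np.random.default_rng(42)
database = rng.random((6, 4))
print(fitness(np.ones(4), query=database[0], database=database, relevant_items=[0, 2]))
```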