A Learning State-Space Model for Image Retrieval
Research Article

Cheng-Chieh Chiang,¹˒² Yi-Ping Hung,³ and Greg C. Lee⁴

¹ Department of Information and Computer Education, College of Education, National Taiwan Normal University, Taipei 106, Taiwan
² Department of Information Technology, Takming College, Taipei 114, Taiwan
³ Graduate Institute of Networking and Multimedia, College of Electrical Engineering and Computer Science, National Taiwan University, Taipei 106, Taiwan
⁴ Department of Computer Science and Information Engineering, College of Science, National Taiwan Normal University, Taipei 106, Taiwan

Received 30 August 2006; Accepted 12 March 2007

Recommended by Ebroul Izquierdo

This paper proposes an approach based on a state-space model for learning the user concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user's intuition in image retrieval.

Copyright © 2007 Cheng-Chieh Chiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. INTRODUCTION
Image retrieval has become a very active research area since the 1990s due to the rapid increase in the use of digital images [1, 2]. Estimating the user's concepts is one of the most difficult tasks in image retrieval. Feature extraction captures only low-level features such as color, texture, and shape from an image. However, people understand an image semantically, rather than via low-level visual features, and there is a large gap between the low-level features and the high-level concepts in image understanding [3]. The relevance feedback approach [4, 5] is widely used for bridging this semantic gap. In each iteration of a retrieval task, the user marks some retrieved images as relevant or irrelevant according to his or her concepts, from which the system learns to estimate what the user actually wants. Many types of learning models have been applied in relevance feedback for image retrieval, such as the Bayesian framework [6–8], SVM [9], and active learning [10]. Goh et al. also proposed several quantitative measures to model concept complexity in the learning of relevance feedback [10].

Image representation is another important issue that needs to be addressed when solving the above problem. It is necessary to design good units for image representation even if a perfect learning approach were available.
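To make the relevance-feedback loop concrete, the sketch below shows one feedback iteration using a classic Rocchio-style query update over feature vectors. This is a generic illustration of relevance feedback, not the state-space model proposed in this paper; the weighting coefficients and the Euclidean distance measure are conventional assumptions.

```python
import numpy as np

def rocchio_update(query, relevant, irrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio-style relevance-feedback update of a query vector.

    relevant / irrelevant: lists of feature vectors the user marked.
    alpha, beta, gamma are conventional defaults, not values from
    this paper.
    """
    q = alpha * query
    if relevant:
        q = q + beta * np.mean(relevant, axis=0)   # move toward relevant
    if irrelevant:
        q = q - gamma * np.mean(irrelevant, axis=0)  # move away from irrelevant
    return q

def rank_images(query, database):
    """Rank database images by Euclidean distance to the query vector."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)

# Example: one feedback round on toy 2-D feature vectors.
query = np.array([0.0, 0.0])
updated = rocchio_update(query,
                         relevant=[np.array([1.0, 0.0])],
                         irrelevant=[np.array([-1.0, 0.0])])
database = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
ranking = rank_images(updated, database)
```

After the update, the query vector has shifted toward the relevant example and away from the irrelevant one, so the relevant image is ranked first in the next retrieval round; iterating this loop is what lets the system converge on the user's concept.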