Synobins: An Intermediate Level towards Annotation and Semantic Retrieval
Daniela Stan Raicu1 and Ishwar K. Sethi2

1 Intelligent Multimedia Processing Laboratory, School of Computer Science, Telecommunications, and Information Systems, DePaul University, Chicago, IL 60604, USA
2 IIE Laboratory, Department of Computer Science & Engineering, Oakland University, Rochester, MI 48309-4478, USA

Received 1 September 2004; Revised 24 March 2005; Accepted 9 May 2005

To reason about the meaning of an image, useful information should be provided with that image; however, images often contain little to no textual information about the objects they depict, which is precisely why CBIR systems that exploit only the correlations present in the raw pixel data are needed. In this paper, we propose a new type of image feature, consisting of patterns of colors and intensities that capture the latent associations among images and primitive features in such a way that noise and redundancy are eliminated. We introduce the synobin, a new term in the content-based image retrieval literature and the equivalent of a synonym in text retrieval, to name a bin that is synonymous with other bins of a color feature, in the sense that they are used similarly across the image database. Formally, a group of synobins is given by the most important bins participating in the formation of a useful pattern, that is, the bins having the highest coefficients in the linear combination defining that pattern. Incorporating our feature model into a CBIR system moves image retrieval research beyond simple matching of images based on their primitive features and creates a ground for learning image semantics from visual content. A system developed using our proposed feature model will be capable of learning associations not only between semantic concepts and images, but also between semantic concepts and patterns. We evaluated the performance of our system based on retrieval accuracy and on the perceptual similarity order among retrieved images. When compared to standard image retrieval methods, our preliminary results show that even though the feature space was reduced to only 3%–5% of the initial space, the accuracy and perceptual similarity of our system remain the same or better, depending on the category of images. Copyright © 2006 Hindawi Publishing Corporation. All rights reserved.
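The following is a minimal sketch of how synobin groups might be derived from a bin-by-image matrix of color-feature counts. The decomposition method shown here (a truncated SVD) is an assumption made for illustration; the abstract only states that each pattern is a linear combination of bins and that synobins are the bins with the highest coefficients in that combination. All function and parameter names are hypothetical.

import numpy as np

def find_synobins(bin_image_matrix, n_patterns=10, top_k=5):
    """bin_image_matrix: rows = color/intensity bins, columns = images,
    entries = normalized histogram counts for each image."""
    # Truncated SVD (assumed here): columns of U are latent patterns over bins.
    U, S, Vt = np.linalg.svd(bin_image_matrix, full_matrices=False)
    patterns = U[:, :n_patterns]  # each column: one pattern, a linear combination of bins
    synobin_groups = []
    for p in range(n_patterns):
        coeffs = np.abs(patterns[:, p])
        # Bins with the largest coefficients act as the "synonymous" bins of this pattern.
        synobin_groups.append(np.argsort(coeffs)[::-1][:top_k])
    return patterns, synobin_groups

# Toy usage: 256 histogram bins across 1000 images (random stand-in data).
X = np.random.rand(256, 1000)
patterns, synobin_groups = find_synobins(X)
print(synobin_groups[0])  # indices of the bins forming the first pattern's synobin group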
1. INTRODUCTION
In the last ten years, the multimedia superhighway has expanded exponentially, bringing a vast repository of information to the desktop in a few mouse clicks; therefore, there is an ever-growing demand for tools that locate information by content with greater accuracy and efficiency. In particular, methods for content-based image retrieval (CBIR) have drawn the most attention, as many of the underlying techniques can be applied to other multimedia artifacts with suitable modifications. A CBIR system can be viewed as having two main components: feature extraction and the search for similar images in the database.
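As a rough illustration of these two components, the sketch below pairs a simple color-histogram extractor (one of the primitive features mentioned above) with a nearest-neighbor search over the database features. The function names, distance metric, and bin count are illustrative assumptions, not details taken from the paper.

import numpy as np

def extract_color_histogram(image, bins=64):
    """Map an RGB image to a normalized intensity histogram (a primitive feature)."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def search_similar(query_feat, db_feats, top_k=5):
    """Rank database images by Euclidean distance in the feature space."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:top_k]

# Toy usage with random stand-in images.
db_images = [np.random.randint(0, 256, (32, 32, 3)) for _ in range(100)]
db_feats = np.vstack([extract_color_histogram(im) for im in db_images])
query_feat = extract_color_histogram(db_images[0])
print(search_similar(query_feat, db_feats))  # indices of the most similar database images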