Query-by-Example Music Information Retrieval by Score-Informed Source Separation and Remixing Technologies
Research Article

Katsutoshi Itoyama,¹ Masataka Goto,² Kazunori Komatani,¹ Tetsuya Ogata,¹ and Hiroshi G. Okuno¹

¹ Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
² Media Interaction Group, Information Technology Research Institute (ITRI), National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan

Correspondence should be addressed to Katsutoshi Itoyama, [email protected]

Received 1 March 2010; Revised 10 September 2010; Accepted 31 December 2010

Academic Editor: Augusto Sarti

Copyright © 2010 Katsutoshi Itoyama et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We describe a novel query-by-example (QBE) approach in music information retrieval that allows a user to customize query examples by directly modifying the volume of different instrument parts. The underlying hypothesis of this approach is that the musical mood of retrieved results changes in relation to the volume balance of different instruments. On the basis of this hypothesis, we aim to clarify the relationship between a change in the volume balance of a query and the genre of the retrieved pieces, which we call a genre classification shift. Such an understanding would allow us to instruct users in how to generate alternative queries without having to find other appropriate pieces. Our QBE system first separates all instrument parts from the audio signal of a piece with the help of its musical score, and then allows users to remix these parts to change the acoustic features that represent the musical mood of the piece. Experimental results showed that the genre classification shift was actually caused by volume changes in the vocal, guitar, and drum parts.
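The remixing step described in the abstract can be sketched as a weighted sum of separated instrument stems, where each part's gain controls its volume in the new query. This is a minimal illustrative sketch, not the paper's implementation; the stem names, gain values, and `remix` helper are all hypothetical.

```python
import numpy as np

def remix(stems, gains):
    """Weighted sum of separated stems (dict: part name -> 1-D signal array).

    Parts absent from `gains` keep unit gain, i.e., their original volume.
    """
    mix = np.zeros_like(next(iter(stems.values())), dtype=float)
    for name, signal in stems.items():
        mix += gains.get(name, 1.0) * signal
    return mix

# Toy example: three 4-sample "stems"; halve the vocal, double the drums.
stems = {
    "vocal":  np.array([1.0, 1.0, 1.0, 1.0]),
    "guitar": np.array([0.5, 0.5, 0.5, 0.5]),
    "drums":  np.array([0.2, 0.2, 0.2, 0.2]),
}
gains = {"vocal": 0.5, "drums": 2.0}
print(remix(stems, gains))  # guitar keeps unit gain
```

In the actual system the stems would come from score-informed source separation; here they are toy arrays, but the same per-part gain structure applies.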
1. Introduction

One of the most promising approaches in music information retrieval is query-by-example (QBE) retrieval [1–7], where a user receives a list of musical pieces ranked by their similarity to a musical piece (example) that the user gives as a query. This approach is powerful and useful, but the user has to prepare or find examples of favorite pieces, and it is sometimes difficult to control or change the retrieved pieces after seeing them, because another appropriate example must be found and given to get better results. For example, even if a user feels that the vocal or drum sounds are too strong in the retrieved pieces, it is difficult to find another piece that has weaker vocal or drum sounds while maintaining the basic mood and timbre of the first piece. Since finding such pieces is currently a matter of trial and error, we need more direct and convenient methods for QBE. Here we assume that a QBE retrieval system takes audio inputs and treats low-level acoustic features (e.g., Mel-frequency
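The QBE ranking just described can be sketched as a nearest-neighbor search over low-level feature vectors. This is an illustrative sketch only: the toy feature vectors stand in for summarized acoustic features (e.g., mean MFCCs), and the piece IDs and `rank_by_similarity` helper are hypothetical.

```python
import numpy as np

def rank_by_similarity(query_vec, database):
    """Return piece IDs sorted from most to least similar to the query,
    using Euclidean distance between fixed-length feature vectors."""
    dists = {pid: np.linalg.norm(query_vec - vec) for pid, vec in database.items()}
    return sorted(dists, key=dists.get)

# Toy 3-dimensional feature vectors standing in for per-piece acoustic features.
query = np.array([1.0, 0.0, 0.0])
database = {
    "piece_a": np.array([0.9, 0.1, 0.0]),
    "piece_b": np.array([0.0, 1.0, 0.0]),
    "piece_c": np.array([1.0, 0.0, 0.1]),
}
print(rank_by_similarity(query, database))
```

Under this view, remixing the query's instrument volumes moves its feature vector, which in turn changes which database pieces rank highest, which is the mechanism behind the genre classification shift studied in the paper.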