Infrared and visible image fusion using modified spatial frequency-based clustered dictionary
THEORETICAL ADVANCES
Sumit Budhiraja1 · Rajat Sharma1 · Sunil Agrawal1 · Balwinder S. Sohi2
Received: 18 March 2020 / Accepted: 9 September 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract
Infrared and visible image fusion is an active area of research, as it provides a fused image with better scene information and sharper features. Efficient fusion of images from multi-sensor sources remains a challenge for researchers. In this paper, an efficient image fusion method based on sparse representation with a clustered dictionary is proposed for infrared and visible images. First, the edge information of the visible image is enhanced using a guided filter. To extract more edge information from the source images, modified spatial frequency is used to generate a clustered dictionary from the source images. Then, the non-subsampled contourlet transform (NSCT) is used to obtain low-frequency and high-frequency sub-bands of the source images. The low-frequency sub-bands are fused using sparse coding, and the high-frequency sub-bands are fused using the max-absolute rule. The final fused image is obtained by applying the inverse NSCT. Subjective and objective evaluations show that the proposed method outperforms other conventional image fusion methods.

Keywords: Image fusion · Sparse representation · Dictionary learning · Spatial frequency · Online dictionary learning
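Spatial frequency, the activity measure underlying the paper's dictionary-clustering step, has a standard closed form: the root-mean-square of the horizontal and vertical first differences of an image block. The sketch below implements this classical definition only; the "modified" variant proposed in the paper is not reproduced here, and the function name is our own.

```python
import numpy as np

def spatial_frequency(block):
    """Classical spatial frequency of a grayscale image block.

    SF = sqrt(RF^2 + CF^2), where RF (row frequency) and CF
    (column frequency) are the RMS values of the horizontal and
    vertical first differences, respectively.
    """
    block = np.asarray(block, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # horizontal differences
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # vertical differences
    return np.sqrt(rf ** 2 + cf ** 2)

# A flat block carries no detail; an edge-rich block scores much higher,
# which is why SF is a natural criterion for clustering dictionary patches.
flat = np.full((8, 8), 100.0)
edges = np.tile([0.0, 255.0], (8, 4))  # alternating vertical stripes
print(spatial_frequency(flat), spatial_frequency(edges))
```

Blocks with high spatial frequency would be assigned to the "detail" cluster of the dictionary, while low-SF blocks represent smooth regions.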
1 Introduction

Due to advancements in sensing technology over the years, good-quality images produced by different sensors have become readily available. Image fusion combines complementary information from multiple images to generate a fused image with better scene representation. Infrared and visible images are one such example: they contain complementary information about the target scene, as the infrared image captures thermal information while the visible image captures texture and detailed spatial information with low noise levels [1]. The fusion of infrared and visible images provides a clearer image with better details and is especially helpful under low-illumination conditions [2]. Infrared and visible image fusion finds application in areas such as surveillance [3], object recognition [4], object detection and tracking [5], and image enhancement [6]. Traditional image fusion methods have been categorized
* Sumit Budhiraja
  [email protected]
1 ECE, UIET, Panjab University, Chandigarh 160014, India
2 Chandigarh University, Gharuan, Mohali, Punjab 140413, India
into spatial domain and transform domain methods. Spatial domain methods are simple and easy to implement, but can introduce spectral distortion. Transform domain methods represent the salient features of the image using transform coefficients. Traditional transform domain methods have used the Laplacian pyramid transform (LPT) [7], discrete wavelet transform (DWT) [8], curvelet transform [9], and dual-tree complex wavelet transform (DTCWT)
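To make the transform-domain idea concrete, the sketch below fuses two images with a single-level 2-D Haar wavelet: the low-pass bands are averaged and, for each detail band, the coefficient with the larger magnitude is kept (the max-absolute rule the abstract mentions). This is a minimal stand-in for the multiscale transforms cited above (DWT, NSCT, etc.), not the paper's method; all helper names are our own, and image dimensions are assumed even.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img_a, img_b):
    """Average the low-pass band; keep the larger-magnitude detail coefficient."""
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return haar_idwt2(ll, *details)
```

As a sanity check, fusing an image with itself reconstructs the image, since the averaged low-pass band and the selected detail bands are then identical to the originals.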