Multi-focus image fusion for multiple images using adaptable size windows and parallel programming



ORIGINAL PAPER

Multi-focus image fusion for multiple images using adaptable size windows and parallel programming

Adan Garnica-Carrillo¹ · Felix Calderon¹ · Juan Flores¹

Received: 18 September 2018 / Revised: 7 February 2020 / Accepted: 26 February 2020

© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract

The multi-focus image fusion with adaptable windows (MF-AW) algorithm for multiple images improves on the results of the linear combination of images with variable windows (CLI-VV, from its Spanish acronym) algorithm by using a single decision map and applying parallel programming. Other algorithms use the same window size throughout the image to produce a decision map; furthermore, a different decision map is produced for each pair of images. MF-AW determines the largest possible window size delimited by the edges of the decision map, which are refined through an iterative process. The execution time is improved using integral images, binary search, and parallel programming; as a result, the fused image is obtained in tenths of a second. Quantitative and qualitative measures indicate that the results obtained with this algorithm outperform the state of the art in both accuracy and execution time.

Keywords Image fusion · Image segmentation · Adaptable size windows · Decision map
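The abstract attributes part of the speedup to integral images (summed-area tables), which reduce the cost of any rectangular window sum to four table lookups. As a minimal illustration of that technique (not the paper's actual implementation, and with hypothetical function names), a summed-area table can be built and queried as follows:

```python
def integral_image(img):
    """Build a summed-area table S with one extra row and column of zeros,
    so that S[r][c] holds the sum of img[0..r-1][0..c-1]."""
    rows, cols = len(img), len(img[0])
    S = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            # Sum above this cell plus the running sum of the current row.
            S[r + 1][c + 1] = S[r][c + 1] + row_sum
    return S

def window_sum(S, top, left, bottom, right):
    """Sum of img[top..bottom][left..right] in O(1), independent of window size."""
    return (S[bottom + 1][right + 1] - S[top][right + 1]
            - S[bottom + 1][left] + S[top][left])
```

Because the query cost does not depend on the window size, an algorithm that repeatedly evaluates windows of varying sizes (as MF-AW does when searching for the largest admissible window) pays only a one-time O(nr · nc) cost to build the table.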

1 Introduction

In multiple applications, it is desirable to have the sharpest possible digital images. However, the lenses used in image capture systems have limitations that prevent capturing completely sharp images of objects at all distances. When we take a photograph, we decide which objects to focus the lens on, so their details are captured with the greatest clarity; consequently, objects at other distances may appear sharper or blurrier. Kuthirummal et al. [1] define the depth of field as the range of distances from the camera lens (for a given aperture) that can be reproduced sharply. When an object lies outside the range of distances covered by the depth of field, it generates a colour circle that affects an area of the camera’s sensor when it should affect only a single point; those circles are known as blur circles [2]. The colours of the pixels in the generated image affected by

Adan Garnica-Carrillo (corresponding author)
[email protected]

Felix Calderon
[email protected]

Juan Flores
[email protected]

¹ DEP-FIE, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, Mexico

blur circles result from a mixture of colours corresponding to several points of the scene, instead of the colour of a single point. This causes those pixels to appear blurred or unfocused. Given a multi-focus image set I = {I(1), I(2), ..., I(N)} of N images of size n_r × n_c, each focused on objects at different distances, we define the multi-focus image fusion problem as the process of extracting the sharp regions of each image I(k) with the goal of forming a new image J that is as sharp as possible. The fused image is more suitable for human perception or digital processing.
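The fusion problem defined above can be sketched with a naive baseline: measure sharpness with a per-window focus measure and, for each pixel, copy the value from the source image whose neighbourhood scores highest, recording the choice in a decision map. This sketch uses fixed-size windows and local variance as the focus measure; it is only an assumption-laden illustration of the problem, not the MF-AW algorithm, which adapts the window size and refines the map iteratively.

```python
def local_variance(img, r, c, half):
    """Variance of grey levels in a (2*half+1)-square window clipped to the image.
    Higher variance is used here as a crude proxy for sharpness."""
    rows, cols = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, r - half), min(rows, r + half + 1))
            for j in range(max(0, c - half), min(cols, c + half + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def fuse(images, half=1):
    """Naive multi-focus fusion: for every pixel, take the value from the
    source image whose surrounding window has the highest variance.
    Returns the fused image J and the per-pixel decision map."""
    rows, cols = len(images[0]), len(images[0][0])
    fused = [[0] * cols for _ in range(rows)]
    decision = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            k = max(range(len(images)),
                    key=lambda i: local_variance(images[i], r, c, half))
            decision[r][c] = k
            fused[r][c] = images[k][r][c]
    return fused, decision
```

On a high-contrast (in-focus) patch paired with a flat (defocused) one, the decision map selects the high-contrast source everywhere. Computing one decision map over all N inputs at once, rather than one map per image pair, mirrors the single-map design the abstract describes.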