A Diffusion Approach to Unsupervised Segmentation of Hyper-Spectral Images



Abstract

Hyper-spectral cameras capture images at hundreds and even thousands of wavelengths. These hyper-spectral images offer orders of magnitude more intensity information than RGB images. This information can be utilized to obtain segmentation results which are superior to those that are obtained using RGB images. However, many of the wavelengths are correlated and many others are noisy. Consequently, the hyper-spectral data must be preprocessed prior to the application of any segmentation algorithm. Such preprocessing must remove the noise and interwavelength correlations and, due to complexity constraints, represent each pixel by a small number of features which capture the structure of the image. The contribution of this paper is three-fold. First, we utilize the diffusion bases dimensionality reduction algorithm (Schclar and Averbuch in Diffusion bases dimensionality reduction, pp. 151–156, [1]) to derive the features which are needed for the segmentation. Second, we describe a faster version of the diffusion bases algorithm which uses symmetric matrices. Third, we propose a simple algorithm for the segmentation of the dimensionality-reduced image. Successful application of the algorithms to hyper-spectral microscopic images and remote-sensed hyper-spectral images demonstrates the effectiveness of the proposed algorithms.

Keywords Segmentation · Diffusion bases · Dimensionality reduction · Hyper-spectral sensing

A. Schclar (B) School of Computer Science, Academic College of Tel-Aviv Yaffo, POB 8401, 61083 Tel Aviv, Israel e-mail: [email protected] A. Averbuch School of Computer Science, Tel Aviv University, POB 39040, 69978 Tel Aviv, Israel e-mail: [email protected] © Springer Nature Switzerland AG 2019 C. Sabourin et al. (eds.), Computational Intelligence, Studies in Computational Intelligence 829, https://doi.org/10.1007/978-3-030-16469-0_9


1 Introduction

Image segmentation is the process of partitioning an image into disjoint regions. Pixels that belong to the same region are more similar to each other than to pixels that belong to different regions. Each region is referred to as a segment.

Regular CCD cameras provide very limited spectral information since they are equipped with sensors that capture only details that are visible to the naked eye. Hyper-spectral cameras, in contrast, are equipped with multiple sensors, each sensitive to a subrange of the light spectrum, including ranges that are not visible to the naked eye, namely infra-red and ultra-violet. The camera's output contains the reflectance values of a scene at all the wavelengths of the sensors. Hyper-spectral cameras can be mounted on airplanes (e.g. [2]) or microscopes [3], or they can be hand held [4].

A hyper-spectral image is composed of a set of images, each of which contains the reflectance values for a particular wavelength subrange. We refer to the set of reflectance values at a coordinate (x, y) as a hyper-pixel. Each hyper-pixel can be represented by a vector in Rn, where n is the number of wavelength subranges. This data c
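The hyper-pixel representation described above can be sketched in code. The following is a minimal illustration, not part of the original paper; the scene dimensions, the NumPy cube layout, and the random reflectance values are all assumptions made for the example:

```python
import numpy as np

# Hypothetical dimensions: a 128x128 scene captured at n = 200 wavelength subranges.
H, W, n = 128, 128, 200

# A hyper-spectral image as a cube: one reflectance image per wavelength subrange.
# Random values stand in for real reflectance measurements.
cube = np.random.rand(H, W, n)

# The hyper-pixel at coordinate (x, y) is the vector of its n reflectance values,
# i.e. a point in R^n.
x, y = 10, 20
hyper_pixel = cube[y, x, :]
assert hyper_pixel.shape == (n,)

# Flattening the cube yields one n-dimensional vector per pixel -- the form of
# input a dimensionality-reduction step (such as diffusion bases) would operate on.
vectors = cube.reshape(-1, n)
assert vectors.shape == (H * W, n)
```

With row-major flattening, the hyper-pixel at (x, y) appears at row y * W + x of the vector matrix, so per-pixel features recovered after dimensionality reduction can be mapped back to image coordinates.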