Nearest Neighborhood Grayscale Operator for Hardware-Efficient Microscale Texture Extraction

Research Article

Christian Mayr¹ and Andreas König²

¹ TU Dresden, Lehrstuhl Hochparallele VLSI-Systeme und Neuromikroelektronik, Helmholtzstraße 10, 01062 Dresden, Germany
² TU Kaiserslautern, FB Elektrotechnik und Informationstechnik, Lehrstuhl Integrierte Sensorsysteme, Erwin-Schrödinger-Straße, 67663 Kaiserslautern, Germany

Received 23 November 2005; Revised 1 August 2006; Accepted 10 September 2006

Recommended by Montse Pardas

First-stage feature computation and data rate reduction play a crucial role in an efficient visual information processing system. Hardware-based first stages usually win out where power consumption, dynamic range, and speed are the issue, but have severe limitations with regard to flexibility. In this paper, the local orientation coding (LOC), a nearest neighborhood grayscale operator, is investigated and enhanced for hardware implementation. The features produced by this operator are easy and fast to compute, compress the salient information contained in an image, and lend themselves naturally to various medium-to-high-level postprocessing methods such as texture segmentation, image decomposition, and feature tracking. An image sensor architecture based on the LOC has been elaborated that combines high dynamic range (HDR) image acquisition, feature computation, and inherent pixel-level ADC in the pixel cells. The mixed-signal design allows for simple readout as digital memory.

Copyright © 2007 C. Mayr and A. König. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
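To make the notion of a nearest-neighborhood grayscale coding concrete, the following minimal Python sketch compares each pixel against its four direct neighbors and packs the thresholded differences into a 4-bit code. The 4-neighborhood, the single global threshold t, and the bit weights are simplifying assumptions made here for illustration only, not the exact definition of the LOC operator investigated in this paper.

import numpy as np

def neighborhood_code(img, t=4):
    """Toy nearest-neighborhood grayscale coding (illustrative only).

    Each pixel is compared against its 4-connected neighbors; every
    comparison exceeding the threshold t sets one bit of a 4-bit code.
    Neighborhood, threshold handling, and bit weights are assumptions
    of this sketch, not the operator defined in the paper.
    """
    img = img.astype(np.int32)           # avoid uint8 wraparound
    code = np.zeros(img.shape, dtype=np.uint8)
    # (row offset, column offset, bit weight) for the 4-neighborhood
    neighbors = [(-1, 0, 1), (0, -1, 2), (0, 1, 4), (1, 0, 8)]
    for dy, dx, weight in neighbors:
        # bring the neighbor value at (y+dy, x+dx) to position (y, x)
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        code |= ((img - shifted) > t).astype(np.uint8) * weight
    # border pixels lack a full neighborhood; mask them out
    code[0, :] = code[-1, :] = code[:, 0] = code[:, -1] = 0
    return code

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    print(neighborhood_code(frame))

Because each output value depends only on a pixel and its immediate neighbors, such a coding maps naturally onto simple per-pixel comparator circuitry, which is what makes this family of operators attractive for the sensor-level, mixed-signal implementation pursued in this paper.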

1. INTRODUCTION

The speed, accuracy, power consumption, and complexity of today’s integrated vision systems depend primarily on the first stage of visual information processing. The task of the first stage is to extract relevant features from an image, such as textures, lines and their angles, edges, corners, and intersections. These features have to be extracted robustly with respect to illumination, scale, relative contrast, and so forth. Several integrated pixel sensors operating in the digital domain have been proposed; for example, Tongprasit et al. [1] report a digital pixel sensor which carries out convolution and rank-order filtering up to a mask size of 5 × 5 in a serial-parallel manner. In [2], implementations of a low-level image processing operator realized either as a mixed-signal CMOS computation, as dedicated digital processing on-chip, or as a standard CMOS sensor coupled to FPGA processing are compared, and a case is made that a fast, low-power implementation is best achieved by a parallel, mixed-signal design. The downside of coding the feature extraction in hardware, however, is a severe limitation in the flexibility of the features with regard to changing applications [3], whereas software-based feature extraction