A Predictive NoC Architecture for Vision Systems Dedicated to Image Analysis



Research Article

A Predictive NoC Architecture for Vision Systems Dedicated to Image Analysis

Virginie Fresse, Alain Aubert, and Nathalie Bochard

Laboratoire de Traitement du Signal et Instrumentation, CNRS-UMR 5516, Université Jean Monnet Saint-Étienne, Bâtiment F, 18 Rue Benoit Lauras, 42000 Saint-Étienne Cedex 2, France

Received 1 May 2006; Revised 16 October 2006; Accepted 26 December 2006

Recommended by Dietmar Dietrich

The aim of this paper is to describe an adaptive and predictive FPGA embedded architecture for vision systems dedicated to image analysis. A large panel of image analysis algorithms sharing common characteristics must be mapped onto this architecture. The major characteristics of such algorithms are extracted to define the architecture, which must easily adapt its structure to algorithm modifications: depending on the required modifications, only a few parts must be changed or adapted. An NoC approach is used to break the hardware resources down into stand-alone blocks and to improve predictability and reuse. Moreover, this architecture is designed using a globally asynchronous locally synchronous (GALS) approach so that each local part can be optimized separately to run at its best frequency. Timing and resource prediction models are presented. With these models, the designer defines and evaluates the appropriate structure before the implementation process. The implementation of a particle image velocimetry (PIV) algorithm illustrates this adaptation. Experimental results and predicted results are close enough to validate our prediction models for PIV algorithms.

Copyright © 2007 Virginie Fresse et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

More and more vision systems dedicated to a large panel of applications (tracking, fault detection, etc.) are being designed. Such systems allow computers to understand images and to take appropriate actions, often under hard real-time constraints and sometimes in harsh environments. Moreover, current algorithms are computationally intensive. Traditional PC- or DSP-based systems are most of the time unsuitable for such hard real-time vision systems: they cannot achieve the required performance, so dedicated embedded architectures must be designed. To date, FPGAs are increasingly used because they deliver high-speed processing in a small footprint. Modern FPGAs integrate many heterogeneous resources on a single chip, and the number of resources is high enough that one FPGA can handle all processing operations. Data coming from the sensor or any acquisition device is directly processed by the FPGA; no other external resources are necessary. These systems on chip (SoCs) are becoming more and more popular as they give an efficient quality of results (QoR: area and time) for the implemented system. FPGA-based SoCs are suitable for vision systems