Computer vision algorithms acceleration using graphic processors NVIDIA CUDA
Mouna Afif¹ · Yahia Said¹,² · Mohamed Atri³

Received: 2 July 2019 / Revised: 21 February 2020 / Accepted: 9 March 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
Using graphics processing units (GPUs) in parallel with the central processing unit (CPU) to accelerate algorithms and applications that demand extensive computational resources has become a widespread trend in recent years. In this paper, we propose a GPU-accelerated method to parallelize different computer vision tasks. We report on parallelism and acceleration in computer vision applications and provide an overview of the CUDA NVIDIA GPU programming language used. We then examine the GPU architecture and the acceleration techniques used to optimize time-consuming operations. We introduce high-speed computer vision algorithms implemented on the graphics processing unit using NVIDIA's programming framework, the Compute Unified Device Architecture (CUDA). We achieve significant accelerations for our computer vision algorithms and demonstrate that using CUDA as a GPU programming language can improve efficiency and speedup. In particular, we demonstrate the efficiency of our implementations through the speedups obtained, which for some tasks and image sizes reach factors of 8061, 5991, and 722.

Keywords Computer vision · Integral image · Prefix sum · Features extraction · GPU · NVIDIA · CUDA · Image covariance
✉ Corresponding author: Mouna Afif, [email protected]

1 Laboratoire d'Électronique et Microélectronique, LR99ES30, Faculté des Sciences de Monastir, Université de Monastir, 5000 Monastir, Tunisia
2 Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia
3 College of Computer Science, King Khalid University, Abha, Saudi Arabia

1 Introduction

It is well known that image processing algorithms are very time-consuming, and many image processing applications require simultaneous parallel processing. GPUs are highly parallel multicore systems that can accelerate the processing of computer vision applications. By using GPU devices, we can accelerate numerically intensive algorithms and obtain efficient acceleration compared with central processing units (CPUs). In recent years, graphics processing units (GPUs) have emerged as an attractive alternative to the CPU. The resources and capabilities provided by GPUs make them a natural choice for implementing computer vision applications. A significant portion of the most time-consuming parts of computer vision algorithms is amenable to being run in parallel, while the rest of the code remains on the CPU. Exploiting and accelerating parallel processing has become an important technique for obtaining better, more competitive results than CPU implementations, and for building cost-effective and energy-efficient solutions.
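The CPU/GPU split described above, where the data-parallel portion of an algorithm runs on the GPU while the rest of the program stays on the host, can be sketched with a minimal CUDA program. This is an illustrative example only, not one of the paper's actual algorithms: the per-pixel threshold kernel, the image size, and the launch configuration are assumptions chosen for clarity.

```cuda
// Minimal sketch of the heterogeneous CPU/GPU model: the host (CPU) allocates
// and transfers data, the device (GPU) runs the data-parallel per-pixel work.
#include <cuda_runtime.h>
#include <cstdio>

// Device kernel: each thread handles one pixel (hypothetical binary threshold).
__global__ void threshold(const unsigned char* in, unsigned char* out,
                          int n, unsigned char t) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] > t ? 255 : 0;
}

int main() {
    const int n = 1 << 20;                     // e.g. a 1024x1024 grayscale image
    unsigned char* h = new unsigned char[n];
    for (int i = 0; i < n; ++i) h[i] = i % 256;  // synthetic image data

    // Host side: allocate device buffers and copy the input image to the GPU.
    unsigned char *d_in, *d_out;
    cudaMalloc(&d_in, n);
    cudaMalloc(&d_out, n);
    cudaMemcpy(d_in, h, n, cudaMemcpyHostToDevice);

    // Launch one thread per pixel; everything else remains sequential host code.
    int block = 256, grid = (n + block - 1) / block;
    threshold<<<grid, block>>>(d_in, d_out, n, 128);

    // Copy the result back and continue on the CPU.
    cudaMemcpy(h, d_out, n, cudaMemcpyDeviceToHost);
    printf("pixel 0 -> %d, pixel 200 -> %d\n", h[0], h[200]);  // 0 and 255

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h;
    return 0;
}
```

The same structure scales to the heavier kernels discussed later (prefix sums, integral images, covariance), where the benefit of launching thousands of threads over large images is what produces the reported speedups.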