GPU acceleration of the KAZE image feature extraction algorithm



ORIGINAL RESEARCH PAPER

B. Ramkumar 1 · Rob Laber 2 · Hristo Bojinov 2 · Ravi Sadananda Hegde 1 (corresponding author: [email protected])

1 Department of Electrical Engineering, Indian Institute of Technology, Gandhinagar, Gujarat, India
2 Innit Inc., Redwood City, CA, USA

Received: 20 October 2017 / Accepted: 22 February 2019
© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Abstract

The recently proposed KAZE image feature detection and description algorithm (Alcantarilla et al. in Proceedings of the British Machine Vision Conference, LNCS, vol 7577, no 6, pp 13.1–13.11, 2013) offers significantly improved robustness in comparison to conventional algorithms such as SIFT (scale-invariant feature transform) and SURF (speeded-up robust features). This improved robustness, however, comes at a significant computational cost, which limits its use in many applications. We report a GPU acceleration of the KAZE algorithm that is significantly faster than its CPU counterpart. Unlike previous reports, our acceleration does not resort to binary descriptors and can serve as a drop-in replacement for CPU-KAZE, SIFT, SURF and similar extractors. By achieving a nearly tenfold speedup (for a 1920 × 1200 image, our CUDA-C (Compute Unified Device Architecture) implementation takes around 245 ms on a single GPU, compared to nearly 2400 ms for a 16-threaded CPU version) without degradation in feature extraction performance, our work expands the applicability of the KAZE algorithm. Additionally, the strategies described here could also prove useful for GPU implementations of other nonlinear scale-space-based image processing algorithms.

Keywords: Nonlinear scale space · Feature detection · Feature description · GPU acceleration · KAZE features
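To make the "drop-in replacement" claim concrete, the following is a minimal sketch (not the authors' CUDA-C code) of the CPU-KAZE call path such a replacement targets, written against OpenCV's cv::KAZE interface; the input file name is a placeholder.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <chrono>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    // Placeholder input; any grayscale image (e.g. 1920 x 1200) will do.
    cv::Mat img = cv::imread(argc > 1 ? argv[1] : "frame.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // CPU-KAZE baseline: nonlinear scale space, 64-float descriptors (not binary).
    cv::Ptr<cv::KAZE> kaze = cv::KAZE::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;

    auto t0 = std::chrono::steady_clock::now();
    kaze->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
    auto t1 = std::chrono::steady_clock::now();

    std::cout << keypoints.size() << " keypoints in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
              << " ms\n";
    return 0;
}

A GPU version that keeps the same inputs (a grayscale image) and outputs (keypoints plus float descriptors) can be swapped in behind this kind of interface without touching downstream matching code.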

1 Introduction

Feature point detection [26] and description [5] are key tools in several computer vision applications such as visual navigation [16], automatic target recognition [35], tracking, structure from motion, registration and calibration. By picking out only those salient points that can be repeatably localized across different images, we can vastly reduce subsequent data processing. Feature extraction [26], however, remains a major bottleneck in many implementations due to its high computational cost; this is especially true for the algorithms that are most robust to various image transformations. SIFT [5, 27, 28] is widely considered one of the most robust feature descriptors, as its features are distinctive and invariant to several common image transformations. Although vector-based features like SIFT and its derivatives such as SURF perform well in terms of matching accuracy, they are also computationally intensive and require inefficient techniques (such as brute-force matching) to compare keypoints. Binary features [34], on the other hand, are much faster to compute, compact to store and highly efficient for comparing keypoints (they use the Hamming distance for matching).
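As an illustration of the matching cost difference discussed above, the sketch below (assumed OpenCV-style usage, not code from the paper) selects the distance metric by descriptor type: vector descriptors such as KAZE, SIFT and SURF require a brute-force Euclidean search, while binary descriptors are compared with the Hamming distance, which reduces each comparison to XOR plus popcount.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> matchDescriptors(const cv::Mat& queryDesc,
                                         const cv::Mat& trainDesc) {
    // Binary descriptors are stored as 8-bit rows; vector descriptors as 32-bit float rows.
    const bool binary = (queryDesc.type() == CV_8U);

    // NORM_HAMMING for binary descriptors, NORM_L2 (Euclidean) for vector descriptors.
    cv::BFMatcher matcher(binary ? cv::NORM_HAMMING : cv::NORM_L2);

    std::vector<cv::DMatch> matches;
    matcher.match(queryDesc, trainDesc, matches);
    return matches;
}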