Fast Optical Flow Using Dense Inverse Search
Till Kroeger 1, Radu Timofte 1, Dengxin Dai 1, and Luc Van Gool 1,2
1 Computer Vision Laboratory, D-ITET, ETH Zurich, Zurich, Switzerland
{kroegert,timofter,dai,vangool}@vision.ee.ethz.ch
2 VISICS/iMinds, ESAT, KU Leuven, Leuven, Belgium
Abstract. Most recent works on optical flow extraction focus on accuracy and neglect time complexity. However, in real-life visual applications such as tracking, activity detection, and recognition, time complexity is critical. We propose a solution with very low time complexity and competitive accuracy for the computation of dense optical flow. It consists of three parts: (1) inverse search for patch correspondences; (2) dense displacement field creation through patch aggregation along multiple scales; (3) variational refinement. At the core of our Dense Inverse Search-based method (DIS) is the efficient search for correspondences, inspired by the inverse compositional image alignment proposed by Baker and Matthews (2001, 2004). DIS is competitive on standard optical flow benchmarks. DIS runs at 300 Hz up to 600 Hz on a single CPU core at 1024 × 436 resolution (42 Hz/46 Hz when preprocessing is included: disk access, image re-scaling, and gradient computation; see Sect. 3.1 for details), reaching the temporal resolution of the human visual system. It is one or more orders of magnitude faster than state-of-the-art methods in the same range of accuracy, making DIS ideal for real-time applications.
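To make the core idea concrete, the following is a minimal, illustrative sketch of inverse compositional alignment for a single patch under a pure translation warp, the technique the abstract credits to Baker and Matthews. This is not the authors' implementation: the function names (`bilinear`, `inverse_search`) and all parameters are ours, and DIS additionally aggregates many such patches across scales. The key saving is that the patch gradients and the Hessian are computed once on the template and reused in every iteration.

```python
import numpy as np

def bilinear(img, ys, xs):
    """Sample img at float coordinates (ys, xs) with bilinear interpolation."""
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    dy, dx = ys - y0, xs - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

def inverse_search(template, image, y0, x0, iters=20):
    """Estimate the translation (v, u) that maps the template patch,
    anchored at (y0, x0), into `image`.  Inverse compositional trick:
    the steepest-descent images J and the 2x2 Hessian come from the
    *template* and are therefore precomputed outside the loop."""
    h, w = template.shape
    gy, gx = np.gradient(template)             # template gradients, fixed
    J = np.stack([gy.ravel(), gx.ravel()], 1)  # steepest-descent images
    H_inv = np.linalg.inv(J.T @ J)             # Hessian, inverted once
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.zeros(2)                            # current displacement (v, u)
    for _ in range(iters):
        warped = bilinear(image, ys + y0 + p[0], xs + x0 + p[1])
        err = (warped - template).ravel()
        dp = H_inv @ (J.T @ err)               # Gauss-Newton step
        p -= dp                                # inverse compositional update
        if np.linalg.norm(dp) < 1e-6:
            break
    return p
```

On a smooth synthetic image, extracting a patch at a known sub-pixel offset and running `inverse_search` recovers that offset to well below a tenth of a pixel; each iteration costs only one bilinear warp and two small matrix products, which is what makes per-patch search cheap enough for the speeds quoted above.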
1 Introduction
Optical flow estimation is under constant pressure to increase both its quality and speed. Such progress allows for new applications. A higher speed enables its inclusion into larger systems with extensive subsequent processing (e.g. reliable features for motion segmentation, tracking, or action/activity recognition) and its deployment in computationally constrained scenarios (e.g. embedded systems, autonomous robots, large-scale data processing). A robust optical flow algorithm should cope with discontinuities (outliers, occlusions, motion discontinuities), appearance changes (illumination, chromaticity, blur, deformations), and large displacements. Decades after the pioneering research of Horn and Schunck [4] and Lucas and Kanade [5] we have solutions for the first two issues [6,7], and recent endeavors have led to significant progress in handling large displacements [8–21]. This came at the cost of high run-times, usually not acceptable in computationally constrained scenarios such as real-time applications. Recently, only very few works have aimed at balancing accuracy

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46493-0_29) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016. B. Leibe et al. (Eds.): ECCV 2016, Part IV, LNCS 9908, pp. 471–488, 2016. DOI: 10.1007/978-3-319-46493-0_29
[Fig. 1. Average end-point error (px) vs. run-time, showing three operating points of our method: (1) DIS @ 600 Hz with avg. EPE 1.89, (2) DIS @ 300 Hz with avg. EPE 1.52, and (3) DIS @ 10 Hz with avg. EPE 0.66; the plot marks a 93× speed-up over methods of comparable accuracy.]