Phase-Based Modification Transfer for Video

1 Department of Computer Science, ETH Zurich, Zurich, Switzerland
  [email protected]
2 Disney Research, Zurich, Switzerland
  [email protected]

Abstract. We present a novel phase-based method for propagating modifications of one video frame to an entire sequence. Instead of computing accurate pixel correspondences between frames, e.g., extracting sparse features or optical flow, we use the assumption that small motion can be represented as the phase shift of individual pixels. In order to successfully apply this idea to transferring image edits, we propose a correction algorithm, which adapts the phase shift as well as the amplitude of the modified images. As our algorithm avoids expensive global optimization and all computational steps are performed per-pixel, it allows for a simple and efficient implementation. We evaluate the flexibility of the approach by applying it to various types of image modifications, ranging from compositing and colorization to image filters.

Keywords: Phase-based method · Video processing · Edit propagation
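The central assumption stated in the abstract, that small motion can be represented as the phase shift of individual pixels, can be sketched with a toy example. The following is a hypothetical illustration, not the paper's actual implementation (which builds on a complex steerable pyramid): a 1D intensity pattern is bandpassed with a complex Gabor filter, and the per-pixel phase difference between two frames directly yields a sub-pixel displacement estimate, with no correspondence search.

```python
import numpy as np

def gabor_response(signal, omega=0.5, sigma=4.0):
    """Complex response of a 1D Gabor filter with center frequency omega (rad/px)."""
    t = np.arange(-16, 17)
    kernel = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(1j * omega * t)
    return np.convolve(signal, kernel, mode="same")

x = np.arange(256, dtype=float)
frame0 = np.cos(0.5 * x)           # simple oscillating intensity pattern
frame1 = np.cos(0.5 * (x - 0.3))   # the same pattern translated by 0.3 px

r0 = gabor_response(frame0)
r1 = gabor_response(frame1)

# A rightward shift by d decreases the local phase by omega * d, so the
# per-pixel phase difference, divided by the filter frequency, estimates
# the displacement. Border pixels are excluded (kernel half-width is 16).
dphi = np.angle(r1 * np.conj(r0))
shift = -np.median(dphi[32:-32]) / 0.5
print(round(shift, 2))  # close to 0.3
```

Real footage requires a filter bank over multiple scales and orientations, since a single bandpass filter can only resolve motion within its own frequency band; this is precisely what the steerable pyramid used by phase-based video methods provides.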

1 Introduction

Many applications in video processing, e.g., frame interpolation or edit propagation, require some form of explicit correspondence mapping between pixels in consecutive frames. Common approaches are based on matching sparse feature points, or on dense optical flow estimation. However, finding a pixel-accurate mapping is an inherently ill-posed problem, and existing dense approaches usually require computationally expensive regularization and optimization. Recently, a number of novel phase-based video processing techniques have been proposed that are able to solve certain types of problems without the need for explicit correspondences. Examples include motion magnification [20], view synthesis for autostereoscopic displays [5], and frame interpolation for video [13]. The interesting advantage of such techniques over explicit methods is that they are based on efficient, local per-pixel operations, which do not require knowledge about the actual image-space motion of pixels between frames, and hence avoid the need for solving the above-mentioned optimization problems. On the other hand, the price is that phase-based methods are limited to much smaller motions between frames than, e.g., methods for sparse feature point matching. However, given today's steady increase in video resolution and frame rate, there is also an increasing need for computationally simple and efficient methods. In this paper, we extend the range of possible applications for phase-based techniques. We introduce a method to propagate various types of image modifications over a sequence of video frames, without the need for explicit tracking or correspondences.

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46487-9_39) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016
S. Meyer et al., in: B. Leibe et al. (Eds.): ECCV 2016, Part III, LNCS 9907, pp. 633–648, 2016. DOI: 10.1007/978-3-319-46487-9_39
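The small-motion limitation mentioned above can be understood through the Fourier shift theorem, which the phase-based representation builds on: translating a signal corresponds to a linear phase ramp across its frequency coefficients, but phase is only unambiguous within (-π, π], so larger displacements cause phase wrapping. A minimal numpy sketch of the theorem itself (illustrative only, not the paper's algorithm):

```python
import numpy as np

# Fourier shift theorem: translating a signal by d pixels multiplies each
# DFT coefficient by exp(-2j*pi*k*d), where k is the frequency in cycles
# per pixel. Motion thus becomes a pure phase change per coefficient.
N = 64
x = np.arange(N)
signal = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)  # smooth bump centered at 20

d = 2.5  # sub-pixel shift in pixels
k = np.fft.fftfreq(N)
shifted = np.fft.ifft(np.fft.fft(signal) * np.exp(-2j * np.pi * k * d)).real

# For a smooth, well-sampled signal, the phase-shifted reconstruction
# matches a directly translated copy of the bump.
reference = np.exp(-0.5 * ((x - 22.5) / 3.0) ** 2)
err = np.max(np.abs(shifted - reference))
print(err < 1e-6)  # True
```

Note that this global construction shifts the entire signal uniformly; the appeal of per-pixel phase methods is that spatially localized bandpass filters let different image regions carry different phase shifts, so spatially varying motion can be handled without solving for a flow field.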