Deep Markov Random Field for Image Modeling
Abstract. Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by a lack of expressive power. This issue arises primarily because conventional MRF formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and from it derive an approximate feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks with the cyclic dependency structure of MRFs in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows the model to be learned efficiently from data. Experimental results on a variety of low-level vision tasks show notable improvement over the state of the art.

Keywords: Generative image model · MRF · RNN

1 Introduction
Generative image models play a crucial role in a variety of image processing and computer vision tasks, such as denoising [1], super-resolution [2], inpainting [3], and image-based rendering [4]. As repeatedly shown by previous work [5], the success of image modeling hinges, to a large extent, on whether the model can capture the spatial relations among pixels.

Existing image models can be roughly categorized into global models and low-level models. Global models [6-8] usually rely on compressed representations to capture global structure. Such models are typically used to describe objects with regular structure, e.g. faces. For generic images, low-level models are more popular. Thanks to their focus on local patterns instead of global appearance, low-level models tend to generalize much better, especially when there are vast variations in the image content.

Over the past decades, Markov Random Fields (MRFs) have evolved into one of the most popular models for low-level vision. Specifically, the clique-based structure makes them particularly well suited for capturing local relations among pixels. Whereas MRFs as a generic mathematical framework are very flexible and provide immense expressive power, the performance of many MRF-based methods still leaves much to be desired under challenging conditions, largely because the simplistic potential functions used in practice limit their expressive power.

Fig. 1. We present a new class of Markov random field models whose potential functions are expressed by powerful deep neural networks. We show applications of the model on texture synthesis, image super-resolution and image synthesis.

Electronic supplementary material: The online version of this chapter (doi:10.1007/978-3-319-46484-8_18) contains supplementary material, which is available to authorized users.

© Springer International Publishing AG 2016. Z. Wu et al. In: B. Leibe et al. (Eds.): ECCV 2016, Part VIII, LNCS 9912, pp. 295–312, 2016. DOI: 10.1007/978-3-319-46484-8_18
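As a point of reference, the clique-based structure discussed above corresponds to the standard Gibbs factorization of an MRF. This is the generic textbook form, not the specific neural potentials proposed in this paper:

```latex
p(\mathbf{x}) \;=\; \frac{1}{Z} \prod_{c \in \mathcal{C}} \psi_c(\mathbf{x}_c),
\qquad
Z \;=\; \sum_{\mathbf{x}} \prod_{c \in \mathcal{C}} \psi_c(\mathbf{x}_c),
```

where C denotes the set of cliques, x_c the pixels in clique c, ψ_c a nonnegative potential function, and Z the partition function. Conventional low-level MRFs choose simple parametric forms for ψ_c (e.g. quadratic smoothness penalties); the limited expressiveness of such factors is exactly what motivates parameterizing the potentials with neural networks.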
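The contrast between simplistic factors and neural potentials can be made concrete with a minimal numerical sketch. The following is illustrative only: the function names, patch size, and randomly initialized weights are placeholder assumptions, not the learned parameters or architecture of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_energy(x, w=1.0):
    """Energy under a simplistic pairwise MRF: quadratic penalties on
    horizontal and vertical neighbor differences (a smoothness prior)."""
    dh = x[:, 1:] - x[:, :-1]          # horizontal neighbor differences
    dv = x[1:, :] - x[:-1, :]          # vertical neighbor differences
    return w * (np.sum(dh ** 2) + np.sum(dv ** 2))

def neural_potential_energy(x, W1, b1, w2, patch=3):
    """Energy whose local factors are tiny fully-connected networks applied
    to every patch-by-patch window (weights are random placeholders here,
    not the learned parameters of the paper's model)."""
    H, W = x.shape
    e = 0.0
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            p = x[i:i + patch, j:j + patch].ravel()   # flatten local clique
            h = np.tanh(W1 @ p + b1)                  # hidden nonlinearity
            e += float(w2 @ h)                        # scalar factor energy
    return e

# A constant image incurs zero smoothness energy; a random one does not.
img = rng.standard_normal((8, 8))
W1 = 0.1 * rng.standard_normal((4, 9))   # 4 hidden units, 3x3 = 9 inputs
b1 = np.zeros(4)
w2 = rng.standard_normal(4)
smooth_e = pairwise_energy(np.ones((8, 8)))      # exactly 0.0
noisy_e = pairwise_energy(img)                   # > 0
neural_e = neural_potential_energy(img, W1, b1, w2)
```

The quadratic factor can only penalize neighbor differences, whereas the neural factor can, once trained, respond to arbitrary patterns within each clique; this is the expressiveness gap the paper addresses.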