Multi-scale fractal residual network for image super-resolution



Xinxin Feng 1,2 · Xianguo Li 1,2 · Jianxiong Li 1,2

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
Recent studies have shown that the use of deep convolutional neural networks (CNNs) can improve the performance of single image super-resolution reconstruction (SISR) methods. However, existing CNN-based SISR models ignore the multi-scale features and the shallow and deep features of the image, resulting in relatively low reconstruction performance. To address these issues, this paper proposes a new multi-scale fractal residual network (MSFRN) for image super-resolution. On the basis of residual learning, a multi-scale fractal residual block (MSFRB) is designed. This block uses convolution kernels of different sizes to extract multi-scale image features and uses multiple paths to extract and fuse image features of different depths. Then, the shallow features extracted in the shallow feature extraction stage and the local features output by all MSFRBs are used to perform global hierarchical feature fusion. Finally, through sub-pixel convolution, the fused global features are used to reconstruct high-resolution images from low-resolution images. Experimental results on five standard benchmark datasets show that MSFRN improves both subjective visual quality and objective image quality metrics, and is superior to other state-of-the-art SISR methods.

Keywords Image super-resolution · Residual learning · Multi-scale feature fusion · Fractal network
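The abstract describes the pipeline only at a high level. The PyTorch snippet below is a minimal, illustrative sketch of that pipeline, not the authors' MSFRN: the kernel sizes (3×3 and 5×5), the 64-channel width, the four-block depth, and the exact fusion layout are assumptions, since the abstract does not specify the fractal path topology or hyper-parameters.

```python
import torch
import torch.nn as nn


class MultiScaleResidualBlock(nn.Module):
    """Toy multi-scale residual block: parallel 3x3 and 5x5 branches,
    a 1x1 fusion convolution, and a residual (skip) connection."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return x + self.fuse(multi_scale)  # residual learning


class ToySRNet(nn.Module):
    """Shallow feature extraction -> stacked blocks -> global hierarchical
    fusion -> sub-pixel convolution upsampling, loosely following the abstract."""

    def __init__(self, scale: int = 2, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList(
            [MultiScaleResidualBlock(channels) for _ in range(num_blocks)]
        )
        # Global hierarchical fusion: concatenate shallow + all block outputs
        self.global_fuse = nn.Conv2d((num_blocks + 1) * channels, channels, kernel_size=1)
        # Sub-pixel convolution (PixelShuffle) performs the upsampling
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, lr):
        shallow = self.shallow(lr)
        features = [shallow]
        x = shallow
        for block in self.blocks:
            x = block(x)
            features.append(x)  # collect the local features of every block
        fused = self.global_fuse(torch.cat(features, dim=1))
        return self.upsample(fused + shallow)


if __name__ == "__main__":
    lr = torch.randn(1, 3, 48, 48)   # dummy low-resolution patch
    sr = ToySRNet(scale=2)(lr)
    print(sr.shape)                  # torch.Size([1, 3, 96, 96])
```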

* Corresponding author: Xianguo Li, [email protected]

1 School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
2 Tianjin Key Laboratory of Optoelectronic Detection Technology and System, Tianjin 300387, China

1 Introduction

Single image super-resolution reconstruction (SISR) is a research hotspot in computer vision. Because of its high theoretical research value and wide range of application scenarios, it has attracted the attention of many researchers. It aims to reconstruct high-resolution (HR) images that are close to real images from low-resolution (LR) images, and it is widely used in satellite remote sensing [1], video surveillance [2], and medical image processing [3]. Because a single low-resolution image can correspond to many different high-resolution images, image super-resolution is an ill-posed inverse problem. To solve this problem, image super-resolution reconstruction techniques can be roughly divided into three categories: interpolation-based [4, 5], reconstruction-based [6, 7], and learning-based methods [8–18]. Interpolation-based methods are simple and intuitive and produce results quickly, but the reconstructed images suffer from problems such as blurred edges. Reconstruction-based methods focus on restoring the high-frequency information of the image; they are simple and computationally cheap, but they discard some high-frequency image detail. Learning-based methods establish a mapping relationship between low-resolution images and their high-resolution counterparts.
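For reference, the interpolation-based baseline mentioned above takes only a few lines. The snippet below is a generic bicubic upscaling sketch (PyTorch is assumed here for consistency with the earlier example), not code taken from the paper.

```python
import torch
import torch.nn.functional as F

# Bicubic interpolation: the classical, fast baseline. It cannot recover
# high-frequency detail, which is why its reconstructed edges look blurred.
lr = torch.rand(1, 3, 64, 64)  # dummy low-resolution image
sr = F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)
print(sr.shape)                # torch.Size([1, 3, 256, 256])
```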
