
SPECIAL ISSUE PAPER

LMSN: a lightweight multi-scale network for single image super-resolution

Yiye Zou1 · Xiaomin Yang1 · Marcelo Keese Albertini2 · Farhan Hussain3

Received: 31 May 2020 / Accepted: 6 November 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
With the development of deep learning (DL), convolutional neural networks (CNNs) have shown great reconstruction performance in single image super-resolution (SISR). However, some methods blindly deepen their networks to pursue performance, neglecting to make full use of the multi-scale information available from different receptive fields and ignoring efficiency in practice. In this paper, a lightweight SISR network built from multi-scale information fusion blocks (MIFB) is proposed to fully extract information over multiple ranges of receptive fields. The features are refined in a coarse-to-fine manner within each block. Group convolutional layers are employed in each block to reduce the number of parameters and operations. Results of extensive experiments on the benchmarks show that our method achieves better performance than the state of the art with comparable parameters and multiply–accumulate (MAC) operations.

Keywords Super-resolution · Multi-scale · Convolutional neural network · Lightweight
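As a rough illustration of why group convolutions reduce parameter count (a sketch with illustrative channel sizes, not the paper's actual configuration):

```python
def conv2d_params(in_ch, out_ch, k, groups=1):
    """Weights + biases of a 2-D convolution: each of the `groups`
    groups maps in_ch/groups input channels to out_ch/groups output
    channels with k x k kernels."""
    return out_ch * (in_ch // groups) * k * k + out_ch

standard = conv2d_params(64, 64, 3)            # 36,928 parameters
grouped  = conv2d_params(64, 64, 3, groups=4)  #  9,280 parameters (~4x fewer)
```

With g groups the weight count shrinks by roughly a factor of g, which is why grouped layers are a common building block in lightweight SR networks.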

1 Introduction

Convolutional neural networks (CNNs) have been used in various vision tasks, such as image segmentation [12], expression recognition [39] and image enhancement [41]. Since Dong et al. [6] proposed the first CNN-based SR method, more and more SR methods based on deep learning [11, 20, 23–26, 30, 40, 42, 44, 45] have been proposed in recent years. However, building deeper or wider networks has become a trend in network design, which may limit practical application on devices lacking

* Xiaomin Yang [email protected]

Yiye Zou [email protected]
Marcelo Keese Albertini [email protected]
Farhan Hussain [email protected]

1 Sichuan University, Chengdu, China
2 Faculdade de Computação, Universidade Federal de Uberlândia, Uberlândia, Brazil
3 National University of Science & Technology (NUST), Islamabad, Pakistan

computing resources. Thus, efforts have also been made in lightweight network design. Han et al. [10] employ pruning, quantization and Huffman coding for network compression. Depth-wise separable convolutions [15] are designed to construct fast and compact networks for high-level vision tasks. MobileNetV2 [32] introduces inverted residual learning into [15] to achieve higher efficiency. Lightweight networks have also been explored in SISR. DRCN [21] and DRRN [37] both use a parameter-sharing strategy in a recursive manner to reduce parameters. These methods maintain performance by increasing the number of recursions, but excessive use of recursion may lead to redundant computing operations. To avoid this issue, Hui et al. proposed IDN [18], a fast and compact network; within its information distillation block, features can be stored or refined adaptively. Ahn et a