Deep neural de-raining model based on dynamic fusion of multiple vision tasks



METHODOLOGIES AND APPLICATION

Yulong Fan1 · Rong Chen1 · Yang Li1 · Tianlun Zhang1

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract Image quality is relevant to the performance of computer vision applications. The interference of rain streaks often greatly degrades the visual quality of images, and removing them from rainy images is a long-standing and critical vision challenge. In this paper, we introduce a deep connectionist screen blend model for single-image rain removal. The novel deep structure is mainly composed of shortcut connections and ends with sibling branches. This architecture is designed for the joint optimization of heterogeneous but related tasks. In particular, a feature-level task is designed to preserve object edges, which tend to be lost in de-rained images. Moreover, a comprehensive image quality assessment serves as an additional vision task that further improves de-rained results. Instead of relying on rules of thumb, we propose an actionable method that dynamically assigns appropriate weighting coefficients to all the vision tasks we use. On the other hand, various factors such as haze also weaken the visual appeal of rainy images. To remove these adverse factors, we develop an image enhancement framework in which the hyperparameters are optimized adaptively, efficiently improving the perceived quality of de-rained results. The effectiveness of the proposed de-raining system has been verified by extensive experiments, and most results of our method are impressive. The source code and more de-rained results will be available online.

Keywords Deep neural network · Single-image de-raining · Screen blend model · Multi-task learning · Dynamic scheme · Evolutionary algorithm
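The abstract builds on the screen blend model, a standard compositing formula in which a rainy image O is formed from a clean background B and a rain-streak layer R as O = 1 - (1 - B)(1 - R), with intensities in [0, 1]. The following is a minimal NumPy sketch of that compositing formula only, not of the paper's deep network; the array values are toy inputs chosen for illustration.

```python
import numpy as np

def screen_blend(background: np.ndarray, rain: np.ndarray) -> np.ndarray:
    """Compose a rainy image from a background and a rain layer.

    Both inputs are assumed to be normalized to [0, 1].
    Screen blend: O = 1 - (1 - B) * (1 - R).
    """
    return 1.0 - (1.0 - background) * (1.0 - rain)

# Toy 2x2 "images" for illustration.
B = np.array([[0.2, 0.8],
              [0.5, 0.0]])
R = np.array([[0.0, 0.5],
              [0.1, 1.0]])
O = screen_blend(B, R)
# Where the rain layer is 0 the output equals the background;
# where it is 1 the pixel saturates to full intensity.
```

De-raining under this model amounts to inverting the blend: given the observed O, estimate R (or B directly), which is what the learned network is trained to do.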

1 Introduction

Communicated by V. Loia. This work is supported by the National Natural Science Foundation of China under Grant 61672122, Grant 61602077, Grant 61772344 and Grant 61732011, the Public Welfare Funds for Scientific Research of Liaoning Province of China under Grant 20170005, the Natural Science Foundation of Liaoning Province of China under Grant 20170540097, and the Fundamental Research Funds for the Central Universities under Grant 3132016348.

Rong Chen (corresponding author) [email protected]
Yulong Fan [email protected]
Yang Li [email protected]
Tianlun Zhang [email protected]

1 College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China

In the past few years, an abundant literature has been devoted to image recovery under bad weather (Zhao et al. 2015; He et al. 2011; Wang and Yuan 2017; Yang et al. 2019; Fu et al. 2017; Zhang and Patel 2018; Li et al. 2018b; Fu et al. 2019; Yang et al. 2017). Among these studies, the problem of rain removal has drawn considerable attention (Fu et al. 2017; Zhang and Patel 2018; Li et al. 2018b; Fu et al. 2019; Yang et al. 2017; Li et al. 2018a). With rain streaks, the visibility of scene content tends to be drastically d