Rain Rendering for Evaluating and Improving Robustness to Bad Weather



Maxime Tremblay¹ · Shirsendu Sukanta Halder² · Raoul de Charette² · Jean-François Lalonde¹

Received: 15 March 2020 / Accepted: 30 July 2020

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Rain fills the atmosphere with water particles, which breaks the common assumption that light travels unaltered from the scene to the camera. While it is well known that rain affects computer vision algorithms, quantifying its impact is difficult. In this context, we present a rain rendering pipeline that enables the systematic evaluation of common computer vision algorithms under controlled amounts of rain. We present three different ways to add synthetic rain to existing image datasets: completely physics-based, completely data-driven, and a combination of both. The physics-based rain augmentation combines a physical particle simulator and accurate rain photometric modeling. We validate our rendering methods with a user study, demonstrating that our rain is judged as much as 73% more realistic than the state of the art. Using our rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a thorough evaluation of object detection, semantic segmentation, and depth estimation algorithms, and show that their performance decreases in degraded weather: on the order of 15% for object detection, 60% for semantic segmentation, and a 6-fold increase in depth estimation error. Fine-tuning on our augmented synthetic data results in improvements of 21% on object detection, 37% on semantic segmentation, and 8% on depth estimation.

Keywords Adverse weather · Vision and rain · Physics-based rendering · Image-to-image translation · GAN

1 Introduction

A common assumption in computer vision is that light travels unaltered from the scene to the camera. In clear weather, this assumption is reasonable: the atmosphere behaves like a transparent medium and transmits light with very little attenuation or scattering. However, inclement weather conditions such as rain fill the atmosphere with particles, producing spatio-temporal artifacts such as attenuation or rain streaks. This noticeably changes the appearance of images (see Fig. 1) and poses additional challenges to computer vision algorithms, which must be robust to these conditions.

Communicated by Dengxin Dai, Robby T. Tan, Vishal Patel, Jiri Matas, Bernt Schiele and Luc Van Gool.

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s11263-020-01366-3) contains supplementary material, which is available to authorized users.
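To make the attenuation effect described above concrete, the sketch below applies a simple Koschmieder-style scattering model, in which scene radiance decays exponentially with depth and is replaced by ambient airlight. This is only a toy illustration of particle-induced attenuation, not the paper's rendering pipeline; the function name, coefficient values, and the single-scattering model itself are assumptions chosen for clarity.

```python
import numpy as np

def attenuate(image, depth, beta=0.05, airlight=0.8):
    """Koschmieder-style attenuation: I = I0 * exp(-beta * d) + A * (1 - exp(-beta * d)).

    image    -- scene radiance in [0, 1], any array shape
    depth    -- per-pixel distance to the camera (same shape), in metres
    beta     -- extinction coefficient of the medium (denser particles -> larger beta)
    airlight -- ambient light scattered toward the camera by the particles
    """
    t = np.exp(-beta * depth)              # per-pixel transmittance of the medium
    return image * t + airlight * (1.0 - t)

# Toy 2x2 grayscale image with a per-pixel depth map.
img = np.array([[0.2, 0.9],
                [0.5, 0.1]])
depth = np.array([[10.0, 50.0],
                  [100.0, 5.0]])
out = attenuate(img, depth)
# Distant pixels converge toward the airlight value; near pixels keep their radiance.
```

Note how the pixel at 100 m is almost entirely washed out toward the airlight, while the pixel at 5 m is barely changed — the same contrast loss that degrades detectors and segmenters in heavy rain.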

Corresponding author: Maxime Tremblay, [email protected]

1 Université Laval, Quebec City, Canada

2 Inria, Paris, France

While the influence of rain on image appearance is well known and understood (Garg and Nayar 2005), its impact on the performance of computer vision tasks is not. Indeed, how can one evaluate the impact of, say, a rainfall rate of 100 mm/h (a typical autumn shower) on the performance of an object detector?