Cloud/haze detection in airborne videos using a convolutional neural network
Hamidreza Fazlali1 · Shahram Shirani1 · Mike McDonald1 · Thia Kirubarajan1
1 ECE Department, McMaster University, 1280 Main Street West, Hamilton, ON, Canada
Received: 6 August 2019 / Revised: 6 July 2020 / Accepted: 13 July 2020 / © Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
In airborne video surveillance, moving object detection and target tracking are the key steps. However, under bad weather conditions, the presence of clouds and haze, or even smoke rising from buildings, can make the processing of these videos very challenging. Current cloud detection or classification methods consider only a single image. Moreover, the images they use are often captured by satellites or planes at high altitudes with very long ranges to clouds, which helps distinguish cloudy regions from non-cloudy ones. In this paper, a new approach for cloud and haze detection is proposed that exploits both spatial and temporal information in airborne videos. In this method, several consecutive frames are divided into patches. Then, consecutive patches are collected as patch sets and fed into a deep convolutional neural network. The network is trained to learn the appearance of clouds as well as their motion characteristics. Therefore, instead of relying on single-frame patches, the decision on a patch in the current frame is made based on patches from previous and subsequent consecutive frames. This approach avoids discarding the temporal information about clouds in videos, which may contain important cues for discriminating between cloudy and non-cloudy regions. Experimental results show that using temporal information besides the spatial characteristics of haze and clouds can greatly increase detection accuracy.

Keywords Airborne video surveillance · Cloud detection · Classification · Convolutional neural network · Spatial information · Temporal information
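The patch-set idea summarized in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' architecture; it only shows one plausible way to stack the same spatial patch from several consecutive, aligned frames along the channel axis and classify it as cloudy or non-cloudy. The frame count (T = 5), patch size (32 x 32), layer widths, and the names PatchSetCNN and make_patch_set are illustrative assumptions introduced here.

```python
# Hedged sketch (PyTorch): classify a patch using temporal context from T frames.
# All sizes and layer choices are assumptions, not the paper's exact network.
import torch
import torch.nn as nn

T, PATCH = 5, 32  # assumed number of consecutive frames and patch side length

class PatchSetCNN(nn.Module):
    def __init__(self, num_frames=T, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 16 -> 8
        )
        self.classifier = nn.Linear(32 * (PATCH // 4) ** 2, num_classes)

    def forward(self, x):                                      # x: (batch, T, PATCH, PATCH)
        h = self.features(x)
        return self.classifier(h.flatten(1))

def make_patch_set(frames, row, col, size=PATCH):
    """Extract the same spatial patch from T consecutive aligned frames.

    frames: (T, H, W) tensor of grayscale frames.
    Returns a (T, size, size) patch set for one grid location.
    """
    return frames[:, row:row + size, col:col + size]

# Usage: decide on one patch location in the current frame using its temporal neighbours.
frames = torch.rand(T, 256, 256)                    # stand-in for 5 aligned frames
patch_set = make_patch_set(frames, row=64, col=96)  # (5, 32, 32)
logits = PatchSetCNN()(patch_set.unsqueeze(0))      # (1, 2): cloudy vs. non-cloudy scores
```

Stacking the temporal patches as input channels is only one way to let a 2-D CNN see motion cues; the paper's network may organize the patch set differently.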
1 Introduction
Videos captured by cameras on airplanes or Unmanned Aerial Vehicles (UAVs) are often used in the aerial surveillance of ground vehicles and maritime vessels. In recent years, several methods for the detection and tracking of moving vehicles in airborne videos have been developed [14, 17, 19, 20]. Aerial videos have applications in automatic traffic monitoring, urban planning and the detection of abnormal behavior. These aerial surveillance videos have advantages over ground-based surveillance videos due to their large field of view and
reduced obscuration [5]. Videos from aircraft also have a shorter range to the ground than those captured from satellites. One of the main challenges in the automatic analysis of these videos is the presence of clouds or haze. As an airborne platform flies through or over clouds and haze, the brightness changes and the quality of the scene is reduced. Consequently, the detection and tracking of moving objects become difficult due to occlusion and low visibility.