A Deep Learning Approach to Hazard Detection for Autonomous Lunar Landing



Rahul Moghe¹ · Renato Zanetti¹

Accepted: 2 October 2020
© American Astronautical Society 2020

Abstract

A deep learning approach is presented to detect safe landing locations using LIDAR scans of the lunar surface. Semantic segmentation is used to classify hazardous and safe locations in a LIDAR scan during the landing phase. Digital Elevation Maps from the Lunar Reconnaissance Orbiter mission are used to generate the training, validation, and testing datasets. The ground truth is generated with geometric techniques by evaluating surface roughness, slope, and other hazard-avoidance specifications. To train a robust model, the training dataset is augmented with artificially generated data. A UNet-like neural network learns a lower-dimensional representation of the LIDAR scan that retains the essential information about the safety of landing locations. A softmax activation layer at the output of the network ensures that the network produces a probability that each location is a safe landing spot. The network is trained with a cost function that penalizes false safes in order to achieve a false-safe rate below 1%. The results presented show the effectiveness of the technique for hazard detection. Future work on selecting a single landing spot, based on proximity to the intended landing site and the size of the safe region around it, is motivated.

Keywords Hazard detection · Machine learning · Autonomous landing · Semantic segmentation
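The false-safe-prioritizing cost function described in the abstract can be sketched as a class-weighted cross-entropy. The NumPy formulation and the weight value below are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def weighted_bce(p_safe, label_safe, false_safe_weight=10.0, eps=1e-7):
    """Binary cross-entropy that penalizes false safes more heavily.

    p_safe     : predicted probability that a pixel is safe (softmax output)
    label_safe : 1 if the pixel is truly safe, 0 if hazardous
    A 'false safe' (high p_safe on a hazardous pixel) is scaled by
    false_safe_weight; a missed safe keeps unit weight.
    """
    p = np.clip(p_safe, eps, 1.0 - eps)          # avoid log(0)
    loss = -(label_safe * np.log(p)
             + false_safe_weight * (1.0 - label_safe) * np.log(1.0 - p))
    return loss.mean()
```

With this weighting, confidently predicting "safe" on a hazardous pixel costs roughly ten times more than the symmetric error, pushing the trained network toward the sub-1% false-safe target.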

Rahul Moghe
[email protected]

¹ Aerospace Engineering and Engineering Mechanics Department, The University of Texas at Austin, Austin, TX 78712, USA

The Journal of the Astronautical Sciences

Introduction

Autonomous landing is essential for the safety and reliability of future space exploration missions. Hazard detection during landing enables the automatic detection and avoidance of hazards on the surface. The Hazard Detection System (HDS) is a primary component of the cross-NASA-developed Autonomous Landing and Hazard Avoidance Technology (ALHAT) sensor suite [1, 5, 12, 19]. It provides guidance, navigation, and control capabilities for autonomous landing under varied lighting conditions. It generates a Digital Elevation Map (DEM) using a LIDAR sensor, which can then be processed to detect hazards in the landing area. To determine safe landing locations, the DEM is analyzed for candidate locations that satisfy mission specifications such as slope, terrain roughness, and proximity to hazards. Convolutional Neural Networks (CNNs) are ideally suited to recognizing desired patterns in spatially correlated input data such as images. The safety of a landing spot depends on the elevation of the surface in its vicinity. Moreover, complex mission specifications can be incorporated by aggregating them in the CNN training, thereby shifting the complexity to pre-flight operations and making the on-board inference computationally efficient for real-time application. In this paper, we present a robust learning approach
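The slope and roughness analysis of a DEM described above can be sketched as a least-squares plane fit over a local patch; the thresholds and patch handling here are illustrative assumptions, not mission values:

```python
import numpy as np

def hazard_label(dem_patch, cell_size=1.0, max_slope_deg=10.0, max_rough=0.2):
    """Label a square DEM patch as safe (1) or hazardous (0).

    Fits a least-squares plane z = a*x + b*y + c to the patch elevations.
    The plane gradient gives the local slope; the RMS of the residuals
    gives a roughness measure. Thresholds are illustrative only.
    """
    n = dem_patch.shape[0]
    y, x = np.mgrid[0:n, 0:n] * cell_size        # grid coordinates in meters
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(n * n)])
    z = dem_patch.ravel()
    coef, *_ = np.linalg.lstsq(A, z, rcond=None) # (a, b, c) of best-fit plane
    slope_deg = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
    roughness = np.sqrt(np.mean((z - A @ coef) ** 2))
    return int(slope_deg <= max_slope_deg and roughness <= max_rough)
```

Sliding such a patch test across the full DEM yields a per-location safety map of the kind used here as ground truth for the segmentation network.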