Towards robot vision using deep neural networks in evolutionary robotics

RESEARCH PAPER

Nathan Watt¹ · Mathys C. du Plessis¹

Received: 21 February 2020 / Revised: 20 July 2020 / Accepted: 5 September 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
In evolutionary robotics, robot controllers are often evolved in simulation, as using the physical robot for fitness evaluation can take a prohibitively long time. Simulators provide a quick way to evaluate controller fitness. A simulator is tasked with providing appropriate sensor information to the controller. If the robot has an on-board camera, an entire virtual visual environment is needed to simulate the camera's signal. In the past, these visual environments have been constructed by hand, requiring hand-crafted models, textures and lighting, which is a tedious and time-consuming process. This paper proposes a deep neural network-based architecture for simulating visual environments. The neural networks are trained exclusively on images captured by the robot, creating a three-dimensional visual environment without hand-crafted models, textures or lighting. The approach does not rely on any external domain-specific datasets, as all training data is captured in the physical environment. Robot controllers were evolved in simulation to discern between objects of different colours and shapes, and they successfully completed the same task in the real world.

Keywords Computer vision · Evolutionary robotics · Object detection · Neural networks

Mathematics Subject Classification MSC 68T45 · MSC 68T40

1 Introduction

A robot controller governs how a robot interacts with its environment. The complexity of a controller increases with the complexity of the robot's task. Manually creating controllers for complex or unusual robots is a complicated and time-consuming process [28]. Evolutionary robotics (ER) makes use of evolutionary algorithms (EAs) to automatically evolve robot controllers [1, 27]. Typically, these controllers are neural networks (NNs), and their parameters are optimised with an EA. A population of random controllers is initialised and evolved over many iterations through selection guided by a pre-defined fitness function [3].
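To make this loop concrete, the sketch below shows a minimal generational EA that optimises the weight vector of a fixed-topology NN controller. All names and values (POPULATION_SIZE, MUTATION_STD, the truncation-selection scheme, and the dummy fitness function) are illustrative assumptions rather than details from the paper; in practice, evaluate_fitness would decode the genome into controller weights, run the robot in a simulator and score task performance.

```python
# Minimal sketch of a generational EA evolving NN controller weights.
# Names and hyperparameters are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

POPULATION_SIZE = 50      # number of candidate controllers per generation
GENOME_LENGTH = 120       # number of NN weights encoded in each genome
GENERATIONS = 100
MUTATION_STD = 0.1        # standard deviation of Gaussian mutation

def evaluate_fitness(genome: np.ndarray) -> float:
    """Placeholder fitness evaluation.

    In ER this would run the simulated robot with the controller weights
    encoded by `genome` and return a task-performance score.
    """
    return -float(np.sum(genome ** 2))  # dummy objective for illustration

# Initialise a population of random controllers.
population = rng.normal(0.0, 1.0, size=(POPULATION_SIZE, GENOME_LENGTH))

for generation in range(GENERATIONS):
    fitness = np.array([evaluate_fitness(g) for g in population])

    # Selection: keep the fitter half of the population (truncation selection).
    survivors = population[np.argsort(fitness)[::-1][: POPULATION_SIZE // 2]]

    # Variation: clone the survivors and apply Gaussian mutation to the copies.
    offspring = survivors + rng.normal(0.0, MUTATION_STD, size=survivors.shape)

    population = np.concatenate([survivors, offspring])

best = population[np.argmax([evaluate_fitness(g) for g in population])]
```

The dominant cost in such a loop is the repeated call to evaluate_fitness, which is why ER relies on fast simulators rather than evaluating every candidate controller on the physical robot.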

* Nathan Watt
  [email protected]

  Mathys C. du Plessis
  [email protected]

1 Nelson Mandela University, University Way, Summerstrand, Port Elizabeth 6019, South Africa

On-board cameras have the potential to provide robots with a vast amount of information about their environment. They are essential for completing many vision-oriented tasks, such as differentiating between objects. Unfortunately, training a camera-equipped robot with ER is a challenging problem, and existing implementations have considerable limitations. The use of EAs comes with both a benefit and a cost. The benefit is that a solution is found through an evolutionary process and is not limited by a programmer's proposed solutions. The cost is that the EA needs to evaluate the performance of many controllers [1], a