Innovative Testing of ADAS/AD Functions with Spider



AUTHOR

Christian Schwarzl is Head of the Dependable Systems Group at Virtual Vehicle Research in Graz (Austria).

The Spider was developed as a mobile hardware-in-the-loop platform at Virtual Vehicle. The freely programmable, self-driving robot enables automated testing and extensive system evaluation for future automated vehicles. Sensors and detection systems, vehicle software, and control algorithms can be tested under real conditions as early as during development.

NEW TESTING PLATFORMS NEEDED

In order to gain an advantage in the Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) markets, OEMs, suppliers and sensor manufacturers are increasingly striving to offer automation functions at a high level [1]. However, the development of control and software functions that meet international and regional requirements is complex and cost-intensive. Fundamentally, ADAS/AD functions depend on precise knowledge of their environment. Various sensor technologies are used for this purpose, each with its own advantages and disadvantages. Verifying actual performance in the field requires systematic and reproducible tests under real conditions. The mobile Hardware-in-the-Loop (HiL) platform Spider enables such tests and closes a gap that previous testing platforms could not cover: reproducible tests under real conditions, even before a test vehicle is available.

PROOF OF CORRECT DRIVING FUNCTIONS

The 1.2 × 1 m Spider enables the testing of software, of individual components such as sensors or control units, and of entire driving functions, FIGURE 1. In this interplay, an optimal sensor combination with regard to reliability, robustness and cost is determined, since the sensor technologies used differ considerably. For example, difficult weather conditions such as heavy rain, snow or fog affect environmental detection systems differently: radar performs well under such conditions, while camera and lidar perform poorly [2]. The same applies to susceptibility to interference, for example from metallic objects in the surroundings, or to darkness and glare, under which a camera works poorly while radar and lidar operate well. External influences such as dirt must also be considered, as the mounting location of the sensors has a high impact. In addition, safety must be ensured at all times: a sensor failure or reduced performance must not endanger people. In such situations, there are two options: either the functionality is reduced, which harms customer satisfaction, or redundancy is built in and the number of sensors is increased, which in turn drives up costs.
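This trade-off between robustness and cost can be made concrete with a small sketch. The robustness scores and relative costs below are invented for illustration, loosely following the strengths and weaknesses described above; they are not values from Virtual Vehicle or the Spider project:

```python
# Illustrative sketch: scoring hypothetical sensor combinations by their
# worst-case robustness across environmental conditions, with total cost
# as a tie-breaker. All numbers are assumptions, not measured values.
from itertools import combinations

# Assumed robustness per condition (0 = fails, 1 = fully robust).
ROBUSTNESS = {
    "camera": {"bad_weather": 0.2, "low_light": 0.1, "metal_clutter": 0.8},
    "radar":  {"bad_weather": 0.9, "low_light": 0.9, "metal_clutter": 0.3},
    "lidar":  {"bad_weather": 0.3, "low_light": 0.8, "metal_clutter": 0.7},
}
COST = {"camera": 1, "radar": 3, "lidar": 10}  # hypothetical relative costs

def combo_score(sensors):
    """Worst-case condition score, assuming the best sensor covers each condition."""
    conditions = next(iter(ROBUSTNESS.values())).keys()
    return min(max(ROBUSTNESS[s][c] for s in sensors) for c in conditions)

def best_combo(min_sensors=2):
    """Cheapest redundant combination that maximizes the worst-case score."""
    candidates = [
        c for n in range(min_sensors, len(ROBUSTNESS) + 1)
        for c in combinations(ROBUSTNESS, n)
    ]
    return max(candidates, key=lambda c: (combo_score(c), -sum(COST[s] for s in c)))

print(best_combo())  # → ('camera', 'radar')
```

With these assumed numbers, a camera plus radar already covers every condition as well as the full sensor set, at a fraction of the cost; the redundancy requirement (`min_sensors=2`) reflects the safety constraint that no single sensor failure may go uncovered.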

Within an environment simulation, sensor measurement data are calculated from a 3-D environment using sensor models, based on each sensor's perspective and position. The goal is to bring environment simulations as close to reality as possible. This depends primarily on the quality of the environment mod
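The principle of deriving measurements from a sensor's perspective and position can be sketched with a deliberately simplified toy model. The following is an assumption for illustration only, not the Spider's actual simulation: obstacles are points, and a sensor "sees" whatever lies within its range and horizontal field of view:

```python
# Toy sensor model: given a sensor pose in a 3-D environment, compute which
# world points it would measure, based on range and field of view. Real
# sensor models additionally capture occlusion, noise, weather, etc.
import math

def simulate_sensor(points, pos, yaw_deg, max_range, fov_deg):
    """Return (range, bearing) measurements for points visible to the sensor."""
    measurements = []
    for x, y, z in points:
        dx, dy = x - pos[0], y - pos[1]
        r = math.hypot(dx, dy)
        # Bearing relative to the sensor's heading, wrapped to [-180, 180)
        bearing = (math.degrees(math.atan2(dy, dx)) - yaw_deg + 180) % 360 - 180
        if r <= max_range and abs(bearing) <= fov_deg / 2:
            measurements.append((round(r, 2), round(bearing, 1)))
    return measurements

world = [(10, 0, 0), (0, 10, 0), (50, 5, 0)]  # hypothetical 3-D obstacles
print(simulate_sensor(world, pos=(0, 0), yaw_deg=0.0, max_range=30.0, fov_deg=90.0))
# → [(10.0, 0.0)]  (the side and far obstacles fall outside FoV and range)
```

Only the obstacle straight ahead is reported: the lateral one lies outside the 90° field of view and the distant one beyond the 30 m range, mirroring how simulated measurements depend entirely on the modeled pose and sensor characteristics.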