Robust Real Time Color Tracking



Abstract. This paper describes the vision system that was developed for the RoboCup F180 team FU-Fighters. The system analyzes the video stream captured from a camera mounted above the field. It localizes the robots and the ball by predicting their positions in the next video frame and processing only small windows around the predicted positions. Several mechanisms were implemented to make this tracking robust. First, the size of the search windows is adjusted dynamically. Next, the quality of the detected objects is evaluated, and further analysis is carried out until it is satisfactory. The system not only tracks the position of the objects, but also adapts their colors and sizes. If tracking fails, e.g. due to occlusions, we start a global search module that localizes the lost objects again. The pixel coordinates of the objects found are mapped to a Cartesian coordinate system using a non-linear transformation that takes into account the distortions of the camera. To make tracking more robust against inhomogeneous lighting, we modeled the appearance of colors as a function of location using color grids. Finally, we added a module for automatic identification of our robots. The system analyzes 30 frames per second on a standard PC, causing only light computational load in almost all situations.
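The predict-and-search scheme summarized above can be sketched as follows. This is a minimal illustration under a linear motion model, not the authors' implementation; the function names and the window-size constants are assumptions made for the example:

```python
# Sketch of predict-and-search tracking with a dynamically sized window.
# Positions are (x, y) pixel coordinates; frame_shape is (height, width).

def predict_position(prev, curr):
    """Linear extrapolation one frame ahead: next = curr + (curr - prev)."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def search_window(center, size, frame_shape):
    """Axis-aligned window of roughly `size` pixels around `center`,
    clamped to the frame borders; returns (x0, y0, x1, y1)."""
    h, w = frame_shape
    half = size // 2
    x0 = max(0, center[0] - half)
    y0 = max(0, center[1] - half)
    x1 = min(w, center[0] + half)
    y1 = min(h, center[1] + half)
    return (x0, y0, x1, y1)

def adapt_window(size, found, min_size=16, max_size=128):
    """Shrink the window after a successful detection, grow it after a miss."""
    return max(min_size, size // 2) if found else min(max_size, size * 2)
```

After repeated misses the window grows to its maximum, at which point a system like the one described here would hand over to the global search module to relocate the lost object.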

1 Introduction

In the RoboCup Small-Size (F180) league, five robots on each team play soccer on a green field marked with white lines. The ball is orange, and the robots, as well as the goals, are marked either yellow or blue. In addition to the yellow or blue team marker (a ping-pong ball centered on top of the robot), further markers are allowed, as long as they have different colors (refer to [6] for more details). The robots are controlled by an external computer connected to a camera mounted above the field, such that the entire field is visible, as shown in Figure 1. The task of the vision system is to compute the positions and orientations of the robots, as well as the position of the ball. The behavior control software uses this information to operate the robots, relying on visual feedback. Since the robots and the ball move quickly and vision is usually the only input for behavior control, a fast and reliable computer vision system is essential for successful play.

P. Stone, T. Balch, and G. Kraetzschmar (Eds.): RoboCup 2000, LNAI 2019, pp. 239-248, 2001. © Springer-Verlag Berlin Heidelberg 2001

Mark Simon et al.

Fig. 1. A typical camera image, showing the field with shadows at the walls and reflections in the center. The linear ball prediction and a variable search window are shown too. The robots are marked with three colored dots.

Further information about the overall system and the hierarchical reactive control of the F180 team FU-Fighters can be found in [1] and [2]. Appropriate coloring of the objects of interest partially simplifies the vision problem, but does not make it simple. There are several problems. First, the objects of interest are not always visible. The ball can be occluded, due to the central camera