Modern driver assistance systems (DAS) such as collision avoidance or intersection assistance need reliable information on the current environment. Extracting such information from camera-based systems is a complex and challenging task for inner city traffic scenarios.
Figure: A typical inner city traffic scenario. Cars, pedestrians, cyclists and trams are moving in a complex road layout.
Urban traffic scenarios are more complex than highway traffic, and the task for DAS is correspondingly more challenging:
- Traffic scenes are crowded with many different types of traffic participants, such as cars, pedestrians, cyclists and trams.
- Objects may (partly) occlude each other, appear in arbitrary views and may look considerably different from one another.
- The traffic situation may change abruptly.
- Road geometry varies widely: urban streets and intersections come in many different layouts.
- Features that provide additional information on the current scenario, such as lane markings, are often (partly) occluded or missing entirely.
The goal of this project is to develop an approach for class-independent object detection in inner city traffic scenarios. To this end, a stereo camera setup is used to gather vision-based information on the current environment.
Sparse interest points detected in two consecutive stereo image pairs allow for a 3D reconstruction of the scene and an optical flow estimate for every point. Together, these yield a scene flow description that is used to detect rigid objects in the scene.
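The core idea can be sketched in a few lines: triangulate matched interest points in each rectified stereo pair, then take the 3D displacement of each tracked point between the two frames as its scene flow vector. This is a minimal illustration, not the project's implementation; the camera parameters and function names are assumptions.

```python
import numpy as np

def triangulate(pts_left, pts_right, f, baseline, cx, cy):
    """Reconstruct 3D points from matched points in a rectified stereo pair.

    pts_left, pts_right: (N, 2) pixel coordinates of matched interest points.
    f: focal length in pixels; baseline: stereo baseline in metres;
    (cx, cy): principal point. All parameter values are illustrative.
    """
    disparity = pts_left[:, 0] - pts_right[:, 0]  # horizontal disparity
    Z = f * baseline / disparity                  # depth from disparity
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

def scene_flow(prev_left, prev_right, cur_left, cur_right,
               f, baseline, cx, cy):
    """Scene flow: 3D displacement of each tracked point between frames."""
    P_prev = triangulate(prev_left, prev_right, f, baseline, cx, cy)
    P_cur = triangulate(cur_left, cur_right, f, baseline, cx, cy)
    return P_cur - P_prev  # (N, 3) 3D motion vectors
```

In practice the 2D correspondences would come from an interest point detector and matcher; here they are assumed to be given.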
The approach thus provides a three-dimensional description of the ego vehicle's environment, which downstream applications such as tracking or trajectory estimation can build on.
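One way to turn scene flow vectors into rigid-object hypotheses is to group points that are spatially close and share a similar 3D motion, since all points on a rigid body move consistently. The sketch below uses a simple union-find grouping with illustrative thresholds; the project's actual grouping criterion may differ.

```python
import numpy as np

def group_rigid_points(points, flow, pos_thresh=1.0, motion_thresh=0.2):
    """Group 3D points into rigid-object hypotheses.

    points: (N, 3) reconstructed 3D positions.
    flow: (N, 3) scene flow vectors for those points.
    Points closer than pos_thresh whose flow vectors differ by less than
    motion_thresh are merged into the same group (thresholds assumed).
    Returns an (N,) array of consecutive integer group labels.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(points[i] - points[j]) < pos_thresh
            similar = np.linalg.norm(flow[i] - flow[j]) < motion_thresh
            if close and similar:
                parent[find(i)] = find(j)

    roots = np.array([find(i) for i in range(n)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

Each resulting group is a candidate rigid object whose 3D extent and motion can be handed to a tracker or trajectory estimator.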
Figure: System overview. Two consecutive stereo image pairs are used to compute a scene flow description, from which a 3D description of moving objects is obtained.