
Environment Perception

Group Leader: Prof. Dr.-Ing. Christoph Stiller and Dr. rer. nat. Martin Lauer

Future cars will be able to drive autonomously without the help of a driver. This will reduce accidents and increase driving comfort. To this end, vehicles must be able to perceive and interpret their environment and to draw conclusions about the behavior of other traffic participants.

Within this research focus we develop techniques for environment perception, mainly based on monoscopic and stereoscopic cameras. The approaches range from low-level signal processing such as efficient stereo matching and optical flow computation, over mid-level tasks such as scene segmentation and reconstruction of scene geometry, up to high-level scene interpretation tasks such as behavior prediction for other traffic participants.

 

Visual Odometry With Non-Overlapping Multi-Camera-Systems

Visual odometry is a crucial component of autonomous driving systems. It supplies the driven trajectory in a dead-reckoning sense, which can then be used for global localization or behavior generation. To robustify the estimation, sensor setups with multiple cameras are of particular interest, yielding consistent trajectories over several kilometers with very low drift.
For released code, visit us on GitHub: https://github.com/KIT-MRT
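The dead-reckoning principle itself fits in a few lines. The following Python sketch (illustrative only, not the released implementation) chains frame-to-frame motion estimates into a global trajectory:

    import numpy as np

    def chain_poses(relative_motions):
        """Chain frame-to-frame motion estimates (4x4 homogeneous
        transforms) into a trajectory of global poses."""
        pose = np.eye(4)            # start at the origin
        trajectory = [pose.copy()]
        for T_rel in relative_motions:
            pose = pose @ T_rel     # dead reckoning: accumulate motion
            trajectory.append(pose.copy())
        return trajectory           # drift grows with chain length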

Contact: Dipl.-Ing. Johannes Gräter

 

Life-Long Vision based Mapping and Localization

Current intelligent vehicles require robust and accurate self-localization in a multitude of scenarios. Common approaches couple inertial measurement units (IMU) with global navigation satellite systems (GNSS). However, such solutions are not reliable in urban environments due to multipath, shadowing and atmospheric perturbations.

To overcome these drawbacks, we investigate life-long iterative mapping and high-precision localization in six degrees of freedom using multiple cameras mounted on the vehicle. The approach yields centimeter accuracy even under challenging conditions.

Contact: M.Sc. Marc Sons

 

Trajectory Prediction and Mapping

In order to plan the behavior of our own vehicle, it is necessary to predict where other traffic participants will move in the near future. Our approach predicts the trajectories of cars, pedestrians, and bikes by creating maps based on the movement of other traffic participants in the past.
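As an illustrative sketch (cell size and data layout are assumptions, not our actual implementation), such a map can be as simple as transition counts between grid cells, queried for the most frequent successor:

    from collections import Counter, defaultdict

    class TransitionMap:
        """Counts which grid cell traffic participants moved to next,
        learned from past trajectories, and uses the counts as a prior."""

        def __init__(self, cell_size=1.0):
            self.cell_size = cell_size
            self.counts = defaultdict(Counter)   # cell -> successor counts

        def _cell(self, x, y):
            return (int(x // self.cell_size), int(y // self.cell_size))

        def add_trajectory(self, points):
            cells = [self._cell(x, y) for x, y in points]
            for a, b in zip(cells, cells[1:]):
                if a != b:
                    self.counts[a][b] += 1       # observed transition a -> b

        def predict_next_cell(self, x, y):
            successors = self.counts[self._cell(x, y)]
            return successors.most_common(1)[0][0] if successors else None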

Contact: M.Sc. Jannik Quehl

 

Continuous Trajectory Bundle Adjustment with Spinning Range Sensors

We investigate the calibration of multiple spinning range sensors as well as the Simultaneous Localization and Mapping (SLAM) problem, aiming to optimize calibration, map, and map-relative pose jointly. For spinning range sensors, measurements are not all acquired at the same time. In addition, high-rate IMU measurements make the optimization problem ill-conditioned when modeled in discrete state space. By modeling the trajectory as a continuous function of time, we can handle both properties and evaluate the pose at each measurement's acquisition time.
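A minimal Python sketch of the continuous-trajectory idea, where linear and spherical interpolation between two keyframes stands in for the richer parameterization used in the actual work:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    t_keys = np.array([0.0, 0.1])              # keyframe timestamps [s]
    p_keys = np.array([[0.0, 0.0, 0.0],        # keyframe positions [m]
                       [1.0, 0.2, 0.0]])
    r_keys = Rotation.from_euler("z", [0.0, 5.0], degrees=True)
    slerp = Slerp(t_keys, r_keys)

    def pose_at(t):
        """Evaluate the continuous trajectory at measurement time t."""
        w = (t - t_keys[0]) / (t_keys[1] - t_keys[0])
        position = (1.0 - w) * p_keys[0] + w * p_keys[1]
        return position, slerp(t)

    # Every range measurement is transformed with the pose at its own
    # acquisition time instead of one shared scan pose.
    position, rotation = pose_at(0.04)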

Contact: M.Sc. Sascha Wirges

 

Realtime Environment Perception with Range Sensors

We aim to find algorithms that process observations from range sensors in real time. Here, we make the assumption that the motion of traffic participants can be estimated with respect to a common ground surface. We investigate methods to transfer range sensor observations from 3D space to the ground surface. The ground surface itself can be expressed as a dense 2D grid map, with each cell referencing its 3D points. With this representation, fast algorithms can be applied within the dense grid and their results easily transferred back to the 3D domain.
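A sketch of this 3D-to-grid transfer (resolution and extent are illustrative assumptions):

    import numpy as np

    def points_to_grid(points, resolution=0.2, extent=50.0):
        """Bin 3D points (N, 3) in a ground-aligned frame into a dense
        2D grid; each cell keeps the indices of its 3D points so results
        can be transferred back to the 3D domain."""
        n = int(2 * extent / resolution)
        ix = ((points[:, 0] + extent) / resolution).astype(int)
        iy = ((points[:, 1] + extent) / resolution).astype(int)
        inside = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)

        grid = [[[] for _ in range(n)] for _ in range(n)]
        for i in np.flatnonzero(inside):
            grid[iy[i]][ix[i]].append(i)   # back-reference into the cloud
        return grid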

Contact: M.Sc. Sascha Wirges

 

Automatic Generation of High Precision Maps for Automated Driving

For the operation of safe and reliable automated vehicles, high-resolution maps are indispensable; they contain detailed information such as lanes and their exact positions, right-of-way rules, and speed limits. To reduce the high effort associated with the creation and validation of such maps, we develop methods to automate this process based on sensor data from measurement vehicles.

Contact: M.Sc. Fabian Poggenhans

 

Stereo Based Environment Perception

Cameras are an essential source of information about the environment of autonomous cars. With two cameras in a stereo setup, it is possible to compute depth information of the perceived scene in the form of a point cloud, which can then be used for object detection and tracking or for a three-dimensional reconstruction of the environment.
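For a rectified stereo pair, the depth of a pixel follows directly from its disparity, as the small sketch below illustrates (the parameter values are assumptions for the example):

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Depth Z = f * B / d for a rectified stereo pair."""
        if disparity_px <= 0:
            return float("inf")            # no valid correspondence
        return focal_length_px * baseline_m / disparity_px

    # Example: f = 800 px, B = 0.3 m, d = 12 px  ->  Z = 20 m
    z = depth_from_disparity(12.0, 800.0, 0.3)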

Contact: M.Sc. Hendrik Königshof

 

3D environment perception using high resolution LiDAR data

Laser scanners are an important component of any sensor platform for highly automated driving due to their ability to measure distances with millimeter accuracy by timing the return of an emitted laser beam. Each measurement is a single point in space. By fusing all these measurements, it is possible to create a model of the world which can then be used for obstacle detection and localization.
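The underlying time-of-flight principle in a few lines (illustrative only):

    SPEED_OF_LIGHT = 299_792_458.0         # [m/s]

    def range_from_time_of_flight(round_trip_time_s):
        """The beam travels to the target and back, so the range is
        half the round-trip time times the speed of light."""
        return 0.5 * SPEED_OF_LIGHT * round_trip_time_s

    # Example: a 0.5 microsecond round trip corresponds to about 75 m.
    r = range_from_time_of_flight(0.5e-6)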

Contact: M.Sc. Tilman Kühner

 

Intrinsic camera calibration

Camera calibration is an essential task in computer vision. It solves the problem of how a 3D point in the world corresponds to a 2D pixel coordinate (intrinsic camera calibration) and how the cameras are located with respect to each other and other sensors (extrinsic camera calibration).
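As an illustration of what the intrinsic parameters encode, the following sketch projects a 3D point in the camera frame to pixel coordinates with an assumed intrinsic matrix (lens distortion, which calibration also estimates, is omitted here):

    import numpy as np

    K = np.array([[800.0,   0.0, 640.0],   # fx,  0, cx (assumed values)
                  [  0.0, 800.0, 360.0],   #  0, fy, cy
                  [  0.0,   0.0,   1.0]])

    def project(point_camera_frame):
        """Pinhole projection of a 3D point to pixel coordinates."""
        uvw = K @ point_camera_frame
        return uvw[:2] / uvw[2]            # perspective division

    u, v = project(np.array([1.0, 0.5, 10.0]))   # -> (720.0, 400.0)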

Contact: Dipl.-Ing. Johannes Beck


 

Lane marking based localization on highways

Lane-level accurate localization is essential for autonomous driving on highways. Using low-cost mono cameras, we detect lane markings in the current camera images and match them against the map to obtain the position of the vehicle.

Contact: M.Sc. Johannes Janosovits

 

Robust visual tracking in traffic scenarios

Object tracking is an essential part of behavior analysis and trajectory prediction in autonomous driving. Vision-based sensors provide rich information about the observed environment.

Our aim is to develop novel approaches that track objects based on powerful image features and can deal with challenging scenarios such as occlusions and deteriorated visual conditions.

Contact: M.Sc. Wei Tian

 

Meaningful features for localization and more

A common approach for localization is to use abstract features detected in, e.g., camera images or LiDAR scans. These features enable precise localization but have no further use. We develop meaningful features for localization that can additionally be used for other tasks such as planning or behavior generation.

For example, the geometry of a facade can be approximated by multiple planes. These planes are strong cues for localization but are also useful, e.g., for free space estimation in planning tasks.
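A sketch of how such a plane feature could be extracted, here as a plain least-squares fit (in practice the fit would be wrapped in a robust scheme such as RANSAC):

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through (N, 3) points; returns the unit
        normal and the centroid. The normal is the direction of least
        variance, i.e. the right singular vector belonging to the
        smallest singular value."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid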

Contact: M.Sc. Julius Kümmerle

 

Pixel-wise image labeling with Convolutional Neural Networks

Camera images contain valuable information about the environment for autonomous cars. Some of this information cannot be acquired from other sensors. We use deep learning and especially Convolutional Neural Networks to extract information like class labels and depth for each pixel.
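A minimal sketch of the fully convolutional idea behind per-pixel labeling (channel counts and depth are illustrative, not our actual architecture):

    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Convolutions only, so the per-pixel output keeps the input
        resolution."""

        def __init__(self, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_classes, kernel_size=1),  # pixel logits
            )

        def forward(self, x):
            return self.net(x)                  # (B, classes, H, W)

    logits = TinyFCN()(torch.randn(1, 3, 128, 256))
    labels = logits.argmax(dim=1)               # one class label per pixel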

Contact: M.Sc. Niels Ole Salscheider

 

Simultaneous object tracking and shape estimation

Traditional object tracking methods approximate vehicle geometry using simple bounding boxes. We develop methods that simultaneously estimate object motion and reconstruct detailed object shape using laser scanners. The resulting shape information has benefits for the tracking process itself, but can also be vital for subsequent processing steps such as trajectory planning in evasive steering.
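A conceptual sketch of the joint estimate (the scan alignment itself, e.g. via ICP, is stubbed out; all names are illustrative):

    import numpy as np

    class TrackedObject:
        """Track state carrying both motion and an accumulated shape."""

        def __init__(self):
            self.pose = np.eye(3)           # 2D pose, 3x3 homogeneous
            self.shape = np.empty((0, 2))   # contour points, object frame

        def update(self, scan_points, estimated_motion):
            # Motion update, e.g. from scan alignment (ICP), stubbed here.
            self.pose = self.pose @ estimated_motion
            # Shape update: map measurements into the object frame and
            # grow the shape model with them.
            to_object = np.linalg.inv(self.pose)
            pts_h = np.c_[scan_points, np.ones(len(scan_points))]
            self.shape = np.vstack([self.shape,
                                    (pts_h @ to_object.T)[:, :2]])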

Contact: M.Sc. Stefan Krämer

 

Automatic Verification of High Precision Maps for Highly Automated Driving

The recent progress in the area of autonomous cars has shown that high-precision digital maps are crucial to steer a car safely and comfortably through a complex dynamic environment. To plan a trajectory that guides a self-driving car as smoothly as a foresighted human driver would, details on a sub-lane level are needed. However, the more details are stored in a map, the faster it becomes outdated.
Thus, the goal of our research is to use data from sensors that are needed for autonomous driving anyway to locally verify the map stored on the car. This can either confirm the map or mark it, or parts of it, as invalid.
As part of our work, we are developing methods and a framework to compare map and sensor data. Furthermore, we are trying to identify features that are suitable for map verification. Another challenge is to model static and dynamic occlusions, which limit the range within which the map can be assessed.
Parts of the map that are eventually marked as changed can not only be invalidated for all other components of an autonomous car, but also be sent back to a remote server. When permanent changes are identified, they could either trigger a remapping process or be used as a map update directly.

Contact: M.Sc. Jan-Hendrik Pauls

 

Former Projects

Other Activities