Institut für Mess- und Regelungstechnik (MRT)

Environment Perception

Group Leaders: Prof. Dr.-Ing. Christoph Stiller and Dr. rer. nat. Martin Lauer

Future cars will be able to drive autonomously without the help of a driver. This will reduce accidents and increase driving comfort. To this end, vehicles must be able to perceive and interpret their environment and to draw conclusions about the behavior of other traffic participants.

Within this research focus we are developing techniques for environment perception, mainly based on monoscopic and stereoscopic cameras. The approaches range from low-level signal processing, such as efficient stereo matching and optical flow computation, through mid-level tasks, such as scene segmentation and reconstruction of scene geometry, up to high-level scene interpretation tasks, such as behavior prediction for other traffic participants.


Video-based environment perception


Video-based environment perception is important for self-driving cars. Camera images contain more detailed information than data from other sensors such as radar or lidar, and cameras can also sense objects at far distances. In this field, I research deep learning approaches for object detection, tracking, and pixel-wise semantic segmentation.

Contact: M.Sc. Niels Ole Salscheider


Realtime Intersection Estimation for Mapless Driving and Map Updates

Modern automated vehicles rely heavily on highly accurate maps. These maps are prone to becoming outdated due to construction work and changes in the lane system, which results in unexpected behavior if the vehicle is not able to navigate without a precise map. In addition, updating existing maps has hardly been covered in recent research. Thus, our research goal is to estimate map data in complex, urban environments based solely on sensor information. The resulting estimate can provide missing planning constraints and simultaneously update outdated maps. To provide reliable intersection models, we combine deep-learning-based methods with model-based sampling and optimization techniques.

Contact: M.Sc. Annika Meyer


Mapless Driving Based on Precise Lane Topology Estimation and Criticality Assessment

Modern ADAS rely on highly accurate maps that may become outdated very quickly due to construction work and changes in the routing system. These cases result in unexpected behavior if no fallback algorithms exist that are able to navigate without a precise map. This project targets such situations and extracts the lane geometry based only on the sensor system (e.g., lidar or camera). The current focus lies on applying machine learning algorithms to directly estimate the topology from camera images. This scene model is used for trajectory planning, which explicitly incorporates observed trajectories, perception and estimation uncertainties, and occlusions.

Contact: M.Sc. Annika Meyer and M.Sc. Piotr Orzechowski


Extrinsic Calibration of Multi Sensor Systems

Autonomous vehicles are equipped with a great variety of sensors. These sensors need to be calibrated to enable rich perception of the environment by fusing information from different sensors. That makes the process of calibrating sensors essential for autonomous driving and robotic systems in general. We develop a powerful framework for calibrating state-of-the-art autonomous vehicles with sensors such as cameras, LiDARs, and radars.

Contact: M.Sc. Tilman Kühner and Dr. rer. nat. Martin Lauer


Visual Odometry With Non-Overlapping Multi-Camera-Systems

Visual odometry is a crucial component of autonomous driving systems. It supplies the driven trajectory in a dead-reckoning sense, which can then be used for global localization or behavior generation. To robustify the estimation, we are particularly interested in sensor setups with multiple cameras, which yield consistent trajectories over several kilometers with very low drift.
For released code, visit us on GitHub.
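The dead-reckoning idea can be illustrated with a minimal sketch: frame-to-frame motion estimates, expressed in the vehicle frame, are composed into a global trajectory. The planar pose model and the square-shaped example motion are illustrative assumptions, not the project's actual SE(3) formulation.

```python
import math

def compose(pose, delta):
    """Compose a relative motion (dx, dy, dtheta), expressed in the
    vehicle frame, onto a global planar pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Dead reckoning: accumulate per-frame motion estimates.
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2)] * 4:  # drive a 1 m square
    pose = compose(pose, delta)
# pose returns to the origin with a full 2*pi rotation.
```

Because every step's error is carried forward, drift grows with distance, which is why multi-camera setups that constrain the estimate are attractive.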

Contact: Dr. rer. nat. Martin Lauer


Life-Long Vision based Mapping and Localization

Current intelligent vehicles require robust and accurate self-localization in a multitude of scenarios. Common approaches couple inertial measurement units (IMUs) with global navigation satellite systems (GNSS). However, such solutions are not reliable in urban environments due to multipath effects, shadowing, and atmospheric perturbations.
To overcome these drawbacks, we investigate life-long iterative mapping and high-precision localization in six degrees of freedom using multiple cameras mounted on the vehicle. The approach yields centimeter accuracy even under challenging conditions.

Contact: M.Sc. Haohao Hu


Trajectory Prediction and Mapping

In order to plan the behavior of our vehicle, it is necessary to predict where other traffic participants will move in the near future. Our approach predicts the trajectories of cars, pedestrians, and cyclists by creating maps based on the past movements of other traffic participants.
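A minimal sketch of this idea: past trajectories, discretized into map cells, yield a transition statistic per cell, and the most frequently observed successor serves as a simple prediction. The cell discretization and counting model are illustrative assumptions; the project's actual prediction model is not specified here.

```python
from collections import Counter, defaultdict

def learn_transitions(trajectories):
    """Build a map of observed cell-to-cell transitions from the
    past movements of traffic participants."""
    model = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            model[a][b] += 1
    return model

def predict_next(model, cell):
    """Predict the most frequently observed successor cell."""
    return model[cell].most_common(1)[0][0]

# Three hypothetical past trajectories through grid cells.
past = [[(0, 0), (0, 1), (0, 2)],
        [(0, 0), (0, 1), (1, 1)],
        [(0, 0), (0, 1), (0, 2)]]
model = learn_transitions(past)
```

Here `predict_next(model, (0, 1))` returns `(0, 2)`, the successor seen in two of the three past trajectories.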

Contact: M.Sc. Jannik Quehl


Continuous Trajectory Bundle Adjustment with Spinning Range Sensors

We investigate the calibration of multiple spinning range sensors as well as the Simultaneous Localization and Mapping (SLAM) problem, aiming to jointly optimize calibration, map, and map-relative pose. For spinning range sensors, measurements are not all generated at the same time. In addition, high-rate IMU measurements make the optimization problem ill-conditioned when modeled in a discrete state space. By modeling the trajectory as a continuous function of time, we can handle both properties and accommodate a large set of different acquisition times.
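The benefit of a continuous-time trajectory can be sketched in a minimal form: each point of a spinning sensor carries its own timestamp, so its pose is evaluated at exactly that time rather than at a fixed scan time. Linear interpolation between planar pose knots stands in here for the richer spline/Lie-group parameterization of continuous-time SLAM; all numeric values are hypothetical.

```python
import math

def interp_pose(t, t0, pose0, t1, pose1):
    """Evaluate a planar pose (x, y, yaw) at time t by linear
    interpolation between two trajectory knots."""
    a = (t - t0) / (t1 - t0)
    return tuple(p0 + a * (p1 - p0) for p0, p1 in zip(pose0, pose1))

def to_world(point, pose):
    """Transform a sensor-frame point into the world frame using the
    pose valid at its individual acquisition time."""
    x, y, yaw = point[0], point[1], pose[2]
    return (pose[0] + x * math.cos(yaw) - y * math.sin(yaw),
            pose[1] + x * math.sin(yaw) + y * math.cos(yaw))

# Two trajectory knots 0.1 s apart; the vehicle drives 1 m forward.
t0, p0 = 0.0, (0.0, 0.0, 0.0)
t1, p1 = 0.1, (1.0, 0.0, 0.0)

# Points of one revolution with individual timestamps.
scan = [((1.0, 0.0), 0.0), ((1.0, 0.0), 0.05), ((1.0, 0.0), 0.1)]
world = [to_world(pt, interp_pose(t, t0, p0, t1, p1)) for pt, t in scan]
```

Ignoring the per-point timestamps would place all three returns at the same spot; the continuous model correctly "unrolls" the motion during the revolution.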

Contact: M.Sc. Sascha Wirges


Realtime Environment Perception with Range Sensors

We aim to find algorithms that process observations from range sensors in real time. Here, we make the assumption that the motion of traffic participants can be estimated w.r.t. a common ground surface. We investigate methods to transfer range sensor observations from 3D space to the ground surface. The ground surface itself can be expressed as a dense 2D grid map, with each cell referencing 3D points. With this method, fast algorithms can be applied within the dense grid representation and their results easily transferred back to the 3D domain.
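The core projection step can be sketched as follows: each 3D point is binned into a ground-plane cell, and the cell stores references back to its points so that 2D results can be lifted into 3D again. The cell size and the flat-ground assumption are illustrative simplifications.

```python
from collections import defaultdict

def to_grid(points, cell_size=0.5):
    """Project 3D range-sensor points onto a 2D ground-plane grid.
    Each cell keeps the indices of its 3D points, so results of fast
    2D processing can be transferred back to the 3D domain."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell].append(i)
    return grid

# Two nearby ground returns and one distant obstacle point.
points = [(0.1, 0.2, 1.5), (0.3, 0.4, 0.1), (2.0, 2.0, 0.5)]
grid = to_grid(points)
```

Segmentation or clustering can now run over the dense 2D grid, and any labeled cell immediately yields the original 3D points via the stored indices.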

Contact: M.Sc. Sascha Wirges


Automatic Generation of High Precision Maps for Automated Driving

For the operation of safe and reliable automated vehicles, high-resolution maps are indispensable. These maps contain detailed information about, for example, lanes and their exact positions, right-of-way regulations, and speed limits. To reduce the high effort associated with the creation and validation of such maps, we are developing methods to automate this process based on sensor data from measurement vehicles.

Contact: M.Sc. Jan-Hendrik Pauls


Stereo Based Environment Perception

Cameras are an essential component for receiving information about the environment of autonomous vehicles. With two cameras in a stereo setup, it is possible to compute depth information for the perceived scene, which can then be used together with semantic information to estimate the position and dimensions of all types of road users around the vehicle.
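The depth computation rests on the standard stereo relation Z = f * b / d: depth equals focal length times baseline divided by disparity. A minimal sketch, with a hypothetical (roughly KITTI-like) rig:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth Z = f * b / d, with focal length f
    in pixels, baseline b in meters, and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Assumed rig: 700 px focal length, 0.54 m baseline.
z = depth_from_disparity(disparity_px=7.0, focal_px=700.0, baseline_m=0.54)
# z = 54.0 m: small disparities correspond to far-away objects.
```

Since depth is inversely proportional to disparity, depth resolution degrades quadratically with distance, which is why semantic cues help when estimating far-away road users.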

Contact: M.Sc. Hendrik Königshof


Large-scale 3D scene reconstruction and texturing

3D reconstructions can be used for localization, simulation, visualization and many more tasks. Modern LiDAR sensors provide a large amount of sub-centimeter accurate point measurements of their environment. We fuse this data using a volumetric reconstruction approach that allows for the reconstruction of large areas. The challenge is to incorporate data from multiple drives in a way that improves the reconstruction. In order to do so, loop closures have to be detected and all sensor poses have to be determined in a way that results in a consistent model of the environment. After the geometry has been reconstructed, it can be textured from camera images. The following video shows results using data from the KITTI dataset.
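Fusing measurements from multiple drives into one volumetric model is commonly done with a truncated signed distance function (TSDF), where each voxel keeps a weighted running average of observed signed distances. A minimal per-voxel sketch of this standard update, with illustrative values; the project's concrete fusion scheme may differ:

```python
def tsdf_update(tsdf, weight, sd, w_new=1.0, trunc=0.3):
    """Fuse one new signed-distance observation into a voxel using the
    weighted running average common in volumetric reconstruction.
    Observations are truncated to [-trunc, trunc] meters."""
    sd = max(-trunc, min(trunc, sd))
    fused = (tsdf * weight + sd * w_new) / (weight + w_new)
    return fused, weight + w_new

# Two drives observe the same voxel at slightly different distances.
v, w = 0.0, 0.0
for obs in (0.10, 0.06):
    v, w = tsdf_update(v, w, obs)
# v = 0.08, w = 2.0: noise from individual drives averages out.
```

The surface is then extracted at the zero crossing of the fused field, which is why consistent sensor poses across drives (via loop closure) are essential.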

Contact: M.Sc. Tilman Kühner


Intrinsic camera calibration

Camera calibration is an essential task in computer vision. It solves the problem of how a 3D point in the world corresponds to a 2D pixel coordinate (intrinsic camera calibration) and how the cameras are located with respect to each other and to other sensors (extrinsic camera calibration).
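The intrinsic part of this mapping can be sketched with the ideal pinhole model: a 3D point in the camera frame is projected to a pixel via the focal lengths and principal point. Lens distortion, which a full calibration also estimates, is omitted here, and the parameter values are hypothetical.

```python
def project(point_cam, fx, fy, cx, cy):
    """Ideal pinhole intrinsic model: map a 3D point (x, y, z) in the
    camera frame to a 2D pixel coordinate (u, v)."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

# Assumed intrinsics for a 1280x720 camera.
u, v = project((1.0, 0.5, 10.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
# (u, v) = (720.0, 400.0)
```

Calibration inverts this relation: given many observed pixel/3D-point pairs (e.g., checkerboard corners), the parameters fx, fy, cx, cy (and distortion coefficients) are estimated by minimizing the reprojection error.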

Contact: Dr. rer. nat. Martin Lauer or Dr. Carlos Fernández López


Lane Marking Based Localization on Highways

Lane-level accurate localization is essential for autonomous driving on highways. Using low-cost mono cameras, we detect lane markings in the current camera images and match them with the map to obtain the position of the vehicle.
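The matching step can be sketched in a minimal form: given the lateral offsets of detected markings (vehicle frame) and the offsets the map predicts at the current pose estimate, the mean residual yields a lateral correction. A one-dimensional illustrative simplification with hypothetical values:

```python
def lateral_correction(detected_offsets, map_offsets):
    """Estimate the vehicle's lateral offset in the map as the mean
    residual between detected lane-marking offsets and the offsets
    predicted from the map at the current pose estimate."""
    residuals = [m - d for d, m in zip(detected_offsets, map_offsets)]
    return sum(residuals) / len(residuals)

# Detected left/right markings vs. map prediction (meters).
corr = lateral_correction([1.6, -1.9], [1.8, -1.7])
# corr = 0.2: the pose estimate sits 0.2 m left of the true position.
```

In practice such residuals would feed a filter or optimization together with odometry, rather than being applied directly.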

Contact: M.Sc. Johannes Janosovits


Robust visual tracking in traffic scenarios

Object tracking is an essential part of behavior analysis and trajectory prediction in autonomous driving. Vision-based sensors are able to provide rich information about the observed environment.
Our aim is to develop novel tracking approaches that leverage powerful image features and can deal with challenging scenarios such as occlusions and deteriorated visual conditions.

Contact: Dr. rer. nat. Martin Lauer


Meaningful features for localization and more

A common approach for localization is to use abstract features detected in, e.g., camera images or lidar scans. These features enable precise localization but have no other use. We develop meaningful features for localization that can additionally be used for other tasks such as planning or behavior generation.
Figure: ground-truth path (red), localization result (orange arrows), and detections of facades, poles, and road markings.

Contact: Dr. rer. nat. Martin Lauer


Pixel-wise image labeling with Convolutional Neural Networks

Camera images contain valuable information about the environment of autonomous cars, some of which cannot be acquired from other sensors. We use deep learning, in particular Convolutional Neural Networks, to extract information such as a class label and depth for each pixel.

Contact: M.Sc. Niels Ole Salscheider


Simultaneous object tracking and shape estimation

Traditional object tracking methods approximate vehicle geometry using simple bounding boxes. We develop methods that simultaneously estimate object motion and reconstruct detailed object shape using laser scanners. The resulting shape information has benefits for the tracking process itself, but can also be vital for subsequent processing steps such as trajectory planning in evasive steering.

Contact: Dr. rer. nat. Martin Lauer or Dr. Carlos Fernández López


Automatic Verification of High Precision Maps for Highly Automated Driving

The recent progress in the area of autonomous cars has shown that high-precision digital maps are crucial to steer a car safely and comfortably through a complex dynamic environment. To plan a trajectory that guides a self-driving car as smoothly as a foresighted human driver would, details on a sub-lane level are needed. However, the more details are stored in a map, the faster it becomes outdated.
Thus, the goal of our research is to use data from sensors that are needed for autonomous driving anyway to locally verify the map stored on the car. This can either confirm the map or mark it, or parts of it, as invalid.
As part of our work, we are developing methods and a framework to compare map and sensor data. Furthermore, we are trying to identify features that are suitable for map verification. Another challenge is to model static and dynamic occlusions, which limit the range within which the map can be assessed.
Parts of the map that are eventually marked as changed can not only be invalidated for all other components of an autonomous car, but also be sent back to a remote server. When permanent changes are identified, they could either trigger a remapping process or be used as a map update directly.

Contact: M.Sc. Jan-Hendrik Pauls


Former Projects