Institut für Mess- und Regelungstechnik (MRT)

Perception and Scene Understanding

Group Leaders: Dr. rer. nat. Martin Lauer and Dr. Carlos Fernández López

One of the largest and most important areas of research in the context of autonomous driving is the domain of perception and scene understanding. In an abstract sense, this domain comprises all steps involved in creating an environment model that can subsequently be used for planning the behavior of an autonomous car. It begins with the automated processing of all data perceived by the car's sensors, such as camera images, radar measurements, or point clouds collected by a lidar. The processed data streams are registered to each other based on a calibration between the sensors and afterwards fused into one common environment model. Using techniques and algorithms from the domains of pattern recognition and machine learning, it is even possible to accurately classify observed objects, traffic participants, and elements of the environment into a defined set of classes and to predict their behavior several seconds into the future.


Trajectory Estimation with Visual Odometry for Micro Mobility Applications

Visual Odometry (VO) estimates the motion of a camera between two frames by analyzing the displacement between corresponding feature points. Given a time series of images taken from a moving platform, the driven trajectory as well as the velocity can be estimated iteratively. We stabilize the trajectory estimation with a ground plane and horizon estimate based on Time-of-Flight camera data. With an implementation aimed at efficiency, the algorithm runs in real time on low-cost hardware (e.g., a Raspberry Pi), making it applicable to micro mobility applications such as electric scooters.
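
As a minimal sketch of the frame-to-frame step, the following snippet matches ORB features with OpenCV and recovers the relative pose from the essential matrix. The camera matrix K and all parameters are illustrative assumptions, and the ground plane stabilization described above is not included; this shows the general technique, not our implementation.

    import cv2
    import numpy as np

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])   # assumed pinhole intrinsics

    def relative_pose(img_prev, img_curr):
        """Rotation R and unit-scale translation t between two frames."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)
        # Match corresponding feature points between the frames.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # RANSAC on the essential matrix rejects outlier matches.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # t is known only up to scale

    # Chaining (R, t) over the image stream yields the driven trajectory; the
    # metric scale must come from an external cue such as the ground plane height.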

Contact: M.Sc. Rebekka Peter


Probabilistic Pedestrian Prediction

In order to react correctly to people's behavior, a prediction of their probable future positions is needed. In this video you see how pedestrians can be predicted using an artificial neural network that was trained with trajectories of humans in traffic scenes. The green circles correspond to the minimum prediction horizon of 1 second, the red circles to the maximum prediction horizon of 4 seconds. With this prediction, the ego vehicle can brake for pedestrians even if they have not yet started crossing the road.
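
As a toy illustration of the idea (not the trained network used in the video), a small feed-forward model in PyTorch could map an observed track to positions at the four prediction horizons; the input length and all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    OBS_STEPS = 8   # number of observed past (x, y) positions -- assumed
    HORIZONS = 4    # predictions at 1 s, 2 s, 3 s and 4 s

    model = nn.Sequential(
        nn.Flatten(),                  # (B, OBS_STEPS, 2) -> (B, OBS_STEPS * 2)
        nn.Linear(OBS_STEPS * 2, 64),
        nn.ReLU(),
        nn.Linear(64, 64),
        nn.ReLU(),
        nn.Linear(64, HORIZONS * 2),   # one future (x, y) per horizon
    )

    past = torch.randn(1, OBS_STEPS, 2)          # dummy observed trajectory
    future = model(past).view(-1, HORIZONS, 2)   # predicted positions

A probabilistic variant would additionally output an uncertainty for each horizon, matching the circles drawn in the video.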

Contact: M.Sc. Florian Wirth


Simultaneous Object Tracking and Shape Estimation

Traditional object tracking methods approximate vehicle geometry using simple bounding boxes. We develop methods that simultaneously estimate object motion and reconstruct detailed object shape using laser scanners. The resulting shape information has benefits for the tracking process itself, but can also be vital for subsequent processing steps such as trajectory planning in evasive steering.
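
A heavily simplified NumPy sketch of the principle: a constant-velocity Kalman filter tracks the object's motion while laser points are accumulated in the object frame as a shape estimate. The actual method estimates far richer shape and couples it back into the tracking; all matrices and noise levels below are illustrative assumptions.

    import numpy as np

    dt = 0.1
    F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy], constant velocity
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]])
    Q, R = np.eye(4) * 0.01, np.eye(2) * 0.1   # assumed noise levels

    shape_points = []   # object contour accumulated in the object frame

    def step(x, P, scan_xy):
        """One tracking step given the (M, 2) laser points on the object."""
        x, P = F @ x, F @ P @ F.T + Q             # predict
        z = scan_xy.mean(axis=0)                  # crude center measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)                   # update
        P = (np.eye(4) - K @ H) @ P
        shape_points.append(scan_xy - x[:2])      # grow the shape estimate
        return x, P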

Contact: Dr. rer. nat. Martin Lauer or Dr. Carlos Fernández López


Mapless Driving Based on Precise Lane Topology Estimation and Criticality Assessment

Modern advanced driver assistance systems (ADAS) rely on highly accurate maps that may become outdated very quickly due to construction sites and changes in the road network. Such cases result in unexpected behavior if no fallback algorithms exist that are able to navigate without a precise map. This project targets such situations and extracts the lane geometry based solely on the on-board sensors (e.g., laser scanner or camera). The current focus lies on applying machine learning algorithms to estimate the lane topology directly from camera images. The resulting scene model is used for trajectory planning that explicitly incorporates observed trajectories, perception and estimation uncertainties, and occlusions.
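
For illustration only, an estimated lane topology can be thought of as a directed graph of lane segments with successor and neighbor relations; the field names below are hypothetical, not our data model.

    from dataclasses import dataclass, field

    @dataclass
    class LaneSegment:
        centerline: list                                  # (x, y) points from perception
        successors: list = field(default_factory=list)    # reachable next segments
        left_neighbor: "LaneSegment | None" = None        # lane-change option

    # Two lanes merging into one, e.g. at the end of an on-ramp.
    main = LaneSegment(centerline=[(0, 0), (50, 0)])
    ramp = LaneSegment(centerline=[(0, -4), (45, -1)])
    merged = LaneSegment(centerline=[(50, 0), (100, 0)])
    main.successors.append(merged)
    ramp.successors.append(merged)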

Contact: M.Sc. Annika Meyer and M.Sc. Piotr Orzechowski


Vehicle Prediction

Since most interactions with other traffic participants occur between vehicles, vehicle behavior prediction is a core task of scene understanding. To accomplish it, all available information about the surroundings must be processed, including but not limited to recognized obstacles, road geometry, other traffic participants, the drivable area, and the traffic rules applicable to the current situation. The multitude of necessary information as well as the complexity of potential interactions between vehicles make prediction a difficult challenge.

Contact: M.Sc. Jannik Quehl


Trajectory Prediction and Mapping

In order to plan the behavior of our vehicle, it is necessary to predict where other traffic participants will move in the near future. Our approach predicts the trajectories of cars, pedestrians, and bicycles by creating maps based on the past movements of other traffic participants.
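
A minimal sketch of the map-based idea, assuming NumPy: accumulate per-cell motion vectors from past tracks and extrapolate a new track by following the stored flow. Cell size and the simple averaging scheme are illustrative choices, not our method's parameters.

    import numpy as np
    from collections import defaultdict

    CELL = 1.0   # grid resolution in meters -- assumed

    flow_sum = defaultdict(lambda: np.zeros(2))
    flow_cnt = defaultdict(int)

    def add_track(track):
        """Accumulate per-cell motion vectors from an observed track."""
        for p, q in zip(track[:-1], track[1:]):
            cell = tuple((np.asarray(p) // CELL).astype(int))
            flow_sum[cell] += np.asarray(q) - np.asarray(p)
            flow_cnt[cell] += 1

    def predict(pos, steps=10):
        """Follow the average stored flow to extrapolate a trajectory."""
        pos, traj = np.asarray(pos, float), []
        for _ in range(steps):
            cell = tuple((pos // CELL).astype(int))
            if flow_cnt[cell] == 0:
                break   # no past experience for this cell
            pos = pos + flow_sum[cell] / flow_cnt[cell]
            traj.append(pos.copy())
        return traj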

Contact: M.Sc. Jannik Quehl


Continuous Trajectory Bundle Adjustment with Spinning Range Sensors

We investigate the calibration of multiple spinning range sensors as well as the Simultaneous Localization and Mapping (SLAM) problem, and aim to optimize calibration, map, and map-relative pose jointly. With spinning range sensors, measurements are not all generated at the same time. In addition, high-rate IMU systems make the optimization problem ill-conditioned when modeled in discrete state space. By modeling the trajectory as a continuous function of time, we can handle this property and account for a large set of different acquisition times.
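
To illustrate the continuous-time idea, the sketch below evaluates a pose at an arbitrary acquisition time by interpolating a sparse set of poses with SciPy; the interpolation here merely stands in for the spline parameterization that is actually optimized, and all values are made up.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.spatial.transform import Rotation, Slerp

    t_knots = np.array([0.0, 0.5, 1.0, 1.5])   # knot times [s]
    positions = np.array([[0, 0, 0], [1, 0.1, 0], [2, 0.3, 0], [3, 0.4, 0]], float)
    rotations = Rotation.from_euler("z", [0, 5, 12, 15], degrees=True)

    pos_spline = CubicSpline(t_knots, positions)   # smooth position over time
    rot_interp = Slerp(t_knots, rotations)         # smooth orientation over time

    def pose_at(t):
        """Continuous-time pose: each laser return gets its own pose."""
        return pos_spline(t), rot_interp(t).as_matrix()

    p, R = pose_at(0.733)   # arbitrary acquisition time within the knot range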

Contact: M.Sc. Sascha Wirges


Real-Time Environment Perception with Range Sensors


We aim to find algorithms that process observations from range sensors in real time. Here, we make the assumption that the motion of traffic participants can be estimated with respect to a common ground surface. We investigate methods to transfer range sensor observations from 3D space to the ground surface. The ground surface itself can be expressed as a dense 2D grid map, with each cell referencing 3D points. With this method, fast algorithms can be applied within the dense grid representation and their results easily transferred back to the 3D domain.
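
A minimal NumPy sketch of the transfer from 3D to the ground surface, assuming a ground-aligned coordinate frame; resolution and map extent are illustrative values.

    import numpy as np

    RES, EXTENT = 0.2, 50.0          # cell size [m], half map size [m] -- assumed
    N = int(2 * EXTENT / RES)

    def to_grid(points):
        """points: (M, 3) array in a ground-aligned frame -> per-cell index lists."""
        ij = ((points[:, :2] + EXTENT) / RES).astype(int)
        inside = ((ij >= 0) & (ij < N)).all(axis=1)
        grid = [[[] for _ in range(N)] for _ in range(N)]
        for idx in np.flatnonzero(inside):
            i, j = ij[idx]
            grid[i][j].append(idx)   # back-reference into the 3D cloud
        return grid

    cloud = np.random.uniform(-EXTENT, EXTENT, size=(1000, 3))
    grid = to_grid(cloud)            # fast 2D algorithms can now run per cell

Because each cell stores indices into the original cloud, results computed in the 2D grid transfer directly back to the 3D points.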

Contact: M.Sc. Sascha Wirges


Stereo-Based Environment Perception

Cameras are an essential component for acquiring information about the environment of autonomous vehicles. With two cameras in a stereo setup it is possible to compute depth information for the perceived scene, which can then be used together with semantic information to estimate the position and dimensions of all types of road users around the vehicle.
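
As a hedged example of the depth computation step, OpenCV's semi-global matching can turn a rectified stereo pair into a dense depth map; the focal length and baseline below are placeholder calibration values.

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair assumed
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point

    FOCAL, BASELINE = 700.0, 0.3    # [px], [m] -- illustrative calibration
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = FOCAL * BASELINE / disparity[valid]      # z = f * b / d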

Contact: M.Sc. Hendrik Königshof


Robust Visual Tracking in Traffic Scenarios

Object tracking is an essential component of behavior analysis and trajectory prediction in autonomous driving. Vision-based sensors provide rich information about the observed environment.

Our aim is to develop novel tracking approaches that build on powerful image features and can deal with challenging scenarios such as occlusions and degraded visual conditions.

Contact: Dr. rer. nat. Martin Lauer


Pixel-Wise Image Labeling with Convolutional Neural Networks

Camera images contain valuable information about the environment for autonomous cars. Some of this information cannot be acquired from other sensors. We use deep learning, in particular Convolutional Neural Networks, to extract information such as class labels and depth for each pixel.
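
As an off-the-shelf stand-in for such a network (not our own model), a pretrained fully convolutional network from torchvision assigns a class label to every pixel.

    import torch
    from torchvision import models, transforms

    model = models.segmentation.fcn_resnet50(weights="DEFAULT").eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def label_pixels(image):
        """image: PIL.Image of a street scene -> (H, W) tensor of class ids."""
        x = preprocess(image).unsqueeze(0)   # (1, 3, H, W)
        with torch.no_grad():
            logits = model(x)["out"]         # (1, C, H, W) per-pixel scores
        return logits.argmax(dim=1)[0]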

Contact: M.Sc. Niels Ole Salscheider



Video-based environment perception is important for self-driving cars. Camera images contain more detailed information than data from other sensors like radar or lidar. Cameras can also sense objects at far distances. In this field I research deep learning approaches for object detection, tracking, and pixel-wise semantic segmentation.

Contact: M.Sc. Niels Ole Salscheider


Visual Odometry with Non-Overlapping Multi-Camera Systems

Visual odometry is a crucial component of autonomous driving systems. It supplies the driven trajectory in a dead-reckoning sense, which can then be used for global localization or behavior generation. In order to robustify the estimation, I am particularly interested in sensor setups with multiple cameras, yielding consistent trajectories over several kilometers with very low drift.
For released code, visit us on GitHub: https://github.com/KIT-MRT

Contact: Dr. rer. nat. Martin Lauer