Perception and Scene Understanding

Former projects:


Trajectory Estimation with Visual Odometry for Micro Mobility Applications

Visual Odometry (VO) estimates the motion of a camera between two frames by analyzing the displacement of corresponding feature points. Given a time series of images from a driving system, the driven trajectory as well as the velocity can be estimated iteratively. We stabilize the trajectory estimation with a ground plane and horizon estimate derived from Time-of-Flight camera data. With an implementation designed for efficiency, the algorithm runs in real time on low-cost hardware (e.g. a Raspberry Pi), making it applicable to micro mobility applications such as electric scooters.
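
As a rough illustration of the frame-to-frame step (a generic sketch built from standard OpenCV blocks, not the project's actual implementation), the following matches ORB features between two images and recovers the relative rotation and up-to-scale translation, assuming a calibrated camera with intrinsic matrix K:

```python
import numpy as np
import cv2

def relative_pose(img_prev, img_curr, K):
    """Estimate relative camera motion between two grayscale frames."""
    # Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the essential matrix robustly and recover R, t from it.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is only known up to scale in a monocular setup
```

Chaining these relative poses over the image sequence yields the trajectory; for a monocular setup, an external cue such as the ground plane estimate mentioned above can, for example, fix the unknown scale.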


Mapless Driving Based on Precise Lane Topology Estimation and Criticality Assessment

Modern ADAS rely on highly accurate maps that can become outdated quickly due to construction sites and changes in the road network. Without fallback algorithms that are able to navigate without a precise map, such cases result in unexpected behavior. This project targets these situations and extracts the lane geometry based solely on the on-board sensors (e.g. laser scanner or camera). The current focus is on applying machine learning algorithms to estimate the lane topology directly from camera images. The resulting scene model is used for trajectory planning that explicitly incorporates observed trajectories, perception and estimation uncertainties, and occlusions.
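
As a deliberately simplified sketch of the geometry extraction step (the learned topology estimation itself is far more involved), assume a detector has already produced lane points in vehicle coordinates; a low-order polynomial fit with a rough residual-based uncertainty could then look like this:

```python
import numpy as np

def fit_lane_polynomial(lane_points_xy, degree=2):
    """Fit a lateral offset model x = f(y) in vehicle coordinates.

    lane_points_xy: (N, 2) array of lane points, e.g. obtained by
    projecting detected lane pixels onto the estimated ground plane
    (both the detector and the projection are assumed given).
    """
    x, y = lane_points_xy[:, 0], lane_points_xy[:, 1]
    coeffs = np.polyfit(y, x, degree)         # least-squares fit
    residuals = x - np.polyval(coeffs, y)
    sigma = residuals.std()                   # crude uncertainty estimate
    return coeffs, sigma
```

Such an uncertainty estimate is one simple way to feed the "estimation uncertainties" mentioned above into the downstream trajectory planner.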


Continuous Trajectory Bundle Adjustment with Spinning Range Sensors

We investigate the calibration of multiple spinning range sensors as well as the Simultaneous Localization and Mapping (SLAM) problem, and aim to optimize calibration, map and map-relative pose jointly. For spinning range sensors, the individual measurements are not acquired at the same time. In addition, high-rate IMU measurements make the optimization problem ill-conditioned when modeled in a discrete state space. By modeling the trajectory as a continuous function, we can handle both properties and evaluate the state at a large set of different acquisition times.
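
A minimal sketch of the continuous-time idea, using a cubic spline over position control points (rotations would be handled analogously on SO(3), e.g. with cumulative B-splines, and are omitted here):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Discrete position estimates at keyframe times (toy data).
t_knots = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
p_knots = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.1, 0.0],
                    [2.0, 0.3, 0.0],
                    [3.0, 0.6, 0.0],
                    [4.0, 1.0, 0.0]])

spline = CubicSpline(t_knots, p_knots, axis=0)

# Every range measurement can now be evaluated at its exact acquisition
# time, however the spinning sensor interleaves its timestamps.
t_meas = 0.137
position = spline(t_meas)      # interpolated position
velocity = spline(t_meas, 1)   # first derivative, useful for IMU terms
```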


Stereo-Based Environment Perception

Cameras are an essential component for obtaining information about the environment of autonomous vehicles. With two cameras in a stereo setup, it is possible to compute depth information for the perceived scene, which can then be combined with semantic information to estimate the position and dimensions of all types of road users around the ego vehicle.
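
The underlying principle can be sketched with OpenCV's semi-global matching: compute a disparity map from a rectified image pair and convert it to metric depth via the pinhole relation Z = f·b/d (focal length f in pixels and baseline b in meters are assumed given; this is a generic sketch, not the project's pipeline):

```python
import numpy as np
import cv2

def depth_from_stereo(img_left, img_right, f, b):
    """Return a dense depth map (meters) from a rectified grayscale pair."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
    )
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = sgbm.compute(img_left, img_right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mask invalid matches
    # Pinhole relation: depth Z = f * b / disparity.
    return f * b / disparity
```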


Pixel-Wise Image Labeling with Convolutional Neural Networks

Camera images contain valuable information about the environment of autonomous cars, some of which cannot be acquired from other sensors. We use deep learning, in particular Convolutional Neural Networks, to extract information such as class labels and depth for each pixel.
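
For illustration, per-pixel class labels can be obtained with an off-the-shelf pretrained segmentation CNN from torchvision; this is a generic stand-in, not the network developed in the project:

```python
import torch
from torchvision import models
from torchvision.models.segmentation import DeepLabV3_ResNet50_Weights

# Load a pretrained segmentation network and its matching preprocessing.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = torch.rand(3, 480, 640)            # placeholder for a camera image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]           # (1, num_classes, H, W)
labels = logits.argmax(dim=1)              # one class label per pixel
```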


Visual Odometry with Non-Overlapping Multi-Camera Systems

Visual odometry is a crucial component of autonomous driving systems. It supplies the driven trajectory in a dead-reckoning sense, which can then be used for global localization or behavior generation. To robustify the estimation, sensor setups with multiple cameras are of particular interest, yielding consistent trajectories over several kilometers with very low drift.
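
The dead-reckoning idea boils down to accumulating relative poses; a minimal sketch, assuming 4×4 homogeneous transforms as input (hypothetical interface, not the released code):

```python
import numpy as np

def chain_poses(relative_poses):
    """relative_poses: list of 4x4 transforms, each giving the pose of
    frame k expressed in frame k-1 (e.g. from multi-camera VO)."""
    T_world = np.eye(4)
    trajectory = [T_world.copy()]
    for T_rel in relative_poses:
        T_world = T_world @ T_rel   # accumulate motion; drift grows over time
        trajectory.append(T_world.copy())
    return trajectory
```
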
For released code, visit us on GitHub: https://github.com/KIT-MRT