The history of autonomous driving at MRT began with the 2005 DARPA Grand Challenge, a competition for autonomous off-road vehicles, for which MRT supplied vision components to Team ION.
In 2007, MRT entered the VW Passat "AnnieWAY" into the DARPA Urban Challenge, a competition set in a mock-up urban environment, and advanced to the final stage.
In 2011, the same vehicle won the Grand Cooperative Driving Challenge, the first international competition to feature highway platooning scenarios with cooperating vehicles connected via communication devices.
In 2013, MRT supplied the localization, trajectory planning, and control components for the vehicle that autonomously completed the 103 km of the historic Bertha Benz Memorial Route.
In the field of autonomous driving, MRT cooperates closely with its sister department, Mobile Perception Systems (MPS), at the FZI Research Center for Information Technology.
MRT currently contributes to this field of research with a variety of projects.
Many other projects focus more specifically on perception and are generally not limited to the context of autonomous vehicles; please refer to "Environment perception" for an overview.
Life-Long Vision based Mapping and Localization
Current intelligent vehicles require robust and accurate self-localization in a multitude of scenarios. Common approaches couple inertial measurement units (IMUs) with global navigation satellite systems (GNSS). However, such solutions are not reliable in urban environments due to multipath effects, shadowing, and atmospheric perturbations.
To overcome these drawbacks, we investigate life-long iterative mapping and high-precision localization in six degrees of freedom using multiple cameras mounted on the vehicle. The approach yields centimeter accuracy even under challenging conditions.
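The core of such map-relative localization is aligning landmarks observed in the vehicle frame with their known map positions. As a minimal illustration (not the project's actual 6-DoF pipeline), the following sketch recovers a planar pose (x, y, heading) from landmark correspondences with a closed-form 2D least-squares alignment; the function name and the point format are assumptions for this example.

```python
import math

def estimate_pose(map_pts, obs_pts):
    """Estimate the vehicle pose (x, y, theta) that best aligns landmark
    observations given in the vehicle frame (obs_pts) with their known
    map positions (map_pts), via a closed-form 2D least-squares fit.
    Toy 2D stand-in for the full 6-DoF multi-camera problem."""
    n = len(map_pts)
    # centroids of both point sets
    mcx = sum(p[0] for p in map_pts) / n
    mcy = sum(p[1] for p in map_pts) / n
    ocx = sum(p[0] for p in obs_pts) / n
    ocy = sum(p[1] for p in obs_pts) / n
    # accumulate the terms of the optimal rotation angle
    a = b = 0.0
    for (mx, my), (ox, oy) in zip(map_pts, obs_pts):
        mx, my, ox, oy = mx - mcx, my - mcy, ox - ocx, oy - ocy
        a += mx * ox + my * oy
        b += my * ox - mx * oy
    theta = math.atan2(b, a)
    # translation maps the rotated observation centroid onto the map centroid
    c, s = math.cos(theta), math.sin(theta)
    x = mcx - (c * ocx - s * ocy)
    y = mcy - (s * ocx + c * ocy)
    return x, y, theta
```

In the real system this alignment would be embedded in an iterative optimization over many camera observations; the closed-form 2D solution merely shows the underlying geometric principle.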
Mapless Driving Based on Precise Lane Topology Estimation and Criticality Assessment
Modern ADAS rely on highly accurate maps that can become outdated very quickly due to construction work and changes in the road network. Such cases result in unexpected behavior if no fallback algorithms exist that can navigate without a precise map. This project targets such situations and extracts the lane geometry solely from the on-board sensors (e.g. lidar or camera). The current focus lies on applying machine learning algorithms to estimate the lane topology directly from camera images. This scene model is used for trajectory planning that explicitly incorporates observed trajectories, perception and estimation uncertainties, and occlusions.
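One way such a planner can incorporate estimation uncertainty and occlusions is to penalize them in the cost of each candidate trajectory. The following is a deliberately simplified sketch of that idea; the function, its parameters, and the weights are illustrative assumptions, not the project's actual cost formulation.

```python
def trajectory_cost(candidate, lane_center, lane_std, occluded_from,
                    w_dev=1.0, w_unc=0.5, w_occ=5.0):
    """Toy cost for a candidate path against an uncertain lane estimate.
    candidate / lane_center: lateral offsets per longitudinal step,
    lane_std: per-step standard deviation of the lane estimate,
    occluded_from: first step index beyond which the scene is occluded."""
    cost = 0.0
    for i, (c, l, s) in enumerate(zip(candidate, lane_center, lane_std)):
        cost += w_dev * (c - l) ** 2   # deviation from the estimated lane center
        cost += w_unc * s ** 2         # penalize relying on uncertain estimates
        if i >= occluded_from:         # penalize commitment beyond occlusions
            cost += w_occ
    return cost
```

A planner would evaluate this cost over a set of sampled candidates and pick the minimum; the weighting of deviation versus uncertainty versus occlusion is exactly the kind of trade-off the project studies.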
A common approach to localization is to use abstract features detected in, e.g., camera images or lidar scans. These features enable precise localization but serve no further purpose. We develop meaningful features for localization that can additionally be used for other tasks such as planning or behavior generation.
Ground-truth path (red), localization result (orange arrows), and detections of facades, poles, and road markings.
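With semantic features such as poles, facades, and road markings, a pose hypothesis can be scored by how well the detections (transformed into the map frame under that hypothesis) line up with map features of the same class. The sketch below illustrates this with a nearest-neighbor Gaussian error model; the function name, the tuple format, and the noise model are assumptions for this example.

```python
def match_score(detections, map_features, sigma=0.5):
    """Log-likelihood-style score for a pose hypothesis: 'detections' are
    semantic features (cls, x, y) already transformed into the map frame
    under that hypothesis; each is compared to the nearest map feature of
    the same class under an independent Gaussian error model."""
    score = 0.0
    for cls, dx, dy in detections:
        same_class = [(mx, my) for mcls, mx, my in map_features if mcls == cls]
        if not same_class:
            continue  # no map feature of this class: uninformative detection
        d2 = min((mx - dx) ** 2 + (my - dy) ** 2 for mx, my in same_class)
        score += -d2 / (2.0 * sigma ** 2)  # log of an unnormalized Gaussian
    return score
```

A localization filter would evaluate this score for many pose hypotheses (or use the matched pairs in an optimization); because the features are semantic, the same detections could also feed planning or behavior generation.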
Cooperative Motion Planning for Automated Vehicles in Mixed Traffic
While reactive and anticipatory motion planning techniques for automated vehicles are well established, approaches to cooperative motion planning remain rare. We therefore focus on extending common motion planning algorithms to allow cooperation with human-driven vehicles.
The blue (automated) vehicle enters the narrowing first, as it is closer to it. However, if the black (human-driven) vehicle has the right of way, it drives first even though the blue vehicle is closer.
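The narrowing scenario above can be captured in a toy decision rule: the automated vehicle yields if the other vehicle has priority, and otherwise passes first only if it reaches the narrowing sufficiently earlier. This is a heavily simplified illustration, not the project's planner; the function and its safety margin are assumptions.

```python
def ego_enters_first(ego_dist, ego_speed, other_dist, other_speed,
                     other_has_priority, margin=1.5):
    """Toy cooperative decision at a one-lane narrowing: yield whenever
    the other vehicle has the right of way; otherwise enter first only
    if the ego vehicle arrives earlier by a safety margin (seconds)."""
    if other_has_priority:
        return False
    # estimated arrival times, guarding against near-zero speeds
    t_ego = ego_dist / max(ego_speed, 0.1)
    t_other = other_dist / max(other_speed, 0.1)
    return t_ego + margin <= t_other
```

A real cooperative planner would instead reason over joint trajectories and the human driver's likely reaction, but the rule shows the asymmetry in the caption: proximity decides only when right of way does not.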
Simultaneous Object Tracking and Shape Estimation
Traditional object tracking methods approximate vehicle geometry using simple bounding boxes. We develop methods that simultaneously estimate object motion and reconstruct detailed object shape using laser scanners. The resulting shape information has benefits for the tracking process itself, but can also be vital for subsequent processing steps such as trajectory planning in evasive steering.
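The coupling between motion estimation and shape accumulation can be sketched in a toy tracker: an alpha-beta filter maintains the object's center and velocity, while each laser scan's points are transferred into the object's own frame and accumulated, so the contour grows as the object is seen from new angles. The class, its parameters, and the translation-only object frame are simplifying assumptions for this example.

```python
class ShapeTracker:
    """Toy joint tracker: alpha-beta filtering of a constant-velocity
    motion state, plus accumulation of laser points in the object frame
    as a growing shape model (translation only in this sketch)."""

    def __init__(self, x, y, alpha=0.5, beta=0.2):
        self.x, self.y = x, y            # estimated object center
        self.vx = self.vy = 0.0          # estimated velocity
        self.alpha, self.beta = alpha, beta
        self.shape = set()               # accumulated contour points, object frame

    def update(self, cx, cy, scan_points, dt=0.1):
        # predict with constant velocity, then correct towards the
        # measured object center (cx, cy)
        px, py = self.x + self.vx * dt, self.y + self.vy * dt
        rx, ry = cx - px, cy - py        # innovation
        self.x, self.y = px + self.alpha * rx, py + self.alpha * ry
        self.vx += self.beta * rx / dt
        self.vy += self.beta * ry / dt
        # transfer scan points into the object frame and accumulate them,
        # discretized to 0.1 m so repeated observations merge
        for sx, sy in scan_points:
            self.shape.add((round(sx - self.x, 1), round(sy - self.y, 1)))
```

In a full system the shape model would in turn sharpen the tracker's measurement model; here the two estimates are only loosely coupled, which is enough to show why a detailed contour helps downstream steps like evasive trajectory planning.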
Automatic Verification of High Precision Maps for Highly Automated Driving
The recent progress in the area of autonomous cars has shown that high-precision digital maps are crucial to steer a car safely and comfortably through a complex dynamic environment. To plan a trajectory that guides a self-driving car as smoothly as a foresighted human driver would, details on a sub-lane level are needed. However, the more details are stored in a map, the faster it becomes outdated.
Thus, the goal of our research is to use data from sensors that are required for autonomous driving anyway to locally verify the map stored on the vehicle. The result either confirms the map or marks it, or parts of it, as invalid.
As part of our work, we are developing methods and a framework to compare map and sensor data. Furthermore, we are trying to identify features that are suitable for map verification. Another challenge is to model static and dynamic occlusions which limit the range within which the map can be assessed.
Parts of the map that are eventually marked as changed can not only be invalidated for all other components of the autonomous car, but also be sent back to a remote server. If permanent changes are identified, they can either trigger a remapping process or be used directly as a map update.
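The verification logic described above, including the occlusion-limited assessment range, can be condensed into a toy classifier over map elements: elements outside the observable range stay unassessed, visible elements are confirmed if a matching detection exists and flagged as changed otherwise. The function, the element format, and the thresholds are assumptions for this sketch, not the project's framework.

```python
import math

def verify_map(map_elements, detections, visible_range, tol=0.3):
    """Toy map verification: classify each map element as 'confirmed',
    'changed', or 'unknown'. map_elements: {id: (type, x, y)} in the
    vehicle frame; detections: [(type, x, y)]; visible_range: maximum
    distance (m) that is currently observable (occlusions reduce it)."""
    result = {}
    for eid, (etype, ex, ey) in map_elements.items():
        if math.hypot(ex, ey) > visible_range:
            result[eid] = "unknown"   # outside the assessable range
            continue
        hit = any(dtype == etype and math.hypot(dx - ex, dy - ey) <= tol
                  for dtype, dx, dy in detections)
        result[eid] = "confirmed" if hit else "changed"
    return result
```

Elements classified as "changed" are the candidates that would be invalidated locally and reported to the remote server, while "unknown" elements must wait until occlusions clear or the vehicle gets closer.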