CITI Drive: Continuous Integration and Testing in Urban Driving

Autonomous Driving in Karlsruhe

Our research vehicles Joy (BMW) and Carl (Mercedes) form the backbone of our autonomous driving experiments. Each platform can be equipped with a modular hardware and software stack that enables us to explore, test, and evaluate new concepts in perception, planning, and control under real-world conditions.

Both vehicles act as research prototypes, continuously evolving through weekly integration and field testing. They are maintained and improved by a dedicated team of researchers and supported by numerous research projects in which PhD researchers and their students contribute new algorithms and system improvements.


Autonomous Driving Stack in Action

We regularly record our driving sessions to analyze and visualize our system performance.
A typical visualization shows:

  • Sensor Data: Camera feeds, lidar data, and object detections such as other vehicles

  • Lanelet2 HD Map and Planned Trajectory: a simple visualization of the HD map data, the given route and the planned trajectory (future positions and speeds) updated every 100ms.

  • Current Behaviour, Traffic Light State, AD Exits: Overlays showing the current behaviour, such as a planned lane change or a stop at a red traffic light. Additionally, we count how often the safety driver needed to take over.

These recordings allow us to illustrate the real-time interaction between perception, prediction, and planning and to make the internal workings of the stack visible to both developers and observers.
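
To give a concrete impression of such a replay, the following sketch redraws a planned trajectory (future positions, colour-coded by speed) together with a behaviour overlay every 100 ms. The trajectory generator and the behaviour label are simple placeholders standing in for recorded or live data; they are not part of our actual tooling.

```python
import matplotlib.pyplot as plt

# Placeholder: one planning cycle of a trajectory as (x, y, speed) samples.
# In the real tooling this would come from a recording or a live ROS topic.
def load_trajectory(cycle):
    return [(0.5 * i + 0.1 * cycle, 0.02 * i ** 2, 8.0 + 0.1 * i) for i in range(30)]

plt.ion()                              # interactive mode: redraw without blocking
fig, ax = plt.subplots()

for cycle in range(50):                # replay roughly 5 seconds of planning output
    xs, ys, speeds = zip(*load_trajectory(cycle))
    ax.clear()
    ax.scatter(xs, ys, c=speeds)       # future positions, colour-coded by speed
    ax.set_title(f"cycle {cycle} | behaviour: FOLLOW_LANE (placeholder overlay)")
    ax.set_xlabel("x [m]")
    ax.set_ylabel("y [m]")
    plt.pause(0.1)                     # one planning update every 100 ms
```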

Hardware & Sensors

Complete "Sensorbox" with a peak inside

Our modular hardware design allows rapid prototyping of new sensors or compute configurations, ensuring we can directly evaluate new approaches or sensors on the vehicle.

The physical hardware of our sensor setup is fully developed and manufactured in-house, allowing sensor configurations optimized for autonomous driving research. The sensors are housed in a fully integrated, waterproof unit called the "Sensorbox", which can be moved between our two research vehicles.

Overview of the Hardware and Sensors:

  • Sensor setup: Multi-modal sensor box including 7x LiDAR sensors (most prominently a 128-beam 360° top LiDAR), 3x automotive radars, 9x high-resolution cameras, and 2x GNSS receivers with 3x antennas.

  • Compute unit: The developed software runs on a high-performance Ubuntu-based server in the vehicle trunk with an AMD EPYC 64-core CPU, 256 GB RAM, 2x 48 GB NVIDIA RTX 6000 Ada GPUs for compute, and 1x NVIDIA RTX 4000 Ada GPU for rendering and image processing.

  • User Interface: Displays at each non-driver seat, emergency-stop interfaces, and low-level safety gateways that allow the safety driver to safely override steering and braking/acceleration commands.

  • Developer Feedback and Visualization: High-end storage and networking allow full sensor-suite data recordings. Screen recordings, replays of recorded data, and live data streams allow debugging of our software (a rough bandwidth estimate is sketched below).
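
To give a feeling for why high-end storage and networking are needed, the following back-of-the-envelope sketch estimates the raw recording bandwidth of a comparable sensor suite. All per-sensor data rates are illustrative assumptions, not measured values from our setup.

```python
# Rough, illustrative estimate of the raw recording bandwidth of a comparable
# sensor suite. All per-sensor data rates below are assumptions, not measurements.
sensors = {
    # name:          (count, assumed data rate per sensor in MB/s)
    "lidar":         (7, 30.0),   # dense point clouds at ~10 Hz
    "camera":        (9, 60.0),   # compressed high-resolution streams
    "radar":         (3, 1.0),
    "gnss_and_misc": (1, 1.0),    # GNSS, vehicle bus, diagnostics, ...
}

total_mb_per_s = sum(count * rate for count, rate in sensors.values())
gb_per_hour = total_mb_per_s * 3600 / 1000
print(f"approx. {total_mb_per_s:.0f} MB/s, i.e. about {gb_per_hour:.0f} GB per hour")
```

Under these assumed rates, a single hour of full sensor-suite recording already lands in the terabyte range, which is why dedicated recording storage and fast networking are part of the vehicle hardware.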

Software Stack


Our autonomous driving stack builds on ROS while extending it with custom modules developed at our institute.
This architecture allows us to combine the robustness of established open-source frameworks with the flexibility of in-house research and development.

Key features:

  • Real-time localization, perception, prediction, behaviour and motion planning with a frequency of 10 Hz for urban driving research

  • Modular arbitration graph for behavior handling, recently published here (a strongly simplified sketch follows after this list)

  • Deep-learning–based perception modules, such as our recently published traffic-light detection pipeline

  • Offline Simulation with in-house developed tooling, building on our previous work CoInCar-Simulation

  • GPU accelerated camera image processing and AI model deployment
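
The interplay of the 10 Hz planning cycle and behaviour arbitration can be illustrated with a strongly simplified sketch: a priority-ordered list of behaviours from which the first applicable one is selected every 100 ms. The behaviour names and applicability checks below are invented for illustration; the published arbitration graphs compose behaviours modularly rather than as a flat priority list.

```python
from dataclasses import dataclass
from typing import Callable
import time

@dataclass
class Behavior:
    name: str
    is_applicable: Callable[[dict], bool]   # may this behaviour run right now?

# Priority-ordered list: the first applicable behaviour wins.
# Names and conditions are invented for this sketch.
BEHAVIORS = [
    Behavior("EmergencyStop",  lambda env: env["obstacle_ahead"] and env["distance_m"] < 5.0),
    Behavior("StopAtRedLight", lambda env: env["traffic_light"] == "red"),
    Behavior("LaneChange",     lambda env: env["slow_leader"] and env["left_lane_free"]),
    Behavior("FollowLane",     lambda env: True),   # fallback, always applicable
]

def arbitrate(env: dict) -> Behavior:
    return next(b for b in BEHAVIORS if b.is_applicable(env))

# Skeleton of the 10 Hz cycle: sense -> arbitrate -> plan for the winning behaviour.
env = {"obstacle_ahead": False, "distance_m": 50.0,
       "traffic_light": "red", "slow_leader": False, "left_lane_free": False}
for _ in range(3):
    active = arbitrate(env)
    print("active behaviour:", active.name)   # here a trajectory would be planned
    time.sleep(0.1)                           # one planning cycle every 100 ms
```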

We build and maintain our own complete autonomous driving stack — including core modules, ROS utilities, and custom technical components such as our camera pipeline.
Through continuous development, new modules can be tested weekly in real-world scenarios on our development branch, providing immediate feedback for improvement.

We share our expertise with the open-source research community in the form of foundational autonomous-driving libraries, such as Lanelet2, developed at our institute, and build on them with custom research modules like behavior arbitration graphs.
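
As a small example of how such a foundational library is used, the following snippet loads an HD map and queries a route with Lanelet2's Python API. The map file name, projection origin, and lanelet ids are placeholders; only the API calls themselves come from Lanelet2.

```python
import lanelet2
from lanelet2.projection import UtmProjector

# Load an HD map (file name and projection origin are placeholders).
projector = UtmProjector(lanelet2.io.Origin(49.01, 8.41))
lanelet_map = lanelet2.io.load("urban_route.osm", projector)

# Traffic rules for a vehicle in Germany, then a routing graph over the map.
traffic_rules = lanelet2.traffic_rules.create(
    lanelet2.traffic_rules.Locations.Germany,
    lanelet2.traffic_rules.Participants.Vehicle)
routing_graph = lanelet2.routing.RoutingGraph(lanelet_map, traffic_rules)

# Query a route between two lanelets (the ids are placeholders).
start = lanelet_map.laneletLayer[1001]
goal = lanelet_map.laneletLayer[2002]
route = routing_graph.getRoute(start, goal)
if route is not None:
    print("found a route from lanelet", start.id, "to lanelet", goal.id)
```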

This software stack, together with well-maintained hardware, gives our researchers and students the unique ability to develop and test algorithms and models in a closed-loop, real-world setting with little friction.

Continuous Driving

The Continuous Driving program is conducted weekly on our local Karlsruhe route, exposing Joy or Carl to a broad spectrum of real-world traffic situations such as intersections, pedestrian crossings, merging lanes, and traffic lights.

Each session involves a small team of researchers performing autonomous laps to evaluate perception, prediction, and control in a fully closed loop.
Our goal is to minimize safety-driver interventions while continuously improving system robustness through real-world feedback.

The wide variety of scenarios we encounter every week brings new challenges — and plenty of opportunities for students to contribute meaningfully to ongoing research.

The Continuous Driving Test Route

Get Involved

Our research vehicles are an active testbed for theses, student projects and collaborative research projects.
Students can contribute to the autonomous driving stack — from sensors and system integration to deep-learning perception and behavior arbitration. Contributions can take the form of thesis research applied to the software stack or of (intensive) student jobs.
Working with Joy and Carl offers hands-on experience with real-world robotics, data-driven engineering, and the continuous evolution of an autonomous driving system.

A very popular entry point into this research and engineering field is offered by our lectures, especially the Kognitive Automobile Lab, which teaches hands-on skills for autonomous driving.