Perception and localization algorithms developed for automated driving tasks rely on accurate environment models. These models are usually generated from information provided by mobile sensors such as cameras or range sensors. Whereas cameras provide 2D projections of surface reflectances with high spatial resolution, range sensors usually provide precise 3D surface positions. However, the spatial resolution of modern range sensors is sparse compared to that of cameras.
At the MRT, we developed a guided depth upsampling method that accurately estimates a surface point for each camera pixel in scenes composed of predominantly planar surfaces, such as urban areas. Given a calibrated camera-laser setup, the 3D position of a surface point can be determined by evaluating the viewing ray corresponding to an image coordinate at the estimated depth.
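The back-projection step can be sketched as follows. This is a minimal illustration, assuming a standard pinhole camera model with intrinsic matrix K; the function name and the example numbers are hypothetical and not part of the original method description.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) with an estimated depth to a 3D point.

    The viewing ray through the pixel is K^-1 @ [u, v, 1]; scaling it so
    that its z-component equals the estimated depth yields the surface
    point in camera coordinates.
    """
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # viewing ray direction
    return ray * (depth / ray[2])                    # scale ray to the depth

# Hypothetical pinhole camera: focal length 700 px, principal point (640, 360)
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

# The ray through the principal point is the optical axis,
# so the result is [0, 0, depth].
p = backproject(640.0, 360.0, 10.0, K)
```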
With this work, we would like to extend our methodology to upsample a set of Lidar scans acquired on moving platforms. This requires additionally estimating the camera-laser calibration for the different scan acquisition times; hence, the optimization problem must be extended to include the pose differences between the Lidar and camera observations. After implementation, the model should be validated on a synthetic dataset providing ground-truth poses and an accurate camera-laser calibration. The final approach should be evaluated on our experimental vehicle.
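The core of the extension is accounting for the platform motion between a scan's acquisition time and the image's acquisition time. The sketch below illustrates one way to express this, assuming 4x4 homogeneous rigid-body transforms for the estimated Lidar pose at scan time and the camera pose at image time; all names and numbers are illustrative, not the actual formulation used in the optimization.

```python
import numpy as np

def to_camera_frame(pts_lidar, T_world_lidar_t, T_world_cam):
    """Map points from the Lidar frame at scan time t into the camera frame.

    T_world_lidar_t: estimated Lidar pose at the scan's acquisition time
    T_world_cam:     camera pose at the image's acquisition time
    Both are 4x4 homogeneous transforms; pts_lidar is an (N, 3) array.
    The pose difference between the two observations is the composed
    transform T_cam_lidar, which also absorbs the camera-laser calibration.
    """
    T_cam_lidar = np.linalg.inv(T_world_cam) @ T_world_lidar_t
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])  # homogeneous
    return (pts_h @ T_cam_lidar.T)[:, :3]

# Toy example: the Lidar frame is translated 2 m along the camera's z-axis,
# so a point at z = 5 in the Lidar frame lies at z = 7 in the camera frame.
T_world_cam = np.eye(4)
T_world_lidar = np.eye(4)
T_world_lidar[2, 3] = 2.0
p = to_camera_frame(np.array([[0.0, 0.0, 5.0]]), T_world_lidar, T_world_cam)
```

In the actual estimation problem, T_world_lidar_t would be a variable per scan acquisition time rather than a fixed input, and its parameters would be optimized jointly with the depth estimates.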