Parallel Processing for Real-time Stereo Vision
Stereo cameras are cost-efficient sensors for high-resolution 3D environmental perception. Distances, however, are not measured directly but estimated computationally. For real-time performance, the corresponding algorithms must balance quality and efficiency. Since current and future processors are becoming increasingly parallel, implementations need to exploit this parallelism. Heterogeneous systems, which combine multi-core CPUs and graphics processing units (GPUs), offer both opportunities and challenges for optimization.
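To make the "estimated computationally" remark concrete, here is a minimal sketch of the standard triangulation step that turns a matched pixel pair into a distance. The function name and the focal-length and baseline values are illustrative assumptions, not taken from the described system.

```python
# Standard rectified-stereo relation: depth Z = f * b / d, where f is the
# focal length in pixels, b the baseline in metres, and d the disparity in
# pixels. Parameter defaults below are illustrative only.

def depth_from_disparity(d_px, focal_px=1000.0, baseline_m=0.5):
    """Triangulate metric depth from a pixel disparity."""
    if d_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_px

# With these example parameters, a 10-pixel disparity maps to 50 m.
```

Because depth is inversely proportional to disparity, small matching errors at small disparities translate into large distance errors, which is why matching quality matters so much at long range.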
Window-based algorithms offer the best performance among stereo vision methods, but they model the environment as a set of fronto-parallel planes. This assumption often does not hold in traffic scenes, particularly on the road surface, and it is one of the main causes of inaccurate or erroneous measurements.
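As an illustration of the window-based family (a generic block-matching sketch, not the specific method discussed here), each pixel's disparity is chosen by comparing a small window in the left image against horizontally shifted windows in the right image. Note that a single disparity d is assumed for the entire window; that is exactly the fronto-parallel assumption criticized above.

```python
# Minimal block matching with a sum-of-absolute-differences (SAD) cost.
# Images are plain nested lists of grey values; all sizes are illustrative.

def sad_disparity(left, right, x, y, half=1, max_d=4):
    """Return the disparity in 0..max_d minimizing SAD at pixel (x, y)."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        cost = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                # One shared d for the whole window: fronto-parallel model.
                cost += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

On a slanted surface, the true disparity varies across the window, so no single d fits all pixels and the minimum becomes unreliable.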
We developed an algorithm that detects and models slanted planes with arbitrary orientations. It can handle, for example, different roadside structures or the road itself, not only for cars but even at the significant roll angles of motorcycles. As a by-product, the different plane hypotheses also allow segmenting the scene into horizontal surfaces and vertical obstacles.
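The underlying geometry can be sketched as follows: in a rectified stereo pair, a 3-D plane induces an affine disparity field d(x, y) = a·x + b·y + c over the image, and the fronto-parallel case is just the special case a = b = 0. The parameter values below are made up for illustration and are not taken from the thesis.

```python
# A plane in the scene induces an affine disparity field in the image.
# Allowing arbitrary a, b covers slanted roads and rolled horizons; the
# sign of the slope also hints at surface vs. obstacle segmentation.

def plane_disparity(x, y, a, b, c):
    """Disparity induced at pixel (x, y) by plane parameters (a, b, c)."""
    return a * x + b * y + c

# Road-like plane: disparity grows toward the bottom of the image (larger y),
# i.e. toward nearer ground. Coefficients are illustrative.
road = lambda x, y: plane_disparity(x, y, a=0.0, b=0.125, c=2.0)
# A vertical obstacle at constant depth is fronto-parallel: a = b = 0.
wall = lambda x, y: plane_disparity(x, y, a=0.0, b=0.0, c=25.0)
```

Comparing which hypothesis (sloped or constant) explains the local disparities better is one simple way such plane models can separate drivable surfaces from obstacles.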
Our implementation of this algorithm can utilize all cores of either a CPU or a GPU while processing a single image pair. Performance and energy efficiency can be increased further by using both types of processors simultaneously and by assigning each sub-task to the more suitable one.
We have created a framework that performs this scheduling automatically for all processors of a heterogeneous system. A user merely selects one of three parallelization strategies; the framework then adapts continuously to each processor's performance and to the variable load from competing tasks.
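The kind of adaptive scheduling such a framework automates can be sketched as follows: each processor keeps a smoothed estimate of its per-item processing time, every new work item goes to the processor with the earliest expected finish, and the estimates are updated from measurements so the schedule reacts to competing load. All names, numbers, and the smoothing factor are illustrative assumptions, not the framework's actual API.

```python
# Earliest-finish-time dispatch with exponentially smoothed cost estimates.

class Processor:
    def __init__(self, name, cost_ms):
        self.name = name
        self.cost_ms = cost_ms  # smoothed per-item processing time (ms)
        self.busy_until = 0     # simulated time at which its queue drains

    def update_cost(self, measured_ms, alpha=0.3):
        """Blend in a new timing measurement; this is what lets the
        schedule adapt when competing tasks slow a processor down."""
        self.cost_ms = (1 - alpha) * self.cost_ms + alpha * measured_ms

def schedule_item(now_ms, procs):
    """Send one work item to the processor expected to finish it first."""
    best = min(procs, key=lambda p: max(p.busy_until, now_ms) + p.cost_ms)
    best.busy_until = max(best.busy_until, now_ms) + best.cost_ms
    return best.name

procs = [Processor("cpu", 20), Processor("gpu", 5)]
assignments = [schedule_item(0, procs) for _ in range(5)]
# The faster GPU absorbs most items until its backlog makes the CPU the
# earlier finisher for an occasional item.
```

With these example costs the GPU receives four of the first five items and the CPU one, which illustrates how work migrates between processors as their estimated finish times shift.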