
Fast and robust RGB-D scene labeling for autonomous driving

  • For autonomously driving cars and intelligent vehicles, it is crucial to understand the scene context, including the objects in the surroundings. A fundamental technique for accomplishing this is scene labeling, i.e., assigning a semantic class to each pixel of a scene image. This task is commonly tackled quite well by fully convolutional neural networks (FCNs), where a small model size and a low execution time are crucial factors. This work presents the first method that exploits depth cues together with confidence estimates in a CNN. To this end, a novel, experimentally grounded network architecture is proposed that performs robust scene labeling without costly additional components such as CRFs or LSTMs, as commonly used in related work. The effectiveness of this approach is demonstrated in an extensive evaluation on a challenging real-world dataset. The new architecture is highly optimized for high accuracy and low execution time.
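The paper's actual architecture is not reproduced on this page, but the core idea of the abstract, pixel-wise classification over a 4-channel RGB-D input with a fully convolutional network, can be sketched minimally. The following numpy snippet is a hypothetical illustration (random weights, one 3x3 feature layer plus a 1x1 per-pixel classifier), not the authors' network:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D convolution: input (H, W, C_in), kernel (kh, kw, C_in, C_out)."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def label_rgbd(rgbd, w_feat, w_cls):
    """Assign a semantic class index to each (valid) pixel of an RGB-D image."""
    feat = np.maximum(conv2d(rgbd, w_feat), 0.0)  # 3x3 conv + ReLU
    logits = conv2d(feat, w_cls)                  # 1x1 conv = per-pixel classifier
    return logits.argmax(axis=-1)                 # class index per pixel

rng = np.random.default_rng(0)
rgbd = rng.random((16, 16, 4))                    # H x W x (R, G, B, depth)
w_feat = rng.standard_normal((3, 3, 4, 8)) * 0.1  # illustrative random weights
w_cls = rng.standard_normal((1, 1, 8, 5)) * 0.1   # 5 hypothetical classes
labels = label_rgbd(rgbd, w_feat, w_cls)
print(labels.shape)                               # (14, 14): one class per pixel
```

Stacking depth as an extra input channel is only one of several ways to fuse depth cues; the paper's contribution concerns a dedicated architecture and confidence estimates, which this sketch does not attempt to model.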


Author (HS Reutlingen): Weber, Thomas; Rätsch, Matthias
Published in: Journal of Computers
Publisher: International Academy Publishing
Place of publication: San Bernardino, CA
Document Type: Journal article
Publication year: 2018
Tags: CNN architecture; deep convolutional neural networks; depth information; semantic pixel-wise segmentation
Page Number: 8
First Page: 393
Last Page: 400
DDC classes: 004 Computer science
Open access?: Yes
Licence: Open Access