Fast and robust RGB-D scene labeling for autonomous driving

  • For autonomous cars and intelligent vehicles it is crucial to understand the scene context, including the objects in their surroundings. A fundamental technique for accomplishing this is scene labeling: assigning a semantic class to each pixel of a scene image. This task is commonly tackled quite well by fully convolutional neural networks (FCNs), where a small model size and a low execution time are crucial factors. This work presents the first method that exploits depth cues together with confidence estimates in a CNN. To this end, a novel, experimentally grounded network architecture is proposed that performs robust scene labeling without costly postprocessing such as CRFs or LSTMs, as commonly used in related work. The effectiveness of this approach is demonstrated in an extensive evaluation on a challenging real-world dataset. The new architecture is highly optimized for high accuracy and low execution time.
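The core task described above, per-pixel semantic classification of an RGB-D image, can be illustrated with a minimal sketch. This is not the authors' architecture; it assumes only that each pixel's four-channel RGB-D feature vector is mapped to per-class scores (here by a single hypothetical 1x1 linear layer standing in for a learned FCN) and labeled by argmax:

```python
import numpy as np

def label_rgbd(image, weights, bias):
    """Assign a semantic class to each pixel of an RGB-D image.

    image:   (H, W, 4) array, channels = R, G, B, depth
    weights: (4, C) linear map to C class scores (stand-in for an FCN)
    bias:    (C,) per-class bias
    Returns a (H, W) array of class indices.
    """
    scores = image @ weights + bias   # (H, W, C) per-pixel class scores
    return scores.argmax(axis=-1)     # pick the highest-scoring class per pixel

# Tiny example: a 4x4 image, 3 hypothetical classes (e.g. road, car, other).
rng = np.random.default_rng(0)
img = rng.random((4, 4, 4))           # RGB + depth channels in [0, 1)
w = rng.standard_normal((4, 3))
b = np.zeros(3)
labels = label_rgbd(img, w, b)
print(labels.shape)                   # (4, 4): one class index per pixel
```

In a real FCN the linear map is replaced by stacked convolutions so that each pixel's prediction also depends on its spatial context, but the input/output contract (RGB-D in, dense label map out) is the same.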

Metadata
Author: Weber, Thomas; Rätsch, Matthias
URN: urn:nbn:de:bsz:rt2-opus4-17016
URL: http://www.jcomputers.us/list-196-1.html
ISSN: 1796-203X
Published in: Journal of Computers
Publisher: International Academy Publishing
Place of publication: San Bernardino, Ca.
Document Type: Article
Language: English
Year of Publication: 2018
Tags: CNN architecture; deep convolutional neural networks; depth information; semantic pixel-wise segmentation
Volume: 13
Issue: 4
Number of pages: 8
First Page: 393
Last Page: 400
Dewey Decimal Classification: 004 Computer science
Access Rights: Yes
Licence: Open Access