
Multimodal neural networks: RGB-D for semantic segmentation and object detection

  • This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to sole color information. The joint model implements a mid-level fusion that allows the network to exploit cross-modal interdependencies already at a medium feature level. The benefit of the presented architecture is demonstrated on the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed to pre-train the depth branch of the multi-modal CNN independently. In an end-to-end training, the network parameters are optimized jointly using the challenging Cityscapes dataset. Thorough experiments show the effectiveness of the proposed model: both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks, semantic segmentation and object detection. For the latter, this paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
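The mid-level fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's GoogLeNet-based architecture; all layer shapes and the single-linear-map "branches" are illustrative assumptions.

```python
import numpy as np

def branch(x, w):
    # Placeholder feature extractor: one linear map followed by ReLU.
    # In the paper each modality has its own full CNN branch instead.
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
rgb = rng.standard_normal((1, 12))    # assumed RGB input features
depth = rng.standard_normal((1, 4))   # assumed depth input features

w_rgb = rng.standard_normal((12, 8))
w_depth = rng.standard_normal((4, 8))

# Mid-level fusion: each modality is first processed independently,
# then the medium-level feature representations are concatenated and
# passed through shared layers, so cross-modal interdependencies can
# already be learned at this intermediate stage.
f_rgb = branch(rgb, w_rgb)        # shape (1, 8)
f_depth = branch(depth, w_depth)  # shape (1, 8)
fused = np.concatenate([f_rgb, f_depth], axis=1)  # shape (1, 16)

w_shared = rng.standard_normal((16, 2))
out = branch(fused, w_shared)     # joint output features, shape (1, 2)
```

The contrast with late fusion is that the concatenation happens before the shared layers, rather than merging two fully independent per-modality predictions at the very end.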

Metadata
Authors:Jasch, Manuel; Weber, Thomas; Rätsch, Matthias
DOI:https://doi.org/10.1007/978-3-319-59126-1_9
ISBN:978-3-319-59126-1
Published in:Image analysis : 20th Scandinavian conference, SCIA 2017, Tromsø, Norway, June 12-14, 2017, proceedings, part I. - (Lecture notes in computer science ; 10269)
Publisher:Springer
Place of publication:Cham
Editor:Puneet Sharma
Document Type:Conference Proceeding
Language:English
Year of Publication:2017
Number of pages:12
First Page:98
Last Page:109
Catalogue entry:View in the Reutlingen University library catalogue
Dewey Decimal Classification:006 Special computer methods
Open Access:No
Licence (German):Springer licence terms