
Multimodal neural networks: RGB-D for semantic segmentation and object detection

  • This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to sole color information. The joint model implements mid-level fusion, allowing the network to exploit cross-modal interdependencies already at a medium feature level. The benefit of the presented architecture is demonstrated on the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed that pre-trains the depth branch of the multi-modal CNN independently. The network parameters are then optimized jointly in an end-to-end training on the challenging Cityscapes dataset. Thorough experiments show the effectiveness of the proposed model: both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks, semantic segmentation and object detection. For the latter, the paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
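The mid-level fusion idea from the abstract can be sketched as two independent convolutional branches (one per modality) whose feature maps are concatenated at a medium depth, so that the joint layers after the fusion point can learn cross-modal interdependencies. The sketch below is a hypothetical minimal illustration in PyTorch, not the paper's actual GoogLeNet-based architecture; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Small conv -> BN -> ReLU -> pool unit; stands in for a real backbone stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class MidFusionRGBD(nn.Module):
    """Hypothetical two-branch CNN with mid-level fusion (illustration only)."""

    def __init__(self, num_classes=19):  # 19 = Cityscapes evaluation classes
        super().__init__()
        # Separate branches let the depth branch be pre-trained independently,
        # as the abstract proposes, before joint end-to-end training.
        self.rgb_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Joint trunk after the fusion point operates on concatenated features.
        self.trunk = nn.Sequential(
            conv_block(64 + 64, 128),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )

    def forward(self, rgb, depth):
        # Mid-level fusion: concatenate feature maps along the channel axis.
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.trunk(fused)  # coarse per-class logits

model = MidFusionRGBD()
rgb = torch.randn(1, 3, 64, 128)    # toy RGB input
depth = torch.randn(1, 1, 64, 128)  # toy single-channel depth input
out = model(rgb, depth)
print(tuple(out.shape))  # (1, 19, 8, 16): three 2x pools shrink 64x128 to 8x16
```

For dense semantic segmentation the coarse logits would still need upsampling to the input resolution; the sketch stops at the fusion mechanism itself.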

Author of HS Reutlingen: Jasch, Manuel; Weber, Thomas; Rätsch, Matthias
Published in: Image analysis : 20th Scandinavian conference, SCIA 2017, Tromsø, Norway, June 12-14, 2017, proceedings, part I. - (Lecture notes in computer science ; 10269)
Place of publication: Cham
Editor: Puneet Sharma
Document Type: Conference proceeding
Publication year: 2017
Page Number: 12
First Page: 98
Last Page: 109
PPN: View in the catalog of Reutlingen University
DDC classes: 006 Special computer methods
Open access?: No
Licence: In Copyright - Protected by copyright