
Adapting egocentric visual hand pose estimation towards a robot-controlled exoskeleton

  • The basic idea behind a wearable robotic grasp assistance system is to support people suffering from severe motor impairments in daily activities. Such a system needs to act largely autonomously and in accordance with the user's intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in simulation. For hand pose estimation we adopt a Convolutional Neural Network (CNN), which we train and evaluate on computer graphics created by our own data generator. To guide further design decisions, our experiments focus on two egocentric camera viewpoints, tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that, in the context of our simulation, hand pose estimation with a wrist-mounted camera is more accurate than with a head-mounted camera. Furthermore, a grasp assistance system attached to the hand alters the visual appearance and can improve hand pose estimation. Our experiment provides useful insights for the integration of sensors into a context-sensitive analysis framework for intelligent assistance.
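Generating synthetic training data from a 3D-scanned hand model, as the abstract describes, requires projecting the model's ground-truth keypoints into each simulated camera view. A minimal sketch of such a pinhole projection, assuming a standard 21-keypoint hand skeleton (all names, intrinsics, and poses below are illustrative, not taken from the paper):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D keypoints (N, 3) into the image plane of a pinhole camera.

    K: 3x3 camera intrinsics; R: 3x3 rotation, t: (3,) translation
    (both mapping world coordinates into camera coordinates).
    Returns (N, 2) pixel coordinates.
    """
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

# Hypothetical 21-keypoint hand a short distance in front of the camera (metres).
rng = np.random.default_rng(0)
hand = rng.uniform([-0.05, -0.05, 0.35], [0.05, 0.05, 0.45], size=(21, 3))

# Illustrative VGA intrinsics (focal length 600 px, principal point at centre).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# An identity pose stands in for a wrist-mounted camera looking at the hand;
# a head-mounted view would simply use a different (R, t).
uv = project_points(hand, K, np.eye(3), np.zeros(3))
```

Rendering the hand model under two such `(R, t)` poses is what makes the wrist-mounted versus head-mounted comparison possible on identical ground truth.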

Author of HS Reutlingen: Gulde, Thomas; Curio, Cristóbal
Published in: Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8–14, 2018, proceedings. Part 4. (Lecture Notes in Computer Science; 11134)
Place of publication: Cham
Editor: Laura Leal-Taixé
Document Type: Conference Proceeding
Year of Publication: 2018
Page Number: 16
First Page: 241
Last Page: 256
DDC classes: 004 Computer science
Open Access?: No
Licence: Springer licence terms (Lizenzbedingungen Springer)