TY - CHAP
U1 - Conference publication
A1 - Baulig, Gerald
A1 - Gulde, Thomas
A1 - Curio, Cristóbal
ED - Leal-Taixé, Laura
T1 - Adapting egocentric visual hand pose estimation towards a robot-controlled exoskeleton
T2 - Computer Vision – ECCV 2018 Workshops : Munich, Germany, September 8-14, 2018, proceedings. Part 4 (Lecture notes in computer science ; 11134)
N2 - The basic idea behind a wearable robotic grasp assistance system is to support people who suffer from severe motor impairments in daily activities. Such a system needs to act mostly autonomously and according to the user's intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in a simulation. For hand pose estimation we adopt a Convolutional Neural Network (CNN). We train and evaluate this network with computer graphics created by our own data generator. In order to guide further design decisions, we focus our experiments on two egocentric camera viewpoints tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that hand pose estimation with a wrist-mounted camera performs more accurately than with a head-mounted camera in the context of our simulation. Further, a grasp assistance system attached to the hand alters visual appearance and can improve hand pose estimation. Our experiment provides useful insights for the integration of sensors into a context-sensitive analysis framework for intelligent assistance.
Y1 - 2018
SN - 978-3-030-11024-6
SB - 978-3-030-11024-6
U6 - https://doi.org/10.1007/978-3-030-11024-6_16
DO - https://doi.org/10.1007/978-3-030-11024-6_16
SP - 241
EP - 256
S1 - 16
PB - Springer
CY - Cham
ER -