Adapting egocentric visual hand pose estimation towards a robot-controlled exoskeleton
The basic idea behind a wearable robotic grasp assistance system is to support people who suffer from severe motor impairments in daily activities. Such a system needs to act largely autonomously and in accordance with the user's intent. Vision-based hand pose estimation could be an integral part of a larger control and assistance framework. In this paper we evaluate the performance of egocentric monocular hand pose estimation for a robot-controlled hand exoskeleton in a simulation. For hand pose estimation we adopt a Convolutional Neural Network (CNN). We train and evaluate this network on computer-graphics imagery created by our own data generator. To guide further design decisions, our experiments focus on two egocentric camera viewpoints, tested on synthetic data with the help of a 3D-scanned hand model, with and without an exoskeleton attached to it. We observe that, in the context of our simulation, hand pose estimation with a wrist-mounted camera is more accurate than with a head-mounted camera. Furthermore, a grasp assistance system attached to the hand alters the hand's visual appearance and can improve hand pose estimation. Our experiments provide useful insights for the integration of sensors into a context-sensitive analysis framework for intelligent assistance.
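As a rough illustration of the pipeline the abstract describes (a CNN regressing hand pose from monocular egocentric images, trained on synthetic renders), here is a minimal sketch. It is not the authors' code: the network architecture, the 21-keypoint convention, and the `synthetic_batch` stand-in for the paper's computer-graphics data generator are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): a small CNN that
# regresses hand-keypoint coordinates from a monocular egocentric image,
# trained on placeholder "synthetic" data.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 21  # assumption: common 21-keypoint hand convention


class HandPoseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress (x, y, z) per keypoint in the camera frame.
        self.head = nn.Linear(128, NUM_KEYPOINTS * 3)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).view(-1, NUM_KEYPOINTS, 3)


def synthetic_batch(batch_size=8, size=128):
    """Placeholder for the paper's CG data generator: random images
    paired with random keypoint targets, just to make the loop run."""
    images = torch.rand(batch_size, 3, size, size)
    keypoints = torch.rand(batch_size, NUM_KEYPOINTS, 3)
    return images, keypoints


model = HandPoseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    images, targets = synthetic_batch()
    loss = loss_fn(model(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's setting, the same kind of regressor would be trained twice on renders from the two candidate viewpoints (wrist-mounted vs. head-mounted camera), with and without the exoskeleton on the 3D-scanned hand model, and the keypoint errors compared.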
| Authors (HS Reutlingen): | Gulde, Thomas; Curio, Cristóbal |
|---|---|
| DOI: | https://doi.org/10.1007/978-3-030-11024-6_16 |
| ISBN: | 978-3-030-11024-6 |
| Published in: | Computer Vision – ECCV 2018 Workshops: Munich, Germany, September 8–14, 2018, Proceedings, Part IV (Lecture Notes in Computer Science; vol. 11134) |
| Publisher: | Springer |
| Place of publication: | Cham |
| Editor: | Laura Leal-Taixé |
| Document Type: | Conference proceeding |
| Language: | English |
| Publication year: | 2018 |
| Number of pages: | 16 |
| First page: | 241 |
| Last page: | 256 |
| DDC classes: | 004 Computer science |
| Open access?: | No |
| Licence: | In Copyright (urheberrechtlich geschützt) |