TY - CHAP
U1 - Conference publication
A1 - Essich, Michael
A1 - Ludl, Dennis
A1 - Gulde, Thomas
A1 - Curio, Cristóbal
ED - Laurendeau, Denis
T1 - Learning to translate between real world and simulated 3D sensors while transferring task models
T2 - 2019 International Conference on 3D Vision : 3DV 2019 : Quebec, Canada, 15-18 September 2019 : proceedings
N2 - Learning-based vision tasks are usually specialized to the sensor technology for which data has been labeled. The knowledge captured by a learned model is of little use when the data differs from that on which the model was originally trained, or when the model is applied to an entirely different imaging or sensor source. New labeled data must then be acquired to train a new model. Depending on the sensor, this becomes even more complicated when the sensor data is abstract and hard for humans to interpret and label. Enabling the reuse of models trained for a specific task across different sensors minimizes the data acquisition effort. This work therefore focuses on learning sensor models and translating between them, aiming for sensor interoperability. We show that even for the complex task of human pose estimation from 3D depth data recorded with different sensors, i.e. a simulated and a Kinect 2™ depth sensor, pose estimation can be greatly improved by translating between sensor models without modifying the original task model. This approach especially benefits sensors and applications for which labels and models are difficult, if not impossible, to retrieve from raw sensor data.
Y1 - 2019
SN - 978-1-72813-131-3
DO - 10.1109/3DV.2019.00080
UR - https://doi.org/10.1109/3DV.2019.00080
SP - 681
EP - 689
PB - IEEE
CY - Piscataway, NJ
ER -