International Conference on Computer Vision (ICCV) 2021
Abstract
We present, for the first time, a comprehensive framework for egocentric interaction recognition using markerless 3D annotations of two hands manipulating objects. To this end, we propose a method to create a unified dataset for egocentric 3D interaction recognition. Our method produces annotations of the 3D pose of two hands and the 6D pose of the manipulated objects, along with their interaction labels for each frame. Our dataset, called H2O (2 Hands and Objects), provides synchronized multi-view RGB-D images, interaction labels, object classes, ground-truth 3D poses for the left and right hands, 6D object poses, ground-truth camera poses, object meshes, and scene point clouds. To the best of our knowledge, this is the first benchmark that enables the study of first-person actions with the use of the poses of both left and right hands manipulating objects, and it presents an unprecedented level of detail for egocentric 3D interaction recognition. We further propose the first method to predict interaction classes by estimating the 3D pose of two hands and the 6D pose of the manipulated objects, jointly from RGB images. Our method models both inter- and intra-dependencies between hands and objects by learning the topology of a graph convolutional network that predicts interactions. We show that our method, facilitated by this dataset, establishes a strong baseline for joint hand-object pose estimation and achieves state-of-the-art accuracy for first-person interaction recognition.
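The abstract mentions learning the topology of a graph convolutional network to model dependencies between the hands and the object. A minimal sketch of this idea, with entirely illustrative names and sizes (the node count of 43 assumes 21 joints per hand plus one object node, and the softmax normalization is an assumption, not the authors' formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(X, A_logits, W):
    """One graph-convolution layer with a learnable adjacency.
    X: (N, F) node features (hand joints + object node).
    A_logits: (N, N) learnable topology parameters.
    W: (F, F_out) feature transform. Returns (N, F_out)."""
    A = softmax(A_logits, axis=1)        # learned, row-normalized adjacency
    return np.maximum(A @ X @ W, 0.0)    # aggregate neighbors, then ReLU

rng = np.random.default_rng(0)
N, F, F_out = 43, 16, 32                 # 21 joints x 2 hands + 1 object node
X = rng.normal(size=(N, F))
A_logits = rng.normal(size=(N, N))       # in training, optimized jointly
W = rng.normal(size=(F, F_out))
H = gcn_layer(X, A_logits, W)            # per-node features for recognition
```

Because `A_logits` is a free parameter, gradient descent can discover which hand-object and hand-hand connections matter, rather than fixing the graph by hand.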
Pipeline Overview
(a) We calibrate cameras using IR sphere markers and PnP, (b) create object meshes using BADSLAM on RGB-D captures, and (c) estimate object poses using DenseFusion on RGB-D images together with mask images from Mask R-CNN, selecting the pose with the highest confidence among the five cameras. (d) Next, we detect hand joints with OpenPose and optimize the hand shape. (e) Finally, we detect and smooth temporally inaccurate poses.
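Step (c) picks, per frame, the object pose whose estimator reported the highest confidence across the camera views. A hypothetical sketch of that selection (the function name, data layout, and numbers are illustrative, not the authors' code):

```python
def select_best_pose(per_camera_estimates):
    """per_camera_estimates: list of (pose, confidence) tuples,
    one per camera view. Returns the pose with the highest
    reported confidence."""
    best_pose, _ = max(per_camera_estimates, key=lambda pc: pc[1])
    return best_pose

# Example with five cameras: each pose is a placeholder 6D pose
# (3 rotation + 3 translation parameters) with a confidence score.
estimates = [
    ((0.0, 0.1, 0.0, 0.30, 0.00, 0.95), 0.71),
    ((0.1, 0.0, 0.0, 0.31, 0.01, 0.94), 0.88),
    ((-0.1, 0.1, 0.1, 0.29, 0.02, 0.96), 0.64),
    ((0.0, 0.0, 0.1, 0.30, 0.00, 0.97), 0.92),
    ((0.1, 0.1, 0.0, 0.32, 0.01, 0.95), 0.80),
]
best = select_best_pose(estimates)  # pose from the fourth camera
```

Per-frame selection like this lets a view with an unoccluded object override views where the hands cover it.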
Video
Workshop
Human Body, Hands, and Activities from Egocentric and Multi-view Cameras @ ECCV 2022
Result submission & Extended abstract deadline: Oct. 8th, 2022
Participants are encouraged to submit a 2-4 page extended abstract to our ECCV workshop.
The total prize value will be up to 4000 CHF (approx. 4000 USD) across our two challenges. For each challenge, we will select the top three entries on the leaderboard with a valid submission (including an extended abstract).
Citation
H2O: Two Hands Manipulating Objects for First Person Interaction Recognition
Taein Kwon, Bugra Tekin, Jan Stühmer, Federica Bogo, Marc Pollefeys
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021
@InProceedings{Kwon_2021_ICCV,
  author    = {Kwon, Taein and Tekin, Bugra and St\"uhmer, Jan and Bogo, Federica and Pollefeys, Marc},
  title     = {H2O: Two Hands Manipulating Objects for First Person Interaction Recognition},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {10138-10148}
}
Team
Taein Kwon | Bugra Tekin | Jan Stühmer | Federica Bogo | Marc Pollefeys