My research interests are Action Recognition, Egocentric Vision, Multimodal Human-AI Interaction, Mixed Reality, and Computer Vision.
Previously, I received my M.S. in Electrical and Computer Engineering from UCLA, and my B.S. in Electrical Engineering from Yonsei University, Seoul, Korea.
EgoBody is a large-scale dataset capturing ground-truth 3D human motions during social interactions in 3D scenes. Given two interacting subjects, we leverage a lightweight multi-camera rig to reconstruct their 3D shape and pose over time.
Our dataset, called H2O (2 Hands and Objects), provides synchronized multi-view RGB-D images, interaction labels, object classes, ground-truth 3D poses for the left and right hands, 6D object poses, ground-truth camera poses, object meshes, and scene point clouds.
Scholarship, Recipient of Korean Government Scholarship from NIIED 2018
Scholarship, Yonsei International Foundation 2016
IBM Innovation Prize, Startup Weekend, Technology Competition 2015
Best Technology Prize, Internet of Things (IoT) Hackathon by the government of Korea 2014
Best Laboratory Intern, Yonsei Institute of Information and Communication Technology 2014
Scholarship, Yonsei University Foundation 2014, 2010
Creative Prize, Startup Competition, Yonsei University 2014
Scholarship, Korean Telecom Group Foundation 2011
Talks
2022/03/30: Context-Aware Sequence Alignment using 4D Skeletal Augmentation. Applied Machine Learning Days (AMLD) @EPFL & Swiss JRC [Link]
2021/10/17: H2O: Two Hands Manipulating Objects for First Person Interaction Recognition. ICCV 2021 Workshop on Egocentric Perception, Interaction and Computing (EPIC) [Link|Video]
2021/04/20: H2O: Two Hands Manipulating Objects for First Person Interaction Recognition. Swiss Joint Research Center (JRC) Workshop 2021 [Link|Video]