Real-Time Human Pose and Shape Estimation for Virtual Try-on Using a Single Commodity Depth Camera

Apr 1, 2014
Mao Ye, Huamin Wang, Nianchen Deng, Xubo Yang, Ruigang Yang
Abstract
We present a system that allows the user to virtually try on new clothes. It uses a single commodity depth camera to capture the user in 3D. Both the pose and the shape of the user are estimated with a novel real-time template-based approach that performs tracking and shape adaptation jointly. The result is then used to drive realistic cloth simulation, in which the synthesized clothes are overlaid on the input image. The main challenge is to handle missing data and pose ambiguities due to the monocular setup, which captures less than 50 percent of the full body. Our solution is to incorporate automatic shape adaptation and novel constraints into pose tracking. The effectiveness of our system is demonstrated with a number of examples.
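As a rough illustration of the kind of joint pose-and-shape fitting described above (not the paper's actual algorithm), the Python sketch below fits a rigid pose and a single uniform scale to a partial point cloud using an off-the-shelf optimizer. The function names, the simplified parameterization, and the closest-point energy are assumptions made for exposition only; the real system uses an articulated body template, per-part shape adaptation, and additional constraints to cope with the missing data mentioned in the abstract.

```python
# Illustrative sketch only: a toy joint pose-and-shape fit to a partial
# point cloud. The parameterization (one global rotation/translation plus
# a uniform scale) is a deliberate simplification of full-body tracking.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree


def rodrigues(r):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-8:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)


def fit_pose_and_shape(template, observed):
    """Jointly fit a rigid pose (R, t) and a uniform scale s so the deformed
    template matches a partial point cloud from a single depth view."""
    tree = cKDTree(observed)            # fast closest-point lookup
    x0 = np.zeros(7)                    # [axis-angle(3), t(3), log-scale(1)]

    def energy(x):
        R, t, s = rodrigues(x[:3]), x[3:6], np.exp(x[6])
        deformed = (s * template) @ R.T + t
        d, _ = tree.query(deformed)     # distance to nearest observed point
        return np.mean(d ** 2)          # note: ignores occluded/missing regions

    x = minimize(energy, x0, method="Powell").x
    return rodrigues(x[:3]), x[3:6], np.exp(x[6])


# Toy usage: recover pose and scale of a synthetic shape seen only partially.
rng = np.random.default_rng(0)
template = rng.normal(size=(400, 3))
R_true = rodrigues(np.array([0.0, 0.3, 0.0]))
observed = (1.1 * template[:200]) @ R_true.T + np.array([0.1, 0.0, 0.5])
R_est, t_est, s_est = fit_pose_and_shape(template, observed)
```

Because the energy only pulls template points toward observed data, points in unseen regions are unconstrained; handling that ambiguity robustly is precisely what the paper's shape adaptation and tracking constraints address.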
Type
Publication
IEEE Transactions on Visualization and Computer Graphics (IEEE VR), 20(4)
