Zhouyingcheng Liao (ε»–ε‘¨εΊ”ζˆ)

Ph.D. Student

The University of Hong Kong (HKU)

Email: zycliao@cs.hku.hk or zycliao@gmail.com

Google Scholar | LinkedIn | GitHub | Instagram

πŸ˜„ I have been a Ph.D. student at the University of Hong Kong since Jan. 2023, supervised by Taku Komura. I obtained my M.Sc. degree at Saarland University and the Max Planck Institute for Informatics (supervisor: Marc Habermann) and my B.Sc. degree at Shanghai Jiao Tong University.

I have had several wonderful research internships at companies and institutes: Adobe Research (2021, mentor: Yang Zhou), miHoYo (2020, mentor: Jun Xing), and MPI-INF (2019, supervisor: Gerard Pons-Moll).

My research interests lie at the intersection of computer vision and computer graphics. More specifically, I am interested in data-driven modeling and animation of digital characters, including both the body and the clothing. My research goal is to develop algorithms that free content creators from tedious manual labor.

πŸ™Œ For Prospective Students

Our team is seeking undergraduate/master students interested in human pose estimation, motion control, and Large-Language-Motion-Models (LLMM). We offer opportunities to work on your thesis or join as a Research Assistant.

πŸ‘‡ What you can get:
      βœ… Cutting-edge research in human-centric computer graphics.
      βœ… Hands-on experience with high-end equipment: advanced mesh scanners and motion capture systems.
      βœ… Collaborative thesis development or full-time research role.
      βœ… Recommendation letters for your applications.
If you are interested, just send me an email (zycliao@cs.hku.hk).

πŸ“š Publications

EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Human Motion Generation

Wenyang Zhou, Zhiyang Dou, Zeyu Cao, Zhouyingcheng Liao, Jingbo Wang, Wenjia Wang, Yuan Liu, Taku Komura, Wenping Wang, Lingjie Liu

arXiv 2023

[webpage] [paper] [video]

A fast, high-quality human motion generation method that takes only 0.05 s to generate a 196-frame sequence.

VINECS: Video-based Neural Character Skinning

Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt

CVPR 2024

[paper]

The first end-to-end method for generating a dense and rigged 3D character mesh with learned pose-dependent skinning weights solely from multi-view videos.

Skeleton-free Pose Transfer for Stylized 3D Characters

Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou

ECCV 2022

[webpage] [paper] [code]

The first neural method that achieves automatic pose transfer between any stylized 3D characters, without any rigging, skinning or manual correspondence.

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style

Chaitanya Patel*, Zhouyingcheng Liao*, Gerard Pons-Moll (*: co-first author)

CVPR 2020 oral

[webpage] [paper] [video] [code] [data]

TailorNet is the first neural model that predicts clothing deformation in 3D as a function of three factors: pose, shape, and style (garment geometry), while retaining wrinkle detail. We also present a garment animation dataset of 55,800 frames generated by physics-based simulation.

Live Face Verification with Multiple Instantialized Local Homographic Parameterization

Chen Lin, Zhouyingcheng Liao, Peng Zhou, Jianguo Hu, Bingbing Ni

IJCAI 2018

[paper]

Uniface: A Unified Network for Face Detection and Recognition

Zhouyingcheng Liao, Peng Zhou, Qinlong Wu, Bingbing Ni

ICPR 2018

[paper]

πŸŽ“ Academic Service

Reviewer

Eurographics 2024

CVPR 2023, 2024

SIGGRAPH Asia 2023

TVCG

Computers & Graphics

Teaching Assistant

Computer Game Design and Programming (COMP3329@HKU) Spring 2024

Computer Game Design and Programming (COMP3329@HKU) Spring 2023