Zhouyingcheng Liao (ε»–ε‘¨εΊ”ζˆ)

Ph.D. Student

The University of Hong Kong (HKU)

Email: zycliao@cs.hku.hk or zycliao@gmail.com

Google Scholar LinkedIn GitHub Instagram

πŸ˜„ I have been a Ph.D. student at the University of Hong Kong since Jan. 2023, supervised by Taku Komura. I obtained my M.Sc. degree at Saarland University and the Max Planck Institute for Informatics (supervisor: Marc Habermann) and my B.Sc. degree at Shanghai Jiao Tong University.

I have had several wonderful research internship experiences at different companies and institutes: Adobe Research (2021, mentor: Yang Zhou), miHoYo (2020, mentor: Jun Xing), and MPI-INF (2019, supervisor: Gerard Pons-Moll).

My research interests lie at the intersection of computer vision and computer graphics. More specifically, I am interested in data-driven modeling and animation of digital characters, including both the body and the clothing. My research goal is to develop algorithms that free content creators from tedious manual labor.

πŸ“š Publications

SENC: Handling Self-collision in Neural Cloth Simulation

Zhouyingcheng Liao*, Sinan Wang*, Taku Komura

ECCV 2024

[webpage]

A self-supervised neural cloth simulator that effectively addresses cloth self-collision.

EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Human Motion Generation

Wenyang Zhou, Zhiyang Dou, Zeyu Cao, Zhouyingcheng Liao, Jingbo Wang, Wenjia Wang, Yuan Liu, Taku Komura, Wenping Wang, Lingjie Liu

ECCV 2024

[webpage] [paper] [video]

A fast and high-quality human motion generation method, which takes only 0.05s for a sequence of 196 frames.

VINECS: Video-based Neural Character Skinning

Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt

CVPR 2024

[paper]

The first end-to-end method for generating a dense and rigged 3D character mesh with learned pose-dependent skinning weights solely from multi-view videos.

Skeleton-free Pose Transfer for Stylized 3D Characters

Zhouyingcheng Liao, Jimei Yang, Jun Saito, Gerard Pons-Moll, Yang Zhou

ECCV 2022

[webpage] [paper] [code]

The first neural method that achieves automatic pose transfer between any stylized 3D characters, without any rigging, skinning or manual correspondence.

TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style

Chaitanya Patel*, Zhouyingcheng Liao*, Gerard Pons-Moll (*: co-first author)

CVPR 2020 (Oral)

[webpage] [paper] [video] [code] [data]

TailorNet is the first neural model that predicts clothing deformation in 3D as a function of three factors: pose, shape, and style (garment geometry), while retaining wrinkle detail. We also present a garment animation dataset of 55,800 frames generated by physically based simulation.

Live Face Verification with Multiple Instantialized Local Homographic Parameterization

Chen Lin, Zhouyingcheng Liao, Peng Zhou, Jianguo Hu, Bingbing Ni

IJCAI 2018

[paper]

Uniface: A Unified Network for Face Detection and Recognition

Zhouyingcheng Liao, Peng Zhou, Qinlong Wu, Bingbing Ni

ICPR 2018

[paper]

πŸŽ“ Academic Service

Reviewer

SIGGRAPH Asia 2023, 2024

Eurographics 2024

CVPR 2023, 2024

TVCG

Computers & Graphics

Teaching Assistant

Data-driven Computer Animation (COMP3360/7508@HKU) Spring 2024

Computer Game Design and Programming (COMP3329@HKU) Spring 2024

Computer Game Design and Programming (COMP3329@HKU) Spring 2023