Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones
T. Menzel, M. Botsch, and M. E. Latoschik. In Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), Article 22. Association for Computing Machinery, New York, NY, USA, 2022.
DOI: 10.1145/3562939.3565622
Abstract
Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR-platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.
@inproceedings{menzel2022automated,
abstract = {Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR-platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.},
address = {New York, NY, USA},
author = {Menzel, Timo and Botsch, Mario and Latoschik, Marc Erich},
booktitle = {Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22)},
doi = {10.1145/3562939.3565622},
isbn = {9781450398893},
keywords = {myown via-vr},
articleno = {22},
publisher = {Association for Computing Machinery},
title = {Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones},
url = {https://doi.org/10.1145/3562939.3565622},
venue = {Tsukuba, Japan},
year = 2022
}