An Evaluation of Other-Avatar Facial Animation Methods for Social VR

Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–7. New York, NY, USA: Association for Computing Machinery, 2023.
DOI: 10.1145/3544549.3585617

Abstract

We report a mixed-design study on the effects of facial animation method (static, synthesized, or tracked expressions) and its synchronization with speaker audio (in sync or delayed by the method's inherent latency) on an avatar's perceived naturalness and plausibility. We created a virtual human for an actor and recorded his spontaneous half-minute responses to conversation prompts. As a simulated immersive interaction, 44 participants unfamiliar with the actor observed and rated performances rendered with the avatar, each using a different facial animation method. Half of the participants observed performances in sync with the audio; the other half observed them delayed by the animation method's latency. Results show that audio synchronization did not influence ratings, and that static faces were rated as less natural and less plausible than animated faces. Notably, synthesized expressions were rated as more natural and more plausible than tracked expressions. Moreover, ratings of verbal behavior naturalness differed in the same way. We discuss the implications of these results for avatar-mediated communication.
