Title: ANIMATING AN AUTONOMOUS 3D TALKING AVATAR
Author(s): Dominik Borer, Dominic Lutz, Robert W. Sumner and Martin Guay
ISBN: 978-989-8533-91-3
Editors: Katherine Blashki and Yingcai Xiao
Year: 2019
Edition: Single
Keywords: Conversational Embodiment, Interactive Conversational Agent, Parametric Body Motion
Type: Full Paper
First Page: 263
Last Page: 274
Language: English
Paper Abstract:
One of the main challenges in embodying a conversational agent is annotating how and when motions can be played and composed together in real time without visual artifacts. The inherent problem is to do so for a large number of motions without introducing mistakes in the annotation. To our knowledge, there is no automatic method that can process animations and automatically label actions and the compatibility between them. In practice, a state machine, where clips are the actions, is created manually by setting connections between the states along with the timing parameters for these connections. Authoring this state machine for a large number of motions leads to visual overload and increases the number of possible mistakes. As a consequence, conversational agent embodiments are left with little variation and quickly become repetitive. In this paper, we address this problem with a compact taxonomy of chit-chat behaviors, which we use to simplify and partially automate the graph authoring process. We measured the time required to label the actions of an embodiment using our simple interface, compared to the standard state machine interface in Unreal Engine, and found that our approach is 7 times faster. We believe that our labeling approach could be a path to automated labeling: once a subset of motions is labeled (using our interface), we could learn a predictor that attributes labels to new clips, allowing virtual agent embodiments to truly scale up.
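To make the abstract's core idea concrete, here is a minimal sketch of a clip state machine in which transitions are derived from per-clip taxonomy labels rather than wired by hand. All names here (ChitChatLabel, Clip, blend_window, the COMPATIBLE table) are illustrative assumptions for this sketch, not the paper's actual taxonomy or API.

```python
# Sketch: clips are states; transitions carry timing parameters; a compact
# per-label compatibility table replaces per-clip manual connections.
from dataclasses import dataclass
from enum import Enum, auto
from itertools import product


class ChitChatLabel(Enum):
    """Hypothetical taxonomy of chit-chat behaviors (assumed for this sketch)."""
    IDLE = auto()
    GESTURE = auto()
    POSTURE_SHIFT = auto()


@dataclass(frozen=True)
class Clip:
    name: str
    label: ChitChatLabel
    duration: float  # seconds


@dataclass(frozen=True)
class Transition:
    src: Clip
    dst: Clip
    blend_window: float  # seconds over which the two clips are blended


# Which labels may follow which: authoring effort grows with the number of
# labels, not with the number of clips.
COMPATIBLE = {
    ChitChatLabel.IDLE: {ChitChatLabel.GESTURE, ChitChatLabel.POSTURE_SHIFT},
    ChitChatLabel.GESTURE: {ChitChatLabel.IDLE},
    ChitChatLabel.POSTURE_SHIFT: {ChitChatLabel.IDLE},
}


def build_graph(clips, blend_window=0.3):
    """Derive every transition from the label compatibility table."""
    return [
        Transition(a, b, blend_window)
        for a, b in product(clips, clips)
        if a is not b and b.label in COMPATIBLE[a.label]
    ]


if __name__ == "__main__":
    clips = [
        Clip("idle_breathe", ChitChatLabel.IDLE, 4.0),
        Clip("hand_wave", ChitChatLabel.GESTURE, 1.5),
        Clip("weight_shift", ChitChatLabel.POSTURE_SHIFT, 2.0),
    ]
    for t in build_graph(clips):
        print(f"{t.src.name} -> {t.dst.name} (blend {t.blend_window}s)")
```

Under these assumptions, adding a new clip only requires assigning it one taxonomy label; all of its incoming and outgoing transitions follow automatically, which is the authoring saving the abstract quantifies.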