ORIGINAL ARTICLE
Fine-grained talking face generation with video reinterpretation

Xin Huang1 · Mingjie Wang1 · Minglun Gong2
Accepted: 12 September 2020 © Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
Generating a talking face video from a given audio clip and an arbitrary face image has many applications in areas such as special visual effects and human–computer interaction. This is a challenging task, as it requires disentangling semantic information from both the input audio clip and the face image, then synthesizing novel animated facial image sequences from the combined semantic features. The desired output video should maintain both video realism and audio–lip motion consistency. To achieve these two objectives, we propose a coarse-to-fine tree-like architecture for synthesizing realistic talking face frames directly from audio clips. This is followed by a video-to-word regeneration module that translates the synthesized talking videos back to the word space, which is enforced to align with the input audio. With multi-level facial landmark attention, the proposed audio-to-video-to-words framework can generate fine-grained talking face videos that are not only synchronized with the input audio but also preserve visual details from the input face images. Multi-purpose discriminators are also adopted for adversarial learning to further improve both image fidelity and semantic consistency. Extensive experiments on the GRID and LRW datasets demonstrate the advantages of our framework over previous methods in terms of video quality and audio–video synchronization.

Keywords Talking face · Video generation · Multi-purpose discriminators
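The abstract names the building blocks (coarse-to-fine frame generator, video-to-word regeneration module, multi-purpose discriminators) but does not spell out how they are combined during training. The following is a minimal PyTorch-style sketch of one plausible generator-side objective under those assumptions; the module interfaces, loss forms, and weights are illustrative and do not reproduce the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class TalkingFaceObjective(nn.Module):
    """Illustrative sketch: combines the components named in the abstract into a
    single generator-side loss. The sub-networks are passed in as black boxes."""

    def __init__(self, generator, frame_disc, video_disc, word_decoder,
                 w_frame=1.0, w_video=1.0, w_word=1.0):
        super().__init__()
        self.generator = generator        # audio + identity image -> frame sequence
        self.frame_disc = frame_disc      # per-frame realism score
        self.video_disc = video_disc      # temporal / audio-video sync score
        self.word_decoder = word_decoder  # video-to-word regeneration module
        self.w_frame, self.w_video, self.w_word = w_frame, w_video, w_word

    def forward(self, audio, face_image, word_labels):
        fake_video = self.generator(audio, face_image)              # (B, T, C, H, W)
        # Non-saturating adversarial terms (an assumption; the paper may use another form).
        adv_frame = F.softplus(-self.frame_disc(fake_video)).mean()
        adv_video = F.softplus(-self.video_disc(fake_video, audio)).mean()
        # Read the spoken word back from the synthesized lips and align it
        # with the ground-truth word label of the input audio clip.
        word_loss = F.cross_entropy(self.word_decoder(fake_video), word_labels)
        total = (self.w_frame * adv_frame + self.w_video * adv_video
                 + self.w_word * word_loss)
        return total, fake_video
```

The video-to-word term is what makes the pipeline "audio-to-video-to-words": the synthesized frames must remain decodable back into the words carried by the input audio, which penalizes lip motion that drifts from the speech content.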
1 Introduction

Automatically generating talking face videos under different conditions, such as audio speech, text, and sketches, is a problem of interest in both computer vision and graphics. A talking face contains rich and complex semantic information, and humans are sensitive to subtle artifacts on faces. Hence, generating high-quality, audio-corresponding videos from diverse conditions is a very difficult task. Although significant progress has been made in generating videos using temporal-dependency models [4,5,26], producing photo-realistic visual content and optimizing generated videos through lip semantic alignment remain challenging. The key issue is to learn a shared representation of the two modalities (i.e., the given audio and an arbitrary image).

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s00371-020-01982-7) contains supplementary material, which is available to authorized users.
Corresponding author: Xin Huang, [email protected]

1 Memorial University of Newfoundland, St. John's, Canada
2 University of Guelph, Guelph, Canada
To achieve this, we explore a coarse-to-fine learning module for generating fine-grained talking face videos, and design an end-to-end neural architecture built upon a temporal-dependent GAN framework, which is conditioned on a face image, facial landmarks, and audio information. The results a
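The excerpt above only names the conditioning signals (identity face image, facial landmarks, audio). As a rough illustration of what such a conditional generator could look like for a single frame, here is a minimal sketch; the layer sizes, encoders, and 68-point landmark input are assumptions for illustration and do not reproduce the authors' coarse-to-fine, temporally modelled network.

```python
import torch
import torch.nn as nn

class ConditionalFrameGenerator(nn.Module):
    """Illustrative single-frame sketch (not the authors' exact network): fuses an
    identity image, facial landmarks, and an audio-window feature into one latent
    code, then decodes a coarse frame; a temporal model would apply this per step
    and finer stages would upsample further."""

    def __init__(self, audio_dim=128, landmark_dim=68 * 2, img_feat=256, z_dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(                 # identity/appearance encoder
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, img_feat, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, z_dim), nn.ReLU())
        self.lmk_enc = nn.Sequential(nn.Linear(landmark_dim, z_dim), nn.ReLU())
        self.decode = nn.Sequential(                  # decode fused code to a 32x32 frame
            nn.Linear(img_feat + 2 * z_dim, 8 * 8 * 64), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def forward(self, face_image, landmarks, audio_feat):
        z = torch.cat([self.img_enc(face_image),
                       self.lmk_enc(landmarks),
                       self.audio_enc(audio_feat)], dim=1)
        return self.decode(z)
```

In this reading, the landmark branch is what lets later, finer stages attend to mouth-region geometry, while the image branch carries identity details that the output frames should preserve.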