NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior

CVPR 2024 Workshop on AI for Content Creation (AI4CC)

¹Seoul National University, ²KAIST

TL;DR: One-shot audio-driven 3D talking head generation that enhances 3D consistency by combining NeRF with generative prior knowledge, using only a single image as input.

Ours: One-shot Audio-driven

Abstract

Audio-driven talking head generation is advancing from 2D to 3D content. Notably, Neural Radiance Field (NeRF) is in the spotlight as a means to synthesize high-quality 3D talking head outputs. Unfortunately, this NeRF-based approach typically requires a large amount of paired audio-visual data for each identity, thereby limiting the scalability of the method. Although there have been attempts to generate audio-driven 3D talking head animations from a single image, the results are often unsatisfactory due to insufficient information about the regions obscured in the image. In this paper, we focus on the overlooked aspect of 3D consistency in the one-shot, audio-driven domain, where facial animations are synthesized primarily in front-facing perspectives. We propose a novel method, NeRFFaceSpeech, which enables the generation of high-quality 3D-aware talking heads. Using prior knowledge of generative models combined with NeRF, our method can craft a 3D-consistent facial feature space corresponding to a single image. Our spatial synchronization method employs audio-correlated vertex dynamics of a parametric face model to transform static image features into dynamic visuals through ray deformation, ensuring realistic 3D facial motion. Moreover, we introduce LipaintNet, which replenishes the missing inner-mouth information that cannot be obtained from a single image. The network is trained in a self-supervised manner by leveraging the model's generative capabilities, without additional data. Comprehensive experiments demonstrate the superiority of our method in generating audio-driven talking heads from a single image with enhanced 3D consistency compared to previous approaches. In addition, we introduce, for the first time, a quantitative way of measuring a model's robustness to pose changes, which previously could be assessed only qualitatively.
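As an illustration of the ray-deformation idea described above, the snippet below is a minimal sketch, not the paper's implementation: it assumes a parametric face model already fitted to the input image, audio-predicted vertex displacements, and a frozen canonical NeRF feature space built from the single image. The k-nearest-vertex blending scheme and all names are illustrative assumptions.

    # Minimal sketch (illustrative, not the authors' code): warp NeRF ray sample
    # points according to audio-driven vertex motion of a parametric face model,
    # so a static canonical feature space can be rendered with dynamic lip motion.
    import torch

    def deform_sample_points(points, verts_canon, verts_driven, k=4, beta=50.0):
        """Warp sample points from the audio-driven space back to canonical space.

        points:       (N, 3) sample points along camera rays
        verts_canon:  (V, 3) face-model vertices fitted to the input image
        verts_driven: (V, 3) the same vertices displaced by audio-driven expression
        """
        offsets = verts_driven - verts_canon                  # (V, 3) per-vertex motion
        d = torch.cdist(points, verts_driven)                 # (N, V) point-to-vertex distances
        knn_d, knn_idx = d.topk(k, dim=-1, largest=False)     # k nearest vertices per point
        w = torch.softmax(-beta * knn_d, dim=-1)              # distance-based blend weights
        blended = (w.unsqueeze(-1) * offsets[knn_idx]).sum(dim=1)  # (N, 3) blended offset
        return points - blended                               # pull points back to canonical space

Rendering would then query the canonical feature field at the warped points, so the static feature space crafted from the single image produces the animated output; the exact warping and blending used in NeRFFaceSpeech may differ.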

Figures

Video Outputs 1: Rotating Poses

Video Outputs 2: Comparison with Baselines

Video Outputs 3: Horizontally Flipped Poses
[Top: Input View | Bottom: Flipped View]

Video Outputs 4: Ablation Study
[w/o LipaintNet: No Inner-mouth Info.] [w/o AvgMask: Lip Trembling]

Video Outputs 5: Motion-driven
[no audio]

Video Outputs 6: Reenactment
[no audio]

BibTeX


@misc{kim2024nerffacespeech,
  title={NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior},
  author={Gihoon Kim and Kwanggyoon Seo and Sihun Cha and Junyong Noh},
  year={2024},
  eprint={2405.05749},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
            
@inproceedings{kim2024nerffacespeech,
  title={NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior},
  author={Gihoon Kim and Kwanggyoon Seo and Sihun Cha and Junyong Noh},
  booktitle={IEEE Computer Vision and Pattern Recognition Workshops},
  year={2024}
}