A deep learning project that generates realistic human avatars from audio, capable of synthesizing full-body gestures and facial expressions from speech.

facebookresearch/audio2photoreal · Python · 2.8k stars · License: NOASSERTION · Last Updated: 2024-09-15

Audio2PhotoReal Project Detailed Introduction

Project Overview

Audio2PhotoReal is a deep learning project open-sourced by Facebook Research (Meta) that implements an end-to-end system for synthesizing photorealistic human avatars from audio. Its core capability is generating realistic full-body animation, including facial expressions and body poses, from speech input.

Technical Features

Core Functionality

  • Audio-Driven Human Synthesis: Generates complete human animations based solely on audio input.
  • Realistic Facial Expressions: Generates natural facial expressions using 256-dimensional facial encodings.
  • Full-Body Pose Generation: Achieves complete body movements with 104-dimensional joint angles.
  • Dialogue Scene Optimization: Specifically models human behavior in dialogue scenarios.

Technical Architecture

The project adopts a multi-model collaborative architecture (a data-flow sketch follows the list):

  1. Facial Diffusion Model: Generates 256-dimensional facial encodings based on audio.
  2. Body Diffusion Model: Generates 104-dimensional joint rotations by combining audio and guiding poses.
  3. Guiding Pose Generator: Built on a VQ-VAE, generates coarse guiding poses at 1 fps.
  4. VQ Encoder-Decoder: Performs vector quantization on the continuous 104-dimensional pose space.
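
To make the data flow between these four components concrete, here is a minimal Python sketch of the inference pipeline. The function names, the 30 fps frame rate, and the zero-filled stub outputs are illustrative assumptions rather than the repository's actual API; only the shapes (1 fps guiding poses, 104-dimensional joint rotations, 256-dimensional facial encodings) come from the description above.

import numpy as np

FPS = 30  # assumed motion frame rate; not specified in this overview


def generate_guide_poses(audio: np.ndarray, seconds: float) -> np.ndarray:
    """Guiding pose generator (VQ-VAE based): one coarse 104-dim pose per second (1 fps)."""
    return np.zeros((int(seconds), 104))  # placeholder output


def body_diffusion(audio: np.ndarray, guide: np.ndarray, frames: int) -> np.ndarray:
    """Body diffusion model: audio + guiding poses -> per-frame 104-dim joint rotations."""
    return np.zeros((frames, 104))  # placeholder output


def face_diffusion(audio: np.ndarray, frames: int) -> np.ndarray:
    """Facial diffusion model: audio -> per-frame 256-dim facial encodings."""
    return np.zeros((frames, 256))  # placeholder output


def synthesize(audio: np.ndarray, sample_rate: int = 48_000):
    """End-to-end flow: conversational audio in, per-frame face and body codes out."""
    seconds = len(audio) / sample_rate
    frames = int(seconds * FPS)
    guide = generate_guide_poses(audio, seconds)  # coarse poses at 1 fps
    body = body_diffusion(audio, guide, frames)   # (T, 104)
    face = face_diffusion(audio, frames)          # (T, 256)
    return face, body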

Dataset

Data Structure

The project provides complete datasets for four characters:

  • PXB184, RLW104, TXB805, GQS883

Each character contains approximately 26-30 dialogue scenes, and each scene contains the following files (a loading sketch follows the list):

  • *audio.wav: Stereo audio file (48 kHz)
      - Channel 0: Current character's audio
      - Channel 1: Dialogue partner's audio
  • *body_pose.npy: (T × 104) joint angle array
  • *face_expression.npy: (T × 256) facial encoding array
  • *missing_face_frames.npy: Indices of missing/corrupted facial frames
  • data_stats.pth: Per-modality mean and standard deviation statistics
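
For reference, the sketch below loads one scene's files with NumPy and PyTorch. The per-scene file prefix (scene01_), the use of the soundfile package for WAV reading, and the treatment of data_stats.pth are assumptions; check the actual file names and the key layout of the statistics file after unzipping a dataset.

import numpy as np
import torch
import soundfile as sf  # any 48 kHz WAV reader works

scene = "dataset/PXB184/scene01"  # hypothetical scene prefix; adjust to the real file names

audio, sr = sf.read(scene + "_audio.wav")              # (samples, 2) stereo, sr == 48000
own_speech, partner_speech = audio[:, 0], audio[:, 1]  # channel 0: this character, channel 1: partner

pose = np.load(scene + "_body_pose.npy")               # (T, 104) joint angles
face = np.load(scene + "_face_expression.npy")         # (T, 256) facial encodings
missing = np.load(scene + "_missing_face_frames.npy")  # indices of missing/corrupted face frames

stats = torch.load("dataset/PXB184/data_stats.pth")    # per-modality mean/std; inspect keys before use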

Data Partition

# Per character, the last 6 scenes are held out: 2 for validation, 4 for testing
train_idx = list(range(0, len(data_dict["data"]) - 6))
val_idx = list(range(len(data_dict["data"]) - 6, len(data_dict["data"]) - 4))
test_idx = list(range(len(data_dict["data"]) - 4, len(data_dict["data"])))

Installation and Usage

Quick Demo

# Create environment
conda create --name a2p_env python=3.9
conda activate a2p_env

# Install dependencies
sh demo/install.sh

# Run demo
python -m demo.demo

Full Installation

# Environment configuration
conda create --name a2p_env python=3.9
conda activate a2p_env
pip install -r scripts/requirements.txt

# Download necessary models
sh scripts/download_prereq.sh

# Install PyTorch3D
pip install "git+https://github.com/facebookresearch/pytorch3d.git"

Data Download

# Download a single dataset
curl -L https://github.com/facebookresearch/audio2photoreal/releases/download/v1.0/<person_id>.zip -o <person_id>.zip
unzip <person_id>.zip -d dataset/

# Download all datasets
sh scripts/download_alldatasets.sh

# Download pre-trained models
sh scripts/download_allmodels.sh

Model Training

1. Facial Diffusion Model Training

python -m train.train_diffusion \
  --save_dir checkpoints/diffusion/c1_face_test \
  --data_root ./dataset/PXB184/ \
  --batch_size 4 \
  --dataset social \
  --data_format face \
  --layers 8 \
  --heads 8 \
  --timestep_respacing '' \
  --max_seq_length 600

2. Body Diffusion Model Training

python -m train.train_diffusion \
  --save_dir checkpoints/diffusion/c1_pose_test \
  --data_root ./dataset/PXB184/ \
  --lambda_vel 2.0 \
  --batch_size 4 \
  --dataset social \
  --add_frame_cond 1 \
  --data_format pose \
  --layers 6 \
  --heads 8 \
  --timestep_respacing '' \
  --max_seq_length 600

3. VQ Encoder Training

python -m train.train_vq \
  --out_dir checkpoints/vq/c1_vq_test \
  --data_root ./dataset/PXB184/ \
  --lr 1e-3 \
  --code_dim 1024 \
  --output_emb_width 64 \
  --depth 4 \
  --dataname social \
  --loss_vel 0.0 \
  --data_format pose \
  --batch_size 4 \
  --add_frame_cond 1 \
  --max_seq_length 600

4. Guiding Pose Transformer Training

python -m train.train_guide \
  --out_dir checkpoints/guide/c1_trans_test \
  --data_root ./dataset/PXB184/ \
  --batch_size 4 \
  --resume_pth checkpoints/vq/c1_vq_test/net_iter300000.pth \
  --add_frame_cond 1 \
  --layers 6 \
  --lr 2e-4 \
  --gn \
  --dim 64

Inference Generation

Facial Generation

python -m sample.generate \
  --model_path checkpoints/diffusion/c1_face/model000155000.pt \
  --num_samples 10 \
  --num_repetitions 5 \
  --timestep_respacing ddim500 \
  --guidance_param 10.0

Body Generation

python -m sample.generate \
  --model_path checkpoints/diffusion/c1_pose/model000340000.pt \
  --resume_trans checkpoints/guide/c1_pose/checkpoints/iter-0100000.pt \
  --num_samples 10 \
  --num_repetitions 5 \
  --timestep_respacing ddim500 \
  --guidance_param 2.0

Complete Rendering

python -m sample.generate \
  --model_path checkpoints/diffusion/c1_pose/model000340000.pt \
  --resume_trans checkpoints/guide/c1_pose/checkpoints/iter-0100000.pt \
  --num_samples 10 \
  --num_repetitions 5 \
  --timestep_respacing ddim500 \
  --guidance_param 2.0 \
  --face_codes ./checkpoints/diffusion/c1_face/samples_c1_face_000155000_seed10_/results.npy \
  --pose_codes ./checkpoints/diffusion/c1_pose/samples_c1_pose_000340000_seed10_guide_iter-0100000.pt/results.npy \
  --plot

Visualization

Dataset Visualization

python -m visualize.render_anno \
  --save_dir vis_anno_test \
  --data_root dataset/PXB184 \
  --max_seq_length 600

Technical Requirements

  • CUDA: 11.7
  • Python: 3.9
  • GCC: 9.0
  • Main Dependencies: PyTorch, PyTorch3D, Gradio

Application Scenarios

  • Virtual Meetings and Remote Collaboration
  • Digital Human Live Streaming and Content Creation
  • Gaming and Entertainment Applications
  • Education and Training Simulations
  • Human-Computer Interaction Interfaces
