Official implementation of CVPR2025 paper "RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars".
- Clone this repository.

      git clone https://github.com/LinzhouLi/RGBAvatar.git
      cd RGBAvatar
- Create a conda environment.

      conda create -n rgbavatar python=3.10
      conda activate rgbavatar
- Install PyTorch and nvdiffrast. Please make sure that the PyTorch CUDA version matches your system's CUDA version; we use CUDA 11.8 here.

      pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
      pip install git+https://github.com/NVlabs/nvdiffrast
- Install other packages.

      pip install -r requirements.txt
- Compile the PyTorch CUDA extension.

      pip install submodules/diff-gaussian-rasterization
For offline reconstruction, we use the FLAME template model and follow INSTA to preprocess the video sequence.
- You need to create an account on the FLAME website and download the FLAME 2020 model. Please unzip FLAME2020.zip and put `generic_model.pkl` under `./data/FLAME2020`.
- Please follow the instructions in INSTA. You may first use the Metrical Photometric Tracker for tracking, then run `generate.sh` provided by INSTA to mask the head.
- Organize INSTA's output in the following form, and modify `data_dir` in the config file to point to the dataset path.

      <DATA_DIR>
      ├── <SUBJECT_NAME>
          ├── checkpoint   # FLAME parameters for each frame, generated by the tracker
          ├── images       # generated by the script of INSTA
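As a quick sanity check before training, the expected layout can be verified with a short script (a sketch; the folder names follow the tree above, and `check_insta_layout` is a hypothetical helper, not part of this repository):

```python
import os

def check_insta_layout(data_dir, subject):
    """Verify that <DATA_DIR>/<SUBJECT_NAME> contains the folders the
    training script expects (checkpoint and images); returns the list
    of missing folders, so an empty list means the layout looks right."""
    subject_dir = os.path.join(data_dir, subject)
    required = ["checkpoint", "images"]
    return [d for d in required
            if not os.path.isdir(os.path.join(subject_dir, d))]
```

For example, `check_insta_layout("/path/to/data", "bala")` should return `[]` once preprocessing has finished.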
For online reconstruction, we use the FaceWarehouse template model and the real-time face tracker DDE to compute expression coefficients in real time. We will release the code of this version in the future.
    python train_offline.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --preload
Command Line Arguments for train_offline.py

- `--subject`: Subject name for training (`bala` by default).
- `--work_name`: A nickname for the experiment; training results will be saved under `output/WORK_NAME`.
- `--config`: Config file path (`config/offline.yaml` by default).
- Use the `train`/`test`/`all` split of the image sequence (`train` by default).
- `--preload`: Whether to preload image data into CPU memory, which accelerates training.
- Whether to output log information during training.
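The options above map naturally onto a standard `argparse` setup. The sketch below is illustrative, not the repository's actual parser: only `--subject`, `--work_name`, `--config`, and `--preload` appear in the command shown above, and the `--split` flag name is an assumption.

```python
import argparse

# Illustrative parser for the documented train_offline.py options.
# The --split flag name is an assumption; the others appear in the
# example command.
parser = argparse.ArgumentParser()
parser.add_argument("--subject", default="bala",
                    help="subject name for training")
parser.add_argument("--work_name", required=True,
                    help="results are saved under output/WORK_NAME")
parser.add_argument("--config", default="config/offline.yaml",
                    help="config file path")
parser.add_argument("--split", choices=["train", "test", "all"],
                    default="train",
                    help="which split of the image sequence to use")
parser.add_argument("--preload", action="store_true",
                    help="preload image data into CPU memory")

# Example invocation mirroring the command above.
args = parser.parse_args(["--work_name", "demo", "--preload"])
```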
    python train_online.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH --video_fps 25
Command Line Arguments for train_online.py

- `--subject`: Subject name for training (`bala` by default).
- `--work_name`: A nickname for the experiment; training results will be saved under `output/WORK_NAME`.
- `--config`: Config file path (`config/online.yaml` by default).
- `--video_fps`: FPS of the input video stream (`25` by default).
- Whether to output log information during training.
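The `--video_fps` value fixes the per-frame time budget for online training: at 25 FPS each incoming frame must be consumed and fitted within 40 ms. A trivial sketch of that arithmetic:

```python
def frame_interval(video_fps):
    """Time budget per frame in seconds; at 25 FPS the online trainer
    has 1/25 = 0.04 s (40 ms) per incoming frame."""
    return 1.0 / video_fps
```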
    python calculate_metrics.py --subject SUBJECT_NAME --work_name WORK_NAME --config CONFIG_FILE_PATH
Command Line Arguments for calculate_metrics.py

- `--subject`: Subject name for training (`bala` by default).
- Path of the experiment output folder (`output` by default).
- `--work_name`: Name of the experiment to be evaluated.
- Frame number at which to split the training and test sets (`-350` by default).
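The negative default suggests the split index is counted from the end of the sequence, i.e. the last 350 frames form the test set. A sketch of that interpretation (an assumption; check the repository code for the exact semantics):

```python
def split_frames(frames, split=-350):
    """Split a frame list into train/test sets. With Python slicing, a
    negative index keeps the last |split| frames for testing (so -350
    tests on the final 350 frames); a positive value splits at that
    absolute frame number."""
    return frames[:split], frames[split:]
```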
    python render.py --subject SUBJECT_NAME --work_name WORK_NAME
Command Line Arguments for render.py

- `--subject`: Subject name for training (`bala` by default).
- Path of the experiment output folder (`output` by default).
- `--work_name`: Name of the experiment to be rendered.
- Whether to use a white background (black by default).
- Whether to render the alpha channel.
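Choosing a white or black background amounts to alpha-compositing the rendered RGBA image over a constant color, `out = alpha * rgb + (1 - alpha) * bg`. A minimal per-pixel sketch (illustrative only, not the renderer's actual code path):

```python
def composite(rgb, alpha, white_background=False):
    """Blend one RGB pixel (values in [0, 1]) over a constant
    background: out = alpha * rgb + (1 - alpha) * bg, where bg is
    1.0 for a white background and 0.0 for black."""
    bg = 1.0 if white_background else 0.0
    return tuple(alpha * c + (1.0 - alpha) * bg for c in rgb)
```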
TBD
@inproceedings{li2025rgbavatar,
title={RGBAvatar: Reduced Gaussian Blendshapes for Online Modeling of Head Avatars},
author={Li, Linzhou and Li, Yumeng and Weng, Yanlin and Zheng, Youyi and Zhou, Kun},
booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2025},
}