Learning to Cartoonize Using White-box Cartoon Representations
Output examples can be found at both of the following links:
original project page | paper
Prerequisites (Windows):
- NVIDIA CUDA and CuDNN
- MSVC 2015 (found within Microsoft C++ Build Tools)
- Python 3.6 (the most recent version compatible with the required TensorFlow releases)
> python36 -m venv .\wbcvenv
> .\wbcvenv\Scripts\activate
> python -m pip install --upgrade pip
> python -m pip install tensorflow-gpu==1.12.0
> python -m pip install scikit-image==0.14.5
> python -m pip install opencv-python
> python -m pip install tqdm
- Once the venv is activated, plain python resolves to its Python 3.6 interpreter
- For a CPU-only setup, install tensorflow==1.12.0 instead of tensorflow-gpu==1.12.0; the two packages provide the same tensorflow module and should not be installed together
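To confirm that the GPU build of TensorFlow is working inside the venv, a quick check like the following can be run (this snippet is illustrative and not part of the repository):

```python
import tensorflow as tf

# Should print 1.12.0, then True if CUDA/CuDNN are set up correctly.
print(tf.__version__)
print(tf.test.is_gpu_available())
```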
Inference:
- Store test images in /test_code/test_images
- From the /test_code folder, run:
> python cartoonize.py
- Results will be saved in /test_code/cartoonized_images
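Very large photos can exhaust GPU memory during inference. A minimal pre-resize sketch is shown below; the 720px cap and the multiple-of-8 rounding are our assumptions (chosen to suit a generator with strided convolutions), not documented repository requirements, and example.jpg is a placeholder file name:

```python
import cv2

def resize_for_inference(image, max_dim=720):
    """Shrink image so its longer side is at most max_dim and both sides
    are multiples of 8 (assumed safe for a fully convolutional generator)."""
    h, w = image.shape[:2]
    scale = min(1.0, max_dim / max(h, w))
    h, w = int(h * scale) // 8 * 8, int(w * scale) // 8 * 8
    return cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA)

img = cv2.imread('test_code/test_images/example.jpg')
cv2.imwrite('test_code/test_images/example_small.jpg', resize_for_inference(img))
```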
Training:
- Place your training data in the corresponding folders under /dataset
- Run pretrain.py; results will be saved in the /pretrain folder
- Run train.py; results will be saved in the /train_cartoon folder
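Before launching pretrain.py, it can help to sanity-check the dataset layout. The folder names in this sketch are assumptions inferred from the four data categories described below, not confirmed repository paths:

```python
import os

# Hypothetical layout: one folder per data category (adjust to match
# the folder names actually read by pretrain.py / train.py).
folders = ['dataset/photo_scenery', 'dataset/cartoon_scenery',
           'dataset/photo_face', 'dataset/cartoon_face']

for folder in folders:
    n = len(os.listdir(folder)) if os.path.isdir(folder) else 0
    print('%s: %d files' % (folder, n))
```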
Notes:
- The code was cleaned up from the production environment and is untested
- There may be minor problems, but they should be easy to resolve
- A pretrained VGG19 model can be found here (link provided by SystemErrorWang)
Datasets:
- Due to copyright issues, we cannot provide the cartoon images used for training
- However, these training datasets are easy to prepare
- Scenery images are collected from films by Shinkai Makoto, Miyazaki Hayao, and Hosoda Mamoru
- Clip the films into frames, then randomly crop and resize the frames to 256x256 (see the sketch after this list)
- Portrait images are from Kyoto Animation and P.A. Works productions
- We use this repo to detect facial areas
- Manual data cleaning will greatly increase the quality of both datasets
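As a rough illustration of the frame-extraction step described above, the sketch below samples frames from a film, takes a random square crop, and resizes it to 256x256. The frame stride, output folder, and file names are our choices, not the authors' pipeline:

```python
import os
import random
import cv2

def extract_training_crops(video_path, out_dir, stride=24, size=256):
    """Sample every stride-th frame, random-crop a square region,
    and resize it to size x size for the scenery dataset."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            h, w = frame.shape[:2]
            side = min(h, w) if min(h, w) <= size else random.randint(size, min(h, w))
            y = random.randint(0, h - side)
            x = random.randint(0, w - side)
            crop = cv2.resize(frame[y:y + side, x:x + side], (size, size),
                              interpolation=cv2.INTER_AREA)
            cv2.imwrite(os.path.join(out_dir, '%06d.jpg' % saved), crop)
            saved += 1
        index += 1
    cap.release()

extract_training_crops('film.mkv', 'dataset/cartoon_scenery')  # 'film.mkv' is a placeholder
```

Frames from adjacent shots can be near-duplicates, so shuffling the output and the manual cleaning mentioned above are still worthwhile.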