This is the official GitHub repository of Sakuga-42M.
The Sakuga-42M Dataset is the first large-scale cartoon animation dataset, comprising 42 million keyframes. We hope this fundamental large-scale dataset helps alleviate the data scarcity that has haunted this research domain for years, making it possible to introduce large-scale models and approaches that lead to more robust and transferable applications and, ultimately, help animators create more easily.
We hope more researchers will join us on this journey to explore the potential of cartoon animation research. Suggestions and contributions are always welcome.
- Release the dataset parquet files
- Release the dataset preparation codes
- Release the pre-trained models
- Release the tagging/rating, captioning, and text detection pipelines
| Split | Download | # Keyframes | # Clips | # Videos | Storage |
|---|---|---|---|---|---|
| Training (Full) | link (529 MB) | 38,137,371 | 1,117,898 | 142,089 | ~441 GB |
| Training (Aesthetic) | link (74.5 MB) | 6,154,562 | 139,989 | 61,273 | ~56 GB |
| Training (Small) | link (53.6 MB) | 3,811,189 | 111,790 | 68,326 | ~45 GB |
| Validation | link (28.6 MB) | 2,035,853 | 59,717 | 44,564 | ~25 GB |
| Testing | link (28.5 MB) | 2,018,545 | 59,718 | 44,247 | ~25 GB |
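Once a split's parquet file is downloaded, its metadata can be inspected and filtered with pandas (e.g. via `pd.read_parquet`). The snippet below uses a toy in-memory stand-in; the column names (`video_id`, `url`, `num_clips`) are illustrative assumptions, not the dataset's documented schema.

```python
import pandas as pd

# Toy stand-in for one split's metadata. Real column names may differ --
# these are assumptions for illustration only.
meta = pd.DataFrame({
    "video_id": ["a1", "a2", "a3"],
    "url": ["https://example.com/a1.mp4",
            "https://example.com/a2.mp4",
            "https://example.com/a3.mp4"],
    "num_clips": [12, 3, 7],
})

# Keep only videos with enough clips, e.g. to build a quick subset.
subset = meta[meta["num_clips"] >= 5]
print(len(subset))  # 2
```

The same filtering pattern applies to the real parquet files once they are loaded with `pd.read_parquet("./download/parquet/<split>.parquet")`.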
Please follow the instructions below to prepare the complete dataset.
```shell
git clone https://github.com/zhenglinpan/SakugaDataset.git
conda create -n sakuga -y
conda activate sakuga
pip install -r requirement.txt
```
One-command solution for downloading all videos 😉:
```shell
cd download
bash download.sh
```
Or step by step:

1. Download the parquet files from the links in the table above and put them into the `./download/parquet` folder.
2. Run `./download/download.py` to download the videos; files will be saved in `./download/download` by default. Note: this step takes at least 15 hours.
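As a minimal sketch of what such a downloader does, the standard-library `urllib.request` can fetch each video and save it under the output folder. The naming scheme (basename of the URL) and the helper names below are assumptions, not the actual logic of `download.py`.

```python
import os
import urllib.request

def local_path(url: str, out_dir: str = "download/download") -> str:
    # Map a video URL to its local save path.
    # (Naming scheme is an assumption; download.py may differ.)
    return os.path.join(out_dir, os.path.basename(url))

def fetch(url: str, out_dir: str = "download/download") -> str:
    dst = local_path(url, out_dir)
    os.makedirs(out_dir, exist_ok=True)
    if not os.path.exists(dst):  # skip files that are already downloaded
        urllib.request.urlretrieve(url, dst)
    return dst
```

Skipping already-present files makes the long download resumable after interruptions.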
Run the code below to split the videos into smaller clips:

```shell
cd prepare_dataset
python split_video.py
```
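Under the hood, cutting a clip out of a longer video is commonly done with ffmpeg using stream copy (no re-encoding). The sketch below only builds such a command; `split_video.py`'s actual cutting logic may differ, and the file names are hypothetical.

```python
def build_cut_cmd(src: str, dst: str, start: float, end: float) -> list:
    # Build an ffmpeg command that copies [start, end) seconds of src into dst.
    return ["ffmpeg", "-y",
            "-ss", str(start),       # seek before -i: fast keyframe-level seek
            "-i", src,
            "-t", str(end - start),  # clip duration
            "-c", "copy",            # stream copy, no re-encode
            dst]

cmd = build_cut_cmd("video.mp4", "clip_000.mp4", 0.0, 4.5)
# subprocess.run(cmd, check=True)  # run when ffmpeg is installed
```

Stream copy keeps splitting fast and lossless, at the cost of cuts landing on keyframe boundaries rather than exact timestamps.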
Then remove the repetitive frames:

```shell
cd prepare_dataset
python detect_keyframes.py
```
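One simple way to drop repetitive frames is frame differencing: keep a frame only if it differs enough from the last kept frame. The threshold-on-mean-absolute-difference criterion below is a stand-in assumption; `detect_keyframes.py`'s actual method may differ.

```python
import numpy as np

def filter_repetitive(frames, threshold=3.0):
    # Keep a frame only if its mean absolute pixel difference from the
    # last kept frame exceeds `threshold`.
    # (Illustrative criterion, not necessarily detect_keyframes.py's.)
    kept = [frames[0]]
    for f in frames[1:]:
        diff = np.abs(f.astype(np.int16) - kept[-1].astype(np.int16)).mean()
        if diff > threshold:
            kept.append(f)
    return kept

# Synthetic check: three identical frames followed by a different one.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
print(len(filter_repetitive([a, a, a, b])))  # 2
```

Casting to a signed dtype before subtracting avoids uint8 wrap-around when computing differences.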
At this point, you should have the dataset ready for your research. Enjoy!
We will release the code for our tagging/rating, captioning, and text detection pipelines in the near future, in case users want to expand the dataset. Stay tuned.
| Rough Sketch | Tiedown (TP) |
|---|---|
| ![]() | ![]() |

| Western | Asian |
|---|---|
| ![]() | ![]() |

| Cell-Look | Illus-Look |
|---|---|
| ![]() | ![]() |
If you find this project useful for your research, please cite our paper. 🤗
```bibtex
@article{sakuga42m2024,
  title   = {Sakuga-42M Dataset: Scaling Up Cartoon Research},
  author  = {Zhenglin Pan and Yu Zhu and Yuxuan Mu},
  journal = {arXiv preprint arXiv:2405.07425},
  year    = {2024}
}
```
Zhenglin Pan: zhengli3@ualberta.ca