SakugaDataset

Official Repository for the Sakuga-42M Dataset

Paper: arXiv:2405.07425

Introduction

This is the official GitHub repository of Sakuga-42M.

The Sakuga-42M Dataset is the first large-scale cartoon animation dataset, comprising 42 million keyframes. We hope this foundational dataset helps alleviate the data scarcity that has held back cartoon research for years, and makes it possible to introduce large-scale models and approaches that lead to more robust and transferable applications, which in turn could help animators create more easily.

We hope more researchers will join us on this journey to explore the potential of cartoon animation research. Suggestions and contributions are always welcome.

TO-DO

  • Release the dataset parquet files
  • Release the dataset preparation codes
  • Release the pre-trained models
  • Release the tagging/rating, captioning, and text detection pipelines

Download

Dataset

| Split | Download | # Keyframes | # Clips | # Videos | Storage |
|---|---|---|---|---|---|
| Training (Full) | link (529 MB) | 38,137,371 | 1,117,898 | 142,089 | ~441 GB |
| Training (Aesthetic) | link (74.5 MB) | 6,154,562 | 139,989 | 61,273 | ~56 GB |
| Training (Small) | link (53.6 MB) | 3,811,189 | 111,790 | 68,326 | ~45 GB |
| Validation | link (28.6 MB) | 2,035,853 | 59,717 | 44,564 | ~25 GB |
| Testing | link (28.5 MB) | 2,018,545 | 59,718 | 44,247 | ~25 GB |
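
Each split ships as a parquet index of metadata rather than the videos themselves. As a minimal sketch of how to inspect a split with pandas (the file name and columns below are assumptions; check the actual schema after downloading):

import pandas as pd

# Load one split index; replace the file name with the parquet you downloaded.
df = pd.read_parquet("./download/parquet/sakuga_training_small.parquet")

print(len(df), "rows")
print(df.columns.tolist())  # see which metadata fields are available
print(df.head())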

Preparation

Please follow the instructions below to prepare the complete dataset.

1. Setup Environment

git clone https://github.com/zhenglinpan/SakugaDataset.git
cd SakugaDataset

conda create -n sakuga -y
conda activate sakuga

pip install -r requirement.txt

2. Download Dataset

One-command solution for downloading all the videos 😉:

cd download
bash download.sh

Or step by step:

  1. Download the parquet files from the links in the Download table above and put them into the ./download/parquet folder.

  2. Run ./download/download.py to download the videos; files will be saved to ./download/download by default (a scripted sketch of this step is shown after the note below).

Note: this step takes at least 15 hours.
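
If you want to script this step yourself instead of using download.py, a minimal sketch could look like the following. It assumes each parquet row exposes a direct video URL in a column (the column name video_url and the file name are hypothetical; adapt them to the real schema):

import os
import pandas as pd
import requests

PARQUET = "./download/parquet/sakuga_training_small.parquet"  # assumed file name
OUT_DIR = "./download/download"
os.makedirs(OUT_DIR, exist_ok=True)

df = pd.read_parquet(PARQUET)

for i, row in df.iterrows():
    url = row.get("video_url")  # hypothetical column name
    if not url:
        continue
    out_path = os.path.join(OUT_DIR, f"{i}.mp4")
    if os.path.exists(out_path):
        continue  # skip videos that are already on disk
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)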

3. Split Videos

Run the code below to split the videos into smaller clips:

cd prepare_dataset
python split_video.py
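
split_video.py handles this for the whole dataset; as a rough illustration of what clip splitting involves, the sketch below cuts a single clip out of a downloaded video with ffmpeg (paths and timestamps are placeholders, and the actual script reads clip boundaries from the dataset metadata):

import subprocess

def cut_clip(src, dst, start, duration):
    """Copy a segment of src (start, duration in seconds) into dst without re-encoding."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", src,
         "-t", str(duration), "-c", "copy", dst],
        check=True,
    )

# Placeholder values for illustration only.
cut_clip("./download/download/0.mp4", "./clips/0_000.mp4", start=12.0, duration=3.5)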

4. Extract Keyframes

Run the code below to extract keyframes and remove repetitive frames:

cd prepare_dataset
python detect_keyframes.py
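
detect_keyframes.py implements the dataset's own keyframe logic; the sketch below only illustrates the general idea of dropping near-duplicate frames by thresholding the mean absolute difference between consecutive frames with OpenCV (the threshold value is an arbitrary assumption):

import cv2
import numpy as np

def extract_keyframes(video_path, diff_threshold=8.0):
    """Keep a frame only if it differs enough from the last kept frame."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last_kept = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last_kept is None or np.mean(cv2.absdiff(gray, last_kept)) > diff_threshold:
            keyframes.append(frame)  # sufficiently different from the last kept frame
            last_kept = gray
    cap.release()
    return keyframes

frames = extract_keyframes("./clips/0_000.mp4")  # placeholder path
print(len(frames), "keyframes kept")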

5. Good-To-Go!

At this point, you should have the dataset ready for your research. Enjoy!

We will release the code for our tagging/rating, captioning, and text detection pipelines in the near future, in case users want to expand the dataset. Stay tuned.

Supporting Research

Demonstration

Diversity

(Figure: example clips spanning Rough Sketch and Tiedown (TP) stages, Western and Asian productions, and Cell-Look and Illus-Look styles.)

Video-Text Description Pairs

  • multiple girls with blonde, red, and brown hair, wearing idol outfits, dance in a line on stage...
  • Nyarlathotep holds out her arms with a glowing face. The second frame shows her in a store... Her hair is now curled back as she smiles, still in the same outfit and setting...
  • an anime character kneels on a red surface while a man in white and black stands behind...
  • a diverse group of cartoon characters gathers around a table, enjoying a meal with a variety of dishes...
  • a cute green cat with fangs, tongue, and a smile, jumps with energy and glowing aura...
  • the dinosaur emerges from the egg shell, curled up in the blanket...
  • in an anime clip, a man in a suit falls from a building, holding onto a rope...
  • a beam shoots out from his clenched fist in a battle scene. The man, now standing on a hill...

Citation

If you find this project useful for your research, please cite our paper. 🤗

@article{sakuga42m2024,
    title   = {Sakuga-42M Dataset: Scaling Up Cartoon Research},
    author  = {Pan, Zhenglin and Zhu, Yu and Mu, Yuxuan},
    journal = {arXiv preprint arXiv:2405.07425},
    year    = {2024}
}

Contact

Zhenglin Pan: zhengli3@ualberta.ca

License

MIT License

