THUDM / CogVideo

Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"

Demonstration data

zhoudaquan opened this issue · comments

Thanks for the amazing work!

Could I check where the demonstration dataset comes from? Is any part of it publicly available?

Thanks.

Hi, sorry for the late response. What do you mean by demonstration dataset? If you mean the training set, you can use WebVid as an alternative; it contains 10M text-video pairs.
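For reference, here is a minimal sketch of how the WebVid-10M metadata could be read into text-video pairs. It assumes a local copy of the annotation CSV; the file name and the column names (`name` for the caption, `contentUrl` for the video link) are assumptions based on the public release and may need adjusting to match the file you actually download.

```python
import pandas as pd

def load_webvid_pairs(csv_path):
    """Read the WebVid annotation CSV into a list of (caption, video_url) pairs."""
    df = pd.read_csv(csv_path)
    pairs = []
    for _, row in df.iterrows():
        caption = row["name"]          # assumed caption column
        video_url = row["contentUrl"]  # assumed video URL column
        pairs.append((caption, video_url))
    return pairs

if __name__ == "__main__":
    # Hypothetical path to the downloaded metadata CSV.
    pairs = load_webvid_pairs("results_10M_train.csv")
    print(f"{len(pairs)} text-video pairs loaded")
    print(pairs[0])
```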