The dataset used in the demo video?
1165048017 opened this issue · comments
The eye and ear keypoints come from the CMU-Panoptic dataset (plus weak 2D supervision from COCO). The CMU-Panoptic preprocessing code is not yet included in the repo, but I'm planning to add it soon, along with several others. Currently, the public repo includes only the datasets that were used in the published paper.
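Until that preprocessing code lands, a rough starting point is parsing the CMU-Panoptic 3D annotation JSONs yourself. The sketch below assumes the layout of the public Panoptic toolbox `hdPose3d` files (a `bodies` list, each entry carrying a flat `joints19` array of x, y, z, confidence per joint); it is not the repo's own preprocessing, just an illustration of the file format.

```python
import json
import os
import tempfile

import numpy as np


def load_panoptic_frame(path):
    """Parse one CMU-Panoptic hdPose3d JSON file into an array of shape
    (n_people, 19, 4), where the last axis is [x, y, z, confidence]."""
    with open(path) as f:
        frame = json.load(f)
    bodies = frame.get('bodies', [])
    if not bodies:
        return np.zeros((0, 19, 4))
    return np.stack(
        [np.asarray(b['joints19'], dtype=np.float32).reshape(19, 4)
         for b in bodies])


# Synthetic single-person frame in the Panoptic JSON layout (values are
# placeholders, not real capture data).
frame = {'bodies': [{'id': 0, 'joints19': [0.0] * (19 * 4)}]}
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump(frame, f)
    path = f.name

joints = load_panoptic_frame(path)
os.remove(path)
print(joints.shape)  # (1, 19, 4)
```

From there, the per-frame arrays would still need to be matched to the camera calibrations and video frames before they are usable for training.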
Hello! I read your paper; it is great work. For a project I need to train MeTRAbs on the CMU-Panoptic dataset, and I was wondering if you could share the preprocessing code for CMU-Panoptic.