heshuting555 / PADing

[CVPR-2023] Primitive Generation and Semantic-related Alignment for Universal Zero-Shot Segmentation


How to reproduce

lfgogogo opened this issue

Hi, thank you for your brilliant work. May I ask how to reproduce the log here using the open-source models? Looking forward to your reply.

Go to the README and follow the section "Training", using the command:

CUDA_VISIBLE_DEVICES=0 python train_net.py --config-file configs/{}-segmentation/PADing.yaml --num-gpus 1 MODEL.WEIGHTS pretrained_weight_{}.pth

where {} can be semantic, instance, or panoptic.
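For concreteness, the template above expands to three commands, one per task. The pretrained-weight filenames here just follow the `pretrained_weight_{}.pth` pattern and may differ from the names in the actual release:

```shell
# Template instantiated for each task; adjust the weight filenames
# to match the checkpoints you actually downloaded.
CUDA_VISIBLE_DEVICES=0 python train_net.py --config-file configs/semantic-segmentation/PADing.yaml --num-gpus 1 MODEL.WEIGHTS pretrained_weight_semantic.pth
CUDA_VISIBLE_DEVICES=0 python train_net.py --config-file configs/instance-segmentation/PADing.yaml --num-gpus 1 MODEL.WEIGHTS pretrained_weight_instance.pth
CUDA_VISIBLE_DEVICES=0 python train_net.py --config-file configs/panoptic-segmentation/PADing.yaml --num-gpus 1 MODEL.WEIGHTS pretrained_weight_panoptic.pth
```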

I hope it is helpful to you!

Thanks, I tried, but I got a warning: "Some model parameters or buffers are not found in the checkpoint", followed by an error: "RuntimeError: CUDA error: device-side assert triggered". It seems the model weight does not match the config file. The command was CUDA_VISIBLE_DEVICES=0 python train_net.py --config-file configs/semantic-segmentation/PADing.yaml --num-gpus 1 MODEL.WEIGHTS pretrained_weight_semantic.pth. By the way, I ran the inference command and it works fine.

Sorry to bother you, one more question: how can I visualize the results in the JSON? When I just print the segmentation, I get something like this:
[image: screenshot of the printed segmentation output]
What does it mean?
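If the printed "segmentation" entries look like `{"size": [h, w], "counts": "..."}` with a string of letters and digits, that is most likely COCO compressed run-length encoding (RLE), which detectron2-style evaluators write into the results JSON. The standard way to turn it into a binary mask is `pycocotools.mask.decode`. As a rough illustration of what the string contains, here is a pure-Python sketch of the decoding, assuming the COCO compressed-RLE format; the helper names `rle_decode_counts` and `rle_to_mask` are illustrative, not part of any library:

```python
def rle_decode_counts(s):
    """Decode a COCO compressed-RLE 'counts' string into run lengths.

    Each value is stored in LEB128-like chunks: 5 data bits per
    character, offset by 48, with bit 6 as a continuation flag.
    """
    counts, p = [], 0
    while p < len(s):
        x, k, more = 0, 0, True
        while more:
            c = ord(s[p]) - 48          # characters are offset by '0'
            x |= (c & 0x1F) << (5 * k)  # 5 data bits per character
            more = bool(c & 0x20)       # 6th bit = continuation flag
            p += 1
            k += 1
            if not more and (c & 0x10):
                x |= -1 << (5 * k)      # sign-extend negative values
        if len(counts) > 2:
            x += counts[-2]             # later runs are stored as deltas
        counts.append(x)
    return counts

def rle_to_mask(counts, h, w):
    """Expand run lengths into an h-by-w binary mask.

    Runs alternate background/foreground and fill the mask in
    column-major (Fortran) order, as in the COCO mask API.
    """
    flat, val = [], 0
    for run in counts:
        flat.extend([val] * run)
        val = 1 - val
    # pixel (row r, col c) lives at flat[c * h + r]
    return [[flat[c * h + r] for c in range(w)] for r in range(h)]
```

In practice you would not decode by hand: with pycocotools installed, something like `pycocotools.mask.decode(ann["segmentation"])` returns the mask as an array, which you can then overlay on the input image (e.g. with matplotlib's `imshow`) to visualize each prediction.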