SaVos

This is the official implementation of the NeurIPS'22 paper Self-supervised Amodal Video Object Segmentation. The code was implemented by Jian Yao, Yuxin Hong, and Jianxiong Gao during their internship at the AWS Shanghai AI Lab.

The FishBowl dataset originates from Unsupervised Object Learning via Common Fate. In this repo, we provide the checkpoint and 1000 videos for testing. The train and test videos are generated by the same open-sourced script with different seeds. The provided data includes the raw video data, visible masks predicted by PointTrack, and optical flow computed by Flownet2.

Set up

conda env create -f environment.yml

FishBowl

Download FishBowl data

Download the test data and checkpoint.

Download the csv for evaluation

We filter the data (e.g., by occlusion rate, as described in the paper) and write the result into csv files. To evaluate, please download the test files.

mv PATH_TO_TEST_DATA FishBowl/FishBowl_dataset/data/test_data
mv PATH_TO_CHECKPOINT FishBowl/log_bidirectional_consist_next_vm_label_1.5bbox_finalconsist/best_model.pt
mv PATH_TO_TEST_FILES FishBowl/test_files
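
If you want a quick sanity check that everything landed where the scripts expect it, the following sketch prints what is present. The destination paths are taken from the commands above, and it assumes best_model.pt is a standard PyTorch checkpoint.

import os
import torch  # only needed to inspect the checkpoint

paths = [
    "FishBowl/FishBowl_dataset/data/test_data",
    "FishBowl/log_bidirectional_consist_next_vm_label_1.5bbox_finalconsist/best_model.pt",
    "FishBowl/test_files",
]
for p in paths:
    print(p, "->", "found" if os.path.exists(p) else "MISSING")

# Load the checkpoint on CPU just to confirm it deserializes.
ckpt = torch.load(paths[1], map_location="cpu")
print(type(ckpt))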

Construct data summary from custom visible mask

We provide a script, construct_data_summary.py, to construct the data summary from a custom visible mask (here, the visible mask predicted by PointTrack). The data summary is used for training and evaluation.

Change the variables predicted_vm_path and mapped_data_info_path in construct_data_summary.py to your own paths (see the sketch after the command below). Then run the script:

python construct_data_summary.py
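
For reference, the edited lines inside construct_data_summary.py might look like the following; both values are placeholders, not the script's real defaults.

# Illustrative only: point these variables at your own data.
predicted_vm_path = "/path/to/pointtrack_visible_masks"   # visible masks predicted by PointTrack
mapped_data_info_path = "/path/to/mapped_data_info"       # the corresponding data info for your videos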

It will create FishBowl/FishBowl_dataset/data/test_data/custom_test_data.pkl, which is used for evaluation.
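
To peek at the generated summary, here is a minimal sketch; it assumes custom_test_data.pkl is a standard Python pickle, while its exact structure is defined by construct_data_summary.py.

import pickle

with open("FishBowl/FishBowl_dataset/data/test_data/custom_test_data.pkl", "rb") as f:
    summary = pickle.load(f)

print(type(summary))
if isinstance(summary, dict):  # if it is keyed by video/object id, show a few keys
    print(list(summary)[:5])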

Inference

You can run inference with:

cd FishBowl
bash run_inference.sh

or

cd FishBowl
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode test --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 1 \
--data_path "" --num_workers 2 --loss_type BCE \
--enlarge_coef 1.5

You can set args.dataset to either 'FishBowl' or 'FishBowl_nofm' to test on the FishBowl dataset with the original or the custom visible masks, respectively.

Training

If you have generated the training data (raw video data, flow, and predicted visible masks), you can train with:

cd FishBowl
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode train --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 3 \
--data_path "" --num_workers 2 --loss_type BCE --verbose \
--enlarge_coef 1.5 2>&1 | tee log_${TRAIN_METHOD}.log

Kins-Car

Download KITTI & KINS data

Download the data.

mv PATH_TO_KINS_VIDEO_CAR Kins_Car/dataset/data

Training

cd Kins_Car
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=4 \
main.py --mode train --training_method ${TRAIN_METHOD} \
--log_path log_${TRAIN_METHOD} --device cuda --batch_size 2 \
--data_path "" --num_workers 2 --loss_type BCE --verbose \
--enlarge_coef 1.5 2>&1 | tee log_${TRAIN_METHOD}.log

Inference

cd Kins_Car
TRAIN_METHOD="bidirectional_consist_next_vm_label_1.5bbox_finalconsist"
python -m torch.distributed.launch --nproc_per_node=1 test.py --training_method ${TRAIN_METHOD}

Evaluation

cd Kins_Car
python eval.py

Visualization

cd Kins_Car
python run_video_res.py

Chewing Gum Dataset

For those who are interested in the synthetic dataset, we also provide the script to generate the Chewing Gum Dataset in utils/gen_chewgum.py.

License

MIT No Attribution

