Official implementation of "Exploiting unlabeled data with vision and language models for object detection" (ECCV 2022).
12/29/2023: Please check out our improved pseudo labeling with self-training and a split-and-fusion head (paper and code).
Our project is developed on Detectron2. Please follow the official installation instructions.
Download the COCO dataset and put it in the datasets/ directory.
Download our pre-generated pseudo-labeled data and put it in the datasets/coco/open_voc directory.
Datasets are organized as follows:

```
datasets/
  coco/
    annotations/
      instances_train2017.json
      instances_val2017.json
    open_voc/
      instances_eval.json
      instances_train.json
    images/
      train2017/
        000000000009.jpg
        000000000025.jpg
        ...
      val2017/
        000000000776.jpg
        000000000139.jpg
        ...
```
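Assuming a Unix shell, the directory skeleton above can be created up front and the downloaded files unpacked into it (the COCO download URLs below are the official ones from cocodataset.org; the pseudo-label files come from the link above):

```shell
# Create the expected directory layout; paths mirror the tree above.
mkdir -p datasets/coco/annotations \
         datasets/coco/open_voc \
         datasets/coco/images/train2017 \
         datasets/coco/images/val2017

# Official COCO downloads (uncomment to fetch, then unzip into the
# matching folders above):
# wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
# wget http://images.cocodataset.org/zips/train2017.zip
# wget http://images.cocodataset.org/zips/val2017.zip
```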
If you want to generate and evaluate pseudo labels on your own, please follow our pseudo label generation instructions.
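As a rough sketch of the idea behind the pseudo labels: VL-PLM scores class-agnostic region proposals with CLIP and fuses the CLIP class probabilities with the RPN objectness score before thresholding. The function and parameter names below are illustrative, not the repo's actual API:

```python
import numpy as np

def fuse_scores(rpn_objectness, clip_probs):
    """Average RPN objectness with per-class CLIP probabilities.

    rpn_objectness: (N,) proposal scores; clip_probs: (N, C) class probs.
    Returns an (N, C) fused score matrix.
    """
    return (rpn_objectness[:, None] + clip_probs) / 2.0

def select_pseudo_labels(boxes, rpn_objectness, clip_probs, thresh=0.8):
    """Keep proposals whose best fused score clears the threshold.

    Returns (kept_boxes, class_ids, scores) for the surviving proposals.
    """
    scores = fuse_scores(rpn_objectness, clip_probs)
    cls = scores.argmax(axis=1)       # best class per proposal
    best = scores.max(axis=1)         # its fused score
    keep = best >= thresh
    return boxes[keep], cls[keep], best[keep]
```

In the real pipeline the kept boxes are written out as COCO-style annotations (e.g. the instances_train.json above) and used as ground truth for novel categories during training.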
Mask R-CNN:

Training Method | Novel AP | Base AP | Overall AP | Download
---|---|---|---|---
With LSJ | 34.4 | 60.2 | 53.5 | model
W/O LSJ | 32.3 | 54.0 | 48.3 | model
To evaluate a trained model, run:

python train_net.py --config configs/coco_openvoc_LSJ.yaml --num-gpus=1 --eval-only --resume
The best model on COCO in the paper is trained with large-scale jitter (LSJ), but training with LSJ requires a large amount of GPU memory. Thus, besides the LSJ version, we also provide training without LSJ.
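For reference, LSJ resizes each image by a random factor (commonly in [0.1, 2.0]) and then pads or crops to a fixed canvas, which is why batches become memory-hungry. The sketch below shows the idea with a nearest-neighbor resize stand-in; the scale range and 1024-pixel canvas follow the common LSJ recipe from the literature, not necessarily this repo's exact config:

```python
import numpy as np

def large_scale_jitter(image, out_size=1024, scale_range=(0.1, 2.0), rng=None):
    """Sketch of large-scale jitter: random resize, then pad/crop to a
    fixed out_size x out_size canvas. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    s = rng.uniform(*scale_range)
    h, w = image.shape[:2]
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    # Nearest-neighbor resize via index sampling (stand-in for a real resize op).
    ys = (np.arange(nh) * h / nh).astype(int)
    xs = (np.arange(nw) * w / nw).astype(int)
    resized = image[ys][:, xs]
    # Pad (with zeros) or crop to the fixed canvas size.
    canvas = np.zeros((out_size, out_size) + image.shape[2:], dtype=image.dtype)
    ch, cw = min(nh, out_size), min(nw, out_size)
    canvas[:ch, :cw] = resized[:ch, :cw]
    return canvas
```

In Detectron2 this corresponds to a ResizeScale-style augmentation followed by a fixed-size crop; the fixed large canvas is the main source of the extra GPU memory.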
Training Mask R-CNN with large-scale jitter (LSJ):
python train_net.py --config configs/coco_openvoc_LSJ.yaml --num-gpus=8 --use_lsj
Training Mask R-CNN without large-scale jitter (LSJ):
python train_net.py --config configs/coco_openvoc_mask_rcnn.yaml --num-gpus=8
If you use VL-PLM in your work or wish to refer to the results published in this repo, please cite our paper:
```
@inproceedings{zhao2022exploiting,
  title={Exploiting unlabeled data with vision and language models for object detection},
  author={Zhao, Shiyu and Zhang, Zhixing and Schulter, Samuel and Zhao, Long and Vijay Kumar, BG and Stathopoulos, Anastasis and Chandraker, Manmohan and Metaxas, Dimitris N},
  booktitle={ECCV},
  pages={159--175},
  year={2022},
  organization={Springer}
}
```