
TransVG

This repository is modified from the official djiajunustc/TransVG (github.com) codebase, by Hao Bian.

Reproduction

  1. First, set up the environment following the official instructions. The data split files must be placed under CODE_ROOT/data; download them from [Gdrive] or [One Drive]. The expected layout is shown below, followed by an optional sanity check:

    CODE_ROOT/data
    ├── flickr
    │   ├── corpus.pth
    │   ├── flickr_test.pth
    │   ├── flickr_train.pth
    │   └── flickr_val.pth
    ├── gref
    │   ├── corpus.pth
    │   ├── gref_train.pth
    │   └── gref_val.pth
    ├── gref_umd
    │   ├── corpus.pth
    │   ├── gref_umd_test.pth
    │   ├── gref_umd_train.pth
    │   └── gref_umd_val.pth
    ├── referit
    │   ├── corpus.pth
    │   ├── referit_test.pth
    │   ├── referit_train.pth
    │   ├── referit_trainval.pth
    │   └── referit_val.pth
    ├── unc
    │   ├── corpus.pth
    │   ├── unc_testA.pth
    │   ├── unc_testB.pth
    │   ├── unc_train.pth
    │   ├── unc_trainval.pth
    │   └── unc_val.pth
    └── unc+
        ├── corpus.pth
        ├── unc+_testA.pth
        ├── unc+_testB.pth
        ├── unc+_train.pth
        ├── unc+_trainval.pth
        └── unc+_val.pth
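
    As a quick sanity check (not part of the original instructions), you can confirm that the split files listed above are in place; the minimal bash sketch below simply mirrors a representative subset of the tree:

    # Run from CODE_ROOT; checks a representative subset of the split files listed above.
    for split_file in flickr/flickr_train.pth flickr/flickr_val.pth flickr/flickr_test.pth \
                      unc/unc_train.pth unc/unc_val.pth unc/unc_testA.pth unc/unc_testB.pth; do
        [ -f "data/$split_file" ] || echo "missing: data/$split_file"
    done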
  2. Place the raw data under CODE_ROOT/ln_data in the following layout (a symbolic link works, e.g. ln -s src_data_path CODE_ROOT/ln_data; on the shared server the source data lives in /cto_studio/datastory/phrase_grounding/dataset). A concrete symlink example follows the tree:

    ln_data/
    ├── data.tar
    ├── MSCOCO
    │   └── train2014
    ├── RefCOCO
    │   ├── refcoco
    │   │   ├── instances.json
    │   │   ├── refs(google).p
    │   │   └── refs(unc).p
    │   ├── refcoco+
    │   │   ├── instances.json
    │   │   └── refs(unc).p
    │   └── refcocog
    │       ├── instances.json
    │       ├── refs(google).p
    │       └── refs(umd).p
    ├── Flickr
    ├── Flickr_Entities
    ├── VG
    └── ZSG
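
    For example, assuming the shared directory already matches the layout above, the symbolic link mentioned in step 2 can be created as follows (a sketch, run from CODE_ROOT; adjust the source path if your data lives elsewhere):

    # Assumption: the shared directory already matches the layout above.
    ln -s /cto_studio/datastory/phrase_grounding/dataset ./ln_data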
    
    
  3. Place the pretrained DETR models under CODE_ROOT/checkpoints; download them from [Gdrive] (or use the download script, sketched after the listing):

    checkpoints/
    ├── detr-r50-referit.pth
    ├── detr-r50-unc.pth
    └── download_detr_model.sh
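
    If the checkpoints have not been downloaded yet, the repository ships a download_detr_model.sh script; assuming it fetches the two checkpoints listed above, it can be run as (a sketch):

    # Assumption: download_detr_model.sh fetches detr-r50-referit.pth and detr-r50-unc.pth.
    cd checkpoints && bash download_detr_model.sh && cd ..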
  4. A helper script is provided so that training can be launched the same way on the different datasets:

    GPUS=0          # GPU device(s) to use
    DATASET=refcoco # other options: refcoco+, refcocog_g, refcocog_u
    sh train_dataset.sh $GPUS $DATASET
  5. A helper script is likewise provided for testing on the different datasets (a combined train-and-test loop is sketched below):
    GPUS=0          # GPU device(s) to use
    DATASET=refcoco # other options: refcoco+, refcocog_g, refcocog_u
    sh test_dataset.sh $GPUS $DATASET
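
    The two helper scripts can also be chained to train and then test every dataset in one go; a minimal bash sketch (dataset names follow the comments above, and each run assumes the corresponding splits from step 1 are in place):

    # Train and then test each dataset sequentially on GPU 0.
    GPUS=0
    for DATASET in refcoco refcoco+ refcocog_g refcocog_u; do
        sh train_dataset.sh $GPUS $DATASET
        sh test_dataset.sh $GPUS $DATASET
    done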
  6. The reproduced result files are saved under CODE_ROOT/outputs; for example, on the refcoco dataset:
| Model               | val                | testA              |
|---------------------|--------------------|--------------------|
| refcoco (ResNet-50) | 0.8118081180811808 | 0.8252032520325203 |

Installation

  1. Clone this repository.

    git clone https://github.com/djiajunustc/TransVG
    
  2. Prepare for the running environment.

    You can either use the docker image we provide, or follow the installation steps in ReSC.

    docker pull djiajun1206/vg:pytorch1.5
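
    To start a container from that image with GPU access and the repository mounted, something like the following works (a sketch; the --gpus flag requires the NVIDIA container toolkit, and the mount point and shared-memory size are assumptions, not part of the original instructions):

    # Run from the repository root; mounts the code at /workspace inside the container.
    docker run --gpus all -it --shm-size=8g -v "$(pwd)":/workspace -w /workspace djiajun1206/vg:pytorch1.5 bash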
    

Getting Started

Please refer to GETTING_STARTED.md to learn how to prepare the datasets and pretrained checkpoints.

Model Zoo

The models with ResNet-50 and ResNet-101 backbones are available in [Gdrive].

| Backbone | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg g-val | RefCOCOg u-val | RefCOCOg u-test | ReferItGame val | ReferItGame test |
|----------|-------------|---------------|---------------|--------------|----------------|----------------|----------------|----------------|-----------------|-----------------|------------------|
| R-50     | 80.5        | 83.2          | 75.2          | 66.4         | 70.5           | 57.7           | 66.4           | 67.9           | 67.4            | 71.6            | 69.3             |
| R-101    | 80.8        | 83.4          | 76.9          | 68.0         | 72.5           | 59.2           | 68.0           | 68.7           | 68.0            | -               | -                |

Training and Evaluation

  1. Training

    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --batch_size 8 --lr_bert 0.00001 --aug_crop --aug_scale --aug_translate --backbone resnet50 --detr_model ./checkpoints/detr-r50-referit.pth --bert_enc_num 12 --detr_enc_num 6 --dataset referit --max_query_len 20 --output_dir outputs/referit_r50 --epochs 90 --lr_drop 60
    

    We recommend setting --max_query_len 40 for RefCOCOg, and --max_query_len 20 for the other datasets.

    We recommend setting --epochs 180 (with --lr_drop 120 accordingly) for RefCOCO+, and --epochs 90 (with --lr_drop 60 accordingly) for the other datasets; an example is sketched below.
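
    For instance, a RefCOCO+ training run with those recommendations could look like the following sketch, adapted from the ReferItGame command above; the choice of ./checkpoints/detr-r50-unc.pth as the pretrained DETR weights and the output directory name are assumptions:

    # RefCOCO+ ("unc+"): 180 epochs with the LR drop at 120 and max query length 20.
    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --use_env train.py --batch_size 8 --lr_bert 0.00001 --aug_crop --aug_scale --aug_translate --backbone resnet50 --detr_model ./checkpoints/detr-r50-unc.pth --bert_enc_num 12 --detr_enc_num 6 --dataset unc+ --max_query_len 20 --output_dir outputs/unc+_r50 --epochs 180 --lr_drop 120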

  2. Evaluation

    CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --use_env eval.py --batch_size 32 --num_workers 4 --bert_enc_num 12 --detr_enc_num 6 --backbone resnet50 --dataset referit --max_query_len 20 --eval_set test --eval_model ./outputs/referit_r50/best_checkpoint.pth --output_dir ./outputs/referit_r50
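
    Other splits can be evaluated by changing --eval_set; the sketch below loops over the RefCOCO splits, assuming a trained checkpoint at ./outputs/unc_r50/best_checkpoint.pth (an assumed path):

    # Evaluate the RefCOCO ("unc") val/testA/testB splits with one run per split.
    for SPLIT in val testA testB; do
        CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --use_env eval.py --batch_size 32 --num_workers 4 --bert_enc_num 12 --detr_enc_num 6 --backbone resnet50 --dataset unc --max_query_len 20 --eval_set $SPLIT --eval_model ./outputs/unc_r50/best_checkpoint.pth --output_dir ./outputs/unc_r50
    done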
    

Acknowledgement

This codebase is partially based on ReSC and DETR.
