PolyLoss

Source code for Universal Weighting Metric Learning for Cross-Modal Matching (CVPR 2020).

Universal Weighting Metric Learning for Cross-Modal Matching

This repository contains the PyTorch implementation of our paper Universal Weighting Metric Learning for Cross-Modal Matching, accepted by CVPR 2020. It is built on top of the SCAN codebase in PyTorch.

Requirements and Installation

We recommend the following dependencies:

import nltk
nltk.download('punkt')  # Punkt sentence tokenizer

Data preparation

Download the dataset files. We use the splits produced by Andrej Karpathy. The raw images can be downloaded from their original sources here, here and here.

The precomputed image features are extracted from the raw images using the bottom-up attention model from here. More details about the image feature extraction can also be found in SCAN (https://github.com/kuanghuei/SCAN).

Data files can be found in SCAN (we use the same dataset splits as theirs):

wget https://scanproject.blob.core.windows.net/scan-data/data_no_feature.zip
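The precomputed features follow SCAN's .npy format: one array per split, with each image represented by 36 region features of dimension 2048 from the bottom-up attention model. A minimal sketch of reading and writing arrays in that layout (the file name and the toy data are illustrative, not files shipped in the zip; the 36 × 2048 shape is assumed from SCAN's setup):

```python
import numpy as np

# Toy stand-in for a precomputed feature file: 5 "images", each with
# 36 region features of dimension 2048 (shape assumed from SCAN's setup).
feats = np.random.rand(5, 36, 2048).astype(np.float32)
np.save("train_ims_demo.npy", feats)  # illustrative file name

# Loading gives back an (n_images, n_regions, feat_dim) array.
loaded = np.load("train_ims_demo.npy")
print(loaded.shape)  # (5, 36, 2048)
print(loaded.dtype)  # float32
```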

Training

Arguments used to train Flickr30K models and MSCOCO models are similar to those of SCAN:

For Flickr30K:

python train.py --data_path "$DATA_PATH" --data_name f30k_precomp --vocab_path "$VOCAB_PATH" --logger_name runs/f30k_scan/log --model_name runs/f30k_scan/log --max_violation --bi_gru --agg_func=Mean --cross_attn=i2t --lambda_softmax=4

1. You can change the parameters in model.py (lines 337-401) to train on other datasets.

2. You can also apply our PolyLoss function (polyloss.py) to other cross-modal retrieval methods.
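The core idea behind PolyLoss is universal weighting: positive and negative pairs are weighted by polynomial functions of their similarity scores, so harder pairs contribute more to the loss. The sketch below illustrates that idea in NumPy with assumed coefficients and a simple margin rule; it is not the paper's exact formulation (see polyloss.py for that):

```python
import numpy as np

def poly_weight(s, coeffs):
    # Polynomial weight G(s) = a0 + a1*s + a2*s^2 + ... of a similarity score.
    return sum(a * s ** i for i, a in enumerate(coeffs))

def polyloss_sketch(sim, pos_coeffs=(0.0, -1.0), neg_coeffs=(0.0, 1.0), margin=0.2):
    """Toy universal-weighting loss over an (n, n) image-text similarity matrix.

    The diagonal holds matched (positive) pairs. Negatives that violate the
    margin are weighted by one polynomial of their similarity, positives by
    another. Coefficients here are illustrative, not the paper's values.
    """
    n = sim.shape[0]
    pos = np.diag(sim)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if sim[i, j] + margin > pos[i]:  # informative (hard) negative
                loss += poly_weight(sim[i, j], neg_coeffs) - poly_weight(pos[i], pos_coeffs)
    return loss / n

sim = np.array([[0.9, 0.85],
                [0.2, 0.8]])
print(polyloss_sketch(sim))  # ~0.875 for these toy scores
```

With these coefficients the weights reduce to a plain hinge-style term; richer polynomials let the gradient magnitude adapt to how hard each pair is, which is the point of the universal weighting framework.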

Pretrained model

If you don't want to train from scratch, you can download the pretrained model (Flickr30K) from Dropbox here.

rsum: 460.7
Average i2t Recall: 84.9
Image to text (R@1, R@5, R@10, medr, meanr): 69.4, 89.9, 95.4, 1.0, 4.1
Average t2i Recall: 68.7
Text to image (R@1, R@5, R@10, medr, meanr): 47.5, 75.5, 83.1, 2.0, 12.4
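These numbers follow SCAN's evaluation convention (assumed here): each direction reports R@1, R@5, and R@10 plus median and mean rank, and the "Average Recall" is the mean of the three recall values. A quick sanity check:

```python
# R@1, R@5, R@10 for each retrieval direction, from the results above.
i2t = [69.4, 89.9, 95.4]  # image-to-text
t2i = [47.5, 75.5, 83.1]  # text-to-image

avg_i2t = round(sum(i2t) / 3, 1)
avg_t2i = round(sum(t2i) / 3, 1)
print(avg_i2t)  # 84.9, matching "Average i2t Recall"
print(avg_t2i)  # 68.7, matching "Average t2i Recall"
```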

Reference

If you find this code useful, please cite the following paper:

@inproceedings{wei2020universal,
  title={Universal Weighting Metric Learning for Cross-Modal Matching},
  author={Wei, Jiwei and Xu, Xing and Yang, Yang and Ji, Yanli and Wang, Zheng and Shen, Heng Tao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={13005--13014},
  year={2020}
}


License: Apache License 2.0

