
Instance-level Heterogeneous Domain Adaptation


Instance-level Heterogeneous Domain Adaptation for Limited-labeled Sketch-to-Photo Retrieval

About this work: pdf

Although sketch-to-photo retrieval has a wide range of applications, it is costly to obtain paired and rich-labeled ground truth. In contrast, photo retrieval data is easier to acquire. Therefore, previous works pre-train their models on rich-labeled photo retrieval data (i.e., the source domain) and then fine-tune them on the limited-labeled sketch-to-photo retrieval data (i.e., the target domain). However, without co-training source and target data, source-domain knowledge may be forgotten during fine-tuning, while simply co-training them may cause negative transfer due to domain gaps. Moreover, the identity label spaces of the source and target data are generally disjoint, so conventional category-level Domain Adaptation (DA) is not directly applicable. To address these issues, we propose an Instance-level Heterogeneous Domain Adaptation (IHDA) framework. We apply the fine-tuning strategy for identity label learning, aiming to transfer instance-level knowledge in an inductive transfer manner. Meanwhile, labeled attributes from the source data are selected to form a shared label space for the source and target domains. Guided by the shared attributes, DA is utilized to bridge the cross-dataset and heterogeneous domain gaps, transferring instance-level knowledge in a transductive transfer manner. Experiments show that our method sets a new state of the art on three sketch-to-photo image retrieval benchmarks without extra annotations, which opens the door to training more effective models on limited-labeled heterogeneous image retrieval tasks.

Getting Started

Running Environment

The code was tested on Ubuntu 18.04, with Anaconda Python 3.6 and PyTorch v1.1.0.
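
As a quick sanity check that your environment matches this tested configuration (a minimal sketch; nothing in the code requires it):

```python
# Print the environment versions to compare against the tested setup.
import sys
import torch

print("Python:", sys.version.split()[0])       # tested with 3.6
print("PyTorch:", torch.__version__)           # tested with 1.1.0
print("CUDA available:", torch.cuda.is_available())
```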

Datasets

Pre-processed versions of the PKU-Sketch and QMUL-Shoes datasets are provided under processed_data/.

For the PKU-Sketch dataset, you can simply use the provided files for evaluation. Note that we evaluate PKU-Sketch with 10 repeated cross-validation splits; the files we provide correspond to one of these splits. To randomly generate new splits, download the PKU-Sketch dataset, set the corresponding path in config/config_pku.py, and then run processed_data/pre_process_pku.py.
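
For reference, drawing one such random split amounts to shuffling identities and partitioning them (a minimal, hypothetical sketch; the identity count and train/test ratio are assumptions here, and the actual logic lives in processed_data/pre_process_pku.py):

```python
# Hypothetical illustration of one random identity split; the real
# implementation is processed_data/pre_process_pku.py.
import random

ids = list(range(1, 201))       # assumed number of PKU-Sketch identities
random.shuffle(ids)
train_ids = sorted(ids[:150])   # assumed train/test ratio
test_ids = sorted(ids[150:])
print(len(train_ids), "train /", len(test_ids), "test identities")
```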

For the QMUL-Shoes dataset, please unzip the files under processed_data/sbir/.

Model Download Link:

We provide a set of trained models available for download: zap.t, market.t, pku_best_96.t, and sbir_best_69.t. Please download them and put them under save_model/.
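
Once downloaded, a checkpoint can be inspected with standard PyTorch calls (a sketch only; the internal structure of the .t files is an assumption — see Testing.ipynb for the intended usage):

```python
# Peek inside a downloaded checkpoint; the stored structure is assumed.
import torch

ckpt = torch.load('save_model/pku_best_96.t', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # first few stored entries
```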

Run Testing

Run Testing.ipynb to reproduce the evaluation results.

Run Training

Training PKU-Sketch

  1. Run train_market.py (first download the Market-1501 dataset and set the corresponding path in config/config_market.py).
  2. Run train_pku_market.py (you can skip step 1 if you use the pretrained model market.t).

The rank-1 accuracy may reach 96%, which is much higher than what our paper reports, but be sure to train and evaluate over multiple random splits to obtain a reliable overall performance.
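
For example, the overall number is simply the mean (and spread) of rank-1 over the repeated splits (a minimal sketch; the values below are placeholders, not measured results):

```python
# Aggregate rank-1 over repeated random splits (placeholder values).
from statistics import mean, stdev

rank1_per_split = [96.0, 92.0, 90.0, 94.0, 88.0]  # one entry per split
print(f"rank-1: {mean(rank1_per_split):.1f}% +/- {stdev(rank1_per_split):.1f}")
```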

Training QMUL-Shoes

  1. Run train_zap.py (first download the Zap50k dataset and set the corresponding path in config/config_zap.py).
  2. Run train_zap_sbir.py (you can skip step 1 if you use the pretrained model zap.t).

The rank-1 accuracy may fall between 64% and 69%, varying from one machine to another, but the overall performance should be better than that of competing methods.
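
If you need more stable numbers across runs, fixing the random seeds can help (a sketch; the training scripts may already handle seeding, and some CUDA ops in PyTorch 1.1 remain non-deterministic regardless):

```python
# Fix random seeds to reduce run-to-run variance.
import random
import numpy as np
import torch

SEED = 0
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```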

License

The code is distributed under the MIT License. See LICENSE for more information.

Citation

@article{yang2020instance,
  title={Instance-level Heterogeneous Domain Adaptation for Limited-labeled Sketch-to-Photo Retrieval},
  author={Yang, Fan and Wu, Yang and Wang, Zheng and Li, Xiang and Sakti, Sakriani and Nakamura, Satoshi},
  journal={IEEE Transactions on Multimedia},
  year={2020}
}

Acknowledgements (parts of our code are borrowed from other open-source repositories)
