
This is an anonymous GitHub for a double-blind submission


SFDA-Domain-Adaptation-without-Source-Data

Prerequisites

  • Ubuntu 18.04
  • Python 3.6+
  • PyTorch 1.5+ (a recent version is recommended)
  • NVIDIA GPU (>= 12 GB memory)
  • CUDA 10.0 (optional)
  • CUDNN 7.5 (optional)

Getting Started

Installation

  • Configure a virtual (Anaconda) environment
conda create -n env_name python=3.6
source activate env_name
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
  • Install Python libraries
conda install -c conda-forge matplotlib
conda install -c anaconda yaml
conda install -c anaconda pyyaml 
conda install -c anaconda scipy
conda install -c anaconda scikit-learn 
conda install -c conda-forge easydict
pip install easydl
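  • (Optional) A minimal sanity check, assuming a standard PyTorch install, to confirm that the versions match the prerequisites and that the GPU is visible:
# sanity_check.py -- quick environment check (illustrative, not part of this repository)
import torch
import torchvision

print("PyTorch:", torch.__version__)          # expect 1.5+
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    # The experiments assume a GPU with >= 12 GB of memory.
    print("GPU: {} ({:.1f} GB)".format(props.name, props.total_memory / 1024**3))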

Download this repository

  • We provide two versions of the repository (with dataset / without dataset) for flexible experimentation

  • Full SFDA repository (with dataset): download link

    • In this case, skip directly to the training and testing steps
  • Visualization of repository structure (Full SFDA repository)
|-- APM_update.py
|-- SFDA_test.py
|-- SFDA_train.py
|-- config.py
|-- data.py
|-- lib.py
|-- net.py
|-- office-train-config.yaml
|-- data
|   `-- office
|       |-- domain_adaptation_images
|       |   |-- amazon
|       |   |   `-- images
|       |   |-- dslr
|       |   |   `-- images
|       |   `-- webcam
|       |       `-- images
|       |-- amazon_31_list.txt
|       |-- dslr_31_list.txt
|       `-- webcam_31_list.txt
|-- pretrained_weights
|   `-- 02
|       `-- domain02accBEST_model_checkpoint.pth.tar
`-- source_pretrained_weights
    `-- 02
        `-- model_checkpoint.pth.tar

Download dataset

  • Download the Office31 dataset (link) and unzip it into ./data/office
  • Download the text files (link), i.e., amazon_31_list.txt, dslr_31_list.txt, and webcam_31_list.txt, into ./data/office
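  • A short sketch, assuming the layout shown in the repository tree above, to verify the download:
# check_office31.py -- verify the dataset layout above (illustrative sketch)
import os

ROOT = "./data/office"

for domain in ["amazon", "dslr", "webcam"]:
    img_dir = os.path.join(ROOT, "domain_adaptation_images", domain, "images")
    list_file = os.path.join(ROOT, domain + "_31_list.txt")
    assert os.path.isdir(img_dir), "missing image directory: " + img_dir
    assert os.path.isfile(list_file), "missing list file: " + list_file
    with open(list_file) as f:
        print("{}: {} lines in {}".format(domain, sum(1 for _ in f), list_file))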

Download source-pretrained parameters (Fs and Cs of Figure 2 in our main paper)

  • Download the source-pretrained parameters (link) into ./source_pretrained_weights/[scenario_number]

ex) Source-pretrained parameters of the A[0] -> W[2] scenario should be located in ./source_pretrained_weights/02
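The downloaded file is a standard PyTorch checkpoint, so it can be inspected with torch.load. The snippet below is illustrative only; the keys inside the checkpoint are an assumption, not something this repository guarantees.

# inspect_checkpoint.py -- peek inside a source-pretrained checkpoint (illustrative)
import torch

# Path for the A[0] -> W[2] scenario, following the layout above.
ckpt_path = "./source_pretrained_weights/02/model_checkpoint.pth.tar"

# map_location="cpu" lets this run on a machine without a GPU.
checkpoint = torch.load(ckpt_path, map_location="cpu")

# .pth.tar checkpoints are typically dicts of weights and metadata;
# the exact keys here are an assumption.
if isinstance(checkpoint, dict):
    print("checkpoint keys:", list(checkpoint.keys()))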

Training and testing

  • The arguments required for training and testing are contained in office-train-config.yaml
  • Here is an example of running an experiment on Office31 (default: A -> W)
  • The scenario can be changed by editing the source: 0 and target: 2 fields in office-train-config.yaml (see the sketch after this list)
  • We will update this repository with the full version of our framework, including settings for OfficeHome and VisDA-C
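The scenario switch can also be scripted. A minimal sketch, assuming source and target are top-level keys in the YAML file (as the bullet above suggests), using the pyyaml package installed earlier:

# set_scenario.py -- switch the Office31 scenario programmatically (illustrative)
import yaml

CONFIG = "office-train-config.yaml"

with open(CONFIG) as f:
    cfg = yaml.safe_load(f)

# Field names follow the README; 0 = amazon (A), 2 = webcam (W).
cfg["source"] = 0
cfg["target"] = 2

with open(CONFIG, "w") as f:
    yaml.safe_dump(cfg, f)
print("scenario set to {}{}".format(cfg["source"], cfg["target"]))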

Training

  • Run the following command
python SFDA_train.py --config office-train-config.yaml

Testing (on pretrained model)

  • As a first step, download the SFDA pretrained parameters (link) into ./pretrained_weights/[scenario_number]

    ex) SFDA pretrained parameters of the A[0] -> W[2] scenario should be located in ./pretrained_weights/02

  • Alternatively, run the training code to obtain the pretrained weights

  • Run the following command

python SFDA_test.py --config office-train-config.yaml
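To reproduce the full Office31 table below, each of the six scenarios has to be run once. A hedged sketch that loops over them, assuming the index convention above (0 = amazon and 2 = webcam per the A[0] -> W[2] example; 1 = dslr is our assumption):

# run_all_office31.py -- evaluate all six Office31 transfers (illustrative sketch)
import subprocess
import yaml

CONFIG = "office-train-config.yaml"

# (source, target) pairs in the order of the results table below:
# A->W, D->W, W->D, A->D, D->A, W->A
SCENARIOS = [(0, 2), (1, 2), (2, 1), (0, 1), (1, 0), (2, 0)]

for src, tgt in SCENARIOS:
    with open(CONFIG) as f:
        cfg = yaml.safe_load(f)
    cfg["source"], cfg["target"] = src, tgt   # field names follow the README
    with open(CONFIG, "w") as f:
        yaml.safe_dump(cfg, f)
    # Pretrained weights for, e.g., scenario "02" live in ./pretrained_weights/02
    subprocess.run(["python", "SFDA_test.py", "--config", CONFIG], check=True)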

Experimental results on Office31

  • Results using the provided code
              A→W    D→W    W→D    A→D    D→A    W→A    Avg
Accuracy (%)  91.06  97.35  98.99  91.96  71.60  68.62  86.60
