Kim-Minseon / RoCL

Code for the paper "Adversarial Self-supervised Contrastive Learning" (NeurIPS 2020)

Home Page: https://sites.google.com/view/rocl2020


RoCL: Adversarial Self-supervised Contrastive Learning

This repository is the official PyTorch implementation of "Adversarial Self-supervised Contrastive Learning" (NeurIPS 2020) by Minseon Kim, Jihoon Tack, and Sung Ju Hwang.

Requirements

This code currently requires the following packages:

Training

To train the model(s) in the paper, first create a Data folder inside the RoCL directory, then launch distributed training:

```
mkdir ./Data
python -m torch.distributed.launch --nproc_per_node=2 rocl_train.py --ngpu 2 --batch-size=256 --model='ResNet18' --k=7 --loss_type='sim' --advtrain_type='Rep' --attack_type='linf' --name=<name-of-the-file> --regularize_to='other' --attack_to='other' --train_type='contrastive' --dataset='cifar-10'
```
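
For intuition, the block below is a minimal, self-contained sketch of the kind of objective this command optimizes: an instance-wise L-inf attack that maximizes a contrastive (NT-Xent) loss, followed by a contrastive update that treats the adversarial example as an additional positive view. This is a hedged illustration, not the repository's code; the names (`SmallEncoder`, `nt_xent`, `instance_attack`) and all hyperparameters are illustrative placeholders.

```python
# Sketch of adversarial self-supervised contrastive training (illustrative, not the repo's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder + projection head standing in for ResNet-18 + MLP head."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def nt_xent(z1, z2, t=0.5):
    """Standard NT-Xent (SimCLR) loss for two batches of L2-normalized projections."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                        # (2N, d)
    sim = z @ z.t() / t                                   # temperature-scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def instance_attack(model, x, x_pos, eps=8/255, alpha=2/255, steps=7):
    """PGD in an L-inf ball that *maximizes* the contrastive loss of x against its positive view."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nt_xent(model(x + delta), model(x_pos))
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    return (x + delta).clamp(0, 1).detach()

# One training step on random stand-in data (replace with two augmented CIFAR-10 views).
model = SmallEncoder()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x1, x2 = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)

x_adv = instance_attack(model, x1, x2)                    # adversarial view of x1
loss = nt_xent(model(x1), model(x2)) + nt_xent(model(x_adv), model(x2))
opt.zero_grad()
loss.backward()
opt.step()
```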

Evaluation

To evaluate a trained model with linear evaluation and robustness tests, run:

```
./total_process.sh test <checkpoint-load> <name> <model type='ResNet18' or 'ResNet50'> <learning rate=0.1> <dataset='cifar-10' or 'cifar-100'>
```
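
As a rough illustration of what this evaluation measures, the sketch below trains a linear classifier on frozen encoder features and then reports accuracy under an L-inf PGD attack on the combined encoder + linear model. It is a simplified sketch under those assumptions; the helper names (`linear_eval_step`, `pgd_accuracy`) are made up for illustration and are not this repository's API.

```python
# Sketch of linear evaluation and robust-accuracy measurement (illustrative, not the repo's code).
import torch
import torch.nn.functional as F

def linear_eval_step(encoder, linear, opt, x, y):
    """Train only the linear head on frozen encoder features."""
    encoder.eval()
    with torch.no_grad():
        feats = encoder(x)
    loss = F.cross_entropy(linear(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def pgd_accuracy(encoder, linear, x, y, eps=8/255, alpha=2/255, steps=20):
    """Robust accuracy: PGD maximizes the classification loss of encoder + linear head."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        logits = linear(encoder(x + delta))
        grad = torch.autograd.grad(F.cross_entropy(logits, y), delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    with torch.no_grad():
        pred = linear(encoder((x + delta).clamp(0, 1))).argmax(dim=1)
    return (pred == y).float().mean().item()
```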

Results

Our model achieves the following performance:

Classification and robustness on CIFAR-10:

| Model name       | Accuracy | Robustness |
| ---------------- | -------- | ---------- |
| RoCL (ResNet-18) | 83.71%   | 40.27%     |

Citation

@inproceedings{kim2020adversarial,
  title={Adversarial Self-Supervised Contrastive Learning},
  author={Minseon Kim and Jihoon Tack and Sung Ju Hwang},
  booktitle = {Advances in Neural Information Processing Systems},
  year={2020}
}

