wpwow / CW-RGP


An Investigation into Whitening Loss for Self-supervised Learning

This is a PyTorch implementation of the paper.
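For intuition, here is a minimal NumPy sketch of the kind of whitening loss the paper investigates: the embeddings of each augmented view are ZCA-whitened over the batch, and positive pairs are pulled together with an MSE term (in the style of W-MSE). The function names and toy data are illustrative, not the repository's actual implementation.

```python
import numpy as np

def zca_whiten(z, eps=1e-5):
    """ZCA-whiten a batch of embeddings z with shape (N, D)."""
    z = z - z.mean(axis=0, keepdims=True)                # center over the batch
    cov = z.T @ z / (len(z) - 1)                         # D x D covariance
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(cov.shape[1]))
    whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T     # V Λ^{-1/2} V^T (ZCA)
    return z @ whitener                                  # covariance ≈ identity

def whitening_mse_loss(z1, z2):
    """Whiten each view over the batch, then MSE between positive pairs."""
    return np.mean((zca_whiten(z1) - zca_whiten(z2)) ** 2)

# Toy positive pairs: the second view is a small perturbation of the first.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(256, 8))
z2 = z1 + 0.1 * rng.normal(size=(256, 8))
loss = whitening_mse_loss(z1, z2)
```

Whitening constrains the embedding covariance to the identity, which rules out the collapsed solution in which all embeddings coincide.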

Requirements

Experiments

The code covers the experiments in Section 4.1 of the paper.

Experimental Setup for Comparison of Baselines

The datasets include CIFAR-10, CIFAR-100, STL-10, and Tiny ImageNet; the setup strictly follows the W-MSE paper.

The unsupervised pretraining scripts for the small and medium datasets are in scripts/base.sh.

The results are shown in the following table:

| Method | CIFAR-10 top-1 | CIFAR-10 5-nn | CIFAR-100 top-1 | CIFAR-100 5-nn | STL-10 top-1 | STL-10 5-nn | Tiny-ImageNet top-1 | Tiny-ImageNet 5-nn |
|---|---|---|---|---|---|---|---|---|
| CW-RGP 2 | 91.92 | 89.54 | 67.51 | 57.35 | 90.76 | 87.34 | 49.23 | 34.04 |
| CW-RGP 4 | 92.47 | 90.74 | 68.26 | 58.67 | 92.04 | 88.95 | 50.24 | 35.99 |
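The 5-nn columns report the accuracy of a k-nearest-neighbor classifier (k=5) run on the frozen embeddings. A minimal sketch of such an evaluation, assuming the embeddings and labels are already extracted (cosine similarity is one common choice of metric; the function name is illustrative):

```python
import numpy as np

def knn_accuracy(train_z, train_y, test_z, test_y, k=5):
    """Classify test embeddings by majority vote of the k nearest
    training embeddings under cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    train_n = train_z / np.linalg.norm(train_z, axis=1, keepdims=True)
    test_n = test_z / np.linalg.norm(test_z, axis=1, keepdims=True)
    sims = test_n @ train_n.T                       # (n_test, n_train)
    idx = np.argsort(-sims, axis=1)[:, :k]          # k nearest neighbors
    votes = train_y[idx]                            # neighbor labels
    preds = np.array([np.bincount(v).argmax() for v in votes])
    return float((preds == test_y).mean())
```

In practice the embeddings come from the frozen pretrained encoder, and accuracy is computed over the full test split.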

Experimental Setup for Large-Scale Classification

The unsupervised pretraining and linear classification scripts for ImageNet are in scripts/ImageNet.sh.

Pre-trained Models

Our pretrained ResNet-50 models:

| pretrain epochs | batch size | pretrain ckpt | lincls ckpt | top-1 acc. |
|---|---|---|---|---|
| 100 | 512 | train | lincls | 69.7 |
| 200 | 512 | train | lincls | 71.0 |

Transferring to Object Detection

Object detection transfer follows the MoCo setup; please see moco/detection.

Transfer learning results of CW-RGP (pretrained for 200 epochs on ImageNet):

| downstream task | $AP_{50}$ | $AP$ | $AP_{75}$ | ckpt | log |
|---|---|---|---|---|---|
| VOC 07+12 detection | $82.2_{±0.07}$ | $57.2_{±0.10}$ | $63.8_{±0.11}$ | voc_ckpt | voc_log |
| COCO detection | $60.5_{±0.28}$ | $40.7_{±0.14}$ | $44.1_{±0.14}$ | coco_ckpt | coco_log |
| COCO instance seg. | $57.3_{±0.16}$ | $35.5_{±0.12}$ | $37.9_{±0.14}$ | coco_ckpt | coco_log |

Languages

Python 98.3%, Shell 1.7%