
CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark

Home Page:https://openmixup.readthedocs.io


OpenMixup

πŸ“˜Documentation | πŸ› οΈInstallation | πŸš€Model Zoo | πŸ‘€Awesome Mixup | πŸ”Awesome MIM | πŸ†•News

Introduction

The main branch works with PyTorch 1.8 (required by some self-supervised methods) or higher (we recommend PyTorch 1.10). You can still use PyTorch 1.6 for supervised classification methods.

OpenMixup is an open-source PyTorch toolbox for supervised, self-, and semi-supervised visual representation learning, with a focus on mixup-related methods.

Major Features
  • Modular Design. OpenMixup follows a code architecture similar to OpenMMLab projects, decomposing the framework into various components, so users can easily build a customized model by combining different modules. OpenMixup is also portable to OpenMMLab projects (e.g., MMSelfSup).

  • All in One. OpenMixup provides popular backbones, mixup methods, semi-supervised, and self-supervised algorithms. Users can perform image classification (CNN & Transformer) and self-supervised pre-training (contrastive and autoregressive) under the same setting.

  • Standard Benchmarks. OpenMixup supports standard benchmarks of image classification, mixup classification, self-supervised evaluation, and provides smooth evaluation on downstream tasks with open-source projects (e.g., object detection and segmentation on Detectron2 and MMSegmentation).
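To make the "mixup methods" that this toolbox centers on concrete, here is a minimal sketch of classic input mixup (Zhang et al.), the basic operation that OpenMixup's methods generalize. This is an illustrative NumPy sketch, not OpenMixup's actual implementation:

```python
import numpy as np

def mixup(x, y, alpha=1.0, rng=None):
    """Classic input mixup: convexly combine a batch with a shuffled
    copy of itself, and mix the one-hot labels the same way."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing ratio sampled from Beta(alpha, alpha)
    perm = rng.permutation(len(x))    # random pairing within the batch
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    mixed_y = lam * y + (1.0 - lam) * y[perm]
    return mixed_x, mixed_y, lam

# Example: a batch of 4 flattened "images" with one-hot labels for 3 classes.
x = np.arange(8, dtype=np.float64).reshape(4, 2)
y = np.eye(3)[[0, 1, 2, 0]]
mixed_x, mixed_y, lam = mixup(x, y, alpha=0.2, rng=np.random.default_rng(0))
```

Because the mixed labels are a convex combination of one-hot vectors, each row of `mixed_y` still sums to 1 and can be trained against with a soft cross-entropy loss.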

What's New

[2022-07-30] OpenMixup v0.2.5 is released (issue #10).

Installation

Below are quick installation steps for development:

conda create -n openmixup python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip3 install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop

Please refer to install.md for more detailed installation and dataset preparation.

Getting Started

Please see get_started.md for the basic usage of OpenMixup. You can start multi-GPU training with a CONFIG_FILE using the following script. For example:

bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
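Configs in OpenMMLab-style toolboxes such as OpenMixup are plain Python files. The sketch below shows the general shape a CONFIG_FILE takes; the field names and type strings are illustrative assumptions, not the exact OpenMixup schema (see the real configs/ directory for authoritative examples):

```python
# Hypothetical OpenMMLab-style config sketch; field names are illustrative,
# not the exact OpenMixup schema.
model = dict(
    type='MixupClassification',       # assumed algorithm name
    backbone=dict(type='ResNet', depth=50),
    head=dict(type='ClsHead', num_classes=1000),
    alpha=0.2,                        # Beta(alpha, alpha) mixing parameter
)
data = dict(
    samples_per_gpu=64,               # batch size per GPU
    workers_per_gpu=4,                # dataloader workers per GPU
)
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=1e-4)
```

Because configs are ordinary Python, components declared as `dict(type=...)` are resolved by the framework's registry, which is what lets the modular design above swap backbones, heads, and mixup methods freely.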

Then, please see Tutorials for more technical details.

Overview of Model Zoo

Please refer to the Model Zoo for various backbones, mixup methods, and self-supervised algorithms. We also provide the paper lists of Awesome Mixup for your reference. Checkpoints and training logs will be updated soon!

Change Log

Please refer to changelog.md for details and release history.

License

This project is released under the Apache 2.0 license.

Acknowledgement

  • OpenMixup is an open-source project for mixup methods created by researchers in CAIRI AI LAB. We encourage researchers interested in visual representation learning and mixup methods to contribute to OpenMixup!
  • This repo borrows the architecture design and part of the code from MMSelfSup and MMClassification.

Citation

If you find this project useful in your research, please consider citing our repo:

@misc{2022openmixup,
    title={{OpenMixup}: Open Mixup Toolbox and Benchmark for Visual Representation Learning},
    author={Li, Siyuan and Liu, Zicheng and Wu, Di and Li, Stan Z.},
    howpublished = {\url{https://github.com/Westlake-AI/openmixup}},
    year={2022}
}

Contributors

For now, the direct contributors include: Siyuan Li (@Lupin1998), Zicheng Liu (@pone7), and Di Wu (@wudi-bu). We thank contributors from MMSelfSup and MMClassification.

Contact

This repo is currently maintained by Siyuan Li (lisiyuan@westlake.edu.cn) and Zicheng Liu (liuzicheng@westlake.edu.cn).
