There are 48 repositories under the self-supervised-learning topic.
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials.
Easy-to-use speech toolkit including self-supervised learning models, SOTA/streaming ASR with punctuation, streaming TTS with text frontend, a speaker verification system, end-to-end speech translation, and keyword spotting. Won the NAACL 2022 Best Demo Award.
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
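A minimal sketch of how this library (pytorch-metric-learning) is typically wired in, with a placeholder linear embedder standing in for a real network; the loss and miner names follow the project's documented `losses`/`miners` modules, but check the docs for your installed version:

```python
import torch
from pytorch_metric_learning import losses, miners

# Placeholder embedder: any network mapping inputs to embedding vectors.
embedder = torch.nn.Linear(128, 64)

loss_func = losses.TripletMarginLoss(margin=0.2)
miner = miners.MultiSimilarityMiner()

data = torch.randn(32, 128)           # toy batch
labels = torch.randint(0, 10, (32,))  # class labels for metric learning

embeddings = embedder(data)
hard_pairs = miner(embeddings, labels)            # mine informative pairs
loss = loss_func(embeddings, labels, hard_pairs)  # loss on mined pairs
loss.backward()
```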
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
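The heart of SimCLR-style training is the NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of each image. A plain-PyTorch sketch of that loss for illustration, not the repo's official TensorFlow code:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired views z1[i] <-> z2[i]."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                       # cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))  # self-similarity is never a candidate
    # The positive for index i is its other view: i+n (first half) or i-n.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)  # projections of two views
print(nt_xent(z1, z2).item())
```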
OpenMMLab Pre-training Toolbox and Benchmark
A python library for self-supervised learning on images.
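For example, a minimal SimCLR-style forward pass with lightly, assuming a recent version that exports `NTXentLoss` and `SimCLRProjectionHead`; random tensors stand in for two augmented views of the same batch:

```python
import torch
import torchvision
from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead

# Backbone: a ResNet with its classification head removed.
resnet = torchvision.models.resnet18()
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
projection_head = SimCLRProjectionHead(512, 512, 128)
criterion = NTXentLoss()

x0 = torch.randn(16, 3, 224, 224)  # stand-in for augmented view 1
x1 = torch.randn(16, 3, 224, 224)  # stand-in for augmented view 2
z0 = projection_head(backbone(x0).flatten(start_dim=1))
z1 = projection_head(backbone(x1).flatten(start_dim=1))
loss = criterion(z0, z1)
loss.backward()
```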
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Usable implementation of "Bootstrap Your Own Latent" self-supervised learning, from DeepMind, in PyTorch
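The repo's README shows usage roughly like the following (paraphrased; the hyperparameters here are illustrative):

```python
import torch
from torchvision import models
from byol_pytorch import BYOL

resnet = models.resnet50(weights=None)

learner = BYOL(
    resnet,
    image_size=256,
    hidden_layer='avgpool',  # layer whose output serves as the representation
)
opt = torch.optim.Adam(learner.parameters(), lr=3e-4)

images = torch.randn(4, 3, 256, 256)  # stand-in for an unlabeled batch
loss = learner(images)                # BYOL loss between online/target nets
opt.zero_grad()
loss.backward()
opt.step()
learner.update_moving_average()       # EMA update of the target network
```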
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation" and [TPAMI'23] "ViTPose++: Vision Transformer for Generic Body Pose Estimation"
Papers about pretraining and self-supervised learning on Graph Neural Networks (GNN).
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning
A semantically controllable self-supervised learning framework that learns general human representations from massive unlabeled human images, benefiting downstream human-centric tasks
SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]
Code for TKDE paper "Self-supervised learning on graphs: Contrastive, generative, or predictive"
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
A comprehensive list of awesome contrastive self-supervised learning papers.
Train, Evaluate, Optimize, Deploy Computer Vision Models via OpenVINO™
Bio-Computing Platform Featuring Large-Scale Representation Learning and Multi-Task Deep Learning (the "PaddleHelix" bio-computing toolkit)
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
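SimMIM's recipe is deliberately simple: randomly mask image patches, predict their raw pixel values, and compute an L1 loss on the masked patches only. A schematic sketch of that idea with a stand-in encoder, not the official code:

```python
import torch

def simmim_style_loss(patches, encoder, mask_ratio=0.6):
    """Schematic masked-image-modeling loss on pre-patchified input.

    patches: (B, N, D) tensor of flattened image patches.
    encoder: maps (B, N, D) -> (B, N, D) predictions (e.g. a ViT with a
             lightweight linear prediction head; placeholder here).
    """
    B, N, D = patches.shape
    mask = torch.rand(B, N) < mask_ratio  # True = masked patch
    corrupted = patches.clone()
    corrupted[mask] = 0.0                 # mask token (zeros for simplicity)
    pred = encoder(corrupted)
    # L1 reconstruction loss on masked patches only, as in SimMIM.
    return torch.abs(pred[mask] - patches[mask]).mean()

encoder = torch.nn.Linear(768, 768)  # stand-in for a real ViT encoder
patches = torch.randn(2, 196, 768)
print(simmim_style_loss(patches, encoder).item())
```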
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
Awesome Deep Graph Clustering is a collection of SOTA, novel deep graph clustering methods (papers, codes, and datasets).
LightlyTrain is the first PyTorch framework to pretrain computer vision models on unlabeled data for industrial applications
A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).
DIPY is the paragon 3D/4D+ medical imaging library in Python. Contains generic methods for spatial normalization, signal processing, machine learning, statistical analysis and visualization of medical images. Additionally, it contains specialized methods for computational anatomy including diffusion, perfusion and structural imaging.
[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis
[NeurIPS 2020] Semi-Supervision (Unlabeled Data) & Self-Supervision Improve Class-Imbalanced / Long-Tailed Learning
Unsupervised Feature Learning via Non-parametric Instance Discrimination
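The underlying idea is a non-parametric softmax: every image is its own class, with class weights replaced by L2-normalized features stored in a memory bank. A simplified sketch that uses a full softmax rather than the paper's NCE approximation:

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(features, indices, memory_bank, tau=0.07):
    """Non-parametric softmax over all instances.

    features:    (B, D) embeddings of the current batch.
    indices:     (B,)   dataset indices of those images (their "classes").
    memory_bank: (N, D) L2-normalized features of all N training images.
    """
    v = F.normalize(features, dim=1)
    logits = v @ memory_bank.t() / tau  # similarity to every stored instance
    return F.cross_entropy(logits, indices)

N, D, B = 1000, 128, 16
memory_bank = F.normalize(torch.randn(N, D), dim=1)
features = torch.randn(B, D)
indices = torch.randint(0, N, (B,))
print(instance_discrimination_loss(features, indices, memory_bank).item())
```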
Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations".
INTERSPEECH 2023-2024 Papers: A complete collection of influential and exciting research papers from the INTERSPEECH 2023 and 2024 conferences, covering the latest advances in speech and language processing. Code included.
[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking