Repositories under the pre-trained-model topic:
An Open-Source Framework for Prompt-Learning.
A Lite BERT for Self-Supervised Learning of Language Representations; large-scale pre-trained Chinese ALBERT models
Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus, and leaderboard
Pre-trained Chinese ELECTRA models (see the loading sketch after this list)
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
[CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator
Must-read Papers on Knowledge Editing for Large Language Models.
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
Official Repository for the Uni-Mol Series Methods
[MICCAI 2019 Young Scientist Award] [MEDIA 2020 Best Paper Award] Models Genesis
PyTorch code for "Prototypical Contrastive Learning of Unsupervised Representations"
Self-supervised contrastive learning for time series via time-frequency consistency
Eden AI: simplifies the use and deployment of AI technologies by providing a unique API that connects to the best possible AI engines
[ICLR 2024 Oral] Supervised Pre-Trained 3D Models for Medical Image Analysis (9,262 CT volumes + 25 annotated classes)
A work in progress to build out solutions in Rust for MLOps
Exploring Visual Prompts for Adapting Large-Scale Models
[TPAMI] Searching prompt modules for parameter-efficient transfer learning.
Powerful handwritten text recognition. A simple-to-use, unofficial implementation of the paper "TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models".
A curated list of papers on pre-training for graph neural networks (Pre-train4GNN).
A collection of Audio and Speech pre-trained models.
Code of the CVPR 2021 Oral paper: A Recurrent Vision-and-Language BERT for Navigation
Official repository of the AAAI'2022 paper "GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy Injection"
A paper list on data contamination in large language model evaluation.
[NeurIPS'2023] "GPT-ST: Generative Pre-Training of Spatio-Temporal Graph Neural Networks"
An official implementation of "Advancing Radiograph Representation Learning with Masked Record Modeling" (ICLR'23)
The implementation of our ICCV 2023 paper "Downstream-agnostic Adversarial Examples"
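Several of the entries above (for example the Chinese ALBERT and ELECTRA models) publish pre-trained checkpoints that are commonly loaded through the Hugging Face transformers library. The sketch below shows that generic loading pattern; the library choice and the checkpoint ID are assumptions for illustration, not taken from these repositories' own APIs.

    # Minimal sketch, assuming a checkpoint published on the Hugging Face Hub;
    # the model ID below is an example, not confirmed from the repositories above.
    from transformers import AutoModel, AutoTokenizer

    model_id = "hfl/chinese-electra-base-discriminator"  # example/assumed checkpoint ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)

    # Encode a short Chinese sentence and run a forward pass to get contextual embeddings.
    inputs = tokenizer("预训练模型", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)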