There are 67 repositories under the transformer topic.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
OpenMMLab Detection Toolbox and Benchmark
🧑‍🏫 59 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
Natural Language Processing Tutorial for Deep Learning Researchers
Port of OpenAI's Whisper model in C/C++
A powerful HTTP package for Dart/Flutter, which supports Global settings, Interceptors, FormData, Aborting and canceling a request, Files uploading and downloading, Requests timeout, Custom adapters, etc.
A collection of CVPR 2023 papers and open-source projects
Trax — Deep Learning with Clear Code and Speed
Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
Code for the paper "Jukebox: A Generative Model for Music"
Easy-to-use Speech Toolkit including Self-Supervised Learning models, SOTA/Streaming ASR with punctuation, Streaming TTS with a text frontend, Speaker Verification, End-to-End Speech Translation and Keyword Spotting. Won the NAACL 2022 Best Demo Award.
Chinese version of GPT2 training code, using BERT tokenizer.
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
PyTorch implementation of Google AI's 2018 BERT
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
A TensorFlow Implementation of the Transformer: Attention Is All You Need
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
tsai | State-of-the-art Deep Learning library for Time Series and Sequences in PyTorch / fastai
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
A ViewPager with parallax pages, vertical sliding (or click), and activity transitions
Production First and Production Ready End-to-End Speech Recognition Toolkit
PostHTML is a tool to transform HTML/XML with JS plugins
A collection of papers on transformers for computer vision. Awesome Transformer with Computer Vision (CV)
The OCR approach is rephrased as the Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation based on HRNet: https://arxiv.org/abs/1908.07919
An Open-Source Framework for Prompt-Learning.
SwinIR: Image Restoration Using Swin Transformer (official repository)
RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of RNNs and transformers: great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embeddings.
LightSeq: A High Performance Library for Sequence Processing and Generation
GPT2 for Chinese chitchat / a GPT2 model for Chinese casual conversation (implements DialoGPT's MMI)
Transformer-related optimization, including BERT and GPT
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
Introductory, advanced, and featured deep learning courses, academic and industry case studies, a deep learning knowledge encyclopedia, and an interview question bank — the courses, cases, and knowledge of Deep Learning and AI
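Several of the repositories above (e.g. the TensorFlow implementation of "Attention Is All You Need") implement the same core operation: scaled dot-product attention. A minimal NumPy sketch, with illustrative shapes and no masking or multi-head logic:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: Q (n, d_k), K (m, d_k), V (m, d_v) -> (n, d_v)."""
    d_k = Q.shape[-1]
    # Similarity logits, scaled by sqrt(d_k) as in the paper
    scores = Q @ K.T / np.sqrt(d_k)              # (n, m)
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a convex combination of the value rows
    return weights @ V
```

This is a sketch of the paper's formula softmax(QKᵀ/√d_k)V, not the API of any specific repository listed above; the full implementations add masking, multiple heads, and learned projections.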