There are 40 repositories under the transformers topic.
🧑🏫 59 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
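The repo's API centers on a single `ViT` module. A minimal usage sketch along the lines of its README, assuming the package is installed as `vit_pytorch` (the hyperparameters below are illustrative, not recommendations):

```python
import torch
from vit_pytorch import ViT

# Illustrative hyperparameters; consult the repo's README for tuned settings.
model = ViT(
    image_size = 256,   # input resolution
    patch_size = 32,    # image is cut into 32x32 patches fed as tokens
    num_classes = 1000,
    dim = 1024,         # token embedding dimension
    depth = 6,          # number of transformer encoder blocks
    heads = 16,
    mlp_dim = 2048
)

img = torch.randn(1, 3, 256, 256)
preds = model(img)  # shape: (1, 1000) class logits
```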
A collection of CVPR 2023 papers and open-source projects
👑 Easy-to-use and powerful NLP library with an 🤗 awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, and 🖼 Diffusion AIGC systems
🔍 Haystack is an open-source NLP framework for interacting with your data using Transformer models and LLMs (GPT-3 and the like). Haystack offers production-ready tools to quickly build ChatGPT-like question answering, semantic search, text generation, and more.
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
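A minimal sketch of training a BPE tokenizer from scratch with this library, in the style of its quickstart; the `corpus.txt` path and the special-token list are illustrative placeholders:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Build a byte-pair-encoding tokenizer and train it on raw text files.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # hypothetical corpus file

encoding = tokenizer.encode("Hello, y'all!")
print(encoding.tokens)  # learned subword pieces
print(encoding.ids)     # corresponding vocabulary indices
```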
Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM
A PyTorch-based Speech Toolkit
Implementation / replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch
A simple command-line tool for text-to-image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). The technique was originally created by https://twitter.com/advadnoun
An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
This repository contains demos I made with the Transformers library by Hugging Face.
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
State-of-the-Art Natural Language Processing
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
🔥🔥🔥🔥 (An earlier YOLOv7, not the official one) YOLO with transformers and instance segmentation, with TensorRT acceleration! 🔥🔥🔥
A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
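The key idea is that full attention is replaced by a recurrence with exponential decay, so each step updates a fixed-size state rather than attending over the whole history. A toy sketch of that idea, not the actual RWKV code; the decay `w` and bonus `u` stand in for its learned per-channel parameters:

```python
import torch

def wkv_recurrence(k, v, w, u):
    """Toy WKV-style recurrence over T steps with d channels.

    num/den carry exponentially decayed sums of exp(k)*v and exp(k),
    so inference costs O(1) per token regardless of context length.
    """
    T, d = k.shape
    num, den = torch.zeros(d), torch.zeros(d)
    out = []
    for t in range(T):
        bonus = torch.exp(u + k[t])                 # current token gets an extra bonus
        out.append((num + bonus * v[t]) / (den + bonus))
        num = torch.exp(-w) * num + torch.exp(k[t]) * v[t]  # decay past state, add current
        den = torch.exp(-w) * den + torch.exp(k[t])
    return torch.stack(out)

y = wkv_recurrence(torch.randn(16, 8), torch.randn(16, 8),
                   w=torch.ones(8), u=torch.zeros(8))  # (16, 8)
```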
A simple but complete full-attention transformer with a set of promising experimental features from various papers
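Usage follows the pattern in the x-transformers README: wrap an attention stack in a `TransformerWrapper` (the sizes below are illustrative):

```python
import torch
from x_transformers import TransformerWrapper, Decoder

# A small autoregressive decoder; dimensions are illustrative.
model = TransformerWrapper(
    num_tokens = 20000,
    max_seq_len = 1024,
    attn_layers = Decoder(
        dim = 512,
        depth = 6,
        heads = 8
    )
)

tokens = torch.randint(0, 20000, (1, 1024))
logits = model(tokens)  # shape: (1, 1024, 20000)
```

Experimental features from the papers it collects are enabled through keyword arguments on the attention layers.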
A playground to generate images from any text prompt using Stable Diffusion (previously DALL-E Mini)
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
TransmogrifAI (pronounced trăns-mŏgˈrə-fī) is an AutoML library for building modular, reusable, strongly typed machine learning workflows on Apache Spark with minimal hand-tuning
Context-aware, pluggable, and customizable data protection and de-identification SDK for text and images
Scenic: A JAX Library for Computer Vision Research and Beyond
Reformer, the efficient Transformer, in PyTorch
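A minimal sketch of its `ReformerLM` language model, assuming the package is installed as `reformer_pytorch` (hyperparameters are illustrative); LSH attention is what lets it handle sequence lengths like this:

```python
import torch
from reformer_pytorch import ReformerLM

# Long-context LM via LSH attention; sizes are illustrative.
model = ReformerLM(
    num_tokens = 20000,
    dim = 512,
    depth = 6,
    max_seq_len = 8192,
    heads = 8,
    causal = True
)

x = torch.randint(0, 20000, (1, 8192))
logits = model(x)  # shape: (1, 8192, 20000)
```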
Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in PyTorch
Open-source offline translation library written in Python
Russian GPT-3 models.
🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.