Carro's repositories
nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
flash-attention
Fast and memory-efficient exact attention
DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
dust
Design and Deploy Large Language Model Apps
EnergonAI
Large-scale model inference.
cai-examples
Examples of training models with hybrid parallelism using ColossalAI
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
GODEL
Large-scale pretrained models for goal-directed dialog
LaMDA-pytorch
Open-source pre-training implementation of Google's LaMDA in PyTorch. The totally not sentient AI. BT-included
ccxt
A fork of CCXT with Bybit support enabled
Tensortrade-old
A version of TensorTrade compatible with Tensorforce 0.4.4
substrate-graph
A compact graph indexer stack for Parity Substrate, Polkadot, and Kusama
bt-imagen
Imagen for Bittensor
DALLE2-pytorch
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
bittensor
Internet-scale Neural Networks
mesh-transformer-jax
Model parallel transformers in JAX and Haiku
docker-searx
Alpine-based Docker image for the Searx metasearch engine
searx-docker
Create a searx instance using Docker
GUIMiner
GUIMiner for the Bittensor network! Denton Hackathon 2021
marge-pytorch
Implementation of MARGE, Pre-training via Paraphrasing, in PyTorch
jukebox
Code for the paper "Jukebox: A Generative Model for Music"
GPTNeo
An implementation of model-parallel GPT-2 and GPT-3-like models, with the ability to scale up to full GPT-3 sizes (and possibly more!), using the mesh-tensorflow library.
simpletransformers
Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.