codingchild's repositories
Deep_knowledge_tracing_baseline
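The name points at the standard Deep Knowledge Tracing setup. Since the repo carries no description, here is a minimal generic sketch of that architecture — an RNN over one-hot (skill, correctness) interactions that predicts per-skill correctness probabilities — with all sizes and names illustrative rather than taken from this repo:

```python
# Conceptual Deep Knowledge Tracing (DKT) model; a generic sketch, not this repo's code.
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, num_skills: int, hidden: int = 128):
        super().__init__()
        # input: one-hot over 2 * num_skills (skill id x correct/incorrect)
        self.rnn = nn.LSTM(2 * num_skills, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_skills)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(interactions)
        return torch.sigmoid(self.out(h))  # P(correct) per skill at each step

model = DKT(num_skills=100)
batch = torch.zeros(4, 20, 200)  # (batch, time, 2 * num_skills) one-hot history
probs = model(batch)             # (4, 20, 100)
```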
cl_bert_kt
lm-trainer-v3
gpt-4-vision-for-eval
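The name suggests evaluation or scoring with GPT-4's vision endpoint. A hedged sketch of an image-grounded call via the OpenAI Python SDK (v1+); the model name, prompt, and image URL below are placeholders, not values from this repo:

```python
# Sketch of sending an image to GPT-4 Vision for a scoring task (placeholders throughout).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Score this student's worksheet from 1 to 5."},
            {"type": "image_url", "image_url": {"url": "https://example.com/page.png"}},
        ],
    }],
    max_tokens=100,
)
print(resp.choices[0].message.content)
```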
trl
Train transformer language models with reinforcement learning.
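For orientation, a minimal PPO step with trl's documented API (circa v0.x); the base model ("gpt2") and the constant reward are stand-ins, and a real pipeline would score responses with a reward model:

```python
# Minimal sketch of RLHF-style PPO fine-tuning with trl; names are placeholders.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

query = tokenizer("The movie was", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate([query], return_prompt=False, max_new_tokens=16)[0]

# A constant reward stands in for a reward-model score here.
stats = ppo_trainer.step([query], [response], [torch.tensor(1.0)])
```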
lm-trainer-v2
debate_bot
mlm-trainer
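Assuming this is a masked-language-model training harness, a small sketch of the usual transformers pieces; the checkpoint and masking ratio are illustrative:

```python
# Sketch of MLM training ingredients; DataCollatorForLanguageModeling masks tokens at random.
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("knowledge tracing with BERT")])  # random tokens become [MASK]
loss = model(**batch).loss  # cross-entropy on the masked positions only
```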
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
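A one-liner showing the library's pipeline API, generic to transformers rather than specific to this fork:

```python
# Minimal text-generation pipeline; "gpt2" is just a small illustrative checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Knowledge tracing models", max_new_tokens=20)[0]["generated_text"])
```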
LOMO
LOMO: LOw-Memory Optimization
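LOMO's central trick is fusing the gradient computation with the parameter update, so full-model gradients never coexist in memory. A conceptual PyTorch re-creation of that idea — not the repo's own API — assuming PyTorch >= 2.1 for register_post_accumulate_grad_hook:

```python
# Conceptual sketch of a fused gradient step: update each parameter as soon as
# its gradient is accumulated, then free the gradient immediately.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
lr = 1e-3

def fused_sgd_hook(param: torch.Tensor) -> None:
    with torch.no_grad():
        param.add_(param.grad, alpha=-lr)  # apply plain SGD in place
    param.grad = None                      # release the gradient right away

for p in model.parameters():
    p.register_post_accumulate_grad_hook(fused_sgd_hook)

x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # parameters are updated during this call; no optimizer.step()
```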
math_scoring_with_gpt
phoenix
ML Observability in a Notebook - Uncover Insights, Surface Problems, Monitor, and Fine Tune your Generative LLM, CV and Tabular Models
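A minimal sketch of spinning up the Phoenix app, assuming the arize-phoenix package; dataset and trace wiring are omitted:

```python
# Launch the local Phoenix observability UI (a bare-bones sketch).
import phoenix as px

session = px.launch_app()  # starts the local app
print(session.url)         # open this URL in a browser to explore the UI
```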
MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
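A small LoRA sketch with peft's documented API; the base checkpoint and hyperparameters are illustrative choices:

```python
# Wrap a base model with LoRA adapters so only low-rank matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```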
oslo-1
OSLO: Open Source for Large-scale Optimization
bitsandbytes
8-bit CUDA functions for PyTorch
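Dropping in the library's 8-bit Adam is essentially a one-line change (CUDA required; the model here is a placeholder):

```python
# Swap torch.optim.Adam for bitsandbytes' 8-bit variant to shrink optimizer state.
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```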
Open-Llama
The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
auto_gpt_stable
whisper-diarization
Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
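A sketch of the ASR half only, via openai-whisper; the repo layers speaker diarization on top, which is omitted here, and the audio path is a placeholder:

```python
# Transcribe audio with Whisper and print timestamped segments (no diarization shown).
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.wav")  # placeholder path
for segment in result["segments"]:
    print(f'[{segment["start"]:.1f}s - {segment["end"]:.1f}s] {segment["text"]}')
```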
lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
ddpm_practice
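The name refers to denoising diffusion probabilistic models; as a reference point, the closed-form forward-noising step q(x_t | x_0) in a generic sketch, not code drawn from the repo:

```python
# DDPM forward diffusion: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I)."""
    noise = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * noise
```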
self-instruct
Aligning pretrained language models with instruction data generated by themselves.
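A conceptual rendering of that bootstrap loop, not the repo's code: generate stands in for any LM completion call, and the novelty filter is a crude placeholder for the paper's ROUGE-L overlap check:

```python
# Self-instruct in miniature: sample seed tasks, prompt the model for new ones,
# filter near-duplicates, and grow the instruction pool.
import random

def self_instruct(seed_tasks, generate, rounds=3, per_round=4):
    pool = list(seed_tasks)
    for _ in range(rounds):
        examples = random.sample(pool, k=min(3, len(pool)))
        prompt = "Write a new task like these:\n" + "\n".join(examples)
        candidates = [generate(prompt) for _ in range(per_round)]
        pool += [c for c in candidates if c not in pool]  # placeholder novelty filter
    return pool
```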
vision
Clean, reproducible, boilerplate-free deep learning project template.
KoAlpaca
KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
alpaca-lora
Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware
nebullvm
Plug and play modules to optimize the performances of your AI systems 🚀
Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
pretraining-with-human-feedback
Code accompanying the paper Pretraining Language Models with Human Preferences