Don Kang's repositories
flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
nebullvm
Plug-and-play modules to optimize the performance of your AI systems 🚀
alpaca-lora
Instruct-tune LLaMA on consumer hardware
AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
alpaca.cpp
Locally run an Instruction-Tuned Chat-Style LLM
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
dalai
The simplest way to run LLaMA on your local machine
evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
llama-int8
Quantized inference code for LLaMA models
Finetune_LLMs
Repo for fine-tuning GPT-J and other GPT models
unishop-monolith-to-microservices
Unishop MonoToMicro Workshop
progen
Official release of the ProGen models
ReLSO-Guided-Generative-Protein-Design-using-Regularized-Transformers
A Transformer-based neural network for generating highly optimized protein sequences, called Regularized Latent Space Optimization (ReLSO)
vecsim-demo2
Explore vector similarity in Redis
ctrl
Conditional Transformer Language Model for Controllable Generation