chienhung1519's starred repositories
ExpertLLaMA
An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability.
LLaMA-LoRA-Tuner
UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. Plus a Gradio ChatGPT-like chat UI to demonstrate your language models.
LLM-As-Chatbot
LLM as a Chatbot Service
tree-of-thoughts
Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models, which elevates model reasoning by at least 70%
openai-cookbook
Examples and guides for using the OpenAI API
Vicuna-LoRA-RLHF-PyTorch
A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware. An implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture. Basically ChatGPT, but with Vicuna.
Chinese-Vicuna
Chinese-Vicuna: a Chinese instruction-following LLaMA-based model; a low-resource Chinese LLaMA + LoRA approach with a structure based on Alpaca
open_llama
OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset
MiniGPT-4-ZH
A Chinese translation of the MiniGPT-4 deployment guide, with improved deployment details
safeguards-shield
Build accurate and secure AI applications to unlock value faster
BLOOM-LORA
Due to the restrictions of the LLaMA license, we reimplement BLOOM-LoRA (under the much less restrictive BLOOM license: https://huggingface.co/spaces/bigscience/license) using Alpaca-LoRA and Alpaca_data_cleaned.json
traditional-chinese-alpaca
A Traditional Chinese instruction-following model with datasets based on Alpaca.
GPTQ-for-LLaMa
4-bit quantization of LLaMA using GPTQ
text-generation-webui
A Gradio web UI for Large Language Models.
alpaca-7b-chinese
Fine-tune LLaMA-7B with Chinese instruction datasets
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the training data.