Jhih-Jie Chen's starred repositories
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
llama-recipes
Scripts for fine-tuning Llama2 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization & question answering, and a number of inference solutions (HF TGI, vLLM) for local or cloud deployment. Includes demo apps showcasing Llama2 for WhatsApp & Messenger.
PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
FasterTransformer
Transformer-related optimization, including BERT and GPT
pycorrector
pycorrector is a toolkit for text error correction. It applies models such as Kenlm, T5, MacBERT, ChatGLM3, and LLaMA to error-correction scenarios and works out of the box.
safetensors
Simple, safe way to store and distribute tensors
P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image models in Holistic Evaluation of Text-to-Image Models (HEIM) (https://arxiv.org/abs/2311.04287).
Taiwan-LLM
Traditional Mandarin LLMs for Taiwan
autocorrect
Spelling corrector in Python
flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
tappay-web-example
TapPay SDK example code for Web
NLPLabs-2022
Lab sessions for an NLP course
mysql2sqlite
Converts a MySQL dump to a SQLite3-compatible dump