guoxiang's starred repositories
awesome-chatgpt-prompts-zh
A Chinese-language guide to prompting ChatGPT, with usage guides for various scenarios. Learn how to make it do what you want.
ChatGLM-6B
ChatGLM-6B: An open bilingual (Chinese–English) dialogue language model.
google-research
Google Research
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
LLaMA-Factory
Unify Efficient Fine-Tuning of 100+ LLMs
generative_agents
Generative Agents: Interactive Simulacra of Human Behavior
ChatGLM-Tuning
A finetuning approach based on ChatGLM-6B + LoRA.
Luotuo-Chinese-LLM
骆驼 (Luotuo): open-sourced Chinese language models. Developed by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子昂 @ 商汤科技.
pytorch-loss
PyTorch implementations of label smoothing, AM-Softmax, Partial-FC, focal loss, triplet loss, and Lovász-softmax. Possibly useful.
coral-pytorch
CORAL and CORN implementations for ordinal regression with deep neural networks.
Vicuna-LoRA-RLHF-PyTorch
A full pipeline to finetune the Vicuna LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture. Essentially ChatGPT, but built on Vicuna.
SCELoss-Reproduce
Reproduces results for the ICCV 2019 paper "Symmetric Cross Entropy for Robust Learning with Noisy Labels" (https://arxiv.org/abs/1908.06112).
CamelBell-Chinese-LoRA
CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese LLM project created by 冷子昂 @ 商汤科技 & 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技.
moco.tensorflow
A TensorFlow re-implementation of Momentum Contrast (MoCo): https://arxiv.org/abs/1911.05722
nlp-fluency
Evaluates the fluency of natural-language text.
contrastive-classification-keras
Implementation of self-supervised image-level contrastive pretraining methods using Keras.
llm-rankers
Zero-shot Document Ranking with Large Language Models.