Yingfei (Jeremy) Xiang's repositories
Awesome-LLM
Awesome-LLM: a curated list of Large Language Models
deep-language-networks
We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts at each layer. We stack two such layers, feeding the output of one layer to the next. We call the stacked architecture a Deep Language Network (DLN).
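The stacking idea described above can be sketched in a few lines. This is a minimal illustration only, not the repository's actual code; `call_llm` is a hypothetical stand-in for any text-completion backend, and the layer's "parameter" is its natural-language prompt.

```python
def call_llm(prompt: str, text: str) -> str:
    """Hypothetical LLM call; a placeholder so the sketch runs offline."""
    return f"[{prompt}] {text}"

class LanguageLayer:
    """One stochastic language layer: its learnable parameter is a prompt."""
    def __init__(self, prompt: str):
        self.prompt = prompt

    def forward(self, text: str) -> str:
        return call_llm(self.prompt, text)

class DLN:
    """Two stacked language layers: layer 2 consumes layer 1's output."""
    def __init__(self, prompt1: str, prompt2: str):
        self.layer1 = LanguageLayer(prompt1)
        self.layer2 = LanguageLayer(prompt2)

    def forward(self, text: str) -> str:
        hidden = self.layer1.forward(text)  # intermediate natural-language "activation"
        return self.layer2.forward(hidden)

dln = DLN("Summarize the question", "Answer the summary")
print(dln.forward("What is 2 + 2?"))
```

In the actual project, "learning" means optimizing the two prompts rather than numeric weights; the sketch only shows the forward pass through the stacked layers.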
faiss_tips
Some useful tips for faiss
FlagEmbedding
Open-source Embeddings
flash-attention
Fast and memory-efficient exact attention
human-eval
Code for the paper "Evaluating Large Language Models Trained on Code"
IncognitoPilot
An AI code interpreter for sensitive data, powered by GPT-4 or Llama 2.
LLaMA-Efficient-Tuning
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
LLM-Shearing
Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline. Trains a medical LLM, covering continued pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
sentence-transformers
Multilingual Sentence & Image Embeddings with BERT
SupContrast
PyTorch implementation of "Supervised Contrastive Learning" (and SimCLR incidentally)
SuperAdapters
Finetune ALL LLMs with ALL Adapters on ALL Platforms!
LLMs-cookbook
Examples and guides for using LLMs
LongLoRA
Code and documents of LongLoRA and LongAlpaca