chenxn2020's starred repositories
LLMsNineStoryDemonTower
【LLMs Nine-Story Demon Tower】Hands-on practice and experience with LLMs across natural language processing (ChatGLM, Chinese-LLaMA-Alpaca, Vicuna, LLaMA, GPT4ALL, etc.), information retrieval (langchain), speech synthesis, speech recognition, and multimodal domains (Stable Diffusion, MiniGPT-4, VisualGLM-6B, Ziya-Visual, etc.).
MultiInstruct
MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning
AlignLLMHumanSurvey
Aligning Large Language Models with Human: A Survey
LLMAgentPapers
Must-read Papers on LLM Agents.
codeinterpreter-api
👾 Open source implementation of the ChatGPT Code Interpreter
LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Labal-Anything-Pipeline
Baby-DALL3: annotate anything in visual tasks and generate anything, all in one pipeline with GPT-4 (a small baby of DALL·E 3).
mPLUG-DocOwl
mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding
Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
Awesome-LLM-KG
Awesome papers about unifying LLMs and KGs
awesome-instruction-dataset
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
xlang-paper-reading
Paper collection on building and evaluating language model agents via executable language grounding
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.