DawnShine (sunshineflg)

Company: Yantai University

Location: Yantai

DawnShine's starred repositories

FlagEmbedding

Retrieval and Retrieval-augmented LLMs

Language: Python | License: MIT | Stargazers: 6228 | Issues: 0
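
A minimal sketch of how FlagEmbedding's BGE embedders are commonly used for retrieval; the model name and texts below are illustrative assumptions, not from this page:

```python
from FlagEmbedding import FlagModel

# Illustrative checkpoint; other BGE models work the same way.
model = FlagModel("BAAI/bge-large-en-v1.5", use_fp16=True)

queries = ["What is retrieval-augmented generation?"]
passages = [
    "Retrieval-augmented generation grounds LLM outputs in retrieved documents.",
    "Convolutional networks are widely used for image classification.",
]

# encode_queries applies the model's query-side instruction; encode handles passages.
q_emb = model.encode_queries(queries)
p_emb = model.encode(passages)

# Embeddings are normalized, so the inner product acts as cosine similarity.
print(q_emb @ p_emb.T)
```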

promptbench

A unified evaluation framework for large language models

Language: Python | License: MIT | Stargazers: 2290 | Issues: 0

WikiChat

WikiChat reduces hallucination in large language models by grounding responses in data retrieved from Wikipedia.

Language: Python | License: Apache-2.0 | Stargazers: 938 | Issues: 0

MedicalGPT

MedicalGPT: Training Your Own Medical GPT Model with the ChatGPT Training Pipeline. Trains medical large language models, implementing incremental pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.

Language: Python | License: Apache-2.0 | Stargazers: 3031 | Issues: 0

VisualGLM-6B

A Chinese-English bilingual multimodal conversational language model.

Language: Python | License: Apache-2.0 | Stargazers: 4047 | Issues: 0

OmniTab

Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering

Language: Python | Stargazers: 28 | Issues: 0

BELLE

BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational large language model)

Language: HTML | License: Apache-2.0 | Stargazers: 7756 | Issues: 0

promptflow

Build high-quality LLM apps, from prototyping and testing to production deployment and monitoring.

Language: Python | License: MIT | Stargazers: 8921 | Issues: 0

self-rag

This repo includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through Self-Reflection, by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.

Language: Python | License: MIT | Stargazers: 1654 | Issues: 0

airllm

AirLLM runs 70B-parameter model inference on a single 4GB GPU.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 3511 | Issues: 0

LLaMA-Factory

A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)

Language: Python | License: Apache-2.0 | Stargazers: 27978 | Issues: 0

LongLoRA

Code and documentation for LongLoRA and LongAlpaca (ICLR 2024 Oral)

Language: Python | License: Apache-2.0 | Stargazers: 2563 | Issues: 0

ColossalAI

Making large AI models cheaper, faster and more accessible

Language: Python | License: Apache-2.0 | Stargazers: 38410 | Issues: 0

trlx

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)

Language: Python | License: MIT | Stargazers: 4406 | Issues: 0
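
A hedged sketch of trlx's high-level entry point (PPO by default); the toy reward function and prompts are illustrative, and a CUDA GPU is assumed:

```python
import trlx

def reward_fn(samples, **kwargs):
    # Toy reward for illustration only: prefer longer completions.
    return [float(len(s)) for s in samples]

# Fine-tunes gpt2 against the reward function over the given prompts.
trainer = trlx.train(
    "gpt2",
    reward_fn=reward_fn,
    prompts=["The movie was", "I think the food"],
)
trainer.save_pretrained("./rlhf-gpt2")
```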

fastllm

A pure C++ LLM acceleration library for all platforms, callable from Python; chatglm-6B-class models can reach 10000+ tokens/s on a single GPU; supports glm, llama, and moss base models and runs smoothly on mobile devices.

Language: C++ | License: Apache-2.0 | Stargazers: 3230 | Issues: 0

eat_pytorch_in_20_days

PyTorch🍊🍉 is delicious, just eat it! 😋😋

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 4942 | Issues: 0

Jpom

🚀 A simple and lightweight, low-intrusion tool for online builds, automated deployment, day-to-day operations and maintenance, and project monitoring.

Language: Java | License: NOASSERTION | Stargazers: 1354 | Issues: 0

Megatron-LM

Ongoing research training transformer models at scale

Language: Python | License: NOASSERTION | Stargazers: 9525 | Issues: 0

BMTools

Tool learning for big models; open-source solutions for ChatGPT plugins.

Language: Python | License: Apache-2.0 | Stargazers: 2860 | Issues: 0

flash-attention

Fast and memory-efficient exact attention

Language: Python | License: BSD-3-Clause | Stargazers: 12650 | Issues: 0
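
A minimal sketch of calling the fused kernel directly; it assumes a CUDA GPU and half-precision tensors of shape (batch, seqlen, nheads, headdim):

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# causal=True applies the lower-triangular mask used by decoder-only LMs.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```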

SpanBERT

Code for using and evaluating SpanBERT.

Language: Python | License: NOASSERTION | Stargazers: 883 | Issues: 0
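
A hedged sketch of loading the released checkpoint through HuggingFace Transformers; the hub ID is an assumption, and SpanBERT reuses BERT's cased tokenizer:

```python
from transformers import AutoTokenizer, AutoModel

# SpanBERT ships no tokenizer of its own; it uses BERT's cased vocabulary.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("SpanBERT/spanbert-base-cased")

inputs = tokenizer("Span-level pretraining helps extractive QA.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```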

Firefly

Firefly: a training toolkit for large models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models.

Language: Python | Stargazers: 5364 | Issues: 0

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Language: Python | License: MIT | Stargazers: 29908 | Issues: 0
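
A hedged sketch using fairseq's torch.hub integration; it downloads a large pretrained WMT'19 English-German model, with the checkpoint name taken from the fairseq examples:

```python
import torch

# Pretrained transformer with Moses tokenization and fastBPE subwords.
en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de.single_model",
    tokenizer="moses",
    bpe="fastbpe",
)
en2de.eval()
print(en2de.translate("Machine translation is fun!"))
```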

PaLM-rlhf-pytorch

Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture; essentially ChatGPT, but built on PaLM.

Language: Python | License: MIT | Stargazers: 7655 | Issues: 0
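
A hedged sketch of the first stage in this repo's style, pretraining the PaLM backbone; the hyperparameters and random token sequence are illustrative:

```python
import torch
from palm_rlhf_pytorch import PaLM

# Illustrative (tiny) hyperparameters.
palm = PaLM(num_tokens=20000, dim=512, depth=4)

seq = torch.randint(0, 20000, (1, 1024))
loss = palm(seq, return_loss=True)  # next-token cross-entropy
loss.backward()
```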

ceval

Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models (NeurIPS 2023)

Language: Python | License: MIT | Stargazers: 1562 | Issues: 0
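
A hedged sketch of pulling C-Eval from the HuggingFace Hub with the datasets library; the subject config is an illustrative choice:

```python
from datasets import load_dataset

# Each subject is a separate config; the "val" split carries labeled answers.
dataset = load_dataset("ceval/ceval-exam", name="computer_network")
print(dataset["val"][0])  # question, choices A-D, and the answer key
```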

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python | License: Apache-2.0 | Stargazers: 15260 | Issues: 0
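
A minimal LoRA sketch with PEFT; the base model and target module names are illustrative (GPT-2's fused attention projection is "c_attn"):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```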

qlora

QLoRA: Efficient Finetuning of Quantized LLMs

Language: Jupyter Notebook | License: MIT | Stargazers: 9756 | Issues: 0
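
A hedged sketch of QLoRA-style 4-bit (NF4) loading via transformers and bitsandbytes; the base model name is an illustrative assumption and a CUDA GPU is required:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",        # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,   # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",            # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)
# LoRA adapters (see the peft entry above) are then trained on the frozen 4-bit base.
```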

alpaca-lora

Instruct-tune LLaMA on consumer hardware

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 18445 | Issues: 0

Alpaca-CoT

We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use, building a fine-tuning platform that researchers can quickly pick up. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible!

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 2535 | Issues: 0