zjulgc's starred repositories

WritingAIPaper

Writing AI Conference Papers: A Handbook for Beginners

Stargazers: 968

MoE-PEFT

An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT

Language: Python · License: Apache-2.0 · Stargazers: 23

humaneval-xl

[LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization

Language: Python · License: MIT · Stargazers: 26

mergoo

A library for easily merging multiple LLM experts and efficiently training the merged LLM.

Language: Python · License: LGPL-3.0 · Stargazers: 393

llm-action

This project shares the technical principles behind large language models along with hands-on, practical experience.

Language: HTML · License: Apache-2.0 · Stargazers: 9370

OLMoE

OLMoE: Open Mixture-of-Experts Language Models

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 393

trl

Train transformer language models with reinforcement learning.

Language: Python · License: Apache-2.0 · Stargazers: 9543
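As a rough illustration of what trl is used for, here is a minimal supervised fine-tuning sketch. The checkpoint, dataset, and output directory are placeholder assumptions, and the API follows trl's SFTTrainer around v0.9; details may differ between releases.

```python
# Minimal SFT sketch with trl's SFTTrainer (model id and dataset are illustrative assumptions).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # example chat dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",  # any causal-LM checkpoint id can be passed as a string
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="sft-output", max_seq_length=512),
)
trainer.train()  # runs standard supervised fine-tuning on the formatted dataset
```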

awesome-LLM-resourses

🧑‍🚀 A summary of the world's best LLM resources.

License: Apache-2.0 · Stargazers: 1471

Agentless

Agentless 🐱: an agentless approach to automatically solving software development problems

Language: Python · License: MIT · Stargazers: 674

OpenMoE

A family of open-sourced Mixture-of-Experts (MoE) Large Language Models

Language: Python · Stargazers: 1365

RouteLLM

A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!

Language: Python · License: Apache-2.0 · Stargazers: 2951

MOELoRA-peft

[SIGIR'24] The official implementation code of MOELoRA.

Language: Python · License: MIT · Stargazers: 114

mLoRA

An Efficient "Factory" to Build Multiple LoRA Adapters

Language: Python · License: Apache-2.0 · Stargazers: 256

mLoRA

This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT

Language: Python · License: Apache-2.0 · Stargazers: 24

MixLoRA

State-of-the-art Parameter-Efficient MoE Fine-tuning Method

Language: Python · License: Apache-2.0 · Stargazers: 63

DeepSeek-MoE

DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

Language: Python · License: MIT · Stargazers: 976

Machine-Learning-Book

《机器学习宝典》 (the "Machine Learning Treasure Book") collects: Google's Machine Learning Crash Course (the moves), the Machine Learning Glossary (the mnemonics), the Rules of Machine Learning (the insights), and common-sense machine learning questions (the fundamentals). A reference for machine learning and deep learning researchers and enthusiasts!

Language: Jupyter Notebook · Stargazers: 1051

SWE-bench

[ICLR 2024] SWE-Bench: Can Language Models Resolve Real-world GitHub Issues?

Language: Python · License: MIT · Stargazers: 1795
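The benchmark is also distributed as a Hugging Face dataset; a minimal sketch of inspecting one task instance is below. The dataset id and column names are assumptions based on the public release and may differ across versions.

```python
# Sketch: load SWE-bench from the Hugging Face hub and look at one task instance
# (dataset id and field names assumed from the public release).
from datasets import load_dataset

swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")
example = swe_bench[0]
print(example["instance_id"])        # identifies the repository and issue
print(example["problem_statement"])  # the GitHub issue text the model must resolve
print(example["patch"])              # the reference (gold) patch for evaluation
```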

Awesome-Code-LLM

👨‍💻 An awesome, curated list of the best code LLMs for research.

License: MIT · Stargazers: 881

unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory

Language: Python · License: Apache-2.0 · Stargazers: 16176
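A minimal sketch of the typical unsloth workflow, loosely following the project README: load a quantized checkpoint, then attach LoRA adapters. The checkpoint name and LoRA hyperparameters are illustrative assumptions, and the API may change between releases.

```python
# Sketch: 4-bit loading plus LoRA attachment with unsloth (checkpoint and hyperparameters are illustrative).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# The returned model can then be passed to a standard trainer (e.g. trl's SFTTrainer).
```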

GrowingBugRepository

A bug repository that keeps growing

Language: Perl · License: MIT · Stargazers: 267

bigcodebench

BigCodeBench: Benchmarking Code Generation Towards AGI

Language: Python · License: Apache-2.0 · Stargazers: 188

geektime-books

:books: Geek Time (极客时间) e-books

Stargazers: 10607

model-explorer

A modern model graph visualizer and debugger

Language: JavaScript · License: Apache-2.0 · Stargazers: 997

AwesomeLLM4APR

A Systematic Literature Review on Large Language Models for Automated Program Repair

Stargazers: 77

LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)

Language: Python · License: Apache-2.0 · Stargazers: 31616

llm-transparency-tool

LLM Transparency Tool (LLM-TT): an open-source interactive toolkit for analyzing the internal workings of Transformer-based language models. Check out the demo at https://huggingface.co/spaces/facebook/llm-transparency-tool-demo

Language: Python · License: NOASSERTION · Stargazers: 738