StyxXuan's starred repositories


lagent

A lightweight framework for building LLM-based agents

Language: Python · License: Apache-2.0 · Stargazers: 1038 · Issues: 0

fusion_bench

FusionBench: A Comprehensive Benchmark of Deep Model Fusion

Language: Python · License: MIT · Stargazers: 24 · Issues: 0

lm-evaluation-harness

A framework for few-shot evaluation of language models.

Language: Python · License: MIT · Stargazers: 5801 · Issues: 0

evalplus

Rigorous evaluation of LLM-synthesized code - NeurIPS 2023

Language: Python · License: Apache-2.0 · Stargazers: 1046 · Issues: 0

LoRAMoE

LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment

Language: Python · Stargazers: 149 · Issues: 0

Multi-LoRA-Composition

Repository for the Paper "Multi-LoRA Composition for Image Generation"

Language: Python · Stargazers: 411 · Issues: 0

diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

Language: Python · License: Apache-2.0 · Stargazers: 23969 · Issues: 0

magicoder

Magicoder: Source Code Is All You Need

Language: Python · License: MIT · Stargazers: 1930 · Issues: 0

Qwen

The official repo of Qwen (通义千问), the chat & pretrained large language model proposed by Alibaba Cloud.

Language: Python · License: Apache-2.0 · Stargazers: 12501 · Issues: 0

manim

A community-maintained Python framework for creating mathematical animations.

Language: Python · License: MIT · Stargazers: 19957 · Issues: 0

phatgoose

Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization"

Language: Python · License: MIT · Stargazers: 67 · Issues: 0

BLoRA

Batched LoRAs

Language: Python · Stargazers: 323 · Issues: 0

MOELoRA-peft

[SIGIR'24] The official implementation code of MOELoRA.

Language: Python · License: MIT · Stargazers: 105 · Issues: 0

open-interpreter

A natural language interface for computers

Language: Python · License: AGPL-3.0 · Stargazers: 50736 · Issues: 0

mixture-of-experts

PyTorch re-implementation of "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538

Language: Python · License: GPL-3.0 · Stargazers: 897 · Issues: 0

OpenMoE

A family of open-source Mixture-of-Experts (MoE) large language models

Language: Python · Stargazers: 1297 · Issues: 0

xlora

X-LoRA: Mixture of LoRA Experts

Language: Python · License: Apache-2.0 · Stargazers: 128 · Issues: 0

FederatedGPT-Shepherd

Shepherd: A foundational framework enabling federated instruction tuning for large language models

Language: Python · License: Apache-2.0 · Stargazers: 190 · Issues: 0

plot_demo

Examples of experiment plots that can be used in papers

Language: Python · Stargazers: 176 · Issues: 0

TuPaTE

Code for EMNLP 2022 paper "Efficiently Tuned Parameters are Task Embeddings"

Language: Python · License: Apache-2.0 · Stargazers: 8 · Issues: 0

mLoRA

An Efficient "Factory" to Build Multiple LoRA Adapters

Language: Python · License: Apache-2.0 · Stargazers: 215 · Issues: 0

OpenNE

An Open-Source Package for Network Embedding (NE)

Language: Python · License: MIT · Stargazers: 1682 · Issues: 0

awesome-mixture-of-experts

A collection of AWESOME things about mixture-of-experts

Stargazers: 834 · Issues: 0

lorax

A multi-LoRA inference server that scales to thousands of fine-tuned LLMs

Language: Python · License: Apache-2.0 · Stargazers: 1893 · Issues: 0