Yang's repositories

llm-chain

`llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarise text and complete complex tasks.

Language: Rust | License: MIT
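
The chaining idea, where each step's output feeds the next step's prompt, can be sketched generically. This is an illustration of the concept only, not the `llm-chain` crate's API; `call_llm` and `run_chain` are hypothetical names and the model call is a stub.

```python
# Generic sketch of prompt chaining (the concept behind llm-chain; this is
# NOT the llm-chain crate's API). Each step turns the previous step's output
# into a new prompt; call_llm is a stand-in stub for a real model call.

def call_llm(prompt):
    # Stub: a real implementation would query a language model here.
    return f"<response to: {prompt}>"

def run_chain(steps, initial_input):
    """Run prompt templates in sequence, feeding each output into the next."""
    text = initial_input
    for template in steps:
        text = call_llm(template.format(text=text))
    return text

summary_chain = [
    "Summarise the following text in one sentence:\n{text}",
    "Translate this summary into French:\n{text}",
]
print(run_chain(summary_chain, "A long article about Rust..."))
```

A real chain would replace `call_llm` with an actual model client and could branch or aggregate between steps rather than running strictly in sequence.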

russitant

An assistant powered by LLM

Language: TypeScript

awesome-llm

Curated lists for LLMs

awesome-o1

A bibliography and survey of the papers surrounding o1

axolotl

Go ahead and axolotl questions

Language: Python | License: Apache-2.0

byrne-euclid

MetaPost + ConTeXt rendition of Oliver Byrne's "The first six books of the Elements of Euclid"

Language: TeX | License: GPL-3.0

Chinese-Mixtral-8x7B

Chinese Mixtral-8x7B

Language: Python | License: Apache-2.0

dora

DORA (Dataflow-Oriented Robotic Architecture) is middleware designed to streamline and simplify the creation of AI-based robotic applications. It offers low latency, composable, and distributed dataflow capabilities. Applications are modeled as directed graphs, also referred to as pipelines.

License: Apache-2.0
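
The directed-graph model described above can be illustrated with a minimal scheduler: each node runs once all of its inputs are ready, and its output flows to its dependents. This is a generic sketch, not the dora API; `Node` and `run_pipeline` are hypothetical names used only to show the dataflow idea.

```python
# Minimal illustration of a dataflow pipeline modeled as a directed graph.
# Generic sketch, NOT the dora API: Node and run_pipeline are hypothetical.
from collections import deque

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, tuple(inputs)

def run_pipeline(nodes):
    """Execute nodes in topological order, feeding each its inputs' outputs."""
    by_name = {n.name: n for n in nodes}
    indegree = {n.name: len(n.inputs) for n in nodes}
    dependents = {n.name: [] for n in nodes}
    for n in nodes:
        for src in n.inputs:
            dependents[src].append(n.name)
    ready = deque(name for name, d in indegree.items() if d == 0)
    results = {}
    while ready:
        name = ready.popleft()
        node = by_name[name]
        results[name] = node.fn(*(results[src] for src in node.inputs))
        for dep in dependents[name]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    return results

# A toy camera -> detector -> planner robotics pipeline.
pipeline = [
    Node("camera", lambda: [0.1, 0.9, 0.3]),
    Node("detector", lambda frame: max(frame), inputs=["camera"]),
    Node("planner", lambda score: "stop" if score > 0.5 else "go",
         inputs=["detector"]),
]
print(run_pipeline(pipeline)["planner"])  # -> stop
```

In a real dataflow runtime the nodes would run as distributed processes streaming messages continuously, rather than executing once in topological order.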

evol-teacher

Open Source WizardCoder Dataset

Language: Python | License: Apache-2.0

LLaMA-Efficient-Tuning

Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA)

Language: Python | License: Apache-2.0
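
The PEFT methods named above (LoRA and its quantized variant QLoRA) share one core trick: freeze the pretrained weight matrix W and train only a low-rank update, so the effective weight is W + (alpha/r)·BA. A pure-Python sketch of that arithmetic with tiny hand-picked matrices (not real model weights, and not the PEFT library's API):

```python
# Pure-Python sketch of the LoRA idea behind PEFT fine-tuning: the frozen
# weight W is augmented with a trainable low-rank update scaled by alpha/r,
# giving an effective weight of W + (alpha/r) * B @ A.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A without modifying the frozen W."""
    scale = alpha / r
    delta = matmul(B, A)  # (d x r) @ (r x k) -> (d x k), rank <= r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# d=2, k=2, rank r=1: only 2*1 + 1*2 = 4 numbers are trained instead of
# retraining W itself (the savings grow with matrix size).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight
B = [[0.5], [1.0]]             # trainable, d x r (usually initialized to 0)
A = [[2.0, 0.0]]               # trainable, r x k
print(lora_effective_weight(W, A, B, alpha=1, r=1))
# -> [[2.0, 0.0], [2.0, 1.0]]
```

QLoRA adds one more step on top of this: the frozen W is stored in 4-bit quantized form while B and A stay in higher precision.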

llm

An ecosystem of Rust libraries for working with large language models

Language: Rust | License: Apache-2.0

llm-index

A Rust implementation of llama-index

License: Apache-2.0

llm.c

LLM training in simple, raw C/CUDA

Language: Cuda | License: MIT

Magic_Words

Code for the paper "What's the Magic Word? A Control Theory of LLM Prompting"

Language: Jupyter Notebook | License: MIT

MoE-LLaVA

Mixture-of-Experts for Large Vision-Language Models

Language: Python | License: Apache-2.0

native-sparse-attention

🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"

License: MIT

open-interpreter

OpenAI's Code Interpreter in your terminal, running locally

Language: Python | License: MIT

paper-qa

High accuracy RAG for answering questions from scientific documents with citations

Language: Python | License: Apache-2.0
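
The retrieval-augmented generation (RAG) pattern behind this kind of tool can be sketched in a few lines: score documents against the question, then answer from the best-matching passage while keeping its citation. This is a toy illustration of the pattern, not paper-qa's API; the function names are hypothetical and real systems use embeddings rather than word overlap.

```python
# Toy sketch of the retrieval-then-cite step in a RAG pipeline
# (illustration only, NOT paper-qa's API).

def score(question, passage):
    """Crude relevance score: word overlap (real systems use embeddings)."""
    q = set(question.lower().split())
    return len(q & set(passage.lower().split()))

def retrieve_and_cite(question, corpus):
    """Pick the best-matching document and keep its source for citation."""
    best = max(corpus, key=lambda doc: score(question, doc["text"]))
    return {"context": best["text"], "citation": best["source"]}

corpus = [
    {"source": "Smith 2021", "text": "Transformers use attention mechanisms."},
    {"source": "Lee 2022", "text": "Axolotls can regenerate lost limbs."},
]
result = retrieve_and_cite("how do transformers use attention?", corpus)
print(result["citation"])  # -> Smith 2021
```

A full pipeline would then pass `result["context"]` to a language model with instructions to answer only from the retrieved text and to cite `result["citation"]`.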

qlora

QLoRA: Efficient Finetuning of Quantized LLMs

Language: Jupyter Notebook | License: MIT
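
QLoRA's memory savings come from storing the frozen base weights in 4-bit blocks with a per-block scale. The real method uses the NF4 data type plus double quantization; the sketch below shows only the simpler absmax idea with plain signed 4-bit integers, and the function names are hypothetical.

```python
# Simplified absmax block quantization, illustrating the 4-bit storage idea
# behind QLoRA (QLoRA itself uses NF4 and double quantization; this is a
# deliberately reduced sketch, not the bitsandbytes implementation).

def quantize_4bit(values):
    """Map floats to integers in [-7, 7] plus one per-block float scale."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate floats; error is at most scale / 2 per entry."""
    return [v * scale for v in q]

weights = [1.4, -0.6, 0.2, 0.0]   # one tiny "block" of frozen weights
q, scale = quantize_4bit(weights)
print(q)  # -> [7, -3, 1, 0]
print(dequantize_4bit(q, scale))  # close to the original weights
```

Storing `q` (4 bits each) plus one scale per block is what shrinks the frozen model; the trainable LoRA adapters stay in higher precision on top.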

reachy_2023

Reachy 2023 workspace

License: Apache-2.0

text-generation-webui

A Gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.

Language: Python | License: AGPL-3.0

transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

License: Apache-2.0

unet.cu

UNet diffusion model in pure CUDA

Language: Cuda

unsloth

5X faster QLoRA finetuning with 60% less memory

Language: Python | License: Apache-2.0

WizardLM

LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath

Language: Python