kemolo's repositories
AetherConverTools
An AI redraw (video restyling) toolkit from the Aether school
axolotl
Go ahead and axolotl questions
Bert-VITS2
VITS2 backbone with BERT
bytepiece
A purer tokenizer with a higher compression ratio
CaMeLS
Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076.
control-lora-v2
ControlLoRA Version 2: a lightweight neural network for controlling spatial information in Stable Diffusion
Cutie
[arXiv 2023] Putting the Object Back Into Video Object Segmentation
Depth-Anything
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
detect-pretrain-code
This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.
DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
function_vectors
Function Vectors in Large Language Models
Genshin_Datasets
Genshin Datasets For SVC/SVS/TTS
grok-1
Grok open release
h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡
intercode
Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898
LLM-Agent-Paper-List
The paper list for "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
llm-viz
3D visualization of a GPT-style LLM
llm_multiagent_debate
Code for "Improving Factuality and Reasoning in Language Models through Multiagent Debate"
MathGLM
Official PyTorch implementation for MathGLM
Medusa
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
MetaGPT
🌟 The Multi-Agent Framework: Given one line Requirement, return PRD, Design, Tasks, Repo
Omost
Your image is almost there!
prose-benchmarks
PROSE Public Benchmark Suite
Skywork
Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.