KYLN24's starred repositories
ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
datasets-viewer
A VS Code extension for quickly previewing Hugging Face datasets.
agentscope
An easier way to start building LLM-empowered multi-agent applications.
GPT-SoVITS
Just 1 minute of voice data is enough to train a good TTS model! (few-shot voice cloning)
uptime-kuma
A fancy self-hosted monitoring tool
neural-speed
An innovative library for efficient LLM inference via low-bit quantization
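The core idea behind low-bit inference libraries like this one can be sketched in a few lines: store weights at reduced precision and rescale them at compute time. The snippet below is a generic illustration of symmetric int8 quantization, not neural-speed's actual API.

```python
def quantize_int8(weights):
    # Symmetric int8 quantization: map floats into [-127, 127]
    # with a single per-tensor scale. Smaller weights mean less
    # memory traffic, the dominant cost of LLM inference.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for computation.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Real libraries go further (per-channel scales, 4-bit packing, fused dequantize-matmul kernels), but the scale-and-round step above is the common foundation.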
HuixiangDou
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
min-sized-rust
🦀 How to minimize Rust binary size 📦
instant-ngp
Instant neural graphics primitives: lightning fast NeRF and more
MediaCrawler
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, and Weibo posts and comments.
ParallelTokenizer
Run the tokenizer in parallel for significant speedups.
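The idea of parallel tokenization can be sketched with the standard library alone. This is an illustration of the technique, not ParallelTokenizer's actual API; `tokenize` here is a hypothetical stand-in for a real (much more expensive) tokenizer.

```python
from concurrent.futures import ThreadPoolExecutor

def tokenize(text):
    # Stand-in tokenizer: whitespace split. Rust-backed "fast"
    # tokenizers release the GIL, so threads give real parallelism;
    # for a pure-Python tokenizer you would use processes instead.
    return text.split()

def tokenize_parallel(texts, workers=4):
    # Shard the corpus across workers; each worker tokenizes its
    # documents independently, and results return in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(tokenize, texts))

corpus = ["hello world", "parallel tokenization demo"]
token_lists = tokenize_parallel(corpus)
```

Since tokenization is embarrassingly parallel across documents, throughput scales roughly with the number of workers until memory bandwidth becomes the bottleneck.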