Zhihao Lin's repositories
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed precision
AlphaNet
AlphaNet: Improved Training of Supernets with Alpha-Divergence
AttentiveNAS
Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling"
CMUA-Watermark
The official code for "CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes" (AAAI 2022)
examples
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
FQ-ViT
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
FOX-NAS
FOX-NAS: Fast, On-device and Explainable Neural Architecture Search
InternLM
InternLM has open-sourced 7B and 20B parameter base and chat models tailored for practical scenarios, along with the training system.
lagent
A lightweight framework for building LLM-based agents
LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
LLaVA
[NeurIPS 2023 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards multimodal GPT-4 level capabilities.
lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
LMOps
General technology for enabling AI capabilities with LLMs and MLLMs
minisora
The Mini Sora project aims to explore the implementation path and future development direction of Sora.
mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
model-quantization
Collections of model quantization algorithms
opencompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (LLaMA, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
VLMEvalKit
An open-source evaluation toolkit for large vision-language models (LVLMs)
WeChatMsg
Extract WeChat chat logs, export them to HTML, Word, or CSV documents for permanent storage, and analyze them to generate an annual chat report
xtuner
XTuner is an efficient, flexible, and full-featured toolkit for fine-tuning large models
YOLOX
MegEngine implementation of YOLOX
ZeroQ
[CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework