Yizhou Lu's starred repositories
audiocraft
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
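For context, here is a minimal sketch of text-to-music generation with MusicGen, following the usage documented in the audiocraft README; the `facebook/musicgen-small` checkpoint and the 8-second duration are illustrative choices, not requirements.

```python
# Minimal MusicGen text-to-music sketch, per the audiocraft README.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')  # illustrative checkpoint
model.set_generation_params(duration=8)                     # seconds of audio to generate
wav = model.generate(['lo-fi hip hop beat with warm piano'])  # one waveform per prompt

for idx, one_wav in enumerate(wav):
    # Writes {idx}.wav at the model's sample rate with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```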
Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
Awesome-Multimodal-Large-Language-Models
Latest Advances on Multimodal Large Language Models
FasterTransformer
Transformer-related optimizations, including BERT and GPT
gemma_pytorch
The official PyTorch implementation of Google's Gemma models
audio-ai-timeline
A timeline of the latest AI models for audio generation, starting in 2023!
llm-hallucination-survey
A reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Long-Context
This repository contains code and tooling for the Abacus.AI LLM Context Expansion project, along with evaluation scripts and benchmark tasks that measure a model’s information-retrieval capabilities under context expansion. Key experimental results and instructions for reproducing and building on them are also included.
Speech-Backbones
This is the main repository of open-source speech technology from Huawei Noah's Ark Lab.
distrifuser
[CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
libri-light
A dataset for lightly supervised training using LibriVox audiobook recordings. https://librivox.org/
tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
Large-Audio-Models
Keeps track of large models in the audio domain, including speech, singing, and music.
Youku-mPLUG
Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks
CoFiPruning
[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
retraining-free-pruning
[NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
patch_conv
Patch convolution to avoid large GPU memory usage of Conv2D
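The general idea behind patch convolution: instead of convolving the full feature map at once, split it into spatial chunks with halo (overlap) rows and convolve each chunk separately, so peak activation memory stays bounded. Below is an illustrative PyTorch sketch of that pattern (stride 1, odd square kernel assumed); `patched_conv2d` and `n_splits` are hypothetical names for this sketch, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def patched_conv2d(x, weight, bias=None, n_splits=4):
    """Matches F.conv2d(x, weight, bias, padding=k//2) for stride-1, odd
    square kernels, but convolves height-wise chunks so peak activation
    memory is roughly 1/n_splits of the monolithic call. (Illustrative.)"""
    k = weight.shape[-1]               # kernel size (assumed square and odd)
    pad = k // 2
    x = F.pad(x, (pad, pad, pad, pad))  # pad once, then slice with overlap
    h_out = x.shape[2] - 2 * pad        # output height == original height
    step = (h_out + n_splits - 1) // n_splits
    chunks = []
    for start in range(0, h_out, step):
        stop = min(start + step, h_out)
        piece = x[:, :, start:stop + 2 * pad, :]      # chunk plus halo rows
        chunks.append(F.conv2d(piece, weight, bias))  # 'valid' conv per chunk
    return torch.cat(chunks, dim=2)

# Sanity check against the monolithic convolution:
x = torch.randn(1, 16, 64, 64)
w = torch.randn(32, 16, 3, 3)
assert torch.allclose(patched_conv2d(x, w), F.conv2d(x, w, padding=1), atol=1e-5)
```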
public_talks
Materials from public talks given by SJTU X-LANCE members