C's repositories
stable-fast
https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
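For context, a minimal sketch of how stable-fast is typically applied to a Diffusers pipeline. The import path and config names below are taken from memory of the project's README and may not match the current API exactly; treat them as assumptions, and the model id is only a placeholder.

    import torch
    from diffusers import StableDiffusionPipeline
    # Assumed import path; check the stable-fast README for the current module name.
    from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Compile the whole pipeline once; subsequent calls reuse the optimized graph.
    config = CompilationConfig.Default()
    pipe = compile(pipe, config)

    image = pipe("a photo of a cat").images[0]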
Comfy-WaveSpeed
https://wavespeed.ai/ [WIP] The all-in-one inference optimization solution for ComfyUI: universal, flexible, and fast.
ParaAttention
https://wavespeed.ai/ Context parallel attention that accelerates DiT model inference with dynamic caching
fzf-preview.vim
fzf ❤️ preview
pytorch-intel-mps
A fork of PyTorch that supports using the MPS backend on Intel Macs without a GPU card.
multiterm.vim
Toggle and Switch Between Multiple Floating Terminals in NeoVim or Vim
stable-fast-colab
Colab demo of stable-fast, an acceleration framework for diffusers
.vim_runtime
Yet another highly customized, universal Vim/Neovim configuration.
AdaptiveFloat4
A novel high-precision 4-bit quantization format.
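For orientation only, a sketch of plain symmetric 4-bit integer quantization in PyTorch. AdaptiveFloat4's actual floating-point-style encoding differs; this just illustrates the quantize/dequantize round trip that any 4-bit format has to provide.

    import torch

    def quantize_int4_symmetric(x: torch.Tensor):
        """Per-tensor symmetric 4-bit quantization: values map to integers
        in [-8, 7] with a single floating-point scale."""
        scale = x.abs().max().clamp(min=1e-8) / 7.0
        q = torch.clamp(torch.round(x / scale), -8, 7).to(torch.int8)
        return q, scale

    def dequantize_int4_symmetric(q: torch.Tensor, scale: torch.Tensor):
        return q.to(torch.float32) * scale

    x = torch.randn(4, 8)
    q, s = quantize_int4_symmetric(x)
    x_hat = dequantize_int4_symmetric(q, s)
    print((x - x_hat).abs().max())  # rounding error is bounded by ~scale/2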
docker-ubuntu-desktop
Docker image for Ubuntu Desktop that supports hardware GPU-accelerated GUI apps. You can access the container via SSH or remote desktop, just like a cloud VM.
stable-diffusion-webui-stable-fast
This is the AUTOMATIC1111 WebUI extension for stable-fast, a lightweight performance optimization framework for Stable Diffusion.
.tmux_runtime
A tmux configuration
codeinterpreter-api
Open source implementation of the ChatGPT Code Interpreter 👾
cutlass
CUDA Templates for Linear Algebra Subroutines
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
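A minimal text-to-image example with the Diffusers pipeline API (the model id and prompt are placeholders; assumes a CUDA GPU):

    import torch
    from diffusers import DiffusionPipeline

    # Download a pretrained pipeline and move it to the GPU.
    pipe = DiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("an astronaut riding a horse").images[0]
    image.save("astronaut.png")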
flash-attention
Fast and memory-efficient exact attention
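A minimal sketch of calling the fused kernel via flash_attn_func (assumes the flash-attn package and a CUDA GPU; tensors use the (batch, seqlen, nheads, headdim) layout in fp16):

    import torch
    from flash_attn import flash_attn_func

    q = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)
    k = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)
    v = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)

    # Exact attention computed block-wise to save memory; causal masking enabled.
    out = flash_attn_func(q, k, v, causal=True)  # shape (2, 1024, 16, 64)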
hydrangea-vim
Hydrangea theme for Vim.
markdown-cv
A simple template to write your CV in a readable Markdown file and use CSS to publish/print it.
modded-nanogpt-rwkv
RWKV-7: Surpassing GPT
nunchaku
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
ThunderKittens
Tile primitives for speedy kernels
xDiT
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) on multi-GPU Clusters