A fast inference library for running LLMs locally on modern consumer-class GPUs
Repository on GitHub: https://github.com/oobabooga/exllamav2
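
As a quick orientation, here is a minimal sketch of loading an EXL2-quantized model and generating text with the library's dynamic generator API; the model directory path is hypothetical and the exact defaults (sequence length, sampler settings) are assumptions you would adjust for your hardware:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Hypothetical path to a local EXL2-quantized model directory
model_dir = "/models/mistral-7b-exl2/4.0bpw"

# Load model config, then the model itself, splitting layers
# across available GPU memory automatically
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache, progress=True)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(
    model=model,
    cache=cache,
    tokenizer=tokenizer,
)

# Generate a completion for a single prompt
output = generator.generate(
    prompt="Once upon a time,",
    max_new_tokens=150,
)
print(output)
```

The `lazy=True` cache plus `load_autosplit` pattern defers weight allocation until the layer split across GPUs is known, which is what makes large models fit on consumer cards with limited VRAM.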