vineeths96 / Compressed-Transformers

In this repository, we explore model compression for transformer architectures via quantization. Specifically, we apply quantization-aware training to the linear layers and evaluate performance under 8-bit, 4-bit, 2-bit, and 1-bit (binary) quantization.
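
To illustrate the idea, below is a minimal PyTorch sketch of quantization-aware training for a linear layer: weights are fake-quantized in the forward pass, and a straight-through estimator lets gradients flow during backpropagation. The `fake_quantize` helper and `QuantizedLinear` class are hypothetical names for illustration, not this repository's actual implementation.

```python
import torch
import torch.nn as nn

def fake_quantize(w: torch.Tensor, num_bits: int) -> torch.Tensor:
    # Uniformly quantize a tensor to `num_bits` and dequantize it back.
    # (Illustrative helper, not the repo's code.)
    if num_bits == 1:
        # Binary quantization: sign of the weights scaled by mean magnitude.
        q = w.abs().mean() * w.sign()
    else:
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Straight-through estimator: forward uses q, backward sees identity.
    return w + (q - w).detach()

class QuantizedLinear(nn.Linear):
    # Linear layer whose weights are fake-quantized during training.
    def __init__(self, in_features, out_features, num_bits=8, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = fake_quantize(self.weight, self.num_bits)
        return nn.functional.linear(x, w_q, self.bias)

# Example: an 8-bit quantization-aware linear layer.
layer = QuantizedLinear(512, 512, num_bits=8)
out = layer(torch.randn(4, 512))
```

After training converges, the fake-quantized weights can be stored in their low-bit integer form, which is where the compression gain comes from.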
