IST-DASLab / qmoe

Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models".

Home Page: https://arxiv.org/abs/2310.16795

Supporting group-wise quantization and sub1 packing

NicoNico6 opened this issue

Dear Authors,

Sorry to intrude once more.

To the best of my understanding, the original GPTQ algorithm supports a range of group-wise quantization settings, such as group sizes of -1, 128, and 64. From reviewing the code, and assuming my interpretation is correct, it appears that while batch_GPTQ itself handles different group sizes, the add_expert function of the Sub1CheckpointManager class and the make function of Sub1Linear only support row-wise quantization by default, corresponding to a group size of -1; consequently, only the row-wise min_max values are preserved for the subsequent packing step.
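
To make the distinction concrete, here is a minimal sketch, not taken from this repository, of the statistics I mean: with groupsize=-1 a single min/max pair per row is enough, whereas with groupsize=128 a min/max pair per 128-column block would have to be stored and carried through packing (the function name and shapes below are purely illustrative):

```python
import torch

def grid_min_max(weight: torch.Tensor, groupsize: int = -1):
    """Illustrative only: the (min, max) statistics a quantization grid is built
    from, either per row (groupsize=-1) or per group of columns."""
    rows, cols = weight.shape
    if groupsize == -1:
        # Row-wise: one (min, max) pair per output row -> shapes (rows, 1).
        return weight.amin(dim=1, keepdim=True), weight.amax(dim=1, keepdim=True)
    # Group-wise: one (min, max) pair per (row, column-group) block
    # -> shapes (rows, cols // groupsize).
    grouped = weight.reshape(rows, cols // groupsize, groupsize)
    return grouped.amin(dim=2), grouped.amax(dim=2)

weight = torch.randn(768, 3072)
row_min, row_max = grid_min_max(weight, groupsize=-1)   # (768, 1) each
grp_min, grp_max = grid_min_max(weight, groupsize=128)  # (768, 24) each
```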

Would it be feasible to apply the LZW-based compression to tensors that have undergone group-wise quantization (for instance, groupsize=128 with ternary weights), and to design the sub1 packing process accordingly?
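
For what it is worth, here is a rough sketch of what I imagine the input to sub1 packing would look like in that case. GroupedTernaryTensor and quantize_ternary_grouped are hypothetical names invented for this comment, the grid here is a simple symmetric abs-max one rather than the repository's actual grid, and the dictionary-based encoder itself is elided:

```python
from dataclasses import dataclass
import torch

@dataclass
class GroupedTernaryTensor:
    """Hypothetical container for a group-wise ternary layer before packing."""
    symbols: torch.Tensor  # codes mapped from {-1, 0, 1} to {0, 1, 2}, shape (rows, cols)
    scales: torch.Tensor   # fp16 scale per (row, group), shape (rows, cols // groupsize)
    groupsize: int

def quantize_ternary_grouped(weight: torch.Tensor, groupsize: int = 128) -> GroupedTernaryTensor:
    rows, cols = weight.shape
    grouped = weight.reshape(rows, cols // groupsize, groupsize)
    scales = grouped.abs().amax(dim=2, keepdim=True).clamp_min(1e-8)
    codes = torch.round(grouped / scales).clamp(-1, 1).reshape(rows, cols)
    # The entropy coder would still consume only the ternary symbols, exactly as
    # in the row-wise case; the per-group scales ride along as a small side
    # tensor and are applied group-by-group during decompression.
    symbols = (codes + 1).to(torch.uint8)
    return GroupedTernaryTensor(symbols, scales.squeeze(-1).half(), groupsize)

layer = quantize_ternary_grouped(torch.randn(768, 3072), groupsize=128)
print(layer.symbols.shape, layer.scales.shape)  # (768, 3072) and (768, 24)
```

If something along these lines is compatible with the existing packing kernels, I would be happy to hear whether you see any obstacles.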