usyd-fsalab / fp6_llm

Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).

How to create FP16 quantization scales?

mgoin opened this issue · comments

All of the FP6 GEMM functions take the FP6 weights and their FP16 scales for each output channel:

```
 * [Input]
 *  fp6_tensor:  int  tensor of shape [OC, IC // 16 * 3];   // 3 INT32 words contains 16 FP6  weights.
 *  fp16_scale:  half tensor of shape [OC];                 // for row-wise quantization.
```
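
To make the expectation concrete, row-wise quantization presumably means every weight in an output channel (row) shares a single FP16 scale, so dequantization would look roughly like the sketch below (the packed INT32 layout is glossed over, and `fp6_values` is a stand-in for already-decoded FP6 weights):

```python
import torch

# Rough illustration only: one FP16 scale per output channel (row).
OC, IC = 4, 8
fp6_values = torch.randn(OC, IC)                   # placeholder for decoded FP6 weights
fp16_scale = torch.rand(OC, dtype=torch.float16)   # shape [OC], as in the API above

# Each row is multiplied by its single scale to recover approximate FP16 weights.
w_fp16_approx = fp6_values * fp16_scale.float().unsqueeze(1)
assert w_fp16_approx.shape == (OC, IC)
```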

We have functions for converting FP16 weights to FP6 (weight_prepacking_fp16_to_fp6) and for packing the FP6 weights into the final inference format (weight_matrix_prepacking), but nothing to generate the scales used to dequantize back to FP16.

In the testing code for either Python or C++, the scales are always randomly initialized. Is there a function that generates the scales needed for accurate dequantization with real weights?

@Summer-Summer, any help here would be appreciated.

Sorry for the inconvenience. The generation of quantization scales is part of the model quantization process, and I believe that you can find the related code here.
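
In the meantime, a rough sketch of one common recipe (symmetric per-channel absmax scaling) is shown below. This is not necessarily the exact code used for the released checkpoints: `per_channel_fp16_scales` is a hypothetical helper, and the constant 28.0 assumes the E3M2 variant of FP6, whose largest finite magnitude is 28.

```python
import torch

FP6_E3M2_MAX = 28.0  # largest representable magnitude in FP6 E3M2 (assumed format)

def per_channel_fp16_scales(weight_fp16: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: symmetric absmax scales, one per output channel (row).

    weight_fp16: [OC, IC] FP16 weight matrix.
    Returns an [OC] FP16 tensor such that weight_fp16[oc, :] / scale[oc]
    falls inside the representable FP6 range.
    """
    absmax = weight_fp16.abs().amax(dim=1).float()   # [OC]
    absmax = torch.clamp(absmax, min=1e-8)           # guard against all-zero rows
    return (absmax / FP6_E3M2_MAX).half()            # [OC]

# Usage sketch: divide the weights by their scales before the FP16 -> FP6 conversion,
# and pass the scales to the GEMM so the outputs are scaled back up.
W = torch.randn(4096, 4096, dtype=torch.float16)
scales = per_channel_fp16_scales(W)                          # [OC], FP16
W_scaled = (W.float() / scales.float().unsqueeze(1)).half()  # input to FP16 -> FP6 conversion
```

Whether the division by the scale happens before or inside weight_prepacking_fp16_to_fp6 depends on that function's convention, so it is worth confirming against the quantization code linked above.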

I will add that API to this repo when I have more spare time.