666DZY666 / micronet

micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), dynamic shape.

Question about the bn_fuse settings under wbwtab

cqray1990 opened this issue · comments

How are the bn_fuse parameters under wbwtab supposed to be set? With the settings below, both W and A are 32-bit, i.e. full-precision BN fusion:
parser.add_argument("--W", type=int, default=32, help="Wb:2, Wt:3, Wfp:32")
parser.add_argument("--A", type=int, default=32, help="Ab:2, Afp:32")

Why does the code still enter this branch:

******************* BN fusion for binarized activations (A) *******************

# Branch taken for BN layers fused for binarized activations (A).
# For gamma > 0 the gamma/std scaling does not change the sign of the BN output,
# so the conv weights are kept as-is and only the bias is folded.
if bn_counter >= 1 and bn_counter <= bin_bn_fuse_num:
    mask_positive = gamma.data.gt(0)  # channels with positive BN scale
    mask_negetive = gamma.data.lt(0)  # channels with negative BN scale

    # Positive-gamma channels: keep the weights, shift only the bias.
    w_fused[mask_positive] = w[mask_positive]
    b_fused[mask_positive] = (
        b[mask_positive]
        - mean[mask_positive]
        + beta[mask_positive] * (std[mask_positive] / gamma[mask_positive])
    )

This is the fusion for the binary case; with these settings shouldn't it be the normal (full-precision) fusion?
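
For comparison, here is a minimal sketch of the "normal" full-precision BN folding the question refers to, i.e. scaling the conv weights by gamma/std and shifting the bias. Variable names follow the snippet above; the helper fuse_bn_fp32 is illustrative and not part of micronet:

import torch

def fuse_bn_fp32(w, b, gamma, beta, mean, var, eps=1e-5):
    # Fold BN into the conv so that conv(x, w_fused) + b_fused
    # equals gamma * (conv(x, w) + b - mean) / std + beta.
    std = torch.sqrt(var + eps)
    scale = gamma / std                                       # per-output-channel scale
    w_fused = w * scale.reshape(-1, *([1] * (w.dim() - 1)))   # scale each output channel
    b_fused = beta + (b - mean) * scale                       # shift the bias accordingly
    return w_fused, b_fused

In the binary-activation branch quoted above the weight scaling is dropped (for gamma > 0 it does not change the sign seen by the binarized activation), which is why the two fusion formulas differ.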