ujsyehao / mobilenetv3-ssd

provide pytorch model and ncnn model


quantize to int8 error

Alanjunhao opened this issue · comments

Hi,
Thank you for your great work.
One question: when I use ncnn to quantize the provided .bin file to int8, this error occurs: "load_model error at layer 311, parameter file has inconsistent content." I was wondering whether it is because you modified the .param file by hand, introducing some inconsistencies. Could you elaborate a bit on 1) the reason you changed the param file, and 2) whether there is a way to quantize the bin model you provided?
Many thanks!

commented

The error is most likely caused by the modified param file.

  1. I modified the param file (mainly added post-processing layers such as Permute, Flatten, and PriorBox).
  2. I will look into the quantization problem.
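The "parameter file has inconsistent content" error commonly appears when the layer and blob counts declared on the second line of a .param file no longer match the layers actually listed, which is easy to cause when adding layers by hand. Here is a minimal sketch of a consistency check, assuming the textual ncnn param format (magic number on line 1, "layer_count blob_count" on line 2, then one layer per line); the function name is mine:

```python
def check_param(text):
    """Sanity-check an ncnn .param file after hand-editing.

    Returns True when the declared layer/blob counts on line 2
    match the layers actually listed below them.
    """
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    if lines[0].strip() != "7767517":
        raise ValueError("unexpected magic number")
    declared_layers, declared_blobs = map(int, lines[1].split())
    layer_lines = lines[2:]
    blobs = set()
    for ln in layer_lines:
        fields = ln.split()
        in_count, out_count = int(fields[2]), int(fields[3])
        # every blob is produced by exactly one layer output,
        # so counting distinct output names gives the blob count
        blobs.update(fields[4 + in_count : 4 + in_count + out_count])
    return declared_layers == len(layer_lines) and declared_blobs == len(blobs)
```

If this returns False after you add a Permute/Flatten/PriorBox layer, updating the two counts on line 2 should make ncnn accept the file again.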

Thanks for your quick reply; looking forward to your investigation!

Also, the param file produces an error message when loaded by ncnn in the iOS simulator:
"
layer input not exists
layer index -1 not exists
custom layer input not exists
custom layer index -1 not exists
(lldb)
"
Hope you could also take a look at this issue, thanks~
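"layer input not exists" usually means the blob name passed to the extractor (e.g. `ex.input("input", ...)` in C++) does not match any blob declared in the .param file. One way to see which name the model actually expects is to list the output blobs of its Input layers; a sketch under the same format assumptions as above, with a hypothetical helper name:

```python
def input_blobs(param_text):
    """Return the output blob names of all Input layers in an ncnn .param file."""
    names = []
    for ln in param_text.strip().splitlines()[2:]:  # skip magic + counts
        fields = ln.split()
        if fields and fields[0] == "Input":
            in_count, out_count = int(fields[2]), int(fields[3])
            names.extend(fields[4 + in_count : 4 + in_count + out_count])
    return names
```

Whatever name this prints is the one to pass to `Extractor::input` on iOS as well; a mismatch produces exactly the "layer input not exists" / "layer index -1 not exists" pair above.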

commented

@Alanjunhao Sorry, I have no iOS device.