sophgo / tpu-mlir

Machine learning compiler based on MLIR for Sophgo TPU.


Failure converting ONNX models to INT16/INT8 on the CV180X chip

aceraoc opened this issue · comments

My whole toolchain runs inside a Docker container. Converting the ONNX model to INT16/INT8 for CV181X and CV183X succeeds; below are my conversion commands:
#ONNX2MLIR
model_transform.py \
  --model_name yolov5s \
  --model_def /workspace/model_yolov5s/yolov5s.onnx \
  --input_shapes [[1,3,640,640]] \
  --mean 0.0,0.0,0.0 \
  --scale 0.0039216,0.0039216,0.0039216 \
  --keep_aspect_ratio \
  --pixel_format rgb \
  --output_names 350,498,646 \
  --test_input /workspace/model_yolov5s/image/dog.jpg \
  --test_result yolov5s_top_outputs.npz \
  --mlir yolov5s.mlir
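For reference, the `--mean`/`--scale` pair implies the standard preprocessing `y = (x - mean) * scale`; with mean 0 and scale 0.0039216 (roughly 1/255), raw uint8 pixels are mapped into [0, 1]. A minimal sketch in plain NumPy (independent of tpu-mlir, just illustrating the arithmetic):

```python
import numpy as np

# Preprocessing implied by --mean/--scale: y = (x - mean) * scale, per channel.
mean = np.array([0.0, 0.0, 0.0])
scale = np.array([0.0039216, 0.0039216, 0.0039216])  # ~ 1/255

pixel = np.array([255.0, 128.0, 0.0])  # one RGB pixel in raw uint8 range
normalized = (pixel - mean) * scale    # maps [0, 255] into roughly [0, 1]
print(normalized)
```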

#MLIR2BF16
model_deploy.py \
  --mlir yolov5s.mlir \
  --quantize BF16 \
  --chip cv180x \
  --test_input yolov5s_in_f32.npz \
  --test_reference yolov5s_top_outputs.npz \
  --tolerance 0.99,0.99 \
  --model yolov5s_cv180x_bf16.cvimodel
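`--tolerance` sets the minimum similarity that `model_deploy.py`'s comparison step must reach between the reference npz and the compiled model's outputs. A rough sketch of a cosine-similarity check on synthetic stand-in tensors (not tpu-mlir's actual implementation):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Flatten both tensors and return their cosine similarity."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic stand-ins for one tensor from yolov5s_top_outputs.npz (reference)
# and the corresponding output of the compiled model.
ref = np.array([1.0, 2.0, 3.0])
out = np.array([1.01, 1.98, 3.02])

sim = cosine_similarity(ref, out)
assert sim > 0.99  # would clear the BF16 --tolerance 0.99 threshold
```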

#MLIR2INT8
run_calibration.py yolov5s.mlir \
  --dataset ../COCO2017/ \
  --input_num 100 \
  -o yolov5s_cali_table
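`run_calibration.py` sweeps the calibration images to derive a per-tensor threshold used for symmetric INT8 quantization. A toy illustration on random data using max-abs calibration (tpu-mlir's actual method may differ, e.g. KLD-based thresholds):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for activations gathered from the --input_num 100 calibration images.
acts = [rng.normal(0.0, 1.0, size=1000) for _ in range(100)]

# Max-abs calibration: one symmetric threshold per tensor.
threshold = max(float(np.abs(a).max()) for a in acts)
step = threshold / 127.0  # INT8 symmetric quantization step

# Quantize one activation tensor with the calibrated step.
q = np.clip(np.round(acts[0] / step), -127, 127).astype(np.int8)
```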

model_deploy.py \
  --mlir yolov5s.mlir \
  --quantize INT8 \
  --calibration_table yolov5s_cali_table \
  --chip cv180x \
  --test_input yolov5s_in_f32.npz \
  --test_reference yolov5s_top_outputs.npz \
  --tolerance 0.85,0.45 \
  --model yolov5s_cv180x_int8_sym.cvimodel

The error output is as follows:
python3: /work/chip_compiler/cviruntime/src/cmodel/cmodel_cmdbuf.cpp:164: virtual bmerr_t CModelCmdbuf::rt_load_cmdbuf(bmctx_t, uint8_t*, size_t, long long unsigned int, long long unsigned int, bool, bm_memory**): Assertion `tiu_sz <= (int)sz && tiu_sz <= (int)g_tiu_cmdbuf_reserved_size' failed.
Aborted (core dumped)

Could someone take a look and tell me whether there is a problem with how I'm using these commands?