PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (a practical ultra-lightweight OCR system; supports recognition of 80+ languages; provides data annotation and synthesis tools; supports training and deployment on server, mobile, embedded, and IoT devices)

TensorRT inference for the text recognition model

UBUNTUHWB opened this issue

1. Used the paddle2onnx tool to convert the text recognition model to an .onnx model.
2. Built an .engine inference model from it with TensorRT 8.2.
3. The .engine file is generated successfully, but the inference output is wrong.

I would like to ask whether a specific PaddleOCR environment has to be installed before the text recognition model can be run with TensorRT inference. Using exactly the same approach, I converted DBNet's .onnx model to an .engine, and that inference output is correct.
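For reference, a minimal sketch of the export and build steps described above, assuming the standard paddle2onnx command line and TensorRT's trtexec tool were used; the model directory, file names, input tensor name x, and shape values below are illustrative placeholders, not taken from this issue:

# 1. Export the recognition inference model to ONNX (paths are placeholders)
paddle2onnx --model_dir ./ch_PP-OCRv3_rec_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./rec.onnx \
    --opset_version 11 \
    --enable_onnx_checker True

# 2. Build a TensorRT 8.2 engine; the recognition input width is dynamic,
#    so min/opt/max shapes must be given (height is 32 or 48 depending on
#    the model version; all values here are illustrative)
trtexec --onnx=./rec.onnx --saveEngine=./rec.engine \
    --minShapes=x:1x3x48x32 \
    --optShapes=x:1x3x48x320 \
    --maxShapes=x:1x3x48x2000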

Could you describe in more detail how the inference results are wrong? If the DB results are correct, the environment should be fine; you could check whether the pre- and post-processing are consistent with the Python pipeline.
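As an illustration of the pre-processing side of that check, here is a minimal sketch of CRNN-style recognition pre-processing: resize to a fixed height keeping the aspect ratio, pad to the target width, normalize to roughly [-1, 1], and repack HWC to CHW. The use of OpenCV, the default shape 3x32x320, and the function name are assumptions for illustration; the height (32 vs 48), channel order, and padding behaviour have to match whatever the Python/ONNX pipeline actually does:

#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch of CRNN-style recognition pre-processing (assumed target shape 3 x imgH x imgW).
// If the TensorRT input is not built exactly like the Python/ONNX input,
// the argmax indices will differ even when the engine itself is correct.
std::vector<float> preprocess_rec(const cv::Mat& img, int imgH = 32, int imgW = 320) {
    // Resize to the target height, keeping the aspect ratio, capped at imgW.
    float ratio = static_cast<float>(img.cols) / static_cast<float>(img.rows);
    int resized_w = std::min(imgW, static_cast<int>(std::ceil(imgH * ratio)));
    cv::Mat resized;
    cv::resize(img, resized, cv::Size(resized_w, imgH));

    // Normalize: (pixel / 255 - 0.5) / 0.5, i.e. pixel * 2/255 - 1.
    cv::Mat norm;
    resized.convertTo(norm, CV_32FC3, 2.0 / 255.0, -1.0);

    // Pad to imgW on the right with zeros and repack HWC -> CHW.
    std::vector<float> chw(3 * imgH * imgW, 0.0f);
    for (int c = 0; c < 3; ++c)
        for (int h = 0; h < imgH; ++h)
            for (int w = 0; w < resized_w; ++w)
                chw[c * imgH * imgW + h * imgW + w] = norm.at<cv::Vec3f>(h, w)[c];
    return chw;
}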

float* image_based_output = output->cpu(ibatch);  // logits for one image, shape [128, 6625]
std::vector<int> preds;
for (int i = 0; i < 128; i++) {              // 128 time steps
    int maxj = 0;
    for (int j = 1; j < 6625; j++) {         // argmax over the 6625 character classes
        if (image_based_output[6625 * i + j] > image_based_output[6625 * i + maxj]) maxj = j;
    }
    preds.push_back(maxj);
}
The output shape is [batch, 128, 6625], so the post-processing above should be fine, but the result does not match the Python ONNX inference output.
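For completeness: the argmax loop above only produces per-step class indices; PaddleOCR's CTC decoding then collapses consecutive duplicates and drops the blank class before mapping indices to dictionary characters. A minimal sketch of that step, assuming index 0 is the CTC blank as in PaddleOCR's CTCLabelDecode (this does not affect whether the raw argmax indices match the Python ONNX run, which is the comparison that matters here):

#include <vector>

// Greedy CTC decode: collapse consecutive duplicate indices and drop the blank class.
// Assumes the blank token is class index 0, as in PaddleOCR's CTCLabelDecode;
// the remaining indices are then looked up in the character dictionary.
std::vector<int> ctc_greedy_decode(const std::vector<int>& preds, int blank = 0) {
    std::vector<int> chars;
    int prev = blank;
    for (int idx : preds) {
        if (idx != blank && idx != prev) {
            chars.push_back(idx);
        }
        prev = idx;
    }
    return chars;
}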

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.