hova88 / PointPillars_MultiHead_40FPS

A real-time 3D detection network [PointPillars], implemented with CUDA/TensorRT/C++.

Driver error

dianbayier opened this issue

terminate called after throwing an instance of 'pwgen::PwgenException'
what(): Driver error:
Aborted (core dumped)

I've never run into this. Please post more details.

void PointPillars::OnnxToTRTModel(
    const std::string& model_file,
    nvinfer1::ICudaEngine** engine_ptr) {
  std::cout << " OnnxToTRTModel 1" << std::endl;
  int verbosity = static_cast<int>(nvinfer1::ILogger::Severity::kWARNING);
  std::cout << " OnnxToTRTModel 2" << std::endl;

  // create the builder
  const auto explicit_batch =
      static_cast<uint32_t>(kBatchSize) << static_cast<uint32_t>(
          nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(g_logger_);
  nvinfer1::INetworkDefinition* network =
      builder->createNetworkV2(explicit_batch);
  std::cout << " OnnxToTRTModel 3" << std::endl;

  // parse onnx model
  auto parser = nvonnxparser::createParser(*network, g_logger_);
  if (!parser->parseFromFile(model_file.c_str(), verbosity)) {
    std::string msg("failed to parse onnx file");
    g_logger_.log(nvinfer1::ILogger::Severity::kERROR, msg.c_str());
    exit(EXIT_FAILURE);
  }
  std::cout << " OnnxToTRTModel 4" << std::endl;

  // Build the engine
  builder->setMaxBatchSize(kBatchSize);
  // builder->setHalf2Mode(true);
  std::cout << " OnnxToTRTModel 5" << std::endl;
  nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
  std::cout << " OnnxToTRTModel 6" << std::endl;
  config->setMaxWorkspaceSize(1 << 20);
  std::cout << " OnnxToTRTModel 7" << std::endl;
  nvinfer1::ICudaEngine* engine =
      builder->buildEngineWithConfig(*network, *config);
  std::cout << " OnnxToTRTModel 8" << std::endl;
  *engine_ptr = engine;
  parser->destroy();
  network->destroy();
  config->destroy();
  builder->destroy();
}
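As a side note, the build result in the function above is never checked and the workspace is only 1 MiB. A minimal sketch of a more defensive build step, using the same TensorRT 7-style API and the same builder / network / g_logger_ variables as above (the larger workspace value is an assumption, not something the repo requires, and none of this prevents an exception thrown inside TensorRT itself):

  // Sketch only: a guarded version of the build step in OnnxToTRTModel above.
  // It makes an ordinary build failure (nullptr engine) visible immediately
  // instead of crashing later when the null engine is used.
  nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
  // 1 << 20 is only 1 MiB of workspace; 1ULL << 28 (256 MiB) is an assumed,
  // more generous value.
  config->setMaxWorkspaceSize(1ULL << 28);

  nvinfer1::ICudaEngine* engine =
      builder->buildEngineWithConfig(*network, *config);
  if (engine == nullptr) {
    g_logger_.log(nvinfer1::ILogger::Severity::kERROR,
                  "buildEngineWithConfig returned nullptr");
    exit(EXIT_FAILURE);
  }
  *engine_ptr = engine;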

PointPillars start
InitTRT start
InitTRT start1
InitTRT start11
OnnxToTRTModel 1
OnnxToTRTModel 2
OnnxToTRTModel 3

Input filename: ../model/pfe.onnx
ONNX IR version: 0.0.4
Opset version: 10
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:

WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
OnnxToTRTModel 4
OnnxToTRTModel 5
OnnxToTRTModel 6
OnnxToTRTModel 7
OnnxToTRTModel 8
InitTRT start12
OnnxToTRTModel 1
OnnxToTRTModel 2
OnnxToTRTModel 3

Input filename: ../model/backbone.onnx
ONNX IR version: 0.0.4
Opset version: 10
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:

OnnxToTRTModel 4
OnnxToTRTModel 5
OnnxToTRTModel 6
OnnxToTRTModel 7
terminate called after throwing an instance of 'pwgen::PwgenException'
what(): Driver error:
Aborted (core dumped)

The pfe.onnx engine builds fine, but the second call, OnnxToTRTModel(backbone_file_, &backbone_engine_), throws the exception at this line:

nvinfer1::ICudaEngine* engine =
    builder->buildEngineWithConfig(*network, *config);

terminate called after throwing an instance of 'pwgen::PwgenException'
  what(): Driver error:
Aborted (core dumped)
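Since the message is "Driver error", it may be worth ruling out a CUDA driver/runtime mismatch before digging into TensorRT. A small standalone check, using only the standard CUDA runtime API (the file name and the threshold logic are illustrative, not taken from the repo):

// check_cuda.cc -- standalone sketch, not part of the repo.
// Build with e.g.: nvcc check_cuda.cc -o check_cuda
// Prints the CUDA driver and runtime versions; a "Driver error" often means
// the runtime/TensorRT build is newer than what the installed driver supports.
#include <cuda_runtime_api.h>
#include <iostream>

int main() {
  int driver_version = 0;   // CUDA version supported by the installed driver, e.g. 11000 for 11.0
  int runtime_version = 0;  // CUDA runtime version the binary was compiled against

  if (cudaDriverGetVersion(&driver_version) != cudaSuccess ||
      cudaRuntimeGetVersion(&runtime_version) != cudaSuccess) {
    std::cerr << "Failed to query CUDA versions" << std::endl;
    return 1;
  }

  std::cout << "CUDA driver version : " << driver_version << std::endl;
  std::cout << "CUDA runtime version: " << runtime_version << std::endl;

  // Also confirm a CUDA context can actually be created on device 0
  // (cudaFree(0) is the usual idiom to force context initialization).
  if (cudaSetDevice(0) != cudaSuccess || cudaFree(0) != cudaSuccess) {
    std::cerr << "Could not initialize a CUDA context on device 0" << std::endl;
    return 1;
  }

  if (driver_version < runtime_version) {
    std::cout << "Driver is older than the runtime -- update the NVIDIA driver "
                 "or rebuild against a matching CUDA toolkit." << std::endl;
  }
  return 0;
}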


https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars