xuansan915 / TensorRT5-Example

About This Sample

This sample demonstrates how to train a model with TensorFlow and Keras, freeze it and write it to a protobuf (.pb) file, convert the frozen graph to UFF, and finally run inference with TensorRT 5.

Installing Prerequisites

  1. Make sure you have the python dependencies installed.

    • For python2, run python2 -m pip install -r requirements.txt from the top-level of this sample.
    • For python3, run python3 -m pip install -r requirements.txt from the top-level of this sample.
  2. Make sure you have the UFF toolkit and graphsurgeon installed. If not, download them from here

  3. Train the model and write out the frozen graph:

    $ mkdir models
    $ python train.py
    

Create an engine

1. Use the UFF file

  1. Convert the .pb file to .uff, using the convert-to-uff utility:

    $ convert-to-uff ./models/model.pb -o ./models/
    

    The converter will display information about the input and output nodes, which you can use to register the inputs and outputs with the parser. In this case, we already know the details of the input and output nodes and have included them in the sample.

  2. Create a TensorRT inference engine from the UFF file:

    $ python uff2plan.py
    

2. Use the frozen graph

Create a TensorRT inference engine directly from the .pb file:

    $ python pb2plan.py

Run inference

$ python inference.py

About

License: MIT License


Languages

Language: Python 100.0%