Use the TensorRT API to implement Caffe-SSD, SSD (channel pruning), and MobileNet-SSD
============================================
I hope my code helps you learn and understand the TensorRT API better. You are welcome to discuss deep learning algorithms, model optimization, the TensorRT API, and more, so we can learn from each other.
# Introduction:
- The original Caffe-SSD runs at 3-5 fps on my Jetson TX2.
- TensorRT-SSD runs at 8-10 fps on my Jetson TX2.
- TensorRT-SSD (channel pruning) runs at 16-17 fps on my Jetson TX2.
- TensorRT-MobileNet-SSD runs at 40-43 fps on my Jetson TX2 (it's cool!) and at 100+ fps on a GTX 1060.
# Requirements:
- TensorRT 3.0
- CUDA 8.0 or CUDA 9.0
- OpenCV
The code will be published shortly...
==============================================
The Other_layer_tensorRT folder contains implementations of some other layers with the TensorRT API, including:
- PReLU
Continuously updated...
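For reference, here is a minimal sketch of what a per-channel PReLU layer can look like when written against the TensorRT 3.x `IPlugin` interface. The class name, the kernel, and the simplified serialization below are illustrative assumptions, not the exact code in `Other_layer_tensorRT`; in a real build the plugin would be created by an `nvcaffeparser1::IPluginFactory` so the Caffe parser can attach it to the PReLU layers in the prototxt.

```cpp
// prelu_plugin_sketch.cu -- hypothetical sketch of a PReLU IPlugin for TensorRT 3.x.
// Names are illustrative; compile with nvcc and link against nvinfer.
#include <cassert>
#include <cstring>
#include <vector>
#include <cuda_runtime.h>
#include "NvInfer.h"

using namespace nvinfer1;

// Element-wise PReLU: y = x if x > 0, otherwise slope[c] * x (one slope per channel).
__global__ void preluKernel(const float* in, float* out, const float* slope,
                            int count, int channelSize, int channels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    int c = (i / channelSize) % channels;
    float v = in[i];
    out[i] = v > 0.f ? v : v * slope[c];
}

class PReLUPlugin : public IPlugin
{
public:
    // Built by the plugin factory from the caffemodel weights (the per-channel slopes).
    PReLUPlugin(const Weights& slopes)
        : mChannels(static_cast<int>(slopes.count))
    {
        const float* p = static_cast<const float*>(slopes.values);
        mHostSlopes.assign(p, p + slopes.count);
    }

    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        // PReLU is element-wise, so the output shape equals the input shape.
        assert(index == 0 && nbInputDims == 1);
        return inputs[0];
    }

    void configure(const Dims* inputDims, int, const Dims*, int, int) override
    {
        mChannelSize = inputDims[0].d[1] * inputDims[0].d[2]; // H * W
        mCount       = inputDims[0].d[0] * mChannelSize;      // C * H * W
    }

    int initialize() override
    {
        // Copy the slopes to the GPU once; enqueue() reuses them for every batch.
        cudaMalloc((void**)&mDeviceSlopes, mChannels * sizeof(float));
        cudaMemcpy(mDeviceSlopes, mHostSlopes.data(), mChannels * sizeof(float),
                   cudaMemcpyHostToDevice);
        return 0;
    }

    void terminate() override { cudaFree(mDeviceSlopes); }

    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void*, cudaStream_t stream) override
    {
        int total = batchSize * mCount;
        int block = 256, grid = (total + block - 1) / block;
        preluKernel<<<grid, block, 0, stream>>>(
            static_cast<const float*>(inputs[0]), static_cast<float*>(outputs[0]),
            mDeviceSlopes, total, mChannelSize, mChannels);
        return cudaGetLastError() == cudaSuccess ? 0 : -1;
    }

    // Serialization is kept minimal for this sketch: shape info plus the slopes.
    size_t getSerializationSize() override
    {
        return 3 * sizeof(int) + mChannels * sizeof(float);
    }

    void serialize(void* buffer) override
    {
        int* p = static_cast<int*>(buffer);
        p[0] = mChannels; p[1] = mChannelSize; p[2] = mCount;
        std::memcpy(p + 3, mHostSlopes.data(), mChannels * sizeof(float));
    }

private:
    int mChannels = 0, mChannelSize = 0, mCount = 0;
    std::vector<float> mHostSlopes;
    float* mDeviceSlopes = nullptr;
};
```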
- 2018/02/06, updated the detection_out layer
- 2018/03/07, added the common.cpp file
- 2018/04/21, TensorFlow 1.7 wheel with JetPack 3.2 (TensorRT support enabled)
- 2018/05/07, TensorRT parsing two (or more) models; see sample_parse_two_models.txt and the sketch after this list
- 2018/05/30, added MobileNet-SSD_iplugin.prototxt (21 classes)
- 2018/07/19, fixed the error in the Concat layer in pluginIplement.cpp
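As referenced in the 2018/05/07 entry, a minimal sketch of parsing two Caffe models into two independent TensorRT engines might look as follows. The file names, output blob name, and the `buildEngine` helper are hypothetical placeholders; refer to sample_parse_two_models.txt for the actual sample. Note that an SSD prototxt with custom layers (PriorBox, DetectionOutput, etc.) would also need `parser->setPluginFactory(...)` before `parse()` is called.

```cpp
// parse_two_models_sketch.cpp -- sketch of building two TensorRT engines from
// two Caffe models with nvcaffeparser1; names below are illustrative only.
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

// Parse one prototxt/caffemodel pair and build an engine for it.
ICudaEngine* buildEngine(const char* deployFile, const char* modelFile,
                         const char* outputBlob)
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // For SSD-style prototxts, call parser->setPluginFactory(&factory) here
    // so the custom layers can be created during parsing.
    const IBlobNameToTensor* blobs =
        parser->parse(deployFile, modelFile, *network, DataType::kFLOAT);
    network->markOutput(*blobs->find(outputBlob));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}

int main()
{
    // Two separate engines, one per model (placeholder file and blob names).
    ICudaEngine* ssdEngine = buildEngine("ssd_deploy.prototxt",
                                         "ssd.caffemodel", "detection_out");
    ICudaEngine* mobilenetEngine = buildEngine("mobilenet_ssd_deploy.prototxt",
                                               "mobilenet_ssd.caffemodel",
                                               "detection_out");

    IExecutionContext* ctx1 = ssdEngine->createExecutionContext();
    IExecutionContext* ctx2 = mobilenetEngine->createExecutionContext();

    // ... allocate device buffers and call ctx1/ctx2 enqueue() or execute() here ...

    ctx1->destroy(); ctx2->destroy();
    ssdEngine->destroy(); mobilenetEngine->destroy();
    shutdownProtobufLibrary();
    return 0;
}
```

Each engine gets its own IExecutionContext, so the two networks can be enqueued on separate CUDA streams and share the same GPU.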