tensorflow/tensorrt: TensorFlow/TensorRT integration
Stargazers: 729 | Watchers: 34 | Issues: 197 | Forks: 224
tensorflow/tensorrt Issues
- Local rendezvous is aborting with status: NOT_FOUND: TRTEngineCacheResource not yet created while converting a saved model to a TRT engine (Updated 21 days ago)
- InvalidArgumentError: Graph execution error: Input to reshape is a tensor with 1204224 values, but the requested shape has 4816896 (Closed a month ago)
- ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate. (Closed 2 years ago, 3 comments)
- Tensorflow TensorRT mismatch (Closed 5 months ago)
- "Incompatible shapes" error during inference (Updated 7 months ago)
- Serving a TF-TRT converted model returns an error: NodeDef mentions attr 'max_batch_size' not in Op: name=TRTEngineOp (Updated a year ago)
- InceptionV3 C++ example: building the TRT engine for inference failed (Updated a year ago, 1 comment)
- CUDA synchronize alternative for profiling (Updated a year ago, 8 comments)
- No speed improvement after TF-TRT optimization of a TensorFlow BERT model (Updated a year ago)
- No improvement in GPU memory consumption during inference (Updated 2 years ago, 3 comments)
- [Jetson] No OpKernel was registered to support Op 'TRTEngineOp' (Updated 2 years ago, 6 comments)
- Unable to save gradient functions when exporting a _DefinedFunction when using converter.save('model') (Closed 2 years ago, 1 comment)
- "ValueError: Failed to import metagraph, check error log for more info." (Updated 2 years ago, 1 comment)
- Failed to convert a Hugging Face model to TensorRT (Updated 2 years ago, 2 comments)
- example-cpp/mnist_demo: building the TF-TRT example fails (Updated 2 years ago, 2 comments)
- Support for TensorRT 8.0 (Updated 2 years ago, 7 comments)
- Blacklisting certain parts of the model so they are not converted to TRT (Updated 2 years ago)
- Converting on Jetson Nano (Updated 2 years ago)
- How to write my input_fn when converting my TF model to TRT? (Closed 2 years ago, 5 comments)
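A recurring question in these issues is how to write an `input_fn` for TF-TRT engine building. A minimal sketch, assuming a model with a single image input; the shapes, batch sizes, and paths below are hypothetical placeholders, not taken from any issue:

```python
import numpy as np

def input_fn(batch_size=1, num_batches=4):
    """Yield tuples of example inputs, one tuple element per model input.

    Each yielded tuple is fed through the converted model so TRT engines
    get built ahead of time for those input shapes. Dtypes and shapes must
    match the SavedModel's serving signature (assumed NHWC float32 here).
    """
    for _ in range(num_batches):
        yield (np.random.random_sample(
            (batch_size, 224, 224, 3)).astype(np.float32),)

# Typical TF-TRT usage (requires TensorFlow built with TensorRT support;
# paths are placeholders):
#
# from tensorflow.python.compiler.tensorrt import trt_convert as trt
# converter = trt.TrtGraphConverterV2(input_saved_model_dir="my_saved_model")
# converter.convert()
# converter.build(input_fn=input_fn)  # builds engines for the yielded shapes
# converter.save("my_trt_saved_model")
```

With `TrtGraphConverterV2`, calling `build(input_fn=...)` before `save(...)` is what avoids building engines lazily at first inference, which is the usual cause of slow startup and of "TRTEngineCacheResource not yet created" style errors when serving.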
- Converting a Vision Transformer model with pre-built engines (Updated 2 years ago)
- Is it possible to use TensorRT to speed up an original TensorFlow T5 exported saved_model? (Updated 2 years ago, 4 comments)
- How to convert a Transformer model with TensorRT ops (Updated 2 years ago)
- Variables saved in converted model (Updated 2 years ago)
- Loading the file to build the model failed (Updated 2 years ago, 2 comments)
- Where to get the saved models (Updated 2 years ago)
- Very low validation accuracy for ResNet50 and ResNet101 models using TF-TRT (Updated 2 years ago, 2 comments)
- [Bug/Feature Request] TF-TensorRT to support "string" datatype (Updated 2 years ago, 1 comment)
- Not able to optimize a TensorFlow (TF 1.14) object detection model graph (Updated 2 years ago)
- Inference time using TF-TRT is the same as native TensorFlow for object detection models (SSD ResNet 640x640 and EfficientDet D0) (Updated 2 years ago, 3 comments)
- How to evaluate overall model accuracy of a TF-TRT FP32, FP16, and FP08 based image classifier? (Updated 2 years ago)
- EfficientDet D0 pre-built TRT engine failed. TF 2.7.0, TRT 7.2.3 (Closed 2 years ago, 1 comment)
- TF-TRT model is 3 times the TF model (Closed 2 years ago, 2 comments)
- Tensorflow Docker 2.7.0-gpu has incompatible TRT version 8.0.0 (Closed 2 years ago, 1 comment)
- Calibration fails with (Assertion mIndex >= 0 failed. Symbol is not concrete) (Updated 2 years ago)
- BERT Large (TF2) model conversion fails (Updated 2 years ago, 5 comments)
- Crash in TensorRT while converting ONNX models to PLAN in parallel (Closed 2 years ago, 2 comments)
- Error occurs when running TF 1.14.0; please help find a solution (Closed 3 years ago, 1 comment)
- DefaultLogger INVALID_ARGUMENT: Cannot set empty memory (Updated 3 years ago)
- [Blog post] tf-models-official missing in tensorflow/tensorflow:latest-gpu (Updated 3 years ago)
- Accuracy drops a lot after building the TensorRT engine (Updated 3 years ago, 1 comment)
- Error: running multiple TensorRT-optimized models in TensorFlow (Updated 3 years ago)
- The final TRT model is too large (Updated 3 years ago)
- No speed improvement for a RetinaNet TF-TRT object detection model (Updated 3 years ago)
- Converting an INT8 engine failed (Updated 3 years ago)
- Converted FP16 or INT8 models require up to 10 minutes to start up (Updated 3 years ago, 6 comments)
- *Core dumped* bug (Updated 3 years ago)
- Shape output bug in dynamic shape mode (Updated 3 years ago, 2 comments)
- Container TF-TRT does not exist (Updated 3 years ago, 1 comment)
- TensorFlow TensorRT model not working properly in a for loop (Updated 3 years ago)
- No improvement using TensorRT 5 (Updated 3 years ago)