Tencent/TPAT: TensorRT Plugin Autogen Tool
Stargazers: 365 | Watchers: 13 | Issues: 37 | Forks: 42
Tencent/TPAT Issues
- Is this no longer being maintained? (Updated 2 months ago)
- Fails to run example test_onehot_dynamic_direct.py (Closed a year ago, 12 comments)
- Build succeeds, but TensorRT errors at runtime on the one-hot example (Updated 9 months ago, 11 comments)
- Could you provide a simple tutorial on how to run onnx_to_plugin for a simple operator? (Closed a year ago, 4 comments)
- KeyError: int8 (Closed a year ago, 1 comment)
- Out of memory (Closed a year ago, 2 comments)
- Is this repo maintained? (Closed a year ago, 1 comment)
- Unsupported PTX version error (Closed a year ago)
- Build succeeds, but TensorRT errors at runtime (Updated a year ago, 6 comments)
- Support for the one-hot plugin with a dynamic axis other than the batch_size dim (Updated a year ago, 4 comments)
- Is there a talk or article about the implementation of this project? (Updated a year ago, 1 comment)
- Error when running the one_hot example (Updated a year ago, 6 comments)
- Half-precision model error (Closed a year ago, 8 comments)
- Support for dynamic shape? (Closed a year ago, 2 comments)
- Does the CUDA kernel code generated from Ansor's search space use shared-memory optimization during auto-tuning? (Updated a year ago, 1 comment)
- Precision of the one-hot plugin is wrong (Updated a year ago, 2 comments)
- No radical subgraph optimization for TensorRT (Updated a year ago, 1 comment)
- Support CUDA 11.5 and TensorRT 8.2.1.3? (Updated 2 years ago, 1 comment)
- Cannot run the example (Closed 2 years ago, 1 comment)
- RandomNormal not supported for frontend ONNX (Updated 2 years ago, 1 comment)
- test_tpat.py error (Closed 2 years ago, 4 comments)
- What if the custom op type is not in tvm.relay.frontend.onnx._get_convert_map? (Closed 2 years ago, 1 comment)
- When will dynamic batch size be supported? (Closed 2 years ago, 4 comments)
- Does TPAT support grid_sample? (Closed 2 years ago, 2 comments)
- Conversion error for the IsInf op (Closed 2 years ago, 11 comments)
- TensorFlow BERT model cannot be built successfully; when will this be fixed? (Updated 2 years ago, 1 comment)
- test_tpat error (Updated 2 years ago, 3 comments)
- When will the Scan operator be supported? (Updated 2 years ago, 2 comments)
- Could you build a Docker image for us to download? (Updated 2 years ago, 1 comment)
- What is the blazerml-tvm build error below? (Closed 2 years ago)
- TPAT and TRT: no kernel image is available for execution on the device (Closed 2 years ago, 1 comment)
- Are custom operators supported? (Closed 2 years ago, 4 comments)
- Cannot find project_libbacktrace; an error is reported while building TVM from source (Closed 2 years ago)
- Cuda Error in execute: 209 (no kernel image is available for execution on the device) (Closed 2 years ago, 2 comments)
- Is sparse convolution now supported? (Updated 2 years ago, 2 comments)
- Can't build TPAT (Updated 2 years ago, 9 comments)
- Docker image (Closed 2 years ago)