NVIDIA-AI-IOT / cuDLA-samples
YOLOv5 on Orin DLA
Stargazers: 167 | Watchers: 7 | Issues: 39 | Forks: 17
NVIDIA-AI-IOT/cuDLA-samples Issues
- QAT program killed every time (Updated 2 months ago)
- bash data/model/build_dla_standalone_loadable.sh error (Updated 2 months ago, 5 comments)
- missing file (Updated 2 months ago)
- IOFormat (Updated 2 months ago)
- cudla import external semaphore FAILED (Updated 3 months ago, 10 comments)
- How to build a TensorRT engine and use the Python API to run it on DLA? (Updated 3 months ago)
- No module named quantization (Updated 3 months ago, 1 comment)
- Size of output tensor in Run function (Updated 3 months ago)
- Cannot run the sample with cuDLA standalone mode (Closed 3 months ago)
- Standalone mode and performance issue (Closed 6 months ago, 8 comments)
- How to profile cuDLA computation (Updated 4 months ago, 2 comments)
- How to choose which DLA to run the code on? (Closed 4 months ago, 1 comment)
- Implementation of batch size inference (Closed 4 months ago, 1 comment)
- How to measure inference time for cuDLA standalone mode? (Updated 4 months ago, 1 comment)
- How to obtain the min and max values of activations and weights? (Updated 4 months ago, 1 comment)
- How to run simultaneously on two DLAs (Updated 4 months ago, 1 comment)
- YOLOv5s PTQ/QAT - 0% mAP when inferring with TensorRT INT8 .engine models (Closed 5 months ago, 1 comment)
- /model.0/conv/_input_quantizer/Constant_1_output_0' is not supported on DLA. (Updated 5 months ago, 2 comments)
- Quantization of YOLOv5 (Updated 6 months ago, 1 comment)
- build dla patch error (Closed 6 months ago, 2 comments)
- Will cuDLA standalone mode occupy CUDA GPU resources? (Updated 6 months ago, 3 comments)
- How to test inference time (Closed 6 months ago, 5 comments)
- Size of input tensor in ReformatImage function (Updated 6 months ago, 3 comments)
- JetPack (Closed 6 months ago, 2 comments)
- Protobuf error when running build_dla_standalone_loadable.sh (Closed 6 months ago, 2 comments)
- Tensor sizes differ between ONNX model and DLA loadable engine outputs (Updated 6 months ago, 1 comment)
- Why should we use apply_custom_rules_to_quantizer? (Updated 6 months ago, 1 comment)
- /usr/bin/ld: cannot find -lnvscisync (Closed 6 months ago)
- src/cudla_context_standalone.h:30:10: fatal error: nvscibuf.h: No such file or directory (Closed 6 months ago, 2 comments)
- Cannot run DLA1 on Jetson AGX Orin DK (Closed 6 months ago, 1 comment)
- fatal error: nvscibuf.h: No such file or directory (Closed 6 months ago, 4 comments)
- Is there a plan to support the demo in DS? (Updated 6 months ago, 3 comments)
- Can I use this repository on Jetson Xavier? (Updated 6 months ago, 5 comments)
- [hybrid mode] load cuDLA module from memory FAILED in src/cudla_context_hybrid.cpp:96, CUDLA ERR: 7 (Updated 7 months ago, 2 comments)
- build dla standalone loadable error (Closed a year ago, 11 comments)
- For QDQ translator, may exit because node scales don't match (Closed 8 months ago, 2 comments)
- [Note] Running cuDLA-samples on 6.0.6.0 (Updated 8 months ago, 4 comments)
- Error during Model Conversion Process - Impact Inquiry (Updated 10 months ago, 5 comments)