microsoft/onnxconverter-common
Common utilities for ONNX converters
Stargazers: 233 | Watchers: 14 | Issues: 52 | Forks: 64
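Most of the issues listed below concern the float16 conversion utility in this repository. A minimal sketch of the typical call follows; the file names model.onnx and model_fp16.onnx are hypothetical placeholders:

```python
# Minimal sketch: convert an FP32 ONNX model to FP16 with onnxconverter-common.
# "model.onnx" and "model_fp16.onnx" are hypothetical paths used for illustration.
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")
# keep_io_types=True leaves graph inputs/outputs in float32 and inserts Cast nodes,
# a behavior several of the issues below discuss.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)
onnx.save(model_fp16, "model_fp16.onnx")
```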
microsoft/onnxconverter-common Issues
convert to FP16 generate orphan and self-recurring nodes (Updated 9 days ago, 2 comments)
[Error] Load fp16 (Updated 11 days ago, 2 comments)
Performance degrade after sess_options.enable_profiling = True (Updated 12 days ago, 1 comment)
convert float32 model to float16, but memory usage has not decreased (Updated 12 days ago, 5 comments)
protobuf version (Closed a month ago, 4 comments)
onnxconverter_common.auto_mixed_precision.auto_convert_mixed_precision never ends (Updated a month ago, 1 comment)
convert_float_to_float16() produces a model that causes ValidationError with onnx.checker.check_model() (Updated a month ago, 6 comments)
Converting model fp32 to fp16 with auto_mixed_precision_model_path from gets NaN (Updated a month ago, 1 comment)
`auto_convert_mixed_precision` Error: two nodes with same node name error occurred during (Closed 9 months ago, 5 comments)
FP16 conversion yields an unusable model (Updated a month ago, 2 comments)
Is there any upgrades on onnxconverter-common? (Updated a month ago, 1 comment)
resize op convert to FP16 fail (Updated a month ago, 4 comments)
#Onnx Quantisation (Updated a month ago, 2 comments)
Integrate with ONNX 1.16.0 release branch (Closed a month ago, 1 comment)
Redundant dependencies in requirements.txt (Closed a month ago, 1 comment)
Documentation (Closed a month ago, 1 comment)
Extra Cast nodes causes overflow in onnxruntime 1.17 (Closed a month ago, 1 comment)
How to convert a model to mixed precision? (Closed 2 months ago, 21 comments)
FP16 model can not get acceleration on GPU with ONNXRuntime-GPU (Closed 2 months ago, 2 comments)
Add `convert_float_to_bfloat16` function to avoid overflow (Updated 6 months ago)
Failing tests (Closed 2 years ago, 3 comments)
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'Resize__139_input_cast_1' of node: name: Resize__139 OpType: Resize is not output of any previous nodes. (Updated 9 months ago, 2 comments)
Issues when converting model to float16 (Closed 2 years ago, 3 comments)
Inference issue after convert_float_to_float16 (Closed 2 years ago, 2 comments)
fp32 to fp16 (Closed 2 years ago, 2 comments)
Fp32 to Fp16 conversion failure for CSNet (Closed 2 years ago, 2 comments)
Publish source distribution to pypi (Closed 2 years ago, 2 comments)
1.12.2 version released? (Closed 2 years ago, 1 comment)
Fp32-->fp16: original fp32 model works well with input data, but converted fp16 model failed with the same input data (Closed 2 years ago)
Source tarball for v1.9.0 republished? (Closed 2 years ago)
Add bounds warning to FP16 conversion script (Closed 2 years ago, 1 comment)
Error: onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'TopK_111_input_cast_0' of node: (Closed 2 years ago, 1 comment)
publish onnxconverter package in anaconda (Closed 2 years ago, 1 comment)
support sizes for Resize op (Closed 2 years ago, 4 comments)
`Unsupported shape calculation for operator mlProgram` while using `onnxmltools.convert_coreml` (Closed 2 years ago, 1 comment)
F16 file does not convert correctly (Closed 2 years ago, 3 comments)
Problem of converting model from 32-bit to 16-bit (Closed 2 years ago, 1 comment)
StrictVersion is deprecated (Closed 2 years ago, 1 comment)
Security Development Lifecycle review for 2022-06 (Closed 2 years ago, 2 comments)
"No space left on device" issue on auto_convert_mixed_precision_model_path() (Closed 2 years ago, 1 comment)
add NOTICE file to onnxconverter-common (Closed 2 years ago, 1 comment)
verify onnx 1.12.0 rc (Closed 2 years ago, 3 comments)
Fp16 model runs slower than fp32 model (Closed 2 years ago, 4 comments)
auto_convert_mixed_precision() doesn't support >2GB model (Closed 2 years ago, 2 comments)
regarding the keep_io_types in float16 converter (Closed 3 years ago, 1 comment)
convert to .onnx fail with tensorflow2.4 (Closed 3 years ago, 1 comment)
tensorflow max_pool_with_argmax op does not return indices (Closed 3 years ago, 1 comment)
Bad opitmization for MergePadConvOptimizer (Closed 3 years ago, 2 comments)
apply_relu6 was deprecated and remove. (Closed 4 years ago, 3 comments)
Test test_to_onnx_type fails in version 1.6.0 (Closed 4 years ago, 1 comment)
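Several issues above reference the auto mixed-precision helpers (auto_convert_mixed_precision and auto_convert_mixed_precision_model_path). A minimal sketch of how the in-memory variant is typically invoked; the input name, shape, and tolerances below are hypothetical and must be replaced with values matching your own model:

```python
# Minimal sketch of onnxconverter_common.auto_mixed_precision.auto_convert_mixed_precision.
# The input name "input", its shape, and the rtol/atol values are placeholder assumptions.
import numpy as np
import onnx
from onnxconverter_common import auto_mixed_precision

model = onnx.load("model.onnx")
feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
# Nodes whose FP16 results fall outside rtol/atol of the FP32 baseline are kept in FP32.
model_amp = auto_mixed_precision.auto_convert_mixed_precision(
    model, feed, rtol=0.01, atol=0.001, keep_io_types=True
)
onnx.save(model_amp, "model_amp.onnx")
```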