microsoft / onnxjs

ONNX.js: run ONNX models using JavaScript

Unsupported operators 'DynamicQuantizeLinear' and 'Resize'

waittim opened this issue · comments

DynamicQuantizeLinear:

opset.ts:48 Uncaught (in promise) TypeError: cannot resolve operator 'DynamicQuantizeLinear' with opsets: ai.onnx v11, ai.onnx.ml v2, ai.onnx.training v1, ai.onnx.preview.training v1, com.microsoft v1, com.microsoft.nchwc v1, com.microsoft.mlfeaturizers v1
    at Object.e.resolveOperator (opset.ts:48)
    at t.resolve (session-handler.ts:60)
    at e.initializeOps (session.ts:235)
    at session.ts:92
    at t.event (instrument.ts:294)
    at e.initialize (session.ts:81)
    at e.<anonymous> (session.ts:63)
    at onnx.min.js:14
    at Object.next (onnx.min.js:14)
    at a (onnx.min.js:14)

Resize:

Uncaught (in promise) TypeError: cannot resolve operator 'Resize' with opsets: ai.onnx v11, ai.onnx.ml v2, ai.onnx.training v1, ai.onnx.preview.training v1, com.microsoft v1, com.microsoft.nchwc v1, com.microsoft.mlfeaturizers v1
    at Object.e.resolveOperator (opset.ts:48)
    at t.resolve (session-handler.ts:60)
    at e.initializeOps (session.ts:235)
    at session.ts:92
    at t.event (instrument.ts:294)
    at e.initialize (session.ts:81)
    at e.<anonymous> (session.ts:63)
    at onnx.min.js:14
    at Object.next (onnx.min.js:14)
    at a (onnx.min.js:14)
async function (async)
test @ onnxjs-test.html:8
(anonymous) @ onnxjs-test.html:14

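For anyone debugging this class of error: before loading a model in ONNX.js, you can diff the model's op types against a list of supported operators to see every op the resolver will reject at once, instead of failing on the first one. A minimal sketch in Python — the `SUPPORTED` set below is illustrative only, not the real ONNX.js operator registry:

```python
# Illustrative subset -- consult the ONNX.js operator support table
# for the actual list of implemented operators.
SUPPORTED = {"Conv", "Relu", "MaxPool", "Concat", "Reshape", "Softmax"}

def find_unsupported(op_types, supported=SUPPORTED):
    """Return the distinct op types not in `supported`, in first-seen order."""
    seen = []
    for op in op_types:
        if op not in supported and op not in seen:
            seen.append(op)
    return seen
```

With a real model you would collect `op_types` with the `onnx` package, e.g. from `node.op_type` for each node in `onnx.load(path).graph.node`, and fix or replace every reported op before trying the model in the browser.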
The ONNX model was quantized with onnxruntime.quantization.quantize_qat.

The original ONNX model was converted from a PyTorch model (.cfg & .pt). Both the original and the quantized ONNX models work in a Python environment.
When I use the original ONNX model, ONNX.js throws 'Uncaught (in promise) TypeError: int64 is not supported'. That is why I quantized it to int8 with onnxruntime.quantization; however, the quantized model still fails with the errors above.
You can find the model I used in this folder.
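On the int64 error specifically: besides quantization, a workaround that is sometimes suggested is rewriting the model's int64 initializers (often shape constants) as int32, which is only lossless when every value fits in 32 bits. The rewrite itself would be done with the `onnx` Python package; this stdlib-only sketch just shows the safety check you would run first (function name is mine, not part of any API):

```python
# int32 value range; a lossless int64 -> int32 cast must stay inside it.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def safe_to_downcast(values):
    """True if every int64 value fits in int32, so the cast is lossless."""
    return all(INT32_MIN <= v <= INT32_MAX for v in values)
```

If the check fails for some initializer, downcasting would silently corrupt it, so that tensor has to be handled another way.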

Did you ever resolve this? I'm facing the same problem, and even had the same idea to try quantizing.

I just moved to another approach.

I'm having the same problem at the moment. What approach did you end up using instead?