onnxjs support for torchvision FCN models
afg1 opened this issue
I've exported a pretrained ResNet101-FCN from PyTorch using the following:

```python
import torch
import torchvision.models as models
import torch.onnx

seg_rn = models.segmentation.fcn_resnet101(pretrained=True, progress=True,
                                           num_classes=2, aux_loss=None)
seg_rn.eval()

x = torch.randn(1, 3, 512, 512, requires_grad=True)
torch_out = seg_rn(x)

torch.onnx.export(seg_rn, x, "seg_rn.onnx",
                  export_params=True,
                  opset_version=12,
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['out'],
                  dynamic_axes={'input': {0: 'batch_size'}, 'out': {0: 'batch_size'}})
```
Then I tried to load it in ONNX.js with:

```javascript
window.session = new InferenceSession({ backendHint: 'webgl' }); // does that make it global?
const modelURL = "./models/seg_rn.onnx";
await window.session.loadModel(modelURL);
```
However, I get the following error when the model tries to load:

```
Uncaught (in promise) TypeError: cannot resolve operator 'Shape' with opsets: ai.onnx v11
```
This is clearly related to the `Shape` operator, so a few questions:

- What exactly is the issue?
- I saw that `Shape` support was updated recently (just after the last release); will that update fix this?
- Different opsets seem to be in play: I exported with opset 12, but the loader reports `ai.onnx v11`. How do I control the opset used by ONNX.js?
Thanks