oramasearch / onnx-go

onnx-go gives the ability to import a pre-trained neural network within Go without being linked to a framework or library.

Support for empty tensors

pietermarsman opened this issue

Context

I'm trying to load a feature pyramid network (FPN) on top of a ResNet model into onnx-go. The FPN uses an ONNX Resize operator because it needs to upsample the feature maps. The Resize operator has an input (roi) that is optional.

I'm using torch. When a torch Resize is exported to ONNX, the roi input is not used (it is only relevant for the tf_crop_and_resize coordinate transformation mode), but the torch ONNX exporter still supplies it as a constant empty tensor: it has [0] as dims and no float_data or raw_data. Since this input isn't used at all, its value should not matter.

The bug

When loading the ONNX model with onnx-go, it fails with the error "No data found".

To generate the ONNX file I'm using this:

import torch
from torchvision.transforms import transforms

# Export a bare Resize module to ONNX; opset 11 is the first opset where
# Resize takes the optional roi input.
torch.onnx.export(
    transforms.Resize((100, 100)),
    torch.zeros((1, 3, 200, 200)),
    "model.onnx",
    opset_version=11,
    verbose=True,
)
Output:
graph(%img : Float(1, 3, 200, 200, strides=[120000, 40000, 200, 1], requires_grad=0, device=cpu),
      %12 : Long(2, strides=[1], requires_grad=0, device=cpu)):
  %2 : Long(4, strides=[1], device=cpu) = onnx::Shape(%img)
  %3 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
  %4 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
  %5 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={2}]()
  %6 : Long(2, strides=[1], device=cpu) = onnx::Slice(%2, %4, %5, %3)
  %8 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0](%6, %12)
  %9 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]()
  %10 : Float(0, strides=[1], device=cpu) = onnx::Constant[value=[ CPUFloatType{0} ]]()
  %11 : Float(*, *, *, *, strides=[30000, 10000, 100, 1], requires_grad=0, device=cpu) = onnx::Resize[coordinate_transformation_mode="pytorch_half_pixel", cubic_coeff_a=-0.75, mode="linear", nearest_mode="floor"](%img, %9, %10, %8) # /home/pieter/projects/orbisk/pytorch-image-classification/.venv/lib/python3.8/site-packages/torch/nn/functional.py:3731:0
  return (%11)

To load it I'm using:

package main

import (
	"log"
	"os"

	"github.com/owulveryck/onnx-go"
	"github.com/owulveryck/onnx-go/backend/x/gorgonnx"
)

func main() {
	// Create a backend receiver
	backend := gorgonnx.NewGraph()

	// Create a model and set the execution backend
	model := onnx.NewModel(backend)

	// read the onnx model
	b, err := os.ReadFile("model.onnx")
	if err != nil {
		log.Fatal("error reading file ", err)
	}

	// Decode it into the model
	err = model.UnmarshalBinary(b)
	if err != nil {
		log.Fatal("error loading model ", err)
	}
}

Output:

2022/11/16 16:35:11 error loading model No data found

Why this happens

The onnx::Resize operator takes %9 and %10 as inputs. These are of type Float(0) and don't have any data. Such tensors cannot be read properly by onnx-go.

The error happens here: https://github.com/owulveryck/onnx-go/blob/master/internal/onnx/ir/tensor.go#L113
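
One assumption behind the fix I propose below is that gorgonia can represent a zero-sized tensor at all. A quick standalone check of that assumption could look like this (if tensor.New rejects a zero-sized shape, the fix would need a different representation):

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	// Build a float32 tensor with shape (0), i.e. zero elements and no backing data,
	// mirroring the empty roi constant (dims = [0]) in the exported graph.
	empty := tensor.New(tensor.WithShape(0), tensor.Of(tensor.Float32))
	fmt.Println(empty.Shape(), empty.Size())
}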

Solution

I think this can be solved by adding a check on the tensor's dimensionality to generateConsOptsFromFloat64Tensor and similar functions. If the number of elements is zero, an empty gorgonia tensor should be created instead of returning an error.
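
To make that concrete, here is a rough sketch of the check I have in mind. It is written as a standalone helper rather than against the actual generateConsOptsFromFloat64Tensor signature, the dims/data parameters stand in for the TensorProto fields, and it again assumes gorgonia accepts zero-sized shapes:

package main

import (
	"errors"
	"fmt"

	"gorgonia.org/tensor"
)

// tensorFromFloatData mimics the decoding step: it builds a gorgonia tensor from the
// dims and float data of a TensorProto-like value. The new part is the zero-element
// check, which returns an empty tensor instead of the "No data found" error.
func tensorFromFloatData(dims []int64, data []float32) (tensor.Tensor, error) {
	shape := make([]int, len(dims))
	elems := 1
	for i, d := range dims {
		shape[i] = int(d)
		elems *= int(d)
	}
	if elems == 0 {
		// Empty tensor (e.g. the roi constant with dims = [0]): no data is expected.
		return tensor.New(tensor.WithShape(shape...), tensor.Of(tensor.Float32)), nil
	}
	if len(data) == 0 {
		return nil, errors.New("No data found")
	}
	return tensor.New(tensor.WithShape(shape...), tensor.WithBacking(data)), nil
}

func main() {
	// The empty roi constant from the exported graph: dims = [0], no float_data/raw_data.
	roi, err := tensorFromFloatData([]int64{0}, nil)
	fmt.Println(roi, err)
}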

I do have some time to work on this (it's for a work project) if this solution is acceptable.