emilianavt / OpenSeeFace

Robust realtime face and facial landmark tracking on CPU with Unity integration

[Compatibility request] Unity Barracuda ML solution support for the .onnx models in project

victorcfk opened this issue · comments

Hi emilianavt! I'm really impressed by the work done in this project. I was trying to use it in a standalone mobile build with Unity's Barracuda ML solution.

When importing the .onnx models you generated, though, Barracuda does not seem to support some of the operators used to create them; the following error (and many similar ones) is thrown:

...
Asset import failed, "Assets/models/lm_modelV_opt.onnx" > OnnxImportException: Unknown type FusedConv encountered while parsing layer 361.
...

Here is the list of operators supported by Barracuda. I was wondering if you'd consider making the ONNX models compatible with Barracuda! It would make them usable by many Unity developers and bring more attention to the project!

Disclosure: I'm building an open-source, native Unity SDK for face tracking avatars, and I'd like to use the tech you've developed!

Hi! The FusedConv op was probably introduced by onnxruntime's model optimization. Here are some of the unoptimized models. Do they work? In the past, I've found that the Upsample layer used in the models is not well supported by many libraries other than onnxruntime.

models.zip

Hi emilianavt, thank you for the super quick response!

I've imported the models from the zip into Unity 2021.3.8f1 with Barracuda 3.0.0, and they load correctly :)!

I'm trying to run them with the Unity Barracuda API, and I'm getting an assertion error comparing kernelCount and channels:

AssertionException: Assertion failure. Values are not equal. Expected: 198 == 792

from BarracudaPrecompiledCompute.cs:

public override Tensor DepthwiseConv2D(Tensor X, Tensor K, Tensor B, int[] stride, int[] pad, Layer.FusedActivation fusedActivation)
{
    // ...
    Assert.AreEqual(K.kernelCount, X.channels); // this line throws the error
    // ...
}

Does this look like a possible mismatch between the model and Unity Barracuda? It could also be an input error on my end; I'm still a noob with deep learning models. If so, which file should I pay closer attention to in order to parse the inputs correctly?

It does look like it might be either a bug or missing support in Barracuda, or an incompatibility between the model's structure and Barracuda's ONNX implementation, but I'm not sure. I'm not really familiar with the internals of ONNX beyond exporting a model from PyTorch and trying to load it with different libraries.

Got it, thank you for the clarification and for sharing the models! I was thinking it might be the image being preprocessed incorrectly before being fed into the model. Could you share the details of exactly what format the image must be in before it is fed into the model?

The image needs to have a resolution of 224x224 in RGB, with a tensor shape of (1, 3, 224, 224) (NCHW order). The colors need to be normalized in a way similar to this:

        # ImageNet mean/std, folded into a single per-channel scale and offset
        self.mean = np.float32(np.array([0.485, 0.456, 0.406]))
        self.std = np.float32(np.array([0.229, 0.224, 0.225]))
        self.mean = self.mean / self.std
        self.std = self.std * 255.0
        self.mean = -self.mean
        self.std = 1.0 / self.std
        # Equivalent to (image / 255.0 - mean) / std
        image = image * self.std + self.mean
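For reference, here is a minimal self-contained preprocessing sketch in NumPy that produces the (1, 3, 224, 224) tensor described above. The function and variable names are illustrative, not taken from the project, and it assumes the input is already a 224x224 uint8 RGB array:

```python
import numpy as np

# ImageNet mean/std folded into a single per-channel scale and offset,
# mirroring the snippet above
mean = np.float32([0.485, 0.456, 0.406])
std = np.float32([0.229, 0.224, 0.225])
offset = -mean / std          # per-channel offset
scale = 1.0 / (std * 255.0)   # per-channel scale

def preprocess(image_rgb):
    """image_rgb: uint8 array of shape (224, 224, 3) in RGB order."""
    x = image_rgb.astype(np.float32) * scale + offset  # normalize colors
    x = np.transpose(x, (2, 0, 1))                     # HWC -> CHW
    return x[np.newaxis]                               # add batch dim -> (1, 3, 224, 224)

# The folded form is equivalent to (image / 255.0 - mean) / std:
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
a = preprocess(img)
b = np.transpose((img / 255.0 - mean) / std, (2, 0, 1))[np.newaxis]
assert a.shape == (1, 3, 224, 224)
assert np.allclose(a, b, atol=1e-5)
```

Feeding the result to the model then only requires converting this array into the tensor type of whichever runtime is used.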

Thank you @emilianavt! The converted ONNX model does not seem to be compatible with Barracuda even after the fix, so I will try other approaches.