emilianavt / OpenSeeFace

Robust realtime face and facial landmark tracking on CPU with Unity integration

[FR] VMC support

vivi90 opened this issue

How about adding VMC Assistant support to OpenSeeFace?

I have read the documentation of the protocol.

Currently I am writing a Python pip package for VMC.
But I have some questions about the bone transforms:

/VMC/Ext/Bone/Pos <boneName> <p.x> <p.y> <p.z> <q.x> <q.y> <q.z> <q.w>

After running an OSC sniffer with VSeeFace in "transmitter mode" and the webcam covered, I get the T-pose values of my model for each bone.
That's clear.
Now the question: how do you apply the tracking data to this T-pose to get the correct positions and quaternions?

In short:
If I want to add VMC support to OpenSeeFace, how do I apply the tracking data from the webcam to the initial T-pose of the model bones?
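For illustration, sending one such message with the python-osc package could look roughly like this (the port 39539, a common default for performer-to-marionette traffic, and all values are placeholder assumptions):

```python
# Illustrative sketch only: sending a single /VMC/Ext/Bone/Pos message with
# the python-osc package. The port and all values are placeholder assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 39539)

# <boneName> <p.x> <p.y> <p.z> <q.x> <q.y> <q.z> <q.w>
client.send_message("/VMC/Ext/Bone/Pos",
                    ["Head", 0.0, 1.4, 0.0, 0.0, 0.0, 0.0, 1.0])
```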

Adding VMC protocol support to OpenSeeFace's face tracker itself would be difficult, because it requires the full skeleton of a 3D model, which would then have to be animated via IK from the head target, the only information the tracker itself actually provides. You would either have to provide a way for the user to specify a skeleton, or load a 3D model and then perform inverse kinematics to animate it from the head target.

If you are using the Unity components, you can just use this BonesSend.cs in combination with OpenSeeFaceSample. But since you mentioned writing a pip package, that's probably not an option.

Ah, I see.

Okay, to simplify things, let's forget that we need inverse kinematics.
So suppose, for example, that I have the skeleton of the 3D model and also additional tracking data for all the other bones (or at least for the tracker points [/VMC/Ext/Tra/Pos ...]).

In this case: how do I apply it to the T-pose the easy way?

If you have /VMC/Ext/Bone/Pos information, you just set the rotations and positions you receive to every bone's rotation and position value. For sending, you take each bone's rotation and position value and send it.
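As a rough sketch of the receiving side (assuming the python-osc package; the skeleton dict is just a stand-in for whatever bone structure your application uses, not anything from OpenSeeFace):

```python
# Hedged sketch: receive /VMC/Ext/Bone/Pos messages and store the values per
# bone. The port 39539 is a common default and may differ in your setup.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

skeleton = {}  # boneName -> ((p.x, p.y, p.z), (q.x, q.y, q.z, q.w))

def on_bone(address, name, px, py, pz, qx, qy, qz, qw):
    # "Just set the rotations and positions you receive" for each bone:
    skeleton[name] = ((px, py, pz), (qx, qy, qz, qw))

dispatcher = Dispatcher()
dispatcher.map("/VMC/Ext/Bone/Pos", on_bone)

BlockingOSCUDPServer(("127.0.0.1", 39539), dispatcher).serve_forever()
```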

To apply the face tracking's head position and pose (or a /VMC/Ext/Tra/Pos ...) to the model however, you need IK. It can't really be simplified out. The IK tells you "given these known position/rotation values of a number of end points of certain bone chains (e.g. wrist->lower arm->upper arm->shoulder or head->neck->chest->spine->hips), what should each bone's rotations be".

If you have /VMC/Ext/Bone/Pos information, you just set the rotations and positions you receive to every bone's rotation and position value [..]

Okay.
My concern is that the tracking results are a bit raw.
Also, the detected body size may differ.
It seems to me that the bone values are based on the model.
So can I really just apply the tracking results to each bone?

To apply the face tracking's head position and pose (or a /VMC/Ext/Tra/Pos ...) [..] IK tells you "given these known position/rotation values of a number of end points of certain bone chains (e.g. wrist->lower arm->upper arm->shoulder or head->neck->chest->spine->hips), what should each bone's rotations be".

Interesting 🙂
Hopefully I don't need IK for my use case:
Connection between ROMP, VOpenSeeFace and VSeeFace
I want to connect ROMP (which still has no VMC support) and OpenSeeFace to VSeeFace.

It seems to me that the bone values are based on the model.
So can I really just apply the tracking results to each bone?

It depends on the model. For VRM models, all bone rotations are normalized to 0 while in a T pose. Bone positions will differ of course, but usually only applying rotations works fine. If you are working with unnormalized models, things will become much trickier and you'll have to figure out a way of rotating things.
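As a minimal sketch of what "only applying rotations" means for a normalized model (the Bone class and all values here are hypothetical stand-ins, not OpenSeeFace code):

```python
# Hedged sketch: for a normalized VRM model, every bone's T-pose local
# rotation is the identity quaternion, so received rotations can be assigned
# directly. The Bone class and all values here are hypothetical.
class Bone:
    def __init__(self, rest_position):
        self.local_position = rest_position          # keep the T-pose position
        self.local_rotation = (0.0, 0.0, 0.0, 1.0)   # identity in the rest pose

model_bones = {"Head": Bone((0.0, 1.4, 0.0))}
received = {"Head": ((0.1, 1.5, 0.0), (0.0, 0.1, 0.0, 0.995))}

for name, (position, rotation) in received.items():
    # Apply only the rotation; positions are left at their rest values since
    # the tracked person's proportions usually differ from the model's.
    model_bones[name].local_rotation = rotation
```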

If you are sending the head pose as a tracker to VMC itself, it should handle the IK for you and send you bone rotations.

It seems to me that the bone values are based on the model.
So can I really just apply the tracking results to each bone?

[..] For VRM models, all bone rotations are normalized to 0 while in a T pose [..] applying rotations works fine [..]

Okay, I will test it 🙂

If you are sending the head pose as a tracker to VMC itself, it should handle the IK for you and send you bone rotations.

Do you mean the official VMC software?
It's a little bit confusing that both the protocol and the software are called VMC ^^

Do you mean the official VMC software?

The protocol is called "VMC protocol", so I assumed the box labelled just "VMC" was referring to the software.

If you are sending the head pose as a tracker to VMC itself, it should handle the IK for you and send you bone rotations.

Just to clarify things:
According to https://protocol.vmc.info/english#glossary, it seems we don't need IK for VMC protocol support in OpenSeeFace at all, if we only implement it as a 'VMC Assistant'.
It could then still be used together with 'VMC Performer' (IK-supporting) applications like VSeeFace or the VMC software.

So I will create a PR once my pip package is usable.

[..] Bone positions will differ of course, but usually only applying rotations works fine [..]

Which minimum and maximum ranges are typical? 🙂
Because [-180.0, 180.0] seems too large and [-1.0, 1.0] seems too small.

According to https://protocol.vmc.info/english#glossary, it seems we don't need IK for VMC protocol support in OpenSeeFace at all, if we only implement it as a 'VMC Assistant'.

You can send the head target as a /VMC/Ext/Tra; in that case you don't need IK, but not all applications will be able to handle the received data and perform IK on VMC protocol tracker data. For example, I believe VSeeFace ignores received /VMC/Ext/Tra data. If you want to apply it to the bones and send /VMC/Ext/Bone, you do need IK.
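A minimal sketch of that assistant-style sending (the port 39540, sometimes used for assistant-to-performer traffic, and all values are assumptions):

```python
# Hedged sketch: send the face tracker's head pose as a /VMC/Ext/Tra/Pos
# message and leave the IK to the receiving application (if it supports
# tracker input). The port and the "Head" tracker name are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 39540)

def send_head_tracker(position, rotation):
    px, py, pz = position        # head position from the face tracker
    qx, qy, qz, qw = rotation    # head rotation as a quaternion
    client.send_message("/VMC/Ext/Tra/Pos",
                        ["Head", px, py, pz, qx, qy, qz, qw])

send_head_tracker((0.0, 1.5, 0.0), (0.0, 0.0, 0.0, 1.0))
```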

Which minimum and maximum ranges are typical?

They are quaternions, so there isn't really a minimum or maximum range, especially if un-normalized.
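To illustrate the normalized case: a unit quaternion satisfies x² + y² + z² + w² = 1, so each of its components lies in [-1.0, 1.0]; un-normalized quaternions have no such bound. A minimal normalization helper:

```python
# Hedged illustration: a unit quaternion has components in [-1.0, 1.0] because
# x^2 + y^2 + z^2 + w^2 = 1; un-normalized quaternions have no such bound.
import math

def normalize(qx, qy, qz, qw):
    n = math.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)
    return (qx / n, qy / n, qz / n, qw / n)

print(normalize(0.0, 0.0, 0.7, 0.7))  # -> approximately (0.0, 0.0, 0.707, 0.707)
```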