uezo / ChatdollKit

ChatdollKit enables you to make your 3D model into a chatbot


Face Expression

xuullin opened this issue · comments

Hi, I have selected Setup VRC FaceExpression Proxy, but after running Unity, the character model does not make any facial expressions. What could be the cause? I'm using phane from Booth for the character model. Also, if I want to make the model's mouth shape match different audio, how should I modify the code? What approaches could achieve this?

And what approaches could implement such a model in this project, one that captures my movements and demeanor, imitates my speaking style and actions, and can then chat with me intelligently?

commented

Hi @xuullin , you have to take snapshots of the face expressions before using the VRC FaceExpression Proxy at runtime.

  1. [Inspector] Make face expressions as combinations of shapekeys.
  2. [Inspector] Capture each one with its name (e.g. "Angry", "Joy").
  3. [Script] Call ModelController#SetFace() in your script.

```csharp
var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
modelController.SetFace(faces);
```
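For context, here is a minimal sketch of how that call could sit inside a MonoBehaviour. The `ChatdollKit.Model` namespace and the key-press trigger are assumptions for illustration; wire it up however fits your scene:

```csharp
using System.Collections.Generic;
using UnityEngine;
using ChatdollKit.Model;  // assumed namespace for ModelController / FaceExpression

public class FaceExpressionDemo : MonoBehaviour
{
    // Assign the ModelController on your character in the Inspector
    [SerializeField] private ModelController modelController;

    void Update()
    {
        // Hypothetical trigger: press A to show the captured "Angry" face for 3 seconds
        if (Input.GetKeyDown(KeyCode.A))
        {
            var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
            modelController.SetFace(faces);
        }
    }
}
```

The name passed to `FaceExpression` must match the name you captured in the Inspector in step 2.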

Or, set the face expression on the response from your skill:

```csharp
response.AddFace("Angry", 3.0f);
```

See also the example for ChatGPT.
https://github.com/uezo/ChatdollKit/blob/master/Examples/ChatGPT/ChatGPTSkill.cs#L33
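As a rough sketch of the skill-side pattern, a response can carry both speech and a face expression. The exact `SkillBase` method signature varies across ChatdollKit versions, so treat the class shape below as an assumption and check the linked ChatGPTSkill example for the current API:

```csharp
using System.Threading;
using System.Threading.Tasks;
using ChatdollKit.Dialog;  // assumed namespace for SkillBase / Response

public class AngrySkill : SkillBase
{
    // Hypothetical override; match the signature in your ChatdollKit version
    public override async Task<Response> ProcessAsync(
        Request request, State state, User user, CancellationToken token)
    {
        var response = new Response(request.Id);
        response.AddVoiceTTS("I'm so angry!");  // speak via TTS
        response.AddFace("Angry", 3.0f);        // show the captured "Angry" face for 3 seconds
        return response;
    }
}
```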

commented

> if I want to make the model's mouth shape to match different audio, how should I modify the code?

Set up uLipSync or OVRLipSync correctly. You don't need to modify the code.

thank you

Thank you very much for your reply. Could you talk about the connections and differences between this project and intelligent digital human generation technology? What changes would need to be made to this project to implement intelligent digital human generation?