There are 6 repositories under the visemes topic.
This project is a digital human that can talk and listen to you. It uses OpenAI's GPT-3 to generate responses, OpenAI's Whisper to transcribe the audio, Eleven Labs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
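As a rough illustration of the lip-sync step in a pipeline like this: Rhubarb Lip Sync can emit a JSON file whose `mouthCues` array holds time-stamped viseme labels, and the animation loop looks up which viseme to display for the current frame. The field names and label values below follow Rhubarb's documented output, but treat the exact shapes here as assumptions; this is a minimal sketch, not the project's actual code.

```python
import bisect

# Sample cues in the shape Rhubarb Lip Sync produces (times in seconds;
# labels "A"-"H" are mouth shapes, "X" is the rest/closed pose).
mouth_cues = [
    {"start": 0.00, "end": 0.25, "value": "X"},
    {"start": 0.25, "end": 0.40, "value": "B"},
    {"start": 0.40, "end": 0.70, "value": "E"},
    {"start": 0.70, "end": 0.90, "value": "X"},
]

def active_viseme(cues, t):
    """Return the viseme label active at time t, or 'X' outside all cues."""
    starts = [c["start"] for c in cues]
    # Find the last cue that starts at or before t.
    i = bisect.bisect_right(starts, t) - 1
    if i >= 0 and t < cues[i]["end"]:
        return cues[i]["value"]
    return "X"

print(active_viseme(mouth_cues, 0.30))  # → B
```

Each animation frame would call `active_viseme` with the elapsed audio time and swap the avatar's mouth texture or blend shape accordingly.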
Audio detection with visemes in a fragment shader
VTuber application which only requires your voice and microphone, no need for a webcam or other tracking nonsense.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
Moho - Import Adobe Character Animator [Ch] Lip-Sync Visemes Keydata into Switch Layers