This project demonstrates how to capture facial expressions from a webcam and apply them to a 3D avatar model in real time using MediaPipe, TensorFlow.js, and Three.js.
- A modern web browser that supports WebGL.
1. Clone the repository:

   ```shell
   git clone https://github.com/vidya-hub/three_avatar.git
   ```

2. Change into the project directory:

   ```shell
   cd three_avatar
   ```

3. Start the application using Live Server.
4. Open the application in your browser at the URL Live Server provides.
5. Allow access to your webcam when prompted.
6. Watch the 3D avatar model mimic your facial expressions in real time.
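Making the avatar track expressions smoothly in real time usually means more than copying the detector's output each frame: raw per-frame expression scores tend to jitter. As an illustrative sketch (not code from this repository), a small exponential-smoothing helper can stabilize the scores before they drive the avatar; the function name and `alpha` default here are assumptions:

```javascript
// Illustrative exponential smoothing of per-frame expression scores.
// `alpha` controls responsiveness: 1.0 = no smoothing, smaller = smoother.
// This helper is a sketch for illustration, not code from the repo.
function smoothScores(previous, current, alpha = 0.4) {
  return current.map((score, i) => {
    const prev = previous[i] ?? score; // first frame: no history yet
    return prev + alpha * (score - prev);
  });
}
```

Each frame, the smoothed array (rather than the raw detector output) would be applied to the avatar, trading a little latency for visibly steadier motion.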
Features:

- Real-time facial expression capture using a webcam.
- Application of the captured expressions to a 3D avatar model.
- Uses MediaPipe for face landmark detection.
- Uses Three.js for 3D rendering.
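To connect the two features above, MediaPipe's face blendshape scores (category names such as `jawOpen`) can be mapped onto the Three.js mesh's morph targets. The sketch below shows the pure mapping step; it assumes the avatar's `morphTargetDictionary` uses the same ARKit-style names as MediaPipe's categories, which depends on the specific model and may not match this repository exactly:

```javascript
// Map MediaPipe face blendshape results onto a Three.js
// morphTargetInfluences array. Assumes the avatar's morph target names
// match MediaPipe's blendshape category names (model-dependent).
function blendshapesToInfluences(blendshapes, morphTargetDictionary) {
  const influences = new Array(Object.keys(morphTargetDictionary).length).fill(0);
  for (const { categoryName, score } of blendshapes) {
    const index = morphTargetDictionary[categoryName];
    if (index !== undefined) influences[index] = score; // unmatched names stay at 0
  }
  return influences;
}

// In the browser, this would run once per video frame, roughly:
//   const result = faceLandmarker.detectForVideo(videoEl, performance.now());
//   const shapes = result.faceBlendshapes[0].categories;
//   avatarMesh.morphTargetInfluences =
//     blendshapesToInfluences(shapes, avatarMesh.morphTargetDictionary);
```

Keeping the mapping as a pure function makes it easy to unit-test without a webcam or a WebGL context.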