This Python script uses the Mediapipe library to perform multi-landmark tracking in real time. It combines three main components: pose estimation, face detection, and hand tracking. The script captures video from the webcam, overlays the detected landmarks for the face, hands, and body pose, and displays the result in a resizable window.
- Python 3.x
- OpenCV (`pip install opencv-python`)
- Mediapipe (`pip install mediapipe`)
- Clone the repository:

  ```shell
  git clone https://github.com/Mayur-ingole/Multi-Landmark-Tracking.git
  cd Multi-Landmark-Tracking
  ```
- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Run the script:

  ```shell
  python Multi_Landmarks_Tracking.py
  ```
Press `q` to exit the application.
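The capture-and-display loop described above can be sketched as follows. This is a minimal outline under stated assumptions, not the script's exact code: the window title and the `should_exit` helper are illustrative, and it assumes `opencv-python` plus a working webcam.

```python
def should_exit(key_code: int) -> bool:
    """Return True when the pressed key is 'q' (the documented exit key)."""
    return key_code & 0xFF == ord("q")


def main() -> None:
    # OpenCV is imported inside main() so the helper above stays importable
    # on machines without opencv-python installed.
    import cv2

    cap = cv2.VideoCapture(0)  # default webcam
    cv2.namedWindow("Multi-Landmark Tracking", cv2.WINDOW_NORMAL)  # resizable
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # ... estimate pose / face / hand landmarks and draw them on `frame` ...
        cv2.imshow("Multi-Landmark Tracking", frame)
        if should_exit(cv2.waitKey(1)):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Calling `main()` starts the loop; `cv2.WINDOW_NORMAL` is what makes the display window resizable.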
- Pose Estimation: Tracks the human body pose.
- Face Detection: Detects and tracks faces in the video stream.
- Hand Tracking: Tracks hand landmarks and connections.
- Pose Estimation (`mp_pose`): Utilizes the Mediapipe Pose model to estimate human body pose, including landmark locations for various body parts.
- Face Detection (`mp_face`): Employs the Mediapipe Face Detection model to detect and track faces in the video stream. It extracts bounding box coordinates for each detected face.
- Hand Tracking (`mp_hands`): Uses the Mediapipe Hands model to track hand landmarks and connections. It provides coordinates for each hand landmark and visualizes connections between them.
- `estimate_pose(frame)`: Estimates the pose landmarks in a single frame using the Mediapipe Pose model.
- `estimate_face(frame)`: Estimates the face landmarks in a single frame using the Mediapipe Face Detection model.
- `estimate_hands(frame)`: Estimates the hand landmarks in a single frame using the Mediapipe Hands model.
- `draw_face_landmarks(frame, landmarks)`: Draws bounding boxes around detected faces on the frame.
- `draw_hand_landmarks(frame, landmarks)`: Draws hand landmarks and connections on the frame using Mediapipe drawing utilities.
- `draw_landmarks(frame, landmarks)`: Draws pose landmarks on the frame using Mediapipe drawing utilities.
- `propagate_pose(prev_pose, current_pose)`: Propagates poses from the previous frame to the current frame. If the previous pose is not available, it returns the current pose; otherwise, it averages the previous and current poses.
- `propagate_poses(frames)`: Propagates poses across frames. It estimates poses for each frame, propagates them based on the previous frame's pose, and visualizes face and hand landmarks.
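The averaging step in `propagate_pose` can be illustrated with plain `(x, y, z)` tuples. The script works with Mediapipe landmark objects, so the types below are a simplification of the same idea:

```python
def propagate_pose(prev_pose, current_pose):
    """Blend the previous and current poses to smooth frame-to-frame jitter.

    Poses are modeled here as lists of (x, y, z) tuples. When there is no
    previous pose, the current pose is returned unchanged; otherwise each
    landmark is averaged with its counterpart from the previous frame.
    """
    if prev_pose is None:
        return current_pose
    return [
        tuple((p + c) / 2 for p, c in zip(prev, curr))
        for prev, curr in zip(prev_pose, current_pose)
    ]


# A previous landmark at (0, 0, 0) and a current one at (2, 4, 6)
# blend to (1.0, 2.0, 3.0).
print(propagate_pose([(0, 0, 0)], [(2, 4, 6)]))  # → [(1.0, 2.0, 3.0)]
```

This simple average halves sudden landmark jumps at the cost of a small lag, which is a reasonable trade-off for webcam-rate tracking.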
Contributions are welcome! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.