itakurah / Sitting-Posture-Detection-YOLOv5

Real-time lateral sitting posture detection using a custom trained YOLOv5 model to predict good and bad postures.

Lateral Sitting Posture Detection using YOLOv5

This GitHub repository contains a posture detection program that utilizes YOLOv5, an advanced object detection algorithm, to detect and predict lateral sitting postures. The program is designed to analyze the user's sitting posture in real-time and provide feedback on whether the posture is good or bad based on predefined criteria. The goal of this project is to promote healthy sitting habits and prevent potential health issues associated with poor posture.

Key Features:

  • YOLOv5: The program leverages YOLOv5, an advanced object detection algorithm, to accurately detect the user's sitting posture from a webcam feed.
  • Real-time Posture Detection: The program provides real-time feedback on the user's sitting posture, making it suitable for use in applications such as office ergonomics, fitness, and health monitoring.
  • Good vs. Bad Posture Classification: The program uses the custom-trained model to classify the detected posture as good or bad, helping users improve their posture and prevent potential health issues associated with poor sitting habits.
  • Open-source: The program is released under an open-source license, allowing users to access the source code, modify it, and contribute to the project.
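The feedback step can be illustrated with a minimal sketch (the function name and the 0.5 confidence threshold are illustrative assumptions, not taken from the repository): keep only sufficiently confident detections and report the class of the highest-confidence one.

```python
def posture_feedback(detections, threshold=0.5):
    """Map YOLOv5-style detections to a feedback message.

    `detections` is a list of (class_name, confidence) tuples using the
    model's two class names, 'sitting_good' and 'sitting_bad'.
    """
    # Ignore low-confidence detections
    confident = [d for d in detections if d[1] >= threshold]
    if not confident:
        return "no posture detected"
    # Act on the highest-confidence detection
    label, _ = max(confident, key=lambda d: d[1])
    return "good posture" if label == "sitting_good" else "bad posture"
```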

Built With

Python

Getting Started

Prerequisites

  • Python 3.9.x

Installation

If you have an NVIDIA graphics processor, you can activate GPU acceleration by installing the GPU requirements. Note that without GPU acceleration, the inference will run on the CPU, which can be very slow.

Windows

  1. `git clone https://github.com/itakurah/SittingPostureDetection.git`
  2. `cd SittingPostureDetection`
  3. `python -m venv venv`
  4. `.\venv\Scripts\activate.bat`
  5. Default: `pip install -r requirements_windows.txt`, or with NVIDIA GPU support: `pip install -r requirements_windows_gpu.txt`

Linux

  1. `git clone https://github.com/itakurah/SittingPostureDetection.git`
  2. `cd SittingPostureDetection`
  3. `python3 -m venv venv`
  4. `source venv/bin/activate`
  5. Default: `pip3 install -r requirements_linux.txt`, or with NVIDIA GPU support: `pip3 install -r requirements_linux_gpu.txt`

Run the program

Run `python application.py <optional: model_file.pt>` (Windows) or `python3 application.py <optional: model_file.pt>` (Linux).

The default model is loaded if no model file is specified.
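The optional-argument behavior can be sketched as follows (`DEFAULT_MODEL` and the function name are illustrative placeholders; the repository defines its own default):

```python
import sys

DEFAULT_MODEL = "model.pt"  # illustrative placeholder for the bundled default

def select_model_path(argv):
    """Return the model file passed on the command line, or the default."""
    return argv[1] if len(argv) > 1 else DEFAULT_MODEL

if __name__ == "__main__":
    print(select_model_path(sys.argv))
```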

Model

The program uses a custom-trained YOLOv5s model, trained on about 160 images per class for 146 epochs. The model has two classes, sitting_good and sitting_bad, to provide feedback on the current sitting posture.

Architecture

The model uses the standard YOLOv5 architecture:

Fig. 1: The architecture of the YOLOv5 model, which consists of three parts: (i) Backbone: CSPDarknet, (ii) Neck: PANet, and (iii) Head: YOLO Layer. The data are initially input to CSPDarknet for feature extraction and subsequently fed to PANet for feature fusion. Lastly, the YOLO Layer outputs the object detection results (i.e., class, score, location, size)

Model Results

The validation set contains 80 images (40 sitting_good, 40 sitting_bad). The results are as follows:

| Class        | Images | Instances | Precision | Recall | mAP50 | mAP50-95 |
|--------------|--------|-----------|-----------|--------|-------|----------|
| all          | 80     | 80        | 0.87      | 0.939  | 0.931 | 0.734    |
| sitting_good | 40     | 40        | 0.884     | 0.954  | 0.908 | 0.744    |
| sitting_bad  | 80     | 40        | 0.855     | 0.925  | 0.953 | 0.724    |
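For reference, the F1 score implied by the table's precision and recall (not listed in the table itself) is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the 'all' row of the validation table above
print(round(f1_score(0.87, 0.939), 3))  # ≈ 0.903
```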

Detailed graphs (see the images in the repository):

  • F1-Confidence Curve
  • Precision-Confidence Curve
  • Recall-Confidence Curve
  • Precision-Recall Curve
  • Confusion Matrix

About

This project was developed by Niklas Hoefflin, Tim Spulak, Pascal Gerber & Jan Bösch and supervised by André Jeworutzki and Jan Schwarzer as part of the Train Like A Machine module.

License

This project is licensed under the MIT License. See the LICENSE file for details.
