Experimenting with Pose classification
- Clone the repository:
https://github.com/frh02/yolov8_pose_classification.git
Recommended (conda environment):

```shell
conda create -n pose python=3.9 -y
conda activate pose
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda==11.8 -c pytorch -c nvidia -y
pip3 install -r requirements.txt
```

OR install the dependencies directly with pip:

```shell
pip3 install -r requirements.txt
```
Dataset Structure:

```
Dataset
├── class1
│   ├── 1.jpg
│   ├── 2.jpg
│   └── ...
├── class2
│   ├── 1.jpg
│   ├── 2.jpg
│   └── ...
└── ...
```
Convert the pose images into pose landmarks and save them to a CSV file, which is then used for training.
Args:
- `-p`, `--pose`: choose a YOLOv8 pose model.
  Choices: `yolov8n-pose`, `yolov8s-pose`, `yolov8m-pose`, `yolov8l-pose`, `yolov8x-pose`, `yolov8x-pose-p6`
- `-i`, `--data`: path to the data directory
- `-o`, `--save`: path to save the CSV file, e.g. `dir/data.csv`
Example:

```shell
python3 src/generate_csv.py --pose yolov8n-pose --data dataset/train_data --save data.csv
```
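The CSV layout this step produces can be sketched as follows. This is an illustrative guess, not the repo's exact schema: it assumes one row per image, with x/y coordinates for the 17 COCO keypoints that YOLOv8 pose models predict, plus a label column taken from the class directory name. The column names (`x0`, `y0`, …) are hypothetical.

```python
import csv
import io

NUM_KEYPOINTS = 17  # COCO keypoint count used by YOLOv8 pose models

def make_header():
    """Build hypothetical column names: x0, y0, ..., x16, y16, label."""
    cols = []
    for i in range(NUM_KEYPOINTS):
        cols += [f"x{i}", f"y{i}"]
    return cols + ["label"]

def keypoints_to_row(keypoints, label):
    """Flatten a list of (x, y) keypoints into one CSV row."""
    row = []
    for x, y in keypoints:
        row += [x, y]
    return row + [label]

# Write one dummy row for an image belonging to "class1".
dummy_kpts = [(0.1 * i, 0.2 * i) for i in range(NUM_KEYPOINTS)]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(make_header())
writer.writerow(keypoints_to_row(dummy_kpts, "class1"))
```

Each row therefore has 17 × 2 = 34 coordinate columns plus the class label, which is the flat feature vector the training step consumes.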
Train a Keras model to classify the human poses.
Args:
- `-i`, `--data`: path to the CSV file
Example:

```shell
python3 src/train.py --data data.csv
```
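A minimal sketch of the kind of classifier this step presumably trains: a small dense network over the 34 flattened keypoint coordinates (17 keypoints × 2). The layer sizes, the two-class output, and the use of NumPy instead of Keras here are all assumptions made for illustration; the repo's actual model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, w1, b1, w2, b2):
    """One forward pass: dense -> ReLU -> dense -> softmax."""
    h = np.maximum(0.0, x @ w1 + b1)
    return softmax(h @ w2 + b2)

# Hypothetical sizes: 34 landmark features, 64 hidden units, 2 pose classes.
n_features, n_hidden, n_classes = 34, 64, 2
w1 = rng.normal(0, 0.1, (n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.1, (n_hidden, n_classes))
b2 = np.zeros(n_classes)

x = rng.normal(size=(5, n_features))   # 5 fake landmark rows from the CSV
probs = forward(x, w1, b1, w2, b2)     # per-class probabilities, shape (5, 2)
```

The output rows are probability distributions over the pose classes; the predicted class for each input is the argmax of its row.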
Run inference with your pose model on:
- Image
- Video
- Camera
- RTSP
Args:
- `-p`, `--pose`: choose a YOLOv8 pose model.
  Choices: `yolov8n-pose`, `yolov8s-pose`, `yolov8m-pose`, `yolov8l-pose`, `yolov8x-pose`, `yolov8x-pose-p6`
- `-m`, `--model`: path to the saved Keras model
- `-s`, `--source`: video path / camera ID / RTSP URL
- `-c`, `--conf`: model prediction confidence (0 < conf < 1)
- `--save`: save the output video
- `--hide`: hide the video window
Examples:

```shell
python3 src/inference.py --pose yolov8n-pose --model /runs/train4/ckpt_best.pth --source /test/video.mp4 --conf 0.66          # video
python3 src/inference.py --pose yolov8n-pose --model /runs/train4/ckpt_best.pth --source /test/sample.jpg --conf 0.5 --save  # image, save output
python3 src/inference.py --pose yolov8n-pose --model /runs/train4/ckpt_best.pth --source /test/video.mp4 --conf 0.75 --hide  # hide video window
python3 src/inference.py --pose yolov8n-pose --model /runs/train4/ckpt_best.pth --source 0 --conf 0.45                       # camera
```
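Since `--source` accepts a file path, a numeric camera ID, or an RTSP URL, the script has to decide which one it was given. A hedged sketch of how that dispatch could work (the `classify_source` helper is hypothetical; `inference.py`'s actual logic may differ):

```python
import argparse

def classify_source(source: str) -> str:
    """Return what kind of input a --source value refers to."""
    if source.isdigit():
        return "camera"          # e.g. --source 0
    if source.startswith("rtsp://"):
        return "rtsp"            # e.g. --source rtsp://host/stream
    return "file"                # image or video path

parser = argparse.ArgumentParser()
parser.add_argument("-s", "--source", required=True)
parser.add_argument("-c", "--conf", type=float, default=0.5)

# Parse the camera example from above: --source 0 --conf 0.45
args = parser.parse_args(["--source", "0", "--conf", "0.45"])
kind = classify_source(args.source)   # "camera"
```

A camera ID would typically be converted with `int()` before being handed to a capture backend such as OpenCV's `cv2.VideoCapture`.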