1.1. First of all, create the skeleton dataset from the videos with the following two commands (you can also skip this step, since its output is already present in the 'data' folder). In each script, change the topology according to your needs, and use keypoint_data_mcf as the output for MultipleCameraFall or keypoint_data_ur for the UR dataset:

python create_dataset_2.py
python extract_video_frames.py
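The exact contents of these scripts are repo-specific, but the general shape of this step is: read each video frame by frame, run a pose estimator on every frame, and save the resulting keypoint sequence per video. A minimal sketch, assuming OpenCV for decoding; estimate_pose() and the paths below are placeholders, not the repo's actual code:

# Minimal sketch of skeleton-dataset creation; estimate_pose() stands in
# for whatever pose estimator the repo's scripts actually use.
import glob
import os

import cv2
import numpy as np

def estimate_pose(frame):
    """Placeholder: return a (V, 2) array of joint coordinates."""
    raise NotImplementedError("plug in the pose estimator here")

def video_to_keypoints(video_path):
    keypoints = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        keypoints.append(estimate_pose(frame))  # one (V, 2) array per frame
    cap.release()
    return np.stack(keypoints)  # (T, V, 2)

os.makedirs("keypoint_data_mcf", exist_ok=True)
for path in glob.glob("data/MultipleCameraFall/Videos/*.avi"):
    name = os.path.splitext(os.path.basename(path))[0]
    np.save(os.path.join("keypoint_data_mcf", name + ".npy"),
            video_to_keypoints(path))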
1.2. Change the values of the variables dataset and topology in stgcn_train.py according to your needs. Train the skeleton model (PyTorch environment) by running the following command in a terminal:
python stgcn_train.py
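For orientation, an ST-GCN layer combines a spatial graph convolution over the skeleton's adjacency matrix with a temporal convolution along the frame axis. A minimal PyTorch sketch of one such block; the joint count V and the identity adjacency below are illustrative stand-ins for whatever the topology variable selects:

# Minimal sketch of one ST-GCN block: spatial graph conv + temporal conv.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_channels, out_channels, A, t_kernel=9):
        super().__init__()
        # Normalized adjacency of the skeleton graph, shape (V, V).
        self.register_buffer("A", A)
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (N, C, T, V) -- batch, channels, frames, joints.
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate neighbors
        x = self.relu(self.spatial(x))
        return self.relu(self.temporal(x))

V = 18  # e.g. an OpenPose-style topology; adjust to your topology setting
A = torch.eye(V)  # identity stands in for the real normalized adjacency
block = STGCNBlock(in_channels=2, out_channels=64, A=A)
out = block(torch.randn(8, 2, 30, V))  # -> (8, 64, 30, 18)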
1.3. Again change the values of dataset and topology in stgcn_train.py according to your needs. Extract the skeleton features with:
python skeleton_features.py
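How the features are extracted is script-specific; a common pattern is to run the trained network and capture penultimate-layer activations with a forward hook. A sketch of that pattern under stated assumptions — the stand-in model, layer index, batch shapes, and output file name below are all illustrative:

# Sketch: extract penultimate-layer features with a forward hook.
# The Sequential model here is a stand-in; in the repo you would load
# the trained ST-GCN weights instead.
import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                 # stand-in backbone
    nn.Linear(2 * 30 * 18, 256),  # penultimate layer
    nn.ReLU(),
    nn.Linear(256, 9),            # classifier head
)
model.eval()

features = []
def hook(module, inputs, output):
    features.append(output.detach().numpy())

# Capture activations right after the penultimate layer.
handle = model[2].register_forward_hook(hook)

with torch.no_grad():
    for _ in range(3):  # stand-in for iterating a skeleton DataLoader
        model(torch.randn(8, 2, 30, 18))

handle.remove()
np.save("skeleton_features.npy", np.concatenate(features))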
1.4. To train the multistage LSTM model, first convert the videos into frames by running the following command (skip this if you already have the frames):
python mkframes.py
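If mkframes.py needs adapting, the conversion itself is a few lines of OpenCV; a minimal sketch (the input and output paths are illustrative):

# Minimal sketch: dump every frame of each video to a JPEG file.
import glob
import os

import cv2

for video_path in glob.glob("data/Coffe_room/Videos/*.avi"):
    name = os.path.splitext(os.path.basename(video_path))[0]
    out_dir = os.path.join("data/frames", name)
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.jpg"), frame)
        idx += 1
    cap.release()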
1.5. Change dataset and topology accordingly. Train the context-aware and action-aware models by running the following two commands:
python action_context_train.py --model-type context_aware --save-model data/model_weights/context_best.h5 --device 0
python action_context_train.py --model-type action_aware --save-model data/model_weights/action_best.h5 --device 1
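The two runs train the context-aware and action-aware streams separately, here pinned to GPUs 0 and 1, with the best weights saved as .h5 files (which suggests a Keras environment for this stage). A sketch of a script skeleton that consumes these exact flags; the training body itself is omitted:

# Sketch: the command-line interface implied by the two commands above.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--model-type", choices=["context_aware", "action_aware"])
parser.add_argument("--save-model", default="data/model_weights/best.h5")
parser.add_argument("--device", type=int, default=0)
args = parser.parse_args()

# Restrict TensorFlow/Keras to the requested GPU before importing it.
os.environ["CUDA_VISIBLE_DEVICES"] = str(args.device)

print(f"training {args.model_type} model, saving to {args.save_model}")
# ... build and fit the chosen model, then save to args.save_model ...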
1.6. Change dataset and topology accordingly. Extract the action-aware and context-aware features with:
python action_aware_features.py
1.7. Create the train/test split:
python make_split.py
1.8. Train the multistage LSTM model (a sketch of the architecture follows step 1.9):
python ms_lstm.py --device 0 --classes 9 --workers 4 --batch-size 64
1.9. Test the trained multistage LSTM model:
python test_mslstm.py
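For reference, a multistage LSTM of this kind typically feeds the context-aware features to a first LSTM stage and concatenates that stage's output with the action-aware features for a second stage, classifying at every time step. A hedged PyTorch sketch — the feature and hidden sizes are illustrative; only --classes 9 comes from the command above:

# Hedged sketch of a two-stage (multistage) LSTM: stage 1 consumes
# context-aware features; stage 2 consumes the stage-1 output
# concatenated with action-aware features. Sizes are illustrative.
import torch
import torch.nn as nn

class MSLSTM(nn.Module):
    def __init__(self, ctx_dim=512, act_dim=512, hidden=128, classes=9):
        super().__init__()
        self.stage1 = nn.LSTM(ctx_dim, hidden, batch_first=True)
        self.stage2 = nn.LSTM(hidden + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, ctx, act):
        # ctx: (N, T, ctx_dim), act: (N, T, act_dim)
        h1, _ = self.stage1(ctx)
        h2, _ = self.stage2(torch.cat([h1, act], dim=-1))
        return self.head(h2)  # per-frame class scores: (N, T, classes)

model = MSLSTM(classes=9)  # matches the --classes 9 flag above
scores = model(torch.randn(4, 30, 512), torch.randn(4, 30, 512))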
1.10. Change the values of dataset and topology according to your needs. To generate the metrics and the report, run:
python metrics.py
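The script presumably compares predicted labels against ground truth; the standard report looks like the following scikit-learn sketch (the saved-prediction file names are illustrative):

# Minimal sketch: classification report from saved predictions.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_true = np.load("results/y_true.npy")  # illustrative file names
y_pred = np.load("results/y_pred.npy")

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))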
1.11. To test on a single video, set the input/output variables in the test scripts, for example:

source = 'data/Coffe_room/Videos/video (40).avi'
save_out = 'results/cf_video_40_stgcnn.avi'
label_out_csv = 'results/cf_vid_40.csv'
actual_fall_frame = 258

and run the following commands:
# For the original ST-GCN model results
python stgcn_test.py
# For the paper's approach
python main_action_context.py
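The per-frame labels written to label_out_csv can then be compared against actual_fall_frame to measure detection latency. A sketch, assuming the CSV has 'frame' and 'label' columns with falls labeled 'fall' (both column names and the label value are assumptions):

# Sketch: detection delay relative to the annotated fall frame.
# Assumes cf_vid_40.csv has 'frame' and 'label' columns ('fall' = fall).
import pandas as pd

actual_fall_frame = 258
labels = pd.read_csv("results/cf_vid_40.csv")

fall_frames = labels.loc[labels["label"] == "fall", "frame"]
if fall_frames.empty:
    print("fall never detected")
else:
    first = int(fall_frames.iloc[0])
    print(f"first detection at frame {first}, "
          f"delay = {first - actual_fall_frame} frames")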