liuzurang / Pedestrian-Intention-Classification

Provides guidance to autonomous vehicles through pedestrian crossing-intention detection


Pedestrian Crossing Classifier

Demo

See references [4] and [5] for the demo videos.

Dataset

We used JAAD [2] in our project. JAAD tags each annotated pedestrian with his/her behaviors over several time slices. We obtain "Non-Crossing" sequences as follows: a pedestrian tagged as non-crossing in JAAD [2] exhibits other behaviors such as wandering around the curb, looking at the traffic, or waiting for the bus. Each of these behaviors also has a frame range and associated bounding boxes in JAAD. Similarly, we split each behavior's frame range in order into segments of 15 frames and tag every resulting segment as "Non-Crossing". In total we obtain 1860 "Non-Crossing" sequences.
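The splitting described above can be sketched as follows. This is a minimal illustration; the function name and the policy of dropping a trailing partial segment are our assumptions, not taken from the repository:

```python
def split_into_segments(start, end, step=15):
    """Split an inclusive frame range [start, end] into consecutive
    non-overlapping segments of `step` frames.

    A trailing remainder shorter than `step` frames is dropped
    (assumed policy). Returns a list of (first, last) frame pairs.
    """
    segments = []
    frame = start
    while frame + step - 1 <= end:
        segments.append((frame, frame + step - 1))
        frame += step
    return segments
```

For example, a 45-frame behavior range yields three 15-frame segments, each of which would then be tagged "Non-Crossing".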

Feature Extraction

For a single frame we extract two kinds of features: human pose and environment signals. We use the state-of-the-art algorithm AlphaPose [3] to extract human pose; it provides (x, y) coordinates for 16 joints together with prediction confidences. As illustrated in Fig.1, the 16 joints include the nose, eyes, knees, ankles, etc. For our task we focus on the joints most relevant to posture and movement in crossing and non-crossing behaviors. We find that the relative angles between parts of the limbs and the ground are the most critical cues for C/NC classification. We therefore extract the angles between the ground and the forearms, the upper arms, the thighs, and the calves. In addition, the angle between the thigh and calf of each leg is particularly informative, as it encodes the pedestrian's moving status. The right part of Fig.1 highlights the limb segments used to compute these 10 angles. Each frame thus yields a 10-dimensional feature, and a 15-frame sequence is associated with a concatenated feature of 15×10 = 150 dimensions.
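The two kinds of angles described above can be computed from the (x, y) joint coordinates like this. The function names are ours, and treating the ground as the horizontal image axis is an assumption for illustration:

```python
import math

def limb_ground_angle(joint_a, joint_b):
    """Angle in degrees (range [0, 90]) between the limb segment
    joint_a -> joint_b and the horizontal ground line, given
    (x, y) image coordinates of the two joints."""
    dx = joint_b[0] - joint_a[0]
    dy = joint_b[1] - joint_a[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

def joint_angle(a, b, c):
    """Interior angle in degrees at joint b formed by the segments
    b -> a and b -> c, e.g. the knee angle between thigh and calf."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to guard against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```

Computing the 8 limb-to-ground angles plus the 2 knee angles per frame gives the 10-dimensional frame feature; concatenating 15 frames gives the 150-dimensional sequence feature.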


Fig.1. Human Pose Extraction

Our second feature is the environment signal. We use PSPNet to extract road/sidewalk segmentation, which localizes the pedestrian relative to the road and improves accuracy (see Fig.2).

Fig.2. Road Segmentation

Random Forest

We use the random forest module from the scikit-learn library, with hyperparameters set to 100 decision trees and a maximum tree depth of 6. Our training sequences are extracted from the first 280 videos of JAAD [2] and our testing sequences from the last 66. Although the raw "Crossing vs. Non-Crossing" sample set is imbalanced (5125 vs. 1860), we balance the training and testing sets. Finally, our training set contains 1699 samples and our testing set 444. Please refer to the Experiment section for the performance of our random forest model.
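The classifier setup can be sketched as below. Only the estimator settings (100 trees, depth 6) and the feature/label dimensions come from the text; the data here is random stand-in data, since the real features are the 150-dimensional JAAD sequences:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in data with the dimensions from the text: 1699 training sequences,
# each a 150-d feature (15 frames x 10 angles); 1 = Crossing, 0 = Non-Crossing.
X_train = rng.normal(size=(1699, 150))
y_train = rng.integers(0, 2, size=1699)

clf = RandomForestClassifier(n_estimators=100, max_depth=6, random_state=0)
clf.fit(X_train, y_train)

# Per-sequence class probabilities for the first five sequences.
probs = clf.predict_proba(X_train[:5])
```

`predict_proba` gives a crossing probability per sequence, which is convenient for thresholding or for overlaying confidence on the demo video.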

Experiment

In this section, we present the experiments with our random forest model. Fig.3 illustrates the classification results on video. Fig.4 gives the learning curve of our random forest model; note that the validation accuracy on sequential data reaches 88%, matching the best accuracy reported in [1]. Fig.5 and Fig.6 below show our random forest model making correct predictions for C and NC sequences respectively. Please see [4] and [5] for our demo.


Fig.3. Prediction Result on Video


Fig.4. Learning Curve
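A learning curve like the one in Fig.4 can be produced with scikit-learn's `learning_curve` utility. The sketch below uses random stand-in data; only the estimator settings are taken from the text, and the sizes/folds are illustrative choices:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 150))       # stand-in 150-d sequence features
y = rng.integers(0, 2, size=200)      # stand-in C/NC labels

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=100, max_depth=6, random_state=0),
    X, y,
    cv=3,                             # 3-fold cross-validation
    train_sizes=np.linspace(0.2, 1.0, 4),
)
# Each row of train_scores/val_scores holds the per-fold accuracies
# for one training-set size; averaging over axis 1 gives the curve.
```

Plotting the mean of `train_scores` and `val_scores` against `sizes` reproduces the shape of a learning-curve figure.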


Fig.5. Ground Truth: Crossing; Prediction: Crossing


Fig.6. Ground Truth: Non-Crossing; Prediction: Non-Crossing

Reference

[1] Z. Fang and A. M. López, "Is the Pedestrian going to Cross? Answering by 2D Pose Estimation," in IEEE Intelligent Vehicles Symposium (IV), 2018.
[2] I. Kotseruba, A. Rasouli, and J. K. Tsotsos, "Joint Attention in Autonomous Driving (JAAD)," arXiv preprint arXiv:1609.04741, 2016.
[3] H.-S. Fang, S. Xie, Y.-W. Tai, and C. Lu, "RMPE: Regional Multi-Person Pose Estimation," in IEEE International Conference on Computer Vision (ICCV), 2017.
[4] https://slack-files.com/TDDA4RLBW-FE59N4LMC-85708314f9
[5] https://slack-files.com/TDDA4RLBW-FE39JCA0H-ed2d235da3



Languages

Python 74.8%, MATLAB 24.9%, Forth 0.2%