The focus of this research project is to develop the pedestrian detection capabilities of Zeus, the self-driving car at aUToronto. As a member of the Perception Team at aUToronto, I work to meet the milestone goals set by my team leads and the advising professors, including Dr. Angela Schoellig and Dr. Tim Barfoot from the University of Toronto.
My task, along with Brian Cheong and Davendra Maharaj, was to replace Zeus's existing pedestrian detection system, which was based on SqueezeDet, with a YOLOv3 model with newly trained weights and fine-tuned hyperparameters. YOLOv3 is the latest variant of YOLO (You Only Look Once), a popular object detection algorithm. It is fast and accurate enough for real-time object detection, which is why it was chosen for pedestrian detection in this project.
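For context, here is a minimal sketch of what running a trained YOLOv3 model for pedestrian detection can look like, using OpenCV's DNN module. The file paths, pedestrian class index, and thresholds below are placeholders, not our actual configuration, and this is not necessarily the pipeline running on Zeus:

```python
import cv2
import numpy as np

# Placeholder paths -- our actual cfg/weights are not yet public (see note below).
CFG_PATH = "yolov3.cfg"
WEIGHTS_PATH = "yolov3.weights"
PEDESTRIAN_CLASS_ID = 0   # assumes the model was trained with "pedestrian" as class 0
CONF_THRESHOLD = 0.5      # illustrative threshold, not our tuned value
NMS_THRESHOLD = 0.4

net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)
out_layers = net.getUnconnectedOutLayersNames()

def detect_pedestrians(frame):
    """Return [(x, y, w, h, confidence)] for pedestrians in a BGR frame."""
    h, w = frame.shape[:2]
    # YOLOv3 expects a square, normalized input blob (416x416 is a common size).
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences = [], []
    for output in net.forward(out_layers):
        for det in output:
            scores = det[5:]
            if np.argmax(scores) != PEDESTRIAN_CLASS_ID:
                continue
            conf = float(scores[PEDESTRIAN_CLASS_ID])
            if conf < CONF_THRESHOLD:
                continue
            # det[:4] is (center_x, center_y, width, height), normalized to [0, 1].
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
    # Non-maximum suppression removes overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    return [(*boxes[i], confidences[i]) for i in np.array(keep).flatten()]
```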
These are some of the trials with the best results. The code and specific hyperparameters will be shared here once I get approval on what can and can't be made open source. Since we are still competing with our car in the SAE AutoDrive Challenge, this may take some time. Thanks for understanding!
| Dataset | Hyperparameters | Max mAP |
|---|---|---|
| JAAD | ... | 83.70% |
| JAAD | ... | 81.30% |
| JAAD+Scale | ... | 83.58% |
| JAAD+Scale | ... | 83.23% |
| JAAD+NuScenes+Scale | ... | 78.97% |
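For reference, mAP here is mean average precision; with a single pedestrian class it reduces to the AP of that class. Below is a minimal sketch of single-class AP with all-point interpolation, as in the common Pascal VOC formulation. The exact evaluation protocol used for these trials (IoU threshold, interpolation scheme) isn't specified here, so treat this as illustrative:

```python
import numpy as np

def average_precision(confidences, is_true_positive, num_ground_truth):
    """All-point interpolated AP for one class at a fixed IoU threshold.

    confidences:      confidence score of each detection
    is_true_positive: 1 if the detection matched an unmatched ground-truth
                      box at the IoU threshold, else 0
    num_ground_truth: total number of ground-truth boxes
    """
    order = np.argsort(confidences)[::-1]                 # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(1.0 - tp)
    recall = cum_tp / num_ground_truth
    precision = cum_tp / (cum_tp + cum_fp)
    # Pad the curve, then make precision monotonically non-increasing.
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # AP is the area under the precision-recall curve.
    steps = np.where(recall[1:] != recall[:-1])[0]
    return np.sum((recall[steps + 1] - recall[steps]) * precision[steps + 1])
```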
These are some videos of a test run of these trained models on Zeus, our self-driving car at the University of Toronto Institute for Aerospace Studies (UTIAS). The videos were taken by the Blackfly cameras on Zeus.
The trained YOLOv3 model detects pedestrians well; however, there are false positives on dark objects against the white snow, and detection confidence qualitatively appears lower for objects that are far away.
Additionally, recall appears quite good, but precision suffers in cases where the model classifies the deer dummy as a pedestrian.
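To make the recall/precision distinction concrete, here is a toy calculation with made-up counts; these are not measured values from our runs:

```python
# Hypothetical counts for one test clip -- illustrative only, not measured data.
true_positives = 45   # pedestrians correctly detected
false_negatives = 5   # pedestrians missed
false_positives = 12  # e.g., the deer dummy or dark objects in snow flagged as pedestrians

recall = true_positives / (true_positives + false_negatives)     # 0.90: few misses
precision = true_positives / (true_positives + false_positives)  # ~0.79: dragged down by FPs

print(f"recall={recall:.2f}, precision={precision:.2f}")
```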