Pandas-Team / Autonomous-Vehicle-Environment-Perception

An Intelligent Modular Real-Time Vision-Based System for Environment Perception (NeurIPS 2022 Workshop)

A significant portion of driving hazards is caused by human error and disregard for local driving regulations; consequently, an intelligent assistance system can be beneficial. Hence, we propose a vision-based modular package to ensure drivers' safety by perceiving the environment. Each module is designed for both accuracy and inference time to deliver real-time performance, so the proposed system can be deployed on a wide range of vehicles with minimal hardware requirements. The package comprises four main modules: lane detection, object detection, segmentation, and monocular depth estimation. Each module also introduces novel techniques that improve the accuracy of the other modules and of the system as a whole. Furthermore, a GUI is developed to display the perceived information to the driver.

[Figure: overall diagram of the proposed system]
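
The sketch below illustrates how such a modular pipeline can be wired together per frame. It is a minimal Python sketch under assumed names: the module classes, their run() interface, and the perceive() helper are illustrative only, not the repository's actual API (the real entry point is main.py).

import numpy as np

# Hypothetical stand-ins for the four perception modules described above.
class LaneDetector:
    def run(self, frame):
        return {"lanes": []}            # e.g., fitted lane curves

class ObjectDetector:
    def run(self, frame):
        return {"detections": []}       # e.g., YOLOv5-style bounding boxes

class Segmenter:
    def run(self, frame):
        return {"mask": None}           # e.g., per-pixel class labels

class DepthEstimator:
    def run(self, frame):
        return {"depth": None}          # e.g., SGDepth-style depth map

def perceive(frame, modules):
    """Run each perception module on one frame and merge the outputs,
    which a GUI could then render for the driver."""
    results = {}
    for module in modules:
        results.update(module.run(frame))
    return results

if __name__ == "__main__":
    modules = [LaneDetector(), ObjectDetector(), Segmenter(), DepthEstimator()]
    dummy_frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in video frame
    print(perceive(dummy_frame, modules))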

Citation

@article{kazerouni2023intelligent,
  title={An intelligent modular real-time vision-based system for environment perception},
  author={Kazerouni, Amirhossein and Heydarian, Amirhossein and Soltany, Milad and Mohammadshahi, Aida and Omidi, Abbas and Ebadollahi, Saeed},
  journal={arXiv preprint arXiv:2303.16710},
  year={2023}
}

Updates

  • October 20, 2022: Accepted at the NeurIPS 2022 Workshop on Machine Learning for Autonomous Driving! 🔥
  • February 5, 2021: Won 1st place in the National Rahneshan competition 2020-2021 for autonomous vehicles! 🎉
  • January 10, 2021: First release.

Results

Results on BDD100K dataset

[Image: qualitative results on the BDD100K dataset]

Results on our local dataset

[Image: qualitative results on our local dataset]

Inference

To run the program, first install the requirements using the command below:

$ pip install -r requirements.txt

Then, create a folder named 'weights' in the main directory and download all of the weights from this shared Google Drive folder into it.
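
After this step, the repository layout should look roughly like the sketch below (the exact weight filenames depend on the shared folder and are omitted here):

Autonomous-Vehicle-Environment-Perception/
├── main.py
├── requirements.txt
└── weights/
    └── ...   (downloaded model weights)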

Next, place your video in the main folder of this repo and run the following command:

$ python main.py --video yourvideoname.mp4 [--save] [--noshow] [--output-name myoutputvideo.mp4] [--fps]

  • --save saves the output video.
  • --noshow disables the live preview of the output.
  • --output-name sets the name of the output video.
  • --fps plots the FPS on the output frames.

"yourvideoname.mp4" is the name of your video file added to the main folder. "myoutputvideo.mp4" is the name you want for your output video.

Afterwards, the program starts running and the output video is saved in the specified directory. To view a live preview of the output while the program is running, do not use the '--noshow' argument.

There you have it.

Colab Notebook

You can also use the provided Colab notebook to automatically download all the weights and a sample video, and run the program in a matter of seconds!

Simply open the following Colab notebook:

Open In Colab

Cited Works

  1. YOLOv5 (GitHub)
  2. SGDepth (GitHub)
  3. PINet (GitHub)

Datasets

Test Videos:

Please download from here.

Sign Datasets:

  1. Traffic-Sign Detection and Classification in the Wild Link
  2. DFG Traffic Sign Data Set Link

Our Team

We, Team Pandas, won 1st place in the National Rahneshan 2020-2021 competition for autonomous vehicles. This contest has been one of the most competitive and challenging contests in the Rahneshan tournaments, with more than 15 teams from top universities across Iran competing.

Contact us

Feel free to contact us via email or connect with us on LinkedIn.

License

MIT License

