
AM-Net

This is the implementation code for the paper "An Attention-guided Multistream Feature Fusion Network for Early Localization of Risky Traffic Agents in Driving Videos," IEEE Transactions on Intelligent Vehicles, 2023.

The objective of this project is to determine a riskiness score for every traffic agent within a driving scene, enabling early localization of potentially risky traffic agents in driving videos.

Dataset Preparation

The code currently supports the ROL dataset:

  • Please refer to the ROL Official repo for download and deployment instructions.

Installation Guide

1. Setup Python Environment

The code is implemented and tested with Python 3.7.9 and PyTorch 1.2.0 with CUDA 10.2. We highly recommend using Anaconda to create a virtual environment for running this code.
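
Before running the repo's own check phase, a quick interpreter sanity check can be sketched as below; this is a hypothetical helper, not part of the repo:

```python
# Sanity check (illustrative only) that the interpreter roughly matches
# the configuration this code was tested with: Python 3.7.9.
import sys

major, minor, micro = sys.version_info[:3]
print(f"Running Python {major}.{minor}.{micro}")
if (major, minor) != (3, 7):
    print("Warning: AM-Net was implemented and tested with Python 3.7.9")
```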

After setting up the Python environment, use the following command to verify that everything is correctly configured:

python main.py --phase=check

2. Train and Test

To train the AM-Net model on the ROL dataset, please ensure that you have downloaded the extracted CNN features of the ROL dataset and placed them in the appropriate directory. Then, run the following command:

python main.py --phase=train

Use the following command to test the trained model:

python main.py --phase=test

To save the riskiness scores of the traffic agents, use the following command:

python demo.py
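
The exact output format of demo.py is repo-specific; as a hypothetical illustration only, per-agent riskiness scores saved as a NumPy array could be inspected like this (the file name, array layout, and threshold are all assumptions, not the repo's actual format):

```python
import numpy as np

# Assumed layout: one riskiness score per agent per frame,
# shape (num_frames, num_agents). Illustrative values only.
scores = np.zeros((100, 5), dtype=np.float32)
scores[40:, 2] = 0.95  # agent 2 becomes risky from frame 40 onward
np.savez("riskiness_scores.npz", scores=scores)

loaded = np.load("riskiness_scores.npz")["scores"]
threshold = 0.9  # illustrative risk threshold
risky_frames = np.where((loaded > threshold).any(axis=1))[0]
if risky_frames.size:
    print(f"earliest risky frame: {risky_frames[0]}")  # frame 40 here
```

Finding the first frame whose score crosses the threshold is exactly the "early localization" the paper targets: the smaller that frame index, the earlier the risky agent is flagged.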

3. Pre-trained Models

  • Pre-trained AM-Net models: pre-trained weights of AM-Net are available for download and are intended for testing and demo purposes. To use them for testing, download the weights and place them in the appropriate directory.

4. Reproducing DoTA Results

  • You can find our processed DoTA datasets Here.

To reproduce the results reported in our paper on the DoTA dataset, please fine-tune the entire network using the training data at the link above. Then, use the model at epoch 16 to evaluate performance. Additionally, consider experimenting with other epochs, as this may lead to even better results.
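
Comparing checkpoints across epochs can be automated with a small helper like the sketch below; the metric values are placeholders for illustration, not numbers from the paper:

```python
# Placeholder per-epoch validation metrics (e.g., AUC); replace these with
# the values produced by your own evaluation runs on DoTA.
epoch_metric = {14: 0.842, 15: 0.851, 16: 0.859, 17: 0.855, 18: 0.848}

best_epoch = max(epoch_metric, key=epoch_metric.get)
print(f"best checkpoint by this metric: epoch {best_epoch}")  # epoch 16 here
```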

Please note that the statement in our paper about achieving the results on the DoTA dataset, 'Specifically, the final fully connected layers were fine-tuned,' is incorrect. Following the steps above will allow you to replicate our reported results.

Citation

Please cite our paper if you find the code useful.

@ARTICLE{karim_am_net2023,
  author={Karim, Muhammad Monjurul and Yin, Zhaozheng and Qin, Ruwen},
  journal={IEEE Transactions on Intelligent Vehicles}, 
  title={An Attention-guided Multistream Feature Fusion Network for Early Localization of Risky Traffic Agents in Driving Videos}, 
  year={2023},
  volume={},
  number={},
  pages={1-12},
  doi={10.1109/TIV.2023.3275543}}


License: MIT License

