
YOLOP-E: You Only Look Once for Expressway Panoptic Driving Perception

Our paper: YOLOP-E: You Only Look Once for Expressway Panoptic Driving Perception (submitted).

Our datasets and weights: SDExpressway, yolop.pth, yolope.pth.


The Illustration of YOLOP-E

[Figure: overall network architecture of YOLOP-E]

The Illustration of ELAN-X

[Figure: structure of the ELAN-X module]


Contributions

  • This study produces the expressway multi-task dataset SDExpressway, comprising 5603 images captured under various weather conditions, including sunny, dark, rainy, and foggy scenarios. Each image is meticulously labeled with drivable areas, lane lines, and traffic objects.

  • We optimize the ELAN module into a more efficient aggregation network structure, ELAN-X, which fuses feature information from multiple depths in parallel, enlarging the model's receptive field and strengthening its feature representation. These enhancements improve the detection accuracy of the multi-task model (a minimal ELAN-style sketch follows this list).

  • This paper introduces an efficient multi-task network, YOLOP-E, tailored for expressway scenarios and built upon the YOLOP framework. YOLOP-E jointly handles three critical tasks in autonomous driving: traffic object detection, drivable area segmentation, and lane line segmentation.

  • The proposed network is evaluated extensively on both the SDExpressway dataset and the widely used BDD100K dataset, with ablation and state-of-the-art (SOTA) comparison experiments demonstrating the efficacy of each improvement. The model remains robust and generalizes well even under challenging environmental conditions.
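
The exact ELAN-X definition is given in the paper; as orientation only, here is a minimal PyTorch sketch of a generic ELAN-style aggregation block, in which the intermediate feature map at every depth is kept and fused by a single 1x1 convolution. The class names, channel arguments, and depth value are illustrative assumptions, not the actual ELAN-X configuration.

import torch
import torch.nn as nn

class ConvBNSiLU(nn.Module):
    """Conv-BN-SiLU unit, the standard building block of the YOLO family."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ELANStyleBlock(nn.Module):
    """Generic ELAN-style aggregation (illustrative stand-in for ELAN-X):
    two 1x1 entry branches, a chain of 3x3 convs whose output at every
    depth is kept, and a final 1x1 conv that fuses the concatenation."""
    def __init__(self, c_in, c_hidden, c_out, depth=4):
        super().__init__()
        self.branch1 = ConvBNSiLU(c_in, c_hidden, 1)
        self.branch2 = ConvBNSiLU(c_in, c_hidden, 1)
        self.chain = nn.ModuleList(
            ConvBNSiLU(c_hidden, c_hidden, 3) for _ in range(depth)
        )
        self.fuse = ConvBNSiLU(c_hidden * (depth + 2), c_out, 1)

    def forward(self, x):
        feats = [self.branch1(x), self.branch2(x)]
        y = feats[-1]
        for conv in self.chain:
            y = conv(y)
            feats.append(y)  # keep the feature map from every depth
        return self.fuse(torch.cat(feats, dim=1))

# quick shape check
if __name__ == "__main__":
    block = ELANStyleBlock(c_in=64, c_hidden=32, c_out=128)
    print(block(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 80, 80])

Concatenating the features from every depth is what widens the effective receptive field while keeping the branches cheap to compute in parallel.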


Results

On the SDExpressway dataset

Traffic Object Detection Result

| Network | R (%) | mAP50 (%) | mAP50:95 (%) | FPS |
|---|---|---|---|---|
| YOLOP (baseline) | 86.8 | 74.4 | 38.7 | 232 |
| HybridNets | 90.1 | 76.4 | 42.1 | 110 |
| YOLOP-E (ours) | 92.1 (+5.3) | 83.8 (+9.4) | 53.3 (+14.6) | 127 |


Drivable Area Segmentation Result

| Network | mIoU (%) | FPS |
|---|---|---|
| YOLOP (baseline) | 97.7 | 232 |
| HybridNets | 97.5 | 110 |
| YOLOP-E (ours) | 98.1 (+0.4) | 127 |

Lane Detection Result

| Network | Acc (%) | IoU (%) | FPS |
|---|---|---|---|
| YOLOP (baseline) | 90.8 | 72.8 | 232 |
| HybridNets | 92.0 | 75.7 | 110 |
| YOLOP-E (ours) | 92.1 (+1.3) | 76.2 (+3.4) | 127 |

On the BDD100K dataset

Traffic Object Detection Result

| Network | R (%) | mAP50 (%) | mAP50:95 (%) | FPS |
|---|---|---|---|---|
| YOLOP (baseline) | 89.5 | 76.3 | 43.1 | 230 |
| MultiNet | 81.3 | 60.2 | 33.1 | 51 |
| DLT-Net | 89.4 | 68.4 | 38.6 | 56 |
| HybridNets | 92.8 | 77.3 | 45.8 | 108 |
| YOLOP-E (ours) | 92.0 (+2.5) | 79.7 (+3.4) | 46.7 (+3.6) | 120 |

Drivable Area Segmentation Result

| Network | mIoU (%) | FPS |
|---|---|---|
| YOLOP (baseline) | 91.3 | 230 |
| MultiNet | 71.6 | 51 |
| DLT-Net | 71.3 | 56 |
| HybridNets | 90.5 | 108 |
| YOLOP-E (ours) | 92.1 (+0.8) | 120 |

Lane Detection Result

| Network | Acc (%) | IoU (%) | FPS |
|---|---|---|---|
| YOLOP (baseline) | 70.5 | 26.2 | 230 |
| HybridNets | 85.4 | 31.6 | 108 |
| YOLOP-E (ours) | 73.0 (+2.5) | 27.3 (+1.1) | 120 |

The evaluation of the efficiency experiments

[Table: efficiency experiment results]

The comparison of performance when adding the SimAM attention mechanism at different locations

[Table: SimAM placement comparison results]
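
For context, SimAM (Yang et al., 2021) is a parameter-free attention mechanism that reweights each activation with a closed-form energy function. The sketch below follows the formulation from the SimAM paper; the module name and the e_lambda default are assumptions, and this repository's own integration may differ.

import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: reweights every activation by a
    closed-form energy computed from its deviation from the channel mean."""
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # stabilizer term from the paper

    def forward(self, x):
        _, _, h, w = x.size()
        n = h * w - 1
        # squared deviation of each activation from its channel-wise mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # channel-wise variance estimate used in the closed-form energy
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # inverse energy: more distinctive neurons get larger attention weights
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)

Because SimAM adds no learnable parameters, inserting it at different locations changes accuracy without changing model size, which is what the comparison above examines.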


Visualization

NOTE: YOLOP (left), HybridNets (center), and YOLOP-E (right)

Traffic Object Detection Result

[Figure: traffic object detection comparison]

Drivable Area Segmentation Result

[Figure: drivable area segmentation comparison]

Lane Detection Result

[Figure: lane detection comparison]


Getting Started

Installation

a. Clone this repository
git clone git@github.com:xingchenshanyao/YOLOP-E.git && cd YOLOP-E
b. Install the environment
conda create -n YOLOPE python=3.8
conda activate YOLOPE
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 # install PyTorch built for CUDA 11.8
pip install -r requirements.txt
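
A quick sanity check that the CUDA build of PyTorch installed correctly (optional, not part of the original instructions):

import torch

print(torch.__version__)          # should end in +cu118 for the CUDA 11.8 wheel
print(torch.cuda.is_available())  # True when a compatible GPU driver is present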
c. Prepare the datasets and weights

Download the datasets and weights: SDExpressway, yolop.pth, yolope.pth.

We recommend organizing the weights directory as follows:

#root directory
├─weights
│ ├─yolop.pth
│ ├─yolope.pth
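
To peek inside a downloaded checkpoint before using it (a generic PyTorch inspection, not a utility shipped with this repo):

import torch

ckpt = torch.load('weights/yolope.pth', map_location='cpu')
# checkpoints in the YOLOP lineage are typically dicts; list the top-level keys
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))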

Set the actual dataset paths in lib/config/default.py:

_C.DATASET.DATAROOT = '/home/xingchen/Study/datasets/SDExpressway/images'                  # the path of the images folder
_C.DATASET.LABELROOT = '/home/xingchen/Study/datasets/SDExpressway/traffic object labels'  # the path of the det_annotations folder
_C.DATASET.MASKROOT = '/home/xingchen/Study/datasets/SDExpressway/drivable area labels'    # the path of the da_seg_annotations folder
_C.DATASET.LANEROOT = '/home/xingchen/Study/datasets/SDExpressway/lane line labels'        # the path of the ll_seg_annotations folder
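
Given those entries, the dataset root is presumably laid out as follows (inferred from the config paths above, not an official specification):

#SDExpressway
├─images
├─traffic object labels
├─drivable area labels
├─lane line labels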

Demo Test

python tools/demo.py --weights weights/yolop.pth --source inference/images/8.jpg --save-dir inference/output_yolope
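
To run the same demo with the YOLOP-E weights instead, swap the checkpoint (the flags are unchanged; this assumes demo.py accepts any compatible .pth file):

python tools/demo.py --weights weights/yolope.pth --source inference/images/8.jpg --save-dir inference/output_yolope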

Training

python tools/train.py
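
Training hyperparameters are also read from lib/config/default.py. The fields below follow the YOLOP config conventions and are shown only as a hedged example; the exact names and defaults in this repository may differ:

_C.TRAIN.BEGIN_EPOCH = 0           # starting epoch (useful when resuming)
_C.TRAIN.END_EPOCH = 240           # total training epochs
_C.TRAIN.BATCH_SIZE_PER_GPU = 24   # reduce if GPU memory is limited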

Evaluation

python tools/test.py --weights weights/yolope.pth

Demonstration

[Demo videos: Output1 and Output2]

Acknowledgements:

This work is built upon YOLOP, YOLOv7, and YOLOv5.

This work also received assistance from 华夏街景 (Huaxia Street View).


License: MIT License

