This repository is the official implementation of the paper "LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions", accepted at the 2023 IEEE International Conference on Intelligent Transportation Systems (ITSC).
- Introduction
- Benchmark and model zoo
- Installation
- Getting Started
- Contributing
- Licenses
- Acknowledgement
Supported backbones:
- ResNet
- ERFNet
- VGG
- MobileNet
Supported detectors:
- UFLD
This repository is a modified version of lanedet, so if you already installed that one, there is no need to install this one. Just clone this repository and use the same conda environment.
git clone https://github.com/zillur-av/LVLane.git
We refer to this directory as $LANEDET_ROOT.
conda create -n lanedet python=3.8 -y
conda activate lanedet
# Install PyTorch first; the cudatoolkit version should match the one on your system.
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.1 -c pytorch
# Or you can install via pip
pip install torch==1.8.0 torchvision==0.9.0
# Install python packages
python setup.py build develop
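To confirm the environment works before moving on, a quick sanity check (standard PyTorch calls, nothing repo-specific) is:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# should print 1.8.0 and True if the cudatoolkit version matches your driver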
Download TuSimple and extract it to $DATASETROOT. Then create a link to the data directory.
cd $LANEDET_ROOT
mkdir -p data
ln -s $DATASETROOT data/tusimple
For TuSimple, you should have a structure like this:
$DATASETROOT/clips # data folders
$DATASETROOT/label_data_xxxx.json # label json files
$DATASETROOT/test_label.json # test label json file
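For reference, each line of a TuSimple label json file is a standalone json object. A shortened illustrative entry (not copied from the dataset) is shown below: "lanes" holds per-lane x-coordinates sampled at the "h_samples" y-coordinates, with -2 marking rows where that lane is absent.
{"lanes": [[-2, -2, 632, 625, 617], [719, 734, 748, 761, 775]],
 "h_samples": [240, 250, 260, 270, 280],
 "raw_file": "clips/0313-1/6040/20.jpg"}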
Download LVLaneV1 and extract it to $DATASETROOT, just like the TuSimple dataset. This link contains class annotations for the TuSimple dataset, so replace the original label files with the new ones. Lane annotations and class labels for the Caltech dataset are also available in TuSimple format; download that dataset from the original site and resize the images to 1280x720 to use them with this model (a minimal resize sketch follows the directory listing below).
$DATASETROOT/clips/0531/
.
.
$DATASETROOT/clips/LVLane_train_sunny/
$DATASETROOT/label_data_xxxx.json
$DATASETROOT/test_label.json
$DATASETROOT/LVLane_test_sunny.json
$DATASETROOT/LVLane_train_sunny.json
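A minimal resize sketch for the Caltech images, assuming OpenCV (cv2) is installed and using placeholder folder names:
import glob, os
import cv2

src, dst = 'caltech-lanes', 'caltech-resized'   # hypothetical folder names
os.makedirs(dst, exist_ok=True)
for path in glob.glob(os.path.join(src, '*.png')):
    img = cv2.imread(path)
    resized = cv2.resize(img, (1280, 720))      # (width, height) expected by this model
    cv2.imwrite(os.path.join(dst, os.path.basename(path)), resized)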
If you want to create a dataset in TuSimple format, please follow the instructions in tusimple-annotation. We need to generate segmentation labels from the json annotations.
python tools/generate_seg_tusimple.py --root $DATASETROOT --filename 'LVLane_test_sunny'
# this will generate seg_label directory
Then you will find new json annotation files, containing both lane locations and class ids, in $DATASETROOT/seg_label/list/. Replace the old annotation files in $DATASETROOT with these new files.
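To sanity-check the generated files, a small sketch like the following prints the keys of the first entry; the file name is just an example, and the exact keys depend on what the generator emits:
import json

# example path under $DATASETROOT; adjust to the file you generated
with open('seg_label/list/LVLane_test_sunny.json') as f:
    entry = json.loads(f.readline())
print(entry.keys())   # expect lane coordinates, h_samples, raw_file, and class ids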
If you want detection only, with no lane classification, switch to the detection branch by running git checkout detection.
For training, run
python main.py [configs/path_to_your_config] --gpus [gpu_ids]
For example, run
python main.py configs/ufld/resnet18_tusimple.py --gpus 0
Modifications before you run the training script:
- Check the image resolution in LVLane/configs/ufld/resnet18_tusimple.py (lines 56 to 61 at commit f89d53d)
- Modify the batch size, number of training samples, and epochs in LVLane/configs/ufld/resnet18_tusimple.py (lines 40 to 43 at commit f89d53d); an illustrative snippet follows this list
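For orientation, the entries to touch look roughly like this; the values below are illustrative placeholders, not the repository defaults, so check the config file itself:
# configs/ufld/resnet18_tusimple.py (illustrative values only)
batch_size = 32        # reduce if you run out of GPU memory
epochs = 100           # total training epochs
ori_img_h = 720        # original image height
ori_img_w = 1280       # original image width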
For testing, run
python main.py [configs/path_to_your_config] --test --load_from [path_to_your_model] --gpus [gpu_ids]
For example, run
python main.py configs/ufld/resnet18_tusimple.py --test --load_from ufld_tusimple.pth --gpus 0
This code can output visualization results during testing; just add --view. The visualizations will be saved in work_dirs/xxx/xxx/visualization.
A sample weights file is provided for quick testing; you can download it from here and put it in $LANEDET_ROOT. If you want to test your own images, create the json file and image folders following the instructions above. Then edit val and test in LVLane/lanedet/datasets/tusimple.py (line 21 at commit 943dbd3) and in LVLane/configs/ufld/resnet18_tusimple.py (line 120 at commit 943dbd3); a sketch of this edit follows.
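A hedged sketch of that edit, assuming the dataset class keeps a split-to-file mapping like lanedet's (the file names below are examples from this README):
# lanedet/datasets/tusimple.py -- point val/test at your own annotation file
SPLIT_FILES = {
    'trainval': ['label_data_0313.json', 'label_data_0531.json', 'label_data_0601.json'],
    'train': ['label_data_0313.json', 'label_data_0601.json', 'LVLane_train_sunny.json'],
    'val': ['LVLane_test_sunny.json'],    # your own test json
    'test': ['LVLane_test_sunny.json'],
}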
For example, run
python main.py configs/ufld/resnet18_tusimple.py --test --load_from best-ufld.pth --gpus 0 --view
See tools/detect.py for detailed information.
python tools/detect.py --help
usage: detect.py [-h] [--img IMG] [--show] [--savedir SAVEDIR]
[--load_from LOAD_FROM]
config
positional arguments:
config The path of config file
optional arguments:
-h, --help show this help message and exit
--img IMG The path of the img (img file or img_folder), for
example: data/*.png
--show Whether to show the image
--savedir SAVEDIR The root of save directory
--load_from LOAD_FROM
The path of model
To run inference on the example images in ./images and save the visualization images in the ./show folder:
python tools/detect.py configs/ufld/resnet18_tusimple.py --img images\
--load_from best-ufld.pth --savedir ./show
We appreciate all contributions to improve LVLane. Any pull requests or issues are welcome.
This project is released under the Apache 2.0 license.
If you use our work or dataset, please cite the following paper:
@article{rahman2023lvlane,
title={LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions},
author={Rahman, Zillur and Morris, Brendan Tran},
journal={2023 IEEE International Conference on Intelligent Transportation Systems (ITSC)},
year={2023}
}