This repository implements a lane detection and classification model.

LVLane

Introduction

This repository is the official implementation of the paper "LVLane: Lane Detection and Classification in Challenging Conditions", accepted at the 2023 IEEE International Conference on Intelligent Transportation Systems (ITSC).

demo image

Benchmark and model zoo

Supported backbones:

  • ResNet
  • ERFNet
  • VGG
  • MobileNet

Supported detectors:

Installation

This repository is a modified version of lanedet; if you have already installed that, there is no need to install this one. Just clone this repository and use the same conda environment.

Clone this repository

git clone https://github.com/zillur-av/LVLane.git

We refer to this directory as $LANEDET_ROOT.

Create a conda virtual environment and activate it (conda is optional)

conda create -n lanedet python=3.8 -y
conda activate lanedet

Install dependencies

# Install PyTorch first; the cudatoolkit version should match the CUDA version on your system.

conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.1 -c pytorch

# Or you can install via pip
pip install torch==1.8.0 torchvision==0.9.0

# Build and install this repository in development mode
python setup.py build develop
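
After installation, a quick sanity check (a suggested snippet, not part of the repository) confirms that the pinned versions are in place and that PyTorch sees your GPU:

# verify_install.py -- a minimal sketch to verify the environment;
# not from this repository, just a suggested check.
import torch
import torchvision

print("torch:", torch.__version__)                  # expect 1.8.0
print("torchvision:", torchvision.__version__)      # expect 0.9.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))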

Data preparation

Tusimple

Download TuSimple and extract it to $DATASETROOT. Then create a link to the data directory.

cd $LANEDET_ROOT
mkdir -p data
ln -s $DATASETROOT data/tusimple

For TuSimple, you should have a structure like this:

$DATASETROOT/clips # data folders
$DATASETROOT/label_data_xxxx.json # label json file
$DATASETROOT/test_label.json # test label json file
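
Each line of a label json file is one JSON object describing a single image. A minimal sketch for reading it (lanes, h_samples, and raw_file are the standard TuSimple keys):

# read_labels.py -- sketch for inspecting TuSimple-format labels.
import json

with open('label_data_0313.json') as f:
    label = json.loads(f.readline())   # one JSON object per line
    print(label['raw_file'])           # image path relative to $DATASETROOT
    print(label['h_samples'])          # fixed y-coordinates sampled down the image
    print(label['lanes'][0])           # one x-coordinate per y-sample; -2 means no lane point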

LVLane

Download LVLaneV1 and extract it to $DATASETROOT, just like the TuSimple dataset. This link contains class annotations for the TuSimple dataset, so replace the original labels with the new ones. Lane annotations and class labels for the Caltech dataset are also available in TuSimple format; download that dataset from its original site and resize the images to 1280x720 to use them with this model (a resize sketch follows the directory layout below).

$DATASETROOT/clips/0531/
.
.
$DATASETROOT/clips/LVLane_train_sunny/
$DATASETROOT/label_data_xxxx.json
$DATASETROOT/test_label.json 
$DATASETROOT/LVLane_test_sunny.json
$DATASETROOT/LVLane_train_sunny.json
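
A minimal resize sketch for the Caltech images, assuming OpenCV is available; the folder names below are placeholders for your own layout:

# resize_to_720p.py -- sketch only; src_dir and dst_dir are hypothetical paths.
import cv2, glob, os

src_dir = 'caltech_raw'               # hypothetical source folder
dst_dir = 'clips/caltech_resized'     # hypothetical destination folder
os.makedirs(dst_dir, exist_ok=True)

for path in glob.glob(os.path.join(src_dir, '*.png')):
    img = cv2.imread(path)
    img = cv2.resize(img, (1280, 720))   # cv2.resize takes (width, height)
    cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), img)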

If you want to create a dataset in TuSimple format, please follow the instructions on tusimple-annotation. We need to generate segmentation masks from the json annotations.

Generate masks

python tools/generate_seg_tusimple.py --root $DATASETROOT --filename 'LVLane_test_sunny'
# this will generate seg_label directory

Then you will find new json annotation files containing both lane locations and class ids in $DATASETROOT/seg_label/list/. Replace the old annotation files in $DATASETROOT with these new files.
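
To confirm the regenerated files carry both pieces of information, you can peek at one line; the exact key that stores the class ids depends on the generator script, so check its real output rather than taking the sketch below literally:

# check_labels.py -- sketch only; the class-id field name is illustrative,
# inspect the actual output of generate_seg_tusimple.py.
import json

with open('seg_label/list/LVLane_test_sunny.json') as f:
    label = json.loads(f.readline())
    print(sorted(label.keys()))   # expect lanes/h_samples/raw_file plus a class field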

Getting Started

If you want detection only, without lane classification, switch to the detection branch by running git checkout detection.

Training

For training, run

python main.py [configs/path_to_your_config] --gpus [gpu_ids]

For example, run

python main.py configs/ufld/resnet18_tusimple.py --gpus 0

Modifications before you run the training script:

  • Check the image resolution settings in the config:
    ori_img_h = 720
    ori_img_w = 1280
    img_h = 288
    img_w = 800
    cut_height= 0
    sample_y = range(710, 150, -10)
    If your images have a different resolution, resize them to 1280x720 and scale the annotations proportionally as well; this is the best way to handle the mismatch (see the sketch after this list).
  • Modify the batch size, number of training samples, and number of epochs:
    epochs = 2
    batch_size = 4
    total_training_samples = 3626
    total_iter = (total_training_samples // batch_size + 1) * epochs
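
If you do resize images to 1280x720, the labels must be scaled by the same factors. A minimal sketch, assuming a hypothetical 640x480 source resolution and placeholder file names:

# rescale_labels.py -- sketch only; source resolution and file names are placeholders.
import json

sx, sy = 1280 / 640, 720 / 480   # scale factors from a hypothetical 640x480 source

with open('old_labels.json') as fin, open('new_labels.json', 'w') as fout:
    for line in fin:
        label = json.loads(line)
        label['lanes'] = [[x if x < 0 else round(x * sx) for x in lane]
                          for lane in label['lanes']]   # keep -2 "no lane" markers
        label['h_samples'] = [round(y * sy) for y in label['h_samples']]
        fout.write(json.dumps(label) + '\n')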

Testing

For testing, run

python main.py [configs/path_to_your_config] --test --load_from [path_to_your_model] --gpus [gpu_ids]

For example, run

python main.py configs/ufld/resnet18_tusimple.py --test --load_from ufld_tusimple.pth --gpus 0

This code can output visualization results during testing; just add --view. The visualizations will be saved in work_dirs/xxx/xxx/visualization.

A sample weights file is provided for quick testing; you can download it from here and put it in $LANEDET_ROOT. If you want to test your own images, create the json file and image folders following the instructions above. Then edit the val and test entries in the dataset split config:

'trainval': ['label_data_0313.json', 'label_data_0601.json', 'label_data_0531.json'],

and, in the config file, set:

test_json_file='data/tusimple/test_label.json'

For example, run

python main.py configs/ufld/resnet18_tusimple.py --test --load_from best-ufld.pth --gpus 0 --view

Inference

See tools/detect.py for detailed information.

python tools/detect.py --help

usage: detect.py [-h] [--img IMG] [--show] [--savedir SAVEDIR]
                 [--load_from LOAD_FROM]
                 config

positional arguments:
  config                The path of config file

optional arguments:
  -h, --help            show this help message and exit
  --img IMG             The path of the img (img file or img_folder), for
                        example: data/*.png
  --show                Whether to show the image
  --savedir SAVEDIR     The root of save directory
  --load_from LOAD_FROM
                        The path of model

To run inference on the example images in ./images and save the visualization results in the ./show folder:

python tools/detect.py configs/ufld/resnet18_tusimple.py --img images\
          --load_from best-ufld.pth --savedir ./show

Contributing

We appreciate all contributions to improve LVLane. Any pull requests or issues are welcomed.

Licenses

This project is released under the Apache 2.0 license.

Acknowledgement

Citation

If you use our work or dataset, please cite the following paper:

@article{rahman2023lvlane,
  title={LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions},
  author={Rahman, Zillur and Morris, Brendan Tran},
  journal={2023 IEEE International Conference on Intelligent Transportation Systems (ITSC)},
  year={2023}
}
