
SemAttNet

Welcome to the official repository of SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion (Arxiv).

Contents

  1. Dependency
  2. Data
  3. Training
  4. Models
  5. Commands
  6. Citation

Dependency

We include a list of required packages in req.txt. We recommend creating a conda environment with all of the packages by running the following command.

conda create -n <environment-name> --file req.txt

Please choose an environment name of your choice and substitute it for <environment-name> in the command.
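For example, to create and then activate an environment named semattnet (the name itself is arbitrary):

conda create -n semattnet --file req.txt
conda activate semattnet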

So far, we have tested our model on the following GPUs:

  1. NVIDIA GeForce RTX 3090
  2. NVIDIA A100
  3. NVIDIA A100 (80 GB)
  4. NVIDIA RTX A6000

Data

Validation and Test Dataset

Download the KITTI depth validation and test sets from this URL and unzip the dataset folder. The data directory is structured as follows:

├── data_depth_selection
|   ├── test_depth_completion_anonymous
|   |   ├── image
|   |   ├── intrinsics
|   |   ├── semantic
|   |   ├── velodyne_raw
|   ├── test_depth_prediction_anonymous
|   |   ├── image
|   |   ├── intrinsics
|   ├── val_selection_cropped
|   |   ├── groundtruth_depth
|   |   ├── image
|   |   ├── intrinsics
|   |   ├── semantics
|   |   ├── velodyne_raw
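As a quick sanity check after unzipping, the short Python sketch below verifies that the folders above are in place. The folder names are taken directly from the tree; the root path is an assumption, so adjust it to your setup.

import os

root = "data_depth_selection"  # adjust to where you unzipped the dataset
expected = [
    "test_depth_completion_anonymous/image",
    "test_depth_completion_anonymous/intrinsics",
    "test_depth_completion_anonymous/semantic",
    "test_depth_completion_anonymous/velodyne_raw",
    "test_depth_prediction_anonymous/image",
    "test_depth_prediction_anonymous/intrinsics",
    "val_selection_cropped/groundtruth_depth",
    "val_selection_cropped/image",
    "val_selection_cropped/intrinsics",
    "val_selection_cropped/semantics",
    "val_selection_cropped/velodyne_raw",
]
for sub in expected:
    path = os.path.join(root, sub)
    # Report each expected sub-folder as present or missing.
    print(("OK      " if os.path.isdir(path) else "MISSING ") + path)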

Training Dataset

Please download the KITTI depth completion training dataset from this URL. It is organized as follows:

├── kitti_depth
|   ├── depth
|   |   ├── data_depth_annotated
|   |   |   ├── train
|   |   |   ├── val
|   |   ├── data_depth_velodyne
|   |   |   ├── train
|   |   |   ├── val
|   |   ├── data_depth_selection
|   |   |   ├── test_depth_completion_anonymous
|   |   |   ├── test_depth_prediction_anonymous
|   |   |   ├── val_selection_cropped

Please download the RGB images, i.e., the KITTI raw data, from this URL. It is structured as follows:

├── kitti_raw
|   ├── 2011_09_26
|   ├── 2011_09_28
|   ├── 2011_09_29
|   ├── 2011_09_30
|   ├── 2011_10_03

Please visit this URL to download the semantic maps, which are also required to train our model. They are organized as follows:

├── semantic_maps
|   ├── depth
|   |   ├── data_depth_selection
|   |   |   ├── val_selection_cropped
|   |   |   ├── test_depth_completion_anonymous
|   ├── 2011_09_26
|   ├── 2011_09_28
|   ├── 2011_09_29
|   ├── 2011_09_30
|   ├── 2011_10_03
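To see how the three downloads fit together, the sketch below assembles the input paths for a single hypothetical training frame. It assumes the standard KITTI depth-completion file layout inside the train/val folders and that the semantic maps mirror the raw-data date/drive structure; the drive, camera, and frame names are placeholders.

import os

drive = "2011_09_26_drive_0001_sync"       # placeholder drive name
date = drive[:10]                          # e.g. "2011_09_26"
cam, frame = "image_02", "0000000005.png"  # placeholder camera and frame

# Sparse LiDAR depth and semi-dense ground truth (standard KITTI layout).
sparse = os.path.join("kitti_depth/depth/data_depth_velodyne/train", drive,
                      "proj_depth/velodyne_raw", cam, frame)
gt = os.path.join("kitti_depth/depth/data_depth_annotated/train", drive,
                  "proj_depth/groundtruth", cam, frame)
# RGB image from the KITTI raw recordings.
rgb = os.path.join("kitti_raw", date, drive, cam, "data", frame)
# Semantic map, assuming it mirrors the raw date/drive structure.
sem = os.path.join("semantic_maps", date, drive, cam, "data", frame)
for p in (sparse, gt, rgb, sem):
    print(p)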

Training

Please switch to the Training branch (via the branch selector at the top of this repository) for the training instructions.

Models

Please download the pre-trained model from this URL.

Commands

After you have downloaded the model and installed the required packages listed in req.txt, you can reproduce our results on the KITTI validation and test datasets by running the following commands.

Validation Dataset

CUDA_VISIBLE_DEVICES="0"  python main.py -n sem_att -e [path of pre-trained model i.e. model_best_backup.pth.tar] --data-folder [path of data_depth_selection folder] --val_results "val_results/"

After a successful run, a summary of the quantitative results is printed as a table, and the qualitative results are saved in the "val_results/" folder. Each image file inside "val_results/" consists of the sparse depth, ground truth, and refined depth stitched together. For example:

The first image (from the left) shows the sparse LiDAR depth, the second the sparse LiDAR ground-truth map, and the third the output of SemAttNet.
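If you want the three panels as separate files, here is a minimal sketch, assuming the panels are stitched horizontally with equal widths (the input file name is a placeholder):

from PIL import Image

img = Image.open("val_results/0.png")  # placeholder file name
w, h = img.size
for i, name in enumerate(["sparse.png", "groundtruth.png", "refined.png"]):
    # Crop the i-th of three equally wide horizontal panels and save it.
    img.crop((i * w // 3, 0, (i + 1) * w // 3, h)).save(name)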

Test Dataset

CUDA_VISIBLE_DEVICES="0"  python main.py -n sem_att -e [path of pre-trained model i.e. model_best_backup.pth.tar] --data-folder [path of data_depth_selection folder] --test-save "test_results/" --test

The results on the test dataset are saved as 16-bit depth maps in the "test_results/" folder. These depth maps can be uploaded to the KITTI depth completion benchmark to verify our results on the benchmark.
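These maps follow the usual KITTI depth encoding: uint16 PNGs where the pixel value divided by 256 gives depth in meters, and 0 marks invalid pixels. A minimal sketch for reading one back (the file name is a placeholder):

import numpy as np
from PIL import Image

# KITTI convention: depth in meters = uint16 pixel value / 256, 0 = invalid.
png = np.array(Image.open("test_results/0000000000.png"), dtype=np.uint16)
depth_m = png.astype(np.float32) / 256.0
valid = png > 0
print(f"{valid.sum()} valid pixels, max depth {depth_m[valid].max():.2f} m")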

Citation

If you use our code or method in your work, please cite the following:

@ARTICLE{9918022,
  author={Nazir, Danish and Pagani, Alain and Liwicki, Marcus and Stricker, Didier and Afzal, Muhammad Zeshan},
  journal={IEEE Access},
  title={SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion},
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/ACCESS.2022.3214316}
}
