Angelina8120 / Low-light-Small-SOD-baseline

A new low light small SOD dataset and a baseline for low light small SOD

Catching Small Pedestrian and Vehicle in Low Light: A New SOD Benchmark

by Xin Xu, Shiqin Wang, Zheng Wang, Ke Tang, Chia-Wen Lin

  • Since our paper is still under review, we will release our dataset and code once the paper is accepted. Thanks for your understanding.

Low lIght Salient Pedestrian/vehicle (LISP) dataset

  • Recent years have witnessed rapid progress in Salient Object Detection (SOD). However, relatively few efforts have been dedicated to modeling salient object detection in low-light scenes with small pedestrians/vehicles. Furthermore, realistic applications of salient pedestrian/vehicle detection at long distances in low-light environments commonly arise in nighttime surveillance and nighttime autonomous driving. In particular, for autonomous driving at night, detecting pedestrians/vehicles with high reliability is paramount for safety. To fill this gap, we elaborately collected a new Low lIght Salient Pedestrian/vehicle (LISP) dataset, which consists of 3,100 high-resolution images containing low-light small pedestrians/vehicles and covers diverse challenging cases (e.g., low light, non-uniform illumination, and small objects).

  • The LISP dataset is not openly available because it contains human data; it will be available upon reasonable request for academic use, within the limitations of the provided informed consent, once the paper is accepted. By downloading the dataset, you guarantee that you will use it for academic work only. Furthermore, our data processing pipeline involves dedicated measures for data protection that pertain to the entire life cycle of the data, i.e., its collection, storage, annotation, anonymization, and distribution. However, if you find yourself or your personal belongings in the dataset, you can contact us and we will immediately delete the respective images from LISP.

  • Approval was obtained from the School of Computer Science and Technology, Wuhan University of Science and Technology, and the research was performed under its oversight. The procedures used in this study adhere to the tenets of the Declaration of Helsinki. You can find detailed materials in the Licence folder.

  • Comparison of LISP with existing SOD datasets


  • Representative images and corresponding ground-truth masks in the LISP dataset


Introduction

(figure: EIGNet framework)

Architecture of the Edge and Illumination-Guided Network (EIGNet). It consists of a Shared Encoder for feature extraction and three decoders: an Illumination-Guided Network (IGN), a Saliency Decoder, and an Edge-Guided Network (EGN), which generate the Illumination Map, Saliency Map, and Edge Map, respectively. The IGN and EGN are progressively integrated to guide the Saliency Decoder to generate saliency maps in a supervised manner. Within the IGN, an Illumination Guidance Layer (IGL) augments salient features with illumination features.
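Since the code is not yet released, the exact IGL design is unknown; the sketch below is only our hypothetical reading of "augment salient features with illumination features", implemented here as a sigmoid-gated modulation with a residual connection (all names and shapes are illustrative assumptions, shown in NumPy for simplicity).

```python
import numpy as np

def illumination_guidance_layer(salient_feat, illum_feat):
    """Hypothetical IGL sketch: gate salient features with a sigmoid
    attention map derived from illumination features, then add the
    original features back (residual). The real IGL may differ."""
    gate = 1.0 / (1.0 + np.exp(-illum_feat))   # sigmoid attention from illumination
    return salient_feat * gate + salient_feat  # augmented salient features

# Toy (batch, channels, H, W) tensors standing in for decoder features
feat = np.ones((1, 4, 8, 8))    # salient features
illum = np.zeros((1, 4, 8, 8))  # illumination features
out = illumination_guidance_layer(feat, illum)
```

With zero illumination features the gate is 0.5 everywhere, so the output is simply 1.5x the input; in a trained network the gate would vary spatially with the predicted illumination map.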

Prerequisites

Clone repository

git clone https://github.com/Angelina8120/Low-light-Small-SOD-baseline.git
cd Low-light-Small-SOD-baseline/

Download dataset

Download the following datasets and unzip them into the data folder.

Most of the existing RGB datasets contain multi-scale salient object images, but a large-scale dataset particularly designed for addressing small SOD problems is still missing. To address this issue, we propose a Zoom Out Salient Object (ZOSO) strategy to generate a synthetic normal-light small object (small DUTS-TR) dataset for training.

  • small DUTS-TR
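The paper's exact ZOSO procedure is not public, but the idea described above (synthesizing small-object training images by zooming out) can be sketched as follows; the centering, scale factor, and nearest-neighbour downscaling here are our own illustrative assumptions.

```python
import numpy as np

def zoom_out(image, mask, scale=4):
    """Hypothetical Zoom Out Salient Object (ZOSO)-style transform:
    shrink an image and its ground-truth mask, then paste them onto a
    black canvas of the original size so the salient object becomes small."""
    h, w = image.shape[:2]
    small_img = image[::scale, ::scale]   # naive nearest-neighbour downscale
    small_msk = mask[::scale, ::scale]
    canvas_img = np.zeros_like(image)
    canvas_msk = np.zeros_like(mask)
    top = (h - small_img.shape[0]) // 2   # paste at the canvas center
    left = (w - small_img.shape[1]) // 2
    canvas_img[top:top + small_img.shape[0], left:left + small_img.shape[1]] = small_img
    canvas_msk[top:top + small_msk.shape[0], left:left + small_msk.shape[1]] = small_msk
    return canvas_img, canvas_msk

# Toy example: a 16x16 all-salient image shrinks to a 4x4 patch
img = np.ones((16, 16, 3))
msk = np.ones((16, 16))
out_img, out_msk = zoom_out(img, msk, scale=4)
```

Applied to a normal-light dataset such as DUTS-TR, a transform of this kind would yield the synthetic small-object training set (small DUTS-TR) described above.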

We conduct experiments on our proposed LISP dataset and eight widely used datasets, ECSSD, PASCAL-S, DUTS, DUT-OMRON, SOD, HKU-IS, HRSOD, and UHRSD.

Moreover, to validate the importance of LISP and the effectiveness of EIGNet for real low-light scenes, we randomly select 500 images from LISP as the training set (LISP-Train) and another 500 images as the testing set (LISP-Test).
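A split of that kind can be reproduced along these lines (the file names, seed, and shuffling scheme are placeholders; the official LISP-Train/LISP-Test lists may differ once released):

```python
import random

def split_lisp(image_names, n_train=500, n_test=500, seed=0):
    """Draw disjoint random training and testing subsets from a list
    of image file names, as in the LISP-Train / LISP-Test protocol."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = image_names[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:n_train + n_test]

# Placeholder names standing in for the 3,100 LISP images
names = [f"lisp_{i:04d}.png" for i in range(3100)]
train, test = split_lisp(names)
```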

Training & Evaluation

  • Coming soon...

Testing & Evaluation

  • Coming soon...

Saliency maps

(figure: performance comparison)

  • Qualitative comparisons




Languages

MATLAB 85.3%, Python 14.7%