Paper: NLOST: Non-Line-of-Sight Imaging with Transformer (CVPR 2023)
$\color{red}{New!}$ We provide two versions trained at sizes 256 × 256 × 512 and 128 × 128 × 512. The pre-trained models are available now. Inference requires about 22-23 GB of GPU memory.
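As a rough, back-of-the-envelope sketch (not part of the release code), the raw transient volume at float32 is far smaller than the quoted inference memory; the bulk of the 22-23 GB presumably goes to network weights and intermediate activations:

```python
# Rough size of a single raw transient volume at float32 (4 bytes/element).
# volume_bytes is a hypothetical helper, not a function from this repository.
def volume_bytes(h, w, t, bytes_per_elem=4):
    return h * w * t * bytes_per_elem

large = volume_bytes(256, 256, 512)  # 256 x 256 x 512 version
small = volume_bytes(128, 128, 512)  # 128 x 128 x 512 version
print(large // 2**20, "MiB vs", small // 2**20, "MiB")  # 128 MiB vs 32 MiB
```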
Reconstructed hidden scenes from the real-world measurements captured by FK.
Reconstructed hidden scenes from the real-world measurements captured by our NLOS imaging system.
We built an NLOS imaging system that works in a confocal manner. A 532 nm laser emits pulses with a 50 ps pulse width and an 11 MHz repetition rate at a typical average power of 250 mW. The pulses pass through a two-axis raster-scanning galvo mirror and travel to the visible wall. The direct and indirect diffuse photons are collected by another two-axis galvo mirror and then coupled into a multimode fiber directed to a free-running single-photon avalanche diode (SPAD) detector with a detection efficiency of about 40%. A time-correlated single-photon counter records the sync signals from the laser and the photon-detection signals from the SPAD. The temporal resolution of the overall system is measured to be approximately 95 ps.
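For intuition (a back-of-the-envelope calculation, not a claim from the paper), the measured ~95 ps temporal resolution bounds the depth resolution of a confocal setup, since light travels to the hidden scene and back:

```python
# Depth resolution implied by the system's temporal resolution.
# Round-trip travel means the one-way depth resolution is c * dt / 2.
C = 299_792_458   # speed of light, m/s
dt = 95e-12       # measured temporal resolution, s
depth_res = C * dt / 2
print(f"depth resolution ~ {depth_res * 100:.1f} cm")  # ~1.4 cm
```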
During data collection, the illumination point and the sampling point move in the same direction but are kept slightly misaligned (to avoid first-bounce signals) during scanning. We raster-scan a square grid of points across a 2 m × 2 m area on the visible wall, with an acquisition time of about 8 ms per scanning point. Each transient measurement is a histogram of 512 bins with a bin width of 32 ps.
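These numbers fix the temporal window of each histogram, and hence the farthest confocal depth the measurement can cover (a quick sanity check, not repository code):

```python
# Temporal span of one 512-bin histogram at 32 ps per bin, and the
# maximum one-way depth it covers in a confocal setup (round trip halved).
C = 299_792_458          # speed of light, m/s
bins, bin_width = 512, 32e-12
span = bins * bin_width  # 16.384 ns
max_depth = C * span / 2
print(f"{span * 1e9:.3f} ns -> depths up to ~{max_depth:.2f} m")
```

This ~2.46 m reach comfortably covers scenes placed 0.8 m to 1.5 m from the visible wall.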
The hidden scenes include a ladder with letters on it, sculptures of people, and a deer, all made of white foam and placed about 0.8 m to 1.5 m away from the visible wall. Thumbnails of the captured scenes are shown below.
The real-world measurements captured by our imaging system can be downloaded from Google Drive.
For evaluation on the real-world data, we trained the models on the synthetic data (the ~3000-measurement motorbike dataset) provided by LFE.
You can download it Here.
We also utilized the real-world data provided by FK.
You can download the preprocessed data Here.
Run `bash train.sh`
The pre-trained models are located in `./pretain`
Run `bash test.sh`
For the unseen data, you can download it Here.
For questions, feel free to contact Yue Li (yueli65@mail.ustc.edu.cn).
We thank the authors who shared the code of their works, particularly Wenzhen Chen (LFE) and Fangzhou Mu (PTR).
If you find our work useful, please cite our paper:
@inproceedings{li2023nlost,
title={NLOST: Non-Line-of-Sight Imaging with Transformer},
author={Li, Yue and Peng, Jiayong and Ye, Juntian and Zhang, Yueyi and Xu, Feihu and Xiong, Zhiwei},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13313--13322},
year={2023}
}