Holistically-Attracted Wireframe Parsing (CVPR 2020)

This is the official implementation of our CVPR paper.

[News] The pretrained model is released.

[News] The previous pretrained model was incorrect; please download it again from Google Drive.

[News] The description of how to train and test HAWP has been updated. You may need to update the code if you cloned this repo before May 21, 2020.

Highlights

  • We propose HAWP, a fast and parsimonious parsing method that accurately and robustly detects a vectorized wireframe from an input image in a single forward pass.
  • The proposed HAWP is fully end-to-end.
  • The proposed HAWP does not require the squeeze module.
  • State-of-the-art performance on the Wireframe dataset and the YorkUrban dataset.
  • The proposed HAWP achieves 29.5 FPS on a single GPU (Tesla V100) with a batch size of 1.

Quantitative Results

Wireframe Dataset

Method               sAP5   sAP10  sAP15  msAP   mAPJ   APH          FH           FPS
LSD                  /      /      /      /      /      55.2         62.5         49.6
AFM                  18.5   24.4   27.5   23.5   23.3   69.2         77.2         13.5
DWP                  3.7    5.1    5.9    4.9    40.9   67.8         72.2         2.24
L-CNN                58.9   62.9   64.9   62.2   59.3   80.3 / 82.8  76.9 / 81.3  15.6
L-CNN (re-trained)   59.7   63.6   65.3   62.9   60.2   81.6 / 83.7  77.9 / 81.7  15.6
HAWP (Ours)          62.5   66.5   68.2   65.7   60.2   84.5 / 86.1  80.3 / 83.1  29.5

YorkUrban Dataset

Method               sAP5   sAP10  sAP15  msAP   mAPJ   APH          FH           FPS
LSD                  /      /      /      /      /      50.9         60.1         49.6
AFM                  7.3    9.4    11.1   9.3    12.4   48.2         63.3         13.5
DWP                  1.5    2.1    2.6    2.1    13.4   51.0         61.6         2.24
L-CNN                24.3   26.4   27.5   26.1   30.4   58.5 / 59.6  61.8 / 65.3  15.6
L-CNN (re-trained)   25.0   27.1   28.3   26.8   31.5   58.3 / 59.3  62.2 / 65.2  15.6
HAWP (Ours)          26.1   28.5   29.7   28.1   31.6   60.6 / 61.2  64.8 / 66.3  29.5

Installation (tested on Ubuntu-18.04, CUDA 10.0, GCC 7.4.0)

conda create -n hawp python=3.6
conda activate hawp
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

cd hawp
conda develop .

pip install -r requirement.txt
python setup.py build_ext --inplace
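
A quick way to verify the environment before moving on is to check that PyTorch sees the GPU. This is a minimal sanity-check sketch; it only tests the PyTorch/CUDA installation, not the HAWP code itself.

# sanity_check.py -- verify that PyTorch and CUDA are usable.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))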

Quickstart with the pretrained model (Google Drive)

  • Download the pretrained model and unzip it to "ROOT_DIR/outputs/hawp".
python scripts/predict.py --config-file config-files/hawp.yaml --img figures/example.png
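
To run the pretrained model over several images, the documented command can simply be invoked once per file. A minimal sketch, assuming predict.py accepts a single --img per call; the figures/ folder is a hypothetical image directory.

# batch_predict.py -- re-invoke the documented predict.py command per image.
import glob
import subprocess

for img in sorted(glob.glob("figures/*.png")):  # hypothetical image folder
    subprocess.run(
        ["python", "scripts/predict.py",
         "--config-file", "config-files/hawp.yaml",
         "--img", img],
        check=True,
    )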

Training & Testing

Data Preparation

  • Download the Wireframe dataset and the YorkUrban dataset from their project pages.
  • Download the JSON-format annotations (Google Drive).
  • Place the images in "hawp/data/wireframe/images/" and "hawp/data/york/images/".
  • Unzip the JSON-format annotations into "hawp/data/wireframe" and "hawp/data/york".

The structure of the data folder should be

data/
   wireframe/images/*.png
   wireframe/train.json
   wireframe/test.json
   ------------------------
   york/images/*.png
   york/test.json
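
A small check like the following, run from the repo root, catches layout mistakes before training starts. It is a minimal sketch that assumes only the paths listed above.

# check_data.py -- verify the expected dataset layout.
import glob
import os

for path in ["data/wireframe/train.json",
             "data/wireframe/test.json",
             "data/york/test.json"]:
    print(path, "OK" if os.path.isfile(path) else "MISSING")

for image_dir in ["data/wireframe/images", "data/york/images"]:
    n = len(glob.glob(os.path.join(image_dir, "*.png")))
    print(image_dir, "-", n, "png files")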

Training

CUDA_VISIBLE_DEVICES=0 python scripts/train.py --config-file config-files/hawp.yaml

The best model is manually selected from the saved model files after 25 epochs of training.

Testing

CUDA_VISIBLE_DEVICES=0 python scripts/test.py --config-file config-files/hawp.yaml [--display]

The results will be saved to OUTPUT_DIR/$dataset_name.json, where dataset_name is either wireframe_test or york_test.
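
To take a quick look at a saved result file, something like the following works. It is a minimal sketch that assumes nothing about the record schema beyond the file being JSON; the path follows the convention above.

# inspect_results.py -- peek at the test output without assuming its schema.
import json

with open("outputs/hawp/wireframe_test.json") as f:
    results = json.load(f)

print(type(results).__name__, "with", len(results), "entries")
if isinstance(results, list) and results and isinstance(results[0], dict):
    print("keys of first record:", sorted(results[0].keys()))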

Structural-AP Evaluation

  • Run scripts/test.py to get the wireframe parsing results.
  • Run scripts/eval_sap.py to get the sAP results (see the sketch after the example below).
# example on the Wireframe dataset
python scripts/eval_sap.py --path outputs/hawp/wireframe_test.json --threshold 10
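
For reference, the structural AP criterion itself can be illustrated in a few lines of numpy. This is a simplified sketch of the metric described in the paper, not the repository's eval_sap.py: predictions are taken in decreasing score order, and a prediction counts as a true positive when its summed squared endpoint distance to the closest still-unmatched ground-truth segment falls below the threshold; AP is the area under the interpolated precision-recall curve.

# sap_sketch.py -- simplified illustration of structural AP (sAP).
# Not the repository's eval_sap.py. Segments are (x1, y1, x2, y2) rows,
# assumed to live in the 128x128 evaluation resolution used by the paper.
import numpy as np

def structural_ap(pred, scores, gt, threshold=10.0):
    order = np.argsort(-scores)            # rank predictions by score
    pred = pred[order]
    matched = np.zeros(len(gt), dtype=bool)
    tp = np.zeros(len(pred))
    for i, p in enumerate(pred):
        # Squared endpoint distances under both endpoint orderings.
        d1 = ((p[:2] - gt[:, :2]) ** 2).sum(1) + ((p[2:] - gt[:, 2:]) ** 2).sum(1)
        d2 = ((p[:2] - gt[:, 2:]) ** 2).sum(1) + ((p[2:] - gt[:, :2]) ** 2).sum(1)
        d = np.minimum(d1, d2)
        j = int(np.argmin(d))
        if d[j] < threshold and not matched[j]:
            matched[j] = True
            tp[i] = 1.0
    # Interpolated area under the precision-recall curve.
    cum_tp = np.cumsum(tp)
    recall = np.concatenate(([0.0], cum_tp / len(gt)))
    precision = np.concatenate(([0.0], cum_tp / np.arange(1, len(pred) + 1)))
    for i in range(len(precision) - 1, 0, -1):
        precision[i - 1] = max(precision[i - 1], precision[i])
    idx = np.where(recall[1:] != recall[:-1])[0]
    return float(np.sum((recall[idx + 1] - recall[idx]) * precision[idx + 1]))

# Toy usage: one exact match and one spurious prediction -> AP = 1.0.
gt = np.array([[10.0, 10.0, 50.0, 50.0]])
pred = np.array([[10.0, 10.0, 50.0, 50.0], [0.0, 0.0, 5.0, 5.0]])
scores = np.array([0.9, 0.2])
print(structural_ap(pred, scores, gt, threshold=10.0))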

Citations

If you find our work useful in your research, please consider citing:

@inproceedings{HAWP,
  title     = {Holistically-Attracted Wireframe Parsing},
  author    = {Nan Xue and Tianfu Wu and Song Bai and Fu-Dong Wang and Gui-Song Xia and Liangpei Zhang and Philip H.S. Torr},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}

Acknowledgment

We acknowledge the efforts of the authors of the Wireframe dataset and the YorkUrban dataset. These datasets make accurate line segment detection and wireframe parsing possible.
