Semantic-Guidance: Distilling Object Awareness into Paintings

This repository contains code for our CVPR-2021 paper on Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings.

The Semantic Guidance pipeline distills different forms of object awareness (semantic segmentation, object localization and guided backpropagation maps) into the painting process itself. The resulting agent is able to paint canvases with increased saliency of foreground objects and enhanced granularity of key image features.

Contents

  • Demo
  • Environment Setup
  • Dataset and Preprocessing
  • Training
  • Testing using Pretrained Models
  • Citation

Demo

Traditional reinforcement learning based methods for the "learning to paint" problem show poor performance on real-world datasets with high variance in the position, scale and saliency of foreground objects. To address this, we propose a semantic guidance pipeline that distills object-awareness knowledge into the painting process and thereby learns to generate semantically accurate canvases under adverse painting conditions.

Demo comparison (left to right): Target Image · Baseline (Huang et al. 2019) · Semantic Guidance (Ours)
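
For intuition only, the sketch below illustrates one way the object-awareness maps (a segmentation mask and a guided backpropagation map) could be folded into a reconstruction-style reward so that foreground pixels dominate. This is a hypothetical illustration of the idea, not the exact reward used in the paper or in this codebase.

```python
# Hypothetical illustration only -- not the exact reward from the paper/codebase.
# Shows how segmentation and guided-backpropagation (GBP) maps can reweight a
# canvas-to-target reconstruction loss so that foreground objects dominate.
import torch

def foreground_weighted_loss(canvas, target, seg_mask, gbp_map, alpha=0.5):
    # canvas, target: (B, 3, H, W) images in [0, 1]
    # seg_mask, gbp_map: (B, 1, H, W) object-awareness maps in [0, 1]
    weight = 1.0 + alpha * (seg_mask + gbp_map)   # emphasize salient foreground pixels
    per_pixel = (canvas - target).pow(2)          # plain per-pixel L2 error
    return (weight * per_pixel).mean()

if __name__ == "__main__":
    b, h, w = 2, 128, 128
    loss = foreground_weighted_loss(
        torch.rand(b, 3, h, w), torch.rand(b, 3, h, w),
        torch.rand(b, 1, h, w), torch.rand(b, 1, h, w),
    )
    print(loss.item())
```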

Environment Setup

  • Set up the Python environment for running the experiments.
```bash
conda env update --name semantic-guidance --file environment.yml
conda activate semantic-guidance
```
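
A quick sanity check after activation can confirm the environment is usable. The snippet below assumes PyTorch is among the dependencies in environment.yml (the renderer and agent checkpoints are PyTorch pickles); adjust it if your setup differs.

```python
# Minimal sanity check for the activated conda environment.
# Assumes PyTorch is installed via environment.yml.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # training is far faster on a GPU
```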

Dataset and Preprocessing

  • Download the CUB-200-2011 Birds dataset and place it in the data/cub200/CUB_200_2011/ folder.
```bash
mkdir -p data/cub200 && cd data/cub200
gdown https://drive.google.com/uc?id=1hbzc_P1FuxMkcabkgn9ZKinBwW683j45
tar -xvzf CUB_200_2011.tgz
```
  • The final data folder looks as follows:
```bash
data
├── cub200/
│   └── CUB_200_2011/
│       ├── images/
│       │   └── ...
│       └── images.txt
```
  • Download the differentiable neural renderer (renderer.pkl) and place it in the data/ folder.
```bash
cd data
gdown https://drive.google.com/uc?id=1VloSGAWYRiVYv3bRfBuB0uKj2m7Cyzu8
```
  • Download the combined model for object localization and semantic segmentation, and place it in the data/ folder.
```bash
cd data
gdown https://drive.google.com/uc?id=14CIdpem-85y53KkkW2oBspXh-2PPXtTs
```
  • Choose one of the following options to get the preprocessed data predictions (preprocessing helps facilitate faster training); a short sketch for spot-checking the resulting predictions appears after this list.

    • Option 1: run the preprocessing script to generate object localization, semantic segmentation and bounding box predictions.
    ```bash
    cd semantic_guidance
    python preprocess.py
    ```
    • Option 2: you can also directly download the preprocessed birds dataset and place the prediction folders in the original data directory.
    ```bash
    cd data/cub200/CUB_200_2011/
    gdown https://drive.google.com/uc?id=1s3lvo0Dn538lPghpXY1gEOTAZOTsojxJ
    unzip preprocessed-cub200-2011.zip
    mv preprocessed-cub200-2011/* .
    ```
    • The final data directory should look like:
    ```bash
    data
    ├── cub200/
    │   └── CUB_200_2011/
    │       ├── images/
    │       │   └── ...
    │       ├── segmentations_pred/
    │       │   └── ...
    │       ├── gbp_global/
    │       │   └── ...
    │       ├── bounding_boxes_pred.txt
    │       └── images.txt
    ├── renderer.pkl
    └── birds_obj_seg.pkl
    ```
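
The sketch below is one way to spot-check the preprocessed predictions: it reads the first entry of images.txt (the standard CUB-200-2011 "<image_id> <relative_path>" format) and opens the corresponding image, predicted segmentation mask, and guided backpropagation map. The prediction folders are assumed here to mirror the images/ layout with .png files; adjust the paths and extensions if your preprocessing output differs.

```python
# Spot-check the preprocessed predictions for the first image in images.txt.
# Assumes segmentations_pred/ and gbp_global/ mirror the images/ folder layout
# and store .png files (an assumption -- adjust if your output differs).
from pathlib import Path
from PIL import Image

root = Path("data/cub200/CUB_200_2011")
rel_path = open(root / "images.txt").readline().split()[1]  # "<image_id> <relative_path>"

image = Image.open(root / "images" / rel_path)
seg = Image.open((root / "segmentations_pred" / rel_path).with_suffix(".png"))
gbp = Image.open((root / "gbp_global" / rel_path).with_suffix(".png"))
print("image:", image.size, "| segmentation:", seg.size, "| gbp:", gbp.size)
```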

Training

  • Train the baseline painting agent (Huang et al. 2019).
```bash
cd semantic_guidance
python train.py \
--dataset cub200 \
--debug \
--batch_size=96 \
--max_eps_len=50 \
--bundle_size=5 \
--exp_suffix baseline
```
  • Train the deep reinforcement learning based painting agent using the Semantic Guidance pipeline.
```bash
cd semantic_guidance
python train.py \
--dataset cub200 \
--debug \
--batch_size=96 \
--max_eps_len=50 \
--bundle_size=5 \
--use_bilevel \
--use_gbp \
--exp_suffix semantic-guidance
```

Testing using Pretrained Models

  • Download the pretrained models for the Baseline and Semantic Guidance agents. Place the downloaded models in the ./semantic_guidance/pretrained_models directory.
```bash
cd semantic_guidance
mkdir pretrained_models && cd pretrained_models
gdown https://drive.google.com/uc?id=1OvN7yRia44nhD16KmjcAvxG8xICWl42p
gdown https://drive.google.com/uc?id=173p2rUQlNpp8fLA3u5s24QKJLU68QTkw
```
  • The final directory structure should look as follows:
```bash
semantic-guidance
├── semantic_guidance/
│   └── pretrained_models/
│       ├── actor_baseline.pkl
│       └── actor_semantic_guidance.pkl
```
  • Generate the painting sequence using the pretrained baseline agent.
```bash
cd semantic_guidance
python test.py \
--img ../input/target_bird_4648.png \
--actor pretrained_models/actor_baseline.pkl \
--use_baseline
```
  • Use the pretrained Semantic Guidance agent to paint canvases.
```bash
cd semantic_guidance
python test.py \
--img ../input/target_bird_4648.png \
--actor pretrained_models/actor_semantic_guidance.pkl
```
  • The test script stores the final canvas state in the ./output folder and saves a video for the painting sequence in the ./video directory.
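
Once a test run finishes, the saved results can be listed and viewed with a few lines of Python. The exact file names written by test.py are not assumed here, so the sketch simply globs the two output folders:

```python
# List the results produced by test.py and open the most recent canvas.
# File names are not assumed; we simply glob the ./output and ./video folders.
from pathlib import Path
from PIL import Image

outputs = sorted(Path("output").glob("*"), key=lambda p: p.stat().st_mtime)
videos = sorted(Path("video").glob("*"), key=lambda p: p.stat().st_mtime)
print("canvases:", [p.name for p in outputs])
print("videos:  ", [p.name for p in videos])

if outputs and outputs[-1].suffix in {".png", ".jpg"}:
    Image.open(outputs[-1]).show()  # view the latest painted canvas
```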

Citation

If you find this work useful in your research, please cite our paper:

```bibtex
@inproceedings{singh2021combining,
  title={Combining Semantic Guidance and Deep Reinforcement Learning For Generating Human Level Paintings},
  author={Singh, Jaskirat and Zheng, Liang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16387--16396},
  year={2021}
}
```
