blackyang / neural-style

Torch implementation of neural style algorithm


neural-style

This is a Torch implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.

The paper presents an algorithm for combining the content of one image with the style of another image using convolutional neural networks. Here's an example that maps the artistic style of The Starry Night onto a night-time photograph of the Stanford campus:

Applying the style of different images to the same content image gives interesting results. Here we reproduce Figure 2 from the paper, which renders a photograph of Tübingen, Germany in a variety of styles:

Here are the results of applying the style of various pieces of artwork to this photograph of the Golden Gate Bridge:

The algorithm allows the user to trade off the relative weight of the style and content reconstruction terms, as shown in this example where we port the style of Picasso's 1907 self-portrait onto Brad Pitt:

By resizing the style image before extracting style features, we can control the types of artistic features that are transferred from the style image; you can control this behavior with the -style_scale flag. Below we see three examples of rendering the Golden Gate Bridge in the style of The Starry Night. From left to right, -style_scale is 2.0, 1.0, and 0.5.

<img src="https://raw.githubusercontent.com/jcjohnson/neural-style/master/examples/outputs/golden_gate_starry_scale2.png" height="175px"> <img src="https://raw.githubusercontent.com/jcjohnson/neural-style/master/examples/outputs/golden_gate_starry_scale1.png" height="175px"> <img src="https://raw.githubusercontent.com/jcjohnson/neural-style/master/examples/outputs/golden_gate_starry_scale05.png" height="175px">

Setup:

Dependencies:

  • torch7
  • loadcaffe

Optional dependencies:

  • CUDA 6.5+ and cunn (for GPU mode)
  • cudnn.torch (for the cudnn backend)

NOTE: If your machine does not have CUDA installed, then you may need to install loadcaffe manually like this:

git clone https://github.com/szagoruyko/loadcaffe.git
# Edit the file loadcaffe/loadcaffe-1.0-0.rockspec
# Delete lines 21 and 22 that mention cunn and inn
luarocks install loadcaffe/loadcaffe-1.0-0.rockspec

After installing dependencies, you'll need to run the following script to download the VGG model:

sh models/download_models.sh

This will download the original VGG-19 model, along with the modified version of VGG-19 that Leon Gatys has graciously provided from their paper. By default the original VGG-19 model is used.

Usage

Basic usage:

th neural_style.lua -style_image <image.jpg> -content_image <image.jpg>

Options:

  • -image_size: Maximum side length (in pixels) of the generated image. Default is 512.
  • -gpu: Zero-indexed ID of the GPU to use; for CPU mode set -gpu to -1.

Optimization options:

  • -content_weight: How much to weight the content reconstruction term. Default is 5e0.
  • -style_weight: How much to weight the style reconstruction term. Default is 1e2.
  • -tv_weight: Weight of total-variation (TV) regularization; this helps to smooth the image. Default is 1e-3. Set to 0 to disable TV regularization.
  • -num_iterations: Default is 1000.
  • -init: Initialization method for the generated image; one of random or image. Default is random, which uses a noise initialization as in the paper; image initializes with the content image.
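Total-variation regularization, scaled by -tv_weight, penalizes large differences between adjacent pixels and so smooths the output. A minimal NumPy sketch of one common (anisotropic) form of the penalty, for illustration only; it is not the repo's Torch implementation, and the function name is hypothetical:

```python
import numpy as np

def tv_loss(img, tv_weight=1e-3):
    """Anisotropic total variation on a 2-D image: the sum of absolute
    differences between vertically and horizontally adjacent pixels."""
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()  # vertical neighbors
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()  # horizontal neighbors
    return tv_weight * (dh + dw)

# A constant image has zero TV penalty; any variation increases it.
flat = np.ones((4, 4))
print(tv_loss(flat))  # 0.0
```

Setting -tv_weight to 0 corresponds to dropping this term from the objective entirely.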

Output options:

  • -output_image: Name of the output image. Default is out.png.
  • -print_iter: Print progress every print_iter iterations. Set to 0 to disable printing.
  • -save_iter: Save the image every save_iter iterations. Set to 0 to disable saving intermediate results.

Other options:

  • -style_scale: Scale at which to extract features from the style image. Default is 1.0.
  • -proto_file: Path to the deploy.txt file for the VGG Caffe model.
  • -model_file: Path to the .caffemodel file for the VGG Caffe model. Default is the original VGG-19 model; you can also try the normalized VGG-19 model used in the paper.
  • -pooling: The type of pooling layers to use; one of max or avg. Default is max. The VGG-19 model uses max pooling layers, but the paper mentions that replacing these layers with average pooling layers can improve the results. I haven't been able to get good results using average pooling, but the option is here.
  • -backend: nn or cudnn. Default is nn. cudnn requires cudnn.torch and may reduce memory usage.
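The difference between the two -pooling modes can be seen on a toy input. A hedged NumPy sketch of 2x2 pooling with stride 2 (illustrative only, not the Torch layers the script actually swaps in):

```python
import numpy as np

def pool2x2(x, mode="max"):
    """Pool a 2-D array over non-overlapping 2x2 windows (stride 2)."""
    h, w = x.shape
    windows = x.reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return windows.max(axis=(1, 3))   # keep the strongest activation
    return windows.mean(axis=(1, 3))      # "avg": blend the window

x = np.array([[ 1.,  2.,  3.,  4.],
              [ 5.,  6.,  7.,  8.],
              [ 9., 10., 11., 12.],
              [13., 14., 15., 16.]])
print(pool2x2(x, "max"))  # [[ 6.  8.] [14. 16.]]
print(pool2x2(x, "avg"))  # [[ 3.5  5.5] [11.5 13.5]]
```

Max pooling keeps only the strongest response in each window, while average pooling propagates gradients more evenly, which is the intuition behind the paper's suggestion.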

Speed

On a GTX Titan X, running 1000 iterations of gradient descent with -image_size=512 takes about 2 minutes. In CPU mode on an Intel Core i7-4790k, the same run takes around 40 minutes. Most of the examples shown here were run for 2000 iterations, but with a bit of parameter tuning most images will give good results within 1000 iterations.

Implementation details

Images are initialized with white noise and optimized using L-BFGS.

We perform style reconstructions using the conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 layers and content reconstructions using the conv4_2 layer. As in the paper, the five style reconstruction losses have equal weights.
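Style reconstruction compares the Gram matrices of feature maps rather than the feature maps themselves, which discards spatial layout and keeps only the correlation statistics between channels. A hedged NumPy sketch of a single-layer style loss (the shapes, normalization, and function names are illustrative assumptions, not the repo's exact Torch code):

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height, width) feature map:
    G[i, j] is the inner product of flattened channels i and j."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T  # shape (c, c); spatial positions are summed out

def style_loss(feat_generated, feat_style):
    """Mean squared error between the two Gram matrices."""
    g1, g2 = gram(feat_generated), gram(feat_style)
    return ((g1 - g2) ** 2).mean()

generated = np.random.rand(8, 16, 16)
style = np.random.rand(8, 16, 16)
print(style_loss(style, style))  # 0.0
```

In the actual algorithm, one such loss is computed at each of the five style layers listed above (with equal weights), and their sum is scaled by -style_weight.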


License: MIT License
