NIMA: Neural IMage Assessment

This is a PyTorch implementation of the paper NIMA: Neural IMage Assessment by Hossein Talebi and Peyman Milanfar. You can learn more from this post on the Google Research Blog.

Implementation Details

  • The model was trained on the AVA (Aesthetic Visual Analysis) dataset, which contains roughly 255,500 images. You can get it from here. Note: the dataset may contain some corrupted images; remove them before you start training (a sketch for filtering them appears after this list).

  • I split the dataset into 229,981 images for training, 12,691 images for validation and 12,818 images for testing.

  • I used a VGG16 pretrained on ImageNet as the base network of the model, which reaches an EMD loss of ~0.075 on the 12,691 validation images (the EMD loss is sketched after this list). I haven't tried the other two base networks from the paper (MobileNet and Inception-v2) yet. # TODO

  • The learning-rate setting differs from the original paper: I couldn't get the model to converge with momentum SGD using a learning rate of 3e-7 for the conv base and 3e-6 for the dense head (see the parameter-group sketch after this list). I also didn't do much hyper-parameter tuning, so you can probably get better results. All other settings mirror the paper.

  • The code now supports Python 3 only.
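
One way to find the corrupted files before training is to let Pillow try to verify every image. A rough sketch (the directory path is a placeholder, and `Image.verify()` does not catch every kind of corruption):

```python
import os
from PIL import Image

def find_corrupted_images(image_dir):
    """Return the paths of files that Pillow cannot open or verify."""
    bad = []
    for name in os.listdir(image_dir):
        path = os.path.join(image_dir, name)
        try:
            with Image.open(path) as img:
                img.verify()  # raises an exception for truncated/corrupted files
        except Exception:
            bad.append(path)
    return bad

for path in find_corrupted_images("path/to/AVA/images"):  # placeholder path
    print("corrupted:", path)
```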
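
The model is trained and evaluated with the Earth Mover's Distance (EMD) loss from the paper, which compares the cumulative distributions of the predicted and ground-truth score histograms. A minimal sketch (function and variable names are mine; see the repo for the actual implementation):

```python
import torch

def emd_loss(p_pred, p_true, r=2):
    """EMD loss between two score distributions, as described in the NIMA paper (r=2).
    Both inputs have shape (batch, num_buckets) and each row sums to 1."""
    cdf_pred = torch.cumsum(p_pred, dim=-1)
    cdf_true = torch.cumsum(p_true, dim=-1)
    # per-sample EMD: mean over buckets of |CDF difference|^r, then the r-th root
    per_sample = torch.mean(torch.abs(cdf_pred - cdf_true) ** r, dim=-1) ** (1.0 / r)
    return per_sample.mean()

# tiny usage example with two 10-bucket distributions
p = torch.tensor([[0.0, 0.1, 0.2, 0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]])
q = torch.tensor([[0.0, 0.0, 0.1, 0.3, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0]])
print(emd_loss(p, q))
```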
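
The per-layer learning rates from the paper can be expressed with PyTorch parameter groups. A rough sketch of that setting, using torchvision's VGG16 as a stand-in for the repo's model (the module names and the momentum value are assumptions):

```python
import torch.optim as optim
import torchvision.models as models

# Stand-in for the repo's model; newer torchvision versions use weights= instead of pretrained=.
model = models.vgg16(pretrained=True)

optimizer = optim.SGD(
    [
        {"params": model.features.parameters()},                # conv base: default lr (3e-7)
        {"params": model.classifier.parameters(), "lr": 3e-6},  # dense head: 10x larger lr
    ],
    lr=3e-7,
    momentum=0.9,  # assumed value for momentum SGD
)
```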

Usage

  • Set --train=True and run python main.py to start training (an example command follows this list). With --batch_size=128, one training epoch takes roughly an hour on a Titan Xp GPU. For evaluation, see test.py for usage.

  • I found https://lera.ai/ a handy tool for monitoring PyTorch training in real time. Check out its site to see how to use it; remember to run pip install lera first if you want to use it.
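
For example (only --train and --batch_size are mentioned in this README; other flags, such as the dataset paths, are defined in main.py's argument parser):

```
python main.py --train=True --batch_size=128
```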

Training Statistics

Training is done with early stopping; here I set patience=5. A minimal sketch of the patience logic is shown below. (Training/validation loss curve figure.)
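
The criterion is: stop once the validation loss has not improved on its best value for `patience` consecutive epochs. A small self-contained sketch (illustrative only; main.py may implement this differently):

```python
def should_stop(val_losses, patience=5):
    """Return True if the last `patience` epochs showed no improvement
    over the best validation loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# Example: the loss stops improving after epoch 3, so training stops 5 epochs later.
losses = [0.12, 0.10, 0.09, 0.091, 0.092, 0.093, 0.094, 0.095]
print(should_stop(losses))  # True
```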

Pretrained Model

Google Drive

Annotation CSV Files

Train Validation Test

Example Results

  • Below are the predicted mean scores for some images from the validation set; the ground-truth scores are shown in parentheses. (A sketch of how the mean score is computed from the predicted distribution follows this list.)

  • Also some failure cases...

  • The predicted aesthetic ratings learned on the AVA dataset are sensitive to contrast adjustments. The images below, read left to right in row-major order, have progressively stronger contrast; the upper-rightmost image is the original input.
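
For reference, the mean (and spread) of the predicted score can be computed from the ten-bucket distribution the model outputs. A minimal sketch with hypothetical helper names (the repo's own utilities may differ):

```python
import torch

def mean_score(dist):
    """Mean aesthetic score of a predicted distribution over the ratings 1..10."""
    scores = torch.arange(1, dist.shape[-1] + 1, dtype=dist.dtype)
    return (dist * scores).sum(dim=-1)

def std_score(dist):
    """Standard deviation of the predicted score distribution."""
    scores = torch.arange(1, dist.shape[-1] + 1, dtype=dist.dtype)
    mean = (dist * scores).sum(dim=-1, keepdim=True)
    return torch.sqrt((dist * (scores - mean) ** 2).sum(dim=-1))

# Example: a distribution centred on bucket 5 yields a mean score of exactly 5.0.
p = torch.tensor([0.0, 0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05, 0.0, 0.0])
print(mean_score(p), std_score(p))  # mean 5.0, std ~1.45
```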

Requirements

  • PyTorch 0.4.0+
  • torchvision
  • numpy
  • Pillow
  • pandas (for reading the annotation CSV files)
