woshidandan / Image-Color-Aesthetics-Assessment

[ICCV 2023, Official Code] for the paper "Thinking Image Color Aesthetics Assessment: Models, Datasets and Benchmarks". Official weights and demos provided. The first dataset, algorithm, and benchmark for subjective image color aesthetics assessment.

Thinking Image Color Aesthetics Assessment: Models, Datasets and Benchmarks

Shuai He, Anlong Ming, Yaqi Li, Jinyuan Sun, ShunTian Zheng, Huadong Ma

Beijing University of Posts and Telecommunications

[A more detailed Chinese README is available for readers in China.] This repo contains the official implementation and the new ICAA17K dataset of the ICCV 2023 paper. [Our refined model of this work]

Largest Color-oriented Dataset: ICAA17K  

  • Existing IAA datasets primarily focus on evaluating the holistic quality of images and lack detailed color annotations, covering only limited color types or combinations. These datasets also exhibit serious selection bias: for example, about 50% of the images in the AVA dataset are “black and white”, outnumbering other colors by 10 to 100 times, and the PCCD and SPAQ datasets contain few “pink” and “violet” images. Therefore, these IAA datasets are not suitable for ICAA tasks and cannot support the generalization of ICAA models well. To address this issue, we develop a specialized, color-oriented dataset for the first time. To the best of our knowledge, our ICAA17K dataset is the largest and most densely annotated ICAA dataset, with a diverse range of color types and image-collection devices. You can download the dataset and labels from google drive, or from: baidu drive

ICAA17K dataset

Delegate Transformer  

  • Traditional quantization methods rely on statistical information about image pixels and ignore how spatial and semantic content affect color aesthetics. Although these methods can give qualitative analysis results, they cannot quantify the aesthetic differences caused by a tiny change in color. Data-driven methods typically extract holistic aesthetic features and lack prior color knowledge, which makes it harder for them to perceive the spatial distribution and composition of different colors in an image, leading to diffuse attention over the color space. They also cannot assign different attention weights based on color importance, which results in poor fine-grained color perception. The proposed Delegate Transformer learns to segment the color space with dedicated deformable attention rather than static pixel values, and thus captures the spatial information of color. Furthermore, different color spaces are assigned different levels of attention by the Delegate Transformer, which matches human behavior in color space segmentation (see the sketch below the figure). Note: for project-related reasons, our weights cannot be made public yet (please wait), but the training code is available, so you can train a model yourself. Download weights from: google drive.

Network architecture
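To give a concrete picture of the deformable-attention idea mentioned above, here is a minimal, single-head, single-scale PyTorch sketch of learned sampling locations with per-point attention weights. It is only an illustration under assumed names and shapes (SimpleDeformableAttention, ref_xy, the number of sampling points, etc.) and is not the actual Delegate Transformer implementation; refer to the training code in this repo for the real model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDeformableAttention(nn.Module):
    """Toy deformable attention: each query predicts where to sample the
    feature map and how much to weight each sampled point."""
    def __init__(self, dim, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offset_proj = nn.Linear(dim, n_points * 2)   # (dx, dy) per sampling point
        self.weight_proj = nn.Linear(dim, n_points)       # attention weight per point
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, query, feat, ref_xy):
        # query:  (B, Nq, C) query embeddings
        # feat:   (B, C, H, W) feature map providing values
        # ref_xy: (B, Nq, 2) reference points in [0, 1] as (x, y)
        B, Nq, C = query.shape
        offsets = self.offset_proj(query).view(B, Nq, self.n_points, 2)
        weights = self.weight_proj(query).softmax(dim=-1)               # (B, Nq, P)
        # Sampling locations, converted to [-1, 1] coordinates for grid_sample.
        loc = (ref_xy.unsqueeze(2) + offsets).clamp(0, 1) * 2 - 1       # (B, Nq, P, 2)
        value = self.value_proj(feat.flatten(2).transpose(1, 2))        # (B, H*W, C)
        value = value.transpose(1, 2).view(B, C, *feat.shape[-2:])      # (B, C, H, W)
        sampled = F.grid_sample(value, loc, align_corners=False)        # (B, C, Nq, P)
        out = (sampled * weights.unsqueeze(1)).sum(-1).transpose(1, 2)  # (B, Nq, C)
        return self.out_proj(out)

# Example: 8 queries (e.g., one per attended color region) over a 7x7 feature map.
attn = SimpleDeformableAttention(dim=256)
q = torch.randn(2, 8, 256)
fmap = torch.randn(2, 256, 7, 7)
refs = torch.rand(2, 8, 2)
out = attn(q, fmap, refs)  # -> (2, 8, 256)
```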

Largest Benchmark of Image Color Aesthetics Assessment  

  • Previously, there was no benchmark designed for subjective color aesthetics assessment. Based on ICAA17K, we release two large-scale benchmarks of 15 methods for ICAA, the most comprehensive thus far, built on two datasets: SPAQ and ICAA17K.

Benchmark

Environment Installation

  • pandas==0.22.0
  • nni==1.8
  • requests==2.18.4
  • torchvision==0.8.2+cu101
  • numpy==1.13.3
  • scipy==0.19.1
  • tqdm==4.43.0
  • torch==1.7.1+cu101
  • scikit_learn==1.0.2
  • tensorboardX==2.5
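As a quick sanity check (a hypothetical snippet, not part of the repository), you can verify that the pinned packages above are importable and report their installed versions:

```python
import importlib

packages = ["pandas", "nni", "requests", "torchvision", "numpy",
            "scipy", "tqdm", "torch", "sklearn", "tensorboardX"]
for name in packages:
    module = importlib.import_module(name)
    print(f"{name}: {getattr(module, '__version__', 'unknown')}")

import torch
# The pinned torch/torchvision builds target CUDA 10.1.
print("CUDA available:", torch.cuda.is_available())
```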

How to Run the Code

  • Note: before training on ICAA17K or SPAQ, please load the weights pre-trained on AVA; you can download them from link, or pre-train them yourself (see the sketch after this list).
  • We use the hyperparameter tuning tool nni; you may want to learn how to use it first (it only takes a few minutes), because our training and testing run through this tool.
  • To train or test, run: nnictl create --config config.yml -p 8999
  • The Web UI URLs are: http://127.0.0.1:8999 or http://172.17.0.3:8999
  • Note: nni is not required; if you don't want to use it, just make simple modifications to our code, such as changing param_group['lr'] to param_group.lr, etc. (see the sketch after this list).
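The two notes above (loading AVA-pretrained weights, and running with or without nni) can be sketched as follows. This is only a hedged illustration: the checkpoint filename, the ResNet stand-in backbone, and the default hyperparameter values are assumptions, not the repository's actual names.

```python
import torch
import torchvision
import nni

# Stand-in backbone for illustration; the repo builds its own Delegate Transformer.
model = torchvision.models.resnet50(pretrained=False)

# 1) Initialize from weights pre-trained on AVA before fine-tuning on ICAA17K/SPAQ.
#    strict=False tolerates layers (e.g. the prediction head) that differ.
state = torch.load("ava_pretrained.pth", map_location="cpu")  # assumed filename
model.load_state_dict(state, strict=False)

# 2) With nni, hyperparameters come from the tuner as a dict; without nni,
#    fall back to fixed defaults so the same training code still runs.
try:
    tuned = nni.get_next_parameter() or {}
except Exception:
    tuned = {}
params = {"lr": 1e-4, "batch_size": 16, **tuned}  # defaults are assumptions

optimizer = torch.optim.Adam(model.parameters(), lr=params["lr"])

# During/after training you can report metrics to the nni Web UI; these calls
# can simply be removed if you drop nni.
# nni.report_intermediate_result(val_metric)
# nni.report_final_result(best_metric)
```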

If you find our work useful, please cite our paper:

@article{hethinking,
  title={Thinking Image Color Aesthetics Assessment: Models, Datasets and Benchmarks},
  author={He, Shuai and Ming, Anlong and Li, Yaqi and Sun, Jinyuan and Zheng, ShunTian and Ma, Huadong},
  journal={ICCV},
  year={2023},
}

Our other works:

  • "EAT: An Enhancer for Aesthetics-Oriented Transformers.", [pdf] [code] ACMMM2023.
  • "Rethinking Image Aesthetics Assessment: Models, Datasets and Benchmarks.", [pdf] [code] IJCAI 2022.
