TextLogoLayout

[CVPR 2022] Aesthetic Text Logo Synthesis via Content-aware Layout Inferring


This is the official PyTorch implementation of the paper:

Aesthetic Text Logo Synthesis via Content-aware Layout Inferring. CVPR 2022.

Paper: arXiv | Supplementary: link


Our model takes glyph images and their corresponding texts as input and synthesizes aesthetic layouts for them automatically.

English Results:

Chinese Results:


TextLogo3K Dataset

We construct a text logo dataset named TextLogo3K by collecting data from Tencent Video, one of the leading online video platforms in China. The dataset consists of 3,470 carefully selected text logo images extracted from the posters/covers of movies, TV series, and comics.

We manually annotate the bounding box, pixel-level mask, and category for each character in those text logos.
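The bounding-box and mask annotations are closely related: a tight box can always be recovered from a character's pixel mask. The helper below is not part of the released code, just a small sketch of that relationship (the mask is assumed to be a 2-D array of 0/1 values):

```python
# Hypothetical helper (not from the repo): recover a tight bounding box
# from a binary pixel mask, illustrating how the box and mask
# annotations in TextLogo3K relate to each other.
def mask_to_bbox(mask):
    """mask: 2-D list of 0/1 values. Returns (x_min, y_min, x_max, y_max),
    or None if the mask is empty."""
    ys = [i for i, row in enumerate(mask) if any(row)]
    if not ys:
        return None  # no character pixels
    xs = [j for row in mask for j, v in enumerate(row) if v]
    return (min(xs), min(ys), max(xs), max(ys))

# Example: a 4x5 mask with pixels in rows 1-2, columns 1-3
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(mask_to_bbox(mask))  # (1, 1, 3, 2)
```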

Download link: Google Drive, PKU Disk (Password: 1VEn)

Please download it, unzip it, and put the folder 'TextLogo3K' under './dataset/'.

Please note that this dataset CAN ONLY be used for academic purposes.

In addition to the layout synthesis problem addressed in our paper, our dataset can also be used for many other tasks, such as (1) text detection/segmentation, (2) texture transfer, (3) artistic text recognition, and (4) artistic font generation.

English Dataset

The English dataset we used is from TextSeg (Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach, CVPR 2021). Please follow the instructions in its homepage to request the dataset.



Requirements

  • Python 3.8
  • PyTorch 1.9.0 (it may work on lower or higher versions, but these are untested)

Please use Anaconda to build the environment:

conda create -n tll python=3.8
source activate tll

Install PyTorch by following the official instructions.

  • Other dependencies:
conda install tensorboardX scikit-image jieba
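Once the environment is built, a quick sanity check can confirm the interpreter version and report which of the packages listed above are importable (this snippet is just a convenience, not part of the repo; note that scikit-image is imported as skimage):

```python
# Quick environment sanity check for the setup described above.
import importlib.util
import sys

assert sys.version_info >= (3, 8), "Python 3.8+ is required"

# scikit-image is imported under the name 'skimage'
for pkg in ["torch", "tensorboardX", "skimage", "jieba"]:
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```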

Training and Testing


To train our model:

python --experiment_name base_model 

The training log will be written to ./experiments/base_model/logs, which can be visualized with TensorBoard. The checkpoints will be saved in ./experiments/base_model/checkpoints. All hyper-parameters can be found in the options file.

Our code supports multi-GPU training. If a single GPU's memory is insufficient, set multi_gpu to True and run:

CUDA_VISIBLE_DEVICES=0,1,2...,n python --experiment_name base_model 
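CUDA_VISIBLE_DEVICES restricts which physical GPUs the process can see; PyTorch then enumerates only those devices. A small helper (not in the repo) showing how the variable maps to visible GPU ids:

```python
import os

def visible_gpu_ids(env=None):
    """Parse CUDA_VISIBLE_DEVICES into a list of GPU ids.
    An unset variable means 'all GPUs visible' (returned as None);
    an empty string means no GPUs are visible."""
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None
    return [int(tok) for tok in value.split(",") if tok.strip()]

print(visible_gpu_ids({"CUDA_VISIBLE_DEVICES": "0,1,2"}))  # [0, 1, 2]
print(visible_gpu_ids({"CUDA_VISIBLE_DEVICES": ""}))       # []
```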

Pretrained models

Our trained checkpoints (at epoch 600) are available on Google Drive and PKU Disk. Checkpoints from different training steps may produce different styles, so we encourage you to train the model yourself and try more checkpoints.


To test our model on TextLogo3K testing dataset:

python --experiment_name base_model --test_sample_times 10 --test_epoch 600

The results will be saved in ./experiments/base_model/results.
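The --test_sample_times flag draws several layout samples per input so you can keep the best one. The generic best-of-N pattern looks like the sketch below; the sampler and scoring function here are stand-ins, not the paper's model:

```python
import random

def best_of_n(sample_fn, score_fn, n, seed=0):
    """Draw n samples from sample_fn and keep the highest-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n):
        cand = sample_fn(rng)
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

# Stand-in sampler/scorer: random 'layouts' scored by a dummy metric.
layout, score = best_of_n(
    sample_fn=lambda rng: [rng.random() for _ in range(4)],
    score_fn=lambda xs: -abs(sum(xs) - 2.0),  # prefer coordinate sums near 2
    n=10,
)
print(round(score, 3))
```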

Testing on your own data

(This feature is still under development and will be improved soon.) To test our model on your own cases: first, download the Chinese embeddings (sgns.baidubaike.bigram-char) from Chinese-Word-Vectors and put the file under './dataset/Embeddings'.

Then, generate the data from input texts and font files:

python --input_text 你好世界 --ttf_path ./dataset/ttfs/FZShengSKSJW.TTF --output_dir ./dataset/YourDataSet/
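The data-generation step presumably produces one glyph image per character of --input_text, plus the raw text itself, inside --output_dir. The bookkeeping can be sketched as below; the file layout and names here are assumptions for illustration, and the actual glyph rendering from the TTF is omitted:

```python
import os
import tempfile

def prepare_dataset(input_text, output_dir):
    """Create one placeholder glyph file per character plus the raw text.
    The directory layout here is illustrative, not the repo's actual format."""
    os.makedirs(output_dir, exist_ok=True)
    paths = []
    for i, ch in enumerate(input_text):
        path = os.path.join(output_dir, f"{i:03d}.png")
        with open(path, "wb") as f:
            f.write(b"")  # real code would render glyph `ch` from the TTF here
        paths.append(path)
    with open(os.path.join(output_dir, "text.txt"), "w", encoding="utf-8") as f:
        f.write(input_text)
    return paths

out_dir = tempfile.mkdtemp()
print(len(prepare_dataset("你好世界", out_dir)))  # 4
```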

Last, use our model to infer:

python --experiment_name base_model --test_sample_times 10 --test_epoch 500 --data_name YourDataSet --mode test

The results will be written to ./experiments/base_model/results/500/YourDataSet/



If you use this code or find our work helpful, please consider citing our work:

@inproceedings{wang2022aesthetic,
  title={Aesthetic Text Logo Synthesis via Content-aware Layout Inferring},
  author={Wang, Yizhi and Pu, Gu and Luo, Wenhan and Wang, Yexin and Xiong, Pengfei and Kang, Hongwen and Wang, Zhonghao and Lian, Zhouhui},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}



License: MIT License

