
UMD

If you have any questions about the code, please feel free to open an issue or contact me by email at chentsuei@gmail.com.

Official implementation of "User Attention-guided Multimodal Dialog Systems"

The code is being refactored, and the new version will be published soon.

Data

The crawled images can be downloaded here (together with the corresponding url2img.txt). The other data provided by MMD can be downloaded here.

Prerequisites

  • Python 3.5+
  • PyTorch 1.0
  • NLTK 3.4
  • PIL 5.3.0
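
A minimal setup sketch is shown below. It assumes the dependencies are installed with pip and that the PIL requirement refers to the Pillow package (which provides the PIL module) at version 5.3.0; adjust versions to match your environment.

  # Sketch only: assumes pip, and that "PIL 5.3.0" means Pillow 5.3.0
  pip install torch==1.0.0 nltk==3.4 Pillow==5.3.0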

How to run

Place the data files in the appropriate paths, set them in options/dataset_option.py, and then run python train <task> <saved_model_file>.
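
For example, a hypothetical invocation might look like the following; the task name and model file are placeholder values, not names taken from this repository.

  # Sketch only: "text_task" and "checkpoints/model.pth" are hypothetical placeholders
  python train text_task checkpoints/model.pth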

Evaluation

The Perl script mteval-v14.pl is used to evaluate the text results. First extract the results from the log files and convert them into XML files; convert.py is provided for convenience.
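
A sketch of this pipeline is shown below. The log and XML file names are hypothetical placeholders, and the exact arguments expected by convert.py should be checked against the script itself; mteval-v14.pl takes reference, source, and test XML files via -r, -s, and -t.

  # Sketch only: file names are hypothetical; check convert.py for its actual arguments
  python convert.py train.log                             # extract results from the log and write XML
  perl mteval-v14.pl -r ref.xml -s src.xml -t tst.xml     # score the converted XML files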


License

MIT License


Languages

Python 99.8%, Shell 0.2%