Dialogue_System_Hackathon

This repository contains the code, processed data, and trained models used in DiSCoL: Toward Engaging Dialogue Systems through Conversational Line Guided Response Generation. Please cite this work as:

@inproceedings{ghazarian2021discol,
  title={DiSCoL: Toward Engaging Dialogue Systems through Conversational Line Guided Response Generation},
  author={Sarik Ghazarian and Zixi Liu and Tuhin Chakrabarty and Xuezhe Ma and Aram Galstyan and Nanyun Peng},
  booktitle={2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Demonstrations Track},
  pages={26--34},
  year={2021}
}

Please feel free to contact me for any suggestions or issues.

Steps to run the demo

Create a new environment

Create a new environment from the webdemo.yml file, which includes the libraries and packages needed to run the demo. Then install fairseq from the copy included in this repository.
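
A minimal sketch of these two steps, assuming webdemo.yml is a conda environment file and that the bundled fairseq sits in a fairseq/ directory at the repository root (adjust names and paths to the actual layout):

  conda env create -f webdemo.yml
  conda activate ENV_NAME        # the environment name is defined inside webdemo.yml
  cd fairseq && pip install --editable . && cd ..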

Load the models

We have four models that should be loaded to run DiSCoL:

  1. ent_kwd: a fine-tuned BART (Lewis et al., 2019) model that predicts convlines given the dialogue context utterance, entities, and topics.
  2. topic_cls: a fine-tuned BERT (Devlin et al., 2019) model that predicts a topic label for each given dialogue utterance.
  3. bartgen: a fine-tuned BART (Lewis et al., 2019) model that generates the next utterance (response) given the dialogue context utterance, the predicted convlines, and the topic.
  4. baseline: the pretrained DialoGPT (Zhang et al., 2019) model that we use as the baseline response generator (it does not take the predicted keywords and topic as input).

Download all these models from here and put them in a folder (e.g., ./Models). Then update the paths in the first four lines of webdemo/SETTING.py accordingly so that DiSCoL can locate and load them; a sketch of what those lines might look like is given below.
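
A hypothetical sketch of those first four lines, assuming one path variable per model; the actual variable names in webdemo/SETTING.py may differ:

  # webdemo/SETTING.py -- illustrative only; keep the variable names the file actually uses
  ent_kwd_path = "./Models/ent_kwd"      # convline predictor (BART)
  topic_cls_path = "./Models/topic_cls"  # topic classifier (BERT)
  bartgen_path = "./Models/bartgen"      # response generator (BART)
  baseline_path = "./Models/baseline"    # DialoGPT baseline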

Run demo

We encourage running DiSCoL on a machine with GPUs. If your local machine does not have a GPU, you can access a GPU machine remotely with port forwarding: ssh -L PORT_NUMBER:127.0.0.1:PORT_NUMBER MACHINE_NAME.

On the connected server, run DiSCoL on a GPU: python webdemo/app.py
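
For example, assuming the demo serves on port 5000 (check webdemo/app.py for the actual port and substitute it for PORT_NUMBER) and that you want to pin the demo to the server's first GPU:

  ssh -L 5000:127.0.0.1:5000 MACHINE_NAME
  CUDA_VISIBLE_DEVICES=0 python webdemo/app.py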

Converse with DiSCoL!

In your local browser, connect to the server at http://127.0.0.1:PORT_NUMBER.

DiSCoL should be ready to converse. Enjoy conversing with DiSCoL!
