

SmBoP: Semi-autoregressive Bottom-up Semantic Parsing

The authors' implementation of the NAACL 2021 paper.

Install & Configure

  1. Install PyTorch 1.8.1 matching your CUDA version (see the example command after this list).

  2. Install the rest of the required packages:

    pip install -r requirements.txt
    
  3. Run this command to download the NLTK punkt tokenizer and stopwords data:

    python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords')"
    
  4. Download the Spider dataset with the following command:

    bash scripts/download_spider.sh 
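
For step 1, a common pip command for PyTorch 1.8.1 is shown below; it assumes CUDA 11.1, so replace the +cu111 tag with the one matching your CUDA version:

pip install torch==1.8.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html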
    

Training the parser

Use the following command to train:

python exec.py 

The first load of the dataset may take a while (a few hours), since the model reads values from the database tables and computes similarity features against the relevant question; the results are then cached for subsequent runs. Use the disable_db_content argument to reduce pre-processing time, in exchange for not performing IR on some very large tables.
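The disable_db_content argument is named above, but the exact invocation is an assumption: this sketch treats it as a command-line flag of exec.py, whereas it may instead need to be set in the Jsonnet experiment config:

python exec.py --disable_db_content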

Evaluation

To create predictions run the following command:

python eval.py --archive_path {model_path} --output preds.sql

To run the evaluation with the official Spider script:

python smbop/eval_final/evaluation.py --gold dataset/dev_gold.sql --pred preds.sql --etype all --db dataset/database --table dataset/tables.json

Pretrained model

You can download a pretrained model from here. It achieves the following results with the official evaluation script:

                       easy     medium   hard     extra    all
count                  248      446      174      166      1034
execution accuracy     0.883    0.791    0.684    0.530    0.753
exact match accuracy   0.883    0.791    0.655    0.512    0.746
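
To reproduce these numbers, point eval.py at the downloaded archive and then run the official script on the predictions. The filename below is illustrative; use whatever path you saved the archive to:

python eval.py --archive_path smbop_pretrained.tar.gz --output preds.sql
python smbop/eval_final/evaluation.py --gold dataset/dev_gold.sql --pred preds.sql --etype all --db dataset/database --table dataset/tables.json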

Demo

You can run SmBoP in a Google Colab notebook here.

Docker

You can also run the demo with Docker:

docker build -t smbop .
docker run -it --gpus=all smbop:latest

This will open an inference terminal similar to the Google Colab demo. For example, you can run:

>>>inference("Which films cost more than 50 dollars or less than 10?","cinema")
SELECT film.title FROM schedule JOIN film ON schedule.film_id = film.film_id WHERE schedule.price > 50 OR schedule.price<10

About

License: MIT

