shujuner / NSpM

πŸ€– Neural SPARQL Machines translate natural language into SPARQL queries.

Home Page: http://aksw.org/Projects/NeuralSPARQLMachines


πŸ€– Neural SPARQL Machines

An LSTM-based Machine Translation Approach for Question Answering.

(Banner: British flag → seq2seq neural network → semantic triples.)

Code

Install git-lfs on your machine, then fetch all files and submodules:

git lfs fetch
git lfs checkout
git submodule update --init

Install TensorFlow (e.g., pip install tensorflow).

Data preparation

Generation

The templates used in the paper can be found in a file such as data/annotations_monument.csv. To generate the training data, launch the following command:

mkdir data/monument_300
python generator.py --templates data/annotations_monument.csv  --output data/monument_300
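Under the hood, template-based generation instantiates each (question, query) template pair with entities from the knowledge base. A minimal sketch of the idea (the template strings, placeholder names, and entity list below are illustrative assumptions, not the repo's actual file format):

```python
# Hypothetical template instantiation: each template pairs a question
# pattern with a SPARQL pattern, and every entity fills the placeholder
# in both to yield one aligned training pair.

templates = [
    # (question pattern, SPARQL pattern) -- illustrative placeholders only
    ("where is <A> located in?",
     "SELECT ?x WHERE { <A_uri> dbo:location ?x }"),
]

entities = [("edward vii monument", "dbr:Edward_VII_Monument")]

pairs = []
for q_tpl, s_tpl in templates:
    for label, uri in entities:
        pairs.append((q_tpl.replace("<A>", label),
                      s_tpl.replace("<A_uri>", uri)))

for question, query in pairs:
    print(question, "\t", query)
```

Each pair then contributes one line to data_300.en and one aligned line to data_300.sparql.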

Build the vocabularies for the two languages (i.e., English and SPARQL) with:

python build_vocab.py data/monument_300/data_300.en > data/monument_300/vocab.en
python build_vocab.py data/monument_300/data_300.sparql > data/monument_300/vocab.sparql
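A vocabulary file is simply the list of distinct tokens seen on one side of the corpus. A minimal builder in the spirit of build_vocab.py (a sketch, not necessarily the repo's implementation):

```python
# Collect the unique whitespace-separated tokens of a corpus,
# most frequent first, one token per output line.
from collections import Counter

def build_vocab(lines):
    counts = Counter(tok for line in lines for tok in line.split())
    return [tok for tok, _ in counts.most_common()]

corpus = ["where is <A> located in ?", "where is <B> located in ?"]
print("\n".join(build_vocab(corpus)))
```

The same function applies unchanged to the English side and the SPARQL side, since both are plain token sequences at this point.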

Count the lines in the data_.* files:

NUMLINES=$(wc -l < data/monument_300/data_300.sparql)
echo $NUMLINES
# 7097

Split the data_.* files into train_.*, dev_.*, and test_.* sets (usually 80%-10%-10%):

cd data/monument_300/
python ../../split_in_train_dev_test.py --lines $NUMLINES  --dataset data_300.sparql
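Because the .en and .sparql files are parallel (line i of one corresponds to line i of the other), the split has to slice both sides at the same indices. A hypothetical sketch of the 80-10-10 split (not the repo's split_in_train_dev_test.py):

```python
# Slice a list of parallel lines by index so that applying the same
# function to the .en and .sparql sides keeps them aligned.

def split_lines(lines, train=0.8, dev=0.1):
    n = len(lines)
    n_train = int(n * train)
    n_dev = int(n * dev)
    return (lines[:n_train],
            lines[n_train:n_train + n_dev],
            lines[n_train + n_dev:])

train, dev, test = split_lines(list(range(100)))
print(len(train), len(dev), len(test))  # 80 10 10
```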

Pre-generated data

Alternatively, you can extract the pre-generated data from data/monument_300.zip and data/monument_600.zip into folders with the respective names.

Training

Now go back to the initial directory and launch train.sh to train the model. The first parameter is the prefix of the data directory and the second is the number of training epochs:

sh train.sh data/monument_300 120000

This command will create a model directory called data/monument_300_model.

Inference

Predict the SPARQL query for a given question with a trained model:

sh ask.sh data/monument_300 "where is edward vii monument located in?"
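Since the network emits SPARQL as a flat token sequence, structural characters are typically encoded as word-like tokens during training and decoded back afterwards. A minimal sketch of such a decoding step (the operator token names below are illustrative assumptions, not necessarily the repo's exact encoding):

```python
# Hypothetical decoder from a flat token sequence back to SPARQL.
# The operator tokens in REPLACEMENTS are illustrative; the repo's
# actual encoding may differ.
REPLACEMENTS = {
    "brack_open": "{",
    "brack_close": "}",
    "sep_dot": ".",
    "var_x": "?x",
}

def decode(seq):
    return " ".join(REPLACEMENTS.get(tok, tok) for tok in seq.split())

print(decode("select var_x where brack_open "
             "dbr:Edward_VII_Monument dbo:location var_x brack_close"))
# select ?x where { dbr:Edward_VII_Monument dbo:location ?x }
```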
 

Chatbots, Integration & Co.

Papers

Soru and Marx et al., 2017

@inproceedings{soru-marx-2017,
    author = "Tommaso Soru and Edgard Marx and Diego Moussallem and Gustavo Publio and Andr\'e Valdestilhas and Diego Esteves and Ciro Baron Neto",
    title = "{SPARQL} as a Foreign Language",
    year = "2017",
    booktitle = "13th International Conference on Semantic Systems (SEMANTiCS 2017) - Posters and Demos",
    url = "http://w3id.org/neural-sparql-machines/soru-marx-semantics2017.html",
}

Soru et al., 2018

@inproceedings{soru-marx-nampi2018,
    author = "Tommaso Soru and Edgard Marx and Andr\'e Valdestilhas and Diego Esteves and Diego Moussallem and Gustavo Publio",
    title = "Neural Machine Translation for Query Construction and Composition",
    year = "2018",
    booktitle = "ICML Workshop on Neural Abstract Machines \& Program Induction (NAMPI v2)",
    url = "https://arxiv.org/abs/1806.10478",
}

Contact

License: MIT License