This is the code for *Learning a natural-language to LTL executable semantic parser for grounded robotics*. The attention mechanism implementation is from [1].

Dependencies:
- Python 3.7
- Torch 1.2.0
- OpenAI Baselines
- Spot 2.9.3
The execution environment and several other Python libraries are needed. We recommend creating a new virtual environment before proceeding. Then run the following:
```
git clone --recursive https://github.com/czlwang/grounded_LTL_parser.git
cd ltl-environment-dev
git checkout stable
pip install -e ltl-environment-dev
pip install -r requirements.txt
```
The compressed data is in the `data` directory.

- The `1k_10_env_1_track_no_neg_human` data contains 1,000 examples, with 10 separate demonstrations per example. The sentences are generated by humans.
- The `1k_10_env_1_track_no_neg` data contains 1,000 examples, with 10 separate demonstrations per example. The sentences are generated by a grammar.
NOTE: for most of our experiments, we use only 3 of the 10 demonstrations during training.
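For readers unfamiliar with LTL, the sketch below (illustrative only, not part of this repo) shows what it means to check an LTL-style formula against a finite demonstration trace. The formula encoding and the example propositions (`red`, `lava`) are hypothetical; real checking in this project goes through Spot.

```python
# Minimal finite-trace LTL evaluator (illustrative sketch, NOT the repo's executor).
# A trace is a list of sets: the atomic propositions that hold at each step.

def holds(formula, trace, i=0):
    """Check whether `formula` holds on `trace` starting at position i."""
    op = formula[0]
    if op == "AP":    # atomic proposition
        return formula[1] in trace[i]
    if op == "!":     # negation
        return not holds(formula[1], trace, i)
    if op == "&":     # conjunction
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "F":     # eventually: subformula holds at some j >= i
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":     # always: subformula holds at every j >= i
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":     # until: second holds at some j, first holds up to j
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

# "Avoid lava until reaching red"
formula = ("U", ("!", ("AP", "lava")), ("AP", "red"))
trace = [{"start"}, set(), {"red"}]
print(holds(formula, trace))  # True
```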
Run training with:

```
./reinforce.py configs/rl_parse_config.yml
```

(This will run for a long time. If you just want to check that everything is working, you can set `train_split` to be smaller in the config.)
The relevant config options:

- `data_dir` should be set to wherever you put the data.
- `train_mode` is either `iml`, `rl`, or `ml` for Iterative Maximum Likelihood, Reinforcement Learning, or Maximum Likelihood, respectively.
- `eval_only` is True if running evaluation only. In that case, make sure to also set `load_pretrained` and `pretrained`.
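Putting the options above together, a config might look like the following sketch. Only `data_dir`, `train_mode`, `train_split`, `eval_only`, `load_pretrained`, and `pretrained` come from the text; the values shown (and the exact YAML layout) are hypothetical, so compare against the shipped `configs/rl_parse_config.yml`:

```yaml
# Hypothetical example config -- adjust paths and values to your setup.
data_dir: data/1k_10_env_1_track_no_neg   # wherever you extracted the data
train_mode: rl          # one of: iml, rl, ml
train_split: 800        # shrink this for a quick smoke test
eval_only: False        # set True to run evaluation only
load_pretrained: False  # required (with `pretrained`) when eval_only is True
pretrained: ""          # path to a pretrained checkpoint
```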
- `reinforce.py` contains our training code; `iter_ml()` contains the Iterative Maximum Likelihood procedure.
- `model.py` contains our sequence-to-sequence model.
- `rewards.py` contains our reward methods; `compute_ltl_rewards()` contains the reward computation described in Section 4.
[1] J. Bastings. 2018. The Annotated Encoder-Decoder with Attention. https://bastings.github.io/annotated_encoder_decoder/