- written by Sammie Katt (katt.s at husky dot neu dot edu)
- find it @ https://github.com/samkatt/fba-pomdp
A code base for running (Bayes-Adaptive) reinforcement learning experiments on partially observable domains. The project is aimed at reinforcement learning researchers who want to compare methods, and it ships with a variety of environments, all of which are partially observable and discrete. Note that this project was written primarily for personal use and research, so it may lack the documentation one would typically expect from an open-source project.
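To make "partially observable" concrete: agents in such domains act on observations and rewards only, never on the hidden environment state. A minimal sketch of that control loop (names like `RandomAgent` and `run_episode` are illustrative, not taken from this code base):

```python
import random

class RandomAgent:
    """Placeholder policy; a real planner (e.g. POMCP) would go here."""
    def __init__(self, actions):
        self.actions = actions

    def act(self, obs):
        return random.choice(self.actions)

def run_episode(env, agent, horizon=20):
    """The agent only ever sees observations and rewards, never env state."""
    obs = env.reset()
    undiscounted_return = 0.0
    for _ in range(horizon):
        a = agent.act(obs)
        obs, reward, done = env.step(a)
        undiscounted_return += reward
        if done:
            break
    return undiscounted_return
```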
- BA-POMCP paper ICML 2017 http://proceedings.mlr.press/v70/katt17a/katt17a.pdf
- FBA-POMDP paper AAMAS 2019 http://ifaamas.org/Proceedings/aamas2019/pdfs/p7.pdf
mkdir wherever/you/want && cd wherever/you/want
cmake -DCMAKE_BUILD_TYPE=Release /path/to/root/of/this/project
make
./planning -D episodic-tiger -v 2 -f results.txt
cat results.txt
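The `episodic-tiger` domain above is presumably the classic tiger problem (Kaelbling et al., 1998): a tiger hides behind one of two doors, and listening yields a noisy hint. A sketch of its dynamics with the canonical parameters (85% listening accuracy, rewards -1/+10/-100); whether this code base uses exactly these values is an assumption:

```python
import random

# Canonical tiger-problem parameters; assumed, not read from this code base.
LISTEN_ACCURACY = 0.85
R_LISTEN, R_CORRECT, R_TIGER = -1.0, 10.0, -100.0

def step(tiger_side, action, rng=random):
    """One transition; action is 'listen', 'open-left', or 'open-right'.

    Returns (next_tiger_side, observation, reward, done).
    """
    if action == "listen":
        # Hear the tiger behind the correct door with prob LISTEN_ACCURACY
        correct = rng.random() < LISTEN_ACCURACY
        obs = tiger_side if correct else ("left" if tiger_side == "right" else "right")
        return tiger_side, obs, R_LISTEN, False
    opened = "left" if action == "open-left" else "right"
    reward = R_TIGER if opened == tiger_side else R_CORRECT
    # Opening a door ends the episode; the tiger is re-placed uniformly
    return rng.choice(["left", "right"]), None, reward, True
```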
./planning --help
The tabular BA-POMDP:
./bapomdp --help
Or the factored BA-POMDP:
./fbapomdp --help
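The idea behind the (tabular) BA-POMDP is to maintain Dirichlet counts over the unknown dynamics and update them Bayesianly as transitions are observed; BA-POMCP plans by sampling models from this posterior at the root. A hypothetical sketch of that bookkeeping, transition counts only (class and method names are not taken from this code base):

```python
import random

class TabularBAPOMDPBelief:
    """Dirichlet counts over the transition model of a discrete POMDP."""

    def __init__(self, n_states, n_actions, prior=1.0):
        # counts[s][a][s'] are the Dirichlet parameters of the (s, a) row
        self.counts = [[[prior] * n_states for _ in range(n_actions)]
                       for _ in range(n_states)]

    def update(self, s, a, s_next):
        """Bayesian update: observing (s, a, s') adds one count."""
        self.counts[s][a][s_next] += 1.0

    def expected_transition(self, s, a):
        """Expected dynamics under the Dirichlet posterior (normalized counts)."""
        row = self.counts[s][a]
        total = sum(row)
        return [c / total for c in row]

    def sample_model(self, s, a, rng=random):
        """Draw one transition distribution, as in root sampling for BA-POMCP."""
        draws = [rng.gammavariate(c, 1.0) for c in self.counts[s][a]]
        total = sum(draws)
        return [d / total for d in draws]
```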
See analysis/README.md
After installation, to generate the documentation in the 'doc' folder, run:
cd to/the/build/directory
make docs
- automate clang-tidy static analysis
- formatting
make clang-format
- static analysis
scan-build make
make cppcheck
python run-clang-tidy.py -checks=clang-analyzer-*,cppcoreguidelines-*,misc-*,modernize-*,performance-*,readability-*,-readability-named-parameter -header-filter=src/
- dynamic analysis (and running tests)
valgrind ./tests
(do not forget to first compile with -DCMAKE_BUILD_TYPE=Debug)