Run-Skeleton-Run
Reason8.ai PyTorch solution that took 3rd place in the NIPS 2017 "Learning to Run" reinforcement learning challenge.
Additional thanks to Mikhail Pavlov for collaboration.
Agent policies
no-flip-state-action
flip-state-action
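The two policies differ in whether training data is augmented with mirrored samples: the skeleton is bilaterally symmetric, so each (state, action) pair can be reflected left/right to double the collected experience. A minimal sketch of such a flip, assuming hypothetical index arrays that pair the left- and right-leg components (the real layout follows the osim-rl observation vector):

```python
import numpy as np

def flip_state_action(state, action, left_idx, right_idx):
    # Swap the left-leg and right-leg components of the observation.
    # left_idx / right_idx are hypothetical paired index arrays; the
    # real layout depends on the osim-rl observation format.
    flipped_state = state.copy()
    flipped_state[left_idx] = state[right_idx]
    flipped_state[right_idx] = state[left_idx]
    # The muscle-activation vector is split per leg; swap the halves.
    half = len(action) // 2
    flipped_action = np.concatenate([action[half:], action[:half]])
    return flipped_state, flipped_action
```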
How to set up the environment?
1. sh setup_conda.sh
2. source activate opensim-rl
Would you like to test the baselines? (MPI support is required.)
sudo apt-get install openmpi-bin openmpi-doc libopenmpi-dev
3. sh setup_env_mpi.sh
Or the DDPG agents?
3. sh setup_env.sh
Congrats! Now you are ready to check our agents.
Run DDPG agent
CUDA_VISIBLE_DEVICES="" PYTHONPATH=. python ddpg/train.py \
--logdir ./logs_ddpg \
--num-threads 4 \
--ddpg-wrapper \
--skip-frames 5 \
--fail-reward -0.2 \
--reward-scale 10 \
--flip-state-action \
--actor-layers 64-64 --actor-layer-norm --actor-parameters-noise \
--actor-lr 0.001 --actor-lr-end 0.00001 \
--critic-layers 64-32 --critic-layer-norm \
--critic-lr 0.002 --critic-lr-end 0.00001 \
--initial-epsilon 0.5 --final-epsilon 0.001 \
--tau 0.0001
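The --tau flag controls the soft (Polyak) update of the DDPG target networks: after each gradient step, target parameters move a small fraction tau toward the online ones, so with tau = 0.0001 the targets trail very slowly. A minimal sketch with plain numpy arrays standing in for network weights:

```python
import numpy as np

def soft_update(target, source, tau=1e-4):
    # Polyak averaging for DDPG target networks (the --tau flag):
    # theta_target <- tau * theta + (1 - tau) * theta_target,
    # applied after every gradient step.
    for name in target:
        target[name] = tau * source[name] + (1.0 - tau) * target[name]
    return target
```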
Evaluate DDPG agent
CUDA_VISIBLE_DEVICES="" PYTHONPATH=./ python ddpg/submit.py \
--restore-actor-from ./logs_ddpg/actor_state_dict.pkl \
--restore-critic-from ./logs_ddpg/critic_state_dict.pkl \
--restore-args-from ./logs_ddpg/args.json \
--num-episodes 10
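Evaluation amounts to restoring the actor and averaging the undiscounted return over --num-episodes rollouts. A minimal sketch of such a loop, using a hypothetical stand-in for the env API (reset() -> state, step(action) -> (state, reward, done)), not the repo's actual classes:

```python
def evaluate(env, policy, num_episodes=10):
    # Average undiscounted return over num_episodes rollouts,
    # mirroring what a submit script would report.
    returns = []
    for _ in range(num_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            state, reward, done = env.step(policy(state))
            total += reward
        returns.append(total)
    return sum(returns) / num_episodes
```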
Run TRPO/PPO agent
CUDA_VISIBLE_DEVICES="" PYTHONPATH=. python ddpg/train.py \
--agent ppo \
--logdir ./logs_baseline \
--baseline-wrapper \
--skip-frames 5 \
--fail-reward -0.2 \
--reward-scale 10
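The --skip-frames, --reward-scale, and --fail-reward flags shared by both agents are applied by the environment wrapper: each action is repeated for several frames, the summed reward is rescaled, and an extra penalty is added when the episode terminates early (the skeleton fell). A minimal sketch of what such a wrapper does, under the same assumed reset/step env API (not the repo's actual wrapper class):

```python
class SkipRewardWrapper:
    # Sketch of --skip-frames / --reward-scale / --fail-reward.
    # Assumed env API: reset() -> state, step(a) -> (state, reward, done).
    def __init__(self, env, skip_frames=5, reward_scale=10.0,
                 fail_reward=-0.2):
        self.env = env
        self.skip_frames = skip_frames
        self.reward_scale = reward_scale
        self.fail_reward = fail_reward

    def reset(self):
        return self.env.reset()

    def step(self, action):
        # Repeat the action, accumulate reward, penalize early termination.
        total, done = 0.0, False
        for _ in range(self.skip_frames):
            state, reward, done = self.env.step(action)
            total += reward
            if done:
                total += self.fail_reward
                break
        return state, total * self.reward_scale, done
```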