EPFL-VILAB / palmer

PALMER: Perception-Action Loop with Memory for Long-Horizon Planning, NeurIPS 2022

PALMER: Perception-Action Loop with Memory for Long-Horizon Planning

Onur Beker, Mohammad Mohammadi, Amir Zamir

Website | arXiv | BibTeX

Experiment Visualizations

TL;DR: We introduce PALMER, a long-horizon planning method that operates directly on high-dimensional sensory input observable by an agent on its own (e.g., images from an onboard camera). Our key idea is to retrieve previously observed trajectory segments from a replay buffer and restitch them into approximately optimal paths that connect any given pair of start and goal states. This is achieved by combining classical sampling-based planning algorithms (e.g., PRM, RRT) with learning-based perceptual representations that are informed of actions and their consequences. [1]

Summary

To achieve autonomy in a priori unknown real-world scenarios, agents should be able to:

  1. act directly from their own sensory observations, without assuming auxiliary instrumentation in their environment (e.g., a precomputed map, or an external mechanism to compute rewards).
  2. learn from past experience to continually adapt and improve after deployment.
  3. be capable of long-horizon planning.

Classical planning algorithms (e.g., PRM, RRT) are proficient at handling long-horizon planning. Deep-learning-based methods can in turn provide the necessary representations to address the other two requirements, by modeling statistical contingencies between sensory observations. [2]

In this direction, we introduce a general-purpose planning algorithm called PALMER that combines classical sampling-based planning algorithms with learning-based perceptual representations.

  • For training these representations, we combine Q-learning with contrastive representation learning to create a latent space where the distance between the embeddings of two states captures how easily an optimal policy can traverse between them (see the first sketch below).
  • For planning with these perceptual representations, we re-purpose classical sampling-based planning algorithms to retrieve previously observed trajectory segments from a replay buffer and restitch them into approximately optimal paths that connect any given pair of start and goal states (see the second sketch below).
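
To make the first point concrete, here is a minimal, action-free sketch of the Q-learning half. It uses a standard goal-conditioned recipe (reward -1 per transition, hindsight-relabeled goals) so that -Q(s, g) approximates how many steps separate s from g; this illustrates the kind of traversability distance described above, not the paper's exact objective. The contrastive term and the convolutional image encoder are omitted, and all names and hyperparameters below are illustrative, not the repo's API:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw observations (e.g., flattened image pixels) to latent embeddings."""
    def __init__(self, obs_dim, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class QHead(nn.Module):
    """Scores a (state, goal) embedding pair; -Q approximates steps-to-goal."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, z, z_goal):
        return self.net(torch.cat([z, z_goal], dim=-1)).squeeze(-1)

def td_loss(encoder, q_head, batch, gamma=0.99):
    """Temporal-difference loss with reward -1 per transition, so Q(s, g)
    learns (the negative of) the number of steps separating s from g.
    Goals are hindsight-relabeled states from the same replay trajectories."""
    z, z_goal = encoder(batch["obs"]), encoder(batch["goal"])
    q = q_head(z, z_goal)
    with torch.no_grad():
        reached = (batch["next_obs"] == batch["goal"]).all(dim=-1).float()
        target = -1.0 + gamma * (1.0 - reached) * q_head(encoder(batch["next_obs"]), z_goal)
    return ((q - target) ** 2).mean()

# Toy usage with random data, just to show the expected shapes.
enc, qh = Encoder(obs_dim=64), QHead()
batch = {k: torch.randn(8, 64) for k in ("obs", "next_obs", "goal")}
loss = td_loss(enc, qh, batch)
```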

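The second point can be pictured as PRM's main subroutines repurposed over memory: nodes are states drawn from the replay buffer, an edge is admitted when the learned latent distance between two states is small, and a shortest-path search over this graph yields a restitched path of previously observed states. The sketch below is illustrative (brute-force graph construction, hypothetical helper names), not the repo's implementation:

```python
import heapq
import numpy as np

def build_graph(embeddings, max_dist):
    """Connect replay-buffer states whose learned latent distance is below
    max_dist (brute-force O(N^2) pairing, for clarity).

    embeddings: (N, d) array of latent codes of states stored in memory.
    Returns adjacency as {node: [(neighbor, cost), ...]}.
    """
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    n = len(embeddings)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and dists[i, j] < max_dist:
                adj[i].append((j, float(dists[i, j])))
    return adj

def shortest_path(adj, start, goal):
    """Dijkstra over the retrieved graph; the output is a restitched path
    of previously observed states connecting start to goal."""
    best, prev = {start: 0.0}, {}
    frontier = [(0.0, start)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            break
        if cost > best.get(node, float("inf")):
            continue
        for nxt, w in adj[node]:
            new = cost + w
            if new < best.get(nxt, float("inf")):
                best[nxt], prev[nxt] = new, node
                heapq.heappush(frontier, (new, nxt))
    if goal != start and goal not in prev:
        return None  # memory contains no segment chain linking the two states
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy usage: 50 random "buffer states" in an 8-d latent space.
z = np.random.rand(50, 8)
path = shortest_path(build_graph(z, max_dist=0.8), start=0, goal=49)
```
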
This creates a tight feedback loop between representation learning, memory, reinforcement learning, and sampling-based planning. The end result is an experiential framework for long-horizon planning that is more robust and sample-efficient than existing methods.

Main Take-Aways

  • How to retrieve past trajectory segments from a replay buffer / memory? → by combining offline reinforcement learning with contrastive representation learning.
  • How to restitch these trajectory segments into a new path? → by repurposing the main subroutines of classical sampling-based planning algorithms.
  • What makes PALMER robust and sample-efficient? → it explicitly checks back with its memory / training dataset whenever it makes a test-time decision (see the sketch below).
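
One way to picture the last point is a memory check on candidate edges: before a connection proposed by the learned distance is trusted, verify that the replay buffer actually contains a short trajectory segment linking the two endpoints. This is a rough sketch of that idea under simplifying assumptions (endpoint matching by latent proximity); the function name and parameters are hypothetical:

```python
import numpy as np

def edge_supported_by_memory(traj_latents, z_u, z_v, max_steps=10, tol=0.5):
    """traj_latents: (T, d) latent codes of one stored trajectory, in time
    order. Returns True if some stored segment of length <= max_steps starts
    within tol of z_u and ends within tol of z_v -- i.e., past experience
    actually connects the two endpoints."""
    starts = np.linalg.norm(traj_latents - z_u, axis=-1) < tol
    ends = np.linalg.norm(traj_latents - z_v, axis=-1) < tol
    for t in np.flatnonzero(starts):
        if ends[t + 1 : t + 1 + max_steps].any():
            return True
    return False

# Toy usage: an edge is only trusted if a stored segment backs it up.
z = np.random.rand(100, 8)
trusted = edge_supported_by_memory(z, z[3], z[9], max_steps=10, tol=0.4)
```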

How to Navigate this Codebase?

Please see SETUP.md for instructions.

Citation

@article{beker2022palmer,
  author    = {Onur Beker and Mohammad Mohammadi and Amir Zamir},
  title     = {{PALMER}: Perception-Action Loop with Memory for Long-Horizon Planning},
  journal   = {arXiv preprint arXiv:coming soon!},
  year      = {2022},
}

Footnotes

  1. For a more elaborate discussion of these motivations, see [ref1, ref2].

  2. For a conceptual discussion of statistical contingencies, see [ref, wikipedia].
