mikel-brostrom / My_Bibliography_for_Research_on_Autonomous_Driving

Personal notes about scientific and research works on "Decision-Making for Autonomous Driving"

My Bibliography for Research on Autonomous Driving

Note: Some articles may be missing at the bottom of this preview page (due to length). Open README.md to get all the articles!

Motivation

In this document, I would like to share some personal notes about the latest exciting trends in research about decision making for autonomous driving. I keep on updating it 👷 🚧 😃

Template:

"title" [ Year ] [📝 (paper)] [:octocat: (code)] [🎞️ (video)] [ 🎓 University X ] [ 🚗 company Y ] [ related, concepts ]

Categories:

Besides, I reference additional publications in some parallel works:

Looking forward to your reading suggestions!



Architecture and Map


"BARK : Open Behavior Benchmarking in Multi-Agent Environments"

  • [ 2020 ] [📝] [:octocat:] [:octocat:] [ 🎓 Technische Universität München ] [ 🚗 Fortiss, AID ]

  • [ behavioural models, robustness, open-loop simulation, behavioural simulation, interactive human behaviors ]

Click to expand
The ObservedWorld model reflects the world that is perceived by an agent. Occlusions and sensor noise can be introduced in it. The simultaneous movement makes simulator planning cycles entirely deterministic. Source.
Two evaluations. Left: Robustness of the planning model against the transition function. The scenario's density is increased by reducing the time-headway IDM parameter of interacting vehicles. An inaccurate prediction model impacts the performance of an MCTS (2k, 4k, and 8k search iterations) and an RL-based (SAC) planner. Right: an agent from the dataset is replaced with various agent behaviour models. Four different parameter sets for the IDM. Agent sets A0, A1, A2, A6 are not replaced with the IDM since this model cannot change lane. Maintaining a specific order is key for merging, but without fine-tuning model parameters, most behaviour models fail to coexist next to replayed agents. Source.

Authors: Bernhard, J., Esterle, K., Hart, P., & Kessler, T.

  • BARK is an acronym for Behaviour BenchmARK and is open-source under the MIT license.

  • Motivations:

    • 1- Focus on driving behaviour models for planning, prediction, and simulation.
      • "BARK offers a behavior model-centric simulation framework that enables fast-prototyping and the development of behavior models. Behavior models can easily be integrated — either using Python or C++. Various behavior models are available ranging from machine learning to conventional approaches."

    • 2- Benchmark interactive behaviours.
      • "To model interactivity, planners must employ some kind of prediction model of other agents."

  • Why are existing simulation frameworks limiting?

    • "Most simulations rely on datasets and simplistic behavior models for traffic participants and do not cover the full variety of real-world, interactive human behaviors. However, existing frameworks for simulating and benchmarking behavior models rarely provide sophisticated behavior models for other agents."

    • CommonRoad: only pre-recorded data are used for the other agents, i.e. only enabling non-interactive behaviour planning.
    • CARLA: A CARLA-BARK interface is available.
      • "Being based on the Unreal Game Engine, problems like non-determinism and timing issues are introduced, that we consider undesirable when developing and comparing behavior models."

    • SUMO: Microscopic traffic simulators can model flow but neglect interactions with other vehicles and do not track the accurate motion of each agent.
  • Concept of simultaneous movement.

    • Motivation: Make simulator planning cycles entirely deterministic. This enables the simulation and experiments to be reproducible.
    • "BARK models the world as a multi-agent system with agents performing simultaneous movements in the simulated world."

    • "At fixed, discrete world time-steps, each agent plans using an agent-specific behavior model in a cloned world – the agent’s observed world."

    • Hence the other agents can actively interact with the ego vehicle.
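
A rough sketch of this simultaneous-movement idea (illustrative names only, not BARK's actual API): at each fixed world step, every agent first plans in its own cloned observed world, and only then are all actions applied at once, which keeps the cycle deterministic.

```python
import copy

def world_step(world, agents, dt=0.2):
    """One deterministic simulation cycle with simultaneous movements.

    `world` and the `agents` interface (observe / plan) are placeholders,
    not BARK classes. `observe` clones the world and may add occlusions
    and sensor noise; `plan` runs the agent-specific behaviour model.
    """
    # 1- Every agent plans in its own cloned observed world.
    actions = {agent.id: agent.plan(agent.observe(copy.deepcopy(world)), dt)
               for agent in agents}
    # 2- Only once all plans are fixed are the actions applied together,
    #    so no agent reacts to another agent's move within the same step.
    for agent in agents:
        world.apply(agent.id, actions[agent.id])
    world.time += dt
    return world
```
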
  • Implemented behaviour models:

    • IDM + MOBIL (a minimal IDM sketch follows this list).
    • RL (SAC).
      • "The reward r is calculated using Evaluators. These modules are available in our Machine Learning module. As it integrates the standard OpenAi Gym-interface, various popular RL libraries, such as TF-Agents can be easily integrated used with BARK."

    • MCTS. Single-agent or multi-agent.
      • [multi-agent] "Adapted to interactive driving by using information sets assuming simultaneous, multi-agent movements of traffic participants. They apply it to the context of cooperative planning, meaning that they introduce a cooperative cost function, which minimizes the costs for all agents."

    • Dataset Tracking Model.
      • The agent model tracks recorded trajectories as closely as possible.
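
For reference, a minimal implementation of the IDM listed above (standard formulation; the parameter values are common defaults, not necessarily the ones used in BARK):

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=15.0,    # desired speed [m/s]
                     T=1.5,      # desired time headway [s] (reduced to densify the scenarios above)
                     a_max=1.5,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,     # minimum gap [m]
                     delta=4.0):
    """Intelligent Driver Model: longitudinal acceleration of the following vehicle."""
    dv = v - v_lead  # approach rate towards the leader
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 1e-6)) ** 2)
```
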
  • Two evaluations (benchmark) of the behavioural models.

    • "Prediction (a discriminative task) deals with what will happen, whereas simulation (often a generative task) deals with what could happen. Put another way, prediction is a tool for forecasting the development of a given situation, whereas simulation is a tool for exploring a wide range of potential situations, often with the goal of probing the robot’s planning and control stack for weaknesses that can be addressed by system developers." (Brown, Driggs-Campbell, & Kochenderfer, 2020).

    • 1- Behaviour prediction:
      • What is the effect of an inaccurate prediction model on the performance of an MCTS and RL-based planner?
      • MCTS requires an explicit generative model for each transition. This prediction model used internally is evaluated here.
      • [Robustness also tested for RL] "RL can be considered as an offline planning algorithm – not relying on a prediction model but requiring a training environment to learn an optimal policy beforehand. The inaccuracy of prediction relates to the amount of behavior model inaccuracy between training and evaluation."

    • 2- Behaviour simulation.
      • How do planners perform when replacing human drivers in recorded traffic scenarios?
      • Motivation: combine data-driven (recorded -> fixed trajectories) and interactive (longitudinally controlled) scenarios.
      • "A planner is inserted into recorded scenarios. Others keep the behavior as specified in the dataset, yielding an open-loop simulation."

      • The INTERACTION Dataset is used since it provides maps, which are essential for most on-road planning approaches.
  • Results and future works.

    • [RL] "When the other agent’s behavior is different from that used in training, the collision rate rises more quickly."

    • "We conclude that current rule-based models (IDM, MOBIL) perform poorly in highly dense, interactive scenarios, as they do not model obstacle avoidance based on prediction or future interaction. MCTS can be used, but without an accurate model of the prediction, it also leads to crashes."

    • "A combination of classical and learning-based methods is computationally fast and achieves safe and comfortable motions."

    • The authors find imitation learning also promising.

"LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving"

Click to expand
A bridge is selected based on the user AD stack’s runtime framework: Autoware.AI and Autoware.Auto, which run on ROS and ROS2, can connect through standard open source ROS and ROS2 bridges, while for Baidu’s Apollo platform, which uses a custom runtime framework called Cyber RT, a custom bridge is provided to the simulator. Source.

Authors: Boise, E., Uhm, G., Gerow, M., Mehta, S., Agafonov, E., Kim, T. H., … Kim, S.

  • Motivations (Yet another simulator?):
    • "The LGSVL Simulator is a simulator that facilitates testing and development of autonomous driving software systems."

    • The main use case seems to be the integration to AD stacks: Autoware.AI, Autoware.Auto, Apollo 5.0, Apollo 3.0.
    • Compared to CARLA for instance, it seems more focused on development rather than research.
  • The simulation engine serves three functions:
    • Environment simulation
    • Sensor simulation
    • Vehicle dynamics and control simulation.
  • Miscellaneous:
    • LGSVL = LG Silicon Valley Lab.
    • Based on Unity engine.
    • An OpenAI-gym environment is provided for reinforcement learning: gym-lgsvl (a minimal usage sketch follows this list).
      • Default action space: steering and braking/throttle.
      • Default observation space: single camera image from the front camera. Can be enriched.
    • For perception training, kitti_parser.py enables generating labelled data in KITTI format.
    • A custom License is defined.
      • "You may not sell or otherwise transfer or make available the Licensed Material, any copies of the Licensed Material, or any information derived from the Licensed Material in any form to any third parties for commercial purposes."

      • It makes it hard to compare to other simulators and AD software: for instance Carla, AirSim and DeepDrive are all under MIT License while code for Autoware and Apollo is protected by the Apache 2 License.
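
A minimal sketch of how the gym-lgsvl environment mentioned above might be driven through the standard Gym interface. The package import and the environment id below are assumptions based on the repository name; the actual registration string should be checked in the gym-lgsvl documentation.

```python
import gym
import gym_lgsvl  # assumed package name; registers the LGSVL environment(s) with Gym

env = gym.make("lgsvl-v0")              # placeholder id, check the repo for the real one

obs = env.reset()                       # default observation: front-camera image
done = False
while not done:
    action = env.action_space.sample()  # default action space: steering and braking/throttle
    obs, reward, done, info = env.step(action)
env.close()
```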

"Overview of Tools Supporting Planning for Automated Driving"

Click to expand
The authors group tools that support planning in sections: maps, communication, traffic rules, middleware, simulators and benchmarks. Source.
About simulators and datasets, and how to couple tools, either with co-simulation software or open interfaces. Source.
About maps. ''The planning tasks with different targets entail map models with different level of details. HD map provides the most sufficient information and can be generally categorized into three layers: road model, lane model and localization model''. Source.

Authors: Tong, K., Ajanovic, Z., & Stettinger, G.

  • Motivations:

    • 1- Help researchers to make full use of open-source resources and reduce the effort of setting up a software platform that suits their needs.
      • [example] "It is a good option to choose open-source Autoware as software stack along with ROS middleware, as Autoware can be further transferred to a real vehicle. During the development, he or she can use Carla as a simulator, to get its benefits of graphic rendering and sensor simulation. To make the simulation more realistic, he or she might adopt commercial software CarMaker for sophisticated vehicle dynamics and open-source SUMO for large-scale traffic flow simulation. OpenDRIVE map can be used as a source and converted into the map format of Autoware, Carla and SUMO. Finally, CommonRoad can be used to evaluate the developed algorithm and benchmark it against other approaches."

    • 2- Avoid reinventing the wheel.
      • Algorithms are available/adapted from robotics.
      • Simulators are available/adapted from gaming.
  • Mentioned software libraries for motion planning:

  • How to express traffic rules in a form understandable by an algorithm?

    • 1- Traffic rules can be formalized in higher order logic (e.g. using the Isabelle theorem prover) to check the compliance of traffic rules unambiguously and formally for trajectory validation.
    • 2- Another approach is to represent traffic rules geometrically as obstacles in a configuration space of motion planning problem.
    • "In some occasions, it is necessary to violate some rules during driving for achieving higher goals (i.e. avoiding collision) [... e.g. with] a rule book with hierarchical arrangement of different rules."

  • About data visualization?

    • RViz is a popular tool in ROS for visualizing data flow, as I also realized at IV19.
    • Apart from that, it seems each team has its own specific visualization tool, sometimes released, such as AVS from Uber and GM Cruise.
  • What is missing for the research community?

    • Evaluation tools for quantitative comparison.
    • Evaluation tools incorporating human judgment, not only from the vehicle occupants but also from other road users.
    • A standard format for motion datasets.
  • I am surprised INTERACTION dataset was not mentioned.


"Decision-making for automated vehicles using a hierarchical behavior-based arbitration scheme"

  • [ 2020 ] [📝] [ 🎓 FZI, KIT ]

  • [ hierarchical behavioural planning, cost-based arbitration, behaviour components ]

Click to expand
Both urban and highway behaviour options are combined using a cost-based arbitrator. Together with Parking and AvoidCollisionInLastResort, these four arbitrators and the SafeStop fallback are composed together into the top-most priority-based AutomatedDriving arbitrator. Source.
Top-right: two possible options. The arbitrator generally prefers the follow lane behaviour as long as it matches the route. Here, a lane change is necessary and selected by the cost-based arbitration: ChangeLaneRight has lower cost than FollowEgoLane, mainly due to the routing term in the cost expression. Bottom: the resulting behaviour selection over time. Source.

Authors: Orzechowski, P. F., Burger, C., & Lauer, M.

  • Motivation:

    • Propose an alternative to FSMs (finite state machines) and behaviour-based systems (e.g. voting systems) in hierarchical architectures.
    • In particular, FSMs can suffer from:
      • poor interpretability: why is one behaviour executed?
      • maintainability: effort to refine existing behaviour.
      • scalability: effort to achieve a high number of behaviours and to combine a large variety of scenarios.
      • options selection: "multiple behaviour options are applicable but have no clear and consistent priority against each other."
        • "How and when should an automated vehicle switch from a regular ACC controller to a lane change, cooperative zip merge or parking planner?"

      • multiple planners: Each behaviour component can compute its manoeuvre command with any preferred state-of-the-art method.
        • "How can we support POMDPs, hybrid A* and any other planning method in our behaviour generation?".

  • Main idea:

    • cost-based arbitration between so-called "behaviour components".
    • The modularity of these components brings several advantages:
      • Various scenarios can be handled within a single framework: four-way intersections, T-junctions, roundabout, multilane bypass roads, parking, etc.
      • Hierarchically combining behaviours, complex behaviour emerges from simple components.
      • Good efficiency: the atomic structure allows behaviour options to be evaluated in parallel.
  • About arbitration:

    • "An arbitrator contains a list of behavior options to choose from. A specific selection logic determines which option is chosen based on abstract information, e.g., expected utility or priority."

    • [about cost] "The cost-based arbitrator selects the behavior option with the lowest expected cost."

    • Each behaviour option is evaluated based on its expected average travel velocity, incorporating routing costs and penalizing lane changes.
      • The resulting behaviour can thus be well explained:
      • "The selection logic of arbitrators is comprehensive."

    • About hierarchy:
      • "To generate even more complex behaviours, an arbitrator can also be a behaviour option of a hierarchically higher arbitrator."

  • About behaviour components.

    • These are the smallest building blocks, representing basic tactical driving manoeuvres.
    • Example of atomic behaviour components for simple tasks in urban scenarios:
      • FollowLead
      • CrossIntersection
      • ChangeLane
    • They can be specialized:
      • Dense scenarios behaviours: ApproachGap, IndicateIntention and MergeIntoGap to refine ChangeLane (multi-phase behaviour).
        • Note: an alternative could be to use one single integrated interaction-aware behaviour such as POMDP.
      • Highway behaviours (structured but high speed): MergeOntoHighway, FollowHighwayLane, ChangeHighwayLane, ExitFromHighway.
      • Parking behaviours: LeaveGarage, ParkNearGoal.
      • Fail-safe emergency behaviours: EmergencyStop, EvadeObject, SafeStop.
    • For a behaviour to be selected, it should be applicable. Hence a behaviour is defined together with:
      • invocation condition: when does it become applicable.
        • "[example:] The invocation condition of CrossIntersection is true as long as the current ego lane intersects other lanes within its planning horizon."

      • commitment condition: when does it stay applicable.
    • This reminds me of the concept of macro actions, sometimes defined by a tuple <applicability condition, termination condition, primitive policy>.
    • It also makes me think of the MODIA framework and other scene-decomposition approaches.
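
The invocation / commitment conditions above could be captured by an interface along these lines (a sketch with assumed names, close in spirit to the <applicability condition, termination condition, primitive policy> tuple of macro actions):

```python
from abc import ABC, abstractmethod

class BehaviourComponent(ABC):
    """Atomic tactical driving manoeuvre, selectable by an arbitrator."""

    @abstractmethod
    def invocation_condition(self, env) -> bool:
        """When does the behaviour become applicable? E.g. for CrossIntersection:
        the current ego lane intersects other lanes within the planning horizon."""

    @abstractmethod
    def commitment_condition(self, env) -> bool:
        """When does an already running behaviour stay applicable?"""

    @abstractmethod
    def command(self, env):
        """Manoeuvre command, computed with any preferred method (POMDP, hybrid A*, ...)."""

    def applicable(self, env, currently_active=False) -> bool:
        return self.commitment_condition(env) if currently_active \
            else self.invocation_condition(env)
```
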
  • A mid-to-mid approach:

    • "[input] The input is an abstract environment model that contains a fused, tracked and filtered representation of the world."

    • [output] The selected high-level decision is passed to a trajectory planner or controller.
    • What does the "decision" look like?
      • One-size-fits-all is not an option.
      • It is distinguished between maneuvers in a structured or unstructured environment:
      • 1- unstructured: a trajectory, directly passed to a trajectory planner.
      • 2- structured: a corridor-based driving command, i.e. a tuple <maneuver corridor, reference line, predicted objects, maneuver variant>. It requires both a trajectory planner and a controller.
  • One distinction:

    • 1- top-down knowledge-based systems.
      • "The action selection in a centralized, in a top-down manner using a knowledge database."

      • "The engineer designing the action selection module (in FSMs the state transitions) has to be aware of the conditions, effects and possible interactions of all behaviors at hand."

    • 2- bottom-up behaviour-based systems.
      • "Decouple actions into atomic simple behaviour components that should be aware of their conditions and effects."

      • E.g. voting systems.
    • Here the authors combine atomic behaviour components (bottom/down) with more complex behaviours using generic arbitrators (top/up).

"A Review of Motion Planning for Highway Autonomous Driving"

  • [ 2019 ] [📝] [ 🎓 French Institute of Science and Technology for Transport ] [ 🚗 VEDECOM Institute ]
Click to expand
The review divides motion planning into five unavoidable parts. The decision-making part contains risk evaluation, criteria minimization, and constraint submission. In the last part, a low-level reactive planner deforms the generated motion from the high-level planner. Source.
The review offers two detailed tools for comparing methods for motion planning for highway scenarios. Criteria for the generated motion include: feasible, safe, optimal, usable, adaptive, efficient, progressive and interactive. The authors stress the importance of spatiotemporal consideration and emphasize that highway motion-planning is highly structured. Source.
Contrary to solve-algorithm methods, set-algorithm methods require that a complementary algorithm be added to find the feasible motion. Depending on the importance of the generation (iv) and deformation (v) parts, approaches are more or less reactive or predictive. Finally, based on their work on AI-based algorithms, the authors define four subfamilies to compare to human driving: logic, heuristic, approximate reasoning, and human-like. Source.
The review also offers overviews for possible space configurations, i.e. the choices for decomposition of the evolution space (Sampling-Based Decomposition, Connected Cells Decomposition and Lattice Representation) as well as Path-finding algorithms (e.g. Dijkstra, A*, and RRT). Attractive and Repulsive Forces, Parametric and Semi-Parametric Curves, Numerical Optimization and Artificial Intelligence are also developed. Source.

Authors: Claussmann, L., Revilloud, M., Gruyer, D., & Glaser, S.


"A Survey of Deep Learning Applications to Autonomous Vehicle Control"

  • [ 2019 ] [📝] [ 🎓 University of Surrey ] [ 🚗 Jaguar Land Rover ]
Click to expand
Challenges for learning-based control methods. Source.

Authors: Kuutti, S., Bowden, R., Jin, Y., Barber, P., & Fallah, S.

  • Three categories are examined:
    • lateral control alone.
    • longitudinal control alone.
    • longitudinal and lateral control combined.
  • Two quotes:
    • "While lateral control is typically achieved through vision, the longitudinal control relies on measurements of relative velocity and distance to the preceding/following vehicles. This means that ranging sensors such as RADAR or LIDAR are more commonly used in longitudinal control systems.".

    • "While lateral control techniques favour supervised learning techniques trained on labelled datasets, longitudinal control techniques favour reinforcement learning methods which learn through interaction with the environment."


"Longitudinal Motion Planning for Autonomous Vehicles and Its Impact on Congestion: A Survey"

  • [ 2019 ] [📝] [ 🎓 Georgia Institute of Technology ]
Click to expand
mMP refers to machine learning methods for longitudinal motion planning. Source.

Authors: Zhou, H., & Laval, J.

  • This review has been completed at a school of "civil and environmental engineering".
    • It does not have any scientific contribution, but offers a quick overview about some current trends in decision-making.
    • The authors try to look at industrial applications (e.g. Waymo, Uber, Tesla), i.e. not just focussing on theoretical research. Since companies do not communicate explicitly about their approaches, most of their publications should be considered as research side-projects, rather than the "actual state" of the industry.
  • One focus of the review: the machine learning approaches for decision-making for longitudinal motion.
    • About the architecture and representation models. They mention the works of DeepDriving and (H. Xu, Gao, Yu, & Darrell, 2016).
      • Mediated perception approaches parse an entire scene to make a driving decision.
      • Direct perception approaches first extract affordance indicators (i.e. only the information that are important for driving in a particular situation) and then map them to actions.
        • "Only a small portion of detected objects are indeed related to the real driving reactions so that it would be meaningful to reduce the number of key perception indicators known as learning affordances".

      • Behavioural reflex approaches directly map an input image to a driving action by a regressor.
        • This end-to-end paradigm can be extended with auxiliary tasks such as learning semantic segmentation (this "side task" should further improve the model), leading to Privileged training.
    • About the learning methods:
      • BC, RL, IRL and GAIL are considered.
      • The authors argue that their memory and prediction abilities should make them stand out from the rule-based approaches.
      • "Both BC and IRL algorithms implicitly assume that the demonstrations are complete, meaning that the action for each demonstrated state is fully observable and available."

      • "We argue that adopting RL transforms the problem of learnt longitudinal motion planning from imitating human demonstrations to searching for a policy complying a hand-crafted reward rule [...] No studies have shown that a genuine reward function for human driving really exists."

  • About congestion:

"Design Space of Behaviour Planning for Autonomous Driving"

  • [ 2019 ] [📝] [ 🎓 University of Waterloo ]
Click to expand

Some figures:

The focus is on the BP module, together with its predecessor (environment) and its successor (LP) in a modular architecture. Source.
Classification for Question 1 - environment representation. A combination is possible. In black, my notes giving examples. Source.
Classification for Question 2 - on the architecture. Source.
Classification for Question 3 - on the decision logic representation. Source.

Authors: Ilievski, M., Sedwards, S., Gaurav, A., Balakrishnan, A., Sarkar, A., Lee, J., Bouchard, F., De Iaco, R., & Czarnecki K.

The authors divide their review into three sections:

  • Question 1: How to represent the environment? (relation with predecessor of BP)
    • Four representations are compared: raw data, feature-based, grid-based and latent representation.
  • Question 2: How to communicate with other modules, especially the local planner (LP)? (relation with successor (LP) of BP)
    • A first sub-question is the relevance of the separation between BP and LP.
      • A complete separation (top-down) can lead to computational redundancy (both have a collision checker).
      • One idea, close to sampling techniques, could be to invert the traditional architecture for planning, i.e. generate multiple possible local paths (~LP) then select the best manoeuvre according to a given cost function (~BP). But this exacerbates the previous point.
    • A second sub-question concerns prediction: Should the BP module have its own dedicated prediction module?
      • First, three kinds of prediction are listed, depending on what should be predicted (marked with ->):
        • Physics-based (-> trajectory).
        • Manoeuvre-based (-> low-level motion primitives).
        • Interaction-aware (-> intent).
      • Then, the authors distinguish between explicitly-defined and implicitly-defined prediction models:
        • Explicitly-defined can be:
          • Integrated with the motion planning process (called Internal prediction models) such as belief-based decision making (e.g. POMDP). Ideal for planning under uncertainty.
          • Decoupled from the planning process (called External prediction models). There is a clear interface between prediction and planning, which aids modularity.
        • Implicitly-defined, such as RL techniques.
  • Question 3: How to make BP decisions? (BP itself)
    • A first distinction in the representation of decision logic is made based on non-learnt / learnt:
      • Using a set of explicitly programmed production rules can be divided into:
        • Imperative approaches, e.g. state machines.
        • Declarative approaches often based on some probabilistic system.
          • The decision-tree structure and the (PO)MDP formulation make it more robust to uncertainty.
          • Examples include MCTS and online POMDP solvers.
      • Logic representation can also rely on mathematical models with parameters learned a priori.
        • A distinction is made depending on "where does the training data come from and when is it created?".
        • In other words, one could think of supervised learning (learning from example) versus reinforcement learning (learning from interaction).
        • The combination of both seems beneficial:
          • An initial behaviour is obtained through imitation learning (learning from example). Also possible with IRL.
          • But improvements are made through interaction with a simulated environment (learning from interaction).
            • By the way, the learning from interaction techniques raise the question of the origin of the experience (e.g. realism of the simulator) and its sampling efficiency.
        • Another promising direction is hierarchical RL where the MDP is divided into sub-problems (the lower for LP and the higher for BP)
          • The lowest level implementation of the hierarchy approximates a solution to the control and LP problem ...
          • ... while the higher level selects a manoeuvre to be executed by the lower level implementations.
    • As mentioned in the section on Scenarios and Datasets, the authors mention the lack of benchmarks to compare and evaluate the performance of BP technologies.

One quote about the representation of decision logic:

  • As identified in my notes about IV19, the combination of learnt and non-learnt approaches looks the most promising.
  • "Without learning, traditional robotic solutions cannot adequately handle complex, dynamic human environments, but ensuring the safety of learned systems remains a significant challenge."

  • "Hence, we speculate that future high performance and safe behaviour planning solutions will be hybrid and heterogeneous, incorporating modules consisting of learned systems supervised by programmed logic."


"A Behavioral Planning Framework for Autonomous Driving"

  • [ 2014 ] [📝] [ 🎓 Carnegie Mellon University ] [ 🚗 General Motors ]

  • [ behavioural planning, sampling-based planner, decision under uncertainty, TORCS ]

Click to expand

Some figures:

Comparison and fusion of the hierarchical and parallel architectures. Source.
The PCB algorithm implemented in the BP module. Source.
Related work by (Xu, Pan, Wei, & Dolan, 2014) - Grey ellipses indicate the magnitude of the uncertainty of state. Source.

Authors: Wei, J., Snider, J. M., & Dolan, J. M.

Note: I find it very valuable to get insights from the CMU (Carnegie Mellon University) team, based on their experience of the DARPA Urban Challenges.

  • Related works:
    • A prediction- and cost function-based algorithm for robust autonomous freeway driving. 2010 by (Wei, Dolan, & Litkouhi, 2010).
      • They introduced the "Prediction- and Cost-function Based (PCB) algorithm" used.
      • The idea is to generate, forward-simulate and evaluate a set of manoeuvres (a minimal sketch follows this list).
      • The planner can therefore take surrounding vehicles’ reactions into account in the cost function when it searches for the best strategy.
      • At the time, the authors rejected the option of a POMDP formulation (computing the control policy over the space of the belief state, which is a probability distribution over all the possible states), deemed computationally expensive. Improvements in hardware and algorithms have been made since 2014.
    • Motion planning under uncertainty for on-road autonomous driving. 2014 by (Xu, Pan, Wei, & Dolan, 2014).
      • An extension of the framework to consider uncertainty (both for environment and the others participants) in the decision-making.
      • The prediction module is using a Kalman Filter (assuming constant velocity).
      • For each candidate trajectory, the uncertainty can be estimated using a Linear-Quadratic Gaussian (LQG) framework (based on the noise characteristics of the localization and control).
      • Their Gaussian-based method gives some probabilistic safety guarantee (e.g. a 2% likelihood of collision).
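
A minimal sketch of the generate / forward-simulate / evaluate loop behind PCB (the candidate generator, the prediction-based simulator and the cost function are passed in as placeholders; this is not the authors' implementation):

```python
def pcb_select(candidates, forward_simulate, cost, horizon=5.0):
    """Prediction- and Cost-function Based (PCB) selection, in a nutshell.

    candidates:       iterable of candidate manoeuvres / strategies.
    forward_simulate: rolls the world forward, letting the prediction model make
                      surrounding vehicles react to the candidate manoeuvre.
    cost:             scores a rollout (progress, comfort, risk, ...).
    """
    best_manoeuvre, best_cost = None, float("inf")
    for manoeuvre in candidates:
        rollout = forward_simulate(manoeuvre, horizon)
        c = cost(rollout)
        if c < best_cost:
            best_manoeuvre, best_cost = manoeuvre, c
    return best_manoeuvre
```
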
  • Proposed architecture for decision-making:
    • First ingredient: Hierarchical architecture.
      • The hierarchy mission -> manoeuvre -> motion 3M concept makes it very modular but can raise limitations:
      • "the higher-level decision making module usually does not have enough detailed information, and the lower-level layer does not have authority to re-evaluate the decision."

    • Second ingredient: Parallel architecture.
      • This is inspired by ADAS engineering.
      • The control modules (ACC, Merge Assist, Lane Centering) are relatively independent and work in parallel.
      • In some complicated cases needing cooperation, this framework may not perform well.
        • This probably shows that just extending the common ADAS architectures cannot be enough to reach the level-5 of autonomy.
    • Idea of the proposed framework: combine the strengths of the hierarchical and parallel architectures.
      • This relieves the path planner and the control module (the search space is reduced).
      • Hence the computational cost shrinks (by over 90% compared to a sample-based planner in the spatio-temporal space).
  • One module worth mentioning: Traffic-free Reference Planner.
    • Its input: lane-level sub-missions from the Mission Planning.
    • Its output: kinematically and dynamically feasible paths and a speed profile for the Behavioural Planner (BP).
      • It assumes there is no traffic on the road, i.e. ignores dynamic obstacles.
      • It also applies traffic rules such as speed limits.
    • This guides the BP layer which considers both static and dynamic obstacles to generate so-called "controller directives" such as:
      • The lateral driving bias.
      • The desired leading vehicle to follow.
      • The aggressiveness of distance keeping.
      • The maximum speed.
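
The "controller directives" listed above could look something like the following (the field names and units are mine, only to make the interface concrete):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControllerDirectives:
    """High-level output of the Behavioural Planner, consumed by the lower layers."""
    lateral_bias_m: float                    # lateral driving bias within the lane
    lead_vehicle_id: Optional[int]           # desired leading vehicle to follow (None if free road)
    distance_keeping_aggressiveness: float   # e.g. in [0, 1]
    max_speed_mps: float                     # upper speed bound for the controller
```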


Behavioural Cloning End-To-End and Imitation Learning


"Learning to drive by imitation: an overview of deep behavior cloning methods"

  • [ 2020 ] [📝] [ 🎓 University of Moncton ]

  • [ overview ]

Click to expand
Some simulators and datasets for supervised learning of end-to-end driving. Source.
Instead of just single front-view camera frames (top and left), other sensor modalities can be used as inputs, for instance event-based cameras (bottom-right). Source.
The temporal evolution of the scene can be captured by considering a sequence of past frames. Source.
Other approaches also address the longitudinal control (top and right), while some try to exploit intermediate representations (bottom-left). Source.

Authors: Ly, A. O., & Akhloufi, M.

  • Motivation:

    • An overview of the current state-of-the-art deep behaviour cloning methods for lane-stable driving.
    • [No RL] "By end-to-end, we mean supervised methods that map raw visual inputs to low-level (steering angle, speed, etc) or high-level (driving path, driving intention, etc.) of actuation commands using almost only deep networks."

  • Five classes of methods:

    • 1- Pure imitative methods that make use of vanilla CNNs and take standard camera frames only as input.

      • The loss can be computed using the Mean Squared Error (MSE) between predictions and steering labels.
      • "Recovery from mistakes is made possible by adding synthesized data during training via simulations of car deviation from the center of the lane."

      • "Data augmentation was performed using a basic viewpoint transformation with the help of the left and right cameras."

    • 2- Models that use other types of perceptual sensors, such as event-based or fisheye cameras.

      • "A more realistic label augmentation is achieved with the help of the wide range of captures from the front fisheye camera compared to previous methods using shearing with side (right and left) cameras."

      • "Events based cameras consist of independent pixels that record intensity variation in an asynchronous way. Thus, giving more information in a time interval than traditional video cameras where changes taking place between two consecutive frames are not captured."

    • 3- Methods that consider previous driving history to estimate future driving commands.

    • 4- Systems that predict both lateral and longitudinal control commands.

      • "It outputs the vehicle curvature instead of the steering angle as generally found in the literature, which is justified by the fact that curvature is more general and does not vary from vehicle to vehicle."

    • 5- Techniques that leverage the power of mid-level representations for transfer learning or give more explanation regarding the actions taken.

      • "The motivation behind using a VAE architecture is to automatically mitigate the bias issue which occurs because generally the driving scenes in the datasets does not have the same proportions. In previous methods, this issue is solved by manually reducing the over represented scenes such as straight driving or stops."

  • Some take-aways:

    • "Models making use of non-standard cameras or intermediate representations are showing a lot of potential in comparison to pure imitative methods that takes conventional video frames as input."

    • "The diversity in metrics and datasets used for reporting the results makes it very hard to strictly weigh the different models against each other."

    • Explainability and transparency of taken decisions is important.
      • "A common approach in the literature is to analyse the pixels that lead to the greatest activation of neurons."


"Advisable Learning for Self-driving Vehicles by Internalizing Observation-to-Action Rules"

  • [ 2020 ] [📝] [ 🎓 UC Berkeley ] [:octocat:]

  • [ attention, advisability ]

Click to expand

Authors: Kim, J., Moon, S., Rohrbach, A., Darrell, T., & Canny, J.

  • Related PhD thesis: Explainable and Advisable Learning for Self-driving Vehicles, (Kim, J., 2020)

  • Motivation:

    • An end-to-end model should be explainable, i.e. provide easy-to-interpret rationales for its behaviour:
    • 1- Summarize the visual observations (input) in natural language, e.g. "light is red".
      • Visual attention is not enough, verbalizing is needed.
    • 2- Predict an appropriate action response, e.g. "I see a pedestrian crossing, so I stop".
      • I.e. Justify the decisions that are made and explain why they are reasonable in a human understandable manner, i.e., again, in natural language.
    • 3- Predict a control signal, accordingly.
      • The command is conditioned on the predicted high-level action command, e.g. "maintain a slow speed".
      • The output is a sequence of waypoints, hence end-to-mid.
  • About the dataset:

    • Berkeley DeepDrive-eXplanation (BDD-X) dataset (by the first author).
    • Together with camera front-views and IMU signal, the dataset provides:
      • 1- Textual descriptions of the vehicle's actions: what the driver is doing.
      • 2- Textual explanations for the driver's actions: why the driver took that action from the point of view of a driving instructor.
        • For instance the pair: ("the car slows down", "because it is approaching an intersection").

"Feudal Steering: Hierarchical Learning for Steering Angle Prediction"

  • [ 2020 ] [📝] [ 🎓 Rutgers University ] [ 🚗 Lockheed Martin ]

  • [ hierarchical learning, temporal abstraction, t-SNE embedding ]

Click to expand
Feudal learning for steering prediction. The worker decides the next steering angle conditioned on a goal (subroutine id) determined by the manager. The manager learns to predict these subroutine ids from a sequence of past states (brake, steer, throttle). The ground truth subroutine ids are the centres of centroids obtained by unsupervised clustering. They should contain observable semantic meaning in terms of driving tasks. Source.

Authors: Johnson, F., & Dana, K.

  • Note: Although terms and ideas from hierarchical reinforcement learning (HRL) are used, no RL is applied here!

  • Motivation: Temporal abstraction.

    • Problems in RL: delayed rewards and sparse credit assignment.
    • Some solutions: intrinsic rewards and temporal abstraction.
    • The idea of temporal abstraction is to break down the problem into more tractable pieces:
      • "At all times, human drivers are paying attention to two levels of their environment. The first level goal is on a finer grain: don’t hit obstacles in the immediate vicinity of the vehicle. The second level goal is on a coarser grain: plan actions a few steps ahead to maintain the proper course efficiently."

  • The idea of feudal learning is to divide the task into:

    • 1- A manager network.
      • It operates at a lower temporal resolution and produces goal vectors that it passes to the worker network.
      • This goal vector should encapsulate a temporally extended action called a subroutine, skill, option, or macro-action.
      • Input: Sequence of previous steering.
      • Output: goal.
    • 2- A worker network: conditioned on the goal decided by the manager.
      • Input: goal decided by the manager, previous own prediction, sequence of frames.
      • Output: steering.
    • The subroutine ids (manager net) and the steering angle prediction (worker net) are jointly learnt.
  • What are the ground truth goal used to train the manager?

    • They are ids of the centres of centroids formed by clustering (unsupervised learning) all the training data:
      • 1- Data: Steering, braking, and throttle data are concatenated every m=10 time steps to make a vector of length 3m=30.
      • 2- Encoding: projected in a t-SNE 2d-space.
      • 3- Clustering: K-means.
      • The 2d-coordinates of centroids of the clusters are the subroutine ids, i.e. the possible goals.
        • How do they convert the 2d-coordinates into a single scalar?
    • "We aim to classify the steering angles into their temporally abstracted subroutines, also called options or macro-actions, associated with highway driving such as follow the sharp right bend, bumper-to-bumper traffic, bear left slightly."

  • What are the decision frequencies?

    • The manager considers the last 10 actions to decide the goal.
    • It seems like a smoothing process, where a window is applied?
      • It should be possible to achieve that with a recurrent net, shouldn't it?
  • About t-SNE:

    • "t-Distributed Stochastic Neighbor Embedding (t-SNE) is an unsupervised, non-linear technique primarily used for data exploration and visualizing high-dimensional data. In simpler terms, t-SNE gives you a feel or intuition of how the data is arranged in a high-dimensional space [from towardsdatascience]."

    • Here it is used as an embedding space for the driving data and as the subroutine ids themselves.

"A Survey of End-to-End Driving: Architectures and Training Methods"

  • [ 2020 ] [📝] [ 🎓 University of Tartu ]

  • [ review, distribution shift problem, domain adaptation, mid-to-mid ]

Click to expand
Left: example of end-to-end architecture with key terms. Right: difference between open-loop and closed-loop evaluation. Source.

Authors: Tampuu, A., Semikin, M., Muhammad, N., Fishman, D., & Matiisen, T.

  • A rich literature overview and some useful reminders about general IL and RL concepts, with a focus on AD applications.

  • I especially like the structure of the document: It shows what one should consider when starting an end-to-end / IL project for AD:

    • I have just noted here some ideas I find interesting. In no way an exhaustive summary!
  • 1- Learning methods: working with rewards (RL) or with losses (behavioural cloning).

    • About distribution shift problem in behavioural cloning:
      • "If the driving decisions lead to unseen situations (not present in the training set), the model might no longer know how to behave".

      • Most solutions try to diversify the training data in some way - either by collecting or generating additional data:
        • data augmentation: e.g. one can place two additional cameras pointing forward-left and forward-right and associate the images with commands to turn right and turn left respectively (a minimal sketch follows this block).
        • data diversification: addition of temporally correlated noise and synthetic trajectory perturbations. Easier on "semantic" inputs than on camera inputs.
        • on-policy learning: recovery annotation and DAgger. The expert provides examples of how to solve situations that the model's driving leads to. Also "Learning by cheating" by (Chen et al. 2019).
        • balancing the dataset: by upsampling the rarely occurring angles, downsampling the common ones or by weighting the samples.
          • "Commonly, the collected datasets contain large amounts of repeated traffic situations and only few of those rare events."

          • The authors claim that only the joint distribution of inputs and outputs defines the rarity of a data point.
          • "Using more training data from CARLA Town1 decreases generalization ability in Town2. This illustrates that more data without more diversity is not useful."

          • Ideas for augmentation can be taken from the field of supervised learning, where it is already a largely-addressed topic.
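
A minimal sketch of the side-camera augmentation bullet above; the ±0.2 steering correction is a commonly used but assumed value, and the sign convention (positive = steer right) is also an assumption.

```python
def augment_with_side_cameras(samples, steering_correction=0.2):
    """samples: iterable of dicts with 'center', 'left', 'right' image paths and
    the recorded 'steering' label. Yields (image_path, steering) training pairs
    where the side views are relabelled to steer back towards the lane centre."""
    for s in samples:
        yield s["center"], s["steering"]
        yield s["left"], s["steering"] + steering_correction   # left view -> steer right
        yield s["right"], s["steering"] - steering_correction  # right view -> steer left
```
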
    • About RL:
      • Policies can be first trained with IL and then fine-tuned with RL methods.
      • "This approach reduces the long training time of RL approaches and, as the RL-based fine-tuning happens online, also helps overcome the problem of IL models learning off-policy".

    • About domain adaptation and transfer from simulation to real world (sim2real).
      • Techniques from supervised learning, such as fine tuning, i.e. adapting the driving model to the new distribution, are rarely used.
      • Instead, one can adapt the incoming data and keep the driving model fixed.
        • A first idea is to transform real images into simulation-like images (the opposite - generating real-looking images - is challenging).
        • One can also extract the semantic segmentation of the scene from both the real and the simulated images and use it as the input for the driving policy.
  • 2- Inputs.

    • In short:
      • Vision is key.
      • Lidar and HD-map are nice to have but expensive / tedious to maintain.
      • Additional inputs from independent modules (semantic segmentation, depth map, surface normals, optical flow and albedo) can improve the robustness.
    • About the inertia problem / causal confusion when for instance predicting the next ego-speed.
      • "As in the vast majority of samples the current [observed] and next speeds [to be predicted] are highly correlated, the model learns to base its speed prediction exclusively on current speed. This leads to the model being reluctant to change its speed, for example to start moving again after stopping behind another car or a at traffic light."

    • About affordances:
      • "Instead of parsing all the objects in the driving scene and performing robust localization (as modular approach), the system focuses on a small set of crucial indicators, called affordances."

  • 3- Outputs.

    • "The outputs of the model define the level of understanding the model is expected to achieve."

    • Also related to the time horizon:
      • "When predicting instantaneous low-level commands, we are not explicitly forcing the model to plan a long-term trajectory."

    • Three types of predictions:
      • 3-1 Low-level commands.
        • "The majority of end-to-end models yield as output the steering angle and speed (or acceleration and brake commands) for the next timestep".

        • Low-level commands may be car-specific. For instance vehicles answer differently to the same throttle / steering commands.
          • "The function between steering wheel angle and the resulting turning radius depends on the car's geometry, making this measure specific to the car type used for recording."

        • [About the regression loss] "Many authors have recently optimized speed and steering commands using L1 loss (mean absolute error, MAE) instead of L2 loss (mean squared error, MSE)".

      • 3-2 Future waypoints or desired trajectories.
        • This higher-level output modality is independent of car geometry.
      • 3-3 Cost map, i.e. information about where it is safe to drive, leaving the trajectory generation to another module.
    • About multitask learning and auxiliary tasks:
      • The idea is to simultaneously train a separate set of networks to predict for instance semantic segmentation, optical flow, depth and other human-understandable representations from the camera feed.
      • "Based on the same extracted visual features that are fed to the decision-making branch (main task), one can also predict ego-speed, drivable area on the scene, and positions and speeds of other objects".

      • It offers more learning signals - at least for the shared layers.
      • And can also help understand the mistakes a model makes:
        • "A failure in an auxiliary task (e.g. object detection) might suggest that necessary information was not present already in the intermediate representations (layers) that it shared with the main task. Hence, also the main task did not have access to this information and might have failed for the same reason."

  • 4- Evaluation: the difference between open-loop and closed-loop.

    • 4-1 open-loop: like in supervised learning:
      • one question = one answer.
      • Typically, a dataset is split into training and testing data.
      • Decisions are compared with the recorded actions of the demonstrator, assumed to be the ground-truth.
    • 4-2 closed-loop: like in decision processes:
      • The problem consists in a multi-step interaction with some environment.
      • It directly measures the model's ability to drive on its own.
    • Interesting facts: Good open-loop performance does not necessarily lead to good driving ability in closed-loop settings.
      • "Mean squared error (MSE) correlates with closed-loop success rate only weakly (correlation coefficient r = 0.39), so MAE, quantized classification error or thresholded relative error should be used instead (r > 0.6 for all three)."

      • About the balanced-MAE metric for open-loop evaluation, which correlates better with closed-loop performance than simple MAE.
        • "Balanced-MAE is computed by averaging the mean values of unequal-length bins according to steering angle. Because most data lies in the region around steering angle 0, equally weighting the bins grows the importance of rarely occurring (higher) steering angles."

  • 5- Interpretability:

    • 5-1 Either on the trained model ...
      • "Sensitivity analysis aims to determine the parts of an input that a model is most sensitive to. The most common approach involves computing the gradients with respect to the input and using the magnitude as the measure of sensitivity."

      • VisualBackProp: which input pixels influence the car's driving decision the most.
    • 5-2 ... or already during training.
      • "visual attention is a built-in mechanism present already when learning. Where to attend in the next timestep (the attention mask), is predicted as additional output in the current step and can be made to depend on additional sources of information (e.g. textual commands)."

  • About end-to-end neural nets and humans:

    • "[StarCraft, Dota 2, Go and Chess solved with NN]. Many of these solved tasks are in many aspects more complex than driving a car, a task that a large proportion of people successfully perform even when tired or distracted. A person can later recollect nothing or very little about the route, suggesting the task needs very little conscious attention and might be a simple behavior reflex task. It is therefore reasonable to believe that in the near future an end-to-end approach is also capable to autonomously control a vehicle."


"Efficient Latent Representations using Multiple Tasks for Autonomous Driving"

  • [ 2020 ] [📝] [ 🎓 Aalto University ]

  • [ latent space representation, multi-head decoder, auxiliary tasks ]

Click to expand
The latent representation is enforced to predict the trajectories of both the ego vehicle and other vehicles in addition to the input image, using a multi-head network structure. Source.

Authors: Kargar, E., & Kyrki, V.

  • Motivations:
    • 1- Reduce the dimensionality of the feature representation of the scene - used as input to some IL / RL policy.
      • This is to improve most mid-to-x approaches that encode and process a vehicle’s environment as multi-channel and quite high-dimensional bird view images.
      • -> The idea here is to learn an encoder-decoder.
      • The latent space has size 64 (way smaller than common 64 x 64 x N bird-views).
    • 2- Learn a latent representation faster / with fewer data.
      • A single head decoder would just consider reconstruction.
      • -> The idea here is to have multiple heads in the decoder, i.e. make predictions of multiple auxiliary, application-relevant factors.
      • "The multi-head model can reach the single-head model’s performance in 20 epochs, one-fifth of training time of the single-head model, with full dataset."

      • "In general, the multi-heal model, using only 6.25% of the dataset, converges faster and perform better than single head model trained on the full dataset."

    • 3- Learn a policy faster / with fewer data.
  • Two components to train:
    • 1- An encoder-decoder learns to produce a latent representation (encoder) coupled with a multiple-prediction-objective (decoder).
    • 2- A policy uses the latent representation to predict low-level controls.
  • About the encoder-decoder:
    • inputs: bird-view image containing:
      • Environment info, built from HD Maps and perception.
      • Ego trajectory: 10 past poses.
      • Other trajectory: 10 past poses.
      • It forms a 256 x 256 image, which is resized to 64 x 64 before being fed into the models.
    • outputs: multiple auxiliary tasks:
      • 1- Reconstruction head: reconstructing the input bird-view image.
      • 2- Prediction head: 1s-motion-prediction for other agents.
      • 3- Planning head: 1s-motion-prediction for the ego car.
  • About the policy:
    • In their example, the authors implement behaviour cloning, i.e. supervised learning to reproduce the decision of CARLA autopilot.
    • 1- steering prediction.
    • 2- acceleration classification - 3 classes.
  • How to deal with the unbalanced dataset?
    • First, the authors note that no manual labelling is required to collect training data.
    • But the recorded steering angle is zero most of the time - leading to a highly imbalanced dataset.
    • Solution (no further detail):
      • "Create a new dataset and balance it using sub-sampling".


"Robust Imitative Planning : Planning from Demonstrations Under Uncertainty"

  • [ 2019 ] [📝] [ 🎓 University of Oxford, UC Berkeley, Carnegie Mellon University ]

  • [ epistemic uncertainty, risk-aware decision-making, CARLA ]

Click to expand
Illustration of the state distribution shift in behavioural cloning (BC) approaches. The models (e.g. neural networks) usually fail to generalize and instead extrapolate confidently yet incorrectly, resulting in arbitrary outputs and dangerous outcomes. Not to mention the compounding (or cascading) errors, inherent to the sequential decision making. Source.
Testing behaviours on scenarios such as roundabouts that are not present in the training set. Source.
Above - in their previous work, the authors introduced Deep imitative models (IM). The imitative planning objective is the log posterior probability of a state trajectory, conditioned on satisfying some goal G. The state trajectory that has the highest likelihood w.r.t. the expert model q(S given φ; θ) is selected, i.e.  maximum a posteriori probability (MAP) estimate of how an expert would drive to the goal. This captures any inherent aleatoric stochasticity of the human behaviour (e.g., multi-modalities), but only uses a point-estimate of θ, thus q(s given φ;θ) does not quantify model (i.e. epistemic) uncertainty. φ denotes the contextual information (3 previous states and current LIDAR observation) and s denotes the agent’s future states (i.e. the trajectory). Bottom - in this works, an ensemble of models is used: q(s given φ; θk) where θk denotes the parameters of the k-th model (neural network). The Aggregation Operator operator is applied on the posterior p(θ given D). The previous work is one example of that, where a single θi is selected. Source.
Above - in their previous work, the authors introduced Deep imitative models (IM). The imitative planning objective is the log posterior probability of a state trajectory, conditioned on satisfying some goal G. The state trajectory that has the highest likelihood w.r.t. the expert model q(S given φ; θ) is selected, i.e. maximum a posteriori probability (MAP) estimate of how an expert would drive to the goal. This captures any inherent aleatoric stochasticity of the human behaviour (e.g., multi-modalities), but only uses a point-estimate of θ, thus q(s given φ;θ) does not quantify model (i.e. epistemic) uncertainty. φ denotes the contextual information (3 previous states and current LIDAR observation) and s denotes the agent’s future states (i.e. the trajectory). Bottom - in this works, an ensemble of models is used: q(s given φ; θk) where θk denotes the parameters of the k-th model (neural network). The Aggregation Operator operator is applied on the posterior p(θ given D). The previous work is one example of that, where a single θi is selected. Source.
To save computation and improve runtime to real-time, the authors use a trajectory library: they perform K-means clustering of the expert plan’s from the training distribution and keep 128 of the centroids. I see that as a way restrict the search in the trajectory space, similar to injecting expert knowledge about the feasibility of cars trajectories. Source.
To save computation and improve runtime to real-time, the authors use a trajectory library: they perform K-means clustering of the expert plan’s from the training distribution and keep 128 of the centroids, allegedly reducing the planning time by a factor of 400. During optimization, the trajectory space is limited to only that trajectory library. It makes me think of templates sometimes used for path-planning. I also see that as a way restrict the search in the trajectory space, similar to injecting expert knowledge about the feasibility of cars trajectories. Source.
Estimating the uncertainty is not enough. One should then forward that estimate to the planning module. This reminds me an idea of (McAllister et al., 2017) about the key benefit of propagating uncertainty throughout the AV framework. Source.
Estimating the uncertainty is not enough. One should then forward that estimate to the planning module. This reminds me an idea of (McAllister et al., 2017) about the key benefit of propagating uncertainty throughout the AV framework. Source.

Authors: Tigas, P., Filos, A., Mcallister, R., Rhinehart, N., Levine, S., & Gal, Y.

  • Previous work: "Deep Imitative Models for Flexible Inference, Planning, and Control" (see below).

    • The idea was to combine the benefits of imitation learning (IL) and goal-directed planning such as model-based RL (MBRL).
      • In other words, to complete planning based on some imitation prior, by combining generative modelling from demonstration data with planning.
      • One key idea of this generative model of expert behaviour: perform context-conditioned density estimation of the distribution over future expert trajectories, i.e. score the "expertness" of any plan of future positions.
    • Limitations:
      • It only uses a point-estimate of θ. Hence it fails to capture epistemic uncertainty in the model’s density estimate.
      • "Plans can be risky in scenes that are out-of-training-distribution since it confidently extrapolates in novel situations and lead to catastrophes".

  • Motivations here:

    • 1- Develop a model that captures epistemic uncertainty.
    • 2- Estimating uncertainty is not a goal in itself: one also needs to provide a mechanism for taking low-risk actions that are likely to allow recovery in uncertain situations.
      • I.e. both aleatoric and epistemic uncertainty should be taken into account in the planning objective.
      • This reminds me of the figure from (McAllister et al., 2017) about the key benefit of propagating uncertainty throughout the AV framework.
  • One quote about behavioural cloning (BC) that suffers from state distribution shift (co-variate shift):

    • "Where high capacity parametric models (e.g. neural networks) usually fail to generalize, and instead extrapolate confidently yet incorrectly, resulting in arbitrary outputs and dangerous outcomes".

  • One quote about model-free RL:

    • "The specification of a reward function is as hard as solving the original control problem in the first place."

  • About epistemic and aleatoric uncertainties:

    • "Generative models can provide a measure of their uncertainty in different situations, but robustness in novel environments requires estimating epistemic uncertainty (e.g., have I been in this state before?), where conventional density estimation models only capture aleatoric uncertainty (e.g., what’s the frequency of times I ended up in this state?)."

  • How to capture uncertainty about previously unseen scenarios?

    • Using an ensemble of density estimators and aggregate operators over the models’ outputs.
      • "By using demonstration data to learn density models over human-like driving, and then estimating its uncertainty about these densities using an ensemble of imitative models".

    • The idea is to take the disagreement between the models into consideration to inform planning.
      • "When a trajectory that was never seen before is selected, the model’s high epistemic uncertainty pushes us away from it. During planning, the disagreement between the most probable trajectories under the ensemble of imitative models is used to inform planning."


"End-to-end Interpretable Neural Motion Planner"

  • [ 2019 ] [📝] [ 🎓 University of Toronto ] [ 🚗 Uber ]

  • [ interpretability, trajectory sampling ]

Click to expand
The visualization of 3D detection, motion forecasting as well as the learned cost-map volume offers interpretability. A set of candidate trajectories is sampled, first considering the geometrical path and then the speed profile. The trajectory with the minimum learned cost is selected. Source.
Source.

Authors: Zeng W., Luo W., Suo S., Sadat A., Yang B., Casas S. & Urtasun R.

  • The motivation is to bridge the gap between the traditional engineering stack and end-to-end driving frameworks.

    • 1- Develop a learnable motion planner, avoiding the costly parameter tuning.
    • 2- Ensure interpretability in the motion decision. This is done by offering an intermediate representation.
    • 3- Handle uncertainty. This is allegedly achieved by using a learnt, non-parametric cost function.
    • 4- Handle multi-modality in possible trajectories (e.g. changing lane vs. keeping lane).
  • One quote about RL and IRL:

    • "It is unclear if RL and IRL can scale to more realistic settings. Furthermore, these methods do not produce interpretable representations, which are desirable in safety critical applications".

  • Architecture:

    • Input: raw LIDAR data and a HD map.
    • 1st intermediate result: An "interpretable" bird’s eye view representation that includes:
      • 3D detections.
      • Predictions of future trajectories (planning horizon of 3 seconds).
      • Some spatio-temporal cost volume defining the goodness of each position that the self-driving car can take within the planning horizon.
    • 2nd intermediate result: A set of diverse physically possible trajectories (candidates).
      • Clothoid curves are sampled: first the geometrical path is built, then the speed profile on it.
      • "Note that Clothoid curves can not handle circle and straight line trajectories well, thus we sample them separately."

    • Final output: The trajectory with the minimum learned cost.
  • Multi-objective:

    • 1- Perception Loss - to predict the position of vehicles at every time frame.
      • Classification: Distinguish a vehicle from the background.
      • Regression: Generate precise object bounding boxes.
    • 2- Planning Loss.
      • "Learning a reasonable cost volume is challenging as we do not have ground-truth. To overcome this difficulty, we minimize the max-margin loss where we use the ground-truth trajectory as a positive example, and randomly sampled trajectories as negative examples."

      • As stated, the intuition behind this is to encourage the demonstrated trajectory to have the minimal cost, and the others to have higher costs (a simplified sketch of this loss follows at the end of this section).
      • The model hence learns a cost volume that discriminates good trajectories from bad ones.
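
A simplified sketch of this max-margin idea (my own formulation, ignoring the distance-dependent margin and other refinements of the paper):

```python
import torch

def max_margin_planning_loss(cost_of, gt_trajectory, sampled_trajectories, margin=1.0):
    """The learned cost of the ground-truth trajectory should be lower than the cost
    of any randomly sampled (negative) trajectory by at least `margin`.
    cost_of: maps a trajectory to its total cost under the learned cost volume."""
    c_gt = cost_of(gt_trajectory)
    hinge_terms = []
    for traj in sampled_trajectories:
        # penalize whenever the negative trajectory is not costlier by the margin
        hinge_terms.append(torch.relu(c_gt - cost_of(traj) + margin))
    # keep only the worst violation (max-margin); a mean would be a softer variant
    return torch.stack(hinge_terms).max()
```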

"Learning from Interventions using Hierarchical Policies for Safe Learning"

  • [ 2019 ] [📝] [ 🎓 University of Rochester, University of California San Diego ]
  • [ hierarchical, sampling efficiency, safe imitation learning ]
Click to expand
The main idea is to use Learning from Interventions (LfI) in order to ensure safety and improve data efficiency, by intervening on sub-goals rather than trajectories. Both top-level policy (that generates sub-goals) and bottom-level policy are jointly learnt. Source.

Authors: Bi, J., Dhiman, V., Xiao, T., & Xu, C.

  • Motivations:
    • 1- Improve data-efficiency.
    • 2- Ensure safety.
  • One term: "Learning from Interventions" (LfI).
    • One way to classify the "learning from expert" techniques is to use the frequency of expert’s engagement.
      • High frequency -> Learning from Demonstrations.
      • Medium frequency -> Learning from Interventions.
      • Low frequency -> Learning from Evaluations.
    • Ideas of LfI:
      • "When an undesired state is detected, another policy is activated to take over actions from the agent when necessary."

      • Hence the expert overseer only intervenes when it suspects that an unsafe action is about to be taken.
    • Two issues:
      • 1- LfI (like LfD) learns reactive behaviours.
        • "Learning a supervised policy is known to have 'myopic' strategies, since it ignores the temporal dependence between consecutive states".

        • Maybe one option could be to stack frames or to include the current speed in the state. But that makes the state space larger.
      • 2- The expert only signals after a non-negligible amount of delay.
  • One idea to solve both issues: Hierarchy.
    • The idea is to split the policy into two hierarchical levels, one that generates sub-goals for the future and another that generates actions to reach those desired sub-goals.
    • The motivation is to intervene on sub-goals rather than trajectories.
    • One important parameter: k
      • The top-level policy predicts a sub-goal to be achieved k steps ahead in the future.
      • It represents a trade-off between:
        • The ability for the top-level policy to predict sub-goals far into the future.
        • The ability for the bottom-level policy to follow it correctly.
    • One question: how to deal with the absence of ground-truth sub-goals?
      • One solution is "Hindsight Experience Replay", i.e. consider an achieved goal as a desired goal for past observations (see the sketch at the end of this section).
      • The authors present additional interpolation techniques.
      • They also present a Triplet Network to train goal-embeddings (I did not understand everything).
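
A minimal sketch of how achieved states can be relabelled as sub-goals in hindsight (my own illustration in the spirit of Hindsight Experience Replay, not the authors' interpolation or triplet-embedding schemes):

```python
import numpy as np

def hindsight_subgoal_labels(states: np.ndarray, k: int) -> list:
    """states: (T, state_dim) visited states of one trajectory.
    For each time step t, the state actually reached k steps later is treated as
    the sub-goal the top-level policy should have predicted at t.
    Returns (observation, sub_goal) training pairs for the top-level policy."""
    pairs = []
    for t in range(len(states) - k):
        observation = states[t]
        achieved_subgoal = states[t + k]   # achieved goal re-used as desired goal
        pairs.append((observation, achieved_subgoal))
    return pairs
```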

"Urban Driving with Conditional Imitation Learning"

  • [ 2019 ] [📝] [🎞️] [ 🚗 Wayve ]

  • [ end-to-end, conditional IL, robust BC ]

Click to expand
The encoder is trained to reconstruct RGB, depth and segmentation, i.e. to learn scene understanding. It is augmented with optical flow for temporal information. As noted, such representations could be learned simultaneously with the driving policy, for example, through distillation. But for efficiency, this was pre-trained (humans typically also have ~30 hours of driver training before taking the driving exam, but they start with huge prior knowledge). Interesting idea: the navigation command is injected at multiple locations of the control part. Source.
Driving data is inherently heavily imbalanced: most of the captured data will be driving near-straight in the middle of a lane. Any naive training will collapse to the dominant mode present in the data. No data augmentation is performed. Instead, during training, the authors sample data uniformly across lateral and longitudinal control dimensions. Source.

Authors: Hawke, J., Shen, R., Gurau, C., Sharma, S., Reda, D., Nikolov, N., Mazur, P., Micklethwaite, S., Griffiths, N., Shah, A. & Kendall, A.

  • Motivations:
    • 1- Learn both steering and speed via Behavioural Cloning.
    • 2- Use raw sensor (camera) inputs, rather than intermediate representations.
    • 3- Train and test on dense urban environments.
  • Why "conditional"?
    • A route command (e.g. turn left, go straight) resolves the ambiguity of multi-modal behaviours (e.g. when coming at an intersection).
    • "We found that inputting the command multiple times at different stages of the network improves robustness of the model".

  • Some ideas:
    • Provide wider state observability through multiple camera views (single camera disobeys navigation interventions).
    • Add temporal information via optical flow.
      • Another option would be to stack frames. But it did not work well.
    • Train the primary shared encoders and auxiliary independent decoders for a number of computer vision tasks.
      • "In robotics, the test data is the real-world, not a static dataset as is typical in most ML problems. Every time our cars go out, the world is new and unique."

  • One concept: "Causal confusion".
    • A good video about Causal Confusion in Imitation Learning showing that "access to more information leads to worse generalisation under distribution shift".
    • "Spurious correlations cannot be distinguished from true causes in the demonstrations. [...] For example, inputting the current speed to the policy causes it to learn a trivial identity mapping, making the car unable to start from a static position."

    • Two ideas during training:
      • Using flow features to make the model use explicit motion information without learning the trivial solution of an identity mapping for speed and steering.
      • Add random noise and use dropout on it.
    • One alternative is to explicitly maintain a causal model.
    • Another alternative is to learn to predict the speed, as detailed in "Exploring the Limitations of Behavior Cloning for Autonomous Driving".
  • Output:
    • The model decides on a "motion plan", i.e. not directly the low-level controls?
    • Concretely, the network gives one prediction and one slope, for both speed and steering, leading to two parameterised lines.
  • Two types of tests:
    • 1- Closed-loop (i.e. go outside and drive).
      • The number and type of safety-driver interventions.
    • 2- Open-loop (i.e., evaluating on an offline dataset).
      • The weighted mean absolute error for speed and steering.
        • As noted, this can serve as a proxy for real world performance.
    • "As discussed by [34] and [35], the correlation between offline open-loop metrics and online closed-loop performance is weak."

  • About the training data:
    • As stated, there are two levers to increase the performance:
      • 1- Algorithmic innovation.
      • 2- Data.
    • For this IL approach, 30 hours of demonstrations.
    • "Re-moving a quarter of the data notably degrades performance, and models trained with less data are almost undriveable."

  • Next steps:
    • I find the results already impressive. But as noted:
      • "The learned driving policies presented here need significant further work to be comparable to human driving".

    • Ideas for improvements include:
      • Add some predictive long-term planning model. At the moment, it does not have access to long-term dependencies and cannot reason about the road scene.
      • Learn not only from demonstration, but also from mistakes.
        • This reminds me of the ChauffeurNet concept of "simulate the bad rather than just imitate the good".
      • Continuous learning: Learning from corrective interventions would also be beneficial.
    • The last point goes in the direction of adding learning signals, which was already done here.
      • Imitation of human expert drivers (supervised learning).
      • Safety driver intervention data (negative reinforcement learning) and corrective action (supervised learning).
      • Geometry, dynamics, motion and future prediction (self-supervised learning).
      • Labelled semantic computer vision data (supervised learning).
      • Simulation (supervised learning).
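
About the uniform sampling across control dimensions mentioned in the figure caption above, here is one possible implementation with a weighted sampler (the binning strategy and parameters are my own assumptions):

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def control_balanced_sampler(steering: np.ndarray, n_bins: int = 20) -> WeightedRandomSampler:
    """Sample (approximately) uniformly across the steering dimension so that
    near-straight driving does not dominate each batch."""
    bins = np.linspace(steering.min(), steering.max(), n_bins + 1)
    bin_ids = np.clip(np.digitize(steering, bins) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_ids, minlength=n_bins).astype(np.float64)
    weights = 1.0 / counts[bin_ids]          # rare steering values get larger weights
    return WeightedRandomSampler(torch.as_tensor(weights), num_samples=len(steering))
```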

"Application of Imitation Learning to Modeling Driver Behavior in Generalized Environments"

  • [ 2019 ] [📝] [ 🎓 Stanford ]

  • [ GAIL, RAIL, domain adaption, NGSIM ]

Click to expand
The IL models were trained on a straight road and tested on roads with high curvature. PS-GAIL is effective only while surrounded by other vehicles, while the RAIL policy remained stably within the bounds of the road thanks to the additional reward terms included in the learning process. Source.

Authors: Lange, B. A., & Brannon, W. D.

  • One motivation: Compare the robustness (domain adaptation) of three IL techniques:
    • 1- Generative Adversarial Imitation Learning (GAIL).
    • 2- Parameter Sharing GAIL (PS-GAIL).
    • 3- Reward Augmented Imitation Learning (RAIL).
  • One take-away: This student project gives a good overview of the different IL algorithms and why they came about.
    • Imitation Learning (IL) aims at building an (efficient) policy using some expert demonstrations.
    • Behavioural Cloning (BC) is a sub-class of IL. It treats IL as a supervised learning problem: a regression model is fit to the state/action space given by the expert.
      • Issue of distribution shift: "Because data is not infinite nor likely to contain information about all possible state/action pairs in a continuous state/action space, BC can display undesirable effects when placed in these unknown or not well-known states."

      • "A cascading effect is observed as the time horizon grows and errors expand upon each other."

    • Several solutions (not exhaustive):
      • 1- DAgger: Ask the expert to say what should be done in some encountered situations. Thus iteratively enriching the demonstration dataset.
      • 2- IRL: Human driving behaviour is not modelled inside a policy, but rather captured in a reward/cost function.
        • Based on this reward function, an (optimal) policy can be derived with classic RL techniques.
        • One issue: It can be computationally expensive.
      • 3- GAIL (I still need to read more about it):
        • "It fits distributions of states and actions given by an expert dataset, and a cost function is learned via Maximum Causal Entropy IRL."

        • "When the GAIL-policy driven vehicle was placed in a multi-agent setting, in which multiple agents take over the learned policy, this algorithm produced undesirable results among the agents."

    • PS-GAIL is therefore introduced for multi-agent driving models (agents share a single policy learnt with PS-TRPO).
      • "Though PS-GAIL yielded better results in multi-agent simulations than GAIL, its results still led to undesirable driving characteristics, including unwanted trajectory deviation and off-road duration."

    • RAIL offers a fix for that: the policy-learning process is augmented with two types of reward terms (see the sketch at the end of this section):
      • Binary penalties: e.g. collision and hard braking.
      • Smoothed penalties: "applied in advance of undesirable actions with the theory that this would prevent these actions from occurring".
      • I see that technique as a way to incorporate knowledge.
  • About the experiment:
    • The three policies were originally trained on the straight roadway: cars only consider the lateral distance to the edge.
    • In the "new" environment, a road curvature is introduced.
    • Findings:
      • "None of them were able to fully accommodate the turn in the road."

      • PS-GAIL is effective only while surrounded by other vehicles.
      • The smoothed reward augmentation helped RAIL, but it was too late to avoid off-road (the car is already driving too fast and does not dare a hard brake which is strongly penalized).
      • The reward function should therefore be updated (back to reward engineering 😅), for instance adding a harder reward term to prevent the car from leaving the road.
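
To illustrate the RAIL idea of binary and smoothed penalties, a small sketch (the penalty values, threshold and linear smoothing are my own assumptions):

```python
def augmented_reward(env_reward: float,
                     collided: bool,
                     hard_brake: bool,
                     dist_to_road_edge: float,
                     safe_margin: float = 1.0,
                     binary_penalty: float = 2.0,
                     smooth_scale: float = 1.0) -> float:
    """Binary penalties fire when an undesirable event occurs; the smoothed penalty
    grows as the agent approaches an undesirable state (here the road edge),
    so it acts *before* the event happens."""
    reward = env_reward
    if collided:                       # binary penalty: collision
        reward -= binary_penalty
    if hard_brake:                     # binary penalty: hard braking
        reward -= binary_penalty
    if dist_to_road_edge < safe_margin:
        # smoothed penalty: ramps up linearly inside the safety margin
        reward -= smooth_scale * (safe_margin - dist_to_road_edge) / safe_margin
    return reward
```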

"Learning by Cheating"

  • [ 2019 ] [📝] [:octocat:] [ 🎓 UT Austin ] [ 🚗 Intel Labs ]

  • [ on-policy supervision, DAgger, conditional IL, mid-to-mid, CARLA ]

Click to expand
The main idea is to decompose the imitation learning (IL) process into two stages: 1- Learn to act. 2- Learn to see. Source.
mid-to-mid learning: Based on a processed bird’s-eye view map, the privileged agent predicts a sequence of waypoints to follow. This desired trajectory is eventually converted into low-level commands by two PID controllers. It is also worth noting how this privileged agent serves as an oracle that provides adaptive on-demand supervision to train the sensorimotor agent across all possible commands. Source.
Example of privileged map supplied to the first agent. And details about the lateral PID controller that produces steering commands based on a list of target waypoints. Source.

Authors: Chen, D., Zhou, B., Koltun, V. & Krähenbühl, P

  • One motivation: decomposing the imitation learning (IL) process into two stages:
    • Direct IL (from expert trajectories to vision-based driving) conflates two difficult tasks:
      • 1- Learning to see.
      • 2- Learning to act.
  • One term: "Cheating".
    • 1- First, train an agent that has access to privileged information:
      • "This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants."

      • Goal: The agent can focus on learning to act (it does not need to learn to see because it gets direct access to the environment’s state).
    • 2- Then, this privileged agent serves as a teacher to train a purely vision-based system (abundant supervision).
      • Goal: Learning to see.
  • 1- First agent (privileged agent):
    • Input: A processed bird’s-eye view map (with ground-truth information about lanes, traffic lights, vehicles and pedestrians) together with high-level navigation command and current speed.
    • Output: A list of waypoints the vehicle should travel to.
    • Hence mid-to-mid learning approach.
    • Goal: imitate the expert trajectories.
    • Training: Behaviour cloning (BC) from a set of recorded expert driving trajectories.
      • Augmentation can be done offline, to facilitate generalization.
      • The agent is thus placed in a variety of perturbed configurations to learn how to recover.
      • E.g. facing the sidewalk or placed on the opposite lane, it should find its way back onto the road.
  • 2- Second agent (sensorimotor agent):
    • Input: Monocular RGB image, current speed, and a high-level navigation command.
    • Output: A list of waypoints.
    • Goal: Imitate the privileged agent.
  • One idea: "White-box" agent:
    • The internal state of the privileged agent can be examined at will.
      • Based on that, one could test different high-level commands: "What would you do now if the command was [follow-lane] [go left] [go right] [go straight]".
    • This relates to conditional IL: all conditional branches are supervised during training.
  • Another idea: "online learning" and "on-policy supervision":
    • "“On-policy” refers to the sensorimotor agent rolling out its own policy during training."

      • Here, the decisions of the second agent are directly implemented (closed-loop).
      • And an oracle is still available for the newly encountered situations (hence on-policy), which also accelerates the training.
      • This is an advantage of using a simulator: it would be difficult/impossible in the physical world.
    • Here, the second agent is first trained off-policy (on expert demonstration) to speed up the learning (offline BC), and only then go on-policy:
      • "Finally, we train the sensorimotor agent on-policy, using the privileged agent as an oracle that provides adaptive on-demand supervision in any state reached by the sensorimotor student."

      • The sensorimotor agent can thus be supervised on all its waypoints and across all commands at once.
    • It resembles the Dataset aggregation technique of DAgger:
      • "This enables automatic DAgger-like training in which supervision from the privileged agent is gathered adaptively via online rollouts of the sensorimotor agent."

  • About the two benchmarks:
    • 1- Original CARLA benchmark (2017).
    • 2- NoCrash benchmark (2019).
    • Interesting idea for timeout:
      • "The time limit corresponds to the amount of time needed to drive the route at a cruising speed of 10 km/h".

  • Another idea: Do not directly output low-level commands.
    • Instead, predict waypoints and speed targets.
    • And rely on two PID controllers to implement them (a sketch of the longitudinal controller follows at the end of this section).
      • 1- "We fit a parametrized circular arc to all waypoints using least-squares fitting and then steer towards a point on the arc."

      • 2- "A longitudinal PID controller tries to match a target velocity as closely as possible [...] We ignore negative throttle commands, and only brake if the predicted velocity is below some threshold (2 km/h)."


"Deep Imitative Models for Flexible Inference, Planning, and Control"

  • [ 2019 ] [📝] [🎞️] [🎞️] [:octocat:] [ 🎓 Carnegie Mellon University, UC Berkeley ]

  • [ conditional IL, model-based RL, CARLA ]

Click to expand
The main motivation is to combine the benefits of IL (to imitate some expert demonstrations) and goal-directed planning (e.g. model-based RL). Source.
φ represents the scene. It consists of the current lidar scan, previous states in the trajectory as well as the current traffic light state. Source.
From left to right: Point, Line-Segment and Region (small and wide) Final State Indicators used for planning. Source.
Comparison of features and implementations. Source.

Authors: Rhinehart, N., McAllister, R., & Levine, S.

  • Main motivation: combine the benefits of imitation learning (IL) and goal-directed planning such as model-based RL (MBRL).

    • Especially to generate interpretable, expert-like plans with offline learning and no reward engineering.
    • Neither IL nor MBRL can do so.
    • In other words, it completes planning based on some imitation prior.
  • One concept: "Imitative Models"

    • They are "probabilistic predictive models able to plan interpretable expert-like trajectories to achieve new goals".
    • As for IL -> use expert demonstration:
      • It generates expert-like behaviors without reward function crafting.
      • The model is learnt "offline", which also means it avoids costly online data collection (contrary to MBRL).
      • It learns models of desirable behaviour (as opposed to MBRL, which learns the dynamics of what is physically possible).
    • As for MBRL -> use planning:
      • It achieves new goals (goals that were not seen during training). Therefore, it avoids the theoretical drift shortcomings (distribution shift) of vanilla behavioural cloning (BC).
      • It outputs (interpretable) plans towards these goals at test-time, which IL cannot.
      • It does not need goal labels for training.
    • Binding IL and planning:
      • The learnt imitative model q(S|φ) can generate trajectories that resemble those that the expert might generate.
        • These manoeuvres do not have a specific goal. How to direct our agent to goals?
      • General tasks are defined by a set of goal variables G.
        • At test time, a route planner provides waypoints to the imitative planner, which computes expert-like paths for each candidate waypoint.
      • The best plan is chosen according to the planning objective (e.g. prefer routes avoiding potholes) and provided to a low-level PID-controller in order to produce steering and throttle actions.
      • In other words, the derived plan (list of set-points) should be:
        • Maximizing the similarity to the expert demonstrations (term with q)
        • Maximizing the probability of reaching some general goals (term with P(G)).
      • How to represent goals? (A sketch of these goal indicators follows at the end of this section.)
        • dim=0 - with points: Final-State Indicator.
        • dim=1 - with lines: Line-Segment Final-State Indicator.
        • dim=2 - with areas (regions): Final-State Region Indicator.
  • How to deal with traffic lights?

    • The concept of smart waypointer is introduced.
    • "It removes far waypoints beyond 5 meters from the vehicle when a red light is observed in the measurements provided by CARLA".

    • "The planner prefers closer goals when obstructed, when the vehicle was already stopped, and when a red light was detected [...] The planner prefers farther goals when unobstructed and when green lights or no lights were observed."

  • About interpretability and safety:

    • "In contrast to black-box one-step IL that predicts controls, our method produces interpretable multi-step plans accompanied by two scores. One estimates the plan’s expertness, the second estimates its probability to achieve the goal."

      • The imitative model can produce some expert probability distribution function (PDF), hence offering superior interpretability to one-step IL models.
      • It is able to score how likely a trajectory is to come from the expert.
      • The probability to achieve a goal is based on some "Goal Indicator methods" (using "Goal Likelihoods"). I must say I did not fully understand that part.
    • The safety aspect relies on the fact that experts were driving safely and is formalized as a "plan reliability estimation":
      • "Besides using our model to make a best-effort attempt to reach a user-specified goal, the fact that our model produces explicit likelihoods can also be leveraged to test the reliability of a plan by evaluating whether reaching particular waypoints will result in human-like behavior or not."

      • Based on this idea, a classification is performed to recognize safe and unsafe plans, based on the planning criterion.
  • About the baselines:

    • Obviously, the proposed approach is compared to the two methods it aims at combining.
    • About MBRL:
      • 1- First, a forward dynamics model is learnt using given observed expert data.
        • It does not imitate the expert preferred actions, but only models what is physically possible.
      • 2- The model then is used to plan a reachability tree through the free-space up to the waypoint while avoiding obstacles:
        • Playing with the throttle action, the search expands each state node and retains the 50 closest nodes to the target waypoint.
        • The planner finally opts for the lowest-cost path that ends near the goal.
      • "The task of evoking expert-like behavior is offloaded to the reward function, which can be difficult and time-consuming to craft properly."

    • About IL: It used Conditional terms on States, leading to CILS.
      • S for state: Instead of emitting low-level control commands (throttle, steering), it outputs set-points for some PID-controller.
      • C for conditional: To navigate at intersections, waypoints are classified into one of several directives: {Turn left, Turn right, Follow Lane, Go Straight}.
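
To make the goal indicators and the imitative planning objective more concrete, here is how I picture them (a Gaussian-style formulation of my own; the paper optimizes the objective over trajectories directly, whereas I simply score a fixed candidate set):

```python
import numpy as np

def goal_log_likelihood(final_state: np.ndarray, goal, kind: str, sigma: float = 1.0) -> float:
    """Log-likelihood of reaching the goal, based on the plan's final state:
    - 'point':   goal is a single waypoint,
    - 'segment': goal is a line segment (a, b),
    - 'region':  goal is approximated here by a disc (center, radius)."""
    if kind == "point":
        d = np.linalg.norm(final_state - goal)
    elif kind == "segment":
        a, b = goal
        ab = b - a
        t = np.clip(np.dot(final_state - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        d = np.linalg.norm(final_state - (a + t * ab))
    elif kind == "region":
        center, radius = goal
        d = max(0.0, np.linalg.norm(final_state - center) - radius)
    else:
        raise ValueError(kind)
    return -0.5 * (d / sigma) ** 2

def imitative_plan(candidates, log_q, goal, kind="point"):
    """Pick the candidate maximizing imitation prior (term with q) + goal likelihood (term with G)."""
    scores = [log_q(s) + goal_log_likelihood(s[-1], goal, kind) for s in candidates]
    return candidates[int(np.argmax(scores))]
```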

"Conditional Vehicle Trajectories Prediction in CARLA Urban Environment"

  • [ 2019 ] [📝] [🎞️] [ 🚗 Valeo ]

  • [ conditional IL, CARLA, distributional shift problem ]

Click to expand

Some figures:

End-to-Mid approach: 3 inputs with different levels of abstraction are used to predict the future positions on a fixed 2s-horizon of the ego vehicle and the neighbours. The ego trajectory is then implemented by an external PID controller - therefore, not end-to-end. Source.
The past 3D-bounding boxes of the road users in the current reference are projected back in the current camera space. The past positions of ego and other vehicles are projected into some grid-map called proximity map. The image and the proximity map are concatenated to form context feature vector C. This context encoding is concatenated with the ego encoding, then fed into branches corresponding to the different high-level goals - conditional navigation goal. Source.
Illustration of the distribution shift in imitation learning. Source.
VisualBackProp highlights the image pixels which contributed the most to the final results - Traffic lights and their colours are important, together with lane markings and curbs when there is a significant lateral deviation. Source.

Authors: Buhet, T., Wirbel, E., & Perrotton, X.

  • Previous works:
  • One term: "End-To-Middle".
    • It is opposed to "End-To-End", i.e. it does not output "end" control signals such as throttle or steering but rather some desired trajectory, i.e. a mid-level representation.
      • Each trajectory is described by two polynomial functions (one for x, the other for y), therefore the network has to predict a vector (x0, ..., x4, y0, ..., y4) for each vehicle (see the decoding sketch at the end of this section).
      • The desired ego-trajectory is then implemented by an external controller (PID). Therefore, not end-to-end.
    • Advantages of end-to-mid: interpretability for the control part + less to be learnt by the net.
    • This approach is also an instance of "Direct perception":
      • "Instead of commands, the network predicts hand-picked parameters relevant to the driving (distance to the lines, to other vehicles), which are then fed to an independent controller".

    • Small digression: if the raw perception measurements were first processed to form a mid-level input representation, the approach would be said mid-to-mid. An example is ChauffeurNet, detailed on this page as well.
  • About Ground truth:
    • The expert demonstrations do not come from human recordings but rather from CARLA autopilot.
    • 15 hours of driving in Town01 were collected.
    • As for human demonstrations, no annotation is needed.
  • One term: "Conditional navigation goal".
    • Together with the RGB images and the past positions, the network takes as input a navigation command to describe the desired behaviour of the ego vehicle at intersections.
    • Hence, the future trajectory of the ego vehicle is conditioned by a navigation command.
      • If the ego-car is approaching an intersection, the goal can be left, right or cross, else the goal is to keep lane.
      • That means lane-change is not an option.
    • "The last layers of the network are split into branches which are masked with the current navigation command, thus allowing the network to learn specific behaviours for each goal".

  • Three ingredients to improve vanilla end-to-end imitation learning (IL):
    • 1- Mix of high and low-level input (i.e. hybrid input):
      • Both raw signal (images) and partial environment abstraction (navigation commands) are used.
    • 2- Auxiliary tasks:
      • One head of the network predicts the future trajectories of the surrounding vehicles.
        • It differs from the primary task which should decide the 2s-ahead trajectory for the ego car.
        • Nevertheless, this secondary task helps: "Adding the neighbours prediction makes the ego prediction more compliant to traffic rules."
      • This refers to the concept of "Privileged learning":
        • "The network is partly trained with an auxiliary task on a ground truth which is useful to driving, and on the rest is only trained for IL".

    • 3- Label augmentation:
      • The main challenge of IL is the difference between train and online test distributions. This is due to the difference between
        • Open-loop control: decisions are not implemented.
        • Closed-loop control: decisions are implemented, and the vehicle can end up in a state absent from the train distribution, potentially causing "error accumulation".
      • Data augmentation is used to reduce the gap between train and test distributions.
        • Classical randomization is combined with label augmentation: data similar to failure cases is generated a posteriori.
      • Three findings:
      • "There is a significant gap in performance when introducing the augmentation."

      • "The effect is much more noticeable on complex navigation tasks." (Errors accumulate quicker).

      • "Online test is the real significant indicator for IL when it is used for active control." (The common offline evaluation metrics may not be correlated to the online performance).

  • Baselines:
  • One word about the choice of the simulator.
    • A possible alternative to CARLA could be DeepDrive or the LGSVL simulator developed by the Advanced Platform Lab at the LG Electronics America R&D Centre. This looks promising.
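
Since each predicted trajectory is a pair of degree-4 polynomials (see above), decoding it back into way-points for the PID controller could look as follows (the sampling rate and the coefficient ordering are my own assumptions):

```python
import numpy as np

def decode_trajectory(coeffs: np.ndarray, horizon: float = 2.0, n_points: int = 20) -> np.ndarray:
    """coeffs: predicted vector (x0, ..., x4, y0, ..., y4) for one vehicle,
    assumed to mean x(t) = x0 + x1*t + ... + x4*t^4 (same for y).
    Returns (n_points, 2) way-points over the 2s horizon."""
    x_coeffs, y_coeffs = coeffs[:5], coeffs[5:]
    t = np.linspace(0.0, horizon, n_points)
    xs = np.polyval(x_coeffs[::-1], t)   # np.polyval expects highest degree first
    ys = np.polyval(y_coeffs[::-1], t)
    return np.stack([xs, ys], axis=1)
```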

"Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control"

  • [ 2019 ] [📝] [ 🎓 Oxford University ]

  • [ uncertainty-aware decision, Bayesian inference, CARLA ]

Click to expand

One figure:

The trust or uncertainty in one decision can be measured based on the probability mass function around its mode. Source.
The measures of uncertainty based on mutual information can be used to issue warnings to the driver and perform safety / emergency manoeuvres. Source.
As noted by the authors: while the variance can be useful in collision avoidance, the wide variance of HMC causes a larger proportion of trajectories to fall outside of the safety boundary when a new weather is applied. Source.

Authors: Michelmore, R., Wicker, M., Laurenti, L., Cardelli, L., Gal, Y., & Kwiatkowska, M

  • One related work:
    • NVIDIA’s PilotNet [DAVE-2] where expert demonstrations are used together with supervised learning to map from images (front camera) to steering command.
    • Here, human demonstrations are collected in the CARLA simulator.
  • One idea: use distribution in weights.
    • The difference with PilotNet is that the neural network applies the "Bayesian" paradigm, i.e. each weight is described by a distribution (not just a single value).
    • The authors illustrate the benefits of that paradigm, imagining an obstacle in the middle of the road.
      • The Bayesian controller may be uncertain on the steering angle to apply (e.g. a 2-tail or M-shape distribution).
      • A first option is to sample angles, which turns the car either right or left, with equal probability.
      • Another option would be to simply select the mean value of the distribution, which aims straight at the obstacle.
      • The motivation of this work is based on that example: "derive some precise quantitative measures of the BNN uncertainty to facilitate the detection of such ambiguous situation".
  • One definition: "real-time decision confidence".
    • This is the probability that the BNN controller is certain of its decision at the current time.
    • The notion of trust can therefore be introduced: the idea is to compute the probability mass in an ε-ball around the decision π(observation) and classify it as certain if the resulting probability is greater than a threshold (see the sketch at the end of this section).
      • It reminds me of the concept of trust-region optimisation in RL.
      • In extreme cases, all actions are equally distributed, π(observation) has a very high variance, the agent does not know what to do (no trust) and will randomly sample an action.
  • How to get these estimates? Three Bayesian inference methods are compared:
    • Monte Carlo dropout (MCD).
    • Mean-field variational inference (VI).
    • Hamiltonian Monte Carlo (HMC).
  • What to do with this information?
    • "This measure of uncertainty can be employed together with commonly employed measures of uncertainty, such as mutual information, to quantify in real time the degree that the model is confident in its predictions and can offer a notion of trust in its predictions."

      • I did not know about "mutual information" and liked the explanation of Wikipedia about the link of MI to entropy and KL-div.
        • I am a little bit confused: in what I read, the MI is a function of two random variables. What are they here? The authors rather speak about the uncertainty exhibited by the predictive distribution.
    • Depending on the uncertainty level, several actions are taken:
      • mutual information warnings slow down the vehicle.
      • standard warnings slow down the vehicle and alert the operator of potential hazard.
      • severe warnings cause the car to safely brake and ask the operator to take control back.
  • Another definition: "probabilistic safety", i.e. the probability that a BNN controller will keep the car "safe".
    • Nice, but what is "safe"?
    • It all relies on the assumption that expert demonstrations were all "safe", and measures how much of the trajectory belongs to this "safe set".
    • I must admit I did not fully understand the measure on "safety" for some continuous trajectory and discrete demonstration set:
      • A car can drive with a large lateral offset from the demonstration on a wide road while being "safe", while a thin lateral shift in a narrow street can lead to an "unsafe" situation.
      • Not to mention that the scenario (e.g. configuration of obstacles) has probably changed in-between.
      • This leads to the following point with an interesting application for scenario coverage.
  • One idea: apply changes in scenery and weather conditions to evaluate model robustness.
    • To check the generalization ability of a model, the safety analysis is re-run (offline) with other weather conditions.
    • As noted in conclusion, this offline safety probability can be used as a guide for active learning in order to increase data coverage and scenario representation in training data.
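
A sketch of the real-time decision confidence computation as I understand it, using MC dropout as the Bayesian inference method (ε, the number of samples and the trust threshold are illustrative values):

```python
import torch

def decision_confidence(bnn_policy, observation, epsilon=0.05, n_samples=50, threshold=0.9):
    """Draw several stochastic forward passes (dropout kept active at test time),
    take the mean action as the decision, and measure the fraction of samples
    falling inside an epsilon-ball around it: the decision is trusted ("certain")
    if that probability mass exceeds the threshold."""
    bnn_policy.train()                      # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([bnn_policy(observation) for _ in range(n_samples)])
    decision = samples.mean(dim=0)
    dists = torch.linalg.norm(samples - decision, dim=-1)
    mass_in_ball = (dists < epsilon).float().mean().item()
    return decision, mass_in_ball, mass_in_ball > threshold
```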

"Exploring the Limitations of Behavior Cloning for Autonomous Driving"

  • [ 2019 ] [📝] [🎞️] [:octocat:] [ 🎓 CVC, UAB, Barcelona ] [ 🚗 Toyota ]

  • [ distributional shift problem, off-policy data collection, CARLA, conditional imitation learning, residual architecture, reproducibility issue, variance caused by initialization and sampling ]

Click to expand

One figure:

Conditional Imitation Learning is extended with a ResNet architecture and Speed prediction (CILRS). Source.

Authors: Codevilla, F., Santana, E., Antonio, M. L., & Gaidon, A.

  • One term: “CILRS” = Conditional Imitation Learning extended with a ResNet architecture and Speed prediction.
  • One Q&A: How to include in E2E learning information about the destination, i.e. to disambiguate imitation around multiple types of intersections?
    • Add a high-level navigational command (e.g. take the next right, left, or stay in lane) to the tuple <observation, expert action> when building the dataset.
  • One idea: learn to predict the ego speed (mediated perception) to address the inertia problem stemming from causal confusion (biased correlation between low speed and no acceleration - when the ego vehicle is stopped, e.g. at a red traffic light, the probability it stays static is indeed overwhelming in the training data).
  • Another idea: The off-policy (expert) driving demonstration is not produced by a human, but rather generated from an omniscient "AI" agent.
  • One quote:

"The more common the vehicle model and color, the better the trained agent reacts to it. This raises ethical challenges in automated driving".


"Conditional Affordance Learning for Driving in Urban Environments"

  • [ 2018 ] [📝] [🎞️] [🎞️] [:octocat:] [ 🎓 CVC, UAB, Barcelona ] [ 🚗 Toyota ]

  • [ CARLA, end-to-mid, direct perception ]

Click to expand

Some figures:

Examples of affordances, i.e. attributes of the environment which limit the space of allowed actions. A1, A2 and A3 are predefined observation areas. Source.
The presented direct perception method predicts a low-dimensional intermediate representation of the environment - affordance - which is then used in a conventional control algorithm. The affordance is conditioned for goal-directed navigation, i.e. before each intersection, it receives an instruction such as go straight, turn left or turn right. Source.
The feature maps produced by a CNN feature extractor are stored in a memory and consumed by task-specific layers (one affordance has one task block). Every task block has its own specific temporal receptive field - it decides how much of the memory it needs. This figure also illustrates how the navigation command is used as switch between trained submodules. Source.

Authors: Sauer, A., Savinov, N., & Geiger, A.

  • One term: "Direct perception" (DP):
    • The goal of DP methods is to predict a low-dimensional intermediate representation of the environment which is then used in a conventional control algorithm to manoeuvre the vehicle.
    • With this regard, DP could also be said end-to-mid. The mapping to learn is less complex than end-to-end (from raw input to controls).
    • DP is meant to combine the advantages of two other commonly-used approaches: modular pipelines MP and end-to-end methods such as imitation learning IL or model-free RL.
    • Ground truth affordances are collected using CARLA. Several augmentations are performed.
  • Related work on affordance learning and direct perception.
  • One term: "Conditional Affordance Learning" (CAL):
    • "Conditional": The actions of the agent are conditioned on a high-level command given by the navigation system (the planner) prior to intersections. It describes the manoeuvre to be performed, e.g., go straight, turn left, turn right.
    • "Affordance": Affordances are one example of DP representation. They are attributes of the environment which limit the space of allowed actions. Only 6 affordances are used for CARLA urban driving:
      • Distance to vehicle (continuous).
      • Relative angle (continuous and conditional).
      • Distance to centre-line (continuous and conditional).
      • Speed Sign (discrete).
      • Red Traffic Light (discrete - binary).
      • Hazard (discrete - binary).
        • The Class Weighted Cross Entropy is the loss used for discrete affordances to put more weights on rare but important occurrences (hazard occurs rarely compared to traffic light red).
    • "Learning": A single neural network trained with multi-task learning (MTL) predicts all affordances in a single forward pass (~50ms). It only takes a single front-facing camera view as input.
  • About the controllers: The path-velocity decomposition is applied. Hence two controllers are used in parallel:
    • 1- throttle and brake
      • Based on the predicted affordances, a state is assigned in a rule-based manner among: cruising, following, over limit, red light, and hazard stop (all are mutually exclusive - see the sketch at the end of this section).
      • Based on this state, the longitudinal control signals are derived, using PID or threshold-predefined values.
      • It can handle traffic lights, speed signs and smooth car-following.
      • Note: The Supplementary Material provides detailed insights on controller tuning (especially PID) for CARLA.
    • 2- steering is controlled by a Stanley Controller, based on two conditional affordances: distance to centreline and relative angle.
  • One idea: I am often wondering what timeout I should set when testing a scenario with CARLA. The author computes this time based on the length of the pre-defined path (which is actually easily accessible):
    • "The time limit equals the time needed to reach the goal when driving along the optimal path at 10 km/h"

  • Another idea: Attention Analysis.
    • To better understand how affordances are constructed, the attention of the CNN is analysed using gradient-weighted class activation maps (Grad-CAMs).
    • This "visual explanation" reminds me of another technique used in end-to-end approaches, VisualBackProp, that highlights the image pixels which contributed the most to the final results.
  • Baselines and results:
  • Where to provide the high-level navigation conditions?
    • The authors find that "conditioning in the network has several advantages over conditioning in the controller".
    • In addition, in the net, it is preferable to use the navigation command as switch between submodules rather than an input:
      • "We observed that training specialized submodules for each directional command leads to better performance compared to using the directional command as an additional input to the task networks".


"Variational Autoencoder for End-to-End Control of Autonomous Driving with Novelty Detection and Training De-biasing"

  • [ 2018 ] [📝] [🎞️] [🎞️] [ 🎓 MIT ] [ 🚗 Toyota ]

  • [ VAE, uncertainty estimation, sampling efficiency, augmentation ]

Click to expand

Some figures:

One particular latent variable ^y is explicitly supervised to predict steering control. Another interesting idea: augmentation is based on domain knowledge - if a method used to the middle-view is given some left-view image, it should predict some correction to the right. Source.
For each new image, empirical uncertainty estimates are computed by sampling from the variables of the latent space. These estimates lead to the D statistic that indicates whether an observed image is well captured by our trained model, i.e. novelty detection. Source.
In a subsequent work, the VAE is conditioned onto the road topology. It serves multiple purposes such as localization and end-to-end navigation. The routed or unrouted map given as additional input goes toward the mid-to-end approach where processing is performed and/or external knowledge is embedded. Source. See this video for the temporal evolution of the predictions.

Authors: Amini, A., Schwarting, W., Rosman, G., Araki, B., Karaman, S., & Rus, D.

  • Issues raised about vanilla E2E:
    • The lack of a measure of confidence associated with the prediction.
    • The lack of interpretation of the learned features.
    • Having said that, the authors present an approach to both understand and estimate the confidence of the output.
    • The idea is to use a Variational Autoencoder (VAE), taking advantage of its intermediate latent representation, which is learnt in an unsupervised way and provides uncertainty estimates for every variable in the latent space via their parameters.
  • One idea for the VAE: one particular latent variable is explicitly supervised to predict steering control.
    • The loss function of the VAE has therefore 3 parts:
      • A reconstruction loss: L1-norm between the input image and the output image.
      • A latent loss: KL-divergence between the latent variables and a unit Gaussian, providing regularization for the latent space.
      • A supervised latent loss: MSE between the predicted and actual curvature of the vehicle’s path.
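A minimal PyTorch-style sketch of this 3-part loss, assuming a standard diagonal-Gaussian latent space; the weighting coefficients `alpha` and `beta` are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def vae_control_loss(x, x_rec, mu, logvar, curv_pred, curv_true,
                     alpha=1e-3, beta=10.0):
    # 1- Reconstruction loss: L1-norm between input and reconstructed image.
    rec_loss = F.l1_loss(x_rec, x)
    # 2- Latent loss: KL-divergence between the latent variables and a unit
    #    Gaussian, regularizing the latent space.
    kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # 3- Supervised latent loss: MSE between the predicted and actual
    #    curvature of the vehicle's path (the one supervised latent variable).
    sup_loss = F.mse_loss(curv_pred, curv_true)
    return rec_loss + alpha * kl_loss + beta * sup_loss
```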
  • One contribution: "Detection of novel events" (which have not been sufficiently trained for).
    • To check if an observed image is well captured by the trained model, the idea is to propagate the VAE’s latent uncertainty through the decoder and compare the result with the original input. This is done by sampling (empirical uncertainty estimates).
    • The resulting pixel-wise expectation and variance are used to compute a sort of loss metric D(x, ˆx) whose distribution for the training-set is known (approximated with a histogram).
    • The image x is classified as novel if this statistic falls outside the 95th percentile of the training distribution; the model can then be considered "untrusted to produce reliable outputs".
    • "Our work presents an indicator to detect novel images that were not contained in the training distribution by weighting the reconstructed image by the latent uncertainty propagated through the network. High loss indicates that the model has not been trained on that type of image and thus reflects lower confidence in the network’s ability to generalize to that scenario."

  • A second contribution: "Automated debiasing against learned biases".
    • As for the novelty detection, it takes advantage of the latent space distribution and the possibility of sampling from the most representative regions of this space.
    • Briefly said, the idea is to increase the proportion of rarer datapoints by dropping over-represented regions of the latent space to accelerate the training (sampling efficiency).
    • This debiasing is not manually specified beforehand but based on learned latent variables.
  • One reason to use single frame prediction (as opposed to RNN):
    • ""Note that only a single image is used as input at every time instant. This follows from original observations where models that were trained end-to-end with a temporal information (CNN+LSTM) are unable to decouple the underlying spatial information from the temporal control aspect. While these models perform well on test datasets, they face control feedback issues when placed on a physical vehicle and consistently drift off the road.""

  • One idea about augmentation (also met in the Behavioral Cloning Project of the Udacity Self-Driving Car Engineer Nanodegree):
    • "To inject domain knowledge into our network we augmented the dataset with images collected from cameras placed approximately 2 feet to the left and right of the main centre camera. We correspondingly changed the supervised control value to teach the model how to recover from off-centre positions."

  • One note about the output:
    • "We refer to steering command interchangeably as the road curvature: the actual steering angle requires reasoning about road slip and control plant parameters that change between vehicles."

  • Previous and further works:
    • "Spatial Uncertainty Sampling for End-to-End control" - (Amini, Soleimany, Karaman, & Rus, 2018)
    • "Variational End-to-End Navigation and Localization" - (Amini, Rosman, Karaman, & Rus, 2019)
      • One idea: incorporate some coarse-grained roadmaps with raw perceptual data.
        • Either unrouted (just containing the drivable roads). Output = continuous probability distribution over steering control.
        • Or routed (target road highlighted). Output = deterministic steering control to navigate.
      • How to evaluate the continuous probability distribution over steering control given the human "scalar" demonstration? (A sketch follows after this list.)
        • "For a range of z-scores over the steering control distribution we compute the number of samples within the test set where the true (human) control output was within the predicted range."

      • About the training dataset: 25 km of urban driving data.
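Relating to the z-score evaluation quoted above, a possible implementation sketch (my own reading of the metric): for each z, count how often the human steering falls inside the predicted mu ± z·sigma interval.

```python
import numpy as np

def coverage_curve(mu, sigma, y_true, z_scores=(0.5, 1.0, 1.5, 2.0)):
    """Fraction of test samples whose true (human) control lies within the
    predicted interval mu +/- z * sigma, for a range of z-scores."""
    mu, sigma, y_true = map(np.asarray, (mu, sigma, y_true))
    return {z: float(np.mean(np.abs(y_true - mu) <= z * sigma))
            for z in z_scores}
```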

"ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst"

  • [ 2018 ] [📝] [🎞️] [🎞️] [ 🚗 Waymo ]

  • [ imitation learning, distributional shift problem ]

Click to expand

Two figures:

Different layers composing the mid-level representation. Source.
Training architecture around ChauffeurNet with the different loss terms, that can be grouped into environment and imitation losses. Source.

Authors: Bansal, M., Krizhevsky, A., & Ogale, A.

  • One term: "mid-level representation"
    • The decision-making task (between perception and control) is packed into one single "learnable" module.
      • Input: the representation divided into several image-like layers:
        • Map features such as lanes, stop signs, cross-walks...; Traffic lights; Speed Limit; Intended route; Current agent box; Dynamic objects; Past agent poses.
        • Such a representation is generic, i.e. independent of the number of dynamic objects and independent of the road geometry/topology (see the stacking sketch after this bullet).
        • I discuss some equivalent representations seen at IV19.
      • Output: the driving trajectory, i.e. the future poses recurrently predicted by the introduced ChauffeurNet model.
    • This architecture lies between E2E (from pixels directly to control) and fully decomposed modular pipelines (decomposing planning in multiple modules).
    • Two notable advantages over E2E:
      • It alleviates the burdens of learning perception and control:
        • The desired trajectory is passed to a controls optimizer that takes care of creating the low-level control signals.
        • Not to mention that different types of vehicles may possibly utilize different control outputs to achieve the same driving trajectory.
      • Perturbations and input data from simulation are easier to generate.
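A toy sketch of how such image-like layers could be stacked into a single input tensor (layer names follow the list above; the rendering of each layer itself is out of scope here):

```python
import numpy as np

def build_input_tensor(layers):
    """Stack the rendered top-down layers (roadmap, traffic lights, speed
    limit, intended route, current agent box, dynamic objects, past agent
    poses) into one H x W x C tensor. The shape stays fixed regardless of
    the number of dynamic objects or of the road topology."""
    return np.stack([np.asarray(l, dtype=np.float32) for l in layers], axis=-1)

# e.g. 7 rendered layers of 400 x 400 px -> input tensor of shape (400, 400, 7)
```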
  • One key finding: "pure imitation learning is not sufficient", despite the 60 days of continual driving (30 million examples).
    • One quote about the "famous" distribution shift (deviation from the training distribution) in imitation learning:

    "The key challenge is that we need to run the system closed-loop, where errors accumulate and induce a shift from the training distribution."

    • The training data does not have any real collisions. How can the agent efficiently learn to avoid them if it has never been exposed during training?
    • One solution consists in exposing the model to non-expert behaviours, such as collisions and off-road driving, and in adding extra loss functions.
      • Going beyond vanilla cloning.
        • Trajectory perturbation: Expose the learner to synthesized data in the form of perturbations to the expert’s driving (e.g. jitter the midpoint pose and heading)
          • One idea for future works is to use more complex augmentations, e.g. with RL, especially for highly interactive scenarios.
        • Past dropout: to prevent using the history to cheat by just extrapolating from the past rather than finding the underlying causes of the behaviour.
        • Hence the concept of tweaking the training data in order to “simulate the bad rather than just imitate the good”.
      • Going beyond the vanilla imitation loss.
        • Extend imitation losses.
        • Add environment losses to discourage undesirable behaviour, e.g. measuring the overlap of predicted agent positions with the non-road regions.
        • Use imitation dropout, i.e. sometimes favour the environment loss over the imitation loss.

"Imitating Driver Behavior with Generative Adversarial Networks"

  • [ 2017 ] [📝] [:octocat:] [ 🎓 Stanford ]

  • [ adversarial learning, distributional shift problem, cascading errors, IDM, NGSIM, rllab ]

Click to expand

Some figures:

The state consists in 51 features divided into 3 groups: The core features include hand-picked features such as Speed, Curvature and Lane Offset. The LIDAR-like beams capture the surrounding objects in a fixed-size representation independent of the number of vehicles. Finally, 3 binary indicator features identify when the ego vehicle encounters undesirable states - collision, drives off road, and travels in reverse. Source.
As for common adversarial approaches, the objective function in GAIL includes some sigmoid cross entropy terms. The objective is to fit ψ for the discriminator. But this objective function is non-differentiable with respect to θ. One solution is to optimize πθ separately using RL. But what for reward function? In order to drive πθ into regions of the state-action space similar to those explored by the expert πE, a surrogate reward ˜r is generated from D_ψ based on samples and TRPO is used to perform a policy update of πθ. Source.

Authors: Kuefler, A., Morton, J., Wheeler, T., & Kochenderfer, M.

  • One term: the problem of "cascading errors" in behavioural cloning (BC).
    • BC, which treats IL as a supervised learning problem, tries to fit a model to a fixed dataset of expert state-action pairs. In other words, BC solves a regression problem in which the policy parameterization is obtained by maximizing the likelihood of the actions taken in the training data.
    • But inaccuracies can lead the stochastic policy to states that are underrepresented in the training data (e.g., an ego-vehicle edging towards the side of the road). And datasets rarely contain information about how human drivers behave in such situations.
    • The policy network is then forced to generalize, and this can lead to yet poorer predictions, and ultimately to invalid or unseen situations (e.g., off-road driving).
    • "Cascading Errors" refers to this problem where small inaccuracies compound during simulation and the agent cannot recover from them.
      • This issue is inherent to sequential decision making.
    • As found by the results:
      • "The root-weighted square error results show that the feedforward BC model has the best short-horizon performance, but then begins to accumulate error for longer time horizons."

      • "Only GAIL (and of course IDM+MOBIL) are able to stay on the road for extended stretches."

  • One idea: RL provides robustness against "cascading errors".
    • RL maximizes the global, expected return on a trajectory, rather than local instructions for each observation. Hence more appropriate for sequential decision making.
    • Also, the reward function r(s_t, a_t) is defined for all state-action pairs, allowing an agent to receive a learning signal even from unusual states. And these signals can establish preferences between mildly undesirable behaviour (e.g., hard braking) and extremely undesirable behaviour (e.g., collisions).
      • In contrast, BC only receives a learning signal for those states represented in a labelled, finite dataset.
      • Because handcrafting an accurate RL reward function is often difficult, IRL seems promising. In addition, the imitation (via the recovered reward function) extends to unseen states: e.g. a vehicle that is perturbed toward the lane boundaries should know to return toward the lane centre.
  • Another idea: use GAIL instead of IRL:
    • "IRL approaches are typically computationally expensive in their recovery of an expert cost function. Instead, recent work has attempted to imitate expert behaviour through direct policy optimization, without first learning a cost function."

    • "Generative Adversarial Imitation Learning" (GAIL) implements this idea:
      • "Expert behaviour can be imitated by training a policy to produce actions that a binary classifier mistakes for those of an expert."

      • "GAIL trains a policy to perform expert-like behaviour by rewarding it for “deceiving” a classifier trained to discriminate between policy and expert state-action pairs."

    • One contribution is to extend GAIL to the optimization of recurrent neural networks (GRU in this case).
  • One concept: "Trust Region Policy Optimization".
    • Policy-gradient RL optimization with "Trust Region" is used to optimize the agent's policy πθ, addressing the issue of training instability of vanilla policy-gradient methods.
      • "TRPO updates policy parameters through a constrained optimization procedure that enforces that a policy cannot change too much in a single update, and hence limits the damage that can be caused by noisy gradient estimates."

    • But what reward function to apply? Again, we do not want to do IRL.
    • Some "surrogate" reward function is empirically derived from the discriminator. Although it may be quite different from the true reward function optimized by expert, it can be used to drive πθ into regions of the state-action space similar to those explored by πE.
  • One finding: Should previous actions be included in the state s?
    • "The previous action taken by the ego vehicle is not included in the set of features provided to the policies. We found that policies can develop an over-reliance on previous actions at the expense of relying on the other features contained in their input."

    • But on the other hand, the authors find:
    • "The GAIL GRU policy takes similar actions to humans, but oscillates between actions more than humans. For instance, rather than outputting a turn-rate of zero on straight road stretches, it will alternate between outputting small positive and negative turn-rates".

    • "An engineered reward function could also be used to penalize the oscillations in acceleration and turn-rate produced by the GAIL GRU".

  • Some interesting interpretations about the IDM and MOBIL driver models (resp. longitudinal and lateral control).
    • These commonly-used rule-based parametric models serve here as baselines (a minimal IDM sketch is given after this bullet):
    • "The Intelligent Driver Model (IDM) extended this work by capturing asymmetries between acceleration and deceleration, preferred free road and bumper-to-bumper headways, and realistic braking behaviour."

    • "MOBIL maintains a utility function and 'politeness parameter' to capture intelligent driver behaviour in both acceleration and turning."



Inverse Reinforcement Learning, Inverse Optimal Control and Game Theory



"Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning with Application to Autonomous Driving"

  • [ 2020 ] [📝] [ 🎓 UC Berkeley ]

  • [ max-entropy, partition function, sampling, INTERACTION ]

Click to expand
Source.
The intractable partition function Z of the max-entropy method is approximated by a sum over sampled trajectories. Source.
Source.
Left: Prior knowledge is injected to make the sampled trajectories feasible, hence improving the efficiency of the IRL method. Middle: Along with speed-desired_speed, long-acc, lat-acc and long-jerk, two interactions features are considered. Bottom-right: Sample re-distribution is performed since generated samples are not necessarily uniformly distributed in the selected feature space. Top-right: The learned weights indicate that humans care more about longitudinal accelerations in both non-interactive and interactive scenarios. Source.

Authors: Wu, Z., Sun, L., Zhan, W., Yang, C., & Tomizuka, M.

  • Motivations:

    • 1- The trajectories of the observed vehicles satisfy car kinematics constraints.
      • This should be considered while learning reward function.
    • 2- Uncertainties exist in real traffic demonstrations.
      • The demonstrations in naturalistic driving data are not necessarily optimal or near-optimal, and the IRL algorithms should be compatible with such uncertainties.
      • Max-Entropy methods (probabilistic) can cope with this sub-optimality.
    • 3- The approach should converge quickly to scale to problems with large continuous-domain applications with long horizons.
      • The critical part in max-entropy IRL: how to estimate the intractable partition function Z?
  • Some assumptions:

    • "We do not consider scenarios where human drivers change their reward functions along the demonstrations."

    • "We also do not specify the diversity of reward functions among different human drivers. Hence, the acquired reward function is essentially an averaged result defined on the demonstration set."

  • Why "sampling-based"?

    • The integral of the partition function is approximated by a sum over generated samples (see the sketch at the end of this bullet).
      • It reminds me of Monte Carlo integration techniques.
      • The samples are not random: they are feasible, long-horizon trajectories, leveraging prior knowledge on vehicle kinematics and motion planning.
    • Efficiency:
      • 1- Around 1 minute to generate all samples for the entire training set.
      • 2- The sampling process is one-shot in the algorithm through the training process (do they mean that the set needs only to be created once?).
    • Sample Re-Distribution.
      • "The samples are not necessarily uniformly distributed in the selected feature space, which will cause biased evaluation of probabilities."

      • "To address this problem, we propose to use Euclidean distance [better metrics will be explored in future works] in the feature space as a similarity metric for re-distributing the samples."

    • The sampling time of all trajectories is ∆t=0.1s.
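A minimal sketch of that sample-based approximation of the partition function (variable names are mine): the probability of a demonstration is evaluated against the pre-generated, kinematically feasible candidate trajectories.

```python
import numpy as np

def demo_probability(theta, f_demo, f_samples):
    """theta: reward weights; f_demo: feature vector of one demonstration;
    f_samples: (N, d) feature matrix of the pre-sampled feasible trajectories.
    The intractable integral Z is replaced by a sum over the samples."""
    r_demo = f_demo @ theta
    r_samples = f_samples @ theta
    log_Z = np.logaddexp.reduce(r_samples)   # numerically stable sum of exp
    return float(np.exp(r_demo - log_Z))
```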
  • Features:

    • 1- Non-interactive: speed deviation to desired_speed, longitudinal and lateral accelerations, longitudinal jerk.
    • 2- Interactive:
      • future distance: minimum spatial distance of two interactive vehicles within a predicted horizon τ-predict assuming that they are maintaining their current speeds.
      • future interaction distance: minimum distance between their distances to the collision point.
    • All are normalized in (0, 1).
  • Metrics:

    • 1- Deterministic: feature deviation from the ground truth.
    • 2- Deterministic: mean Euclidean distance to the ground truth.
    • 3- Probabilistic: the likelihood of the ground truth.
  • Baselines:

    • They all are based on the principle of maximum entropy, but differ in the estimation of Z:
      • 1- Continuous-domain IRL (CIOC).
        • Z is estimated in a continuous domain via Laplace approximation: the reward at an arbitrary trajectory ξ̃ can be approximated by its second-order Taylor expansion at a demonstration trajectory ξ̂-D.
      • 2- Optimization-approximated IRL (Opt-IRL).
        • An optimal trajectory ξ-opt can be obtained by optimizing the updated reward function. Then Z ≈ exp(β·R(θ, ξ-opt)).
        • "In the forward problem at each iteration, it directly solves the optimization problem and use the optimal trajectories to represent the expected feature counts."

      • 3- Guided cost learning (GCL).
        • This one is not model-based: it does not need manually crafted features, but automatically learns features via neural networks.
        • It uses rollouts (samples) of the policy network to estimate Z in each iteration.
        • However, all these samples must be re-generated in every training iteration, while the proposed method only needs to generate all samples once.

"Analyzing the Suitability of Cost Functions for Explaining and Imitating Human Driving Behavior based on Inverse Reinforcement Learning"

  • [ 2020 ] [📝] [ 🎓 FZI, KIT, UC Berkeley ]

  • [ max-entropy ]

Click to expand
Source.
Left: Definition of the features retrieved from trajectory demonstrations and the evaluation function. Right: max-entropy IRL only requires locally optimal demonstrations because the gradient and Hessian of the reward function are only considered in the proximity of the demonstration. Note that the features are only based on the states, while the actions remain disregarded. And that their approach assumes that the cost function is parameterized as a linear combination of cost terms. Source.
Source.
General cost function structures and commonly used trajectory features. Only one work considers crossing scenarios. To account for the right of way at intersections, the time that elapses between one vehicle leaving a conflict zone, i.e. an area where paths overlap, and another vehicle entering this zone, is considered: tTZC = dsecond/vsecond. Bottom: Due to the similarity of the variance-mean-ratio under different evaluation functions, the authors limit their experiments to the consideration of sum[f(t)²], which is most used. Source.

Authors: Naumann, M., Sun, L., Zhan, W., & Tomizuka, M.

  • Motivations:

    • 1- Overview of trajectory features and cost structures.
    • 2- About demonstration selection: What are the requirements when entire trajectories are not available and trajectory segments must be used?
      • Not very clear to me.
      • "Bellman’s principle of optimality states that parts of optimal decision chains are also optimal decisions. Optimality, however, always refers to the entire decision chain. [...] Braking in front of a stop sign is only optimal as soon as the stop sign is considered within the horizon."

      • "The key insight is that selected segments have to end in a timestep that is optimal, independent of the weights that are to be learned."

      • "Assuming a non-negative cost definition, this motivates the choice of arbitrary trajectory segments ending in a timestep T such that cT−d+1 ... cT+d (depending on xT−2d+1...xT+2d) are zero, i.e. optimal, independent of θ."

      • "While this constraint limits the approach to cost functions that yield zero cost for some sections, it also yields the meaningful assumption that humans are not driven by a permanent dissatisfaction through their entire journey, but reach desirable states from time to time."

  • Miscellaneous: about cost function structures in related works:

    • Trajectory features can depend on:
      • 1- A single trajectory only. They are based on ego acceleration, speed and position for instance.
      • 2- Trajectory ensembles. I.e. they describe the quality of one trajectory with respect to the trajectories of other traffic participants. For instance TTC.
    • As most approaches did not focus on crossings, the traffic rule features were not used by the related approaches.
    • All approaches use a convenience term to prevent a full stop from being an optimal state with zero cost.
      • "In order to prevent that being at a full stop is beneficial, progress towards the target must be rewarded, that is, costs must be added in case of little progress. This can be done by considering the deviation from the desired velocity or the speed limit, or via the deviation from a reference position."

      • "For stop signs, similarly, the deviation from a complete stop, i.e. the driven velocity at the stop line, can be used as a feature."

    • All approaches incorporate both smoothness (longitudinal) and curve comfort (lateral).
    • The lane geometry is incorporated in the cost, unless it was already incorporated by using a predefined path.
  • What features for the interaction with other traffic participants?

    • Simply relative speeds and positions (gap).
    • Most approaches assume that the future trajectory of others is known or provided by an upstream prediction module. The effect of the ego vehicle on the traffic participant can then be measured. For instance the induced cost, such as deceleration.
    • "Other approaches do not rely on an upstream prediction, but incorporate the prediction of others into the planning by optimizing a global cost functional, which weights other traffic participants equally, or allows for more egoistic behavior based on a cooperation factor."

  • Some findings when applying IRL on INTERACTION dataset on three scenarios: in-lane driving, right turn and stop:

    • 1- Among all scenarios, human drivers weight longitudinal acceleration higher than longitudinal jerks.
    • 2- The weight for longitudinal and lateral acceleration are similar per scenario, such that neither seems to be preferred over the other. If implied by the scenario, as in the right turn, the weight decreases.
    • 3- In the right turn scenario, the weight of the lateral deviation from the centerline is very large.
      • "Rather than assuming that the centerline is especially important in turns, we hypothesize that a large weight on d-cl is necessary to prefer turning over simply going straight, which would cause less acceleration cost."

    • "We found that the key features and human preferences differ largely, even in different single lane scenarios and disregarding interaction with other traffic participants."


"Modeling Human Driving Behavior through Generative Adversarial Imitation Learning"

  • [ 2020 ] [📝] [ 🎓 Stanford ]

  • [ GAIL, PS-GAIL, RAIL, Burn-InfoGAIL, NGSIM ]

Click to expand
Source.
Different variations of Generative Adversarial Imitation Learning (GAIL) are used to model human drivers. These augmented GAIL-based models capture many desirable properties of both rule-based (IDM+MOBIL) and machine learning (BC predicting single / multiple Gaussians) methods, while avoiding common pitfalls. Source.
Source.
In Reward Augmented Imitation Learning (RAIL), the imitation learning agent receives a second source of reward signals which is hard-coded to discourage undesirable driving behaviours. The reward can be either binary, receiving penalty when the collision actually occurs, or smoothed, via increasing penalties as it approaches an undesirable event. This should address the credit assignment problem in RL. Source.

Authors: Bhattacharyya, R., Wulfe, B., Phillips, D., Kuefler, A., Morton, J., Senanayake, R., & Kochenderfer, M.

  • Related work:

  • Motivation: Derive realistic models of human drivers.

    • Example of applications: populate surrounding vehicles with human-like behaviours in the simulation, to learn a driving policy.
  • Ingredients:

    • 1- Imitation learning instead of RL since the cost function is unknown.
    • 2- GAIL instead of apprenticeship learning to not restrict the class of cost functions and avoid computationally expensive RL iterations.
    • 3- Some variations of GAIL to deal with the specificities of driver modelling.
  • Challenges and solutions when modelling the driving task as a sequential decision-making problem (MDP formulation):

    • 1- Continuous state and action spaces. And high dimensionality of the state representation.
    • 2- Non-linearity in the desired mapping from states to actions.
      • For instance, large corrections in steering are applied to avoid collisions caused by small changes in the current state.
      • Solution to 1-+2-: Neural nets.
        • "The feedforward MLP is limited in its ability to adequately address partially observable environments. [...] By maintaining sufficient statistics of past observations in memory, recurrent policies disambiguate perceptually similar states by acting with respect to histories of, rather than individual observations."

        • GRU layers are used: fewer parameters and still good performances.
    • 3- Stochasticity: humans may take different actions each time they encounter a given traffic scene.
      • Solution: Predicting a [Gaussian] distribution and sampling from it: a_t ∼ πθ(a_t | s_t).
    • 4- The underlying cost function is unknown. Direct RL is not applicable.
      • Solution: Learning from demonstrations (imitation learning). E.g. IRL+RL or BC.
      • "The goal is to infer this human policy from a dataset consisting of a sequence of (state, action) tuples."

    • 5- Interaction between agents needs to be modelled, i.e. it is a multi-agent problem.
      • Solution: GAIL extension. A parameter-sharing GAIL (PS-GAIL) to tackle multi-agent driver modelling.
    • 6- GAIL and PS-GAIL are domain agnostic, making it difficult to encode specific knowledge relevant to driving in the learning process.
      • Solution: GAIL extension. Reward Augmented Imitation Learning (RAIL).
    • 7- The human demonstrations dataset is a mixture of different driving styles. I.e. human demonstrations are dependent upon latent factors that may not be captured by GAIL.
      • Solution: GAIL extension. [Burn-]Information Maximizing GAIL (Burn-InfoGAIL) to disentangle the latent variability in demonstrations.
  • Issues with behavioural cloning (BC) (supervised version of imitation learning).

    • "BC trains the policy on the distribution of states encountered by the expert. During testing, however, the policy acts within the environment for long time horizons, and small errors in the learned policy or stochasticity in the environment can cause the agent to encounter a different distribution of states from what it observed during training. This problem, referred to as covariate shift, generally results in the policy making increasingly large errors from which it cannot recover."

    • "BC can be effective when a large number of demonstrations are available, but in many environments, it is not possible to obtain sufficient quantities of data."

    • Solutions to the covariate shift problem:
      • 1- Dataset Aggregation (DAgger), assuming access to an expert.
      • 2- Learn a replacement for the cost function that generalizes to unobserved states.
        • Inverse reinforcement learning (IRL) and apprenticeship learning.
        • "The goal in apprenticeship learning is to find a policy that performs no worse than the expert under the true [unknown] cost function."

  • Issues with apprenticeship learning:

    • A class of cost functions is used.
      • 1- It is often defined as the span of a set of basis functions that must be defined manually (as opposed to learned from the observations).
      • 2- This class may be restricting. I.e. no guarantee that the learning agent will perform no worse than the expert, and the agent can fail at imitating the expert.
      • "There is no reason to assume that the cost function of the human drivers lies within a small function class. Instead, the cost function could be quite complex, which makes GAIL a suitable choice for driver modeling."

    • 3- It generally involves running RL repeatedly, hence large computational cost.
  • About Generative Adversarial Imitation Learning (GAIL):

    • Recommended video: This CS285 lecture of Sergey Levine.
    • It is derived from an alternative approach to imitation learning called Maximum Causal Entropy IRL (MaxEntIRL).
    • "While apprenticeship learning attempts to find a policy that performs at least as well as the expert across cost functions, MaxEntIRL seeks a cost function for which the expert is uniquely optimal."

    • "While existing apprenticeship learning formalisms used the cost function as the descriptor of desirable behavior, GAIL relies instead on the divergence between the demonstration occupancy distribution and the learning agent’s occupancy distribution."

    • Connections to GAN:
      • It performs binary classification of (state, action) pairs drawn from the occupancy distributions ρπ and ρπE.
      • "Unlike GANs, GAIL considers the environment as a black box, and thus the objective is not differentiable with respect to the parameters of the policy. Therefore, simultaneous gradient descent [for D and G] is not suitable for solving the GAIL optimization objective."

      • "Instead, optimization over the GAIL objective is performed by alternating between a gradient step to increase the objective function with respect to the discriminator parameters D, and a Trust Region Policy Optimization (TRPO) step (Schulman et al., 2015) to decrease the objective function with respect to the parameters θ of the policy πθ."

  • Advantages of GAIL:

    • 1- It removes the restriction that the cost belongs to a highly limited class of functions.
      • "Instead allowing it to be learned using expressive function approximators such as neural networks".

    • 2- It scales to large state / action spaces to work for practical problems.
      • TRPO for GAIL works with direct policy search as opposed to finding intermediate value functions.
    • "GAIL proposes a new cost function regularizer. This regularizer allows scaling to large state action spaces and removes the requirement to specify basis cost functions."

  • Three extensions of GAIL to account for the specificities of driver modelling.

    • 1- Parameter-Sharing GAIL (PS-GAIL).
      • Idea: account for the multi-agent nature of the problem resulting from the interaction between traffic participants.
      • "We formulate multi-agent driving as a Markov game (Littman, 1994) consisting of M agents and an unknown reward function."
      • It combines GAIL with PS-TRPO.
      • "PS-GAIL training procedure encourages stabler interactions between agents, thereby making them less likely to encounter extreme or unlikely driving situations."

    • 2- Reward Augmented Imitation Learning (RAIL).
      • Idea: reward augmentation during training to provide domain knowledge.
      • It helps to improve the state space exploration of the learning agent by discouraging bad states such as those that could potentially lead to collisions.
      • "These include penalties for going off the road, braking hard, and colliding with other vehicles. All of these are undesirable driving behaviors and therefore should be discouraged in the learning agent."

      • Two kinds of penalties:
        • 2.1- Binary penalty.
        • 2.2- Smoothed penalty (sketched below).
          • "We hypothesize that providing advanced warning to the imitation learning agent in the form of smaller, increasing penalties as the agent approaches an event threshold will address the credit assignment problem in RL."

          • "For off-road driving, we linearly increase the penalty from 0 to R when the vehicle is within 0.5m of the edge of the road. For hard braking, we linearly increase the penalty from 0 to R/2 when the acceleration is between −2m/s2 and −3m/s2."

      • "PS-GAIL and RAIL policies are less likely to lead vehicles into collisions, extreme decelerations, and off-road driving."

      • It now looks like a combination of cloning and RL: the agent receives rewards for imitating the actions and gets hard-coded rewards/penalties defined by the human developer.
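A small sketch of such smoothed penalties, using the numbers quoted above (the overall penalty magnitude R is left as a free parameter):

```python
def offroad_penalty(dist_to_edge_m, R=1.0, warn_dist=0.5):
    """Ramp linearly from 0 to R as the vehicle gets within 0.5 m of the
    road edge, instead of a single binary penalty at the violation."""
    if dist_to_edge_m >= warn_dist:
        return 0.0
    return R * (1.0 - max(dist_to_edge_m, 0.0) / warn_dist)

def hard_brake_penalty(accel_mps2, R=1.0):
    """Ramp linearly from 0 to R/2 as acceleration goes from -2 to -3 m/s^2."""
    if accel_mps2 >= -2.0:
        return 0.0
    return (R / 2.0) * min(-2.0 - accel_mps2, 1.0)
```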
    • 3- Information Maximizing GAIL (InfoGAIL).
      • Idea: assume that the expert policy is a mixture of experts.
      • [Different driving style are present in the dataset] "Aggressive drivers will demonstrate significantly different driving trajectories as compared to passive drivers, even for the same road geometry and traffic scenario. To uncover these latent factors of variation, and learn policies that produce trajectories corresponding to these latent factors, InfoGAIL was proposed."

      • To ensure that the learned policy utilizes the latent variable z as much as possible, InfoGAIL tries to enforce high mutual information between z and the state-action pairs in the generated trajectory.
      • Extension: Burn-InfoGAIL.
        • Playback is used to initialize the ego vehicle: the "burn-in demonstration".
        • "If the policy is initialized from a state sampled at the end of a demonstrator’s trajectory (as is the case when initializing the ego vehicle from a human playback), the driving policy’s actions should be consistent with the driver’s past behavior."

        • "To address this issue of inconsistency with real driving behavior, Burn-InfoGAIL was introduced, where a policy must take over where an expert demonstration trajectory ends."

      • When trained in a simulator, different parameterizations are possible, defining the style z of each car:
        • Aggressive: High speed and large acceleration + small headway distances.
        • Speeder: same but large headway distances.
        • Passive: Low speed and acceleration + large headway distances.
        • Tailgating: same but small headway distances.
  • Experiments.

    • NGSIM dataset:
      • "The trajectories were smoothed using an extended Kalman filter on a bicycle model and projected to lanes using centerlines extracted from the NGSIM roadway geometry file."

    • Metrics:
      • 1- Root Mean Square Error (RMSE) metrics.
      • 2- Metrics that quantify undesirable traffic phenomena: collisions, hard-braking, and offroad driving.
    • Baselines:
      • BC with single or mixture Gaussian regression.
      • Rule-based controller: IDM+MOBIL.
        • "A small amount of noise is added to both the lateral and longitudinal accelerations to make the controller nondeterministic."

    • Simulation:
      • "The effectiveness of the resulting driving policy trained using GAIL in imitating human driving behavior is assessed by validation in rollouts conducted on the simulator."

    • Some results:
      • [GRU helps GAIL, but not BC] "Thus, we find that recurrence by itself is insufficient for addressing the detrimental effects that cascading errors can have on BC policies."

      • "Only GAIL-based policies (and of course IDM+MOBIL) stay on the road for extended stretches."

  • Future work: How to refine the interaction modelling?

    • "Explicitly modeling the interaction between agents in a centralized manner through the use of Graph Neural Networks."


"Deep Reinforcement Learning for Human-Like Driving Policies in Collision Avoidance Tasks of Self-Driving Cars"

  • [ 2020 ] [📝] [ 🎓 University of the Negev ]

  • [ data-driven reward ]

Click to expand
Source.
Note that the state variables are normalized in [0, 1] or [-1, 1] and that the previous actions are part of the state. Finally, both the previous and the current observations (only the current one for the scans) are included in the state, in order to appreciate the temporal evolution. Source.
Source.
Left: throttle and steering actions are not predicted as single scalars but rather as distributions, in this case a mixture of 3 Gaussians, each parametrized by a mean and a standard deviation; the mixture weights are also learnt. This enables modelling multimodal distributions and offers better generalization capabilities. Right: the reward function is designed to make the agent imitate the expert driver's behaviour. Therefore the differences in terms of mean speed and mean track position between the agent and the expert driver are penalized. The mean speed and position of the expert driver are obtained from the learnt GP model. The reward also contains a non-learnable part: penalties for collision and action changes are independent of human driver observations. Source.
Source.
Human speeds and lateral positions on the track are recorded and modelled using a GP regression. It is used to define the human-like behaviours in the reward function (instead of IRL) as well as for comparison during test. Source.

Authors: Emuna, R., Borowsky, A., & Biess, A.

  • Motivations:
    • Learn human-like behaviours via RL without traditional IRL.
    • Imitation should be considered in terms of the mean but also in terms of the variability.
  • Main idea: hybrid (rule-based and data-driven) reward shaping.
    • The idea is to build a model based on observation of human behaviours.
      • In this case a Gaussian Process (GP) describes the distribution of speed and lateral position along a track.
    • Deviations from these learnt parameters are then penalized in the reward function.
    • Two variants are defined:
      • 1- The reward function is fixed, using the means of the two GPs as reference speeds and positions.
      • 2- The reward function varies: a trajectory is sampled each time from the learnt GP models and its values are used as reference speeds and positions (see the sketch after this bullet).
        • The goal here is not only to imitate mean human behaviour but to recover also the variability in human driving.
    • "Track position was recovered better than speed and we concluded that the latter is related to an agent acting in a partially observable environment."

    • Note that the feature weights in the reward function stay arbitrary (they are not learnt, contrary to IRL).
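A minimal sketch of such a hybrid reward (the weights are arbitrary placeholders, as noted above; `v_ref` and `d_ref` come either from the GP mean or from a sampled GP trajectory):

```python
def hybrid_reward(v, d, v_ref, d_ref, collided, action_change,
                  w_speed=1.0, w_pos=1.0, w_col=100.0, w_act=0.1):
    """Data-driven part: penalize deviations from the human reference speed
    and lateral position (learnt GP). Rule-based part: penalize collisions
    and abrupt action changes, independently of the human observations."""
    r = -w_speed * abs(v - v_ref) - w_pos * abs(d - d_ref)
    r -= w_act * abs(action_change)
    if collided:
        r -= w_col
    return r
```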
  • About the dynamic batch update.
    • "To improve exploration and avoid early termination, we used reference state initialization. We initialized the speed by sampling from a uniform distribution between 30 to 90km/h. High variability in the policy at the beginning of training caused the agent to terminate after a few number of steps (30-40). A full round of the track required about 2000 steps. To improve learning we implemented a dynamic batch size that grows with the agent’s performance."


"Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic"

  • [ 2020 ] [📝] [ 🎓 Stanford ] [ 🚗 Honda, Toyota ]

  • [ curriculum learning, level-k reasoning ]

Click to expand
Source.
Curriculum learning: the RL agent solves MDPs with iteratively increasing complexity. At each step of the curriculum, the behaviour of the cars in the environment is sampled from the previously learnt k-levels. Bottom left: 3 or 4 iterations seem to be enough and larger reasoning levels might not be needed for this merging task. Source.

Authors: Bouton, M., Nakhaei, A., Isele, D., Fujimura, K., & Kochenderfer, M. J.

  • Motivations:
    • 1- Training RL agents more efficiently for complex traffic scenarios.
      • The goal is to avoid standard issues with RL: sparse rewards, delayed rewards, and generalization.
      • Here the agent should merge in dense traffic, requiring interaction.
    • 2- Cope with dense scenarios.
      • "The lane change model MOBIL which is at the core of this rule-based policy has been designed for SPARSE traffic conditions [and performs poorly in comparison]."

    • 3- Learn a robust policy, able to deal with various behaviours.
      • Here learning is done iteratively, as the reasoning level increases, the learning agent is exposed to a larger variety of behaviours.
    • Ingredients:
      • "Our training curriculum relies on the level-k cognitive hierarchy model from behavioral game theory".

  • About k-level and game theory:
    • "This model consists in assuming that an agent performs a limited number of iterations of strategic reasoning: (“I think that you think that I think”)."

    • A level-k agent acts optimally against the strategy of a level-(k-1) agent.
    • The level-0 is not learnt but uses an IDM + MOBIL hand-engineered rule-based policy.
  • About curriculum learning:
    • The idea is to iteratively increase the complexity of the problem. Here increase the diversity and the optimality of the surrounding cars.
    • Each cognitive level is trained in a RL environment populated with vehicles of any lower cognitive level.
      • "We then train a level-3 agent by populating the top lane with level-0 and level-2 agents and the bottom lane with level-0 or level-1 agents."

      • "Note that a level-1 policy corresponds to a standard RL procedure [no further iteration]."

    • Each learnt policy is learnt with DQN.
      • To accelerate training, at each step of the curriculum the authors re-use the weights from the previous iteration as initialization.
  • MDP formulation.
    • Actually, two policies are learnt:
      • Policies 1, 3, and 5: change-lane agents.
      • Policies 2 and 4: keep-lane agents.
    • action
      • "The learned policy is intended to be high level. At deployment, we expect the agent to decide on a desired speed (0 m/s, 3 m/s, 5 m/s) and a lane change command while a lower lever controller, operating at higher frequency, is responsible for executing the motion and triggering emergency braking system if needed."

      • Simulation runs at 10Hz but the agent takes an action every five simulation steps: 0.5 s between two actions.
      • The authors chose high-level actions and to rely on IDM:
        • "By using the IDM rule to compute the acceleration, the behavior of braking if there is a car in front will not have to be learned."

        • "The longitudinal action space is safe by design. This can be thought of as a form of shield to the RL agent from taking unsafe actions."

          • Well, all learnt agents still exhibit at least a 2% collision rate??
    • state
      • Relative pose and speed of the 8 closest surrounding vehicles.
      • Full observability is assumed.
        • "Measurement uncertainty can be handled online (after training) using the QMDP approximation technique".

    • reward
      • Penalty for collisions: −1.
      • Penalty for deviating from a desired velocity: −0.001|v-ego − v-desired|.
      • Reward for being in the top lane: +0.01 for the merging-agent and 0 for the keep-lane agent.
      • Reward for success (passing the blocked vehicle): +1.
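Putting these terms together, a direct transcription of the listed reward (the function signature is mine, not the paper's):

```python
def merge_reward(collided, v_ego, v_desired, in_top_lane, passed_blocked_car,
                 is_merging_agent=True):
    r = 0.0
    if collided:
        r -= 1.0                              # collision penalty
    r -= 0.001 * abs(v_ego - v_desired)       # deviation from desired speed
    if in_top_lane and is_merging_agent:
        r += 0.01                             # only for the merging agent
    if passed_blocked_car:
        r += 1.0                              # success bonus
    return r
```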

"Using Counterfactual Reasoning and Reinforcement Learning for Decision-Making in Autonomous Driving"

  • [ 2020 ] [📝] [ 🎓 Technische Universität München ] [ 🚗 fortiss ] [:octocat:]

  • [ counterfactual reasoning ]

Click to expand
Source.
The idea is to first train the agent interacting with different driver models. This should lead to a more robust policy. During inference the possible outcomes are first evaluated. If too many predictions result in collisions, a non-learnt controller takes over. Otherwise, the learnt policy is executed. Source.

Authors: Hart, P., & Knoll, A.

  • Motivations:

    • Cope with the behavioural uncertainties of other traffic participants.
  • The idea is to perform predictions considering multiple interacting driver models.

    • 1- During training: expose multiple behaviour models.
      • The parametrized model IDM is used to describe more passive or aggressive drivers.
      • Model-free RL is used. The diversity of driver models should improve the robustness.
    • 2- During application: at each step, the learned policy is first evaluated before being executed.
      • The evolution of the present scene is simulated using the different driver models.
      • The outcomes are then aggregated:
        • 1- Collision rate.
        • 2- Success rate (reaching the destination).
        • Based on these risk and performance metrics, the policy is applied or not.
          • If the collision rate is too high, then the ego vehicle stays on its current lane, controlled by IDM.
          • "Choosing the thresholds is nontrivial as this could lead to too passive or risky behaviors."

      • It could be seen as some prediction-based action masking (a sketch is given below).
      • These multi-modal predictions also make me think of the roll-out phase in tree searches.
      • Besides, it reminds me of the concept of concurrent MDPs, where the agent tries to infer in which (parametrized) MDP it has been placed.
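A rough sketch of that runtime check (the function names, the collision threshold and the `simulate` helper are placeholders, not the authors' API):

```python
def select_action(learned_policy, idm_fallback, state, driver_models,
                  simulate, collision_threshold=0.1):
    """Roll out the current scene under every driver model in the pool; if
    too many of these virtual futures end in a collision, fall back to
    lane-keeping with IDM, otherwise execute the learned policy."""
    outcomes = [simulate(state, learned_policy, model) for model in driver_models]
    collision_rate = sum(o.collided for o in outcomes) / len(outcomes)
    if collision_rate > collision_threshold:
        return idm_fallback(state)   # stay on the current lane, IDM-controlled
    return learned_policy(state)
```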
  • Not clear to me:

    • Why not do planning if you explicitly know the transition (IDM) and reward models? It would substantially increase the sampling efficiency.
  • About the simulator:

    • BARK standing for Behavior benchmARKing and developed at fortiss.
  • About "counterfactual reasoning":

    • From wikipedia: "Counterfactual thinking is, as it states: 'counter to the facts'. These thoughts consist of the 'What if?' ..."
    • "We use causal counterfactual reasoning: [...] sampling behaviors from a model pool for other traffic participants can be seen as assigning nonactual behaviors to other traffic participants.


"Modeling pedestrian-cyclist interactions in shared space using inverse reinforcement learning"

  • [ 2020 ] [📝] [ 🎓 University of British Columbia, Vancouver ]
  • [ max-entropy, feature matching ]
Click to expand
Source.
Left: The contribution of each feature in the linear reward model differs between the Maximum Entropy (ME) and the Feature Matching (FM) algorithms. The FM algorithm is inconsistent across levels and has a higher intercept-to-parameter-weight ratio compared with the weights estimated using ME. Besides, why does it penalize all lateral distances and all speeds in these overtaking scenarios? Right: a nice way to visualize a reward function over a 5-dimensional state. Source.

Authors: Alsaleh, R., & Sayed, T.

  • In short: A simple but good illustration of IRL concepts using Maximum Entropy (ME) and Feature Matching (FM) algorithms.
  • Motivations, here:
    • 1- Work in non-motorized shared spaces, in this case a cyclist-pedestrian zone.
      • It means high degrees of freedom in motions for all participants.
      • And offers complex road-user interactions (behaviours different than on conventional streets).
    • 2- Model the behaviour of cyclists in this shared space using agent-based modelling.
      • agent-based as opposed to physics-based prediction models such as social force model (SFM) or cellular automata (CA).
      • The agent is trying to maximize an unknown reward function.
      • The recovery of that reward function is the core of the paper.
  • First, 2 interaction types are considered:
    • The cyclist following the pedestrian.
    • The cyclist overtaking the pedestrian.
    • This distinction avoids the search for a 1-size-fits-all model.
  • About the MDP:
    • The cyclist is the agent.
    • state (absolute for the cyclist or relative compared to the pedestrian):
      • longitudinal distance
      • lateral distance
      • angle difference
      • speed difference
      • cyclist speed
    • state discretization:
      • "Discretized for each interaction type by dividing each state feature into 6 levels based on equal frequency observation in each level."

      • This non-constant bin width partially addresses the imbalanced dataset (a sketch of such equal-frequency binning is given after this bullet).
      • 6^5 = 7776 states.
    • action:
      • acceleration
      • yaw rate
    • action discretization:
      • "Dividing the acceleration into five levels based on equal frequency observation in each level."

      • 5^2 = 25 actions.
    • discount factor:
      • "A discount factor of 0.975 is used assuming 10% effect of the reward at a state 3 sec later (90 time steps) from the current state."

  • About the dataset.
    • Videos of two streets in Vancouver, for a total of 39 hours.
    • 228 cyclist and 276 pedestrian trajectories are extracted.
  • IRL.
    • The two methods assume that the reward is a linear combination of features. Here features are state components.
    • 1- Feature Matching (FM).
      • It matches the feature counts of the expert trajectories.
      • The authors do not detail the max-margin part of the algorithm.
    • 2- Maximum Entropy (ME).
      • It estimates the reward function parameters by maximizing the likelihood of the expert demonstrations under the maximum entropy distribution.
      • Being probabilistic, it can account for non-optimal observed behaviours.
  • The recovered reward model can be used for prediction - How to measure the similarity between two trajectories?
    • 1- Mean Absolute Error (MAE).
      • It compares elements of same indices in the two sequences.
    • 2- Hausdorff Distance.
      • "It computes the largest distance between the simulated and the true trajectories while ignoring the time step alignment".

  • Current limitations:
    • 1-to-1 interactions, i.e. a single pedestrian/cyclist pair.
    • Low-density scenarios.
    • "[in future works] neighbor condition (i.e. other pedestrians and cyclists) and shared space density can be explicitly considered in the model."


"Accelerated Inverse Reinforcement Learning with Randomly Pre-sampled Policies for Autonomous Driving Reward Design"

  • [ 2019 ] [📝] [ 🎓 UC Berkeley, Tsinghua University, Beijin ]
  • [ max-entropy ]
Click to expand
Source.
Instead of the costly RL optimisation step at each iteration of the vanilla IRL, the idea is to randomly sample a massive set of policies in advance and then to pick one of them as the optimal policy. In case the sampled policy set does not contain the optimal policy, policy exploration is introduced as a supplement. Source.
Source.
The approximation used in Kuderer et al. (2015) is applied here to compute the second term of the gradient, i.e. the expected feature values. Source.

Authors: Xin, L., Li, S. E., Wang, P., Cao, W., Nie, B., Chan, C., & Cheng, B.

  • Reminder: Goal of IRL = Recover the reward function of an expert from demonstrations (here trajectories).
  • Motivations, here:
    • 1- Improve the efficiency of "weights updating" in the iterative routine of IRL.
      • More precisely: generating optimal policy using model-free RL suffers from low sampling efficiency and should therefore be avoided.
      • Hence the term "accelerated" IRL.
    • 2- Embed human knowledge by restricting the search space (policy space).
  • One idea: "Pre-designed policy subspace".
    • "An intuitive idea is to randomly sample a massive of policies in advance and then to pick one of them as the optimal policy instead of finding it via RL optimisation."

  • How to construct the policies sub-space?
    • Human knowledge about vehicle controllers is used.
    • Parametrized linear controllers are implemented:
      • acc = K1·∆d + K2·∆v + K3·∆a, where ∆d, ∆v and ∆a are relative to the leading vehicle.
      • By sampling tuples of <K1, K2, K3> coefficients, 1 million candidate policies are generated to form the sub-space.
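A minimal sketch of how such a pre-sampled policy sub-space could be generated (the coefficient ranges are purely illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_policy_subspace(n_policies=1_000_000,
                           k_low=(-2.0, -2.0, -1.0), k_high=(2.0, 2.0, 1.0)):
    """Draw <K1, K2, K3> tuples defining candidate linear car-following
    controllers acc = K1*Δd + K2*Δv + K3*Δa."""
    return rng.uniform(k_low, k_high, size=(n_policies, 3))

def linear_policy_acc(k, delta_d, delta_v, delta_a):
    """Longitudinal acceleration command of one candidate policy."""
    return k[0] * delta_d + k[1] * delta_v + k[2] * delta_a
```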
  • Section about Max-Entropy IRL (btw. very well explained, as for the section introducing IRL):
    • "Ziebart et al. (2008) employed the principle of maximum entropy to resolve ambiguities in choosing trajectory distributions. This principle maximizes the uncertainty and leads to the distribution over behaviors constrained to matching feature expectations, while being no more committed to any particular trajectory than this constraint requires".

    • "Maximizing the entropy of the distribution over trajectories subject to the feature constraints from expert’s trajectories implies to maximize the likelihood under the maximum entropy (exponential family) distributions. The problem is convex for MDPs and the optima can be obtained using gradient-based optimization methods".

    • "The gradient [of the Lagrangian] is the difference between empirical feature expectations and the learners expected feature expectations."

  • How to compute the second term of this gradient?
    • It implies integrating over all possible trajectories, which is infeasible.
    • As in Kuderer et al. (2015), one can compute the feature values of the most likely trajectory as an approximation of the feature expectation.
    • "With this approximation, only the optimal trajectory associated to the optimal policy is needed, in contrast to regarding the generated trajectories as a probability distribution."

  • About the features.
    • As noted in my experiments about IRL, they serve two purposes (in feature-matching-based IRL methods):
      • 1- In the reward function: they should represent "things we want" and "things we do not want".
      • 2- In the feature-match: to compare two policies based on their sampled trajectories, they should capture relevant properties of driving behaviours.
    • Three features for this longitudinal acceleration task:
      • front-veh time headway.
      • long. acc.
      • deviation to speed limit.
  • Who was the expert?
    • Expert followed a modified linear car-following (MLCF) model.
  • Results.
    • Iterations are stopped after 11 loops.
    • It would have been interesting for comparison to test a "classic" IRL method where RL optimizations are applied.
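As announced above, here is a minimal sketch of the "pre-designed policy subspace" idea: sample tuples of <K1, K2, K3> gains of the linear car-following controller in advance and keep this finite set as the search space instead of running an RL optimisation at each IRL iteration. The gain ranges and the toy evaluation are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Sample <K1, K2, K3> gains of a linear car-following controller in advance
# to form the policy sub-space (ranges below are illustrative assumptions).
rng = np.random.default_rng(0)
N_POLICIES = 1_000_000
K = rng.uniform(low=[0.0, 0.0, 0.0], high=[2.0, 2.0, 1.0], size=(N_POLICIES, 3))

def acceleration(k, delta_d, delta_v, delta_a):
    """Linear controller: acc = K1*delta_d + K2*delta_v + K3*delta_a (deltas w.r.t. the leading vehicle)."""
    return k[0] * delta_d + k[1] * delta_v + k[2] * delta_a

# Toy usage: evaluate one sampled candidate policy on a single observed gap state.
acc = acceleration(K[0], delta_d=5.0, delta_v=-1.0, delta_a=0.2)
print(acc)
```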

"Jointly Learnable Behavior and Trajectory Planning for Self-Driving Vehicles"

  • [ 2019 ] [📝] [🎞️] [ 🚗 Uber ]
  • [ max-margin ]
Click to expand
Source.
Both behavioural planner and trajectory optimizer share the same cost function, whose weight parameters are learnt from demonstration. Source.

Authors: Sadat, A., Ren, M., Pokrovsky, A., Lin, Y., Yumer, E., & Urtasun, R.

  • Main motivation:
    • Design a decision module where both the behavioural planner and the trajectory optimizer share the same objective (i.e. cost function).
    • Therefore "joint".
    • "[In approaches not-joint approaches] the final trajectory outputted by the trajectory planner might differ significantly from the one generated by the behavior planner, as they do not share the same objective".

  • Requirements:
    • 1- Avoid time-consuming, error-prone, and iterative hand-tuning of cost parameters.
      • E.g. Learning-based approaches (BC).
    • 2- Offer interpretability about the costs jointly imposed on these modules.
      • E.g. Traditional modular 2-stage approaches.
  • About the structure:
    • The driving scene is described in W (desired route, ego-state, map, and detected objects). Probably W for "World"?
    • The behavioural planner (BP) decides two things based on W:
      • 1- A high-level behaviour b.
        • The path to converge to, based on one chosen manoeuvre: keep-lane, left-lane-change, or right-lane-change.
        • The left and right lane boundaries.
        • The obstacle side assignment: whether an obstacle should stay to the front, back, left, or right of the ego-car.
      • 2- A coarse-level trajectory τ.
      • The loss also has a regularization term.
      • This decision is "simply" the argmin of the shared cost-function, obtained by sampling + selecting the best.
    • The "trajectory optimizer" refines τ based on the constraints imposed by b.
      • E.g. an overlap cost will be incurred if the side assignment of b is violated.
    • A cost function parametrized by w assesses the quality of the selected <b, τ> pair:
      • cost = w^T . sub-costs-vec(τ, b, W).
      • Sub-costs relate to safety, comfort, feasibility, mission completion, and traffic rules.
  • Why "learnable"?
    • Because the weight vector w that captures the importance of each sub-cost is learnt based on human demonstrations.
      • "Our planner can be trained jointly end-to-end without requiring manual tuning of the costs functions".

    • There are two losses for that objective:
      • 1- Imitation loss (with MSE).
        • It applies on the <b, τ> produced by the BP.
      • 2- Max-margin loss to penalize trajectories that have small cost and are different from the human driving trajectory.
        • It applies on the <τ> produced by the trajectory optimizer.
        • "This encourages the human driving trajectory to have smaller cost than other trajectories".

        • It reminds me of the max-margin method in IRL, where the weights of the reward function should make the expert demonstration better than any other candidate policy (a minimal sketch of such a loss follows).
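A minimal sketch of the shared linear cost and of a max-margin imitation loss, assuming placeholder sub-costs and a simple Euclidean distance as the margin. It only illustrates the principle, not Uber's implementation.

```python
import numpy as np

def sub_costs(traj):
    # Placeholder sub-costs summarised per trajectory (illustrative proxies only).
    traj = np.asarray(traj)
    return np.array([np.abs(np.diff(traj, 2, axis=0)).sum(),   # "comfort" proxy (second differences)
                     np.abs(traj[:, 1]).sum(),                 # "lane deviation" proxy
                     -traj[-1, 0]])                            # "progress" proxy

def cost(w, traj):
    """Shared cost: c(tau; w) = w^T . sub-costs(tau)."""
    return float(np.dot(w, sub_costs(traj)))

def max_margin_loss(w, human_traj, sampled_trajs):
    """Encourage the human trajectory to have lower cost than sampled ones,
    by a margin that grows with the distance to the human trajectory."""
    c_h = cost(w, human_traj)
    loss = 0.0
    for t in sampled_trajs:
        margin = np.linalg.norm(np.asarray(t) - np.asarray(human_traj))
        loss += max(0.0, c_h - cost(w, t) + margin)
    return loss

# Toy usage: trajectories as (x, y) way-points of equal length.
human = np.stack([np.linspace(0, 30, 10), np.zeros(10)], axis=1)
samples = [human + np.random.default_rng(0).normal(0, 0.5, human.shape) for _ in range(5)]
print(max_margin_loss(np.array([0.1, 1.0, 0.5]), human, samples))
```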

"Adversarial Inverse Reinforcement Learning for Decision Making in Autonomous Driving"

  • [ 2019 ] [📝] [ 🎓 UC Berkeley, Chalmers University, Peking University ] [ 🚗 Zenuity ]
  • [ GAIL, AIRL, action-masking, augmented reward function ]
Click to expand

Authors: Wang, P., Liu, D., Chen, J., & Chan, C.-Y.

In Adversarial IRL (AIRL), the discriminator tries to distinguish learnt actions from demonstrated expert actions. Action masking is applied, removing some combinations that are not preferable, in order to reduce the unnecessary exploration. Finally, the reward function of the discriminator is extended with some manually-designed semantic reward to help the agent successfully complete the lane change and not to collide with other objects. Source.
In Adversarial IRL (AIRL), the discriminator tries to distinguish learnt actions from demonstrated expert actions. Action-masking is applied, removing some action combinations that are not preferable, in order to reduce the unnecessary exploration. Finally, the reward function of the discriminator is extended with some manually-designed semantic reward to help the agent successfully complete the lane change and not to collide with other objects. Source.
  • One related concept (detailed further on this page): Generative Adversarial Imitation Learning (GAIL).
    • An imitation learning method where the goal is to learn a policy against a discriminator that tries to distinguish learnt actions from expert actions.
  • Another concept used here: Guided Cost Learning (GCL).
    • A Max-Entropy IRL method that makes use of importance sampling (IS) to approximate the partition function (the term in the gradient of the log-likelihood function that is hard to compute since it involves an integral over all possible trajectories).
  • One concept introduced: Adversarial Inverse Reinforcement Learning (AIRL).
    • It combines GAIL with GCL formulation.
      • "It uses a special form of the discriminator different from that used in GAIL, and recovers a cost function and a policy simultaneously as that in GCL but in an adversarial way."

    • Another difference is the use of a model-free RL method to compute the new optimal policy, instead of model-based guided policy search (GPS) used in GCL:
      • "As the dynamic driving environment is too complicated to learn for the driving task, we instead use a model-free policy optimization method."

    • One motivation of AIRL is therefore to cope with changes in the dynamics of the environment and make the learnt policy more robust to system noises.
  • One idea: Augment the learned reward with some "semantic reward" term to improve learning efficiency.
    • The motivation is to manually embed some domain knowledge, in the generator reward function.
    • "This should provide the agent some informative guidance and assist it to learn fast."

  • About the task:
    • "The task of our focus includes a longitudinal decision – the selection of a target gap - and a lateral decision – whether to commit the lane change right now."

    • It is a rather "high-level" decision:
      • A low-level controller, consisting of a PID for lateral control and sliding-mode for longitudinal control, is then used to execute the decision.
    • The authors use an action-masking technique where only valid action pairs are allowed, in order to reduce the agent's unnecessary exploration (a minimal sketch follows).
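A minimal sketch of such action masking for the <target-gap, commit-now> action pair described above. The validity rules (e.g. only commit to gaps that are currently open) are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

GAPS = [0, 1, 2]          # candidate target gaps (longitudinal decision)
COMMIT = [False, True]    # commit the lane change now or not (lateral decision)

def valid_mask(gap_open):
    """gap_open[i] == True if gap i is currently large enough to enter (illustrative rule)."""
    mask = np.zeros((len(GAPS), len(COMMIT)), dtype=bool)
    for i, g in enumerate(GAPS):
        mask[i, 0] = True                 # waiting is always allowed
        mask[i, 1] = bool(gap_open[g])    # committing is only allowed for open gaps
    return mask

def masked_argmax(q_values, mask):
    """Pick the best action among the valid ones only."""
    q = np.where(mask, q_values, -np.inf)
    return np.unravel_index(np.argmax(q), q.shape)

# Toy usage
q = np.random.default_rng(0).normal(size=(3, 2))
print(masked_argmax(q, valid_mask(gap_open=[False, True, True])))
```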

"Predicting vehicle trajectories with inverse reinforcement learning"

  • [ 2019 ] [📝] [ 🎓 KTH ]
  • [ max-margin ]
Click to expand

Author: Hjaltason, B.

The φ are distances read from the origin of a vision field and are represented by red dotted lines. They take value in [0, 1], where φi = 1 means the dotted line does not hit any object and φi = 0 means it hits an object at the origin. In this case, two objects are inside the front vision field. Hence φ1 = 0.4 and φ2 = 0.6. Source.
About the features: The φ are distances read from the origin of a vision field and are represented by red dotted lines. They take value in [0, 1], where φi = 1 means the dotted line does not hit any object and φi = 0 means it hits an object at origin. In this case, two objects are inside the front vision field. Hence φ1 = 0.4 and φ2 = 0.6. Source.
Example of max-margin IRL. Source.
Example of max-margin IRL. Source.
  • A good example of max-margin IRL:
    • "There are two classes: The expert behaviour from data gets a label of 1, and the "learnt" behaviours a label of -1. The framework performs a max-margin optimization step to maximise the difference between both classes. The result is an orthogonal vector wi from the max margin hyperplane, orthogonal to the estimated expert feature vector µ(πE)".

    • From this new R=w*f, an optimal policy is derived using DDPG.
    • Rollouts are performed to get an estimated feature vector that is added to the set of "learnt" behaviours.
    • The process is repeated until convergence, i.e. when the estimated values w·µ(π) are close enough (see the sketch at the end of this entry).
  • Note about the reward function:
    • Here, r(s, a, s') is also a function of the action and the next state.
    • Here is a post about different forms of reward functions.
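A minimal, self-contained sketch of the iterative loop described above, using a simplified projection-style update instead of a full max-margin SVM step, and mocking the RL step (DDPG in the thesis) with a search over a fixed set of toy feature expectations. Everything below is illustrative, not the thesis' code.

```python
import numpy as np

rng = np.random.default_rng(0)
candidate_mus = rng.uniform(0, 1, size=(50, 4))   # feature expectations of 50 toy policies
mu_expert = np.array([0.9, 0.1, 0.8, 0.2])        # toy expert feature expectation

def solve_rl(w):
    """Stand-in for deriving the optimal policy for R = w . f (e.g. with DDPG)."""
    return int(np.argmax(candidate_mus @ w))

def estimate_mu(policy):
    """Stand-in for rollouts that estimate the policy's feature expectation."""
    return candidate_mus[policy]

w = rng.normal(size=4)
mus = [estimate_mu(solve_rl(w))]
for _ in range(20):
    w = mu_expert - mus[-1]            # separating direction between expert and learnt behaviours
    if np.linalg.norm(w) < 1e-2:       # estimated values close enough -> converged
        break
    mus.append(estimate_mu(solve_rl(w)))
print("learnt reward weights:", w)
```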

"A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress"

  • [ 2019 ] [📝] [ 🎓 University of Georgia ]
  • [ reward engineering ]
Click to expand

Authors: Arora, S., & Doshi, P.

Trying to generalize and classify IRL methods. Source.
Trying to generalize and classify IRL methods. Source.
I learnt about state visitation frequency: ψ(π)(s) and the feature count expectation: µ(π)(φ). Source.
I learnt about state visitation frequency: ψ(π)(s) and the feature count expectation: µ(π)(φ). Source.
  • This large review does not focus on AD applications, but it provides a good picture of IRL and can give ideas. Here are my take-aways.
  • Definition:
    • "Inverse reinforcement learning (IRL) is the problem of modeling the preferences of another agent using its observed behavior [hence class of IL], thereby avoiding a manual specification of its reward function."

  • Potential AD applications of IRL:
    • Decision-making: If I find your underlying reward function, and I consider you as an expert, I can imitate you.
    • Prediction: If I find your underlying reward function, I can imagine what you are going to do.
  • I start rethinking Imitation Learning. The goal of IL is to derive a policy based on some (expert) demonstrations.
    • Two branches emerge, depending on what structure is used to model the expert behaviour. Where is that model captured?
      • 1- In a policy.
        • This is a "direct approach". It includes BC and its variants.
        • The task is to learn that state -> action mapping.
      • 2- In a reward function.
        • Core assumption: Each driver has an internal reward function and acts optimally w.r.t. it.
        • The main task is to learn that reward function (IRL), which captures the expert's preferences.
        • The second step consists in deriving the optimal policy for this derived reward function.

          As Ng and Russell put it: "The reward function, rather than the policy, is the most succinct, robust, and transferable definition of the task"

    • What happens if some states are missing in the demonstration?
      • 1- Direct methods will not know what to do and will try to interpolate from similar states. This could be risky (cf. the distributional shift problem and DAgger).
        • "If a policy is used to describe a task, it will be less succinct since for each state we have to give a description of what the behaviour should look like". From this post

      • 2- IRL methods act optimally w.r.t. the underlying reward function, which could be better, since it is more robust.
        • This is particularly useful if we have an expert policy that is only approximately optimal.
        • In other words, a policy that is better than the "expert" can be derived, while having very little exploration. This "minimal exploration" property is useful for tasks such as AD.
        • This is sometimes referred to as Apprenticeship learning.
  • One new concept I learnt: State-visitation frequency (it reminds me some concepts of Markov chains).
    • Take a policy π. Run the agent with it. Count how often it sees each state. This is called the state-visitation frequency (note it is for a specific π); a minimal sketch is given at the end of this entry.
    • Two ideas from there:
      • Iterating until this state-visitation frequency stops changing yields the converged frequency function.
      • Multiplying that converged state-visitation frequency with the reward gives another perspective on the value function.
        • The value function can now be seen as a linear combination of the expected feature count µ(φk)(π) (also called successor feature).
  • One common assumption: "The solution is a weighted linear combination of a set of reward features".
    • This greatly reduces the search space.
    • "It allowed the use of feature expectations as a sufficient statistic for representing the value of trajectories or the value of an expert’s policy."

  • Known IRL issues (and solutions):
    • 1- This is an under-constrained learning problem.
      • "Many reward functions could explain the observations".

      • Among them, there are highly "degenerate" functions with all reward values zero.
      • One solution is to impose constraints in the optimization.
        • For instance try to maximize the sum of "value-margins", i.e. the difference between the value functions of the best and the second-best actions.
        • "mmp makes the solution policy have state-action visitations that align with those in the expert’s demonstration."

        • "maxent distributes probability mass based on entropy but under the constraint of feature expectation matching."

      • Another common constraint is to encourage the reward function to be as simple as possible, similar to L1 regularization in supervised learning.
    • 2- Two incomplete models:
      • 2.1- How to deal with incomplete/absent model of transition probabilities?
      • 2.2- How to select the reward features?
        • "[One could] use neural networks as function approximators that avoid the cumbersome hand-engineering of appropriate reward features".

      • "These extensions share similarity with model-free RL where the transition model and reward function features are also unknown".

    • 3- How to deal with noisy demonstrations?
      • Most approaches assume a Gaussian noise and therefore apply Gaussian filters.
  • How to classify IRL methods?
    • It can be useful to ask yourself two questions:
      • 1- What are the parameters of the hypothesis R function?
        • Most approaches use the "linear approximation" and try to estimate the weights of the linear combination of features.
      • 2- What for "Divergence Metric", i.e. how to evaluate the discrepancy to the expert demonstrations?
        • "[it boils down to] a search in reward function space that terminates when the behavior derived from the current solution aligns with the observed behavior."

        • How to measure the closeness or the similarity to the expert?
          • 1- Compare the policies (i.e. the behaviour).
            • E.g. how many <state, action> pairs are matching?
            • "A difference between the two policies in just one state could still have a significant impact."

          • 2- Compare the value functions (they are defined over all states).
            • The authors mention the inverse learning error (ILE) = || V(expert policy) - V(learnt policy) || and the value loss (used as a margin).
    • Classification:
      • Margin-based optimization: Learn a reward function that explains the demonstrated policy better than alternative policies by a margin (address IRL's "solution ambiguity").
        • The intuition here is that we want a reward function that clearly distinguishes the optimal policy from other possible policies.
      • Entropy-based optimization: Apply the "maximum entropy principle" (together with the "feature expectations matching" constraint) to obtain a distribution over potential reward functions.
      • Bayesian inference to derive P(^R|demonstration).
        • What about the likelihood P(<s, a> | ^R)? This probability is proportional to the exponentiated value function: exp(Q[s, a]).
      • Regression and classification.
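A minimal sketch of the state-visitation frequency mentioned above, for a fixed policy in a toy discrete MDP, and of how combining it with a linear reward gives back the value function as a linear combination of expected feature counts. The transition matrix, features and weights are illustrative assumptions.

```python
import numpy as np

n_states = 4
P_pi = np.array([[0.1, 0.9, 0.0, 0.0],     # P(s' | s) under the fixed policy pi
                 [0.0, 0.1, 0.9, 0.0],
                 [0.0, 0.0, 0.5, 0.5],
                 [0.5, 0.0, 0.0, 0.5]])
d0 = np.array([1.0, 0.0, 0.0, 0.0])        # initial state distribution
gamma = 0.95

# Discounted state-visitation frequency: psi = sum_t gamma^t * d_t, with d_{t+1} = d_t P_pi.
psi, d_t = np.zeros(n_states), d0.copy()
for t in range(1000):
    psi += (gamma ** t) * d_t
    d_t = d_t @ P_pi

# With a linear reward R(s) = theta . phi(s), the value of pi is a linear combination
# of the expected feature counts: V(pi) = theta . (Phi^T psi).
Phi = np.eye(n_states)                     # toy features: one-hot state indicators
theta = np.array([0.0, 0.2, 1.0, -0.5])
print("V(pi) =", theta @ (Phi.T @ psi))
```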

"Learning Reward Functions for Optimal Highway Merging"

  • [ 2019 ] [📝] [ 🎓 Stanford ] [🎞️]
  • [ reward engineering ]
Click to expand

Author: Weiss, E.

The assumption-free reward function that uses a simple polynomial form based on state and action values at each time step does better at minimizing both safety and mobility objectives, even though it does not incorporate human knowledge of typical reward function structures. About Pareto optimum: at these points, it becomes impossible to improve in the minimization of one objective without worsening our minimization of the other objective. Source.
The assumption-free reward function that uses a simple polynomial form based on state and action values at each time step does better at minimizing both safety and mobility objectives, even though it does not incorporate human knowledge of typical reward function structures. About Pareto optimum: at these points, it becomes impossible to improve in the minimization of one objective without worsening our minimization of the other objective. Source.
  • What?
  • My main takeaway:
    • A simple problem that illustrates the need for (learning more about) IRL.
  • The merging task is formulated as a simple MDP:
    • The state space has 3 dimensions and is discretized: lateral + longitudinal ego position and longitudinal position of the other car.
    • The other vehicle transitions stochastically (T) according to three simple behavioural models: fast, slow, average speed driving.
    • The main contribution concerns the reward design: how to shape the reward function for this multi-objective (trade-off safety / efficiency) optimization problem?
  • Two reward functions (R) are compared:
    • 1- "The first formulation models rewards based on our prior knowledge of how we would expect autonomous vehicles to operate, directly encoding human values such as safety and mobility into this problem as a positive reward for merging, a penalty for merging close to the other vehicle, and a penalty for staying in the on-ramp."

    • 2- "The second reward function formulation assumes no prior knowledge of human values and instead comprises a simple degree-one polynomial expression for the components of the state and the action."

      • The parameters are tuned using a sort of grid search (no proper IRL).
  • How to compare them?
    • Since both T and R are known, a planning (as opposed to learning) algorithm can be used to find the optimal policy. Here, value iteration is implemented (see the sketch at the end of this entry).
    • The resulting agents are then evaluated based on two conflicting objectives:
      • "Minimizing the distance along the road at which point merging occurs and maximizing the gap between the two vehicles when merging."

    • Next step will be proper IRL:
      • "We can therefore conclude that there may exist better reward functions for capturing optimal driving policies than either the intuitive prior knowledge reward function or the polynomial reward function, which doesn’t incorporate any human understanding of costs associated with safety and efficiency."


"Game-theoretic Modeling of Traffic in Unsignalized Intersection Network for Autonomous Vehicle Control Verification and Validation"

  • [ 2019 ] [📝] [ 🎓 University of Michigan and Bilkent University, Ankara ]
  • [ DAgger, level-k control policy ]
Click to expand

Authors: Tian, R., Li, N., Kolmanovsky, I., Yildiz, Y., & Girard, A.


"Interactive Decision Making for Autonomous Vehicles in Dense Traffic"

  • [ 2019 ] [📝] [ 🚗 Honda ]

  • [ game tree search, interaction-aware decision making ]

Click to expand
In the rule-based stochastic driver model describing the other agents, 2 thresholds are introduced: The reaction threshold, sampled from the range [−1.5 m, 0.4 m], describes whether or not the agent reacts to the ego car. The aggression threshold, uniformly sampled from [−2.2 m, 1.1 m], describes how the agent reacts. Source.
In the rule-based stochastic driver model describing the other agents, 2 thresholds are introduced: The reaction threshold, sampled from the range [−1.5 m, 0.4 m], describes whether or not the agent reacts to the ego car. The aggression threshold, uniformly sampled from [−2.2 m, 1.1 m], describes how the agent reacts. Source.
Two tree searches are performed: The first step is to identify a target merging gap based on the probability of a successful merge for each of them. The second search involves forward simulation and collision checking for multiple ego and traffic intentions. In practice the author found that ''the coarse tree - i.e. with intention only - was sufficient for long term planning and only one intention depth needed to be considered for the fine-grained search''. This reduces this second tree to a matrix game. Source.
Two tree searches are performed: The first step is to identify a target merging gap based on the probability of a successful merge for each of them. The second search involves forward simulation and collision checking for multiple ego and traffic intentions. In practice the author found that ''the coarse tree - i.e. with intention only - was sufficient for long term planning and only one intention depth needed to be considered for the fine-grained search''. This reduces this second tree to a matrix game. Source.

Author: Isele, D.

  • Three motivations when working on decision-making for merging in dense traffic:
    • 1- Prefer game theory approaches over rule-based planners.
      • To avoid the frozen robot issue, especially in dense traffic.
      • "If the ego car were to wait for an opening, it may have to wait indefinitely, greatly frustrating drivers behind it".

    • 2- Prefer the stochastic game formulation over MDP.
      • Merging in dense traffic involves interacting with self-interested agents ("self-interested" in the sense that they want to travel as fast as possible without crashing).
      • "MDPs assume agents follow a set distribution which limits an autonomous agent’s ability to handle non-stationary agents which change their behaviour over time."

      • "Stochastic games are an extension to MDPs that generalize to multiple agents, each of which has its own policy and own reward function."

      • In other words, stochastic games seem more appropriate to model interactive behaviours, especially in the forward rollout of tree search:
        • An interactive prediction model based on the concept of counterfactual reasoning is proposed.
        • It describes how behaviour might change in response to ego agent intervention.
    • 3- Prefer tree search over neural networks.
      • "Working with the game trees directly produces interpretable decisions which are better suited to safety guarantees, and ease the debugging of undesirable behaviour."

      • In addition, it is possible to include stochasticity for the tree search.
        • More precisely, the probability of a successful merge is computed for each potential gap based on:
          • The traffic participant’s willingness to yield.
          • The size of the gap.
          • The distance to the gap (from our current position).
  • How to model other participants, so that they act "intelligently"?
    • "In order to validate our behaviour we need interactive agents to test against. This produces a chicken and egg problem, where we need to have an intelligent agent to develop and test our agent. To address this problem, we develop a stochastic rule-based merge behaviour which can give the appearance that agents are changing their mind."

    • This merging-response driver model builds on the ideas of IDM, introducing two thresholds (cf. figure and the sketch at the end of this entry):
      • One threshold governs whether or not the agent reacts to the ego car,
      • The second threshold determines how the agent reacts.
      • "This process can be viewed as a rule-based variant of negotiation strategies: an agent proposes he/she go first by making it more dangerous for the other, the other agent accepts by backing off."

  • How to reduce the computational complexity of the probabilistic game tree search, while keeping safety considerations?
    • The forward simulation and the collision checking are costly operations. Especially when the depth of the tree increases.
    • Some approximations include reducing the number of actions (for both the ego- and the other agents), reducing the number of interacting participants and reducing the branching factor, as can be seen in the steps of the presented approach:
      • 1- Select an intention class based on a coarse search: the ego-actions are decomposed into a sub-goal selection task and a within-sub-goal set of actions.
      • 2- Identify the interactive traffic participant: it is assumed that at any given time, the ego-agent interacts with only one other agent.
      • 3- Predict other agents' intentions: working with intentions, the continuous action space can be discretized. It reminds me of the concept of temporal abstraction, which reduces the depth of the search.
      • 4- Sample and evaluate the ego intentions: a set of safe (absence of collision) ego-intentions can be generated and assessed.
      • 5- Act, observe, and update our probability models: i.e. the probability of a safe, successful merge.
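A minimal sketch of the two-threshold stochastic driver model. The threshold ranges follow the figure caption; how the thresholds are compared to the ego car's advance is my own illustrative reading, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_driver():
    return {
        "reaction_threshold": rng.uniform(-1.5, 0.4),   # [m] whether the agent reacts to the ego car
        "aggression_threshold": rng.uniform(-2.2, 1.1), # [m] how the agent reacts
    }

def reacts_to_ego(driver, ego_advance):
    """Assumption: the agent starts reacting once the ego car's advance into the gap exceeds its reaction threshold."""
    return ego_advance > driver["reaction_threshold"]

def yields_to_ego(driver, ego_advance):
    """Assumption: among reacting agents, only those whose aggression threshold is also exceeded back off."""
    return reacts_to_ego(driver, ego_advance) and ego_advance > driver["aggression_threshold"]

# Toy usage: empirical probability that a driver yields for a given ego advance.
drivers = [sample_driver() for _ in range(1000)]
print(np.mean([yields_to_ego(d, ego_advance=0.5) for d in drivers]))
```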

"Adaptive Robust Game-Theoretic Decision Making for Autonomous Vehicles"

  • [ 2019 ] [📝] [ 🎓 University of Michigan ] [:octocat:]

  • [ k-level strategy, MPC, interaction-aware prediction ]

Click to expand
The agent maintains a belief on the k parameter of the other vehicles and updates it at each step. Source.
The agent maintains a belief on the k parameter of the other vehicles and updates it at each step. Source.

Authors: Sankar, G. S., & Han, K.

  • One related work (described further below): Decision making in dynamic and interactive environments based on cognitive hierarchy theory: Formulation, solution, and application to autonomous driving by (Li, S., Li, N., Girard, A., & Kolmanovsky, I. 2019).
  • One framework: "level-k game-theoretic framework".
    • It is used to model the interactions between vehicles, taking into account the rationality of the other agents.
    • The agents are categorized into a hierarchical structure according to their cognitive abilities, parametrized with a reasoning depth k in {0, 1, 2}.
      • A level-0 vehicle considers the other vehicles in the traffic scenario as stationary obstacles, hence being "aggressive".
      • A level-1 agent assumes other agents are at level-0. ...
    • This parameter k is what the agent must estimate to model the interaction with the other vehicles.
  • One term: "disturbance set".
    • This set, denoted W, describes the uncertainty in the position estimates of the other vehicles (with some delta, similar to the variance in Kalman filters).
    • It should capture both the uncertainty about the transition model and the uncertainty about the driver models.
    • This set is considered when taking action using a "feedback min-max strategy".
      • I must admit I did not fully understand the concept. Here is a quote:
      • "The min-max strategy considers the worst-case disturbance affecting the behaviour/performance of the system and provides control actions to mitigate the effect of the worst-case disturbance."

    • The important idea is to adapt the size of this W set in order to avoid over-conservative behaviours (compared to reachable-set methods).
      • This is done based on the confidence in the estimated driver model (probability distribution of the estimated k) for the other vehicles.
        • If the agent is sure that the other car follows model 0, then it should be "fully" conservative.
        • If the agent is sure it follows level 1, then it could relax its conservatism (i.e. reduce the size of the disturbance set) since it is taken into consideration.
  • I would like to draw some parallels:
    • With (PO)MDP formulation: for the use of a transition model (or transition function) that is hard to define.
    • With POMDP formulation: for the tracking of believes about the driver model (or intention) of other vehicles.
      • The estimate of the probability distribution (for k) is updated at every step (a minimal belief-update sketch is given at the end of this entry).
    • With IRL: where the agent can predict the reaction of other vehicles assuming they act optimally w.r.t. a reward function it is estimating.
    • With MPC: the choice of the optimal control following a receding horizon strategy.
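A minimal sketch of maintaining and updating a belief over the reasoning level k of another vehicle, in the spirit of the POMDP-like belief tracking mentioned above. The softmax-style likelihood over each level's predicted action is an illustrative assumption, not the paper's update rule.

```python
import numpy as np

levels = [0, 1, 2]
belief = np.ones(len(levels)) / len(levels)       # uniform prior over k

def likelihood(observed_action, predicted_actions, beta=2.0):
    """P(observed action | level k): higher when the level-k prediction matches the observation."""
    errors = np.array([abs(observed_action - a_k) for a_k in predicted_actions])
    return np.exp(-beta * errors)

def update_belief(belief, observed_action, predicted_actions):
    posterior = belief * likelihood(observed_action, predicted_actions)
    return posterior / posterior.sum()

# Toy usage: each level-k model predicted a different acceleration; the observation is -1.1 m/s^2.
belief = update_belief(belief, observed_action=-1.1, predicted_actions=[-2.0, -1.0, 0.5])
print(belief)   # mass shifts towards k = 1
```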

"Towards Human-Like Prediction and Decision-Making for Automated Vehicles in Highway Scenarios"

  • [ 2019 ] [📝] [:octocat:] [🎞️] [ 🎓 INRIA ] [ 🚗 Toyota ]

  • [ maximum entropy IRL ]

Click to expand
  • Note:
    • this 190-page thesis is also referenced in the sections for prediction and planning.
    • I really like how the author organizes synergies between three modules that are split and made independent in most modular architectures:
      • (1) driver model
      • (2) behaviour prediction
      • (3) decision-making

Author: Sierra Gonzalez, D.

  • Related work: there are close concepts to the approach of (Kuderer et al., 2015) referenced below.
  • One idea: encode the driving preferences of a human driver with a reward function (or cost function), mentioning a quote from Abbeel, Ng and Russell:

“The reward function, rather than the policy or the value function, is the most succinct, robust, and transferable definition of a task”.

  • Other ideas:

    • Use IRL to avoid the manual tuning of the parameters of the reward model. Hence learn a cost/reward function from demonstrations.
    • Include dynamic features, such as the time-headway, in the linear combination of the cost function, to take the interactions between traffic participants into account.
    • Combine IRL with a trajectory planner based on "conformal spatiotemporal state lattices".
      • The motivation is to deal with continuous state and action spaces and handle the presence of dynamic obstacles.
      • Several advantages (I honestly did not understand that point): the ability to exploit the structure of the environment, to consider time as part of the state-space and respect the non-holonomic motion constraints of the vehicle.
  • One term: "planning-based motion prediction".

    • The resulting reward function can be used to generate trajectories (for prediction), using optimal control.
    • Simply put, it can be assumed that each vehicle in the scene behaves in the "risk-averse" manner encoded by the model, i.e. choosing actions leading to the lowest cost / highest reward.
    • This method is also called "model-based prediction" since it relies on a reward function or on the models of an MDP.
    • This prediction tool is not used alone but rather coupled with some DBN-based manoeuvre estimation (detailed in the section on prediction).

"An Auto-tuning Framework for Autonomous Vehicles"

  • [ 2018 ] [📝] [🚗 Baidu]

  • [ max-margin ]

Click to expand
Source.
Two ideas of rank-based conditional IRL framework (RC-IRL): Conditional comparison (left) and Rank-based learning (middle - is it a loss? I think you want to maximize this term instead?). Right: Based on the idea of the maximum margin, the goal is to find the direction that clearly separates the demonstrated trajectory from randomly generated ones. Illustration of the benefits of using RC to prevent background shifting: Even if the optimal reward function direction is the same under the two scenarios, it may not be ideal to train them together because the optimal direction may be impacted by overfitting the background shifting. Instead, the idea of conditioning on scenarios can be viewed as a pairwise comparison, which can remove the background differences. Source.
Source.
The human expert trajectory and randomly generated sample trajectories are sent to a SIAMESE network in a pair-wise manner. Again, I do not understand very well. Source.

Authors: Fan, H., Xia, Z., Liu, C., Chen, Y., & Kong, Q.

  • Motivation:

    • Define an automatic tuning method for the cost function used in the Apollo EM-planning module to address many different scenarios.
    • The idea is to learn these parameters from human demonstration via IRL.
  • Two main ideas (to be honest, I have difficulties understanding their points):

  • 1- Conditional comparison.

    • How to measure similarities between the expert policy and a candidate policy?
      • Usually: compare the expectation of their value functions.
      • Here: compare their value functions evaluated state by state.
    • Why "conditional"?
      • Because the loss function is conditional on states.
        • This can allegedly significantly reduce the background variance.
        • The authors use the term "background variance" to refer to the "differences in behaviours metrics", due to the diversity of scenarios. (Not very clear to me.)
      • "Instead, the idea of conditioning on scenarios can be viewed as a pairwise comparison, which can remove the background differences."

  • 2- Rank-based learning (a minimal pairwise-ranking sketch is given at the end of this entry).

    • "To accelerate the training process and extend the coverage of corner cases, we sample random policies and compare against the expert demonstration instead of generating the optimal policy first, as in policy gradient."

    • Why "ranked"?
      • "Our assumption is that the human demonstrations rank near the top of the distribution of policies conditional on initial state on average."

      • "The value function is a rank or search objective for selecting best trajectories in the online module."


"Car-following method based on inverse reinforcement learning for autonomous vehicle decision-making"

  • [ 2018 ] [📝] [ 🎓 Tsinghua University, California Institute of Technology, Hunan University ]

  • [ maximum-margin IRL ]

Click to expand
Kernel functions are used on the continuous state space to obtain a smooth reward function using linear function approximation. Source.
Kernel functions are used on the continuous state space to obtain a smooth reward function using linear function approximation. Source.
As often, the divergence metric - to measure the gap between one candidate and the expert - is the expected value function. Example of how to use 2 other candidate policies. I am still confused that each of their decisions is based on a state seen by the expert, i.e. they are not building their own full trajectory. Source.
As often, the divergence metric (to measure the gap between one candidate and the expert) is the expected value function estimated on sampled trajectories. Example of how to use 2 other candidate policies. I am still confused that each of their decisions is based on a state seen by the expert, i.e. they are not building their own full trajectory. Source.

Authors: Gao, H., Shi, G., Xie, G., & Cheng, B.

  • One idea: A simple and "educationally relevant" application of IRL and a good implementation of the algorithm of (Ng A. & Russell S., 2000): Algorithms for Inverse Reinforcement Learning.
    • Observe human behaviours during a "car following" task, assume his/her behaviour is optimal w.r.t. a hidden reward function, and try to estimate that function.
    • Strong assumption: no lane-change, no overtaking, no traffic-light. In other words, only the longitudinal control is considered.
  • Which IRL method?
    • Maximum-margin: the aim is to learn a reward function that explains the demonstrated policy better than alternative policies by a margin.
    • The "margin" is there to address IRL's solution ambiguity.
  • Steps:
    • 1- Define a simple 2d continuous state space s = (s0, s1).
      • s0 = ego-speed divided into 15 intervals (each centre will serve to build means for Gaussian kernel functions).
      • s1 = dist-to-leader divided into 36 intervals (same remark).
      • A normalization is additionally applied.
    • 2- Feature transformation: Map the 2d continuous state to a finite number of features using kernel functions.
      • I recommend this short video about feature transformation using kernel functions.
      • Here, Gaussian radial kernel functions are used:
        • Why "radial"? The closer the state to the centre of the kernel, the higher the response of the function. And the further you go, the larger the response "falls".
        • Why "Gaussian"? Because the standard deviation describes how sharp that "fall" is.
        • Note that this functions are 2d: mean = (the centre of one speed interval, the centre of one dist interval).
      • The distance of the continuous state s = (s0, s1) to each of the 15*36=540 means s(i, j) can be computed.
      • This gives 540 kernel features f(i, j) = K(s, s(i, j)) (see the sketch at the end of this entry).
    • 3- The one-step reward is assumed to be linear combination of that features.
      • Given a policy, a trajectory can be constructed.
        • This is a list of states. This list can be mapped to a list of rewards.
        • The discounted sum of this list leads to the trajectory return, seen as the expected value function.
      • One could also form 540 lists for this trajectory (one per kernel feature). Then reduce them by discounted_sum(), leading to 540 V_f(i, j) per trajectory.
        • The trajectory return is then a simple linear combination: sum over (i, j) of theta(i, j) * V_f(i, j).
      • This can be computed for the demonstrating expert, as well as for many other policies.
      • Again, the task is to tune the weights so that the expert results in the largest values, against all possible other policies.
    • 4- The goal is now to find the 540 theta(i, j) weights parameters solution of the max-margin objective:
      • One goal: costly single-step deviation.
        • Try to maximize the smallest difference one could find.
          • I.e. select the best non-expert-policy action and try to maximize the difference to the expert-policy action in each state.
          • max[over theta] of min[over π] of the sum[over i, j] of theta(i, j) * [f_expert(i, j) - f_candidate(i, j)].
        • As often the value function serves as "divergence metric".
      • One side heuristic to remove degenerate solutions:
        • "The reward functions with many small rewards are more natural and should be preferred". from here.

        • Hence a regularization constraint (a constraint, not a loss like L1!) on the theta(i, j).
      • The optimization problem with strict constraint is transformed into an optimization problem with "inequality" constraint.
        • Violating constraints is allowed but penalized.
        • As I understood from my readings, that relaxes the linear assumption in the case the true reward function cannot be expressed as a linear combination of the fixed basis functions.
      • The resulting system of equations is solved here with Lagrange multipliers (linear programming was recommended in the original max-margin paper).
    • 5- Once the theta(i, j) are estimated, the R can be expressed.
  • About the other policy "candidates":
    • "For each optimal car-following state, one of the other car-following actions is randomly selected for the solution".

    • In other words, in V(expert) > V(other_candidates) goal, "other_candidates" refers to random policies.
    • It would have been interesting to have "better" competitors, for instance policies that are optimal w.r.t. the current estimate of the R function. E.g. learnt with RL algorithms.
      • That would lead to an iterative process that stops when R converges.
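As referenced in the steps above, here is a minimal sketch of the Gaussian radial kernel features and of the resulting linear one-step reward. The speed/distance ranges and the kernel widths are illustrative assumptions, not the paper's values.

```python
import numpy as np

speed_centres = np.linspace(0, 30, 15)        # centres of the 15 speed intervals [m/s]
dist_centres = np.linspace(0, 120, 36)        # centres of the 36 distance intervals [m]
centres = np.array([(v, d) for v in speed_centres for d in dist_centres])   # 540 x 2
sigma = np.array([1.0, 2.0])                  # kernel widths per dimension (assumption)

def kernel_features(state):
    """f(i, j) = K(s, s(i, j)): Gaussian response, falling off with the distance to each centre."""
    diff = (np.asarray(state) - centres) / sigma
    return np.exp(-0.5 * np.sum(diff ** 2, axis=1))   # shape (540,)

def reward(theta, state):
    """One-step reward as a linear combination of the 540 kernel features."""
    return float(theta @ kernel_features(state))

# Toy usage: state = (ego speed, distance to leader).
theta = np.random.default_rng(0).normal(size=540)
print(reward(theta, state=(12.0, 35.0)))
```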

"A Human-like Trajectory Planning Method by Learning from Naturalistic Driving Data"

  • [ 2018 ] [📝] [ 🎓 Peking University ] [ 🚗 Groupe PSA ]

  • [ sampling-based trajectory planning ]

Click to expand
Source.
Source.

Authors: He, X., Xu, D., Zhao, H., Moze, M., Aioun, F., & Franck, G.

  • One idea: couple learning and sampling for motion planning.
    • More precisely, learn from human demonstrations (offline) how to weight different contributions in a cost function (as opposed to hand-crafted approaches).
    • This cost function is then used for trajectory planning (online) to evaluate and select one trajectory to follow, among a set of candidates generated by sampling methods.
  • One detail: the weights of the optimal cost function minimise the sum of [prob(candidate) * similarities(demonstration, candidate)].
    • It is clear to me how a cost can be converted to some probability, using softmax().
    • But for the similarity measure of a trajectory candidate, how to compute "its distance to the human driven one at the same driving situation"?
    • Should the expert car have driven exactly on the same track before or is there any abstraction in the representation of the situation?
    • How can it generalize at all if the similarity is estimated only on the location and velocity? The traffic situation will be different between two drives.
  • One quote:

"The more similarity (i.e. less distance) the trajectory has with the human driven one, the higher probability it has to be selected."


"Learning driving styles for autonomous vehicles from demonstration"

  • [ 2015 ] [📝] [ 🎓 University of Freiburg ] [ 🚗 Bosch ]

  • [ MaxEnt IRL ]

Click to expand
Source.
Source.

Authors: Kuderer, M., Gulati, S., & Burgard, W.

  • One important contribution: Deal with continuous features such as integral of jerk over the trajectory.
  • One motivation: Derive a cost function from observed trajectories.
    • The trajectory object is first mapped to some feature vector (speed, acceleration ...).
  • One Q&A: How to then derive a cost (or reward) from these features?
    • The authors assume the cost function to be a linear combination of the features.
    • The goal is then about learning the weights.
    • They acknowledge in the conclusion that it may be a too simple model. Maybe neural nets could help to capture some more complex relations.
  • One concept: "Feature matching":
    • "Our goal is to find a generative model p(traj| weights) that yields trajectories that are similar to the observations."

    • How to define the "Similarity"?
      • The "features" serve as a measure of similarity.
  • Another concept: "ME-IRL" = Maximum Entropy IRL.
    • One issue: This "feature matching" formulation is ambiguous.
      • There are potentially many (degenerate) solutions p(traj| weights). For instance weights = zeros.
    • One idea is to introduce an additional goal:
      • In this case: "Among all the distributions that match the features, select the one that maximizes the entropy."
    • The probability distribution over trajectories is in the form exp(-cost[features(traj), θ]), to model that agents are exponentially more likely to select trajectories with lower cost.
  • About the maximum likelihood approximation in MaxEnt-IRL:
    • The gradient of the Lagrangian cost function turns out to be the difference between two terms (written out after this list):
      • 1- The empirical feature values (easy to compute from the recorded demonstrations).
      • 2- The expected feature values (hard to compute: it requires integrating over all possible trajectories).
        • An approximation is made to estimate the expected feature values: the authors compute the feature values of the "most" likely trajectory, instead of computing the expectations by sampling.
      • Interpretation:
        • "We assume that the demonstrations are in fact generated by minimizing a cost function (IOC), in contrast to the assumption that demonstrations are samples from a probability distribution (IRL)".

  • One related work:
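For reference, the MaxEnt-IRL gradient discussed above can be written as follows (reward form; Kuderer et al. work with a cost, which flips the sign in the exponent). The approximation on the right replaces the expectation over all trajectories by the feature values of the most likely trajectory τ* under the current weights, with f̃ the empirical mean of the demonstrated feature values.

```latex
p_{\theta}(\tau) \propto \exp\!\big(\theta^{\top} f(\tau)\big)
\qquad
\nabla_{\theta} \log \mathcal{L}(\theta)
  \;=\;
  \underbrace{\tilde{f}}_{\text{empirical feature values}}
  \;-\;
  \underbrace{\mathbb{E}_{\tau \sim p_{\theta}}\big[f(\tau)\big]}_{\text{expected feature values}}
  \;\approx\;
  \tilde{f} - f\big(\tau^{*}_{\theta}\big)
```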


Prediction and Manoeuvre Recognition


"Modeling and Prediction of Human Driver Behavior: A Survey"

  • [ 2020 ] [📝] [ 🎓 Stanford, University of Illinois ] [🚗 Qualcomm]

  • [ state estimation, intention estimation, trait estimation, motion prediction ]

Click to expand
Source.
Terminology: the problem is formulated as a discrete-time multi-agent partially observable stochastic game (POSG). In particular, the internal state can contain the agent's navigational goals or behavioural traits. Source.

Authors: Brown, K., Driggs-Campbell, K., & Kochenderfer, M. J.

  • Motivation:

    • A review and taxonomy of 200 models from the literature on driver behaviour modelling.
      • "In the context of the partially observable stochastic game (POSG) formulation, a driver behavior model is a collection of assumptions about the human observation function G, internal-state update function H and policy function π (the state-transition function F also plays an important role in driver-modeling applications, though it has more to do with the vehicle than the driver)."

      • The "References" section is large and good!
    • Models are categorized based on the tasks they aim to address.
      • 1- state estimation.
      • 2- intention estimation.
      • 3- trait estimation.
      • 4- motion prediction.
  • The following lists are non-exhaustive, see the tables for full details. Instead, they try to give an overview of the most represented instances:

  • 1- (Physical) state estimation.

    • [Algorithm]: approximate recursive Bayesian filters. E.g. KF, PF, moving average filter.
    • "Some advanced state estimation models take advantage of the structure inherent in the driving environment to improve filtering accuracy. E.g. DBN".

  • 2- (Internal states) intention estimation.

    • "Intention estimation usually involves computing a probability distribution over a finite set of possible behavior modes - often corresponding to navigational goals (e.g., change lanes, overtake) - that a driver might execute in the current situation."

    • [Architecture]: [D]BN, SVM, HMM, LSTM.
    • [Scope]: highway, intersection, lane-changing, urban.
    • [Evaluation]: accuracy (classification), ROC curve, F1, false positive rate.
    • [Intention Space] (set of possible behaviour modes that may exist in a driver’s internal state - often combined):
      • lateral modes (e.g. lane-change / lane-keeping intentions),
      • routes (a sequence of decisions, e.g. turn right → go straight → turn right again that a driver may intend to execute),
      • longitudinal modes (e.g. car-following / cruising),
      • joint configurations.
      • "Configuration intentions are defined in terms of spatial relationships to other vehicles. For example, intention estimation for a merging scenario might involve reasoning about which gap between vehicles the target car intends to enter. The intention space of a car in the other lane might be whether or not to yield and allow the merging vehicle to enter."

    • [Hypothesis Representation] (how to represent uncertainty in the intention hypothesis?): discrete probability distribution over possible intentions.
      • "In contrast, point estimate hypothesis ignores uncertainty and simply assigns a probability of 1 to a single (presumably the most likely) behavior mode."

    • [Estimation / Inference Paradigm]: single-shot, recursive, Bayesian (based on probabilistic graphical models), black-box, game theory.
      • "Recursive estimation algorithms operate by repeatedly updating the intention hypothesis at each time step based on the new information received. In contrast, single-shot estimators compute a new hypothesis from scratch at each inference step. The latter may operate over a history of observations, but it does not store any information between successive inference iterations."

      • "Game-theoretic models are distinguished by being interaction-aware. They explicitly consider possible situational outcomes in order to compute or refine an intention hypothesis. This interaction-awareness can be as simple as pruning intentions with a high probability of conflicting with other drivers, or it can mean computing the Nash equilibrium of an explicitly formulated game with a payoff matrix."

  • 3- trait estimation.

    • "Whereas intention estimation reasons about what a driver is trying to do, trait estimation reasons about factors that affect how the driver will do it. Broadly speaking, traits encompass skills, preferences, and style, as well as properties like fatigue, distractedness, etc."

    • "Trait estimation may be interpreted as the process of inferring the “parameters” of the driver’s policy function π on the basis of observed driving behavior. [...] Traits can also be interpreted as part of the driver’s internal state."

    • [Architecture]: IDM, MOBIL, reward parameters.

    • [Training]: IRL, EM, genetic algorithms, heuristic.

    • [Theory]: Inverse RL.

    • [Scope]: car following (IDM), highway, intersection, urban.

    • [Trait Space]: policy parameters, reward parameters (assuming that drivers are trying to optimize a cost function).

      • "Some of the most widely known driver models are simple parametric controllers with tuneable “style” or “preference” policy parameters that represent intuitive behavioral traits of drivers. E.g. IDM."

      • IDM traits: minimum desired gap, desired time headway, maximum feasible acceleration, preferred deceleration, maximum desired speed.
      • "Reward function parameters often correspond to the same intuitive notions mentioned above (e.g., preferred velocity), the important difference being that they parametrize a reward function rather than a closed-loop control policy."

    • [Hypothesis Representation] (uncertainty): in almost all cases, the hypothesis is represented by a point estimate rather than a distribution.

    • [Estimation Paradigm]: offline / online.

      • "Some models combine the two paradigms by computing a prior distribution offline, then tuning it online. This tuning procedure often relies on Bayesian methods."

    • [Model Class]: heuristic, optimization, Bayesian, IRL, contextually varying.

      • "One simple approach to offline trait estimation is to set trait parameters heuristically. Specifying parameters manually is one way to incorporate expert domain knowledge into models."

      • "In some approaches, trait parameters are modeled as contextually varying, meaning that they vary based on the region of the state space (the context) or the current behavior mode."

  • 4- motion prediction.

    • "Infer the future physical states of the surrounding vehicles".

    • [Architecture]: IDM, LSTM (and other RNN/NN), constant acceleration / speed (CA, CV), encoder-decoder, GMM, GP, adaptive, spline.
    • [Training]: heuristic.
      • "Simple examples include rule-based heuristic control laws like IDM. More sophisticated examples include closed-loop policies based on NN, DBN, and random forests."

    • [Theory]: RL, MPC, trajectory optimization.
      • "Some MPC policy models (including those used within a forward simulation paradigm) fall into the game theoretic category because they explicitly predict the future states of their environment (including other cars) before computing a planned trajectory."

    • [Scope]: highway, car-following (e.g. using IDM), intersection, urban.
    • [Evaluation]: RMSE, NLL, MAE, collision rate.
    • [Vehicle dynamics model]: linear, learned, bicycle kinematic.
      • "Many models in the literature assume linear state-transition dynamics. Linear models can be first order (i.e., output is position, input is velocity), second order (i.e., output is position, input is acceleration), and so forth."

      • "Kinematic models are simpler than dynamic models, but the no-slip assumption can lead to significant modeling errors."

      • "Some state-transition models are learned, in the sense that the observed correlation between consecutive predicted states results entirely from training on large datasets. Some incorporate an explicit transition model where the parameters are learned, whereas others simply output a full trajectory."

    • [Scene-level uncertainty modelling]:
      • single-scenario (ignoring multimodal uncertainty at the scene level),
      • partial scenario,
      • multi-scenario (reason about the different possible scenarios that may follow from an initial traffic scene),
      • reachable set.
      • "Some models reason only about a partial scenario, meaning they predict the motion of only a subset of vehicles in the traffic scene, usually under a single scenario."

      • "Some models reason about multimodal uncertainty on the scene-level by performing multiple (parallel) rollouts associated with different scenarios."

      • "Rather than reasoning about the likelihood of future states, some models reason about reachability. Reachability analysis implies taking a worst-case mindset in terms of predicting vehicle motion."

    • [Agent-level uncertainty modelling]: single deterministic, particle set, Gaussians.
    • [Prediction paradigm]:
      • open-loop independent trajectory prediction.
        • "Many models operate under the independent prediction paradigm, meaning that they predict a full trajectory independently for each agent in the scene. These approaches are interaction-unaware because they are open-loop. Though they may account for interaction between vehicles at the current time t, they do not explicitly reason about interaction over the prediction window from t+1 to the prediction horizon tf. [...] Because independent trajectory prediction models ignore interaction, their predictive power tends to quickly degrade as the prediction horizon extends further into the future."

      • closed-loop forward simulation.
        • "In the forward simulation paradigm, motion hypotheses are computed by rolling out a closed-loop control policy π for each target vehicle." - game theoretic prediction.

      • game theoretic prediction.
        • "Agents are modeled as looking ahead to consider the possible ramifications of their actions. This notion of looking ahead makes game-theoretic prediction models more deeply interaction-aware than forward simulation models based on reactive closed-loop control."


"Motion Prediction using Trajectory Sets and Self-Driving Domain Knowledge"

  • [ 2020 ] [📝] [🚗 nuTonomy]

  • [ multimodal, probabilistic, mode collapse, domain knowledge, classification ]

Click to expand
Source.
Top: The idea of CoverNet is to first generate feasible future trajectories, and then classify them. It uses the past states of all road users and a HD map to compute a distribution over a vehicle's possible future states. Bottom-left: the set of trajectories can be reduced by considering the current state and the dynamics: at high speeds, sharp turns are not dynamically feasible for instance. Bottom-right: the contribution here also deals with "feasibility", i.e. tries to reduce the set using domain knowledge. A second loss is introduced to penalize predictions that go off-road. The first loss (cross entropy with closest prediction treated as ground truth) is also adapted: instead of a delta distribution over the closest mode, there is also probability assigned to near misses. Source.

Authors: Boulton, F. A., Grigore, E. C., & Wolff, E. M.

  • Related work: CoverNet: Multimodal Behavior Prediction using Trajectory Sets, (Phan-Minh, Grigore, Boulton, Beijbom, & Wolff, 2019).

  • Motivation:

    • Extend their CoverNet by including further "domain knowledge".
      • "Both dynamic constraints and "rules-of-the-road" place strong priors on likely motions."

      • CoverNet: Predicted trajectories should be consistent with the current dynamic state.
      • This work : Predicted trajectories should stay on road.
    • The main idea is to leverage the map information by adding an auxiliary loss that penalizes off-road predictions.
  • Motivations and ideas of CoverNet:

    • 1- Avoid the issue of "mode collapse".
      • The prediction problem is treated as classification over a diverse set of trajectories.
      • The trajectory sets for CoverNet is available on nuscenes-devkit github.
    • 2- Ensure a desired level of coverage of the state space.
      • The larger and the more diverse the set, the higher the coverage. One can play with the resolution to ensure coverage guarantees, while pruning of the set improves the efficiency.
    • 3- Eliminate dynamically infeasible trajectories, i.e. introduced dynamic constraints.
      • Trajectories that are not physically possible are not considered, which limits the set of reachable states and improves the efficiency.
      • "We create a dynamic trajectory set based on the current state by integrating forward with our dynamic model over diverse control sequences."

  • Two losses:

    • 1- Moving beyond "standard" cross-entropy loss for classification.
      • What is the ground truth trajectory? Obviously, it is not part of the set.
      • One solution: designate the closest one in the set.
        • "We utilize cross-entropy with positive samples determined by the element in the trajectory set closest to the actual ground truth in minimum average of point-wise Euclidean distances."

        • Issue: This will penalize the second-closest trajectory just as much as the furthest, since it ignores the geometric structure of the trajectory set.
      • Another idea: use a weighted cross-entropy loss, where the weight is a function of distance to the ground truth.
        • "Instead of a delta distribution over the closest mode, there is also probability assigned to near misses."

        • A threshold defines which trajectories are "close enough" to the ground truth.
      • This weighted loss is adapted to favour mode diversity:
        • "We tried an "Avoid Nearby" weighted cross entropy loss that assigns weight of 1 to the closest match, 0 to all other trajectories within 2 meters of ground truth, and 1/|K| to the rest. We see that we are able to increase mode diversity and recover the performance of the baseline loss."

        • "Our results indicate that losses that are better able to enforce mode diversity may lead to improved performance."

    • 2- Add an auxiliary loss for off-road predictions.
      • This helps learn domain knowledge, i.e. partially encode "rules-of-the-road".
      • "This auxiliary loss can easily be pretrained using only map information (e.g., off-road area), which significantly improves performance on small datasets."

  • Related works for predictions (we want multimodal and probabilistic trajectory predictions):

    • Input, i.e. encoding of the scene:
      • "State-of-the-art motion prediction algorithms now typically use CNNs to learn appropriate features from a birds-eye-view rendering of the scene (map and road users)."

      • Graph neural networks (GNNs) look promising for encoding interactions.
      • Here: A BEV raster RGB image (fixed size) containing map information and the past states of all objects.
    • Output, i.e. representing the possible future motions:
      • 1- Generative models.
        • They encode choice over multiple actions via sampling latent variables.
        • Issue: multiple trajectory samples or 1-step policy rollouts (e.g. R2P2) are required at inference.
        • Examples: Stochastic policies, CVAEs and GANs.
      • 2- Regression.
        • Unimodal: predict a single future trajectory. Issue: unrealistically average over behaviours, even when predicting Gaussian uncertainty.
        • Multimodal: distribution over multiple trajectories. Issue: suffer from mode collapse.
      • 3- Classification.
        • "We choose not to learn an uncertainty distribution over the space. The density of our trajectory sets reduces its benefit compared to the case when there are a only a handful of modes."

        • How to deal with a varying number of classes to predict? Not clear to me.
  • How to solve mode collapse in regression?

    • The authors consider MultiPath by Waymo (detailed also in this page) as their baseline.
    • A set of anchor boxes can be used, much like in object detection:
    • "This model implements ordinal regression by first choosing among a fixed set of anchors (computed a priori) and then regressing to residuals from the chosen anchor. This model predicts a fixed number of trajectories (modes) and their associated probabilities."

    • The authors extend MultiPath with dynamically-computed anchors, based on the agent's current speed.
      • Again, it makes no sense to consider anchors that are not dynamically reachable.
      • They also found that using one order of magnitude more “anchor” trajectories than Waymo (64) is beneficial: better coverage of the space via anchors, leaving the network to learn smaller residuals.
  • Extensions:

    • As pointed out by this KIT Master Thesis offer, the current state of CoverNet only has a motion model for cars. Predicting bicycles and pedestrians' motions would be a next step.
    • Interactions are ignored for now.

"PnPNet: End-to-End Perception and Prediction with Tracking in the Loop"

  • [ 2020 ] [📝] [ 🎓 University of Toronto ] [🚗 Uber]

  • [ joint perception + prediction, multi-object tracking ]

Click to expand
Source.
The authors propose to leverage tracking for the joint perception+prediction task. Source.
Source.
Top: One main idea is to make the prediction module directly reuse the scene context captured in the perception features, and also consider the past object tracks. Bottom: a second contribution is the use of an LSTM as a sequence model to learn the object trajectory representation. This encoding is jointly used for the tracking and prediction tasks. Source.

Authors: Liang, M., Yang, B., Zeng, W., Chen, Y., Hu, R., Casas, S., & Urtasun, R.

  • Motivations:

    • 1- Perform perception and prediction jointly, with a single neural network.
      • Therefore it is called "Perception and Prediction": PnP.
      • The whole model is also said end-to-end, because it is end-to-end trainable.
        • This contrasts with modular sequential architectures where both the perception output and the map information are forwarded to an independent prediction module, for instance in a bird’s eye view (BEV) raster representation.
    • 2- Improve prediction by leveraging the (past) temporal information (motion history) contained in tracking results.
      • In particular, one goal is to recover from long-term object occlusion.
        • "While all these [vanilla PnP] approaches share the sensor features for detection and prediction, they fail to exploit the rich information of actors along the time dimension [...]. This may cause problems when dealing with occluded actors and may produce temporal inconsistency in predictions."

      • The idea is to include tracking in the loop to improve prediction (motion forecasting):
        • "While the detection module processes sequential sensor data and generates object detections at each time step independently, the tracking module associates these estimates across time for better understanding of object states (e.g., occlusion reasoning, trajectory smoothing), which in turn provides richer information for the prediction module to produce accurate future trajectories."

        • "Exploiting motion from explicit object trajectories is more accurate than inferring motion from the features computed from the raw sensor data. [this reduces the prediction error by (∼6%) in the experiment]"

      • All modules share computation as there is a single backbone network, and the full model can be trained end-to-end.
        • "While previous joint perception and prediction models make the prediction module another convolutional header on top of the detection backbone network, which shares the same features with the detection header, in PnPNet we put the prediction module after explicit object tracking, with the object trajectory representation as input."

  • How to represent (long-term) trajectories?

    • The idea is to capture both sensor observation and motion information of actors.
    • "For each object we first extract its inferred motion (from past detection estimates) and raw observations (from sensor features) at each time step, and then model its dynamics using a recurrent network."

    • [interesting choice] "For angular velocity of ego car we parameterize it as its cosine and sine values."

    • This trajectory representation is utilized in both tracking and prediction modules.
  • About multi-object tracking (MOT):

    • There exist two distinct challenges:
      • 1- The discrete problem of data association between previous tracks and current detections (see the sketch below).
        • Association errors (i.e., identity switches) are prone to accumulate through time.
        • "The association problem is formulated as a bipartite matching problem so that exclusive track-to-detection correspondence is guaranteed. [...] Solved with the Hungarian algorithm."

        • "Many frameworks have been proposed to solve the data association problem: e.g., Markov Decision Processes (MDP), min-cost flow, linear assignment problem and graph cut."

      • 2- The continuous problem of trajectory estimation.
        • In the proposed approach, the LSTM representations of newly associated tracks are refined to generate smoother trajectories:
          • "For trajectory refinement, since it reduces the localization error of online generated perception results, it helps establish a smoother and more accurate motion history."

    • The proposed multi-object tracker solves both problems, therefore it is said "discrete-continuous".
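
A minimal sketch of the discrete association step using the Hungarian algorithm from SciPy; the Euclidean cost and the gating threshold are illustrative placeholders, not PnPNet's learned affinities:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_xy, det_xy, gate=4.0):
    """Bipartite track-to-detection matching with a distance gate (in metres).
    track_xy: (M, 2) last known track positions, det_xy: (N, 2) detections."""
    cost = np.linalg.norm(track_xy[:, None] - det_xy[None], axis=-1)  # (M, N)
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    new_tracks = set(range(len(det_xy))) - {c for _, c in matches}     # unmatched detections
    lost_tracks = set(range(len(track_xy))) - {r for r, _ in matches}  # unmatched tracks
    return matches, new_tracks, lost_tracks
```
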

"VectorNet: Encoding HD Maps and Agent Dynamics from Vectorized Representation"

  • [ 2020 ] [📝] [📝] [🎞️] [🚗 Waymo]

  • [ GNN, vectorized representation ]

Click to expand
Source.
Both map features and our sensor input can be simplified into either a point, a polygon, or a curve, which can be approximately represented as polylines, and eventually further split into vector fragments. The set of such vectors forms a simplified, abstracted world, used to make predictions with less computation than rasterized images encoded with ConvNets. Source.
Source.
A vectorized representation of the scene is preferred to the combination (rasterized rendering + ConvNet encoding). A global interaction graph can be built from these vectorized elements, to model the higher-order relationships between entities. To further improve the prediction performance, a supervision auxiliary task is introduced. Source.

Authors: Gao, J., Sun, C., Zhao, H., Shen, Y., Anguelov, D., Li, C., & Schmid, C.

  • Motivations:
    • 1- Reduce computation cost while offering good prediction performances.
    • 2- Capture long range context information, for longer horizon prediction.
      • ConvNets are commonly used to encode the scene context, but they have limited receptive field.
      • And increasing kernel size and input image size is not so easy:
        • FLOPs of ConvNets increase quadratically with the kernel size and input image size.
        • The number of parameters increases quadratically with the kernel size.
    • Four ingredients:
      • 1- A vectorized representation is preferred to the combination (rasterized rendering + ConvNet encoding).
      • 2- A graph network, to model interactions.
      • 3- Hierarchy, to first encode map and sensor information, and then learn interactions.
      • 4- A supervision auxiliary task, in parallel to the prediction task.
  • Two kind of input:
    • 1- HD map information: Structured road context information such as lane boundaries, stop/yield signs, crosswalks and speed bumps.
    • 2- Sensor information: Agent trajectories.
  • How to encode the scene context information?
    • 1- Rasterized representation.
      • Rendering: in a bird-eye image, with colour-coded attributes. Issue: colouring requires manual specifications.
      • Encoding: encode the scene context information with ConvNets. Issue: receptive field may be limited.
      • "The most popular way to incorporate highly detailed maps into behavior prediction models is by rendering the map into pixels and encoding the scene information, such as traffic signs, lanes, and road boundaries, with a convolutional neural network (CNN). However, this process requires a lot of compute and time. Additionally, processing maps as imagery makes it challenging to model long-range geometry, such as lanes merging ahead, which affects the quality of the predictions."

      • Impacting parameters:
        • Convolutional kernel sizes.
        • Resolution of the rasterized images.
        • Feature cropping:
          • "A larger crop size (3 vs 1) can significantly improve the performance, and cropping along observed trajectory also leads to better performance."

    • 2- Vectorized representation
      • All map and trajectory elements can be approximated as sequences of vectors (see the sketch below).
      • "This avoids lossy rendering and computationally intensive ConvNet encoding steps."

    • About graph neural networks (GNNs), from Rishabh Anand's medium article:
      • 1- Given a graph, we first convert the nodes to recurrent units and the edges to feed-forward neural networks.
      • 2- Then we perform Neighbourhood Aggregation (Message Passing) for all nodes n number of times.
      • 3- Then we sum over the embedding vectors of all nodes to get the graph representation H. Here: the "global interaction graph".
      • 4- Feel free to pass H into higher layers or use it to represent the graph’s unique properties! Here: used to learn interaction models and make predictions.
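
Below is a minimal sketch of turning one polyline into vector node features, as mentioned a few bullets above; the exact feature layout is an assumption, loosely following the description:

```python
import numpy as np

def polyline_to_vectors(points, attributes, polyline_id):
    """points: (N, 2) ordered polyline points (e.g. a lane boundary or a past
    trajectory), attributes: (A,) attribute features (e.g. lane type, speed limit),
    polyline_id: integer id of the parent polyline.
    Returns (N-1, 4+A+1) node features for the polyline subgraph."""
    starts, ends = points[:-1], points[1:]
    attrs = np.tile(attributes, (len(starts), 1))
    ids = np.full((len(starts), 1), polyline_id, dtype=float)
    return np.concatenate([starts, ends, attrs, ids], axis=-1)

# Example: a 3-point lane boundary with a single attribute (speed limit in m/s)
vectors = polyline_to_vectors(np.array([[0., 0.], [5., 0.], [10., 1.]]),
                              np.array([13.9]), polyline_id=7)
```
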
  • About hierarchy:
    • 1-First, aggregate information among vectors inside a polyline, namely polyline subgraphs.
      • Graph neural networks (GNNs) are used to incorporate these sets of vectors.
      • "We treat each vector vi belonging to a polyline Pj as a node in the graph with node features."

      • How to encode attributes of these geometric elements? E.g. traffic light state, speed limit?
        • I must admit I did not fully understand. But from what I read on medium:
          • Each node has a set of features defining it.
          • Each edge may connect nodes together that have similar features.
    • 2- Then, model the higher-order relationships among polylines, directly from their vectorized form.
      • Two interactions are jointly modelled:
        • 1- The interactions of multiple agents.
        • 2- Their interactions with the entities from road maps.
          • E.g. a car enters an intersection, or a pedestrian approaches a crosswalk.
          • "We clearly observe that adding map information significantly improves the trajectory prediction performance."

  • An auxiliary task:
    • 1- Randomly masking out map features during training, such as a stop sign at a four-way intersection.
    • 2- Require the net to complete it.
    • "The goal is to incentivize the model to better capture interactions among nodes."

    • And to learn to deal with occlusion.
    • "Adding this objective consistently helps with performance, especially at longer time horizons".

  • Two training objectives:
    • 1- Main task = Prediction. Future trajectories.
    • 2- Auxiliary task = Supervision. Huber loss between predicted node features and ground-truth masked node features.
  • Evaluation metrics:
    • The "widely used" Average Displacement Error (ADE) computed over the entire trajectories.
    • The Displacement Error at t (DE@ts) metric, where t in {1.0, 2.0, 3.0} seconds.
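
A quick sketch of these two metrics; the 0.1 s sampling period is an assumption of the sketch:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error over the whole horizon. pred, gt: (T, 2)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def de_at(pred, gt, t_sec, dt=0.1):
    """Displacement Error at horizon `t_sec` seconds, for trajectories sampled
    every `dt` seconds."""
    idx = int(round(t_sec / dt)) - 1
    return np.linalg.norm(pred[idx] - gt[idx])
```
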
  • Performances and computation cost.
    • VectorNet is compared to ConvNets on the Argoverse forecasting dataset, as well as on some Waymo in-house prediction dataset.
    • ConvNets consume 200+ times more FLOPs than VectorNet for a single agent: 10.56G vs 0.041G. The factor is 5 when there are 50 agents per scene.
    • VectorNet needs 29% of the parameters of ConvNets: 72K vs 246K.
    • VectorNet achieves up to 18% better performance on Argoverse.

"Online parameter estimation for human driver behavior prediction"

  • [ 2020 ] [📝] [:octocat:] [🎓 Stanford] [🚗 Toyota Research Institute]

  • [ stochastic IDM ]

Click to expand
Source.
The vanilla IDM is a parametric rule-based car-following model that balances two forces: the desire to achieve free speed if there were no vehicle in front, and the need to maintain safe separation with the vehicle in front. It outputs an acceleration that is guaranteed to be collision free. The stochastic version introduces a new model parameter σ-IDM. Source.

Authors: Bhattacharyya, R., Senanayake, R., Brown, K., & Kochenderfer, M. J.

  • Motivations:
    • 1- Explicitly model stochasticity in the behaviour of individual drivers.
      • Complex multi-modal distributions over possible outcomes should be modelled.
    • 2- Provide safety guarantees.
    • 3- Highway scenarios: no urban intersections.
    • The methods should combine advantages of rule-based and learning-based estimation/prediction methods:
      • Interpretability.
      • Guarantees on safety (the learning-based model Generative Adversarial Imitation Learning (GAIL) used as baseline is not collision free).
      • Validity even in regions of the state space that are under-represented in the data.
      • High expressive power to capture nuanced driving behaviour.
  • About the method:
    • "We apply online parameter estimation to an extension of the Intelligent Driver Model IDM that explicitly models stochasticity in the behavior of individual drivers."

    • This rule-based method is online, as opposed for instance to the IDM with parameters obtained by offline estimation, using non-linear least squares.
    • Particle filtering is used for the recursive Bayesian estimation.
    • The derived parameter estimates are then used for forward motion prediction.
  • About the estimated parameters (per observed vehicle):
    • 1- The desired velocity (v-des).
    • 2- The driver-dependent stochasticity on acceleration (σ-IDM).
    • They are assumed stationary for each driver, i.e., human drivers do not change their latent driving behaviour over the time horizons.
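
A minimal sketch of the stochastic IDM, with the two per-driver parameters above (v-des and σ-IDM) exposed explicitly; the remaining IDM constants use illustrative textbook values:

```python
import numpy as np

def idm_accel(v, v_des, gap, dv, a_max=1.5, b=2.0, s0=2.0, T=1.5, delta=4.0):
    """Deterministic IDM: v is the follower speed, gap the bumper-to-bumper
    distance to the leader and dv = v - v_leader the approach rate."""
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    return a_max * (1.0 - (v / v_des) ** delta - (s_star / gap) ** 2)

def stochastic_idm_accel(v, v_des, gap, dv, sigma_idm, rng=None):
    """Stochastic extension: sample the acceleration around the IDM output with
    a driver-dependent standard deviation sigma_idm."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(idm_accel(v, v_des, gap, dv), sigma_idm)
```
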
  • About the datasets:
    • NGSIM for US Highway 101 at 10 Hz.
    • Highway Drone Dataset (HighD) at 25 Hz.
    • RMSE of the position and velocity are used to measure “closeness” of a predicted trajectory to the corresponding ground-truth trajectory.
    • Undesirable events, e.g. collision, going off-the-road, hard braking, that occur in each scene prediction are also considered.
  • How to deal with the "particle deprivation problem"?:
    • Particle deprivation = particles converge to one region of the state space and there is no exploration of other regions.
    • Dithering method = external noise is added to aid exploration of state space regions.
    • From (Schön, Gustafsson, & Karlsson, 2009) in "The Particle Filter in Practice":
      • "Both the process noise and measurement noise distributions need some dithering (increased covariance). Dithering the process noise is a well-known method to mitigate the sample impoverishment problem. Dithering the measurement noise is a good way to mitigate the effects of outliers and to robustify the PF in general".

    • Here:
      • "We implement dithering by adding random noise to the top 20% particles ranked according to the corresponding likelihood. The noise is sampled from a discrete uniform distribution with v-des {−0.5, 0, 0.5} and σ-IDM {−0.1, 0, 0.1}. (This preserves the discretization present in the initial sampling of particles).

  • Future works:
    • Non-stationarity.
    • Combination with a lane changing model such as MOBIL to extend to two-dimensional driving behaviour.

"PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving"

  • [ 2020 ] [📝] [[🎞️](TO COME)] [ 🚗 Valeo ]

  • [ Gaussian mixture, multi-trajectory prediction, nuScenes, A2D2, auxiliary loss ]

Click to expand
Source.
The architecture has two main sections: an encoder to synthesize information and the predictor where we exploit it. Note that PLOP does not use the classic RNN decoder scheme for trajectory generation, preferring a single step version which predicts the coefficients of a polynomial function instead of the consecutive points. Also note the navigation command that conditions the ego prediction. Source.
Source.
PLOP uses multimodal sensor data input: Lidar and camera. The map is accumulated over the past 2s, so 20 frames. It produces a multivariate Gaussian mixture for a fixed number of K possible trajectories over a 4s horizon. Uncertainty and variability are handled by predicting vehicle trajectories as probabilistic Gaussian mixture models, constrained by a polynomial formulation. Source.

Authors: Buhet, T., Wirbel, E., & Perrotton, X.

  • Motivations:

    • The goal is to predict multiple feasible future trajectories, both for the ego vehicle and its neighbours, through a probabilistic framework.
      • In addition, in an end-to-end trainable fashion.
    • It builds on a previous work: "Conditional vehicle trajectories prediction in carla urban environment" - (Buhet, Wirbel, & Perrotton, 2019). See analysis further below.
      • The trajectory prediction based on polynomial representation is upgraded from deterministic output to multimodal probabilistic output.
      • It re-uses the navigation command input for the conditional part of the network, e.g. follow, left, straight, right.
      • One main difference is the introduction of a new input sensor: Lidar.
      • And adding a semantic segmentation auxiliary loss.
    • The authors also reflect on which metrics are relevant for trajectory prediction:
      • "We suggest to use two additional criteria to evaluate the predictions errors, one based on the most confident prediction, and one weighted by the confidence [how alternative trajectories with non maximum weights compare to the most confident trajectory]."

  • One term: "Probabilistic poLynomial Objects trajectory Planning" = PLOP.

  • I especially like their review on related works about data-driven predictions (section taken from the paper):

    • SocialLSTM: encodes the relations between close agents introducing a social pooling layer.
    • Deterministic approaches derived from SocialLSTM:
      • SEQ2SEQ presents a new LSTM-based encoder-decoder network to predict trajectories into an occupancy grid map.
      • SocialGAN and SoPhie use generative adversarial networks to tackle uncertainty in future paths and augment the original set of samples.
        • CS-LSTM extends SocialLSTM using convolutional layers to encode the relations between the different agents.
      • ChauffeurNet uses a sophisticated neural network with a complex high level scene representation (roadmap, traffic lights, speed limit, route, dynamic bounding boxes, etc.) for deterministic ego vehicle trajectory prediction.
    • Other works use a graph representation of the interactions between the agents in combination with neural networks for trajectory planning.
    • Probabilistic approaches:
      • Many works like PRECOG, R2P2, Multiple Futures Prediction, SocialGAN include probabilistic estimation by adding a probabilistic framework at the end of their architecture producing multiple trajectories for ego vehicle, nearby vehicles or both.
      • In PRECOG, Rhinehart et al. build a probabilistic model that explicitly models interactions between agents, using latent variables to model the plausible reactions of agents to each other, with a possibility to pre-condition the trajectory of the ego vehicle by a goal.
      • MultiPath also reuses an idea from object detection algorithms using trajectory anchors extracted from the training data for ego vehicle prediction.
  • About the auxiliary semantic segmentation task.

    • Teaching the network to represent such semantic in its features improves the prediction.
    • "Our objective here is to make sure that in the RGB image encoding, there is information about the road position and availability, the applicability of the traffic rules (traffic sign/signal), the vulnerable road users (pedestrians, cyclists, etc.) position, etc. This information is useful for trajectory planning and brings some explainability to our model."

  • About interactions with other vehicles.

    • The prediction for each vehicle does not have direct access to the sequence of history positions of others.
    • "The encoding of the interaction between vehicles is implicitly computed by the birdview encoding."

    • The number of predicted trajectories is fixed in the network architecture. K=12 is chosen.
      • "It allows our architecture to be agnostic to the number of considered neighbors."

  • Multi-trajectory prediction in a probabilistic framework.

    • "We want to predict a fixed number K of possible trajectories for each vehicle, and associate them to a probability distribution over x and y: x is the longitudinal axis, y the lateral axis, pointing left."

    • About the Gaussian Mixture.
      • Vehicle trajectories are predicted as probabilistic Gaussian Mixture models, constrained by a polynomial formulation: The mean of the distribution is expressed using a polynomial of degree 4 of time.
      • "In the end, this representation can be interpreted as predicting K trajectories, each associated with a confidence πk [mixture weights shared for all sampled points belonging to the same trajectory], with sampled points following a Gaussian distribution centered on (µk,x,t, µk,y,t) and with standard deviation (σk,x,t, σk,y,t)."

      • "PLOP does not use the classic RNN decoder scheme for trajectory generation, preferring a single step version which predicts the coefficients of a polynomial function instead of the consecutive points."

      • This offers a measure of uncertainty on the predictions.
      • For the ego car, the probability distribution is conditioned by the navigation command.
    • About the loss:
      • Negative log-likelihood over all sampled points of the ground-truth ego and neighbour vehicle trajectories.
      • There is also the auxiliary cross entropy loss for segmentation.
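
A minimal sketch of this mixture negative log-likelihood with degree-4 polynomial means; the tensor shapes, coefficient layout and diagonal covariance are assumptions of the sketch:

```python
import numpy as np

def gmm_poly_nll(coeffs, sigmas, log_pi, gt_xy, t):
    """coeffs: (K, 2, 5) degree-4 polynomial coefficients for x and y of each of
    the K modes, sigmas: (K, T, 2) per-point standard deviations, log_pi: (K,)
    log mixture weights, gt_xy: (T, 2) ground-truth points, t: (T,) sample times."""
    powers = t[None, :] ** np.arange(5)[:, None]              # (5, T): 1, t, ..., t^4
    means = np.einsum('kdp,pt->ktd', coeffs, powers)          # (K, T, 2) polynomial means
    # per-mode log-likelihood of the ground truth under a diagonal Gaussian
    log_gauss = -0.5 * (((gt_xy[None] - means) / sigmas) ** 2
                        + 2.0 * np.log(sigmas) + np.log(2.0 * np.pi)).sum(axis=(1, 2))
    joint = log_pi + log_gauss                                # (K,)
    m = joint.max()                                           # log-sum-exp over modes
    return -(m + np.log(np.exp(joint - m).sum()))
```
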
  • Some findings:

    • The presented model seems very robust to the varying number of neighbours.
      • Finally, for 5 agents or more, PLOP outperforms both ESP and PRECOG by a large margin, on the authors' own metrics.
      • "This result might be explained by our interaction encoding which is robust to the variations of N using only multiple birdview projections and our non-iterative single step trajectory generation."

    • "Using K = 1 approach yields very poor results, also visible in the training loss. It was an anticipated outcome due to the ambiguity of human behavior."


"Probabilistic Future Prediction for Video Scene Understanding"

  • [ 2020 ] [📝] [🎞️] [🎞️ (blog)] [ 🎓 University of Cambridge ] [ 🚗 Wayve ]

  • [ multi frame, multi future, auxiliary learning, multi-task, conditional imitation learning ]

Click to expand
Source.
One main motivation is to supply the Control module (e.g. a policy learnt via IL) with a representation capable of modelling the probability of future events. The Dynamics module produces such a spatio-temporal representation, not directly from images but from learnt scene features. These embeddings, which are used by the Control module to learn the driving policy, can be explicitly decoded to future semantic segmentation, depth, and optical flow. Note that the stochasticity of the future is modelled with a conditional variational approach that minimises the divergence between the present distribution (what could happen given what we have seen) and the future distribution (what we observe actually happens). During inference, diverse futures are generated by sampling from the present distribution. Source.
Source.
There are many possible futures approaching this four-way intersection. Using 3 different noise vectors makes the model imagine different driving manoeuvres at an intersection: driving straight, turning left or turning right. These samples predict 10 frames, or 2 seconds into the future. Source.
Source.
The differential entropy of the present distribution, characterizing how unsure the model is about the future, is used. As we approach the intersection, it increases. Source.

Authors: Hu, A., Cotter, F., Mohan, N., Gurau, C., & Kendall, A.

  • Motivations:

    • 1- Supply the control module (e.g. IL) with an appropriate representation for interaction-aware and uncertainty-aware decision-making, i.e. one capable of modelling probability of future events.
      • Therefore the policy should receive temporal features explicitly trained to predict the future.
        • Motivation for that: It is difficult to learn an effective temporal representation by only using imitation error as a learning signal.
    • Others:
    • 2- "multi frame and multi future" prediction.
      • Perform prediction:
        • ... based on multiple past frames (i.e. not a single one).
        • ... and producing multiple possible outcomes (i.e. not deterministic).
          • Predict the stochasticity of the future, i.e. contemplate multiple possible outcomes and estimate the multi-modal uncertainty.
    • 3- Offer a differentiable / end-to-end trainable system, as opposed to system that reason over hand-coded representations.
      • I understand it as considering the loss of the IL part into the layers that create the latent representation.
    • 4- Cope with multi-agent interaction situations such as traffic merging, i.e. do not predict the behaviour of each actor in the scene independently.
      • For instance by jointly predicting ego-motion and motion of other dynamic agents.
    • 5- Do not rely on any HD-map to predict the static scene, to stay resilient to HD-map errors due to e.g. roadworks.
  • auxiliary learning: The loss used to train the latent representation is composed of three terms (c.f. motivation 3-):

    • future-prediction: weighted sum of future segmentation, depth and optical flow losses.
    • probabilistic: KL-divergence between the present and the future distributions.
    • control: regression for future time-steps up to some Future control horizon.
  • Let's explore some ideas behinds these three components.

  • 1- Temporal video encoding: How to build a temporal and visual representation?

    • What should be predicted?

      • "Previous work on probabilistic future prediction focused on trajectory forecasting [DESIRE, Lee et al. 2017, Bhattacharyya et al. 2018, PRECOG, Rhinehart et al. 2019] or were restricted to single-frame image generation and low resolution (64x64) datasets that are either simulated (Moving MNIST) or with static scenes and limited dynamics."

      • "Directly predicting in the high-dimensional space of image pixels is unnecessary, as some details about the appearance of the world are irrelevant for planning and control."

      • Instead, the task is to predict a more complete scene representation with segmentation, depth, and flow, two seconds in the future.
    • What should the temporal module process?

      • The temporal model should learn the spatio-temporal features from perception encodings [as opposed to RGB images].
      • These encodings are "scene features" extracted from images by a Perception module. They constitute a more powerful and compact representation compared to RGB images.
    • How does the temporal module look like?

      • "We propose a new spatio-temporal architecture that can learn hierarchically more complex features with a novel 3D convolutional structure incorporating both local and global space and time context."

    • The authors introduce a so-called Temporal Block module for temporal video encoding.

      • These Temporal Block should help to learn hierarchically more complex temporal features. With two main ideas:
      • 1- Decompose the convolutional filters and play with all possible configuration.
        • "Learning 3D filters is hard. Decomposing into two subtasks helps the network learn more efficient."

        • "State-of-the-art works decompose 3D filters into spatial and temporal convolutions. The model we propose further breaks down convolutions into many space-time combinations and context aggregation modules, stacking them together in a more complex hierarchical representation."

      • 2- Incorporate the "global context" in the features (I did not fully understand that).
        • They concatenate some local features based on 1x1x1 compression with some global features extracted with average pooling.
        • "By pooling the features spatially and temporally at different scales, each individual feature map also has information about the global scene context, which helps in ambiguous situations."

  • 2- Probabilistic prediction: how to generate multiple futures?

    • "There are various reasons why modelling the future is incredibly difficult: natural-scene data is rich in details, most of which are irrelevant for the driving task, dynamic agents have complex temporal dynamics, often controlled by unobservable variables, and the future is inherently uncertain, as multiple futures might arise from a unique and deterministic past."

    • The idea is that the uncertainty of the future can be estimated by making the prediction probabilistic.
      • "From a unique past in the real-world, many futures are possible, but in reality we only observe one future. Consequently, modelling multi-modal futures from deterministic video training data is extremely challenging."

      • Another challenge when trying to learn a multi-modal prediction model:
        • "If the network predicts a plausible future, but one that did not match the given training sequence, it will be heavily penalised."

      • "Our work addresses this by encoding the future state into a low-dimensional future distribution. We then allow the model to have a privileged view of the future through the future distribution at training time. As we cannot use the future at test time, we train a present distribution (using only the current state) to match the future distribution through a KL-divergence loss. We can then sample from the present distribution during inference, when we do not have access to the future."

    • To put it another way, two probability distributions are modelled, in a conditional variational approach (see the sketch below):
      • A present distribution P, that represents all what could happen given the past context.
      • A future distribution F, that represents what actually happened in that particular observation.
    • [Learning to align the present distribution with the future distribution] "As the future is multimodal, different futures might arise from a unique past context zt. Each of these futures will be captured by the future distribution F that will pull the present distribution P towards it."

    • How to evaluate predictions?
      • "Our probabilistic model should be accurate, that is to say at least one of the generated future should match the ground truth future. It should also be diverse".

      • The authors use a diversity distance metric (DDM), which measures both accuracy and diversity of the distribution.
    • How to quantify uncertainty?
      • The framework can automatically infer which scenes are unusual or unexpected and where the model is uncertain of the future, by computing the differential entropy of the present distribution.
      • This is useful for understanding edge-cases and when the model needs to "pay more attention".
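
A minimal sketch, assuming both distributions are diagonal Gaussians (the direction of the KL term is also an assumption), of the probabilistic loss and of the differential entropy used as an uncertainty signal:

```python
import numpy as np

def kl_future_to_present(mu_f, sig_f, mu_p, sig_p):
    """KL( F || P ): the future distribution F (privileged, training only) pulls
    the present distribution P (computed from past frames only) towards it."""
    return 0.5 * np.sum(np.log(sig_p ** 2 / sig_f ** 2)
                        + (sig_f ** 2 + (mu_f - mu_p) ** 2) / sig_p ** 2 - 1.0)

def present_entropy(sig_p):
    """Differential entropy of the (diagonal Gaussian) present distribution: it
    grows when many futures are plausible, e.g. when approaching an intersection."""
    return 0.5 * len(sig_p) * (1.0 + np.log(2.0 * np.pi)) + np.sum(np.log(sig_p))

# At test time, diverse futures are obtained by sampling noise from P:
# z = mu_p + sig_p * np.random.default_rng().standard_normal(mu_p.shape)
```
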
  • 3- The rich spatio-temporal features explicitly trained to predict the future are used to learn a driving policy.

    • Conditional Imitation Learning is used to learn speed and steering controls, i.e. regressing to the expert's true control actions {v, θ}.

      • One reason is that it is immediately transferable to the real world.
    • From the ablation study, it seems to highly benefit from both:

      • 1- The temporal features.

        "It is too difficult to forecast how the future is going to evolve with a single image".

      • 2- The fact that these features are capable of probabilistic predictions.
        • Especially for multi-agent interaction scenarios.
    • About the training set:

      • "We address the inherent dataset bias by sampling data uniformly across lateral and longitudinal dimensions. First, the data is split into a histogram of bins by steering, and subsequently by speed. We found that weighting each data point proportionally to the width of the bin it belongs to avoids the need for alternative approaches such as data augmentation."

  • One exciting future direction:

    • For the moment, the control module takes the representation learned from the dynamics model and ignores the predictions themselves.
      • By the way, why are the predictions, especially for the ego trajectories, not conditioned on possible actions?
    • It could use these probabilistic embedding capable of predicting multi-modal and plausible futures to generate imagined experience to train a policy in a model-based RL.
    • The design of the reward function from the latent space looks challenging at first sight.

"Efficient Behavior-aware Control of Automated Vehicles at Crosswalks using Minimal Information Pedestrian Prediction Model"

  • [ 2020 ] [📝] [ 🎓 University of Michigan, University of Massachusetts ]

  • [ interaction-aware decision-making, probabilistic hybrid automaton ]

Click to expand
Source.
The pedestrian crossing behaviour is modelled as a probabilistic hybrid automaton. Source.
Source.
The interaction is captured inside a gap-acceptance model: the pedestrian evaluates the available time gap to cross the street and either accept the gap by starting to cross or reject the gap by waiting at the crosswalk. Source.
Source.
The baseline controller used for comparison is a finite state machine (FSM) with four states. Whenever a pedestrian starts walking to cross the road, the controller always tries to stop, either by yielding or through hard stop. Source.

Authors: Jayaraman, S. K., Robert Jr., L. P., Yang, X. J., Pradhan, A. K., & Tilbury, D. M.

  • Motivations:

    • Scenario: interaction with a pedestrian approaching/crossing/waiting at a crosswalk.
    • 1- A (1.1) simple and (1.2) interaction-aware pedestrian prediction model.
    • 2- Effectively incorporating these predictions in a control framework
      • The idea is to first forecast the position of the pedestrian using a pedestrian model, and then react accordingly.
    • 3- Be efficient on both waiting and approaching pedestrian scenarios.
      • Assuming always a crossing may lead to over-conservative policies.
      • "[in simulation] only a fraction of pedestrians (80%) are randomly assigned the intention to cross the street."

  • Why are CV and CA prediction models not applicable?

    • "At crosswalks, pedestrian behavior is much more unpredictable as they have to wait for an opportunity and decide when to cross."

    • Longer durations are needed.
      • 1- Interaction must be taken into account.
      • 2- The authors decide to model pedestrians as a hybrid automaton that switches between discrete actions.
  • One term: Behavior-aware Model Predictive Controller (B-MPC)

    • 1- The pedestrian crossing behaviour is modelled as a probabilistic hybrid automaton:
      • Four states: Approach Crosswalk, Wait, Cross, Walk away.
      • Probabilistic transitions: using pedestrian's gap acceptance - hence capturing interactions.
        • "What is the probability of accepting the current traffic gap?

        • "Pedestrians evaluate the available time gap to cross the street and either accept the gap by starting to cross or reject the gap by waiting at the crosswalk."

    • 2- The problem is formulated as a constrained quadratic optimization problem:
      • Cost: success (passing the crosswalk), comfort (penalize jerk and sudden changes in acceleration), efficiency (deviation from the reference speed).
      • Constraints: respect motion model, restrict velocity, acceleration, as well as jerk, and ensure collision avoidance.
      • Solver: standard quadratic program solver in MATLAB.
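
A minimal longitudinal sketch of such a constrained QP, written with cvxpy for illustration (the paper solves it with MATLAB's QP solver; the function name, weights, limits and the way the pedestrian prediction enters as a position constraint are illustrative):

```python
import numpy as np
import cvxpy as cp

def b_mpc_longitudinal(s0, v0, v_ref, s_cross, occupied, dt=0.2,
                       a_min=-4.0, a_max=2.0, j_max=2.5):
    """s0, v0: current position/speed, v_ref: reference speed, s_cross: crosswalk
    position, occupied: length-N boolean array from the pedestrian prediction
    model (True when the crosswalk is predicted to be occupied)."""
    N = len(occupied)
    s, v, a = cp.Variable(N + 1), cp.Variable(N + 1), cp.Variable(N)
    # efficiency (track v_ref) + comfort (penalise jerk)
    cost = cp.sum_squares(v - v_ref) + 0.5 * cp.sum_squares(cp.diff(a) / dt)
    cons = [s[0] == s0, v[0] == v0,
            s[1:] == s[:-1] + v[:-1] * dt + 0.5 * a * dt ** 2,   # motion model
            v[1:] == v[:-1] + a * dt,
            v >= 0, a >= a_min, a <= a_max,
            cp.abs(cp.diff(a)) <= j_max * dt]                    # jerk limit
    # collision avoidance: stay before the crosswalk while it is occupied
    cons += [s[k + 1] <= s_cross for k in range(N) if occupied[k]]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return a.value  # apply a[0], then re-plan at the next step
```
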
  • Performances:

    • Baseline controller:
      • Finite state machine (FSM) with four states: Maintain Speed, Accelerate, Yield, and Hard Stop.
      • "Whenever a pedestrian starts walking to cross the road, the controller always tries to stop, either by yielding or through hard stop."

      • "The Boolean variable InCW, denotes the pedestrian’s crossing activity: InCW=1 from the time the pedestrian started moving laterally to cross until they completely crossed the AV lane, and InCW=0 otherwise."

      • That means the baseline controller does not react at all to "non-crossing" cases since it never sees the pedestrian crossing laterally.
    • "It can be seen that the B-MPC is more aggressive, efficient, and comfortable than the baseline as observed through the higher average velocity, lower average acceleration effort, and lower average jerk respectively."


"Rules of the road: Predicting driving behavior with a convolutional model of semantic interactions"

  • [ 2019 ] [📝] [ 🎓 Caltech ] [🚗 zoox]

  • [ multimodal, probabilistic, 1-shot ]

Click to expand
Source.
The idea is to encode a history of world states (both static and dynamic) and semantic map information in a unified, top-down spatial grid. This allows using a deep convolutional architecture to model entity dynamics, entity interactions, and scene context jointly. The authors found that temporal convolutions achieved better performance and significantly faster training than an RNN structure. Source.
Source.
Three 1-shot models are proposed. Top: parametric and continuous. Bottom: non-parametric and discrete (trajectories are here sampled for display from the state probabilities). Source.

Authors: Hong, J., Sapp, B., & Philbin, J.

  • Motivations: A prediction method of future distributions of entity state that is:

    • 1- Probabilistic.
      • "A single most-likely point estimate isn't sufficient for a safety-critical system."

      • "Our perception module also gives us state estimation uncertainty in the form of covariance matrices, and we include this information in our representation via covariance norms."

    • 2- Multimodal.
      • It is important to cover a diversity of possible implicit actions an entity might take (e.g., which way through a junction).
    • 3- One-shot.
      • It should directly predict distributions of future states, rather than a single point estimate at each future timestep.
      • "For efficiency reasons, it is desirable to predict full trajectories (time sequences of state distributions) without iteratively applying a recurrence step."

      • "The problem can be naturally formulated as a sequence-to-sequence generation problem. [...] We chose ℓ=2.5s of past history, and predict up to m=5s in the future."

      • "DESIRE and R2P2 address multimodality, but both do so via 1-step stochastic policies, in contrast to ours which directly predicts a time sequence of multimodal distributions. Such policy-based methods require both future roll-out and sampling to obtain a set of possible trajectories, which has computational trade-offs to our one-shot feed-forward approach."

  • How to model entity interactions?

    • 1- Implicitly: By encoding them as surrounding dynamic context.
    • 2- Explicitly: For instance, SocialLSTM pools hidden temporal state between entity models.
    • Here, all surrounding entities are encoded within a specific tensor.
  • Input:

    • A stack of 2d-top-view-grids. Each frame has 128×128 pixels, corresponding to 50m×50m.
      • For instance, the dynamic context is encoded in a RGB image with unique colours corresponding to each element type.
      • The state history of the considered entity is encoded in a stack of binary maps.
        • One could have used only one channel and played with the colour to represent the history.
    • "Note that through rendering, we lose the true graph structure of the road network, leaving it as a modeling challenge to learn valid road rules like legal traffic direction, and valid paths through a junction."

      • Couldn't it just be encoded in another tensor?
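
A minimal sketch of rasterizing an entity's state history into a stack of binary maps; only the centre cell of the entity is marked here, whereas the real renderer draws oriented bounding boxes:

```python
import numpy as np

def rasterize_history(history_xy, n_frames=5, size=128, extent=50.0):
    """history_xy: list of (x, y) positions in the target-centred frame, most
    recent last. Returns an (n_frames, size, size) stack of binary maps covering
    extent x extent metres."""
    maps = np.zeros((n_frames, size, size), dtype=np.uint8)
    res = extent / size                                  # metres per pixel
    for ch, (x, y) in enumerate(history_xy[-n_frames:]):
        col = int(x / res + size / 2)
        row = int(size / 2 - y / res)                    # image rows grow downwards
        if 0 <= row < size and 0 <= col < size:
            maps[ch, row, col] = 1
    return maps
```
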
  • Output. Three approaches are proposed:

    • 1- Continuous + parametric representations.
      • 1.1. A Gaussian distribution is regressed per future timestep.
      • 1.2. Multi-modal Gaussian Regression (GMM-CVAE).
        • A set of Gaussians is predicted by sampling from a categorical latent variable.
        • If not enhanced, this method is naive and suffers from exchangeability and mode collapse.
        • "In general, our mixture of sampled Gaussian trajectories underperformed our other proposed methods; we observed that some samples were implausible."

    • 2- Discrete + non-parametric representations.
      • Predict occupancy grid maps.
        • A grid is produced for each future modelled timestep.
        • Each grid location holds the probability of the corresponding output state.
        • For comparison, trajectories are extracted via some trajectory sampling procedure.
  • Against non-learnt baselines:

    • "Interestingly, both Linear and Industry baselines performed worse relative to our methods at larger time offsets, but better at smaller offsets. This can be attributed to the fact that predicting near futures can be accurately achieved with classical physics (which both baselines leverage) — more distant future predictions, however, require more challenging semantic understanding."


"Learning Interaction-Aware Probabilistic Driver Behavior Models from Urban Scenarios"

  • [ 2019 ] [📝] [ 🎓 TUM ] [🚗 BMW]

  • [ probabilistic predictions, multi-modality ]

Click to expand
Source.
The network produces an action distribution for the next time-step. The features are a function of one selected driver's route intention (such as turning left or right) and the map. Redundant features can be pruned to reduce the complexity of the model: even with as few as 5 features (framed in blue), it is possible for the network to learn basic behaviour models that achieve lower losses than both baseline recurrent networks. Source.
Source.
Right: At each time step, the confidence in the action changes. Left: How to compute some loss from the predicted variance if the ground-truth is a point-estimate? The predicted distribution can be evaluated at ground-truth, forming a likelihood. The negative-log-likelihood becomes the objective to minimize. The network can output high variance if it is not sure. But a regularization term deters it from being too uncertain. Source.

Authors: Schulz, J., Hubmann, C., Morin, N., Löchner, J., & Burschka, D.

  • Motivations:

    • 1- Learn a driver model that is:
      • Probabilistic. I.e. capture multi-modality and uncertainty in the predicted low-level actions.
      • Interaction-aware. Well, here the actions of surrounding vehicles are ignored, but their states are considered.
      • "Markovian", i.e. that makes 1-step prediction from the current state, assuming independence of previous states / actions.
    • 2- Simplicity + lightweight.
      • This model is intended to be integrated as a probabilistic transition model into sampling-based algorithms, e.g. particle filtering.
      • Applications include:
        • 1- Forward simulation-based interaction-aware planning algorithms, e.g. Monte Carlo tree search.
        • 2- Driver intention estimation and trajectory prediction, here a DBN example.
      • Since many samples are drawn, runtime should be kept low. Nested net structures such as DESIRE are therefore excluded.
    • Ingredients:
      • Feedforward net predicting steering and acceleration distributions.
      • Enable multi-modality by building one input vector, and making one prediction, per possible route.
  • About the model:

    • Input: a set of features built from:
      • One route intention. For instance, the distances of both agents to entry and exit of the related conflict areas are computed.
      • The map.
      • The kinematic state (pos, heading, vel) of the 2 closest agents.
    • Output: steering and acceleration distributions, modelled as Gaussian: mean and std are estimated (cov = 0).
      • Not the next state!!
        • "Instead of directly learning a state transition model, we restrict the neural network to learn a 2-dimensional action distribution comprising acceleration and steering angle."

      • Practical implication when building the dataset from real data: the actions of observed vehicles are unknown, but inferred using an inverse bicycle model.
    • Using the model at run time:
      • 1- Sample the possible routes.
      • 2- For each route:
        • Start with one state.
        • Get one action distribution. Note that the uncertainty can change at each step.
        • Sample (acc, steer) from this distribution.
        • Move to next state.
        • Repeat.
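
A minimal sketch of this run-time forward simulation; the network interface and the kinematic bicycle model used for the state update are assumptions of the sketch:

```python
import numpy as np

def rollout(action_net, state, route, horizon=30, dt=0.2, wheelbase=2.7, rng=None):
    """action_net(state, route) is assumed to return the means and stds of the
    2-d (acceleration, steering angle) Gaussian for the current step.
    state = (x, y, yaw, v)."""
    rng = np.random.default_rng() if rng is None else rng
    x, y, yaw, v = state
    traj = [(x, y)]
    for _ in range(horizon):
        mu, std = action_net((x, y, yaw, v), route)
        acc, steer = rng.normal(mu, std)                 # sample one (acc, steer) action
        x += v * np.cos(yaw) * dt                        # kinematic bicycle update
        y += v * np.sin(yaw) * dt
        yaw += v / wheelbase * np.tan(steer) * dt
        v = max(0.0, v + acc * dt)
        traj.append((x, y))
    return np.array(traj)
```
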
  • Issue with the accumulation of 1-step to form long-term predictions:

    • As in vanilla imitation learning, it suffers from distribution shift resulting from the accumulating errors.
    • "If this error is too high, the features determined during forward simulation are not represented within the training data anymore."

    • A DAgger-like solution could be considered.
  • About conditioning on a driver's route intention:

    • Without conditioning, one would have to pack all the road information into the input (how many routes to describe?) and expect multiple trajectories to be produced (how many output heads?). Tricky.
    • Conditioning offers two advantages:
      • "The learning algorithm does not have to cope with the multi-modality induced by different route options. The varying number of possible routes (depending on the road topology) is handled outside of the neural network."

      • It also allows to define (and detail) relevant features along the considered path: upcoming road curvature or longitudinal distances to stop lines.
    • Limit to this approach (again related to the "off-distribution" issue):
      • "When enumerating all possible routes and running a forward simulation for each of the conditioned models, there might exist route candidates that are so unlikely that they have never been followed in the training data. Thus their features may result in unreasonable actions during inference, as the network only learns what actions are reasonable given a route, but not which routes are reasonable given a situation."


"Learning Predictive Models From Observation and Interaction"

  • [ 2019 ] [📝] [🎞️] [ 🎓 University of Pennsylvania, Stanford University, UC Berkeley ] [ 🚗 Honda ]

  • [ visual prediction, domain transfer, nuScenes, BDD100K ]

Click to expand
The idea is to learn a latent representation z that corresponds to the true action. The model can then perform joint training on the two kinds of data: it optimizes the likelihood of the interaction data, for which the actions are available, and observation data, for which the actions are missing. Hence the visual predictive model can predict the next frame xt+1 conditioned on the current frame xt and the learnt action representation zt. Source.
The visual prediction model is trained using two driving sets: action-conditioned videos from Boston and action-free videos from Singapore. Frames from both subsets come from the BDD100K and nuScenes datasets. Source.

Authors: Schmeckpeper, K., Xie, A., Rybkin, O., Tian, S., Daniilidis, K., Levine, S., & Finn, C.

  • On concrete industrial use-case:
    • "Imagine that a self-driving car company has data from a fleet of cars with sensors that record both video and the driver’s actions in one city, and a second fleet of cars that only record dashboard video, without actions, in a second city."

    • "If the goal is to train an action-conditioned model that can be utilized to predict the outcomes of steering actions, our method allows us to train such a model using data from both cities, even though only one of them has actions."

  • Motivations (mainly for robotics, but also AD):
    • Generate predictions for complex tasks and new environments, without costly expert demonstrations.
    • More precisely, learn an action-conditioned video predictive model from two kinds of data:
      • 1- passive observations: [x0, x1, ..., xN].
        • Videos of another agent, e.g. a human, might show the robot how to use a tool.
        • Observations represent a powerful source of information about the world and how actions lead to outcomes.
        • A learnt model could also be used for planning and control, i.e. to plan coordinated sequences of actions to bring about desired outcomes.
        • But may suffer from large domain shifts.
      • 2- active interactions: [x0, a1, x1, ..., aN, xN].
        • Usually more expensive.
  • Two challenges:
    • 1- Observations are not annotated with suitable actions: e.g. only access to the dashcam video, not to the throttle.
      • In other words, actions are only observed in a subset of the data.
      • The goal is to learn from videos without actions, allowing it to leverage videos of agents for which the actions are unknown (unsupervised manner).
    • 2- Shift in the "embodiment" of the agent: e.g. robots' arms and humans' ones have physical differences.
      • The goal is to bridge the gap between the two domains (e.g., human arms vs. robot arms).
  • What is learnt?
    • p(x_{c+1:T} | x_{1:c}, a_{1:T})
    • I.e. prediction of future frames conditioned on a set of c context frames and sequence of actions.
  • What tests?
    • 1- Different environment within the same underlying dataset: driving in Boston and Singapore.
    • 2- Same environment but different embodiment: humans and robots manipulate objects with different arms.
  • What is assessed?
    • 1- Prediction quality (AD test).
    • 2- Control performance (robotics test).

"Deep Learning-based Vehicle Behaviour Prediction For Autonomous Driving Applications: A Review"

  • [ 2019 ] [📝] [ 🎓 University of Warwick ] [ 🚗 Jaguar Land Rover ]

  • [ multi-modality prediction ]

Click to expand
The authors propose a new classification of behavioural prediction methods. Only deep learning approaches are considered and physics-based approaches are excluded. The criteria are about the input, the output and the deep learning method. Source.
The first criterion is about the input: what is the prediction based on? It is important to capture road structure and interactions while staying flexible in the representation (e.g. describe different types of intersections and work with varying numbers of target vehicles and surrounding vehicles). Partial observability should be considered by design. Source.
The second criterion is about the output: what is predicted? It is important to propagate the uncertainty from the input and consider multiple options (multi-modality), and therefore to reason with probabilities. Bottom: why multi-modality is important. Source.

Authors: Mozaffari, S., Al-Jarrah, O. Y., Dianati, M., Jennings, P., & Mouzakitis, A.

  • One mentioned review: (Lefèvre et al.) classifies vehicle (behaviour) prediction models into three groups:
    • 1- physics-based
      • Use dynamic or kinematic models of vehicles, e.g. a constant velocity (CV) Kalman Filter model.
    • 2- manoeuvre-based
      • Predict vehicles' manoeuvres, i.e. a classification problem from a defined set.
    • 3- interaction-aware
      • Consider interaction of vehicles in the input.
  • About the terminology:
    • "Target Vehicles" (TV) are vehicles whose behaviour we are interested in predicting.
    • The other are "Surrounding Vehicles" (SV).
    • The "Ego Vehicle" (EV) can be also considered as an SV, if it is close enough to TVs.
  • Here, the authors ignore the physics-based methods and propose three criteria for comparison:
    • 1- Input.
      • Track history of TV only.
      • Track history of TV and SVs.
      • Simplified bird’s eye view.
      • Raw sensor data.
    • 2- Output.
      • Intention class: From a set of pre-defined discrete classes, e.g. go straight, turn left, and turn right.
      • Unimodal trajectory: Usually the one with the highest likelihood, or the average.
      • Intention-based trajectory: Predict the trajectory that corresponds to the most probable intention (first case).
      • Multimodal trajectory: Combine the previous ones. Two options, depending on whether the intention set is fixed or dynamically learnt:
        • static intention set: predict one trajectory for each member of the set (an extension of intention-based trajectory prediction approaches).
        • dynamic intention set: due to the dynamic definition of manoeuvres, these approaches are prone to converging to a single manoeuvre or failing to explore all existing manoeuvres.
    • 3- In-between (deep learning method).
      • RNNs are used because of their temporal feature-extracting power.
      • CNNs are used for their spatial feature-extracting ability (especially with bird’s eye views).
  • Important considerations for behavioural prediction:
    • Traffic rules.
    • Road geometry.
    • Multimodality: there may exist more than one possible future behaviour.
    • Interaction.
    • Uncertainty: both aleatoric (measurement noise) and epistemic (partial observability). Hence the prediction should be probabilistic.
    • Prediction horizon: approaches can serve different purposes based on how far in the future they predict (short-term or long-term future motion).
  • Two methods I would like to learn more about:
    • social pooling layers, e.g. used by (Deo & Trivedi, 2019) (see the toy sketch at the end of this entry):
      • "A social tensor is a spatial grid around the target vehicle that the occupied cells are filled with the processed temporal data (e.g., LSTM hidden state value) of the corresponding vehicle. It contains both the temporal dynamic of vehicles represented and spatial inter-dependencies among them."

    • graph neural networks, e.g. (Diehl et al., 2019) or (Li et al., 2019):
      • Graph Convolutional Network (GCN).
      • Graph Attention Network (GAT).
  • Comments:
    • Contrary to the object detection task, there is no benchmark for systematically evaluating previous studies on vehicle behaviour prediction.
      • Urban scenarios are excluded in the comparison since NGSIM I-80 and US-101 highway driving datasets are used.
      • Maybe the INTERACTION Dataset could be used.
    • The authors suggest embedding domain knowledge in the prediction, and call for practical considerations (industry-supported research).
      • "Factors such as environment conditions and set of traffic rules are not directly inputted to the prediction model."

      • "Practical limitations such as sensor impairments and limited computational resources have not been fully taken into account."
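To make the social pooling quote above more concrete, here is a minimal NumPy sketch of a social tensor: a spatial grid centred on the target vehicle whose occupied cells hold the encoded temporal state of the corresponding surrounding vehicle. The grid size, cell size and 16-dimensional hidden states are arbitrary illustrative choices, not values from the cited papers.

```python
import numpy as np

def build_social_tensor(tv_pos, sv_positions, sv_hidden_states,
                        grid_size=(13, 3), cell_size=(4.0, 3.5)):
    """Toy sketch of a social tensor (in the spirit of Deo & Trivedi).

    A spatial grid is centred on the target vehicle (TV); each cell that is
    occupied by a surrounding vehicle (SV) is filled with that SV's encoded
    temporal state (e.g. an LSTM hidden vector). All shapes are illustrative.
    """
    rows, cols = grid_size
    hidden_dim = sv_hidden_states.shape[1]
    tensor = np.zeros((rows, cols, hidden_dim))

    for pos, h in zip(sv_positions, sv_hidden_states):
        dx, dy = pos[0] - tv_pos[0], pos[1] - tv_pos[1]   # longitudinal / lateral offset to the TV
        row = int(dx / cell_size[0]) + rows // 2
        col = int(dy / cell_size[1]) + cols // 2
        if 0 <= row < rows and 0 <= col < cols:
            tensor[row, col] = h                          # occupied cell receives the SV encoding
    return tensor

# Toy usage: 3 surrounding vehicles, 16-dim hidden states (both made up).
tv_pos = np.array([0.0, 0.0])
sv_positions = np.array([[10.0, 3.5], [-8.0, 0.0], [20.0, -3.5]])
sv_hidden = np.random.randn(3, 16)
social_tensor = build_social_tensor(tv_pos, sv_positions, sv_hidden)
print(social_tensor.shape)  # (13, 3, 16)
```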


"Multi-Modal Simultaneous Forecasting of Vehicle Position Sequences using Social Attention"

  • [ 2019 ] [📝] [ 🎓 Ecole CentraleSupelec ] [ 🚗 Renault ]

  • [ multi-modality prediction, attention mechanism ]

Click to expand
Two multi-head attention layers are used to account for social interactions between all vehicles. They are combined with LSTM layers to offer joint, long-range and multi-modal forecasts. Source.
Source.

Authors: Mercat, J., Gilles, T., Zoghby, N. El, Sandou, G., Beauvois, D., & Gil, G. P.

  • Previous work: "Social Attention for Autonomous Decision-Making in Dense Traffic" by (Leurent, & Mercat, 2019), detailed on this page as well.
  • Motivations:
    • 1- joint - Considering interactions between all vehicles.
    • 2- flexible - Independent of the number/order of vehicles.
    • 3- multi-modal - Considering uncertainty.
    • 4- long-horizon - Predicting over a long range. Here 5s on simple highway scenarios.
    • 5- interpretable - E.g. using the social attention coefficients.
    • 6- long distance interdependencies - The authors decide to exclude the spatial grid representations that "limit the zone of interest to a predefined fixed size and the spatial relation precision to the grid cell size".
  • Main idea: Stack LSTM layers with social multi-head attention layers.
    • More precisely, the model is broken into four parts:
      • 1- An Encoder processes the sequences of all vehicle positions (no information about speed, orientation, size or blinker).
      • 2- A Self-attention layer captures interactions between all vehicles using "dot product attention". It has multiple heads, each specializing in different interaction patterns, e.g. "closest front vehicle in any lane" (see the sketch at the end of this entry).
      • 3- A Predictor, using LSTM cells, forecasts the positions.
      • A second multi-head self-attention layer is placed here.
      • 4- A final Decoder produces sequences of Gaussian mixtures for each vehicle.
        • "What is forecast is not a mixture of trajectory density functions but a sequence of position mixture density functions. There is a dependency between forecasts at time tk and at time tk+1 but no explicit link between the modes at those times."

  • Two quotes about multi-modality prediction:
    • "When considering multiple modes, there is a challenging trade-off to find between anticipating a wide diversity of modes and focusing on realistic ones".

    • "VAE and GANs are only able to generate an output distribution with sampling and do not express a PDF".

  • Baselines used to compare the presented "Social Attention Multi-Modal Prediction" approach:
    • Constant velocity (CV), that uses Kalman filters (hence single modality).
    • Convolutional Social Pooling (CSP), that uses convolutional social pooling on a coarse spatial grid. Six mixture components are used.
    • Graph-based Interaction-aware Trajectory Prediction (GRIP), that uses a spatial and temporal graph representation of the scene.
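To illustrate the LSTM-encoder plus multi-head self-attention combination described above, here is a minimal PyTorch sketch. It is not the authors' exact architecture; the dimensions, the single attention layer and the use of the final LSTM hidden state as a per-vehicle token are all simplifying assumptions.

```python
import torch
import torch.nn as nn

# Dimensions are illustrative placeholders.
n_vehicles, d_model, n_heads, horizon = 6, 64, 4, 10

encoder = nn.LSTM(input_size=2, hidden_size=d_model, batch_first=True)
social_attention = nn.MultiheadAttention(embed_dim=d_model, num_heads=n_heads,
                                         batch_first=True)

# One (x, y) position history per vehicle: (n_vehicles, time, 2).
histories = torch.randn(n_vehicles, horizon, 2)

# Encode each vehicle independently; keep the final hidden state as its token.
_, (h_n, _) = encoder(histories)          # h_n: (1, n_vehicles, d_model)
tokens = h_n                              # treat dim 0 as batch, dim 1 as the set of vehicles

# Each head can specialize on a different interaction pattern
# (e.g. "closest front vehicle in any lane").
context, attn_weights = social_attention(tokens, tokens, tokens)
print(context.shape)       # torch.Size([1, 6, 64])
print(attn_weights.shape)  # torch.Size([1, 6, 6]) -- interpretable attention coefficients
```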

"MultiPath : Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction"

  • [ 2019 ] [📝] [ 🚗 Waymo ]

  • [ anchor, multi-modality prediction, weighted prediction, mode collapse ]

Click to expand
Source.
A discrete set of intents is modelled as a set of K=3 anchor trajectories. Uncertainty is assumed to be unimodal given intent (here 3 intents are considered) while control uncertainty is modelled with a Gaussian distribution dependent on each waypoint state of an anchor trajectory. Such an example shows that modelling multiple intents is important. Source.

Authors: Chai, Y., Sapp, B., Bansal, M., & Anguelov, D.

  • One idea: "Anchor Trajectories".
    • "Anchor" is a common idea in ML. Concrete applications of "anchor" methods for AD include Faster-RCNN and YOLO for object detections.
      • Instead of directly predicting the size of a bounding box, the NN predicts offsets from a predetermined set of boxes with particular height-width ratios. That predetermined set of boxes constitutes the anchor boxes. (explanation from this page).
    • One could therefore draw a parallel between the sizes of bounding boxes in Yolo and the shape of trajectories: they could be approximated with some static predetermined patterns and refined to the current context (the actual task of the NN here).
      • "After doing some clustering studies on ground truth labels, it turns out that most bounding boxes have certain height-width ratios." [explanation about Yolo from this page]

      • "Our trajectory anchors are modes found in our training data in state-sequence space via unsupervised learning. These anchors provide templates for coarse-granularity futures for an agent and might correspond to semantic concepts like change lanes, or slow down." [from the presented paper]

    • This idea also reminds me of the concept of pre-defined templates used for path planning.
  • One motivation: model multiple intents.
    • This contrasts with the numerous approaches which predict one single most-likely trajectory per agent, usually via supervised regression.
    • The multi-modality is important since prediction is inherently stochastic.
      • The authors distinguish between intent uncertainty and control uncertainty (conditioned on intent).
    • A Gaussian Mixture Model (GMM) distribution is used to model both types of uncertainty (a toy sketch of such an output head is given at the end of this entry).
      • "At inference, our model predicts a discrete distribution over the anchors and, for each anchor, regresses offsets from anchor waypoints along with uncertainties, yielding a Gaussian mixture at each time step."

  • One risk when working with multi-modality: directly learning a mixture suffers from issues of "mode collapse".
    • This issue is common in GAN where the generator starts producing limited varieties of samples.
    • The solution implemented here is to estimate the anchors a priori before fixing them to learn the rest of our parameters (as for Faster-RCNN and Yolo for instance).
  • Second motivation: weight the several trajectory predictions.
    • This contrasts with methods that randomly sample from a generative model (e.g. CVAE and GAN), leading to an unweighted set of trajectory samples (not to mention the problem of reproducibility and analysis).
    • Here, a parametric probability distribution is directly predicted: p(trajectory|observation), together with a compact weighted set of explicit trajectories which summarizes this distribution well.
      • This contrasts with methods that output a probabilistic occupancy grid.
  • About the "top-down" representation, structured in a 3d array:
    • The first 2 dimensions represent spatial locations in the top-down image
    • "The channels in the depth dimension hold static and time-varying (dynamic) content of a fixed number of previous time steps."

      • Static context includes lane connectivity, lane type, stop lines, speed limit.
      • Dynamic context includes traffic light states over the past 5 time-steps.
      • The previous positions of the different dynamic objects are also encoded in some depth channels.
  • One word about the training dataset.
    • The model is trained via imitation learning by fitting the parameters to maximize the log-likelihood of recorded driving trajectories.
    • "The balanced dataset totals 3.85 million examples, contains 5.75 million agent trajectories and constitutes approximately 200 hours of (real-world) driving."
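As an illustration of the anchor plus GMM idea, here is a minimal PyTorch sketch of a MultiPath-style output head. The number of anchors, the horizon, the feature dimension and the random anchors are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

K, T = 3, 12               # K anchor trajectories, T future waypoints (illustrative)
feat_dim = 128

# Anchors would be obtained offline, e.g. by clustering training trajectories
# in state-sequence space; here they are random placeholders.
anchors = torch.randn(K, T, 2)

# Per anchor: 1 logit + (dx, dy, log_sx, log_sy) per waypoint.
head = nn.Linear(feat_dim, K * (1 + T * 4))

scene_feature = torch.randn(1, feat_dim)             # assumed output of the top-down CNN encoder
out = head(scene_feature).view(1, K, 1 + T * 4)

anchor_logits = out[..., 0]                          # discrete distribution over intents
residuals = out[..., 1:].view(1, K, T, 4)
means = anchors + residuals[..., :2]                 # anchor waypoints + regressed offsets
scales = residuals[..., 2:].exp()                    # per-waypoint Gaussian std-devs

probs = anchor_logits.softmax(dim=-1)
print(probs.shape, means.shape, scales.shape)        # (1, 3) (1, 3, 12, 2) (1, 3, 12, 2)
# Together these define a Gaussian mixture over positions at every future time step.
```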


"SafeCritic: Collision-Aware Trajectory Prediction"

  • [ 2019 ] [📝] [ 🎓 University of Amsterdam ] [ 🚗 BMW ]

  • [ Conditional GAN ]

Click to expand
The Generator predicts trajectories that are scored against two criteria: The Discriminator (as in GAN) for accuracy (i.e. consistent with the observed inputs) and the Critic (the generator acts as an Actor) for safety. The random noise vector variable z in the Generator can be sampled from N(0, 1) to sample novel trajectories. Source.
Several features offered by the predictions of SafeCritic: accuracy, diversity, attention and safety. Source.

Authors: van der Heiden, T., Nagaraja, N. S., Weiss, C., & Gavves, E.

  • Main motivation:
    • "We argue that one should take into account safety, when designing a model to predict future trajectories. Our focus is to generate trajectories that are not just accurate but also lead to minimum collisions and thus are safe. Safe trajectories are different from trajectories that try to imitate the ground truth, as the latter may lead to implausible paths, e.g, pedestrians going through walls."

    • Hence the trajectory predictions of the Generator are evaluated against multiple criteria:
      • Accuracy: The Discriminator checks if the prediction is coherent / plausible with the observation.
      • Safety: Some Critic predicts the likelihood of a future dynamic and static collision.
    • A third loss term is introduced (an illustrative combination of the three terms is sketched at the end of this entry):
      • "Training the generator is harder than training the discriminator, leading to slow convergence or even failure."

      • An additional auto-encoding loss to the ground truth is introduced.
      • It should encourage the model to avoid trivial solutions and mode collapse, and should increase the diversity of future generated trajectories.
      • The term mode collapse means that instead of suggesting multiple trajectory candidates (multi-modal), the model restricts its prediction to only one instance.
  • About RL:
    • The authors mention several terms related to RL; in particular, they try to draw a parallel with Inverse RL:
      • "GANs resemble IRL in that the discriminator learns the cost function and the generator represents the policy."

    • I got the gist of that idea, but I honestly did not understand where it was implemented here. In particular, no MDP formulation is given.
  • About attention mechanism:
    • "We rely on attention mechanism for spatial relations in the scene to propose a compact representation for modelling interaction among all agents [...] We employ an attention mechanism to prioritize certain elements in the latent state representations."

    • The grid-like scene representation is shared by both the Generator and the Critic.
  • About the baselines:
    • I like the "related work" section which shortly introduces the state-of-the-art trajectory prediction models based on deep learning. SafeCritic takes inspiration from some of their ideas, such as:
      • Aggregation of past information about multiple agents in a recurrent model.
      • Use of a Conditional GAN to offer the possibility to also generate novel trajectories given an observation via sampling (standard GANs have no encoder).
      • Generation of multi-modal future trajectories.
      • Incorporation of semantic visual features (extracted by deep networks) combined with an attention mechanism.
    • SocialGAN, SocialLSTM, Car-Net, SoPhie and DESIRE are used as baselines.
    • R2P2 and SocialAttention are also mentioned.
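To summarize how the three training signals described above could be combined for the Generator, here is an illustrative PyTorch sketch. The form of each term, the weights and the function signature are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def generator_loss(d_score, critic_collision_prob, pred_traj, gt_traj,
                   w_adv=1.0, w_safe=1.0, w_rec=1.0):
    """Illustrative combination of the three terms discussed above.

    d_score:                Discriminator output on the generated trajectory (in [0, 1], higher = more plausible).
    critic_collision_prob:  Critic's estimated likelihood of a future collision.
    pred_traj / gt_traj:    generated and ground-truth trajectories, shape (T, 2).
    """
    adversarial = F.binary_cross_entropy(d_score, torch.ones_like(d_score))  # fool the Discriminator
    safety = critic_collision_prob.mean()                                    # penalise likely collisions
    reconstruction = F.mse_loss(pred_traj, gt_traj)                          # auto-encoding term against mode collapse
    return w_adv * adversarial + w_safe * safety + w_rec * reconstruction

# Toy usage with random tensors.
loss = generator_loss(torch.rand(1), torch.rand(1), torch.randn(12, 2), torch.randn(12, 2))
print(loss.item())
```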

"A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving"

  • [ 2019 ] [📝] [ 🎓 University of Iasi ]
Click to expand

One figure:

Classification of motion models based on three increasingly abstract levels - adapted from (Lefèvre, S., Vasquez. D. & Laugier C. - 2014). Source.

Authors: Leon, F., & Gavrilescu, M.

  • A reference to one white paper: "Safety first for automated driving" 2019 - from Aptiv, Audi, Baidu, BMW, Continental, Daimler, Fiat Chrysler Automobiles, HERE, Infineon, Intel and Volkswagen (alphabetical order). The authors quote some of the good practices about Interpretation and Prediction:
    • Predict only a short time into the future (the further the predicted state is in the future, the less likely it is that the prediction is correct).
    • Rely on physics where possible (a vehicle driving in front of the automated vehicle will not stop in zero time on its own).
    • Consider the compliance of other road users with traffic rules.
  • Miscellaneous notes about prediction:
    • The authors point out the need for high-level reasoning (the more abstract the feature, the more reliable it is long term), mentioning both "affinity" and "attention" mechanisms.
    • They also call for jointly addressing vehicle motion modelling and risk estimation (criticality assessment).
    • Gaussian Processes are found to be a flexible tool for modelling motion patterns and are compared to Markov Models for prediction.
      • In particular, GP regressions have the ability to quantify uncertainty (e.g. occlusion).
    • "CNNs can be superior to LSTMs for temporal modelling since trajectories are continuous in nature, do not have complicated "state", and have high spatial and temporal correlations".


"Deep Predictive Autonomous Driving Using Multi-Agent Joint Trajectory Prediction and Traffic Rules"

  • [ 2019 ] [📝] [🎞️] [ 🎓 Seoul National University ]
Click to expand

One figure:

The framework consists of four modules: encoder module, interaction module, prediction module and control module. Source.

Authors: Cho, K., Ha, T., Lee, G., & Oh, S.

  • One previous work: "Learning-Based Model Predictive Control under Signal Temporal Logic Specifications" by (Cho & Oh, 2018).
  • One term: "robustness slackness" for STL-formula.
    • The motivation is to solve dilemma situations (inherent to strict compliance when all rules cannot be satisfied) by disobeying certain rules based on their predicted degree of satisfaction.
    • The idea is to filter out non-plausible trajectories in the prediction step to only consider valid prediction candidates during planning.
    • The filter considers some "rules" such as Lane keeping and Collision avoidance of front vehicle or Speed limit (I did not understand why they are equally considered).
    • These rules are represented by Signal Temporal Logic (STL) formulas.
      • Note: STL is an extension of Linear Temporal Logic (with boolean predicates and discrete-time) with real-time and real-valued constraints.
    • A metric can be introduced to measure how well a given signal (here, a trajectory candidate) satisfies a STL formula.
      • This is called "robustness slackness" and acts as a margin to the satisfaction of the STL formula (a toy example is given at the end of this entry).
    • This enables a "control under temporal logic specification" as mentioned by the authors.
  • Architecture
    • Encoder module: The observed trajectories are fed to some LSTM whose internal state is used by the two subsequent modules.
    • Interaction module: To consider interaction, all LSTM states are concatenated (joint state) together with a feature vector of relative distances. In addition, a CVAE is used for multi-modality (several possible trajectories are generated) and to capture interactions (I did not fully understand that point), as stated by the authors:
      • "The latent variable z models inherent structure in the interaction of multiple vehicles, and it also helps to describe underlying ambiguity of future behaviours of other vehicles."

    • Prediction module: Based on the LSTM states, the concatenated vector and the latent variable, both future trajectories and margins to the satisfaction of each rule are predicted.
    • Control module: An MPC optimizes the control of the ego car, deciding which rules should be prioritized based on the two predicted objects (trajectories and robustness slackness).
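To make the notion of robustness slackness more concrete, here is a toy example for a single STL formula of the form G(signal <= threshold), e.g. a speed-limit rule. The rule and the numbers are made up; the paper combines several such rules.

```python
import numpy as np

def robustness_always_leq(signal, threshold):
    """Robustness ("slackness") of the STL formula G(signal <= threshold).

    Positive value: the trajectory satisfies the rule, with that margin.
    Negative value: the rule is violated, by that amount.
    """
    return float(np.min(threshold - signal))

# Two candidate speed profiles, evaluated against a 15 m/s speed limit.
speed_candidate_a = np.array([12.0, 13.5, 14.0, 14.5])
speed_candidate_b = np.array([12.0, 14.0, 16.5, 15.5])

print(robustness_always_leq(speed_candidate_a, 15.0))  #  0.5 -> satisfied with margin
print(robustness_always_leq(speed_candidate_b, 15.0))  # -1.5 -> violated; could be filtered out or deprioritised
```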

"An Online Evolving Framework for Modeling the Safe Autonomous Vehicle Control System via Online Recognition of Latent Risks"

  • [ 2019 ] [📝] [ 🎓 Ohio State University ] [ 🚗 Ford ]

  • [ MDP, action-state transitions matrix, SUMO, risk assessment ]

Click to expand

One figure:

Both the state space and the transition model are adapted online, offering two features: prediction about the next state and detection of unknown (i.e. risky) situations. Source.

Authors: Han, T., Filev, D., & Ozguner, U.

  • Main ideas: Both the state space and the transition model (here the state space is discrete, so the transition model consists of transition matrices) of an MDP are adapted online (a toy sketch of such an online update is given at the end of this entry).
    • I understand it as trying to learn the transition model (experience is generated using SUMO), hence to some extent going toward model-based RL.
    • The motivation is to assist any AV control framework with a so-called "evolving Finite State Machine" (e-FSM).
      • By identifying state-transitions precisely, the future states can be predicted.
      • By determining states uniquely (using online-clustering methods) and recognizing the state consistently (expressed by a probability distribution), initially unexpected dangerous situations can be detected.
      • It reminds me of some ideas about risk assessment discussed during IV19: the discrepancy between the expected outcome and the observed outcome is used to quantify risk, i.e. the surprise or misinterpretation of the current situation.
  • Some concerns:
    • "The dimension of transition matrices should be expanded to represent state-transitions between all existing states"
      • What when the scenario gets more complex than the presented "simple car-following" and that the state space (treated as discrete) becomes huge?
    • In addition, "the total number of transition matrices is identical to the total number of actions".
      • Even for the simple example, the acceleration command was discretized into 17 bins. Continuous action spaces are not an option.
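As a rough illustration of the online adaptation of per-action transition matrices, here is a toy Python sketch. The state-discovery step (online clustering) is reduced to an explicit add_state call, and the class name and the counts-based estimate are assumptions rather than the authors' implementation.

```python
import numpy as np

class OnlineTransitionModel:
    """Toy sketch loosely inspired by the e-FSM idea described above."""

    def __init__(self, n_actions):
        self.n_states = 0
        self.counts = [np.zeros((0, 0)) for _ in range(n_actions)]  # one transition matrix per action

    def add_state(self):
        """Expand every transition matrix when a new state is identified (e.g. by online clustering)."""
        self.n_states += 1
        self.counts = [np.pad(c, ((0, 1), (0, 1))) for c in self.counts]
        return self.n_states - 1

    def update(self, s, a, s_next):
        self.counts[a][s, s_next] += 1

    def predict(self, s, a):
        """Probability distribution over the next state, given the current state and action."""
        row = self.counts[a][s]
        total = row.sum()
        return row / total if total > 0 else np.full(self.n_states, 1.0 / self.n_states)

# Toy usage: 2 discretized acceleration commands, 3 states discovered so far.
model = OnlineTransitionModel(n_actions=2)
for _ in range(3):
    model.add_state()
model.update(s=0, a=1, s_next=2)
model.update(s=0, a=1, s_next=2)
model.update(s=0, a=1, s_next=1)
print(model.predict(s=0, a=1))  # approximately [0.  0.33  0.67]
```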

"A Driving Intention Prediction Method Based on Hidden Markov Model for Autonomous Driving"

  • [ 2019 ] [📝] [ 🎓 IEEE ]

  • [ HMM, Baum-Welch algorithm, forward algorithm ]

Click to expand

One figure:

Source.

Authors: Liu, S., Zheng, K., Member, S., Zhao, L., & Fan, P.

  • One term: "mobility feature matrix"
    • The recorded data (e.g. absolute positions, timestamps ...) are processed to form the mobility feature matrix (e.g. speed, relative position, lateral gap in lane ...).
    • Its size is T × L × N: T time steps, L vehicles, N types of mobility features.
    • In the discrete characterization, this matrix is then turned into a set of observations using K-means clustering (a toy sketch is given at the end of this entry).
    • In the continuous case, mobility features are modelled as Gaussian mixture models (GMMs).
  • This work implements HMM concepts presented in my project Educational application of Hidden Markov Model to Autonomous Driving.
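A minimal sketch of the discrete characterization described above, assuming random mobility features and scikit-learn's KMeans; the dimensions and the number of clusters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy sketch: mobility feature matrix -> K-means -> discrete HMM observation symbols.
T, L, N = 50, 3, 4                       # time steps, vehicles, feature types (speed, relative position, ...)
mobility = np.random.randn(T, L, N)      # the T x L x N "mobility feature matrix"

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
observations = kmeans.fit_predict(mobility.reshape(T * L, N)).reshape(T, L)
print(observations.shape)                # (50, 3): one discrete observation symbol per vehicle and time step
# These symbols would then feed the HMM (Baum-Welch for training, forward algorithm for inference).
```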

"Online Risk-Bounded Motion Planning for Autonomous Vehicles in Dynamic Environments"

  • [ 2019 ] [📝] [ 🎓 MIT ] [ 🚗 Toyota ]

  • [ intention-aware planning, manoeuvre-based motion prediction, POMDP, probabilistic risk assessment, CARLA ]

Click to expand

One figure:

Source.

Authors: Huang, X., Hong, S., Hofmann, A., & Williams, B.

  • One term: "Probabilistic Flow Tubes" (PFT)
    • A motion representation used in the "Motion Model Generator".
    • Instead of using hand-crafted rules for the transition model, the idea is to learn human behaviours from demonstration.
    • The inferred models are encoded with PFTs and are used to generate probabilistic predictions for both manoeuvre (long-term reasoning) and motion of the other vehicles.
    • The advantage of belief-based probabilistic planning is that it can avoid over-conservative behaviours while offering probabilistic safety guarantees.
  • Another term: "Risk-bounded POMDP Planner"
    • The uncertainty in the intention estimation is then propagated to the decision module.
    • Some notion of risk, defined as the probability of collision, is evaluated and considered when taking actions, leading to the introduction of a "chance-constrained POMDP" (CC-POMDP).
    • The online solver, a heuristic-search algorithm called Risk-Bounded AO* (RAO*), takes advantage of the risk estimation to prune the over-risky branches that violate the risk constraints and eventually outputs a plan with a guarantee on the probability of success.
  • One quote (this could apply to many other works):

"One possible future work is to test our work in real systems".


"Towards Human-Like Prediction and Decision-Making for Automated Vehicles in Highway Scenarios"

  • [ 2019 ] [📝] [:octocat:] [🎞️] [ 🎓 INRIA ] [ 🚗 Toyota ]

  • [ planning-based motion prediction, manoeuvre-based motion prediction ]

Click to expand

Author: Sierra Gonzalez, D.

  • Prediction techniques are often classified into three types:

    • physics-based
    • manoeuvre-based (and goal-based).
    • interaction-aware
  • As I understood, the main idea here is to combine prediction techniques (and their advantages).

    • The driver-models (i.e. the reward functions previously learnt with IRL) can be used to identify the most likely, risk-aversive, anticipatory manoeuvres. This is called the model-based prediction by the author since it relies on one model.
      • But relying only on driver models to predict the behaviour of surrounding traffic might fail to predict dangerous manoeuvres.
      • As stated, "the model-based method is not a reliable alternative for the short-term estimation of behaviour, since it cannot predict dangerous actions that deviate from what is encoded in the model".
      • One solution is to add a term that represents how the observed movement of the target matches a given maneuver.
      • In other words, to consider the noisy observation of the dynamics of the targets and include these so-called dynamic evidence into the prediction.
  • Usage:

    • The resulting approach is used in the probabilistic filtering framework to update the belief in the POMDP and in its rollout (to bias the construction of the history tree towards likely situations given the state and intention estimations of the surrounding vehicles).
    • It improves the inference of manoeuvres, reducing the rate of false positives in the detection of lane-change manoeuvres, and enables the exploration of situations in which the surrounding vehicles behave dangerously (not possible if relying on safe generative models such as IDM).
  • One quote about this combination:

"This model mimics the reasoning process of human drivers: they can guess what a given vehicle is likely to do given the situation (the model-based prediction), but they closely monitor its dynamics to detect deviations from the expected behaviour".

  • One idea: use this combination for risk assessment.

    • As stated, "if the intended and expected maneuver of a vehicle do not match, the situation is classified as dangerous and an alert is triggered".
    • This is an important concept of risk assessment I could identify at IV19: a situation is dangerous if there is a discrepancy between what is expected (given the context) and what is observed.
  • One term: "Interacting Multiple Model" (IMM), used as baseline in the comparison.

    • The idea is to consider a group of motion models (e.g. lane keeping with CV, lane change with CV) and continuously estimate which of them captures more accurately the dynamics exhibited by the target.
    • The final predictions are produced as a weighted combination of the individual predictions of each filter.
    • IMM belongs to the physics-based predictions approaches and could be extended for manoeuvre inference (called dynamics matching). It is often used to maintain the beliefs and guide the observation sampling in POMDP.
    • But the issue is that IMM completely disregards the interactions between vehicles.

"Decision making in dynamic and interactive environments based on cognitive hierarchy theory: Formulation, solution, and application to autonomous driving"

  • [ 2019 ] [📝] [ 🎓 University of Michigan ]

  • [ level-k game theory, cognitive hierarchy theory, interaction modelling, interaction-aware decision making ]

Click to expand

Authors: Li, S., Li, N., Girard, A., & Kolmanovsky, I.

  • One concept: cognitive hierarchy.

    • Other drivers are assumed to follow some "cognitive behavioural models", parametrized with a so called "cognitive level" σ.
    • The goal is to obtain and maintain belief about σ based on observation in order to optimally respond (using an MPC).
    • Three levels are considered:
      • level-0: driver that treats other vehicles on road as stationary obstacles.
      • level-1: cautious/conservative driver.
      • level-2: aggressive driver.
  • One quote about the "cognitive level" of human drivers:

"Humans are most commonly level-1 and level-2 reasoners".

Related works:

  • Li, S., Li, N., Girard, A. & Kolmanovsky, I. [2019]. "Decision making in dynamic and interactive environments based on cognitive hierarchy theory, Bayesian inference, and predictive control" [pdf]

  • Li, N., Oyler, D., Zhang, M., Yildiz, Y., Kolmanovsky, I., & Girard, A. [2016]. "Game-theoretic modeling of driver and vehicle interactions for verification and validation of autonomous vehicle control systems" [pdf]

    • "If a driver assumes that the other drivers are level-1 and takes an action accordingly, this driver is a level-2 driver".

    • Use RL with hierarchical assignment to learn the policy:
      • First, the π-0 (for level-0) is learnt for the ego-agent.
      • Then π-1 with all the other participants following π-0.
      • Then π-2 ...
    • Action masking: "If a car in the left lane is in a parallel position, the controlled car cannot change lane to the left".
      • "The use of these hard constrains eliminates the clearly undesirable behaviours better than through penalizing them in the reward function, and also increases the learning speed during training"
  • Ren, Y., Elliott, S., Wang, Y., Yang, Y., & Zhang, W. [2019]. "How Shall I Drive ? Interaction Modeling and Motion Planning towards Empathetic and Socially-Graceful Driving" [pdf] [code]

Source.
Source.


Rule-based Decision Making


"Formalizing Traffic Rules for Machine Interpretability"

Click to expand
Source.
Top left: Different techniques on how to model the rules have been employed: formal logics such as Linear Temporal Logic (LTL) or Signal Temporal Logic (STL), as well as real-valued constraints. Middle and bottom: Rules are separated into premise and conclusion. The initial premise and exceptions (red) are combined by conjunction. Source.

Authors: Esterle, K., Gressenbuch, L., & Knoll, A.

  • Motivation:

    • "Traffic rules are fuzzy and not well defined, making them incomprehensible to machines."

    • The authors formalize traffic rules from legal texts (here StVO) to a formal language (here LTL).
  • Which legal text defines the rules? Here, the German road traffic regulation (StVO).

  • Why Linear Temporal Logic (LTL) as the formal language to specify traffic rules?

    • "During the legal analysis, conjunction, disjunction, negation and implication proved to be powerful and useful tools for formalizing rules. As traffic rules such as overtaking consider temporal behaviors, we decided to use LTL."

    • "Others have used Signal Temporal Logic (STL) to obtain quantitative semantics about rule satisfaction. Quantitive semantics might be beneficial for relaxing the requirements to satisfy a rule."

  • Rules are separated into premise and conclusion (a toy formula in this style is given at the end of this entry).

    • "This allows rules to be separated into a premise about the current state of the environment, i.e. when a rule applies, and the legal behavior of the ego agent in that situation (conclusion). Then, exceptions to the rules can be modeled to be part of the assumption."

  • Tools:

    • INTERACTION: a dataset which focuses on dense interactions; it is used to analyze the compliance of each vehicle with the traffic rules.
    • SPOT: a C++ library for model checking, to translate the formalized LTL formula to a deterministic finite automaton and to manipulate the automatons.
    • BARK: a benchmarking framework.
  • Evaluation of rule-violation on public data:

    • "Roughly every fourth lane change does not keep a safe distance to the rear vehicle, which is similar for the German and Chinese Data."
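As a toy illustration of the premise/conclusion structure (not one of the paper's actual StVO formalizations), a safe-distance rule could be written in LTL as follows; the predicates are hypothetical names.

```latex
% Read it as: "globally, whenever the ego vehicle is following another vehicle v
% and no exception applies (e.g. v is not braking abruptly), the ego vehicle
% must keep a safe distance to v."
\mathbf{G}\Big(\big(\mathit{following}(ego, v) \land \lnot\,\mathit{abrupt\_braking}(v)\big)
          \rightarrow \mathit{safe\_distance}(ego, v)\Big)
```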


"A hierarchical control system for autonomous driving towards urban challenges"

  • [ 2020 ] [📝] [ 🎓 Chungbuk National University, Korea ]

  • [ FSM ]

Click to expand
Source.
Behavioural planning is performed using a two-stage FSM. Right: transition conditions in the M-FSM. Source.

Authors: Van, N. D., Sualeh, M., Kim, D., & Kim, G. W.

  • Motivations:
    • "In the DARPA Urban Challenge, Stanford Junior team succeeded in applying FSM with several scenarios in urban traffic roads. However, the main drawback of FSM is the difficulty in solving uncertainty and in large-scale scenarios."

    • Here:
      • The uncertainty is not addressed.
      • The diversity of scenarios is handled by a two-stage Finite State Machine (FSM).
  • About the two-stage FSM:
    • 1- A Mission FSM (M-FSM).
      • Five states: Ready, Stop-and-Go (SAG) (main mode), Change-Lane (CL), Emergency-stop, avoid obstacle mode.
    • 2- A Control FSM (C-FSM) in each M-FSM state.
  • The decision is then converted into speed and waypoint objectives, handled by the local path planning.
    • It uses a real-time hybrid A* algorithm with an occupancy grid map.
    • The communication from the decision module to the path planner is unidirectional: no feedback is given regarding feasibility, for instance.

"Trajectory Optimization and Situational Analysis Framework for Autonomous Overtaking with Visibility Maximization"

  • [ 2019 ] [📝] [🎞️] [ 🎓 National University of Singapore, Delft University, MIT ]
  • [ FSM, occlusion, partial observability ]
Click to expand
Left: previous work Source. Right: The BP FSM consists of 5 states and 11 transitions. Each transition from one state to another is triggered by a specific alphabet symbol unique to the state. For instance, 1 is Obstacle to be overtaken in ego lane detected. Together with the MPC set of parameters, a guidance path is passed to the trajectory optimizer. Source.

Authors: Andersen, H., Alonso-mora, J., Eng, Y. H., Rus, D., & Ang Jr, M. H.

  • Main motivation:
    • Deal with occlusions, i.e. partial observability.
    • Use case: a car is illegally parked on the ego vehicle’s lane. It may fully occlude the view, but it has to be overtaken.
  • One related works:
  • About the hierarchical structure.
    • 1- A high-level behaviour planner (BP).
      • It is structured as a deterministic finite state machine (FSM).
      • States include:
        • Follow ego-lane
        • Visibility Maximization
        • Overtake
        • Merge back
        • Wait
      • Transitions are based on some deterministic risk assessment.
        • The authors argue that deterministic methods (e.g. formal verification of trajectories using reachability analysis) are simpler and computationally more efficient than probabilistic versions, while being well suited to this information maximization:
        • This is due to the fact that the designed behaviour planner explicitly breaks the traffic rule in order to progress along the vehicle’s course.

    • Interface 1- > 2-:
      • Each state corresponds to a specific set of parameters that is used by the trajectory optimizer.
      • "In case of Overtake, a suggested guidance path is given to both the MPC and the backup trajectory generator".

    • 2- A trajectory optimizer.
      • The problem is formulated as a receding-horizon planner and the task is to solve, in real time, a non-linear constrained optimization.
        • Costs include guidance path deviation, progress, speed deviation, size of the blind spot (visible area) and control inputs.
        • Constraints include, among others, obstacle avoidance.
        • The prediction horizon of this MPC is 5s.
      • Again (I really like this idea), the MPC parameters are set by the BP (see the toy sketch at the end of this entry).
        • For instance, the cost for path deviation is high for Follow ego-lane, while it can be reduced for Visibility Maximization.
        • "Increasing the visibility maximization cost resulted in the vehicle deviating from the path earlier and more abrupt, leading to frequent wait or merge back cases when an oncoming car comes into the vehicle’s sensor range. Reducing visibility maximization resulted in later and less abrupt deviation, leading to overtaking trajectories that are too late to be aborted. We tune the costs for a good trade-off in performance."

        • Hence, depending on the state, the task might be to maximize the amount of information that the autonomous vehicle gains along its trajectory.
    • "Our method considers visibility as a part of both decision-making and trajectory generation".
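To illustrate the BP-to-MPC interface described above, here is a toy sketch in which each behaviour-planner state selects a set of cost weights for the trajectory optimizer. The state names echo the FSM above, but the keys and values are made up.

```python
# Toy sketch of the BP -> MPC interface: each behaviour-planner state imposes a
# set of cost weights on the trajectory optimizer. Values are illustrative only.
MPC_PARAMS = {
    "follow_ego_lane": {
        "w_path_deviation": 10.0,   # stay close to the guidance path
        "w_visibility":      0.0,
        "w_progress":        1.0,
    },
    "visibility_maximization": {
        "w_path_deviation":  2.0,   # allowed to deviate to see past the occlusion
        "w_visibility":      5.0,   # reward reducing the blind spot along the horizon
        "w_progress":        1.0,
    },
    "overtake": {
        "w_path_deviation":  4.0,   # follow the suggested guidance path around the obstacle
        "w_visibility":      1.0,
        "w_progress":        3.0,
    },
}

def configure_optimizer(bp_state: str) -> dict:
    """Return the MPC weights imposed by the current behaviour-planner state."""
    return MPC_PARAMS[bp_state]

print(configure_optimizer("visibility_maximization"))
```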


"Jointly Learnable Behavior and Trajectory Planning for Self-Driving Vehicles"

  • [ 2019 ] [📝] [ 🚗 Uber ]
  • [ max-margin ]
Click to expand
Source.

Authors: Sadat, A., Ren, M., Pokrovsky, A., Lin, Y., Yumer, E., & Urtasun, R.

  • Main motivation:
    • Design a decision module where both the behavioural planner and the trajectory optimizer share the same objective (i.e. cost function).
    • Therefore "joint".
    • "[In non-joint approaches] the final trajectory outputted by the trajectory planner might differ significantly from the one generated by the behavior planner, as they do not share the same objective".

  • Requirements:
    • 1- Avoid time-consuming, error-prone, and iterative hand-tuning of cost parameters.
      • E.g. Learning-based approaches (BC).
    • 2- Offer interpretability about the costs jointly imposed on these modules.
      • E.g. Traditional modular 2-stage approaches.
  • About the structure:
    • The driving scene is described in W (desired route, ego-state, map, and detected objects). Probably W for "World"?
    • The behavioural planner (BP) decides two things based on W:
      • 1- A high-level behaviour b.
        • The path to converge to based on one chosen manoeuvre: keep-lane, left-lane-change, or right-lane-change.
        • The left and right lane boundaries.
        • The obstacle side assignment: whether an obstacle should stay in the front, back, left, or right to the ego-car.
      • 2- A coarse-level trajectory τ.
      • The loss has also a regularization term.
      • This decision is "simply" the argmin of the shared cost-function, obtained by sampling+selecting the best.
    • The "trajectory optimizer" refines τ based on the constraints imposed by b.
      • For instance an overlap cost will be incurred if the side assignment of b is violated.
    • A cost function parametrized by w assesses the quality of the selected <b, τ> pair:
      • cost = w^T . sub-costs-vec(τ, b, W).
      • Sub-costs relate to safety, comfort, feasibility, mission completion, and traffic rules.
  • Why "learnable"?
    • Because the weight vector w that captures the importance of each sub-cost is learnt based on human demonstrations.
      • "Our planner can be trained jointly end-to-end without requiring manual tuning of the costs functions".

    • There are two losses for that objective:
      • 1- Imitation loss (with MSE).
        • It applies on the <b, τ> produced by the BP.
      • 2- Max-margin loss to penalize trajectories that have small cost and are different from the human driving trajectory (a toy version is sketched at the end of this entry).
        • It applies on the <τ> produced by the trajectory optimizer.
        • "This encourages the human driving trajectory to have smaller cost than other trajectories".

        • It reminds me of the max-margin method in IRL, where the weights of the reward function should make the expert demonstration better than any other policy candidate.
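Here is a toy version of such a max-margin term, assuming the learned costs of the human trajectory and of a set of sampled trajectories are already available; the exact margin and aggregation used in the paper may differ.

```python
import torch

def max_margin_loss(cost_human, cost_samples, task_loss):
    """Illustrative max-margin term.

    Encourages the human trajectory to have a lower learned cost than every
    sampled trajectory, by a margin that grows with the task loss
    (e.g. the distance of each sample to the human trajectory).

    cost_human:   scalar tensor, learned cost of the human-driven trajectory.
    cost_samples: (S,) tensor, learned costs of S sampled candidate trajectories.
    task_loss:    (S,) tensor, e.g. L2 distance of each sample to the human trajectory.
    """
    violations = cost_human - cost_samples + task_loss   # want cost_human < cost_sample - margin
    return torch.clamp(violations, min=0.0).max()

# Toy usage.
loss = max_margin_loss(torch.tensor(1.0),
                       torch.tensor([1.5, 0.8, 2.0]),
                       torch.tensor([0.3, 0.6, 0.1]))
print(loss)  # tensor(0.8000): the second sample is cheaper than the human trajectory -> penalised
```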

"Liability, Ethics, and Culture-Aware Behavior Specification using Rulebooks"

  • [ 2019 ] [📝] [:octocat:] [🎞️] [🎞️] [ 🎓 ETH Zurich ] [ 🚗 nuTonomy, Aptiv ]

  • [ sampling-based planning, safety validation, reward function, RSS ]

Click to expand

Some figures:

Defining the rulebook. Source.
The rulebook is associated with an operator ≤ to prioritize between rules. Source.
The rulebook serves for deciding which trajectory to take and can be adapted using a series of operations. Source.

Authors: Censi, A., Slutsky, K., Wongpiromsarn, T., Yershov, D., Pendleton, S., Fu, J., & Frazzoli, E.

  • Allegedly how nuTonomy (an Aptiv company) cars work.

  • One main concept: "rulebook".

    • It contains multiple rules, that specify the desired behaviour of the self-driving cars.
    • A rule is simply a scoring function, or “violation metric”, on the realizations (= trajectories).
    • The degree of violation acts like some penalty term: here are some examples of a realization x evaluated by a rule r:
      • For speed limit: r(x) = interval for which the car was above 45 km/h.
      • For minimizing harm: r(x) = kinetic energy transferred to human bodies.
    • Rules are then used as comparison operators to rank candidate trajectories (a toy lexicographic ranking is sketched further below).
  • One idea: Hierarchy of rules.

    • With many rules being defined, it may be impossible to find a realization (e.g. trajectory) that satisfies all.
    • But even in a critical situation, the algorithm must make a choice: the least catastrophic option (hence there is no concept of infeasibility).
    • To deal with this "unfeasibility", priorities are defined between conflicting rules, which are therefore hierarchically ordered.
    • Hence a rulebook R comes with some priority operator ≤, forming the pair <R, ≤>.
    • This leads to some concepts:
    • Safety vs. infractions.
      • Ex.: a rule "not to collide with other objects" will have a higher priority than the rule "not crossing the double line".
    • Liability-aware specification.
      • Ex.: (edge-case): Instruct the agent to collide with the object on its lane, rather than collide with the object on the opposite lane, since changing lane will provoke an accident for which it would be at fault.
      • This is close to the RSS ("responsibility-sensitive safety" model) of Mobileye.
    • Hierarchy between rules:
      • Top: Guarantee safety of humans.
        • This is written analytically (e.g. a precise expression for the kinetic energy to minimize harm to people).
      • Bottom: Comfort constraints and progress goals.
        • Can be learnt based on observed behaviour (and also tend to be platform- and implementation- specific).
      • Middle: All the other priorities among rule groups
        • These are somewhat open for discussion.
  • How to build a rulebook:

    • Rules can be defined analytically (e.g. LTL formalism) or learnt from data (for non-safety-critical rules).
    • Violation functions can be learned from data (e.g. IRL).
    • Priorities between rules can also be learnt.
  • One idea: manipulation of rulebooks.

    • Regulations and cultures differ depending on the country and the state.
    • A rulebook <R, <> can easily be adapted using three operations (priority refinement, rule augmentation, rule aggregation).
  • Related work: Several topics raised in this paper reminds me subjects addressed in Emilio Frazzoli, CTO, nuTonomy - 09.03.2018

    • 1- Decision making with FSM:
      • Too complex to code. Easy to make mistake. Difficult to adjust. Impossible to debug (:cry:).
    • 2- Decision making with E2E learning:
      • Appealing since there are too many possible scenarios.
      • But how to prove that and justify it to the authorities?
        • One solution is to revert such imitation strategy: start by defining the rules.
    • 3- Decision making "cost-function-based" methods
      • 3-1- RL / MCTS: not addressed here.
      • 3-2- Rule-based (not the if-else-then logic but rather traffic/behaviour rules).
    • First note:
      • Number of rules: small (15 are enough for level-4).
      • Number of possible scenarios: huge (combinational).
    • Second note:
      • Driving behaviours are hard to code.
      • Driving behaviours are hard to learn.
      • But driving behaviours are easy to assess.
    • Strategy:
      • 1- Generate candidate trajectories
        • Not only in time and space.
        • Also in terms of semantics (logical trajectories in a Kripke structure).
      • 2- Check if they satisfy the constraints and pick the best.
        • This involves linear operations.
    • Conclusion:
      • "Rules and rules priorities, especially those that concern safety and liability, must be part of nation-wide regulations to be developed after an informed public discourse; it should not be up to engineers to choose these important aspects."

      • This reminds me of the discussion about social acceptance I had at IV19.
      • As E. Frazzoli concluded during his talk, the remaining question is:
        • "We do not know how we want human-driven vehicle to behave?"
        • Once we have the answer, that is easy.
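As a toy illustration of the ranking idea referenced above, assume a strict hierarchy of two rules; this simplifies the paper's pre-ordered rulebook to a lexicographic comparison, and the violation metrics and candidate trajectories below are made up.

```python
# Toy sketch: ranking candidate trajectories with a totally ordered rulebook.

def collision_violation(traj):      # higher-priority rule: kinetic energy transferred in a collision
    return traj["collision_energy"]

def double_line_violation(traj):    # lower-priority rule: time spent over the double line
    return traj["time_over_double_line"]

RULEBOOK = [collision_violation, double_line_violation]   # ordered by decreasing priority

def violation_vector(traj):
    return tuple(rule(traj) for rule in RULEBOOK)

candidates = [
    {"name": "cross_line", "collision_energy": 0.0, "time_over_double_line": 2.5},
    {"name": "keep_lane",  "collision_energy": 3.0, "time_over_double_line": 0.0},
]

# Lexicographic comparison: any violation of a high-priority rule outweighs
# violations of lower-priority rules.
best = min(candidates, key=violation_vector)
print(best["name"])   # "cross_line": crossing the double line is preferred to a collision
```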

Some figures from this related presentation:

Candidate trajectories are not just spatio-temporal but also semantic. Source.
Define priorities between rules, as Asimov did for his laws. Source.
As raised here by the main author of the paper, I am still wondering how the presented framework deals with the different sources of uncertainties. Source.

"Provably Safe and Smooth Lane Changes in Mixed Traffic"

  • [ 2019 ] [📝] [:octocat:] [🎞️] [ 🎓 FZI, KIT ]

  • [ path-velocity decomposition, IDM, RSS ]

Click to expand

Some figures:

The first safe? check might lead to conservative behaviours (huge gaps would be needed for safe lane changes). Hence it is relaxed with some Probably Safe? condition. Source.
Source.
Formulation by Pek, Zahn, & Althoff, 2017. Source.

Authors: Naumann, M., Königshof, H., & Stiller, C.

  • Main ideas:

    • The notion of safety is based on the responsibility sensitive safety (RSS) definition.
      • As stated by the authors, "A safe lane change is guaranteed not to cause a collision according to the previously defined rules, while a single vehicle cannot ensure that it will never be involved in a collision."
    • Use set-based reachability analysis to prove the "RSS-safety" of lane change manoeuvre based on gap evaluation.
      • In other words, it is the responsibility of the ego vehicle to maintain safe distances during the lane change manoeuvre.
  • Related works: A couple of safe distances are defined, building on the formulation of (Pek, Zahn, & Althoff, 2017) shown in the figure above.


"Decision-Making Framework for Autonomous Driving at Road Intersections: Safeguarding Against Collision, Overly Conservative Behavior, and Violation Vehicles"

  • [ 2018 ] [📝] [🎞️] [ 🎓 Daejeon Research Institute, South Korea ]

  • [ probabilistic risk assessment, rule-based probabilistic decision making ]

Click to expand

One figure:

Source.

Author: Noh, S.

  • Many ML-based works criticize rule-based approaches (over-conservative, no generalization capability and painful parameter tuning).
    • True, the presented framework contains many parameters whose tuning may be tedious.
    • But this approach just works! At least they go out of the simulator and show some experiments on a real car.
    • I really like their video, especially the multiple camera views together with the RViz representation.
    • It can be seen that probabilistic reasoning and uncertainty-aware decision making are essential for robustness.
  • One term: "Time-to-Enter" (tte).
    • It represents the time it takes a relevant vehicle to reach the potential collision area (CA), from its current position at its current speed (a toy computation is given at the end of this entry).
    • To deal with uncertainty in the measurements, a variant of this heuristic is coupled with a Bayesian network for probabilistic threat-assessment.
  • One Q&A: What is the difference between situation awareness and situation assessment?
    • In situation awareness, all possible routes are considered for the detected vehicles using a map. The vehicles whose potential route intersect with the ego-path are classified as relevant vehicles.
    • In situation assessment, a threat level in {Dangerous, Attentive, Safe} is inferred for each relevant vehicle.
  • One quote:

"The existing literature mostly focuses on motion prediction, threat assessment, or decision-making problems, but not all three in combination."

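For concreteness, here is the Time-to-Enter heuristic mentioned above as a toy computation in isolation; in the paper this deterministic value feeds a Bayesian network that handles measurement uncertainty, and the numbers below are made up.

```python
def time_to_enter(distance_to_collision_area_m: float, speed_mps: float) -> float:
    """Time for a relevant vehicle to reach the potential collision area (CA),
    from its current position at its current speed."""
    if speed_mps <= 0.0:
        return float("inf")     # a stopped vehicle never enters the CA at its current speed
    return distance_to_collision_area_m / speed_mps

tte_other = time_to_enter(distance_to_collision_area_m=25.0, speed_mps=10.0)   # 2.5 s
tte_ego = time_to_enter(distance_to_collision_area_m=18.0, speed_mps=9.0)      # 2.0 s
# A small gap between the two times would push the threat level towards "Dangerous".
print(tte_other, tte_ego, abs(tte_other - tte_ego))
```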


Model-Free Reinforcement Learning


"Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving"

Click to expand
Source.
Illustration of rapid phase transitions: when small changes in the critical states – the ones we see in near-accident scenarios – require dramatically different actions of the autonomous car to stay safe. Source.
Source.
I must say I am a bit disappointed by the collision rate results. The authors mention safety a lot, but their approaches crash every third trial on the unprotected turn. The too-conservative agent TIMID gets zero collision. Is it then fair to claim ''Almost as Safe as Timid''? In such cases, what could be needed is an uncertainty-aware agent, e.g. POMDP with information gathering behaviours. Source.
Source.
One of the five scenarios: In this unprotected left turn, a truck occludes the oncoming ado car. Bottom left: The two primitive policies (aggressive and timid) are first learnt by imitation. Then they are used to train a high-level policy with RL, which selects at each timestep ts (larger than the primitive timestep) which primitive to follow. Bottom right: While AGGRESSIVE achieves higher completion rates for low time limits, it cannot improve further with an increasing limit because of collisions. Source.
Source.
About phase transition: H-REIL usually chooses the timid policy at the areas that have a collision risk while staying aggressive at other locations when it is safe to do so. Baselines: π-agg, resp. π-timid, has been trained only on aggressive, resp. timid, rule-based demonstrations with IL. π-IL was trained on the mixture of both. A pity that no pure RL baseline is presented. Source.

Authors: Cao, Z., Bıyık, E., Wang, W. Z., Raventos, A., Gaidon, A., Rosman, G., & Sadigh, D.

  • Motivation:

    • Learn (non-rule-based) driving policies in near-accident scenarios, i.e. where quick reactions are required, staying efficient while remaining safe.
    • Idea: decompose this complicated task into two levels.
  • Motivations for a hierarchical structure:

    • 1- The presence of rapid phase transitions makes it hard for RL and IL to capture the policy because they learn a smooth policy across states.
      • "Phase transitions in autonomous driving occur when small changes in the critical states – the ones we see in near-accident scenarios – require dramatically different actions of the autonomous car to stay safe."

      • Due to the non-smooth value function, an action taken in one state may not generalize to nearby states.
      • During training, the algorithms must be able to visit and handle all the critical states individually, which can be computationally inefficient.
      • How to model the rapid phase transition?
        • By switching from one driving mode to another.
        • [The main idea here] "Our key insight is to model phase transitions as optimal switches, learned by RL, between different modes of driving styles, each learned through IL."

    • 2- To achieve full coverage, RL needs to explore the full environment while IL requires a large amount of expert demonstrations covering all states.
      • Both are prohibitive since the state-action space in driving is continuous and extremely large.
        • One solution to improve data-efficiency: Conditional imitation learning (CoIL). It extends IL with high-level commands and learns a separate IL model for each command. High-level commands are required at test time, e.g., the direction at an intersection. Instead of depending on drivers to provide commands, the authors would like to learn this optimal mode-switching policy.
      • "The mode switching can model rapid phase transitions. With the reduced action space and fewer time steps, the high-level RL can explore all the states efficiently to address state coverage."

      • "H-REIL framework usually outperforms IL with a large margin, supporting the claim that in near-accident scenarios, training a generalizable IL policy requires a lot of demonstrations."

    • Hierarchical RL enables efficient exploration for the higher level with a reduced action space, i.e. goal space, while making RL in the lower level easier with an explicit and short-horizon goal.

  • Motivations for combining IL+RL, instead of single HRL or CoIL:

    • 1- For the low-level policy:
      • Specifying reward functions for RL is hard.
      • "We emphasize that RL would not be a reasonable fit for learning the low-level policies as it is difficult to define the reward function."

      • "We employ IL to learn low-level policies πi, because each low-level policy sticks to one driving style, which behaves relatively consistently across states and requires little rapid phase transitions."

    • 2- For the high-level policy:
      • "IL does not fit to the high-level policy, because it is not natural for human drivers to accurately demonstrate how to switch driving modes."

      • "RL is a better fit since we need to learn to maximize the return based on a reward that contains a trade-off between various terms, such as efficiency and safety. Furthermore, the action space is now reduced from a continuous space to a finite discrete space [the conditional branches]."

      • Denser reward signals: setting ts > 1 reduces the number of time steps in an episode and makes the collision penalty, which appears at most once per episode, less sparse.
  • Hierarchical reinforcement and imitation learning (H-REIL) (a minimal sketch of the two levels is given at the end of this entry).

    • 1- One high-level (meta-) policy, learned by RL that switches between different driving modes.
      • Decision: which low-level policy to use?
      • Goal: learn a mode switching policy that maximizes the return based on a simpler pre-defined reward function.
    • 2- Multiple low-level policies πi, learned by IL: one per driving mode.
      • Imitate drivers with different characteristics, such as different aggressiveness levels.
      • They are "basic" and realize relatively easier goals.
      • "The low-level policy for each mode can be efficiently learned with IL even with only a few expert demonstrations, since IL is now learning a much simpler and specific policy by sticking to one driving style with little phase transition."

  • How often are decisions taken?

    • 500ms = timestep of HL.
    • 100ms = timestep of LL.
    • There is clearly a trade-off:
      • 1- The high level should run at low frequency (action abstraction).
      • 2- Not too low since it should be able to react and switch quickly.
    • Maybe the timid IL primitive could take over before the end of the 500ms if needed.
    • Or add some reward regularization to discourage changing modes as long as it is not very crucial.
  • Two driving modes:

    • 1- timid: drives in a safe way to avoid all potential accidents. It slows down whenever there is even a slight risk of an accident.
    • 2- aggressive: favours efficiency over safety. "It drives fast and frequently collides with the ado car."
    • "Since humans often do not optimize for other nuanced metrics, such as comfort, in a near-accident scenario and the planning horizon of our high-level controller is extremely short, there is a limited amount of diversity that different modes of driving would provide, which makes having extra modes unrealistic and unnecessary in our setting."

    • "This intelligent mode switching enables H-REIL to drive reasonably under different situations: slowly and cautiously under uncertainty, and fast when there is no potential risk."

  • About the low-level IL task.

    • Conditional imitation:
      • All the policies share the same feature extractor.
      • Different branches split in later layers for action prediction, where each corresponds to one mode.
      • The branch is selected by external input from high-level RL.
    • Each scenario is run with the ego car following a hand-coded expert policy, under two settings: difficult and easy.
      • Are there demonstrations of collisions? Are IL agents supposed to imitate that? How can they learn to recover from near-accident situations?
      • "The difficult setting is described above where the ado car acts carelessly or aggressively, and is likely to collide with the ego car."

      • These demonstrations are used to learn the aggressive and timid primitive policies.
    • Trained with COiLTRAiNE.
    • Number of episodes collected per mode, for imitation:
      • CARLO: 80,000 (computationally lighter).
      • CARLA: 100 (it includes perception data).
  • About the high-level RL task: POMDP formulation.

    • reward
      • "It is now much easier to define a reward function because the ego car already satisfies some properties by following the policies learned from expert demonstrations. We do not need to worry about jerk [if ts large], because the experts naturally give low-jerk demonstrations."

      • 1- Efficiency term: Re is negative in every time step, so that the agent will try to reach its destination as quickly as possible.
        • Ok, but how can it scale to continuous (non-episodic) scenarios, such as real-world driving?
      • 2- Safety term: Rs gets an extremely large negative value if a collision occurs.
    • training environment: CARLO.
    • No detail about the transition model.
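    • A minimal sketch of a reward with that shape; the per-step penalty and the collision constant are placeholder values, not the paper's:

```python
STEP_PENALTY = -0.1        # efficiency term Re: negative at every step -> finish quickly
COLLISION_PENALTY = -100.0 # safety term Rs: extremely large negative value on collision

def high_level_reward(collided: bool) -> float:
    return COLLISION_PENALTY if collided else STEP_PENALTY
```
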
  • About observation spaces, for both tasks:

    • In CARLO: positions and speeds of the ego car and the ado car, if not occluded, perturbed with Gaussian noise.
    • In CARLA: Same but with front-view image.
      • How to process the image?
        • Generate a binary image using an object detection model.
        • Only the bounding boxes are coloured white. This conveys the information about the ado car more clearly and alleviates environmental noise.
    • How can the agents be trained if the state space varies?
    • Are frames stacked, as represented on the figure? Yes, one of the authors told me 5 are used.
  • About CARLO simulator to train faster.

    • CARLO stands for CARLA - Low Budget. It is less realistic but computationally much lighter than CARLA.
    • "While CARLO does not provide realistic visualizations other than two-dimensional diagrams, it is useful for developing control models and collecting large amounts of data. Therefore, we use CARLO as a simpler environment where we assume perception is handled, and so we can directly use the noisy measurements of other vehicles’ speeds and positions (if not occluded) in addition to the state of the ego vehicle."

  • Some concerns:

    • What about the car dynamics in CARLO?
      • [same action space] "For both CARLO and CARLA, the control inputs for the vehicles are throttle/brake and steering."

      • CARLO assumes point-mass dynamics models, while the physics engine of CARLA is much more complex, with non-linear dynamics!
      • First, I thought agents were trained in CARLO and tested in CARLA. But the transfer is not possible because of the mismatch in dynamics and state spaces.
        • But apparently training and testing are performed individually and separately in both simulators. Ok, but only one set of results is presented. I am confused.
    • Distribution shift and overfitting.
      • Evaluation is performed on scenarios used during training.
      • Can the system address situations it has not been trained on?
    • Safety!

"Safe Reinforcement Learning for Autonomous Lane Changing Using Set-Based Prediction"

  • [ 2020 ] [📝] [ 🎓 TU Munich ]

  • [ risk estimation, reachability analysis, action-masking ]

Click to expand
Source.
action masking for safety verification using set-based prediction and sampling-based trajectory planning. Top-left: A braking trajectory with maximum deceleration is appended to the sampled trajectory. The ego vehicle never follows this braking trajectory, but it is utilized to check if the vehicle is in an invariably safe state at the end of its driving trajectory. Source.

Authors: Krasowski, H., Wang, X., & Althoff, M.

  • Previous work: "High-level Decision Making for Safe and Reasonable Autonomous Lane Changing using Reinforcement Learning", (Mirchevska, Pek, Werling, Althoff, & Boedecker, 2018).

  • Motivations:

    • Let a safety layer guide the exploration process to forbid (mask) high-level actions that might result in a collision and to speed up training.
  • Why is it called "set-based prediction"?

    • Using reachability analysis, the set of future occupancies of each surrounding traffic participant and the ego car is computed.
    • "If both occupancy sets do not intersect for all consecutive time intervals within a predefined time horizon and if the ego vehicle reaches an invariably safe set, a collision is impossible."

    • Two-step prediction:
      • 1- The occupancies of the surrounding traffic participants are obtained by using TUM's tool SPOT = "Set-Based Prediction Of Traffic Participants".
        • [action space] "High-level actions for lane-changing decisions: changing to the left lane, changing to the right lane, continuing in the current lane, and staying in the current lane by activating a safe adaptive cruise control (ACC)."

        • SPOT considers the physical limits of surrounding traffic participants and constraints implied by traffic rules.
      • 2- The precise movement is obtained by a sampling-based trajectory planner.
  • Other approaches:

    • "One approach is to execute the planned trajectories if they do not collide with a traffic participant according to its prediction. The limitation is that collisions still happen if other traffic participants’ behaviour deviates from their prediction."

    • Reachability analysis verifies the safety of planned trajectories by computing all possible future motions of obstacles and checking whether they intersect with the occupancy of the ego vehicle.
      • "Since computing the exact reachable sets of nonlinear systems is impossible, reachable sets are over-approximated to ensure safety."

  • What if, after masking, all actions are verified as unsafe? Can safety be guaranteed?

    • "To guarantee safety, we added a verified fail-safe planner, which holds available a safe action that is activated when the agent fails to identify a safe action." [not clear to me]

    • Apparently, the lane is kept and an ACC module is used.
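    • A rough sketch of the masking step with the fail-safe fallback, as I understand it; is_verified_safe stands for the whole set-based occupancy check and failsafe_action for the keep-lane + ACC behaviour:

```python
def select_safe_action(q_values, actions, is_verified_safe, failsafe_action):
    """Mask actions not verified as safe; fall back to keep-lane + ACC if none remains."""
    safe = [a for a in actions if is_verified_safe(a)]   # set-based occupancy check per action
    if not safe:
        return failsafe_action                           # verified fail-safe planner takes over
    return max(safe, key=lambda a: q_values[a])          # greedy among the safe actions
```
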
  • MDP formulation: Episodic task.

    • "We terminate an episode if the time horizon of the current traffic scenario is reached, the goal area is reached, or the ego vehicle collides with another vehicle."

    • Furthermore, the distance to goal is contained inside the state.
    • Not clear to me: How can this work in long highway journeys? By setting successive goals? How can state values be consistent? Should not the task be continuous rather than episodic?
  • About safe RL.

    • Safe RL approaches can be divided into those that:
      • 1- Modify the optimization criterion.
      • 2- Modify the exploration process.
    • "By modifying the optimality objective, agents behave more cautious than those trained without a risk measure included in the objective; however, the absence of unsafe behaviors cannot be proven. In contrast, by verifying the safety of the action and excluding possible unsafe actions, we can ensure that the exploration process is safe."

  • Using real-world highway dataset (highD).

    • "We generated tasks by removing a vehicle from the recorded data and using its start and the final state as the initial state and the center of the goal region, respectively."

    • [Evaluation] "We have to differentiate between collisions for which the ego vehicle is responsible and collisions that occur because no interaction between traffic participants was considered due to prerecorded data."

  • Limitations.

    • 1- Here, safe actions are determined by set-based prediction, which considers all possible motions of traffic participants.
      • "Due to the computational overhead for determining safe actions, the computation time for training safe agents is 16 times higher than for the non-safe agents. The average training step for safe agents takes 5.46s and 0.112s for non-safe agents."

      • "This significant increase in the training time is mainly because instead of one trajectory for the selected action, all possible trajectories are generated and compared to the predicted occupancies of traffic participants."

    • 2- Distribution shift.

      "The agents trained in safe mode did not experience dangerous situations with high penalties during training and cannot solve them in the non-safe test setting. Thus, the safety layer is necessary during deployment to ensure safety."

    • 3- The interaction between traffic participants is essential.

      "Although the proposed approach guarantees safety in all scenarios, the agent drives more cautiously than normal drivers, especially in dense traffic."


"Safe Reinforcement Learning with Mixture Density Network: A Case Study in Autonomous Highway Driving"

  • [ 2020 ] [📝] [ 🎓 West Virginia University ]

  • [ risk estimation, collision buffer, multi-modal prediction ]

Click to expand
Source.
Multimodal future trajectory predictions are incorporated into the learning phase of RL algorithm as a model lookahead: If one of the future states of one of the possible trajectories leads to a collision, then a penalty will be assigned to the reward function to prevent collision and to reinforce to remember unsafe states. Otherwise, the reward term penalizes deviations from the desired speed, lane position, and safe longitudinal distance to the lead traffic vehicle. Source.

Author: Baheri, A.

  • Previous work: "Deep Q-Learning with Dynamically-Learned Safety Module: A Case Study in Autonomous Driving" (Baheri et al., 2019), detailed on this page too.

  • Motivations:

    • Same ideas as the previous work.
    • 1- Speed up the learning phase.
    • 2- Reduce collisions (no safety guarantees though).
  • Ingredients:

    • "Safety" is improved via reward shaping, i.e. modification of the optimization criterion, instead of constraining exploration (e.g. action masking, action shielding).
    • Two ways to classify a state as risky:
      • 1- Heuristic (rule-based). From a minimum relative gap to a traffic vehicle based on its relative velocity.
      • 2- Prediction (learning-based). Model lookaheads (prediction / rollouts) are performed to assess the risk of a given state.
      • [Why learnt?] "Because heuristic safety rules are susceptible to deal with unexpected behaviors particularly in a highly changing environment". [Well, here the scenarios are generated in a simulator, that is ... heuristic-based]

    • Contrary to action masking, "bad behaviours" are not discarded: they are stored in a collision-buffer, which is sampled during the update phase.
  • How to predict a set of possible trajectories?

    • Mixture density RNN (MD-RNN).
    • It has been offline trained (supervised learning).
    • "The central idea of a MDN is to predict an entire probability distribution of the output(s) instead of generating a single prediction. The MD-RNN outputs a GMM for multimodal future trajectory predictions that each mixture component describes a certain driving behavior."


"Reinforcement Learning with Uncertainty Estimation for Tactical Decision-Making in Intersections"

  • [ 2020 ] [📝] [ 🎓 Chalmers University ] [ 🚗 Volvo, Zenuity ]

  • [ uncertainty-aware, continuous episodes ]

Click to expand
Source.
The confidence of the recommended actions in a DQN is estimated using an ensemble method. Situations that are outside of the training distribution and events rarely seen during training can be detected. Middle: The reward associated with non-terminated states is a function of the jerk (comfort), scaled so that its accumulation until timeout reaches -1 if no jerk is applied. Source.

Authors: Hoel, C.-J., Tram, T., & Sjöberg, J.

  • Previous work: "Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation" (Hoel, Wolff, & Laine, 2020), detailed on this page too.

  • Motivation:

    • Same idea as the previous work (ensemble RPF method used to estimate the confidence of the recommended actions) but addressing intersections instead of highways scenarios.
  • Similar conclusions:

    • Uncertainty for situations that are outside of the training distribution can be detected, e.g. with the approaching vehicle driving faster than during training.
    • "Importantly, the method also indicates high uncertainty for rare events within the training distribution."

  • Miscellaneous:

  • Learning interactions:

    • A 1d-CNN structure is used to process the input describing the surrounding (and interchangeable) vehicles.
    • "Applying the same weights to the input that describes the surrounding vehicles results in a better performance."

  • How to learn continuous tasks while being trained on episodic ones (with timeout)?

    • "If the current policy of the agent decides to stop the ego vehicle, an episode could continue forever. Therefore, a timeout time is set to τmax = 20 s, at which the episode terminates. The last experience of such an episode is not added to the replay memory. This trick prevents the agent to learn that an episode can end due to a timeout, and makes it seem like an episode can continue forever, which is important, since the terminating state due to the time limit is not part of the MDP."

    • "The state space, described above, did not provide any information on where in an episode the agent was at a given time step, e.g. if it was in the beginning or close to the end. The reason for this choice was that the goal was to train an agent that performed well in highway driving of infinite length. Therefore, the longitudinal position was irrelevant. However, at the end of a successful episode, the future discounted return was 0. To avoid that the agent learned this, the last experience was not stored in the experience replay memory. Thereby, the agent was tricked to believe that the episode continued forever. [(C. Hoel, Wolff, & Laine, 2018)]"


"Development of A Stochastic Traffic Environment with Generative Time-Series Models for Improving Generalization Capabilities of Autonomous Driving Agents"

  • [ 2020 ] [📝] [ 🎓 Istanbul Technical University ]

  • [ generalisation, driver model ]

Click to expand
Source.
The trajectory generator uses the Social-GAN architecture with a 0.8s horizon. Bottom right: state vector of the agent trained with Rainbow-DQN, and the reward function: hard crash refers to direct collisions with other vehicles whereas soft crash represents dangerous approaches (no precise detail). Source.

Authors: Ozturk, A., Gunel, M. B., Dal, M., Yavas, U., & Ure, N. K.

  • Previous work: "Automated Lane Change Decision Making using Deep Reinforcement Learning in Dynamic and Uncertain Highway Environment" (Alizadeh et al., 2019)
  • Motivation:
    • Increase generalization capabilities of a RL agent by training it in a non-deterministic and data-driven traffic simulator.
      • "Most existing work assume that surrounding vehicles employ rule-based decision-making algorithms such as MOBIL and IDM. Hence the traffic surrounding the ego vehicle always follow smooth and meaningful trajectories, which does not reflect the real-world traffic where surrounding vehicles driven by humans mostly execute erroneous manoeuvres and hesitate during lane changes."

      • "In this work, we develop a data driven traffic simulator by training a generative adversarial network (GAN) on real life trajectory data. The simulator generates randomized trajectories that resembles real life traffic interactions between vehicles, which enables training the RL agent on much richer and realistic scenarios."

  • About GAN for trajectory generations:
    • "The generator takes the real vehicle trajectories and generate new trajectories. The discriminator classifies the generated trajectories as real or fake."

    • Two ways to adapt GAN architecture to time-series data:
      • 1- Convert the time series data into a 2D array and then perform convolution on it.
      • 2- [done here] Develop a sequence to sequence encoder and decoder LSTM network.
    • NGSIM seems to have been used for training. Not detailed.
  • To generate interaction-aware trajectories:
    • Based on Social-GAN.
      • "A social pooling is introduced where a LSTM Encoder encodes all the vehicles' position in a relative manner to the rest of the vehicles then a max pooling operation is performed at the hidden states of the encoder; arriving at a socially aware module."

    • Alternatives could be to include graph-based or convolution-based models to extract interaction models.
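    • A very condensed PyTorch sketch of the social-pooling idea quoted above (per-agent LSTM encoding, max-pooled over agents); it ignores the GAN training loop and the relative-coordinate handling:

```python
import torch
import torch.nn as nn

class SocialEncoder(nn.Module):
    """Encode each agent's past trajectory with a shared LSTM, then max-pool over agents."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)

    def forward(self, trajs):                  # trajs: (n_agents, T, 2) past (x, y) positions
        _, (h, _) = self.lstm(trajs)           # h: (1, n_agents, hidden)
        per_agent = h.squeeze(0)               # (n_agents, hidden)
        social = per_agent.max(dim=0).values   # max pooling -> one socially-aware vector
        return per_agent, social

enc = SocialEncoder()
per_agent, social = enc(torch.zeros(3, 8, 2))  # 3 agents, 8 past time steps each
```
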

"From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning"

  • [ 2020 ] [📝] [🎞️] [ 🎓 University of Parma ] [ 🚗 VisLab ]

  • [ sim2real, noise injection, train-validation-test, D-A3C, conditional RL, action repeat ]

Click to expand
Source.
Environments for training (top) differ from those used for validation and testing (bottom): Multi-environment System consists in four training roundabouts in which vehicles are trained simultaneously, and a validation environment used to select the best network parameters based on the results obtained on such scenario. The generalization performance is eventually measured on a separate test environment. Source.
Source.
A sort of conditional-learning methods: an aggressiveness level is set as an input in the state and considered during the reward computation. More precisely: α = (1−aggressiveness). During the training phase, aggressiveness assumes a random value from 0 to 1 kept fixed for the whole episode. Higher values of aggressiveness should encourage the actor to increase the impatience; consequently, dangerous actions will be less penalized. The authors note that values of aggressiveness sampled outside the training interval [0, 1] produce consistent consequences to the agent conduct, intensifying its behaviour even further. The non-visual channel can also be used to specify the desired cruising speed. Source1 Source2.
Source.
The active agent decides high-level actions, i.e. when to enter the roundabout. It should interact with passive agents, that have been collectively trained in a multi-agent fashion and decide lower-level speed/acceleration actions. Source.
Source.
Source.

Authors: Capasso, A. P., Bacchiani, G., & Broggi, A.

  • Motivations:
    • 1- Increase robustness. In particular generalize to unseen scenarios.
      • This relates to the problem of overfitting for RL agents.
      • "Techniques like random starts and sticky actions are often un-useful to avoid overfitting."

    • 2- Reduce the gap between synthetic and real data.
      • Ambition is to deploy a system in the real world even if it was fully trained in simulation.
    • Ingredients:
      • 1- Multiple training environments.
      • 2- Use of separated validation environment to select the best hyper-parameters.
      • 3- Noise injection to increase robustness.
  • Previous works:
    • 1 Intelligent Roundabout Insertion using Deep Reinforcement Learning [🎞️] about the need for a learning-based approach.
      • "The results show that a rule-based (TTC-based) approach could lead to long waits since its lack of negotiation and interaction capabilities brings the agent to perform the entry only when the roundabout is completely free".

    • 2 Microscopic Traffic Simulation by Cooperative Multi-agent Deep Reinforcement Learning about cooperative multi-agent:
      • "Agents are collectively trained in a multi-agent fashion so that cooperative behaviors can emerge, gradually inducing agents to learn to interact with each other".

      • As opposed to rule-based behaviours for passive actors hard coded in the simulator (SUMO, CARLA).
      • "We think that this multi-agent learning setting is captivating for many applications that require a simulation environment with intelligent agents, because it learns the joint interactive behavior eliminating the need for hand-designed behavioral rules."

  • Two types of agent, action, and reward.
    • 1- Active agents learn to enter a roundabout.
      • The length of the episode does not change since it ends once the insertion in the roundabout is completed.
      • High-level action in {Permitted, Not Permitted, Caution}
    • 2- Passive agents learn to drive in the roundabout.
      • Low-level action in {accelerate (1 m/s2), keep the same speed, brake (−2 m/s2)}.
      • The length of the episode changes!
        • "We multiply the reward by a normalization factor, that is the ratio between the path length in which the rewards are computed, and the longest path measured among all the training scenarios"

    • They interact with each other.
      • The density of passive agents on the roundabout can be set in {low, medium, high}.
    • The abstraction levels differ: What are the timesteps used? What about action holding?
  • About the hybrid state:
    • Visual channel:
      • 4 images 84x84, corresponding to a square area of 50x50 meters around the agent:
        • Obstacles.
        • Ego-Path to follow.
        • Navigable space.
        • Stop line: the position where the agent should stop if the entry cannot be made safely.
    • Non-visual channel:
      • Measurements:
        • Agent speed.
        • Last action performed by the vehicle.
      • Goal specification:
        • "We coupled the visual input with some parameters whose values influence the agent policy, inducing different and tuneable behaviors: used to define the goal for the agent."

        • Target speed.
        • Aggressiveness in the manoeuvre execution:
          • A parameter fed to the net, which is taken into account in the reward computation.
          • In previous works, the authors tuned this aggressiveness level based on the elapsed-time ratio (time and distance left).
        • I see that as a conditional-learning technique: one can set the level of aggressiveness of the agent during testing.
    • In previous works, frames were stacked, so that speed can be inferred.
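    • A toy sketch of this conditioning, assuming the aggressiveness value is simply appended to the non-visual input and scales the danger penalty via α = 1 − aggressiveness; the exact reward shape below is mine, not the paper's:

```python
import numpy as np

def non_visual_input(speed, last_action, target_speed, aggressiveness):
    # Goal parameters (target speed, aggressiveness) are fed to the net with the measurements.
    return np.array([speed, last_action, target_speed, aggressiveness], dtype=np.float32)

def shaped_reward(progress_reward, danger_penalty, aggressiveness):
    alpha = 1.0 - aggressiveness   # alpha = (1 - aggressiveness), as in the paper
    # Illustrative only: with higher aggressiveness, dangerous actions are penalised less.
    return progress_reward - alpha * danger_penalty
```
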
  • About robustness via noise injection:
    • Perception noise: position, size and heading as well as errors in the detection of passive vehicles (probably false positive and false negative).
    • Localisation noise: generate a new path of the active vehicle perturbing the original one using Cubic Bézier curves.
  • Multi-env and train-validate-test:
    • "Multi-environment System consists in four training roundabouts in which vehicles are trained simultaneously, and a validation environment used to select the best network parameters based on the results obtained on such scenario."

    • The generalization performance is eventually measured on a further test environment, which does not have any active role during the training phase.
  • About the simulator.
    • Synthetic representations of real scenarios are built with the Cairo graphic library.
  • About the RL algorithm.
    • Asynchronous Advantage Actor Critic (A3C):
      • Reducing the correlation between experiences, by collecting experiences from agents running in parallel environment instances. An alternative to the commonly used experience replay.
      • This parallelization also increases the time and memory efficiency.
    • Delayed A3C (D-A3C):
      • Keep parallelism: different environment instances.
      • Make interaction possible: several agents in each environment, so that they can sense each other.
      • Reduce the synchronization burden of A3C:
      • "In D-A3C, the system collects the contributions of each asynchronous learner during the whole episode, sending the updates to the global network only at the end of the episode, while in the classic A3C this exchange is performed at fixed time intervals".

  • About frame-skipping technique and action repeat (previous work).
    • "We tested both with and without action repeat of 4, that is repeating the last action for the following 4 frames (repeated actions are not used for computing the updates). It has been proved that in some cases action repeat improves learning by increasing the capability to learn associations between temporally distant (state, action) pairs, giving to actions more time to affect the state."

    • However, the use of repeated actions brings a drawback, that is to diminish the rate at which agents interact with the environment.
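    • A minimal gym-style wrapper sketch of action repeat with 4 repetitions; whether repeated steps contribute to the updates is handled outside this wrapper:

```python
class ActionRepeat:
    """Repeat each selected action for `repeat` environment steps, accumulating the reward."""
    def __init__(self, env, repeat=4):
        self.env, self.repeat = env, repeat

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)   # same action, 4 frames
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```
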

"Delay-Aware Multi-Agent Reinforcement Learning"

  • [ 2020 ] [📝] [:octocat:] [ 🎓 Carnegie Mellon ]

  • [ delay aware MDP, CARLA ]

Click to expand
Source.
Real cars exhibit action delay. Standard delay-free MDPs can be augmented to Delay-Aware MDP (DA-MDP) by enriching the observation vector with the sequence of previous actions. Source1 Source2.
Source.
Combining multi-agent and delay-awareness. In the left-turn scenario, agents decide the longitudinal acceleration based on the observed positions and velocities of the other vehicles. They are positively rewarded if all of them successfully finish the left turn and penalized if any collision happens. Another cooperative task is tested: driving out of the parking lot. Source.

Authors: Chen, B., Xu, M., Liu, Z., Li, L., & Zhao, D.

  • Motivations:
    • 1- Deal with observation delays and action delays in the environment-agent interactions of MDPs.
      • It is hard to transfer a policy learnt with standard delay-free MDPs to real-world cars because of actuator delays.
        • "Most DRL algorithms are evaluated in turn-based simulators like Gym and MuJoCo, where the observation, action selection and actuation of the agent are assumed to be instantaneous."

      • Ignoring the delay of agents violates the Markov property and results in POMDPs, with historical actions as hidden states.
        • To retrieve the Markov property, a Delay-Aware MDP (DA-MDP) formulation is proposed here.
    • 2- One option would be to go model-based, i.e. learning the transition dynamic model. Here the authors prefer to stay model-free.
      • "The control community has proposed several methods to address the delay problem, such as using Smith predictor [24], [25], Artstein reduction [26], [27], finite spectrum assignment [28], [29], and H∞ controller."

    • 3- Apply to multi-agent problems, using the Markov game (MG) formulation.
      • "Markov game is a multi-agent extension of MDP with partially observable environments.

      • Application: vehicles trying to cooperate at an unsignalized intersection.
  • About delays:
    • There exist two kinds (observation delays and action delays):
      • "For simplicity, we will focus on the action delay in this paper, and the algorithm and conclusions should be able to generalize to systems with observation delays."

    • "The delay of a vehicle mainly includes:"

      • Actuator delay.
        • "The actuator delay for vehicle powertrain system and hydraulic brake system is usually between 0.3 and 0.6 seconds."

      • Sensor delay.
        • "The delay of sensors (cameras, LIDARs, radars, GPS, etc) is usually between 0.1 and 0.3 seconds."

      • Time for decision making.
      • Communication delay, in V2V.
  • "Consider a velocity of 10 m/s, the 0.8 seconds delay could cause a position error of 8 m, which injects huge uncertainty and bias to the state-understanding of the agents."

  • Main idea: observation augmentation with action sequence buffer to retrieve the Markov property.
    • The authors show that a Markov game (MG) with multi-step action delays can be converted to a regular MG by state augmentation (delay aware MG = DA-MG).
      • Proof by comparing their corresponding Markov Reward Processes (MRPs).
      • Consequence: instead of solving MGs with delays, we can alternatively solve the corresponding DA-MGs directly with DRL.
    • The input of the policy now consists of:
      • The current information of the environment. E.g. speeds and positions.
      • The planned action sequence of length k that will be executed in the next k steps.
    • The agents interact with the environment not directly but through an action buffer.
      • The state vector is augmented with the action sequence being executed in the next k steps, where k ∈ ℕ is the delay duration.
      • The dimension of the state space increases, since X becomes S × A^k.
      • a(t) is the action taken at time t but executed at time t + k due to the k-step action delay.
        • Difference between select an action (done by the agent) and execute an action (done by the environment).
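    • A minimal sketch of this augmentation: the agent observes the environment state concatenated with the buffer of the k actions already selected but not yet executed (names are mine, not from the released code):

```python
from collections import deque
import numpy as np

class DelayAwareWrapper:
    """Augment observations with the k-step buffer of actions not yet executed (DA-MDP style)."""
    def __init__(self, env, k, default_action=0.0):
        self.env, self.k = env, k
        self.buffer = deque([default_action] * k, maxlen=k)   # selected but not yet executed

    def _augment(self, obs):
        return np.concatenate([np.ravel(obs), np.ravel(list(self.buffer))])

    def reset(self):
        return self._augment(self.env.reset())

    def step(self, action):
        executed = self.buffer.popleft()     # action selected k steps ago is executed now
        self.buffer.append(action)           # the new action will only act k steps later
        obs, reward, done, info = self.env.step(executed)
        return self._augment(obs), reward, done, info
```
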
  • About delay sensitivity:
    • Trade-off between delay-unawareness and complexity for the learning algorithm.
    • "When the delay is small (here less than 0.2s), the effect of expanding state-space on training is more severe than the model error introduced by delay-unawareness".

  • Papers about action delay in MDPs:
  • About multi-agent RL:
    • "The simplest way is to directly train each agent with single-agent RL algorithms. However, this approach will introduce the non-stationary issue."

    • Centralized training and decentralized execution.
      • Centralized Q-functions: The centralized critic conditions on global information (global state representation and the actions of all agents).
        • "The non-stationary problem is alleviated by centralized training since the transition distribution of the environment is stationary when knowing all agent actions."

      • Decentralized policy for each agent: The decentralized actor conditions only on private observation to avoid the need for a centralized controller during execution.

"Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems"

  • [ 2020 ] [📝] [ 🎓 UC Berkeley ]

  • [ offline RL ]

Click to expand
Source.
Offline reinforcement learning algorithms are reinforcement learning algorithms that utilize previously collected data, without additional online data collection. The term ''fully off-policy'' is sometimes used to indicate that no additional online data collection is performed. Source.

Authors: Levine, S., Kumar, A., Tucker, G., & Fu, J.

Note: This theoretical tutorial could also have been part of sections on model-based RL and imitation learning.

  • Motivations:
    • Make RL data-driven, i.e. forget about exploration and interactions and utilize only previously collected offline data.
      • "Fully offline RL framework is enormous: in the same way that supervised machine learning methods have enabled data to be turned into generalizable and powerful pattern recognizers (e.g., image classifiers, speech recognition engines, etc.), offline RL methods equipped with powerful function approximation may enable data to be turned into generalizable and powerful decision making engines, effectively allowing anyone with a large enough dataset to turn this dataset into a policy that can optimize a desired utility criterion."

    • Enable the use of RL for safety critical applications.
      • "In many settings, the online interaction is impractical, either because data collection is expensive (e.g., in robotics, educational agents, or healthcare) and dangerous (e.g., in autonomous driving, or healthcare)."

      • This could also remove the need for "simulation to real-world transfer" (sim-2-real), which is difficult:
      • "If it was possible to simply train policies with previously collected data, it would likely be unnecessary in many cases to manually design high-fidelity simulators for simulation-to-real-world transfer."

  • About the "data-driven" aspect.
    • Because the formulation resembles the standard supervised learning problem statement, techniques and lessons learnt from supervised learning could be considered.
    • "Much of the amazing practical progress that we have witnessed over the past decade has arguably been driven just as much by advances in datasets as by advances in methods. In real-world applications, collecting large, diverse, representative, and well-labeled datasets is often far more important than utilizing the most advanced methods." I agree!

    • Both RobotCar and BDD-100K are cited as large video datasets containing thousands of hours of real-life driving activity.
  • One major challenge for offline RL formulation:
    • "The fundamental challenge with making such counterfactual queries [given data that resulted from a given set of decisions, infer the consequence of a different set of decisions] is distributional shift: while our function approximator (policy, value function, or model) might be trained under one distribution, it will be evaluated on a different distribution, due both to the change in visited states for the new policy and, more subtly, by the act of maximizing the expected return."

    • This makes offline RL differ from Supervised learning methods which are designed around the assumption that the data is independent and identically distributed (i.i.d.).
  • About applications to AD:
    • offline RL is potentially a promising tool for enabling safe and effective learning in autonomous driving.
      • "Model-based RL methods that employ constraints to keep the agent close to the training data for the model, so as to avoid out-of-distribution inputs as discussed in Section 5, can effectively provide elements of imitation learning when training on driving demonstration data.

    • The work of (Rhinehart, McAllister & Levine, 2018) "Deep Imitative Models for Flexible Inference, Planning, and Control", is mentioned:
      • It tries to combine the benefits of imitation learning (IL) and goal-directed planning such as model-based RL (MBRL).
      • "Indeed, with the widespread availability of high-quality demonstration data, it is likely that effective methods for offline RL in the field of autonomous driving will, explicitly or implicitly, combine elements of imitation learning and RL.


"Deep Reinforcement Learning for Intelligent Transportation Systems: A Survey"

  • [ 2020 ] [📝] [ 🎓 University of South Florida ]

  • [ literature review ]

Click to expand
Source.
The review focuses on traffic signal control (TSC) use-cases. Some AD applications are nonetheless shortly mentioned. Source.

Authors: Haydari, A., & Yilmaz, Y.


"Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation"

  • [ 2020 ] [📝] [:octocat:] [🎞️] [ 🎓 Chalmers University ] [ 🚗 Volvo ]

  • [ ensemble, bayesian RL, SUMO ]

Click to expand
Source.
The main idea is to use an ensemble of neural networks with additional randomized prior functions (RPF) to estimate the uncertainty of decisions. Each member estimates Q(s, a) in a sum f + βp. Note that the prior p nets are initialized with random parameters θˆk that are kept fixed. Source.
Source.
Bottom: example of a situation outside of the training distribution. Before seeing the other vehicle, the policy chooses to maintain its current speed. As soon as the stopped vehicle is seen, the uncertainty cv becomes higher than the safety threshold. The agent then chooses the fallback action brake hard early enough and manages to avoid a collision. The baseline DQN agent also brakes when it approaches the stopped vehicle, but too late. Source.

Authors: Hoel, C.-J., Wolff, K., & Laine, L.

  • Motivations:

    • 1- Estimate an uncertainty for each (s, a) pair when computing the Q(s,a), i.e. express some confidence about the decision in Q-based algorithms.
    • 2- Use this metric together with some safety criterion to detect situations that are outside of the training distribution.
      • Simple DQN can cause collisions if the confidence of the agent is not considered:
      • "A fundamental problem with these methods is that no matter what situation the agents are facing, they will always output a decision, with no information on the uncertainty of the decision or if the agent has experienced anything similar during its training. If, for example, an agent that was trained for a one-way highway scenario would be deployed in a scenario with oncoming traffic, it would still output a decision, without any warning."

      • "The DQN algorithm returns a maximum-likelihood estimate of the Q-values. But gives no information about the uncertainty of the estimation. Therefore collisions occur in unknown situations".

    • 3- And also leverage this uncertainty estimation to better train and transfer to real-world.
      • "The uncertainty information could be used to guide the training to situations that the agent is currently not confident about, which could improve the sample efficiency and broaden the distribution of situations that the agent can handle."

      • If an agent is trained in a simulated environment and later deployed in real traffic, the uncertainty information could be used to detect situations that need to be added to the simulated world.

  • Previous works:

  • How to estimate uncertainty:

    • 1- Statistical bootstrapping (sampling).
      • The risk of an action is represented as the variance of the return (Q(s, a)) when taking that action. The variance is estimated using an ensemble of models:
      • "An ensemble of models is trained on different subsets of the available data and the distribution that is given by the ensemble is used to approximate the uncertainty".

        • Issue: "No mechanism for uncertainty that does not come from the observed data".
    • 2- Ensemble-RPF.
      • Same idea, but a randomized untrainable prior function (RPF) is added to each ensemble member.
        • "untrainable": The p nets are initialized with random parameters θˆk that are kept fixed.
      • This is inspired by the work of DeepMind: Randomized prior functions for deep reinforcement learning (Osband, Aslanides, & Cassirer, 2018).
      • Efficient parallelization is required since K nets need to be trained instead of one.
  • About the RPF:

    • Each of the K ensemble members estimates Q(s, a) in a sum f + βp, where β balances the importance of the prior function.
    • Training (generate + explore): One member is sampled. It is used to generate an experience (s, a, r, s') that is added to each ensemble buffer with probability p-add.
    • Training (evaluate + improve): A minibatch M of experiences is sampled from each ensemble buffer and the trainable network parameters of the corresponding ensemble member are updated.
  • About the "coefficient of variation" cv(s, a):

    • It is used to estimate the agent's uncertainty of taking different actions from a given state.
    • It represents the relative standard deviation, defined as the ratio of the standard deviation to the mean.
    • It indicates how far (s, a) is from the training distribution.
  • At inference (during testing):

    • actions with a level of uncertainty cv(s, a) that exceeds a pre-defined threshold are prohibited.
    • If no action fulfils the criteria, a fallback action a-safe is used.
    • Otherwise, the selection is done maximizing the mean Q-value.
    • The DQN-baseline, SUMO-baseline and proposed DQN-ensemble-RPF are tested on scenarios outside of the training distributions.
      • "The ensemble RPF method both indicates a high uncertainty and chooses safe actions, whereas the DQN agent causes collisions."

  • About the MDP:

    • state: relative positions and speeds.
    • action:
      • longitudinal: {-4, −1, 0, 1} m/s2.
      • lateral: {stay in lane, change left, change right}.
      • The fallback action a-safe is set to stay in lane and apply −4 m/s2
    • time:
      • Simulation timestep ∆t = 1s. Ok, they want high-level decision. Many things can happen within 1s though! How can it react properly?
      • A lane change takes 4s to complete. Once initiated, it cannot be aborted. I see that as a de-bouncing method.
    • reward:
      • v/v-max, in [0, 1], encouraging the agent to overtake slow vehicles.
      • -10 for collision / off-road (when changing lane when already on the side). done=True.
      • -10 if the behaviour of the ego vehicle causes another vehicle to emergency brake, or if the ego vehicle drives closer to another vehicle than a minimum time gap. done=False.
      • -1 for each lane change. To discourage unnecessary lane changes.
  • One quote about the Q-neural net:

    • "By applying CNN layers and a max pooling layer to the part of the input that describes the surrounding vehicles, the output of the network becomes independent of the ordering of the surrounding vehicles in the input vector, and the architecture allows a varying input vector size."

  • One quote about hierarchy in decision-making:

    • "The decision-making task of an autonomous vehicle is commonly divided into strategic, tactical, and operational decision-making, also called navigation, guidance and stabilization. In short, tactical decisions refer to high level, often discrete, decisions, such as when to change lanes on a highway."

  • One quote about the k-Markov approximation:

    • "Technically, the problem is a POMDP, since the ego vehicle cannot observe the internal state of the driver models of the surrounding vehicles. However, the POMDP can be approximated as an MDP with a k-Markov approximation, where the state consists of the last k observations."

    • Here the authors define full observability within 200 m.
  • Why is it called Bayesian RL?

    • Originally used in RL to create efficient exploration methods.
    • Working with an ensemble gives a probability distribution over the Q-function.
    • The prior is introduced, here p.
    • What is the posterior? The resulting f+βp.
    • How could the likelihood be interpreted?

"Risk-Aware High-level Decisions for Automated Driving at Occluded Intersections with Reinforcement Learning"

Click to expand
Source.
The state considers the topology and includes information about occlusions. A risk estimate is computed for each state using manually engineered rules: is it possible for the ego car to safely stop/leave the intersection? Source.
Source.
Centre-up: The main idea is to punish risky situations instead of only collision failures. Left: Other interesting tricks are detailed: To deal with a variable number of vehicles, to relax the Markov assumption, and to focus on the most important (closest) parts of the scene. Centre-down: Also, the intersection is described as a region (start and end), as opposed to just a single crossing point. Source.
Source.
Source.

Authors: Kamran, D., Lopez, C. F., Lauer, M., & Stiller, C.

  • Motivations:

    • 1- Scenario: deal with occluded areas.
    • 2- Behaviour: Look more human-like, especially for creeping, paying attention to the risk.
      • "This [RL with sparse rewards] policy usually drives fast at occluded intersections and suddenly stops instead of having creeping behavior similar to humans at risky situations."

    • 3- Policy: Find a trade-off between risky and over-cautious.
      • "Not as overcautious as the rule-based policy and also not as risky as the collision-based DQN approach"

  • reward: The main idea is to compromise between risk and utility.

    • Sparse reward functions ignoring risk have been applied in multiple previous works:
      • 1- Penalize collisions.
      • 2- Reward goal reaching.
      • 3- Give tiny penalties at each step to favour motion.
    • "We believe that such sparse reward can be improved by explicitly providing risk measurements for each (state, action) pair during training. The total reward value for the current state will then be calculated as the weighted average of risk and utility rewards."

    • utility reward: about the speed of ego vehicle.
    • risk reward: Assuming a worst-case scenario. Two safety conditions between the ego vehicle and each related vehicle are defined.
      • 1- Can the ego-car safely leave the intersection before the other vehicle can reach it?
      • 2- Can the ego-car safely stop to yield before the stop line?
      • They are not just binary. Rather continuous, based on time computations.
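    • A worst-case-style sketch of those two checks with continuous time/distance margins; the exact formulas in the paper differ, this only illustrates the idea with hypothetical inputs:

```python
def leave_margin(t_other_enter, t_ego_exit):
    """Check 1: can the ego car leave the conflict zone before the other vehicle reaches it?"""
    return t_other_enter - t_ego_exit              # positive margin -> crossing is feasible

def stop_margin(d_to_stop_line, v_ego, a_max_brake):
    """Check 2: can the ego car still stop and yield before the stop line?"""
    d_brake = v_ego ** 2 / (2.0 * a_max_brake)     # kinematic braking distance
    return d_to_stop_line - d_brake                # positive margin -> yielding is feasible

def risk_penalty(m_leave, m_stop, scale=1.0):
    # A state is considered safe if at least one of the two options stays feasible;
    # the penalty grows continuously as the better of the two margins drops below zero.
    best = max(m_leave, m_stop)
    return -scale * max(0.0, -best)
```
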
  • state.

    • A vector of measurements that can be used by a rule-based or a learning-based policy.
    • The map topology is strongly considered.
      • As opposed to grid-based representations which, in addition, require more complex nets.
    • About occlusion:
      • "For each intersecting lane L.i, the closest point to the conflict zone which is visible by perception sensors is identified and its distance along the lane to the conflict point will be represented as do.i. The maximum allowed velocity for each lane is also mapped and will be represented as vo.i."

    • About varying number of vehicles:
      • The 5 most-important interacting vehicles are considered, based on some distance metric.
    • About the discretization.
      • Distances are binned in a non-linear way. The resolution is higher in close regions: Using sqrt(x/d-max) instead of x/d-max.
    • About the temporal information:
      • 5 frames are stacked. With delta-t = 0.5s.
      • Probably to relax the Markov assumption. In particular, the "real" car dynamic suffers from delay (acceleration is not immediate).
  • action.

    • High-level actions define target speeds:
      • stop: full stop with maximum deceleration.
      • drive-fast: reach 5m/s.
      • drive-slow: reach 1m/s.
