personalrobotics / aikido

Artificial Intelligence for Kinematics, Dynamics, and Optimization

Home Page: https://personalrobotics.github.io/aikido/

Caching a trajectory

gilwoolee opened this issue

We need to design a way to cache and load various types of trajectories so that we can avoid redundant planning/retiming.

For example, OpenRAVE supports trajectory serialization/deserialization, which we used in prpy; herbpy had cached "waving" trajectories that could be executed directly.

I think we should discuss which features to support or improve from the OpenRAVE API; a rough openravepy sketch of the existing workflow follows the list below.

  1. The planner fills the trajectory class with timestamps, interpolation methods, and waypoints. It should be up to the user to select the trajectory class that will best execute this data.
  2. Users can create, append, retime, and serialize trajectories both in C++ and Python.
  3. Store arbitrary animations of an environment. Support any configuration space, similar to PlannerParameters, so that multiple bodies and affine transformations are supported. For example, a robot opening a door requires the robot joints and the door to move together.
  4. Sample a trajectory at any time. Easily set the scene as the trajectory dictates. How would this affect robot controllers?
  5. It should be possible to play back the trajectory classes without loading any robots or any environment data. A real controller, on the robot side, can link with openrave-core and use the trajectory class to interpolate incoming trajectory data.
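
For reference, here is a rough sketch of how that create/retime/serialize/sample workflow looks in openravepy today. The robot model and joint values are placeholders, and exact call signatures should be double-checked against the OpenRAVE documentation:

```python
# Rough openravepy sketch of the workflow described above.
# Robot model and joint values are placeholders.
from openravepy import Environment, RaveCreateTrajectory, planningutils

env = Environment()
env.Load('robots/barrettwam.robot.xml')        # placeholder robot model
robot = env.GetRobots()[0]

# 1. Create a trajectory and fill it with waypoints in the robot's
#    active-DOF configuration space.
traj = RaveCreateTrajectory(env, '')
traj.Init(robot.GetActiveConfigurationSpecification('linear'))
traj.Insert(0, robot.GetActiveDOFValues())     # waypoint 0: current config
traj.Insert(1, [0.1] * robot.GetActiveDOF())   # waypoint 1: placeholder config

# 2. Retime it so every waypoint gets a timestamp.
planningutils.RetimeActiveDOFTrajectory(traj, robot)

# 3. Serialize to XML, e.g. to cache it on disk.
xml = traj.serialize(0)

# 4. Later: deserialize and sample at an arbitrary time.
cached = RaveCreateTrajectory(env, '')
cached.deserialize(xml)
values_at_t = cached.Sample(0.5)               # flat data per the config spec
```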

Here are potential discussion points:

  1. Does it make sense for us to give the user the flexibility to select the trajectory class?
  2. I think we should support the C++ version. Should the serialization be in YAML format?
  3. Maybe we can defer this for now, support only a single robot, and leave multi-body support as a future feature.
  4. I'm not sure what "easily set the scene as the trajectory dictates" means.
  5. This may be a neat feature. We should be able to send a cached, timed trajectory directly to a real robot through a ROS controller.

@siddhss5 @brianhou @aditya-vk @sniyaz Please join the discussion.

Is there any reason why we should not copy-exact the OpenRAVE format? Do we need to invent something new?

We could, but it supports features that we may not need or may want to rethink, which is why I raised the questions above. I'd like to make the first version as minimal as possible instead of supporting potentially unnecessary (or incorrect) things.

I do like the concept of (5): being able to send trajectories straight to ros_control. Combined with easy recording of teleop trajectories, this would allow us to quickly experiment with things.

I think one other question to ask as users is whether we want to assume any conventions about the serialized trajectories. It seems that rogue executes the serialized trajectories immediately, without any retiming/further postprocessing. Should we only serialize timed trajectories, or is there some value in having unprocessed paths? What about smoothed trajectories? (Maybe timed+unsmoothed would be easier to read/modify by hand, if we feel like that's a thing people do.)

  1. For AIKIDO, "trajectory class" currently refers to either Interpolated (perhaps with a non-GeodesicInterpolator) or a Spline, right? I think we're basically making this decision for the user by saying that it should always be a spline by the time it's sent to an executor, and I'm not convinced we have a need for anything else in the near future. That said, I guess it doesn't particularly hurt if we leave that as a field in whatever serialization format we decide on.

  2. I'm not especially familiar with OpenRAVE's format, but it seems to only include positions (not velocities). Is that sufficient?

    XML seems slightly verbose to deal with, although I'm sure a library would take care of that for us. I think we'd probably want to mock up what the format might look like if we were to use YAML, just to make it a fair comparison.

  3. I think if we do it "right", it might not be that hard to get arbitrary object animations for free. Maybe if we just store the DART joint/DOF names? I don't think this is a feature that would get used all that regularly, though.

  4. We already have "sampling a trajectory at any time". The rest of this bullet also makes no sense to me.

  5. Agree that this would be nifty! Seems like it shouldn't be that hard: all we'd have to do is convert the serialized format into a ROS trajectory message, and it should be all the same to the controller.
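
As a minimal sketch of that conversion (the `waypoints` structure here is a hypothetical placeholder for whatever our deserialized format ends up providing; only the trajectory_msgs types are standard ROS API):

```python
# Sketch: turn deserialized waypoints (deltatime + positions) into a
# trajectory_msgs/JointTrajectory that a ros_control joint trajectory
# controller could execute directly.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def to_joint_trajectory(joint_names, waypoints):
    """waypoints: list of dicts with 'deltatime' and 'positions' keys
    (a placeholder for our eventual deserialized representation)."""
    traj = JointTrajectory()
    traj.joint_names = joint_names

    elapsed = 0.0
    for wp in waypoints:
        elapsed += wp['deltatime']
        point = JointTrajectoryPoint()
        point.positions = list(wp['positions'])
        point.time_from_start = rospy.Duration(elapsed)
        traj.points.append(point)
    return traj
```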

Can we converge on the Minimum Viable Product (MVP) here, and assign a worker? It’s a choice that we can definitely iterate on (it’s a two-way door, rather than a one-way door) so it’s good to put something out there quickly :).

@brianhou To get to an MVP, how about this?

Just serialize timed trajectories, only supporting Spline, and keep the format as close to OpenRAVE's version as possible. Let's not worry about feature 5, and have the task be a minimal demo that loads the robot and performs a series of trajectories (e.g. waving) by directly executing a sequence of deserialized trajectories without retiming or collision checking. In terms of the file format, I'm more inclined to use YAML just because we already use it in AIKIDO. I agree we should keep its contents as close to OpenRAVE's XML as possible for a fair comparison.
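
To make that comparison concrete, here is a rough mock-up of what a single cached, timed trajectory could look like in YAML, mirroring the deltatime/positions layout of OpenRAVE's trajectory XML. All field names and values are hypothetical placeholders, not a decided format:

```yaml
# Hypothetical YAML sketch of a cached, timed trajectory.
# Field names are placeholders, not an agreed-upon AIKIDO format.
trajectory:
  type: Spline                           # trajectory class to reconstruct
  joints: [j1, j2, j3, j4, j5, j6, j7]   # DOF names, e.g. DART joint names
  interpolation: cubic                   # how segments are represented
  waypoints:
    - deltatime: 0.0                     # offset from the previous waypoint
      positions: [0.0, -1.57, 0.0, 2.0, 0.0, 0.6, 0.0]
    - deltatime: 0.5
      positions: [0.1, -1.50, 0.0, 2.0, 0.0, 0.6, 0.0]
    - deltatime: 0.5
      positions: [0.2, -1.40, 0.1, 1.9, 0.0, 0.6, 0.0]
```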

I'm not especially familiar with OpenRAVE's format, but it seems to only include positions (not velocities). Is that sufficient?

I think if we specify delta time, positions, and the interpolation method, it should be sufficient? e.g. herb's cached trajectory.

Sounds good to me!

I think if we specify delta time, positions, and the interpolation method, it should be sufficient? e.g. herb's cached trajectory.

Oops, I meant trajectories that have nonzero initial velocity. I don’t think it matters though!

Cool! As another example, here's what Robowflex uses as its trajectory representation. I know it's pretty verbose, but hey, it works :).

Resolved by #541.