personalrobotics / ada_feeding

Robot-assisted feeding demos and projects for the ADA robot


[ROS2] Separate MoveToMouth into Two Actions

amalnanavati opened this issue · comments

Currently, the MoveToMouth tree moves to the staging location, waits to detect the mouth, and then moves to the user. The problem is that while the robot is detecting the face, the user cannot tell what is going on (the app just shows "thinking"), so they do not know what they should do to facilitate successful robot behavior (e.g., move their head, teleoperate the arm down, etc.).

Instead, we should split bite transfer into two separate action calls:

  • MoveToStagingConfiguration should move to the staging configuration.
  • Then, the app should subscribe to face detection and display the face detection stream to the user.
  • MoveToMouth should take in the results of face detection and move to the mouth.
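A minimal sketch of the proposed split, with hypothetical function and result names (the real action definitions and behavior-tree structure in ada_feeding will differ):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class FaceDetection:
    """Hypothetical face-detection result: detection timestamp plus mouth position."""
    stamp: float                      # seconds, from the message header
    mouth_xyz: Tuple[float, float, float]  # mouth position in the camera frame


def move_to_staging_configuration() -> bool:
    """First action: move the arm to the staging configuration only."""
    # ... command the arm; return success/failure ...
    return True


def move_to_mouth(face: FaceDetection) -> bool:
    """Second action: take a face-detection result and move to the mouth."""
    # ... plan and execute motion toward face.mouth_xyz ...
    return True


def bite_transfer(latest_face: Optional[FaceDetection]) -> str:
    """App-side flow: two separate action calls with face detection in between."""
    if not move_to_staging_configuration():
        return "staging_failed"
    # Between the two actions, the app subscribes to face detection and shows
    # the stream, so the user can see what the robot is waiting for.
    if latest_face is None:
        return "waiting_for_face"
    if not move_to_mouth(latest_face):
        return "move_to_mouth_failed"
    return "done"
```

The key change is that "waiting for a face" is now a state the app can surface directly, rather than being hidden inside one long-running action.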

For robustness, MoveToMouth should have fallbacks in case the detected face is stale, and it should transform the detection into the base frame at the timestamp in the message, in case the robot has moved since the detection was made.
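The staleness check and timestamp-aware frame conversion could look roughly like this; the threshold value is an assumption, and `lookup_transform` stands in for a tf2 buffer query at the message's timestamp (the real code would use geometry_msgs types and `tf2_ros`):

```python
from typing import Callable, Tuple

# Assumed cutoff for treating a detection as stale; tune for the real pipeline.
STALENESS_THRESHOLD_S = 0.5

Point = Tuple[float, float, float]


def is_stale(msg_stamp_s: float, now_s: float,
             threshold_s: float = STALENESS_THRESHOLD_S) -> bool:
    """A detection older than the threshold should trigger a fallback
    (e.g., re-detect or abort) rather than being used directly."""
    return (now_s - msg_stamp_s) > threshold_s


def to_base_frame(point_camera: Point,
                  lookup_transform: Callable[[], Point]) -> Point:
    """Convert a camera-frame point to the base frame using the transform
    *at the message's timestamp*, so arm motion since the detection does
    not skew the target. Translation-only stand-in for brevity; the real
    implementation would apply a full rigid transform from tf2."""
    tx, ty, tz = lookup_transform()
    x, y, z = point_camera
    return (x + tx, y + ty, z + tz)
```

Looking up the transform at the message's timestamp (rather than "latest") is what makes the result correct even if the arm moved between detection and execution.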