personalrobotics / ada_feeding

Robot-assisted feeding demos and projects for the ADA robot

[ROS2] Don't lock in mouth pose at the beginning of `MoveToMouth`

amalnanavati opened this issue

The fact that the user's mouth pose is locked in at the beginning of MoveToMouth is non-ideal, since users are often looking down at their phone at that point. Instead, the robot arm should adjust to their mouth pose as it moves toward their mouth. Even if we don't do full visual servoing (i.e., tracking their mouth even when very close to it), we should at least track their mouth at the beginning of the motion.
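
As a rough illustration of the intended behavior, the sketch below queries TF for the latest mouth pose when the motion starts, instead of reusing a pose captured earlier. The frame names (`mouth`, `j2n6s200_link_base`) and the helper itself are assumptions for illustration, not the repo's actual API:

```python
# Minimal sketch of the intended behavior (assumed frame names; not the
# actual ada_feeding implementation). Rather than caching the mouth pose when
# MoveToMouth is triggered, query TF for the latest mouth frame right before
# (or repeatedly during) the approach, so face detection updates are honored.
from rclpy.duration import Duration
from rclpy.time import Time
from geometry_msgs.msg import PoseStamped
from tf2_ros import Buffer


def latest_mouth_target(
    tf_buffer: Buffer, base_frame: str = "j2n6s200_link_base"
) -> PoseStamped:
    # Look up the most recent mouth pose in the base frame. Because the mouth
    # TF frame is updated asynchronously (see #144), calling this at motion
    # start, or re-calling it during the approach, tracks the user's head.
    transform = tf_buffer.lookup_transform(
        base_frame,
        "mouth",  # frame broadcast by face detection (assumed name)
        Time(),  # latest available transform
        timeout=Duration(seconds=0.5),
    )
    target = PoseStamped()
    target.header = transform.header
    target.pose.position.x = transform.transform.translation.x
    target.pose.position.y = transform.transform.translation.y
    target.pose.position.z = transform.transform.translation.z
    target.pose.orientation = transform.transform.rotation
    return target
```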

#144 lays a foundation for this, since updating the mouth TF frame is now asynchronous relative to the MoveToMouth tree. Hence, one way to achieve the above capability is:

  1. In the web app, don't toggle off face detection when the app navigates away from R_DetectingFace. Instead, toggle it off in other parts of the state machine (e.g., U_Bite_Done, and RobotMotion states that are not MoveToMouth).
  2. Although the above should be enough, it will break in `--sim mock`, since dummy FaceDetection returns a hardcoded pose. Instead, dummy FaceDetection should first try a TF lookup to get the transform from the camera frame to the base frame, and if that fails, fall back to the hardcoded staging transform. It should then multiply that transform by a hardcoded head pose in the base frame (see the sketch after this list).
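
Here is a minimal sketch of what step 2 could look like. The frame names and fallback values are placeholders, and it uses the tf2 helpers as they exist in ROS 2 Humble; it is not the actual ada_feeding implementation:

```python
# Minimal sketch of step 2 (assumed frame names and placeholder values; not
# the actual ada_feeding code). Dummy FaceDetection computes the mouth pose
# in the camera frame from a hardcoded head pose in the base frame.
from rclpy.duration import Duration
from rclpy.time import Time
from geometry_msgs.msg import PoseStamped, TransformStamped
from tf2_ros import Buffer, TransformException
import tf2_geometry_msgs  # registers TF conversions for geometry_msgs types


def dummy_mouth_pose(tf_buffer: Buffer) -> PoseStamped:
    # Hardcoded head pose expressed in the robot base frame (placeholder values).
    head_in_base = PoseStamped()
    head_in_base.header.frame_id = "j2n6s200_link_base"  # assumed base frame name
    head_in_base.pose.position.x = 0.3
    head_in_base.pose.position.z = 0.6
    head_in_base.pose.orientation.w = 1.0

    camera_frame = "camera_color_optical_frame"  # assumed camera frame name
    try:
        # First, try to get the live camera<-base transform from TF.
        camera_from_base = tf_buffer.lookup_transform(
            camera_frame,
            head_in_base.header.frame_id,
            Time(),  # latest available transform
            timeout=Duration(seconds=0.5),
        )
    except TransformException:
        # Fall back to a hardcoded transform matching the staging configuration
        # (placeholder values).
        camera_from_base = TransformStamped()
        camera_from_base.header.frame_id = camera_frame
        camera_from_base.child_frame_id = head_in_base.header.frame_id
        camera_from_base.transform.translation.z = 1.0
        camera_from_base.transform.rotation.w = 1.0

    # Multiply the transform by the hardcoded head pose to express it in the
    # camera frame (the exact do_transform_pose signature varies across
    # distros; this is the ROS 2 Humble Pose-in/Pose-out form).
    mouth_in_camera = PoseStamped()
    mouth_in_camera.header.frame_id = camera_frame
    mouth_in_camera.pose = tf2_geometry_msgs.do_transform_pose(
        head_in_base.pose, camera_from_base
    )
    return mouth_in_camera
```

This keeps the mock consistent with the real pipeline: downstream code always receives a mouth pose expressed in the camera frame, whether it came from real face detection or from the dummy node.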