Schunk gripper times out and requires stop/ack cycle under new driver
gizatt opened this issue
Greg Izatt commented
If you leave the gripper sitting for a while (~5 minutes), it occasionally enters a FAST_STOP state and needs to be brought out of it with an ack. Can we detect and auto-remedy this? (It's currently easy to handle by hand using the Schunk configuration website hosted at the Schunk's IP address.)
Happens with the driver in PR #324.
Lucas Manuelli commented
I noticed this behavior as well. Seems to happen within 2 minutes pretty consistently.
Greg Izatt commented
Did you mean to close this?
Lucas Manuelli commented
Whoops, no
Greg Izatt commented
Investigated with @manuelli, lessons so far:
- The gripper driver itself (even with nothing communicating with it) times out and enters an error state after somewhere between a few seconds and a few minutes, regardless of the "reconnect_timeout_ms" param. (It defaults to 150 ms, but we saw the timeout trip even at 4000 ms, which is longer than any delay we'd expect on even a really busy network.)
- Re-connection happens automatically and succeeds, but puts the gripper in an error state. Lucas has a node that will detect that and acknowledge the error state to make the gripper controllable again that seems to work well.
- The Python Schunk driver is currently misusing the gripper_control actionlib interface and will be tidied up, but that was probably not the cause of this issue.
- There's a reasonable chance we could remedy this by switching the WSG-50 driver back to UDP (from TCP, which it's using now).
- I also just found a switch on the WSG-50 interface config webpage that stops it from entering an error state on TCP connection loss. With that set, I watched the gripper lose and regain its connection without Lucas' keep-alive node running, and it was still controllable afterwards. So that helps too! By the power of those two things combined, this issue is hopefully thoroughly worked around.
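Lucas' keep-alive node isn't shown in this thread, but the detect-and-ack loop it implements can be sketched roughly like the following. This is a minimal sketch: `read_state` and `send_ack` are hypothetical callables standing in for whatever transport the real driver exposes (the TCP command channel, a ROS service, etc.), not actual driver APIs.

```python
import time

FAST_STOP = "FAST_STOP"  # error state reported by the gripper
IDLE = "IDLE"            # any non-error state, for illustration

def run_watchdog(read_state, send_ack, poll_period_s=0.5, max_cycles=None):
    """Poll the gripper state and acknowledge FAST_STOP automatically.

    read_state() and send_ack() are hypothetical stand-ins for the real
    driver's query/command calls. Returns the number of acks sent
    (runs forever if max_cycles is None).
    """
    acks = 0
    cycle = 0
    while max_cycles is None or cycle < max_cycles:
        if read_state() == FAST_STOP:
            send_ack()
            acks += 1
        cycle += 1
        time.sleep(poll_period_s)
    return acks

# Simulated usage: the gripper trips FAST_STOP twice over five polls.
states = iter([IDLE, FAST_STOP, IDLE, FAST_STOP, IDLE])
acks_sent = []
n = run_watchdog(lambda: next(states), lambda: acks_sent.append(1),
                 poll_period_s=0, max_cycles=5)
print(n)  # -> 2
```

In the real node the poll period would be tuned well below the observed ~2-minute failure window, so the gripper is only ever briefly uncontrollable before the watchdog acks it.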
Lucas Manuelli commented
We seem to have solved this.