appetizerio / replaykit

[DEPRECATED] Command line tools for recording, replaying and mirroring touchscreen events for Android

Replay is not precise: pixel difference exists

shoooe opened this issue · comments

This can be reproduced as follows:

  • Open Google Maps
  • Set some kind of reference on the map
  • Record yourself swiping/moving around the map and then returning to the reference
  • Go back to the reference (to restore the starting state)
  • Replay the actions

What you'll find is that the position at the end of the replay is not always the same. In other words, replaying doesn't always yield the same results.

I wonder if this is just an extension of this issue: http://stackoverflow.com/a/42632915/493122

Thanks for the report and the stackoverflow thread. I'll try a complicated gesture on Gmap and compare screenshots from multiple replays.

Is your program using low-level events with adb shell getevent and adb shell sendevent by any chance? I was about to try that to see if it was precise enough.

The input side uses adb shell getevent and the output pipes to https://github.com/openstf/minitouch. On most devices, two input events have an interval of 10ms~20ms, and I measured that processing an event takes roughly microseconds (not accounting for the socket-over-adb cost). Also, in some rare cases, I observe the app lagging just a bit, causing different replay outcomes.
Anyhow, I will repeat the replay a couple of times and compare the screenshots first.
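For reference, that inter-event interval can be measured from timestamped `adb shell getevent -lt` output. A minimal sketch (the sample lines below are made up for illustration, but follow the `getevent -lt` format, where the bracketed number is the kernel timestamp in seconds):

```python
import re

# Hypothetical sample in the format of `adb shell getevent -lt`.
SAMPLE = """\
[   1000.010000] /dev/input/event2: EV_ABS ABS_MT_POSITION_X 0000012c
[   1000.010000] /dev/input/event2: EV_SYN SYN_REPORT 00000000
[   1000.025000] /dev/input/event2: EV_ABS ABS_MT_POSITION_X 00000130
[   1000.025000] /dev/input/event2: EV_SYN SYN_REPORT 00000000
"""

def frame_intervals_ms(getevent_output):
    """Return the intervals (ms) between successive SYN_REPORT frames.

    Each SYN_REPORT marks the end of one input "frame", so the gaps
    between them are the inter-event intervals discussed above.
    """
    ts = [float(m.group(1))
          for m in re.finditer(r"\[\s*([\d.]+)\]\s+\S+:\s+EV_SYN SYN_REPORT",
                               getevent_output)]
    return [round((b - a) * 1000, 3) for a, b in zip(ts, ts[1:])]

print(frame_intervals_ms(SAMPLE))  # → [15.0]
```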

BTW, sendevent is not precise, because every time you send a point you need to create a sendevent process on the device side, which is way too costly. Minitouch has a native executable listening on a socket for incoming input events and directly injects whatever it receives into the system.
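By contrast, minitouch keeps one native process alive and accepts a simple text protocol over its socket (`d` down, `m` move, `u` up, `c` commit, `w` wait, per the openstf/minitouch README). A hedged sketch of turning recorded points into that protocol — the function name and default values are made up for illustration:

```python
def swipe_commands(points, delay_ms=15, contact=0, pressure=50):
    """Translate a list of (x, y) points into minitouch protocol lines.

    Protocol per the openstf/minitouch README: 'd' = touch down,
    'm' = move, 'u' = up, 'c' = commit a frame, 'w <ms>' = wait.
    """
    x, y = points[0]
    lines = [f"d {contact} {x} {y} {pressure}", "c"]
    for x, y in points[1:]:
        lines += [f"w {delay_ms}", f"m {contact} {x} {y} {pressure}", "c"]
    lines += [f"u {contact}", "c"]
    return "\n".join(lines)

# A three-point swipe becomes one long-lived stream of commands,
# with no per-point process creation.
print(swipe_commands([(100, 200), (110, 210), (120, 220)]))
```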

Hi, I have coded up a script to measure the "visual difference" between replays, inspired by your methodology. The gist is here. I will refine it and add it to this repo.
Basically it opens a painter app and draws a picture. The visual difference is computed with ImageMagick and highlighted in red.
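The actual script shells out to ImageMagick, but the core idea can be sketched in pure Python: count the fraction of pixels whose values differ beyond a threshold (the function and sample data below are illustrative, not the gist's code):

```python
def visual_diff(img_a, img_b, threshold=0):
    """Fraction of differing pixels between two same-size grayscale
    "images" (nested lists of 0-255 values), similar in spirit to
    an ImageMagick absolute-error pixel count."""
    assert len(img_a) == len(img_b) and len(img_a[0]) == len(img_b[0])
    total = len(img_a) * len(img_a[0])
    differing = sum(abs(a - b) > threshold
                    for row_a, row_b in zip(img_a, img_b)
                    for a, b in zip(row_a, row_b))
    return differing / total

a = [[0, 0], [255, 255]]
b = [[0, 10], [255, 255]]
print(visual_diff(a, b))  # → 0.25: one of four pixels differs
```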

I discovered a few issues and observations on our side:

  1. It seems that the last operation misses an UP event, which is a known and closed issue. I've reopened that issue to track this. AFAIK, it is a buffering problem. (aimed at the 1.0.5 release for now)
  2. I also observe a visual difference in the things I draw on the canvas. This could be caused by <x,y> imprecision or timing, or both. I will follow up by creating a canvas app to compare the MotionEvents it receives with the ones sent by our toolkit. If the problem is with <x,y>, it would be an easy bug to fix. Timing is more of a limitation, also related to your question on Stack Overflow; see below. (aimed at the 1.0.5 release for now)
  3. For timing, we've tried several backends, such as sendevent and MonkeyDevice/Chimp. As I mentioned, sendevent creates a process per point, which totally kills the timing. The MonkeyDevice agent on the device is just not as reliable as it should be; we made some initial attempts but later abandoned them. Check this if you are interested. The currently acceptable backend is openstf/minitouch, which is what the current toolkit uses. After calibration and some tuning, I believe the toolkit will get better. (aimed at the 1.0.5 release for now)
  4. I doubt that one can achieve truly "deterministic replays" with any of these input recording and replaying tools. From our experience, even if we could enforce perfect determinism for input events (x, y and timing), the app would still have other sources of non-determinism, notably lags, network activity and Canvas view responsiveness. I suspect the problem you encountered is a combination of MonkeyDevice imprecision and the Canvas view's behavior. Thus, for your higher-level design, treat these tools, as well as our toolkit, as "input automation tools" and allow some tolerance for error.
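As a concrete example of tolerating the error, a test harness could accept a replay whose final position drifts within a few pixels rather than demanding an exact match (the function name and threshold here are illustrative, not part of the toolkit):

```python
import math

def positions_match(expected, actual, tolerance_px=10):
    """Treat a replay as successful if the final on-screen position is
    within `tolerance_px` of the recorded one, instead of requiring
    pixel-perfect determinism."""
    return math.dist(expected, actual) <= tolerance_px

print(positions_match((500, 800), (503, 796)))  # small drift: accepted
print(positions_match((500, 800), (560, 800)))  # large drift: rejected
```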

I also posted part of this answer to Stack Overflow in case other users there are interested.