google / perfetto

Performance instrumentation and tracing for Android, Linux and Chrome (read-only mirror of https://android.googlesource.com/platform/external/perfetto/)

Home Page: https://www.perfetto.dev

Does any incremental reload mechanism exist?

ztlevi opened this issue · comments

Hi,

I was wondering whether any incremental reload mechanism exists. I'm aware of the live_reload approach, but that is meant for UI development rather than for detecting changes to the trace files.

A complete reload takes a long time when we open a large trace file.

Is there any incremental UI loading you can think of, e.g. for when we flush more data into the trace file or dump more data into the SQLite DB?

Thanks!
Ting

Perfetto has been designed from the beginning to support incremental streaming (whenever we've made design decisions, we've tried to stick to this principle), but there's no out-of-the-box support for it, mainly because we haven't needed it ourselves.

Realistically, there are a bunch of small things which are breaking this, but they should all be fixable. However, as we're strapped for time and this is not a priority for us, it's not something we're looking at actively. Patches in this area are welcome.

Got it. I would like to take a look at this feature. My sense is that most of the changes are needed on the UI side, e.g. a listener which receives emitted requests and performs a live reload or incremental load via additional SQL queries, while a file watcher or the trace_processor emits those requests when necessary.

Does that sound like a plan?

There is also some work to be done at the trace processor level, mainly around normalizing the notion of a Flush vs an EndOfFile - right now both are treated the same, whereas we should really distinguish between the two.

Also, you will need some code which feeds data to the trace processor incrementally; I don't think you can use the shell out of the box.

I thought about this some more. I think the trace processor is actually in better shape than I thought, because we already correctly action flushes.

What I suggest is ignoring the UI for now and doing the following:

  1. Collect the trace with write_into_file and flush_period_ms options set
  2. Start a trace_processor_shell instance with --httpd flag
  3. Write a program which incrementally reads from this file and then pushes the bytes to trace processor over http
  4. Using the Python API, query the trace processor to check that a query like select count(1) from slice continuously updates with new counts.
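For step 3, a minimal sketch of the feeder could look like the following. This is a hypothetical example, not code from Perfetto itself: the /parse endpoint and port 9001 are assumptions based on the HTTP RPC interface that trace_processor_shell --httpd exposes for the UI, so verify them against your build, and the trace file path is a placeholder.

```python
# Hypothetical sketch: tail the trace file produced with write_into_file
# and push each new chunk to a running trace_processor_shell --httpd.
import time
import urllib.request

TP_URL = "http://127.0.0.1:9001"  # assumed default --httpd address


def read_new_bytes(path: str, offset: int) -> tuple[bytes, int]:
    """Read everything past `offset` and return (chunk, new_offset)."""
    with open(path, "rb") as f:
        f.seek(offset)
        chunk = f.read()
    return chunk, offset + len(chunk)


def push_chunk(chunk: bytes) -> None:
    """POST raw trace bytes to the trace processor (assumed /parse endpoint)."""
    req = urllib.request.Request(TP_URL + "/parse", data=chunk, method="POST")
    urllib.request.urlopen(req).read()


if __name__ == "__main__":
    offset = 0
    while True:
        chunk, offset = read_new_bytes("/tmp/trace.perfetto-trace", offset)
        if chunk:
            push_chunk(chunk)
        time.sleep(1.0)  # roughly match flush_period_ms
```

For step 4, the perfetto Python package provides a TraceProcessor client that can attach to a running --httpd instance (the exact constructor argument varies between package versions), so you can repeatedly run select count(1) from slice and check whether the count keeps growing.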

If this works, only then would I consider looking at the UI. You will probably need to do as you say and write some code which causes a refetch of all the data; AFAIK there are a bunch of places where we assume things happen once on startup and won't change (e.g. the number of tracks, the layout of the tracks, etc.). You'll probably want to start by just updating the tracks which already exist before moving on to the more complex cases.

Closing as the question has been answered.