johncarl81 / parceler

:package: Android Parcelables made easy through code generation.

Home Page: http://parceler.org


ParcelAnnotationProcessor.process takes a large amount of time.

doniwinata0309 opened this issue

Hi, from async-profiler I can see ParcelAnnotationProcessor.process dominating my Android build and CPU time.

Screen Shot 2020-10-21 at 14 27 55

It seems to spend most of its time in FileOutputStream.write.
Screen Shot 2020-10-21 at 14 43 22

Do you have any clue what may be causing this? My project contains a lot of @Parcel-annotated classes, so I am not sure whether it is expected for that process to take this long.
Let me know if you need additional information. Thank you in advance.

I attached the flame chart below
flamechart.zip

This is interesting. Could you give me a ballpark of how many @Parcel-annotated classes you're working with?

Basically something like this: https://github.com/doniwinata0309/build-perf-test/blob/test/parceler/androidAppModule1/src/main/java/com/tes1.java

But some are more complex, extending from a few parent classes, and some contain fields of custom object types.
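For readers unfamiliar with Parceler, a minimal sketch of the kind of class being described: a few fields, inheritance from a parent, and a field of a custom @Parcel type. The class names here are illustrative placeholders, not the classes from the linked project.

    import org.parceler.Parcel;

    // Hypothetical example of a @Parcel-annotated class as described above.
    @Parcel
    public class UserDto extends BaseDto {
        String name;
        int age;
        AddressDto address; // field of another @Parcel-annotated custom type

        // Parceler's default field-based serialization needs a no-arg constructor
        public UserDto() {}
    }

    @Parcel
    class BaseDto {
        long id;
    }

    @Parcel
    class AddressDto {
        String street;
    }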

I just created 300+ dummy @Parcel classes in this project:
https://github.com/doniwinata0309/build-perf-test/tree/test/parceler

That produces this flame graph:

Screen Shot 2020-10-22 at 18 06 29

android_build-6.7-flames.svg.zip

So it seems this is the process that is taking so long:
https://github.com/johncarl81/transfuse/blob/master/transfuse-support/src/main/java/org/androidtransfuse/transaction/CodeGenerationScopedTransactionWorker.java#L49

Right, ok... looking at it deeper, it looks like it's taking a long time to flush the FileOutputStream to disk. Not sure if we'll be able to do anything about this.

A while ago we implemented incremental processing in the annotation processor - does it take this long every time, or only on the first clean build?

Our incremental builds are doing pretty well; we only have the issue on clean/full builds.

I see, it seems the huge number of inputs and the disk I/O speed are what make it slower, especially when several workers are running at the same time. Normally one module may take 2 minutes to complete, but when modules are compiled in parallel (we allow 5 workers to run in parallel) each module takes 4 minutes to complete.

Do you think we can do something about the FileOutputStream used by the JCodeModel library by modifying the transfuse library? Perhaps I can try some of the approaches mentioned here: https://stackoverflow.com/a/20555164 or https://www.oracle.com/technical-resources/articles/javase/perftuning.html

Would buffering do the trick then? Maybe just wrap the given OutputStream here with a BufferedOutputStream?

Possibly; I will try to modify it a bit later. Thank you for pointing that out.
If you have any alternatives, please let me know - I would love to try them.

I tried to modify https://github.com/johncarl81/transfuse/blob/master/transfuse-core/src/main/java/org/androidtransfuse/gen/FilerSourceCodeWriter.java#L53 to this code:
    int bufferSize = 8 * 1024;
    OutputStream os = new BufferedOutputStream(
            resource.openOutputStream(),
            bufferSize
    );

But the issue is not fixed.

From this class:
https://github.com/johncarl81/transfuse/blob/master/transfuse-support/src/main/java/org/androidtransfuse/transaction/CodeGenerationScopedTransactionWorker.java#L49
But how is FilerSourceCodeWriter used in that file? It seems to use CodeWriter instead of FilerSourceCodeWriter, right?

Ah sorry, it does use FilerSourceCodeWriter - I just realised it extends that class, and I saw the method in the flame graph.

Screen Shot 2020-10-27 at 18 08 20

It seems to be the same even when I use a buffer size of 8 KB or 32 KB.

There may be no way around it. I'll try to fire it up on my machine to see how slow it is on different hardware.

johncarl81/transfuse#233

There is a new issue to address this; we can try using JavaFileObject.openWriter as an alternative. I will try it next week.
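For context, here is a rough sketch of what that alternative could look like: a JCodeModel CodeWriter that returns the Writer from Filer.createSourceFile(...).openWriter() instead of wrapping the JavaFileObject's OutputStream. The class name and the handling of binary resources are assumptions for illustration only - this is not transfuse's actual implementation, nor the code from the PR discussed below.

    import com.sun.codemodel.CodeWriter;
    import com.sun.codemodel.JPackage;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.Writer;
    import javax.annotation.processing.Filer;
    import javax.tools.JavaFileObject;

    // Hypothetical sketch of a Writer-based code writer; not transfuse's implementation.
    public class WriterBasedFilerCodeWriter extends CodeWriter {

        private final Filer filer;

        public WriterBasedFilerCodeWriter(Filer filer) {
            this.filer = filer;
        }

        @Override
        public Writer openSource(JPackage pkg, String fileName) throws IOException {
            // JCodeModel passes "ClassName.java"; strip the suffix and build the qualified name.
            String className = fileName.endsWith(".java")
                    ? fileName.substring(0, fileName.length() - ".java".length())
                    : fileName;
            String qualifiedName = pkg.isUnnamed() ? className : pkg.name() + "." + className;
            JavaFileObject sourceFile = filer.createSourceFile(qualifiedName);
            // openWriter() lets the annotation-processing environment handle buffering and encoding.
            return sourceFile.openWriter();
        }

        @Override
        public OutputStream openBinary(JPackage pkg, String fileName) throws IOException {
            // Non-source resources are not handled in this sketch.
            throw new UnsupportedOperationException("Binary resources are not handled here");
        }

        @Override
        public void close() throws IOException {
            // Nothing to do here: each per-file writer is closed by JCodeModel after it is written.
        }
    }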

Hi @johncarl81,
johncarl81/transfuse#234
This PR solves my issue and my build is faster now (sorry - after rerunning the scenario a few times on CI and locally, the build speed seems to be pretty much the same as with the old parceler). However, the parceler annotation processor's CPU usage is now less than dagger's and databinding's.
Screen Shot 2020-11-02 at 17 04 02

Do you mind checking it later? Thanks.

Can we close this, @doniwinata0309?

Yes, thank you. Is it going to be delivered in the next release of parceler (1.1.14, I guess)?