fused-effects 1.1 performance degradation?
AlistairB opened this issue
Hi,
I updated https://github.com/ocharles/effect-zoo/tree/eff-and-polysemy to use fused-effects 1.1 out of curiosity this weekend. Overall it seems to have decreased in performance: specifically, the 'big stack' and 'countdown' benchmarks are roughly 2x slower with 1.1.
- It is entirely possible I made a mistake in my 1.1 upgrade, introducing slowness, so this should be checked.
- I am in no way saying this change is important. I am just reporting it because I spent some time on this and it may be noteworthy.
- The 'file sizes' and 'reinterpretation' benchmarks show slight improvement with 1.1.
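For context, the 'countdown' benchmark is a tight State loop. Here is a minimal sketch of its shape, written with mtl so it runs without effect-zoo; the actual benchmarks use each library's own State effect, and the measured cost is dominated by how each library dispatches get/put in this loop:

```haskell
import Control.Monad.State.Strict

-- Decrement the state until it hits zero, returning the final value.
countDown :: State Int Int
countDown = do
  n <- get
  if n <= 0
    then pure n
    else put (n - 1) >> countDown

main :: IO ()
main = print (runState countDown 10000)  -- prints (0,0)
```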
Benchmarks
Time is in milliseconds.
Big stack: [benchmark chart comparing 1.0.2.2 and 1.1.0.0; not reproduced here]
Countdown: [benchmark chart comparing 1.0.2.2 and 1.1.0.0; not reproduced here]
Reproducing
1.0.2.2 - https://github.com/AlistairB/effect-zoo/tree/removed-eff
1.1.0.0 - https://github.com/AlistairB/effect-zoo/tree/fused-1.1
Run with `cabal run` and open the generated HTML files. In both of these branches I removed `eff` from the benchmarks, as I didn't want to have to install a GHC fork. Benchmarks are run with GHC 8.10.1.
See AlistairB/effect-zoo@removed-eff...fused-1.1 for the changes I made to upgrade to 1.1.
This is very interesting. I’m going to see if I can reproduce it in https://github.com/patrickt/effects-benchmarks.
I don’t think we observed any performance degradation in our internal app, but I could be wrong, and that’s a very different workload.
Haven’t observed it in effects-benchmarks. Going to try the `bench` folder here and see if that reveals something. Perhaps INLINE stuff is coming into play, or this is an instance of criterion’s timing issues (we prefer `gauge`).
Hmm, can’t reproduce this in the benchmark suite either.
With 1.0: https://gist.github.com/patrickt/ac727d6e5ca9ee557c5b6ccb7115e26b
With 1.1: https://gist.github.com/patrickt/7beb3f075b04d3fc15f39b238666bc41
Hmm, interesting. 'file sizes' and 'reinterpretation' do show the expected improvement. Perhaps 'big stack' and 'countdown' are hitting some edge case that is now slower.
BTW, I cleaned up the diff between the two branches: AlistairB/effect-zoo@removed-eff...fused-1.1
Hella cool work @AlistairB, thank you so much for doing this and sharing your results with us!
I’m not seeing a smoking gun here; since this isn’t using any of our carriers that I can see, the only real change seems to be the difference in `alg`’s signature between the two versions, which I’d expect to be a net null, or even a slight win for 1.1, given that we no longer need to (re)construct effects via `Effect` (and thus should enjoy fewer allocations overall).

It’s definitely possible that we’re more sensitive to inlining now, or that we’ve underestimated the cost of constructing operations with `send` in the new model; hopefully we can quantify where these costs are coming from a little more precisely.
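For anyone following along, the signature change in question looks roughly like the sketch below. This is from memory of the two releases, so check it against the actual source; the gist is that 1.1 dropped the separate `Effect` class and instead passes `alg` an explicit handler plus a state-carrying context functor:

```haskell
-- fused-effects 1.0 (sketch): effects are interpreted directly, and
-- handler state is threaded by a separate Effect class.
class (HFunctor sig, Monad m) => Algebra sig m where
  alg :: sig m a -> m a

-- fused-effects 1.1 (sketch): no Effect class; alg receives a handler
-- and a ctx functor that carries the handler's state through the effect.
type Handler ctx n m = forall x . ctx (n x) -> m (ctx x)

class Monad m => Algebra sig m where
  alg :: Functor ctx => Handler ctx n m -> sig n a -> ctx () -> m (ctx a)
```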
Either way, thank you again!