insin / react-hn

React-powered Hacker News client

Home Page: https://insin.github.io/react-hn


Investigate performance regressions

addyosmani opened this issue

It looks like our switch (in master) to optimistically caching all entries in IDB has introduced a performance regression compared to the currently deployed version:

http://www.webpagetest.org/video/compare.php?tests=160502_FJ_TT4,160502_3Y_TT5

I'm going to investigate what's causing this. The change in question was 6f05fe2.

Hmm, I can see only two explanations here: 1) the target device's disk is slow enough that network requests are faster; 2) IDB is freezing the main thread, since it's known that in IDB land not everything is truly async. Plus, Firetruck makes a lot of separate requests to IDB (not sure whether localForage optimizes this; I suspect it doesn't), so that's another possible cause of the problem.
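The "lots of separate requests" point can be illustrated with a small sketch: instead of hitting storage once per item, buffer the writes and persist them in a single batch. This is purely illustrative, assuming a `persist` callback that stands in for the storage backend (e.g. one IDB transaction); none of these names come from the react-hn codebase.

```javascript
// Hypothetical sketch: coalescing many per-item writes into one batched
// persist call. `persist` stands in for a storage backend (e.g. a single
// IDB transaction); the names are illustrative, not from react-hn.
function makeBatchedStore(persist) {
  const pending = new Map();
  return {
    set(key, value) {
      pending.set(key, value); // buffer instead of hitting storage per call
    },
    flush() {
      if (pending.size === 0) return 0;
      persist(Object.fromEntries(pending)); // one backend request for all
      const count = pending.size;
      pending.clear();
      return count;
    },
  };
}

// Usage: three item writes become a single backend call.
let backendCalls = 0;
const store = makeBatchedStore(() => { backendCalls += 1; });
store.set('item:1', { title: 'a' });
store.set('item:2', { title: 'b' });
store.set('item:3', { title: 'c' });
const written = store.flush();
// backendCalls === 1, written === 3
```

Whether this helps in practice depends on the backend: with localForage each `setItem` is its own transaction, whereas a batched write can share one.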

A possible solution is not to store everything in parts and immediately, but rather to write everything (a separate entry for each thread, I think) every N seconds, and only after onload. Then read everything back together (again, per thread or list of threads) instead of combining it from parts.
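The "write every N seconds, only after onload" idea can be sketched as a deferred writer: buffer the latest snapshot per thread and drain on an interval. The `persistThread` callback and all names here are illustrative assumptions, not the actual implementation; in a browser, `start()` would be called from a `window` load handler.

```javascript
// Sketch of interval-based deferred writes, one entry per thread.
// `persistThread(threadId, snapshot)` stands in for the storage layer.
function makeDeferredWriter(persistThread, intervalMs) {
  const dirty = new Map(); // threadId -> latest full thread snapshot
  let timer = null;
  function drain() {
    for (const [threadId, snapshot] of dirty) {
      persistThread(threadId, snapshot); // whole thread in one write
    }
    dirty.clear();
  }
  return {
    update(threadId, snapshot) {
      dirty.set(threadId, snapshot); // overwrite: only the last state matters
    },
    start() {
      // In a browser: window.addEventListener('load', () => start())
      timer = setInterval(drain, intervalMs);
    },
    stop() {
      clearInterval(timer);
      drain(); // final flush so nothing is lost
    },
  };
}

// Usage: two rapid updates to the same thread collapse into one write.
const writes = [];
const writer = makeDeferredWriter((id, snap) => writes.push([id, snap]), 5000);
writer.update('thread:1', { version: 1 });
writer.update('thread:1', { version: 2 });
writer.stop();
// writes === [['thread:1', { version: 2 }]]
```

A side benefit of keying by thread is that reads come back as one entry per thread too, matching the "read everything together" half of the proposal.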

> Plus, Firetruck does a lot of separate requests to IDB (not sure if localForage optimizes this, I suppose it isn't), so this is another possible cause for the problem.

I think this is the crux of what's happening. We're performing writes while the reads are still occurring, and on particularly content-dense pages this causes significant slowdown.

> Possible solution is to not store everything in parts and immediately, but rather write everything (separate entry for each thread I think) each N seconds and only after onload.

I agree that we should probably switch to a better batching/queue model for cache writes. Once the response reads are complete, we can then perform the write operations. This could be done with events plus a timeout or, even better, with the Background Sync API. In the short term, I'll try to switch firetruck to just perform writes in a batch. Shouldn't be too hard; I'll try to knock it out this week.

I've spent the last few days looking at this (it's more complex than originally thought). My current model moves a lot of the data storage operations into a Web Worker and works on a batch/queue/interval to avoid thrashing the main thread, but unfortunately we still run into significant delays when reading comment threads back from localForage/IDB. I haven't been able to pin it down to specific methods, just that there's a very large, constant amount of work being done, and batching only helps a little.

I added time-spent profiling to a lot of what my local firetruck branch is doing, but the "real-time" nature of the data means there is a lot to cache and recache (and thus a lot to iterate through when checking for cached entries). I'm going to go back to the drawing board and try to come up with a smarter solution to this problem.
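For readers curious what "time-spent profiling" might look like, a minimal version is a wrapper that accumulates elapsed time per labelled function. This is a generic sketch, not the actual instrumentation from the branch (which isn't shown in the issue).

```javascript
// Minimal time-spent instrumentation: wrap a function and accumulate the
// total duration under a label. Hypothetical helper, not from firetruck.
const timings = new Map();
function timed(label, fn) {
  return function (...args) {
    const start = Date.now();
    try {
      return fn(...args);
    } finally {
      const elapsed = Date.now() - start;
      timings.set(label, (timings.get(label) || 0) + elapsed);
    }
  };
}

// Usage: the wrapped function behaves identically but records its cost.
const cachedSum = timed('sum', (a, b) => a + b);
const total = cachedSum(2, 3);
// total === 5; timings now has an accumulated entry for 'sum'
```

In a browser, `performance.now()` would give higher-resolution numbers than `Date.now()`; the shape of the wrapper is the same.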

It may be the case that if we can get server-side rendering working, we can just cache and serve those prerendered views instead of thousands of individual responses, but let's see where things go.

Closed via 37f709c