dojo / examples

:rocket: Dojo - example applications.

Home Page: http://dojo.io

Use Benchmark.js to hook into lighthouse/puppeteer to extract first paint metrics

kitsonk opened this issue

@umaar commented on Thu Sep 28 2017

Extract the first paint metrics of Dojo 2 widget-core and report that into the test harness so that we understand performance changes internally.

This would be useful for a post Intern 4.0 release!
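
For illustration, a minimal sketch of pulling the browser's paint timings out of a page with Puppeteer, which is roughly the data this issue asks to feed into the test harness. The URL and the reporting are placeholders, and wiring the numbers into Intern is left out:

```ts
import * as puppeteer from 'puppeteer';

async function firstPaintMetrics(url: string) {
	const browser = await puppeteer.launch();
	const page = await browser.newPage();
	await page.goto(url, { waitUntil: 'networkidle0' });

	// Chrome exposes first-paint and first-contentful-paint via the Paint Timing API
	const entries = await page.evaluate(() =>
		JSON.stringify(performance.getEntriesByType('paint'))
	);

	await browser.close();
	return JSON.parse(entries) as Array<{ name: string; startTime: number }>;
}

// Placeholder URL for a locally served example build
firstPaintMetrics('http://localhost:9999/').then((metrics) => {
	metrics.forEach((metric) => console.log(`${metric.name}: ${metric.startTime.toFixed(1)}ms`));
});
```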


@matt-gadd commented on Fri Sep 29 2017

I’m not convinced first paint metrics have much validity for widget-core, given that time to first paint is mostly determined by the critical path of network resources, script parsing and evaluation, etc., none of which widget-core is responsible for.

Benchmarking time to first meaningful paint and, more crucially, time to interactive would make sense in the dojo/examples repo though, which has real apps that holistically test our full toolchain 👍. In the future dojo/cli-build-webpack could potentially cover parts of those performance scenarios too (but we don't currently have any integration tests there).
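
A sketch of what measuring those page-level metrics against an example app could look like, driving Lighthouse from Node. The audit names and result shape vary between Lighthouse versions, and the URL is a placeholder:

```ts
import * as chromeLauncher from 'chrome-launcher';
const lighthouse = require('lighthouse');

async function measure(url: string) {
	const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
	// Only the performance category is needed for these metrics
	const { lhr } = await lighthouse(url, { port: chrome.port, onlyCategories: ['performance'] });
	await chrome.kill();

	// Audit keys assume a recent Lighthouse release
	return {
		firstMeaningfulPaint: lhr.audits['first-meaningful-paint'].numericValue,
		timeToInteractive: lhr.audits['interactive'].numericValue
	};
}

measure('http://localhost:9999/').then((metrics) => console.log(metrics));
```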

In the case of widget-core performance, I think we should definitely look into integrating uibench or js-framework-benchmark into our CI. I believe @agubler already has an implementation of one and it just needs integrating?


@kitsonk commented on Fri Sep 29 2017

What likely makes sense for widget-core are some __render__() baselines so we can track performance. We should have some widgets that use all the features of widget-core, like themes, after render, etc. We should only knowingly commit code that slows that process down, doing our part to make it easier. We could also benchmark maquette so we know if upstream changes affect our code, but that is tangential and protectionist.
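
A rough Benchmark.js baseline along those lines might look like the following. ThemedWidget is a hypothetical fixture standing in for a widget that exercises themes, after-render decorators and so on, not existing widget-core test code:

```ts
import * as Benchmark from 'benchmark';
import { ThemedWidget } from './fixtures/ThemedWidget'; // hypothetical fixture widget

const suite = new Benchmark.Suite('widget-core render baselines');

suite
	.add('__render__ with theme and after-render decorators', () => {
		const widget = new ThemedWidget();
		widget.__setProperties__({ label: 'benchmark' });
		widget.__render__();
	})
	.on('cycle', (event: Benchmark.Event) => {
		// Logs "name x ops/sec ±x%" for each baseline; these are the numbers to track over time
		console.log(String(event.target));
	})
	.run({ async: true });
```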


@agubler commented on Fri Sep 29 2017

I would agree that running benchmarks for tests similar to js-framework-benchmark against widget-core would be beneficial. Even better would be if these benchmarks were integrated into the CI pipeline, so we can understand the performance implications of proposed changes as part of the review process.
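
As a sketch of what that CI check could do (the baseline file, result shape and threshold are all assumptions), a job could compare fresh benchmark numbers against a committed baseline and fail the build on a large drop:

```ts
import { readFileSync } from 'fs';

interface BenchmarkResult {
	name: string;
	opsPerSec: number;
}

const THRESHOLD = 0.9; // flag anything that drops below 90% of the recorded baseline

function checkRegressions(results: BenchmarkResult[], baselinePath: string): void {
	const baseline: BenchmarkResult[] = JSON.parse(readFileSync(baselinePath, 'utf8'));
	const regressions = results.filter((result) => {
		const previous = baseline.find((entry) => entry.name === result.name);
		return previous !== undefined && result.opsPerSec < previous.opsPerSec * THRESHOLD;
	});

	regressions.forEach((regression) => console.error(`regression: ${regression.name}`));
	if (regressions.length > 0) {
		process.exit(1); // fail the CI job so the regression shows up in review
	}
}
```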

We could hook our examples up to report on metrics like first meaningful paint and first meaningful interaction; it would potentially highlight regressions, but the issue is that it is as much about application design, performance patterns like PRPL, SSR, HTTP/2 and service workers, and possibly the build tooling as about the actual underlying view framework, i.e. widget-core. I guess it could help us identify where "something" has changed for those examples, be that in the build command or a change of application design etc.

We should certainly tackle the first of these in widget-core and then see what makes sense for benchmarking more holistic performance metrics.


@matt-gadd commented on Fri Sep 29 2017

@kitsonk yes, that's what js-framework-benchmark effectively asserts (and it's the de facto standard, so there are implementations for most frameworks to compare against). Do we want to go ahead and move this issue to dojo/examples, and raise another issue here for integrating js-framework-benchmark?

Do we think this is still valid for our examples?

It would be nice to have something we can point to that shows some performance metrics, so when people ask about performance we can show them. It doesn't have to be this, but something that we can refer to would be awesome.

We added benchmarking to the core @dojo/widget-core build rather than to any of these examples. We have also submitted the same benchmarks to https://github.com/krausest/js-framework-benchmark - the results are available here.