rosetta-rs / template-benchmarks-rs

Collected benchmarks for templating crates written in Rust


Automated regular benchmarking

Th3Whit3Wolf opened this issue

I was wondering if you have thought about setting up a GitHub Action to automatically update the README at a regular interval? Like maybe once a week, and on every pull request.

Or maybe switch all of the dependencies from git references to published versions and then have the script run on every PR as long as nothing breaks. I think you can set up Dependabot to automatically merge pull requests as long as CI passes.
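For illustration, the trigger side of this would be only a few lines of workflow configuration. A minimal sketch, where the file name and schedule are hypothetical:

```yaml
# .github/workflows/bench.yml -- hypothetical name, not from an actual PR
name: benchmarks
on:
  pull_request:
  schedule:
    - cron: '0 0 * * 1'  # weekly, Mondays at 00:00 UTC
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: cargo bench
      # a further step would regenerate the README from the Criterion output
```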

Running the benchmarks on GitHub Actions might result in really variable performance, which might affect the relative results, so it seems like a non-trivial thing to set up. If you want updates to happen more often, I think the path of least resistance would be to automate the formatting of the Criterion results into the format I publish them in here.

Running the benchmarks on GitHub Actions might result in really variable performance, which might affect the relative results

What makes you say that? GitHub Actions runners use Standard_DS2_v2 virtual machines in Microsoft Azure, and the Rust team uses them to track performance regressions.

I've added a feature flag to build.rs that, when enabled, looks through Cargo.toml, finds all of the dependencies that aren't whitelisted, gets their description and homepage/repo from crates.io using crates_io_api (there's a lot more we could potentially grab from there), and creates the Markdown-formatted links.
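A rough sketch of that lookup, assuming crates_io_api's SyncClient API (recent versions take a user-agent string plus a rate-limit interval); the crate name is just an example:

```rust
use std::time::Duration;

use crates_io_api::SyncClient;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // crates.io asks clients to identify themselves and rate-limit requests.
    let client = SyncClient::new(
        "template-benchmarks-rs (example@example.com)",
        Duration::from_millis(1000),
    )?;

    // Fetch metadata for one dependency; `full_crate` also exposes recent
    // downloads and last-update time, which the overview table could use.
    let info = client.full_crate("askama", false)?;
    let link = info.homepage.or(info.repository).unwrap_or_default();
    println!(
        "| [{}]({}) | {} |",
        info.name,
        link,
        info.description.unwrap_or_default()
    );
    Ok(())
}
```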

I have made a script that runs this, then the benchmarks, and then gets the benchmark data from /target/criterion/ and formats the data into Markdown tables. The end result is below.

Rust template engine benchmarks

This repo tries to assess Rust template engine performance. Ranked by recent
download counts on crates.io, the following projects are assessed:

Results

These results were produced by GitHub Actions.

As violin plots generated by Criterion:

Big table violin plot
Teams violin plot

Numbers, as output by Criterion:

Big Table

| Library    | Lower bound | Estimate  | Upper bound |
|------------|-------------|-----------|-------------|
| Askama     | 396.40 us   | 397.29 us | 398.34 us   |
| fomat      | 187.56 us   | 187.70 us | 187.84 us   |
| Handlebars | 4.3653 ms   | 4.3748 ms | 4.3912 ms   |
| Horrorshow | 264.86 us   | 265.87 us | 266.86 us   |
| Liquid     | 5.0699 ms   | 5.0741 ms | 5.0783 ms   |
| Markup     | 70.159 us   | 75.223 us | 80.683 us   |
| Maud       | 215.73 us   | 227.47 us | 240.76 us   |
| Ructe      | 749.90 us   | 750.31 us | 750.80 us   |
| Sailfish   | 26.754 us   | 28.762 us | 30.840 us   |
| Tera       | 3.4972 ms   | 3.5284 ms | 3.5465 ms   |
| write      | 354.09 us   | 381.41 us | 409.24 us   |

Teams

| Library    | Lower bound | Estimate  | Upper bound |
|------------|-------------|-----------|-------------|
| Askama     | 756.79 ns   | 814.72 ns | 875.17 ns   |
| fomat      | 493.12 ns   | 529.63 ns | 566.09 ns   |
| Handlebars | 6.2966 us   | 6.7593 us | 7.2227 us   |
| Horrorshow | 443.48 ns   | 445.23 ns | 447.03 ns   |
| Liquid     | 9.2885 us   | 9.8717 us | 10.436 us   |
| Markup     | 102.74 ns   | 104.12 ns | 105.63 ns   |
| Maud       | 460.82 ns   | 463.12 ns | 465.62 ns   |
| Ructe      | 761.29 ns   | 817.00 ns | 878.03 ns   |
| Sailfish   | 99.475 ns   | 99.552 ns | 99.637 ns   |
| Tera       | 5.8723 us   | 6.3235 us | 6.7808 us   |
| write      | 621.93 ns   | 671.17 ns | 722.17 ns   |

Running the benchmarks

$ cargo bench

Plots will be rendered if gnuplot is installed and will be available in the target/criterion folder.

The script could probably be made to sort the numbers in ascending order.
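For reference, the extraction step mostly amounts to reading Criterion's estimates.json files. A minimal sketch, assuming Criterion's on-disk layout at the time (target/criterion/&lt;group&gt;/&lt;bench&gt;/new/estimates.json, values in nanoseconds) and untyped serde_json parsing; the paths and names are illustrative:

```rust
use std::fs;

/// Read the mean estimate and its confidence interval from one
/// Criterion benchmark directory. Requires the `serde_json` crate.
fn read_estimate(bench_dir: &str) -> Result<(f64, f64, f64), Box<dyn std::error::Error>> {
    let raw = fs::read_to_string(format!("{}/new/estimates.json", bench_dir))?;
    let json: serde_json::Value = serde_json::from_str(&raw)?;
    let mean = &json["mean"];
    // All values in estimates.json are in nanoseconds.
    let lower = mean["confidence_interval"]["lower_bound"].as_f64().ok_or("missing lower bound")?;
    let estimate = mean["point_estimate"].as_f64().ok_or("missing point estimate")?;
    let upper = mean["confidence_interval"]["upper_bound"].as_f64().ok_or("missing upper bound")?;
    Ok((lower, estimate, upper))
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Illustrative path; real group names come from the benchmark definitions.
    let (lo, est, hi) = read_estimate("target/criterion/Big table/Askama")?;
    println!("| Askama | {:.2} us | {:.2} us | {:.2} us |", lo / 1e3, est / 1e3, hi / 1e3);
    Ok(())
}
```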

Well, we can try it on a branch to see how variable the results are. Note that the initial crates list should remain ordered by popularity (crates.io recent downloads) and I think I want to stick to testing Git dependencies.

What is your use case anyway -- why is all this important to you?

Cool beans, I'll try to get a PR submitted before next week.

May I ask why you like using git dependencies? It seems to me like pinning a specific version is the more common way to use crates, and it's maybe a little more fair to the crate authors.

I don't really have a use case. I browse this repo sometimes to see which templating library is fastest right now, and I thought: why not automate things to keep everything up to date? And while we're at it, why not read the dependencies too, so that when someone wants to add a new benchmark, that's all they have to do and the README will update itself for them. I'm also procrastinating on a project.

I have it mostly set up here. I still need to set up the cron aspect, but other than that it does mostly what I've discussed. I am waiting for a PR to be accepted on the crates_io_api repo before I submit a PR to you.

It generates a table of all the templating libraries, including their rank, name, description, recent downloads, and when they were last updated. The tables with the results are now sorted by average performance as well.

I added a relative performance column in results tables to address #10
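Presumably that column normalizes each library's estimate against the fastest entry in the table; a tiny sketch of the idea (the function and data are hypothetical):

```rust
/// Given (library, estimate_ns) pairs, compute each library's slowdown
/// relative to the fastest entry, e.g. 1.00 for the winner.
fn relative_performance(results: &[(String, f64)]) -> Vec<(String, f64)> {
    let fastest = results
        .iter()
        .map(|(_, ns)| *ns)
        .fold(f64::INFINITY, f64::min);
    results
        .iter()
        .map(|(name, ns)| (name.clone(), ns / fastest))
        .collect()
}

fn main() {
    // Example figures from the Big Table results above, in nanoseconds.
    let results = vec![
        ("Sailfish".to_string(), 28_762.0),
        ("Markup".to_string(), 75_223.0),
    ];
    for (name, rel) in relative_performance(&results) {
        println!("{}: {:.2}x", name, rel);
    }
}
```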

The README looks nice @Th3Whit3Wolf! I find the violin plots impossible to read as well. Are they useful for anyone if we have these new tables? Should they be removed @djc?

@utkarshkukreti thank you!

@djc cron is setup now as well.

@Th3Whit3Wolf the README looks nice, but unfortunately the cron job is disabled. @djc is there any chance of merging this into the repository?

Another idea is to test released versions only and use Dependabot to track the latest releases of each template engine.
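For reference, tracking releases that way takes only a small amount of Dependabot configuration. A minimal sketch, assuming the standard config location:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"
```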