andreesteve / voronoi-benchmark-rs

A naive benchmark of Voronoi implementations in Rust

This is a simplistic benchmark of the different Voronoi diagram-generating crates for Rust that I could find on crates.io.

Disclaimer

Please take the results here with a grain of salt. Different crates take different design approaches that are optimized for distinct scenarios and thus will vary in their execution time. For instance, some implementations provide an iterative approach in which the diagram is updated as each site/point is added; others collect all the sites before computing the diagram.

Also, not all crates provide the same level of functionality, which may impact their execution time. Please refer to each crate's documentation for more details.

Motivation

Voronoi diagrams are fun. I wrote my own Voronoi library in Rust, voronoice. Besides that, there are a few crates that provide the ability to generate them. I wanted to understand their differences and performance characteristics, so I created this benchmark to see how they compared.

Methodology

  1. Criterion.rs is used to drive the benchmark execution.
  2. Random sites/points are generated in a preparation step (not measured as part of the final result).
  3. For each crate, a create_benchmark_fn was implemented that takes the generated points and passes them to the library to compute the Voronoi diagram, following that crate's documented examples.
  4. Each benchmark function returns all data generated by the library to Criterion. This prevents the benchmark from measuring any deallocation overhead, as documented here.
  5. Each execution generates its own set of random points (not included in the measurement). Crates therefore run with different points, but the points are constrained to the same space (-1.0..1.0), and all libraries were configured to use a bounding box that encompasses that range.
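The setup/measure split in steps 2-5 can be sketched as below. This is a std-only illustration, not code from this repository: the real benchmark uses Criterion's timing loop rather than a hand-rolled `Instant`, and `gen_sites`, `xorshift`, and `compute_diagram` are illustrative stand-ins for the random source and each crate's builder.

```rust
use std::time::Instant;

// Tiny xorshift PRNG so this sketch has no external dependencies;
// any random source (e.g. the `rand` crate) works the same way.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// Steps 2 and 5: generate N random sites in the shared space -1.0..1.0.
// This runs in the preparation step, outside the timed region.
fn gen_sites(n: usize, seed: u64) -> Vec<(f64, f64)> {
    let mut state = seed;
    (0..n)
        .map(|_| {
            // Take 53 random bits to get a uniform f64 in [0, 1), then scale.
            let x = (xorshift(&mut state) >> 11) as f64 / (1u64 << 53) as f64;
            let y = (xorshift(&mut state) >> 11) as f64 / (1u64 << 53) as f64;
            (x * 2.0 - 1.0, y * 2.0 - 1.0)
        })
        .collect()
}

// Illustrative stand-in: a real run would call the crate under test here.
fn compute_diagram(sites: &[(f64, f64)]) -> Vec<(f64, f64)> {
    sites.to_vec()
}

fn main() {
    let sites = gen_sites(1_000, 0x9E3779B97F4A7C15);
    assert!(sites
        .iter()
        .all(|&(x, y)| (-1.0..1.0).contains(&x) && (-1.0..1.0).contains(&y)));

    // Steps 3 and 4: only the library call is timed, and its full output is
    // returned and kept alive, so deallocation is not measured in the loop.
    let start = Instant::now();
    let diagram = compute_diagram(&sites);
    let elapsed = start.elapsed();
    println!("{} entries in {:?}", diagram.len(), elapsed);
}
```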

Fairness

I tried to be as fair as I could when writing the benchmark for each implementation. That said, the benchmark function for each library is heavily based on its documentation examples. If you see some code that could be introducing bias in any of the benchmarks, please do let me know by filing an issue or sending a PR!

Running the benchmark

You can run the benchmark by cloning this repository and executing the following on the terminal:

> cargo bench

Results

Last updated: 2022-02-27

Crate versions used

| Crate | Version |
| --- | --- |
| delaunay2d | 0.0.2 |
| voronator | 0.2.0 |
| voronoi | 0.1.4 |
| voronoice | 0.2.0 |

rustc 1.58.1 (db9d1b20b 2022-01-20)

Scenario: Build all Voronoi Cells

This scenario takes a set of N sites (f64, f64), then calculates and returns the polygons representing each Voronoi cell. Each set of N sites is run 100 times. The values of N can be found here. The full benchmark should run for about 1 hour and use about 1 GB of memory at peak.
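For reference, the Voronoi cell of a site is the set of points closer to that site than to any other. A brute-force membership check makes the definition concrete; this is purely illustrative and is not how any of the benchmarked crates work, since they construct the cell polygons directly:

```rust
// Index of the site whose Voronoi cell contains the query point:
// by definition, the nearest site (squared Euclidean distance suffices).
fn cell_of(point: (f64, f64), sites: &[(f64, f64)]) -> usize {
    sites
        .iter()
        .enumerate()
        .min_by(|(_, a), (_, b)| {
            let da = (a.0 - point.0).powi(2) + (a.1 - point.1).powi(2);
            let db = (b.0 - point.0).powi(2) + (b.1 - point.1).powi(2);
            da.partial_cmp(&db).unwrap()
        })
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let sites = [(-0.5, 0.0), (0.5, 0.0)];
    // Points left of x = 0 fall in the first site's cell, right of it in the second's.
    assert_eq!(cell_of((-0.9, 0.1), &sites), 0);
    assert_eq!(cell_of((0.4, -0.2), &sites), 1);
    println!("ok");
}
```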

Results Summary

Mean values shown below.

| Crate | N=100 | N=500 | N=1k | N=10k | N=100k | N=500k | N=1M | N=2M |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| delaunay2d | 661 us | 9945 us | 37259 us | N/A | N/A | N/A | N/A | N/A |
| voronator | 142 us | 412 us | 688 us | 6.1 ms | 87 ms | 0.57 s | 1.17 s | 2.60 s |
| voronoi | 166 us | 900 us | 1863 us | 25.5 ms | 351 ms | 2.08 s | 4.52 s | 9.67 s |
| voronoice | 57 us | 277 us | 556 us | 6.0 ms | 84 ms | 0.56 s | 1.23 s | 2.65 s |
  • N/A = not computed (see note below)
  • k = 1,000
  • M = 1,000,000

NOTE: delaunay2d's algorithmic complexity seems to be worse than quasilinear, so I restricted its tests to N in [100, 500, 1000] to keep the entire benchmark within 60 minutes of execution time. Because only a few values of N were computed, it doesn't show nicely in the graph below, but you can find its report here.
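The growth rate can be read straight off the table above: assuming t grows roughly as c·N^k between two measurements, the empirical exponent is k = ln(t2/t1) / ln(N2/N1). Applying that to the reported means (the power-law fit is an assumption for illustration, not something the benchmark itself computes):

```rust
// Empirical scaling exponent k, assuming t ~ c * N^k between two measurements.
fn exponent(n1: f64, t1: f64, n2: f64, t2: f64) -> f64 {
    (t2 / t1).ln() / (n2 / n1).ln()
}

fn main() {
    // delaunay2d mean times from the table above, in microseconds.
    let k1 = exponent(100.0, 661.0, 500.0, 9945.0);
    let k2 = exponent(500.0, 9945.0, 1000.0, 37259.0);
    // Both come out well above 1 (approaching 2), i.e. clearly not quasilinear.
    println!("delaunay2d: k(100->500) = {:.2}, k(500->1k) = {:.2}", k1, k2);

    // For comparison, voronoice over the same range stays close to linear.
    let kv = exponent(100.0, 57.0, 1000.0, 556.0);
    println!("voronoice: k(100->1k) = {:.2}", kv);
}
```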

Report links
