Faasm Microbenchmarks

These benchmarks aim to test the performance of Faasm's internals, rather than the full end-to-end request cycle.

They include:

  • Polybench/C
  • Python performance benchmarks

Local Faasm set-up

First you need to clone the Faasm repo somewhere on your host.

Then set the FAASM_ROOT environment variable to the location of that checkout.

From there, set up a local cluster that is able to execute Python functions, according to the Python quick-start.
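The two setup steps above can be sketched as follows. The repository URL assumes the upstream faasm/faasm repo, and the target path is just an example; adjust both to suit your machine.

```shell
# Clone the Faasm repo somewhere on your host (path here is an example)
git clone https://github.com/faasm/faasm.git "${HOME}/faasm"

# Point FAASM_ROOT at that checkout for the rest of these steps
export FAASM_ROOT="${HOME}/faasm"
```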

Then you need to build the benchmark runner:

cd ${FAASM_ROOT}
mkdir -p bench

# Start the faasm-cli container in the background
docker-compose up -d --no-recreate faasm-cli

# Get a terminal
docker-compose exec faasm-cli /bin/bash

# Set up the release build
inv dev.cmake --build=Release

# Build the benchmarker
inv dev.cc microbench_runner

Polybench/C

The Polybench functions are checked into this repo, and can be built with the Faasm C++ toolchain.

To set up the functions:

# Enter the container
docker-compose run polybench

# Compile and upload
inv polybench.wasm
inv polybench.upload

# Exit
exit

Then run with:

./bin/run.sh polybench

Results are found at results/polybench_out.csv.
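The exact columns of the results CSV are not documented here, so as a quick sanity-check sketch, the snippet below summarizes per-benchmark mean times with awk over a hypothetical sample file; the column names (`Benchmark`, `TimeMs`) are assumptions, and the real file to inspect is results/polybench_out.csv.

```shell
# Hypothetical sample mirroring the assumed CSV shape; the real file is
# results/polybench_out.csv and its actual column names may differ
cat > sample_out.csv <<'EOF'
Benchmark,TimeMs
poly_2mm,120
poly_2mm,140
poly_3mm,200
EOF

# Mean time per benchmark (skipping the header row), sorted by name
awk -F, 'NR > 1 { sum[$1] += $2; n[$1]++ }
         END { for (b in sum) printf "%s %.1f\n", b, sum[b] / n[b] }' \
    sample_out.csv | sort
```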

Native Polybench run

To run the benchmarks natively and ensure a like-for-like comparison, you can set up and run the native Polybench benchmark runner with:

# Enter the container
docker-compose run polybench

# Build the native benchmarks
inv polybench.native-build

# Run them
inv polybench.native-run

Results are found at results/polybench_native_out.csv.

Python performance benchmarks

Faasm's Python support already bundles the Python performance benchmarks library and its transitive dependencies, so we only need to upload the functions.

To set up the functions:

# Run container
docker-compose run pyperf

# Upload Python benchmark functions
inv pyperf.upload

# Leave container
exit

Then run with:

./bin/run.sh pyperf

Results are found at results/pyperf_out.csv.

Native Python run

To run the benchmarks natively and ensure a like-for-like comparison, you can set up and run the native Python benchmark runner with:

# Enter the container
docker-compose run pyperf

# Build the native benchmarks
inv pyperf.native-build

# Run them
inv pyperf.native-run

Results are found at results/pyperf_native_out.csv.

Plotting results

Once you've got both the Faasm and native runs for either experiment, you can plot them with:

# If you have a display
inv plot.polybench
inv plot.pyperf

# Headless
inv plot.polybench --headless
inv plot.pyperf --headless

Docker images

To rebuild the Docker images, set up the virtualenv, then:

inv container.polybench --push
inv container.pyperf --push

About

Faasm microbenchmarks

License: Apache License 2.0
