Faasm is a high-performance stateful serverless runtime.
Faasm provides multi-tenant isolation, yet allows functions to share regions of memory. These shared memory regions give low-latency concurrent access to data, and are synchronised globally to support large-scale parallelism.
Faasm combines software fault isolation from WebAssembly with standard Linux tooling to provide security and resource isolation at low cost. Faasm runs functions side-by-side as threads of a single runtime process, with low overheads and fast boot times.
Faasm is built on Faabric, which provides the distributed messaging and state layer.
The underlying WebAssembly execution and code generation is handled by WAVM.
Faasm defines a custom host interface that extends WASI with function inputs and outputs, function chaining, state management, access to the distributed filesystem, dynamic linking, pthreads, OpenMP and MPI.
Our USENIX ATC '20 paper on Faasm can be found here.
You can start a Faasm cluster locally using the `docker-compose.yml` file in the root of the project:

```bash
docker-compose up --scale worker=2
```
Then run the Faasm CLI, from which you can build, deploy and invoke functions:
```bash
# Start the CLI
./bin/cli.sh

# Upload the demo "hello" function
inv upload demo hello

# Invoke the function
inv invoke demo hello
```
Note that the first time you run the local set-up it will generate some machine
code specific to your host. This is stored in the container/machine-code
directory in the root of the project and reused on subsequent runs.
More detail on some key features and implementations can be found below:
- Usage and set-up - using the CLI and other features.
- C/C++ functions - writing and deploying Faasm functions in C/C++.
- Python functions - isolating and executing functions in Python.
- Rust functions - links and resources for writing Faasm Rust functions.
- Distributed state - sharing state between functions.
- Faasm host interface - the serverless-specific interface between functions and the underlying host.
- Kubernetes and Knative integration - deploying Faasm as part of a full serverless platform.
- Bare metal/VM deployment - deploying Faasm on bare metal or VMs as a stand-alone system.
- API - invoking and managing functions and state through Faasm's HTTP API.
- MPI and OpenMP - executing existing MPI and OpenMP applications in Faasm.
- Developing Faasm - developing and modifying Faasm.
- Faasm.js - executing Faasm functions in the browser and on the server.
- Threading - executing multi-threaded applications.
- Proto-Faaslets - snapshot-and-restore to reduce cold starts.
- WAMR support - support for the wasm-micro-runtime (WIP).
- SGX - information on executing functions with SGX (WIP).
Faasm experiments and benchmarks live in the Faasm experiments repo:
- TensorFlow Lite - performing inference in Faasm with TensorFlow Lite
- Polybench - benchmarking with Polybench/C
- ParRes Kernels - benchmarking with the ParRes Kernels
- Python performance - executing the Python performance benchmarks