functions-benchmarking

Measuring cold start times for serverless compute platforms

Hosting the Function

The function code should be hosted on the appropriate platform: Azure Functions (using a Consumption plan), AWS Lambda, or Google Cloud Functions. For Node, make sure you install the required Node modules (alexa-sdk, request, async, and underscore, depending on how many you want to test) as instructed by each platform's docs.
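
The function code itself isn't included in this README, so the following is only a rough sketch of what an HTTP-triggered test function might look like on Azure Functions (classic Node.js programming model); the underscore import stands in for whatever modules the numModules setting loads.

// Sketch of an HTTP-triggered test function (classic Azure Functions Node.js model).
// Modules imported here are loaded at initialization, so they add to the cold start
// being measured; this is the effect the numModules setting exercises.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";
import * as _ from "underscore"; // example extra module; swap in request/async/alexa-sdk as needed

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    // Do as little work as possible so the measured latency is dominated by platform overhead.
    context.res = { status: 200, body: "ok" };
};

export default httpTrigger;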

Running

The runner will periodically call your function, measure each request's round-trip time, and save the results to a local file. It requires a config file to specify run parameters; see Configuration for details and the sample config for an example. Note that the sample config is incomplete: you still need to fill in your function's uri.

cd runner
npm i                                       # install the runner's dependencies
tsc                                         # compile the TypeScript sources into ./lib
node ./lib/runner.js ./sample_config.json   # pass the path to your config file

The results will be saved in a results directory.
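
Conceptually, the runner is just a timing loop (the actual implementation is the compiled runner/lib/runner.js); the sketch below is only illustrative, assumes Node 18+ for the global fetch, and takes numRuns, delay, and uri from the config described in the next section.

// Illustrative sketch of the measurement loop; not the actual runner implementation.
import { appendFileSync, mkdirSync } from "fs";

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function measure(uri: string, numRuns: number, delaySeconds: number): Promise<void> {
    mkdirSync("results", { recursive: true });
    for (let i = 0; i < numRuns; i++) {
        const start = Date.now();
        await fetch(uri);                       // one request per sample
        const elapsedMs = Date.now() - start;   // round-trip time, including any cold start
        appendFileSync("results/samples.csv", `${i},${elapsedMs}\n`);
        await sleep(delaySeconds * 1000);       // wait long enough for the instance to go cold again
    }
}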

Configuration

All platforms

  • numRuns: the number of times to call the function (i.e. your sample size)
  • numModules: the number of Node modules the function will require() (0 or 1 should be sufficient)
  • delay: the time between requests (in seconds)
  • uri: the full URI of your hosted function
  • language: the language used to write the function
  • platform: the platform hosting the function
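
For illustration, a filled-in config might look like the following; every value here is a placeholder (the exact accepted language and platform strings, and the real template, are in sample_config.json):

{
  "numRuns": 100,
  "numModules": 1,
  "delay": 600,
  "uri": "https://<your-function-app>.azurewebsites.net/api/<your-function>",
  "language": "node",
  "platform": "azure"
}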

Azure Functions
