Measuring cold start times for serverless compute platforms
The function code should be hosted on the appropriate platform: Azure Functions (using a Consumption plan), AWS Lambda, or Google Cloud Functions. For Node, make sure you install the required Node modules (alexa-sdk, request, async, and underscore, depending on how many you want to test) as instructed by each platform's docs.
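As a rough illustration only (not the repo's actual function code), a minimal HTTP-triggered handler for the Node case might look like the sketch below; the AWS Lambda handler shape, the underscore import, and the response body are assumptions made for this example.

// Minimal sketch of an HTTP-triggered function the runner can target (AWS Lambda shape shown;
// Azure Functions and Google Cloud Functions use different handler signatures).
// Modules imported at the top are loaded during the cold start, which is what numModules varies.
import * as _ from "underscore"; // remove this import to test with numModules = 0

export const handler = async (): Promise<{ statusCode: number; body: string }> => {
  return {
    statusCode: 200,
    body: JSON.stringify({ modulesLoaded: typeof _.each === "function" ? 1 : 0 }),
  };
};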
The runner will periodically call your function, measure the request time, and save the results to a local file. It requires a config file that specifies the run parameters; see Configuration for details and the sample config for an example. Note that the sample config is incomplete: you still need to fill in your function's uri.
cd runner
npm i
tsc
node ./lib/runner.js ./sample_config.json
The results will be saved in a results directory.
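For intuition, the measurement loop amounts to something like the sketch below. This is an illustration, not the runner's actual implementation: the helper names, the https-only assumption, and the output file name are made up for the example.

import * as https from "https";
import * as fs from "fs";

// Time a single GET request to the function endpoint, in milliseconds.
function timeRequest(uri: string): Promise<number> {
  const start = Date.now();
  return new Promise<number>((resolve, reject) => {
    https.get(uri, res => {
      res.resume(); // drain the response body; only the timing matters here
      res.on("end", () => resolve(Date.now() - start));
    }).on("error", reject);
  });
}

// Call the function numRuns times with a pause between requests, then save the timings.
async function run(config: { uri: string; numRuns: number; delay: number }): Promise<void> {
  const durations: number[] = [];
  for (let i = 0; i < config.numRuns; i++) {
    durations.push(await timeRequest(config.uri));
    await new Promise(resolve => setTimeout(resolve, config.delay * 1000)); // delay is in seconds
  }
  fs.mkdirSync("./results", { recursive: true });
  fs.writeFileSync("./results/timings.json", JSON.stringify(durations, null, 2));
}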
Configuration
The config file is a JSON file with the following parameters (a sketch of its shape follows the list):
numRuns: the number of times to call the function (i.e. your sample size)
numModules: the number of Node modules the function will require() (0 or 1 should be sufficient)
delay: the time between requests (in seconds)
uri: the full URI of your hosted function
language: the language used to write the function
platform: the platform hosting the function
fromZip: whether your function is using the new Run-From-Zip option
scaleEnabled: whether your function is using the new HTTP scaling behavior
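Sketched in TypeScript, a config with the fields above might look like this. The example values and the exact accepted strings for language and platform are placeholders, so copy the repo's sample config rather than this sketch.

// Shape of the runner config, inferred from the parameter list above (illustrative only).
interface RunnerConfig {
  numRuns: number;        // sample size
  numModules: number;     // how many Node modules the function will require() (0 or 1)
  delay: number;          // seconds between requests
  uri: string;            // full URI of the hosted function
  language: string;       // language the function is written in, e.g. "node"
  platform: string;       // hosting platform, e.g. "azure", "aws", "gcp" (placeholder values)
  fromZip?: boolean;      // Run-From-Zip option
  scaleEnabled?: boolean; // new HTTP scaling behavior
}

// Placeholder example; fill in the uri of your deployed function.
const sampleConfig: RunnerConfig = {
  numRuns: 50,
  numModules: 1,
  delay: 600,
  uri: "https://<your-function-host>/api/<your-function>",
  language: "node",
  platform: "azure",
  fromZip: false,
  scaleEnabled: false,
};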