alexmarqs / artillery-poc

🧪 Artillery.io for load/performance testing (PoC)


Artillery PoC

License: MIT

Example of load/performance tests using Artillery.io, for proof-of-concept (PoC) purposes.

System requirements

  • Node
  • Docker (with docker-compose)

Test overview

Artillery allows any number of scenarios to be defined in the same test script; however, it is often useful to keep individual scenarios in their own files. For that reason, this example uses a common config file, tests/config.yml, which is reused across test scenarios. There you can define the endpoint(s) on which the system under test is accessed, define load phases, load data from external CSV files, configure plugins, etc. The folder tests/scenarios contains the scenarios describing the actions that a virtual user created by Artillery will take.
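To give an idea of what such a shared configuration looks like, here is a minimal sketch; the target URL, phase values, payload file and processor path are illustrative assumptions, not the exact contents of tests/config.yml:

config:
  target: "http://localhost:3333"      # base URL of the system under test (assumed)
  environments:
    local:
      target: "http://localhost:3333"  # selected with --environment local
  phases:
    - duration: 60                     # run the load phase for 60 seconds
      arrivalRate: 5                   # create 5 new virtual users per second
  payload:
    path: "users.csv"                  # hypothetical external CSV data file
    fields:
      - "username"
  processor: "./utils/data-processor.js"  # custom JS functions (see below)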

With Artillery you can write custom logic using JS functions. In this example the file tests/utils/data-processor.js is used to generate request payloads with random data (generated by Faker.js).
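A minimal sketch of such a processor is shown below, assuming the classic faker package and a hypothetical function name (the real tests/utils/data-processor.js may differ):

// tests/utils/data-processor.js (illustrative sketch)
const faker = require('faker'); // assumes the classic faker package

// Put random user data into the virtual user's context so it can be used in request templates
function generateUserPayload(context, events, done) {
  context.vars.name = faker.name.findName();
  context.vars.email = faker.internet.email();
  return done();
}

module.exports = { generateUserPayload };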

An "local" environment was configured to perform tests against the local rest api. The test scenario tests/scenarios/register-search-user.yml contains a scenario that exercises creation/search API endpoints.

How to run locally

Install dependencies

With NPM:

npm install

With Yarn:

yarn install

Set up dummy REST API

Run the docker-compose command to start a json-server container (available at http://localhost:3333):

docker-compose up -d

The file data/db.json contains the dummy db data to be loaded by the json-server.
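For reference, a minimal compose file for this setup could look roughly like the following; the image, port mapping and volume path are assumptions, not necessarily the repo's exact docker-compose.yml:

version: "3"
services:
  json-server:
    image: clue/json-server            # a community json-server image (assumed)
    ports:
      - "3333:80"                      # expose the API at http://localhost:3333
    volumes:
      - ./data/db.json:/data/db.json   # load the dummy db data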

Execute test

With NPM:

npm run test:local

With Yarn:

yarn test:local

If you’d like to see the details of every HTTP request/response and the metrics that Artillery is sending, run it in DEBUG mode by appending :debug to the command (test:local:debug). After the test run, JSON and HTML reports will appear in the reports folder.

To avoid having a specific script command for every configured environment, you can export/set the environment variable TEST_ENV and then use only the generic script commands test / test:debug.
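For example (assuming a Unix-like shell and the "local" environment):

TEST_ENV=local npm run test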

Publish metrics to Datadog (via HTTP API)

In order to send Artillery metrics (such as latency and response codes) to Datadog, this example uses the official artillery-plugin-publish-metrics plugin (check the configuration in tests/config.yml).

Set your Datadog API key in the environment variable DD_API_KEY. If you want to publish metrics from this local example, replace the DD_API_KEY value already defined in the test:local command script (see package.json).
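As a rough illustration of the plugin configuration (the keys follow the publish-metrics plugin docs, but the exact values in tests/config.yml may differ):

config:
  plugins:
    publish-metrics:
      - type: datadog
        apiKey: "{{ $processEnvironment.DD_API_KEY }}"  # read the key from the environment
        prefix: "artillery."                            # metric name prefix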

Note: Use a Datadog account in the US region (see this open issue).

Configure thresholds for CI/CD

If you want Artillery to exit with a non-zero code when a condition is not met (useful in CI/CD pipelines), you must define some service-level objectives (SLOs) for your application. This can be achieved by configuring pass/fail criteria (thresholds):

Latency

The following percentile aggregations are available: min, max, median, p95, and p99. Example:

config:
  ensure:
    p95: 200 # make Artillery exit with a non-zero code if the aggregate p95 latency is higher than 200ms

Error rate

The error rate is defined as the ratio of virtual users that didn’t complete their scenarios successfully to the total number of virtual users created during the test. Example:

config:
  ensure:
    maxErrorRate: 1 # make Artillery exit with a non-zero code if the total error rate exceeds 1%

References

https://artillery.io/docs/guides/guides/command-line.html#Overview

https://artillery.io/docs/guides/guides/test-script-reference.html#Overview

https://artillery.io/docs/guides/guides/http-reference.html#Overview

https://artillery.io/docs/guides/plugins/plugin-publish-metrics.html#Published-metrics

https://artillery.io/blog/using-fakerjs-with-artillery
