Go-FTW - Framework for Testing WAFs in Go!

This software should be compatible with the Python version.

I wrote this one to get more insight into the original version and to shed some light on its internals. The original makes many assumptions about its inner workings that I had to dig into the code to understand.

My goals are:

  • get a compatible ftw version with no dependencies that is easy to deploy
  • be extremely CI/CD friendly
  • be fast (if possible)
  • add features like:
    • syntax checking on the test files
    • use docker API to get logs (if possible), so there is no need to read files
    • add different outputs for CI (junit xml?, github, gitlab, etc.)

Install

Go to the releases page and download the binary that matches your OS.

If you have Go installed and configured to run Go binaries from your shell, you can also run

go install github.com/coreruleset/go-ftw@latest

Example Usage

go-ftw is designed to run Web Application Firewall (WAF) unit tests. Its primary focus is the OWASP ModSecurity Core Rule Set.

In order to run the tests, you need to prepare the following:

  1. An active WAF
  2. A log file where the WAF writes its alert messages
  3. A go-ftw config file .ftw.yaml in the local folder or in your home folder (see YAML Config file for more information).
  4. At least one unit test in (go-)ftw's YAML format (a minimal example follows this list).
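
A minimal test file, sketched here in the classic (go-)ftw YAML format used by the CRS regression tests, could look like this (the rule ID, addresses, and expected log string are purely illustrative):

---
meta:
  author: "me"
  enabled: true
  name: "123456.yaml"
  description: "Example tests for rule 123456"
tests:
  - test_title: 123456-1
    stages:
      - stage:
          input:
            dest_addr: "localhost"
            port: 80
            uri: "/"
            headers:
              User-Agent: "OWASP CRS test agent"
              Host: "localhost"
          output:
            log_contains: 'id "123456"'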

YAML Config file

The configuration lets you set paths for your environment, enable and disable features, and alter test results.

The config file has six basic settings:

  • logfile : path to WAF log with alert messages, relative or absolute
  • testoverride : a list of things to override (see "Overriding tests" below)
  • mode : "default" or "cloud" (only change it if you need "cloud")
  • logmarkerheadername : name of an HTTP header used for marking log messages, usually X-CRS-TEST (see How log parsing works below)
  • maxmarkerretries : the maximum number of times the search for log markers will be repeated; each time an additional request is sent to the web server, eventually forcing the log to be flushed
  • maxmarkerloglines : the maximum number of lines to search for a marker before aborting

You can probably leave the last three alone; they are set to sane defaults.

Example with absolute logfile:

logfile: /apache/logs/error.log
logmarkerheadername: X-CRS-TEST
testoverride:
mode: "default"

Example with relative logfile:

logfile: ../logs/error.log
logmarkerheadername: X-CRS-TEST
testoverride:
mode: "default"

Example with minimal definitions:

The minimal requirement for go-ftw is to have a logfile when running in default mode:

logfile: ../logs/error.log
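
For reference, here is a sketch of a configuration that sets all six options explicitly (the marker values shown match the documented defaults):

logfile: ../logs/error.log
testoverride:
mode: 'default'
logmarkerheadername: 'X-CRS-TEST'
maxmarkerretries: 20
maxmarkerloglines: 500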

By default, go-ftw looks for a file named .ftw.yaml in $PWD (the local folder). If it cannot be found there, go-ftw will look in the user's HOME folder. You can pass --config <config file name> to point it to a different file.

WAF Server

I normally perform my testing using the Core Rule Set.

You can start the containers from that repo using docker compose:

git clone https://github.com/coreruleset/coreruleset.git
cd coreruleset
docker compose -f tests/docker-compose.yml up -d modsec2-apache

Logfile

Running in default mode implies that you have access to a logfile to check the WAF behavior against the test results. For this example, assuming you are in the base directory of the coreruleset repository, these are the configurations for Apache and nginx:

---
logfile: 'tests/logs/modsec2-apache/error.log'
---
logfile: 'tests/logs/modsec3-nginx/error.log'

Running

This is the help for the run command:

./ftw run --help
Run all tests below a certain subdirectory. The command will search all y[a]ml files recursively and pass it to the test engine.

Usage:
  ftw run [flags]

Flags:
      --connect-timeout duration   timeout for connecting to endpoints during test execution (default 3s)
  -d, --dir string                 recursively find yaml tests in this directory (default ".")
  -e, --exclude string             exclude tests matching this Go regexp (e.g. to exclude all tests beginning with "91", use "91.*").
                                   If you want more permanent exclusion, check the 'testoverride' option in the config file.
  -h, --help                       help for run
      --id string                  (deprecated). Use --include matching your test only.
  -i, --include string             include only tests matching this Go regexp (e.g. to include only tests beginning with "91", use "91.*").
      --max-marker-log-lines int   maximum number of lines to search for a marker before aborting (default 500)
      --max-marker-retries int     maximum number of times the search for log markers will be repeated.
                                   Each time an additional request is sent to the web server, eventually forcing the log to be flushed (default 20)
  -o, --output string              output type for ftw tests (default "normal")
      --read-timeout duration      timeout for receiving responses during test execution (default 1s)
      --show-failures-only         shows only the results of failed tests
  -t, --time                       show time spent per test

Global Flags:
      --cloud           cloud mode: rely only on HTTP status codes for determining test success or failure (will not process any logs)
      --config string   override config file (default is $PWD/.ftw.yaml)
      --debug           debug output
      --trace           trace output: really, really verbose

Here's an example of how to run your tests recursively in the folder tests:

ftw run -d tests -t

And the result should be similar to:

❯ ./ftw run -d tests -t

πŸ› οΈ  Starting tests!
πŸš€ Running!
πŸ‘‰ executing tests in file 911100.yaml
	running 911100-1: βœ” passed 6.382692ms
	running 911100-2: βœ” passed 4.590739ms
	running 911100-3: βœ” passed 4.833236ms
	running 911100-4: βœ” passed 4.675082ms
	running 911100-5: βœ” passed 3.581742ms
	running 911100-6: βœ” passed 6.426949ms
...
	running 944300-322: βœ” passed 13.292549ms
	running 944300-323: βœ” passed 8.960695ms
	running 944300-324: βœ” passed 7.558008ms
	running 944300-325: βœ” passed 5.977716ms
	running 944300-326: βœ” passed 5.457394ms
	running 944300-327: βœ” passed 5.896309ms
	running 944300-328: βœ” passed 5.873305ms
	running 944300-329: βœ” passed 5.828122ms
βž• run 2354 total tests in 18.923445528s
⏭ skipped 7 tests
πŸŽ‰ All tests successful!

Happy testing!

Output

Now you can choose how the output of the test session is shown by passing the -o flag. The default output is -o normal, and it shows emojis in all supported terminals. If yours doesn't support emojis, or you want a plain format, you can use -o plain:

./ftw run -d tests -o plain -i 932240

** Running go-ftw!
	skipping 920360-1 - (enabled: false) in file.
	skipping 920370-1 - (enabled: false) in file.
	skipping 920380-1 - (enabled: false) in file.
	skipping 920390-1 - (enabled: false) in file.
=> executing tests in file 932240.yaml
	running 932240-1: + passed in 39.928201ms (RTT 67.096865ms)
	running 932240-2: + passed in 29.299056ms (RTT 65.650821ms)
	running 932240-3: + passed in 30.426324ms (RTT 63.173202ms)
	running 932240-4: + passed in 29.111381ms (RTT 66.593728ms)
	running 932240-5: + passed in 30.627351ms (RTT 67.101436ms)
	running 932240-6: + passed in 40.735442ms (RTT 79.628474ms)
+ run 6 total tests in 200.127755ms
>> skipped 3322 tests
\o/ All tests successful!

To support automation for processing the test results, there is also a new JSON output available using -o json:

{
  "run": 8,
  "success": [
    "911100-1",
    "911100-2",
    "911100-3",
    "911100-4",
    "911100-5",
    "911100-6",
    "911100-7",
    "911100-8"
  ],
  "failed": null,
  "skipped": [
    "913100-1",
    "913100-2",
    "913100-3",
    ...
    "980170-2"
  ],
  "ignored": null,
  "forced-pass": null,
  "forced-fail": null,
  "runtime": {
    "911100-1": 20631077,
    "911100-2": 14112617,
    "911100-3": 14524897,
    "911100-4": 14699391,
    "911100-5": 16137499,
    "911100-6": 16589660,
    "911100-7": 16741235,
    "911100-8": 20658905
  },
  "TotalTime": 134095281
}

Then it is easy to use your jq skills to get the information you want (for example, jq '.failed' extracts the list of failed tests from the JSON above).

The list of supported outputs is:

  • "normal"
  • "quiet"
  • "github"
  • "json"
  • "plain"

Only show failures

If you are only interested in seeing which tests fail, the --show-failures-only flag does exactly that. This is helpful for keeping output short when running in CI/CD systems like GitHub Actions (GHA).

Additional features

  • templates with the power of Go text/template. Add a template to any data: section and enjoy!
  • Sprig functions can be added to templates as well.
  • Override test results.
  • Cloud mode! This new mode will ignore log files and rely solely on the HTTP status codes of the requests for determining success and failure of tests.

With templates and functions, you can simplify bulk test writing, or even read values from the environment while executing. These features allow you to write tests like this:

data: 'foo=%3d{{ "+" | repeat 34 }}'

Will be expanded to:

data: 'foo=%3d++++++++++++++++++++++++++++++++++'

You can also get values from the environment dynamically when the test is run:

data: 'username={{ env "USERNAME" }}'

Will give you, as you'd expect, the username of the user running the tests:

data: 'username=fzipi'

Other interesting functions you can use are: randBytes, htpasswd, encryptAES, etc.
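
For example, assuming the Sprig randBytes function is available in your go-ftw build, a sketch that injects 16 base64-encoded random bytes into a payload could look like this:

data: 'payload={{ randBytes 16 }}'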

Overriding tests

Sometimes you have tests that work well for some platform combinations, e.g. Apache + ModSecurity 2, but fail for others, e.g. nginx + ModSecurity 3. To account for this, you can override test results using the testoverride config param. The test will be skipped and its result forced as configured.

Tests can be altered using four lists:

  • input allows you to override global parameters in tests. An example use is changing the dest_addr of all tests to point to an external IP or host
  • ignore is for tests you want to ignore. You should add a comment on why you ignore the test
  • forcepass is for tests you want to pass unconditionally. You should add a comment on why you force the test to pass
  • forcefail is for tests you want to fail unconditionally. You should add a comment on why you force the test to fail

Each list is populated by regular expressions (see https://pkg.go.dev/regexp), which match against test IDs. The following is an example using all the lists mentioned above:

...
testoverride:
  input:
    dest_addr: "192.168.1.100"
    port: 8080
    protocol: "http"
  ignore:
    # text comes from our friends at https://github.com/digitalwave/ftwrunner
    '941190-3$': 'known MSC bug - PR #2023 (Cookie without value)'
    '941330-1$': 'known MSC bug - #2148 (double escape)'
    '942480-2$': 'known MSC bug - PR #2023 (Cookie without value)'
    '944100-11$': 'known MSC bug - PR #2045, ISSUE #2146'
    '^920': 'All the tests about Protocol Attack (rules starting with "920") will be ignored'
  forcefail:
    '123456-01$': 'I want this specific test to fail, even if passing'
  forcepass:
    '123456-02$': 'This test will always pass'
    '123457-.*': 'All the tests about rule 123457 will always pass'

You can combine any of ignore, forcefail and forcepass to make it work for you.

☁️ Cloud mode

Most tests rely on having access to a logfile to check for success or failure. Sometimes that is not possible, for example when testing cloud services or servers where you don't have access to logfiles, or where the logfiles won't have the information you need to decide whether a test passed or failed.

With cloud mode, the decision about test failure or success moves to the HTTP status code received after performing the test. The general idea is that you set up your WAF in blocking mode, so anything matching will return a block status (e.g. 403), while anything else is expected to return a 2xx status code.

An example config file for this is:

---
mode: 'cloud'

Or you can just run: ./ftw run --cloud
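
Since cloud mode judges a test by the response status alone, a stage that expects the WAF to block can state the expected status explicitly. An illustrative sketch in the test format:

- test_title: 123456-1
  stages:
    - stage:
        input:
          uri: "/?exec=/bin/bash"
        output:
          status: [403]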

How log parsing works

The WAF's log file with the alert messages is parsed and compared to the expected output defined in the unit test under log_contains or no_log_contains.

The problem with log files is that go-ftw is very, very fast and log files are not updated in real time: frequently, the web server / WAF does not sync the file to disk fast enough. The result is that go-ftw won't find the log messages it has just triggered.

To make log parsing consistent and guarantee that we will see output when we need it, go-ftw will send a request that is meant to write a marker into the log file before the individual test and another marker after the individual test.

If go-ftw does not see the finishing marker after executing the request, it will send the marker request again until the webserver is forced to write the log file to the disk and the marker can be found.

The container images for the Core Rule Set can be configured to write these marker log lines by setting the CRS_ENABLE_TEST_MARKER environment variable. If you are testing a different setup, you will need to instrument it with a rule that generates the marker in the log file via a rule alert (unless you are using "cloud mode").

The rule for CRS looks like this:

# Write the value from the X-CRS-Test header as a marker to the log
SecRule REQUEST_HEADERS:X-CRS-Test "@rx ^.*$" \
  "id:999999,\
  pass,\
  phase:1,\
  log,\
  msg:'X-CRS-Test %{MATCHED_VAR}',\
  ctl:ruleRemoveById=1-999999"

The rule looks for an HTTP header named X-CRS-Test and writes its value to the log, the value being the UUID of a test stage. If the header does not exist, the rule will be skipped and no marker will be written. If the header is found, the rule will also disable all further matching against the request to ensure that reported matches only concern actual test requests.

You can configure the name of the HTTP header by setting the logmarkerheadername option in the configuration to a custom value (the value is case insensitive).
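
For example, a sketch of a config using a custom marker header name:

logmarkerheadername: 'X-MY-TEST-MARKER'

If you change the name, remember to update the WAF-side rule shown above so that it inspects the same header.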

Library usage

go-ftw can also be used as a library. Just include it in your project:

go get github.com/coreruleset/go-ftw

Then, for the example below, import at least these:

import (
    "io/fs"
    "log"
    "net/url"
    "os"
    "strconv"

    "github.com/bmatcuk/doublestar/v4"
    "github.com/coreruleset/go-ftw/config"
    "github.com/coreruleset/go-ftw/output"
    "github.com/coreruleset/go-ftw/runner"
    "github.com/coreruleset/go-ftw/test"
    "github.com/rs/zerolog"
)

And some sample code:

    // sample from https://github.com/corazawaf/coraza/blob/v3/dev/testing/coreruleset/coreruleset_test.go#L215-L251
    // crsReader is an fs.FS containing the CRS repository, s is a running test
    // server, and errorPath points to the WAF error log; all three come from
    // the surrounding test code.
    var tests []test.FTWTest
    err := doublestar.GlobWalk(crsReader, "tests/regression/tests/**/*.yaml", func(path string, d os.DirEntry) error {
        yaml, err := fs.ReadFile(crsReader, path)
        if err != nil {
            return err
        }
        t, err := test.GetTestFromYaml(yaml)
        if err != nil {
            return err
        }
        tests = append(tests, t)
        return nil
    })
    if err != nil {
        log.Fatal(err)
    }

    u, _ := url.Parse(s.URL)
    host := u.Hostname()
    port, _ := strconv.Atoi(u.Port())
    zerolog.SetGlobalLevel(zerolog.InfoLevel)
    cfg, err := config.NewConfigFromFile(".ftw.yml")
    if err != nil {
        log.Fatal(err)
    }
    cfg.WithLogfile(errorPath)
    cfg.TestOverride.Input.DestAddr = &host
    cfg.TestOverride.Input.Port = &port

    res, err := runner.Run(cfg, tests, runner.RunnerConfig{
        ShowTime: false,
    }, output.NewOutput("quiet", os.Stdout))
    if err != nil {
        log.Fatal(err)
    }

    if len(res.Stats.Failed) > 0 {
        log.Printf("failed tests: %v", res.Stats.Failed)
    }

License

Apache License 2.0
