frigga

Scrape only relevant metrics in Prometheus, according to your Grafana dashboards

Home Page: https://meirg.co.il

Do you have a Grafana instance? frigga makes sure Prometheus doesn't scrape metrics that you don't present in your Grafana dashboards.

Scrape only the relevant metrics in Prometheus, according to your Grafana dashboards; see the before and after snapshot. frigga generates keep filters in metric_relabel_configs and adds them to your prometheus.yml file.
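
To illustrate, a keep filter in metric_relabel_configs typically looks like the minimal sketch below; the job name, targets and regex are made-up placeholders, while the real regex frigga generates is built from the metrics it finds in your dashboards:

scrape_configs:
  - job_name: node-exporter          # placeholder job name
    static_configs:
      - targets: ["localhost:9100"]
    metric_relabel_configs:
      # Keep only metrics whose names appear in Grafana dashboards; everything else is dropped
      - source_labels: [__name__]
        regex: "node_cpu_seconds_total|node_memory_MemAvailable_bytes"  # placeholder metric list
        action: keep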

frigga is especially useful for Grafana Cloud customers, since pricing is based on the number of ingested DataSeries.

Illustration

[illustration image]

Requirements

Python 3.6.7+

Installation

$ pip install frigga

Docker

docker run --rm -it unfor19/frigga

For ease of use, add an alias in your ~/.bashrc file

alias frigga="docker run --rm -it unfor19/frigga"

Available Commands

Auto-generated by unfor19/replacer-action, see readme.yml

Usage: frigga [OPTIONS] COMMAND [ARGS]...

  No confirmation prompts

Options:
  -ci, --ci  Use this flag to avoid confirmation prompts
  --help     Show this message and exit.

Commands:
  client-start       Alias: cs
  grafana-list       Alias: gl
  prometheus-apply   Alias: pa
  prometheus-get     Alias: pg
  prometheus-reload  Alias: pr
  version            Print the installed version
  webserver-start    Alias: ws
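
For scripted or CI usage, the flags shown later in this README can replace the interactive prompts, and the -ci flag skips confirmation prompts; a rough example (the URLs and key are placeholders):

frigga -ci gl -gurl "http://localhost:3000" -gkey "$GRAFANA_API_KEY"
frigga -ci pg -u "http://localhost:9090"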

Getting Started

  1. Grafana - Import the dashboard frigga - Jobs Usage (ID: 12537) to Grafana, and check out the number of DataSeries

  2. Grafana - Generate an API Key for Viewer

  3. frigga - Get the list of metrics that are used in your Grafana dashboards

    $ frigga gl
    
    # gl is grafana-list, or good luck :)
    
    Grafana url [http://localhost:3000]: http://my-grafana.grafana.net
    Grafana api key: (hidden)
    >> [LOG] Getting the list of words to ignore when scraping from Grafana
    ...
    >> [LOG] Found a total of 269 unique metrics to keep

    .metrics.json - automatically generated in the current working directory (see the jq sketch after this list for a quick way to inspect it)

    {
        "all_metrics": [
            "cadvisor_version_info",
            "container_cpu_usage_seconds_total",
            "container_last_seen",
            "container_memory_max_usage_bytes",
            ...
        ]
    }
  4. Add the following snippet to the bottom of your prometheus.yml file. Check the example in docker-compose/prometheus-original.yml

     ---
     name: frigga
     exclude_jobs: []
  5. frigga - Use the .metrics.json file to apply the rules to your existing prometheus.yml

    $ frigga pa
    
    # pa is prometheus-apply, or pam-tada-dam
    
    Prom yaml path [docker-compose/prometheus.yml]: /etc/prometheus/prometheus.yml
    Metrics json path [./.metrics.json]: /home/willywonka/.metrics.json
    >> [LOG] Reading documents from docker-compose/prometheus.yml
    ...
    >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'frigga pr -u http://localhost:9090'
  6. As mentioned in the previous step, reload prometheus.yml into Prometheus; here are two ways of doing it

    • "Kill" Prometheus
      $ docker exec $PROM_CONTAINER_NAME kill -HUP 1
    • Send a POST request to /-/reload - this requires Prometheus to be started with --web.enable-lifecycle; for an example, see docker-compose.yml
      $ frigga prometheus-reload --prom-url http://localhost:9090
      Or with curl
      $ curl -X POST http://localhost:9090/-/reload
      
  7. Make sure prometheus.yml was loaded successfully into Prometheus

    $ docker logs --tail 10 $PROM_CONTAINER_NAME
    
     level=info ts=2020-06-27T15:45:34.514Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
     level=info ts=2020-06-27T15:45:34.686Z caller=main.go:827 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml
  8. Grafana - Now check the frigga - Jobs Usage dashboard; the numbers should be significantly lower (up to 60% or even more)
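
As referenced in step 3, a quick way to inspect the generated .metrics.json is with jq (also listed below as a requirement for local testing); these commands are only a suggestion, based on the file layout shown above:

jq '.all_metrics | length' .metrics.json     # how many metrics will be kept
jq -r '.all_metrics[]' .metrics.json | head  # peek at the first few metric names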

Test it locally

Requirements

  1. Docker
  2. docker-compose
  3. jq

Getting Started

  1. git clone this repository

  2. Run the Docker daemon (Docker Desktop)

  3. Make sure ports 3000, 8080 and 9100 are not in use (state=closed)

    docker run --rm -it --network=host unfor19/net-tools nmap -p 8080,3000,9100 -n localhost
  4. Deploy the services locally: Prometheus, Grafana, node-exporter and cadvisor

    $ bash docker-compose/deploy_stack.sh
    
    Creating network "frigga_net1" with the default driver
    ...
    >> Grafana - Generating API Key - for Viewer
    eyJrIjoiT29hNGxGZjAwT2hZcU1BSmpPRXhndXVwUUE4ZVNFcGQiLCJuIjoibG9jYWwiLCJpZCI6MX0=
    # Save this key ^^^
  5. Open your browser, navigate to http://localhost:3000

    • Username and password are admin:admin
    • You'll be prompted to update your password, so just keep using admin or hit Skip
  6. Go to the Jobs Usage dashboard; you'll see that Prometheus is processing ~2800 DataSeries

  7. Get all the metrics that are used in your Grafana dashboards

    $ export GRAFANA_API_KEY=the-key-that-was-generated-in-the-deploy-locally-step
    $ frigga gl -gurl http://localhost:3000 -gkey $GRAFANA_API_KEY
    
    >> [LOG] Getting the list of words to ignore when scraping from Grafana
    ...
    >> [LOG] Found a total of 269 unique metrics to keep
    # Generated .metrics.json in pwd
  8. Check the number of data series BEFORE filtering with frigga

    $ frigga pg -u http://localhost:9090
    
    # prometheus-get
    
    >> [LOG] Total number of data-series: 1863
  9. Apply the rules to prometheus.yml, keep the defaults

    $ frigga pa
    
    # prometheus-apply
    
    Prom yaml path [docker-compose/prometheus.yml]:
    Metrics json path [./.metrics.json]:
    ...
    >> [LOG] Done! Now reload docker-compose/prometheus.yml with 'docker exec $PROM_CONTAINER_NAME kill -HUP 1'
  10. Reload prometheus.yml to Prometheus

    $ frigga pr -u http://localhost:9090
    
    # prometheus-reload
    
    >> [LOG] Successfully reloaded Prometheus - http://localhost:9090/-/reload
  11. Check the number of data series AFTER filtering with frigga (a way to cross-check this number against the Prometheus API is sketched after this list)

    $ frigga pg -u http://localhost:9090
    
    # prometheus-get
    
    >> [LOG] Total number of data-series: 898
    # Decreased from 1863 to 898, a reduction of about 52%!
  12. Go to the Jobs Usage dashboard; you'll see that Prometheus is processing only ~898 DataSeries (previously ~1863)

    • If you don't see the change, don't forget to hit the refresh button
  13. Cleanup

    $ docker-compose -p frigga --file docker-compose/docker-compose.yml down
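
As mentioned in step 11, one way to cross-check the data-series count that frigga pg reports is to query the Prometheus HTTP API directly; this is only a rough equivalent, and the exact query frigga uses internally may differ:

curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=count({__name__=~".+"})' \
  | jq -r '.data.result[0].value[1]'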

Pros and Cons of this tool

Pros

  1. Grafana Cloud - the main reason for writing this tool was to lower costs for Grafana Cloud customers; this is achieved by sending only the relevant DataSeries to Grafana Cloud
  2. Saves disk space on the machine running Prometheus
  3. Improves PromQL performance by querying fewer metrics; significant only when processing high volumes

Cons

  1. Applying the rules makes prometheus.yml less readable. Since it's not a file you edit on a daily basis, that's an acceptable trade-off
  2. Prometheus memory usage increases slightly (by around 30MB); not significant, but worth mentioning
  3. If you start using more metrics, for example after adding a new dashboard which uses more metrics, you'll need to do the same process again: frigga gl and then frigga pa (a refresh sketch follows this list)
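
A rough sketch of that refresh cycle, using only the commands and flags shown earlier in this README (the Grafana URL and key are placeholders):

frigga gl -gurl "http://my-grafana.grafana.net" -gkey "$GRAFANA_API_KEY"  # regenerate .metrics.json
frigga pa                                                                 # re-apply the keep filters to prometheus.yml
frigga pr -u "http://localhost:9090"                                      # reload Prometheus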

Contributing

Report issues/questions/feature requests on the Issues section.

Pull requests are welcome! Ideally, create a feature branch and issue for every single change you make. These are the steps:

  1. Fork this repo
  2. Create your feature branch from master (git checkout -b my-new-feature)
  3. Install from source
     $ git clone https://github.com/${GITHUB_OWNER}/frigga.git && cd frigga
     ...
     $ pip install --upgrade pip
     ...
     $ python -m venv ./ENV
     $ . ./ENV/bin/activate
     ...
     (ENV) $ pip install --editable .
     ...
     # Done! Now when you run 'frigga', it automatically reflects any changes you make to the code
  4. Add the code of your new feature
  5. Test - make sure frigga grafana-list and frigga prometheus-apply commands work
  6. Commit your remarkable changes (git commit -am 'Added new feature')
  7. Push to the branch (git push --set-upstream origin my-new-feature)
  8. Create a new Pull Request and tell us about your changes

Authors

Created and maintained by Meir Gabay

License

This project is licensed under the MIT License - see the LICENSE file for details
