free / sql_exporter

Database agnostic SQL exporter for Prometheus

Clickhouse sql exporter is timing out

shashankkoppar opened this issue

mssql_standard.collector.yml

collector_name: clickhouse

metrics:
  - metric_name: device_skip_count
    type: counter
    help: 'The number of device skips that were incurred by the SQL Server process.'
    values: [device_skip]
    query_ref: device_skip

queries:
  - query_name: device_skip
    query: |
      WITH uqDeviceIdSkip
      AS
      (SELECT COUNT(DISTINCT deviceId) as NumA FROM ads.v5_action_events WHERE action = 'appskip'
      HAVING dateTime < NOW() - INTERVAL 1 HOUR),
      uqDeviceId
      AS
      (SELECT COUNT(DISTINCT deviceId) as NumB FROM ads.v5_action_events HAVING dateTime < NOW() - INTERVAL 1 HOUR)
      SELECT uqDeviceIdSkip.NumA/uqDeviceId.NumB * 100 as PctSkipRate
      FROM uqDeviceIdSkip, uqDeviceId
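
For reference, a sketch of the same query with the time conditions moved from HAVING into WHERE: the HAVING clauses above apply after aggregation to a column that is not being aggregated, which is likely not what was intended. Table and column names are taken from the collector file above, and the one-hour window is kept as written. Running this directly against ClickHouse (e.g. in clickhouse-client) is a quick way to check whether the query alone already takes longer than the scrape deadline.

-- Same skip-rate calculation, but filtering rows before aggregation instead of via HAVING.
WITH uqDeviceIdSkip AS
    (SELECT COUNT(DISTINCT deviceId) AS NumA
     FROM ads.v5_action_events
     WHERE action = 'appskip' AND dateTime < NOW() - INTERVAL 1 HOUR),
uqDeviceId AS
    (SELECT COUNT(DISTINCT deviceId) AS NumB
     FROM ads.v5_action_events
     WHERE dateTime < NOW() - INTERVAL 1 HOUR)
SELECT uqDeviceIdSkip.NumA / uqDeviceId.NumB * 100 AS PctSkipRate
FROM uqDeviceIdSkip, uqDeviceId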

sql-exporter.yml

# Global defaults.
global:
  scrape_timeout: 5000s
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from timing out first.
  scrape_timeout_offset: 50s
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 0s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on multiple connections,
  # as will concurrent scrapes.
  max_connections: 13
  # Maximum number of idle connections to any one target. Unless you use very long collection intervals, this should
  # always be the same as max_connections.
  max_idle_connections: 13

# The target to monitor and the collectors to execute on it.
target:
  # Data source name always has a URI scheme that matches the driver name. In some cases (e.g. MySQL)
  # the scheme gets dropped or replaced to match the driver's expected DSN format.
  data_source_name: 'clickhouse://xxxxx:8123?username=admin&password=xxxxx&database=ads&read_timeout=1099&write_timeout=2099&debug=true'
  # Collectors (referenced by name) to execute on the target.
  collectors: [clickhouse]

# Collector files are specified as a list of globs; one collector definition is read from each matching file.
collector_files:
  - "*.collector.yml"
I0308 15:27:39.607808   21261 main.go:52] Starting SQL exporter (version=, branch=, revision=) (go=go1.13.1, user=, date=)
I0308 15:27:39.608300   21261 config.go:18] Loading configuration from sql_exporter2.yml
I0308 15:27:39.609118   21261 config.go:131] Loaded collector "clickhouse" from mssql_standard.collector.yml
I0308 15:27:39.609351   21261 main.go:67] Listening on :9399
[clickhouse]host(s)=xxxxx:8123, database=ads, username=admin
[clickhouse][dial] secure=false, skip_verify=false, strategy=random, ident=1, server=0 -> xxxxx:8123
[clickhouse][connect=1][hello] -> Golang SQLDriver 1.1.54213

I0308 15:29:12.093492   21261 promhttp.go:38] Error gathering metrics: [from Gatherer #1] context deadline exceeded

It just times out. Please let me know what I am doing wrong. If possible, could you also share an example ClickHouse setup, both with a plain ClickHouse connection and through a ClickHouse proxy?

@free, I would appreciate it if you could help out here :)