pmacct / pmacct

pmacct is a small set of multi-purpose passive network monitoring tools [NetFlow IPFIX sFlow libpcap BGP BMP RPKI IGP Streaming Telemetry].

Home Page: http://www.pmacct.net

Sampling directive broken

MoreDelay opened this issue

Description
Using the sampling_rate directive in nfacctd.conf causes no data to be pushed to the output plugins. The value it is set to does not matter; I tried both 1 and 50. This was not an issue with version 1.7.7 and only started happening with 1.7.8. Data only gets pushed out again once the directive is removed from the configuration.
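
For clarity, the only change needed to restore output is dropping (or commenting out, assuming ! is still the comment marker in pmacct configs) this single line from the configuration shown below:

! sampling_rate: 1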

Version
Using docker container pmacct/nfacctd:v1.7.8

nfacctd.conf

debug: true

nfacctd_port: 2100
nfacctd_time_new: true

sampling_rate: 1

plugins: kafka[netflow]

plugin_pipe_zmq[netflow]: true
plugin_pipe_zmq_profile[netflow]: xlarge

kafka_output[netflow]: json
kafka_topic[netflow]: netflow
kafka_refresh_time[netflow]: 60
kafka_history[netflow]: 1m
kafka_history_roundoff[netflow]: m
kafka_broker_host[netflow]: kafka

Logs

unima_core_dev-pmacct-1  | INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.8-git (20221231-1 (723b0cb2))
unima_core_dev-pmacct-1  | INFO ( default/core ):  '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka' '--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog' '--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes' '--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'
unima_core_dev-pmacct-1  | INFO ( default/core ): Reading configuration file '/etc/pmacct/nfacctd.conf'.
unima_core_dev-pmacct-1  | WARN ( netflow/kafka ): defaulting to SRC HOST aggregation.
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): p_zmq_send_setup() addr=tcp://127.0.0.1:38461 username=G9HJhNA7UJgzJCTXb4FpXOEBa81DTdKjkc2PxCXPUDOnOpLxtamOxyaV6ZiZA1IubIKYTpxx0KiO73L1c5NX3x29VigfjwaKDsqgfmbf4J2BLLAvRYSSVUayAGeuDCCQ234hpdKtuM4FiFA7laXE27DDMp5X1Hx password=1Kzi7c2yUN3i3GsAzRfDTnQg7FjeE1ZFJYxP8yNcJOKMcBUbc9p5uFk1TTfhSEvmkqBsoYT7vnRXwv9Y5ycxBKzencMErH0Z79SufvzagR8dKFBNDlLO5I3ruN5jc3JhClZpeXOKwWvGl7ewqxIwFJn9gqtGtAw
unima_core_dev-pmacct-1  | INFO ( netflow/kafka ): cache entries=16411 base cache memory=67875896 bytes
...
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Received NetFlow/IPFIX packet from [***] version [9] seqno [9173941]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [313] from [***] seqno [9173941]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Received NetFlow/IPFIX packet from [***] version [9] seqno [2560488]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [313] from [***] seqno [2560488]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Received NetFlow/IPFIX packet from [***] version [9] seqno [42687842]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [313] from [***] seqno [42687842]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Received NetFlow/IPFIX packet from [***] version [9] seqno [102498]
unima_core_dev-pmacct-1  | DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [324] from [***] seqno [102498]
...
...
unima_core_dev-pmacct-1  | INFO ( netflow/kafka ): *** Purging cache - START (PID: 14) ***
...
unima_core_dev-pmacct-1  | INFO ( netflow/kafka ): *** Purging cache - END (PID: 14, QN: 0/0, ET: 0) ***
...

This is the kafka config printed in debug mode. I put it here separately so as not to clutter the logs above.

unima_core_dev-pmacct-1  | INFO ( netflow/kafka ): JSON: setting object handlers.
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): p_zmq_recv_setup() addr=tcp://127.0.0.1:38461 username=G9HJhNA7UJgzJCTXb4FpXOEBa81DTdKjkc2PxCXPUDOnOpLxtamOxyaV6ZiZA1IubIKYTpxx0KiO73L1c5NX3x29VigfjwaKDsqgfmbf4J2BLLAvRYSSVUayAGeuDCCQ234hpdKtuM4FiFA7laXE27DDMp5X1Hx password=1Kzi7c2yUN3i3GsAzRfDTnQg7FjeE1ZFJYxP8yNcJOKMcBUbc9p5uFk1TTfhSEvmkqBsoYT7vnRXwv9Y5ycxBKzencMErH0Z79SufvzagR8dKFBNDlLO5I3ruN5jc3JhClZpeXOKwWvGl7ewqxIwFJn9gqtGtAw
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: builtin.features = gzip,snappy,ssl,sasl,regex,lz4,sasl_plain,sasl_scram,plugins,sasl_oauthbearer,http,oidc
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: client.id = rdkafka
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: client.software.name = librdkafka
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: message.max.bytes = 1000000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: message.copy.max.bytes = 65535
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: receive.message.max.bytes = 100000000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: max.in.flight.requests.per.connection = 1000000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: metadata.request.timeout.ms = 10
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: topic.metadata.refresh.interval.ms = 300000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: metadata.max.age.ms = 900000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: topic.metadata.refresh.fast.interval.ms = 250
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: topic.metadata.refresh.fast.cnt = 10
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: topic.metadata.refresh.sparse = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: topic.metadata.propagation.max.ms = 30000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: debug = 
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.timeout.ms = 60000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.blocking.max.ms = 1000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.send.buffer.bytes = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.receive.buffer.bytes = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.keepalive.enable = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.nagle.disable = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.max.fails = 1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: broker.address.ttl = 1000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: broker.address.family = any
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket.connection.setup.timeout.ms = 30000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: connections.max.idle.ms = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.sparse.connections = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: reconnect.backoff.jitter.ms = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: reconnect.backoff.ms = 100
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: reconnect.backoff.max.ms = 10000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: statistics.interval.ms = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enabled_events = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: error_cb = 0x400008b980
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: stats_cb = 0x400008b990
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: log_cb = 0x400008b9d0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: log_level = 6
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: log.queue = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: log.thread.name = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.random.seed = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: log.connection.close = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: socket_cb = 0x4002af87f0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: open_cb = 0x4002b16800
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: opaque = 0x40002f1fc0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: internal.termination.signal = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: api.version.request = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: api.version.request.timeout.ms = 10000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: api.version.fallback.ms = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: broker.version.fallback = 0.10.0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: allow.auto.create.topics = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: security.protocol = plaintext
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: ssl.ca.certificate.stores = Root
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: ssl.engine.id = dynamic
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.ssl.certificate.verification = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: ssl.endpoint.identification.algorithm = https
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.mechanisms = GSSAPI
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.kerberos.service.name = kafka
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.kerberos.principal = kafkaclient
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.kerberos.kinit.cmd = kinit -R -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal} || kinit -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal}
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.kerberos.min.time.before.relogin = 60000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.sasl.oauthbearer.unsecure.jwt = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable_sasl_queue = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sasl.oauthbearer.method = default
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: test.mock.num.brokers = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: test.mock.broker.rtt = 0
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: partition.assignment.strategy = range,roundrobin
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: session.timeout.ms = 45000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: heartbeat.interval.ms = 3000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: group.protocol.type = consumer
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: coordinator.query.interval.ms = 600000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: max.poll.interval.ms = 300000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.auto.commit = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: auto.commit.interval.ms = 5000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.auto.offset.store = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queued.min.messages = 100000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queued.max.messages.kbytes = 65536
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: fetch.wait.max.ms = 500
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: fetch.message.max.bytes = 1048576
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: fetch.max.bytes = 52428800
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: fetch.min.bytes = 1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: fetch.error.backoff.ms = 500
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: offset.store.method = broker
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: isolation.level = read_committed
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.partition.eof = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: check.crcs = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: client.rack = 
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: transaction.timeout.ms = 60000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.idempotence = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: enable.gapless.guarantee = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queue.buffering.max.messages = 100000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queue.buffering.max.kbytes = 1048576
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queue.buffering.max.ms = 5
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: message.send.max.retries = 2147483647
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: retry.backoff.ms = 100
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: queue.buffering.backpressure.threshold = 1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: compression.codec = none
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: batch.num.messages = 10000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: batch.size = 1000000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: delivery.report.only.error = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: dr_msg_cb = 0x400008bc60
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka global config: sticky.partitioning.linger.ms = 10
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: request.required.acks = -1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: request.timeout.ms = 30000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: message.timeout.ms = 300000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: queuing.strategy = fifo
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: produce.offset.report = false
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: partitioner = consistent_random
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: compression.codec = inherit
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: compression.level = -1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: auto.commit.enable = true
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: auto.commit.interval.ms = 60000
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: auto.offset.reset = largest
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: offset.store.path = .
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: offset.store.sync.interval.ms = -1
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: offset.store.method = broker
unima_core_dev-pmacct-1  | DEBUG ( netflow/kafka ): librdkafka 'netflow' topic config: consume.callback.max.messages = 0

Hi @MoreDelay ,

Thanks for reporting this. I guess this has been "fixed" recently by this commit: c63b24c . In other words, as the commit comment says, it is a bad idea to apply sampling to something that is already sampled and/or processed, i.e. packed into flows. So the directive basically does not apply anymore to nfacctd and sfacctd, but only to pmacctd and uacctd, which perform packet capturing.
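
To illustrate the distinction, here is a minimal sketch based on the CONFIG-KEYS descriptions (not a tested config; double-check the pcap_interface and nfacctd_renormalize directives against your version):

! pmacctd / uacctd: packets are captured by the daemon itself, so sampling_rate
! still applies here (sample 1 out of N packets)
plugins: kafka[netflow]
pcap_interface: eth0
sampling_rate: 50

! nfacctd / sfacctd: flows arrive already sampled/processed by the exporter;
! instead of sampling_rate, renormalization scales counters by the export rate
plugins: kafka[netflow]
nfacctd_port: 2100
nfacctd_renormalize: true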

Paolo

Hey @paololucente ,

yeah, looks like you already took care of it. Thank you for looking into the issue.

Best regards,
Dennis