grafana / helm-charts

Promtail crio json truncated log

BenetatosG opened this issue

I am running Loki distributed and Promtail on a bare-metal Kubernetes cluster. Everything works well except for some logs that fail to reach Loki.

The log looks truncated and I'm not sure how to make it work with multiline. I am using CRI-O with JSON logs.

It looks like the log was not split by CRI-O: it is a single full log line, but Promtail only picks up part of it.
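For context, CRI-O writes container logs in the CRI format (`timestamp stream flag message`), where a long line is split across entries tagged `P` (partial) until a closing `F` (full) entry; Promtail's `cri` pipeline stage parses this format and reassembles the partial entries. An illustrative (made-up) example of what a split line looks like on disk:

```
2024-03-05T17:50:01.147164351Z stdout P {"level":"DEBUG","message":"response getProducts | status 200 OK - body: [ProductDTO[...
2024-03-05T17:50:01.147164351Z stdout F ...]]"}
```

So if the line on disk is a single `F` entry, any truncation is happening later in the pipeline, not in CRI-O's splitting.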

Example log:

```
level=error ts=2024-03-05T17:50:01.147164351Z caller=client.go:430 component=client host=loki-d-loki-distributed-gateway msg="final error sending batch" status=400 tenant= error="server returned HTTP status 400 Bad Request (400): stream '{app="admin-app", container="admin-app", filename="/var/log/pods/dev_admin-app-dev-7d97745895-8xw9f_7f680792-fae9-48d3-b511-bc170801c119/admin-app/0.log", instance="admin-app-dev", job="dev/admin-app", level="DEBUG", logger_name="com.itdynamics.ergo.admin.common.monitoring.MonitoredAspect", message="response getProducts | status 200 OK - body: [ProductDTO[id=pdct_SklMG8OCQnu8t-XZvUiiYg, name=Apple, unitPrice=1.00, category=ProductCategoryDTO[id=pc_65-n-NkGSICdUO3lPjG0YQ, name=fruits, priority=3], workingZone=WorkingZoneDTO[id=wz_osgE9C18ShGtVlCI98oX7g, name=work zone 2], description=Delicious red Apple, ingredients=Apple, minQuantity=1, maxQuantity=12, measureUnit=MeasureUnit[id=1, name=piece], referenceUnit=0.00, specialProduct=false, enabled=true, outOfStock=false, priority=1, vatCode=RO03, imageUrl=https://api-somehost.com/static/images/restaurant/rst_QWpokzyqSRSyaNoo10LEmg/products/pdct_SklMG8OCQnu8t-XZvUiiYg_ca39123c-2772-4a9d-b4b5-bf70f6b50c6b.jpeg], ProductDTO[id=pdct_y0fk68HoR4OPWRZsYiIU"

level=info ts=2024-03-05T17:50:01.646556353Z caller=tailer.go:206 component=tailer msg="skipping update of position for a file which does not currently exist" path=/var/log/pods/monitoring_loki-d-loki-distributed-query-frontend-7bbc4d888f-82h6c_fabd3c2f-48a6-4f80-bc8e-3582274d97a7/query-frontend/2.log

level=info ts=2024-03-05T17:50:11.646796973Z caller=tailer.go:206 component=tailer msg="skipping update of position for a file which does not currently exist" path=/var/log/pods/monitoring_loki-d-loki-distributed-query-frontend-7bbc4d888f-82h6c_fabd3c2f-48a6-4f80-bc8e-3582274d97a7/query-frontend/2.log
```

This is my Promtail config (Helm values) for the Loki distributed setup:

```yaml
config:
  clients:
    - url: http://loki-d-loki-distributed-gateway/loki/api/v1/push
  snippets:
    scrapeConfigs: |
      - job_name: kubernetes-pods
        pipeline_stages:
          - cri: {}
          - json:
              expressions:
                timestamp: timestamp
                message: message
                thread_name: thread_name
                logger_name: logger_name
                stack_trace: stack_trace
                level: level
                trace_id: trace_id
                span_id: span_id
                user_id: user_id
          - drop:
              expression: ".*kube-probe.*"
          - labels:
              level:
              trace_id:
              span_id:
              message:
              user_id:
              thread_name:
              logger_name:
              stack_trace:
          - timestamp:
              source: timestamp
              format: RFC3339
          - output:
              source: message
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_controller_name
            regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
            action: replace
            target_label: __tmp_controller_name
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_name
              - __meta_kubernetes_pod_label_app
              - __tmp_controller_name
              - __meta_kubernetes_pod_name
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: app
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_instance
              - __meta_kubernetes_pod_label_instance
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: instance
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_component
              - __meta_kubernetes_pod_label_component
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: component
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_node_name
            target_label: node_name
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            replacement: $1
            separator: /
            source_labels:
              - namespace
              - app
            target_label: job
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
          - action: replace
            replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
              - __meta_kubernetes_pod_uid
              - __meta_kubernetes_pod_container_name
            target_label: __path__
          - action: replace
            regex: true/(.*)
            replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
              - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
              - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
              - __meta_kubernetes_pod_container_name
            target_label: __path__
```
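For anyone hitting the same 400s: one likely culprit (my assumption, not confirmed in this thread) is the `labels` stage promoting `message` and `stack_trace` to labels. Loki enforces a `max_label_value_length` limit (2048 by default) and rejects streams whose label values exceed it with HTTP 400, which would also explain why the message appears cut off inside the error, since it is being sent as a label value rather than as the log line. A safer sketch keeps only small, bounded fields as labels and leaves the message in the log line:

```yaml
pipeline_stages:
  - cri: {}
  - json:
      expressions:
        timestamp: timestamp
        message: message
        level: level
  # only low-cardinality, bounded values as labels;
  # message, stack_trace, trace_id, etc. stay in the log body
  - labels:
      level:
  - timestamp:
      source: timestamp
      format: RFC3339
  - output:
      source: message
```

High-cardinality fields such as `trace_id`, `span_id`, and `user_id` are also better queried with LogQL filters than turned into labels, since each distinct value creates a new stream.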

I realised that some configuration of mine was responsible, though I'm not sure which part yet; I reverted to the default Promtail Helm config.