cilium / tetragon

eBPF-based Security Observability and Runtime Enforcement

Home Page: https://tetragon.io

TracingPolicies do not get applied in WSL2

joshuajorel opened this issue

What happened?

I am building the main branch in my local WSL2 environment, and neither TracingPolicy nor TracingPolicyNamespaced objects get applied. I am running the following example:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: "fd-install"
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/tmp/tetragon"
      matchActions:
      - action: Sigkill
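
For reference, a typical way to exercise a policy like this (the file, namespace, and pod names below are illustrative, and a kube-system installation of Tetragon is assumed) is to apply it, open the watched path from a pod in that namespace, and watch for process_kprobe events:

$ kubectl apply -n default -f fd-install.yaml
$ kubectl exec -n default -it test-pod -- sh -c 'echo test > /tmp/tetragon'
$ kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact

With the Sigkill action in the selector, the shell writing to /tmp/tetragon should be killed as soon as the kprobe matches.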

Tetragon Version

Tetragon built from commit 1dee96d7d58b7ccc57e955eb71b4c1e72f87293d

Kernel Version

Linux DESKTOP-KI004JQ 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

Bugtool

tetragon-bugtool.tar.gz

Relevant log output

No response

Anything else?

Only process events get captured; no other events are captured.
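
One quick check here (assuming the tetra CLI is available in the Tetragon container and a kube-system install) is whether the agent considers the policy loaded and enabled at all:

$ kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra tracingpolicy list

If the policy is listed as enabled but still produces no process_kprobe events, the problem is more likely in policyfilter/cgroup tracking than in CRD handling.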

Thanks! Can you please provide a sysdump or the tetragon pod logs?

For the sysdump, please see https://tetragon.io/docs/troubleshooting/#automatic-log--state-collection.
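
For completeness, that dump can usually be collected with something along these lines (the pod name is a placeholder; a kube-system install is assumed):

$ kubectl exec -n kube-system <tetragon-pod> -c tetragon -- tetra bugtool
$ kubectl cp -c tetragon kube-system/<tetragon-pod>:tetragon-bugtool.tar.gz tetragon-bugtool.tar.gz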

It's not clear to me what exactly the issue is; I'll add some speculation and notes here for future reference.

We don't seem to have a proper /procRoot:

2024-04-17T06:46:13.367309227Z time="2024-04-17T06:46:13Z" level=warning msg="Tetragon pid file creation failed" error="readlink /procRoot/self: no such file or directory" pid=0
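
/procRoot is where the Tetragon DaemonSet typically bind-mounts the host's /proc; a quick sanity check (pod name is a placeholder) is whether that mount actually exists and is populated inside the container:

$ kubectl exec -n kube-system <tetragon-pod> -c tetragon -- ls /procRoot | head

If this is empty or missing, the pid-file warning above (and possibly the cgroup tracking below) would follow from the missing host /proc view.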

But at least in terms of metrics, everything seems fine:

tetragon_policyfilter_metrics_total{error="",op="add",subsys="pod-handlers"} 15
tetragon_policyfilter_metrics_total{error="",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="",op="update",subsys="pod-handlers"} 171
tetragon_policyfilter_metrics_total{error="",op="update",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="update",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="generic-error",op="update",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add-container",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="add-container",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="delete",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="delete",subsys="rthooks"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="update",subsys="pod-handlers"} 0
tetragon_policyfilter_metrics_total{error="pod-namespace-conflict",op="update",subsys="rthooks"} 0

And we do have some entries in the policyfilter map:

$ cat policy_filter_maps.json 
{"1":{"8253":{},"8283":{},"8313":{},"8343":{}}}                                             

But maybe these IDs are namespaced and do not correspond to the real cgroup IDs in the kernel.
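
The numeric outer key should be the policy ID and the inner keys the cgroup IDs being tracked. On recent kernels a cgroup ID matches the inode number of the cgroup directory on cgroupfs, so one way to cross-check (assuming cgroup v2 on the node; the workload name is illustrative) is to resolve a workload's cgroup path and compare its inode against the IDs above:

$ PID=$(pgrep -f <workload-binary> | head -1)
$ CG=$(awk -F: '$1 == "0" {print $3}' /proc/$PID/cgroup)
$ stat -c '%i %n' /sys/fs/cgroup$CG

If none of the inodes seen this way match 8253/8283/8313/8343, that would support the namespacing theory.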

We also seem to have some exit events, but definitely fewer than exec events:

tetragon_msg_op_total{msg_op="23"} 92 /* CLONE */
tetragon_msg_op_total{msg_op="24"} 5  /* DATA */
tetragon_msg_op_total{msg_op="5"} 4460 /* EXEC */
tetragon_msg_op_total{msg_op="7"} 72 /* EXIT */