k8snetworkplumbingwg / multus-service-archived

What if the request on 2nd interface detours into 1st interface on the worker nodes?

seungweonpark opened this issue · comments

I have deployed a k8s cluster on bare metal with multus-service + SR-IOV (1 controller node + 2 worker nodes). I have checked that the container has two interfaces (default CNI plus an additional SR-IOV network) and that multus-service is up and running on each worker node.

When requesting the service via the 2nd interface on the controller, the controller receives the request but detours it onto the 1st interface.

From the controller (2nd interface IP: 10.10.10.123, 1st interface IP: 192.168.1.123):

kubectl --namespace minio-tenant port-forward svc/minio-multus-service-2 9002:9000 --address=10.10.10.123

From a client outside the k8s cluster, a request to 10.10.10.123:9002 ends up on the 1st interface when I check each worker node. What do you think I am missing, since the request does not go through the 2nd interface?
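
If it helps, I can also capture on both NICs of a worker node while sending the request, to confirm which interface the traffic really takes (a rough sketch only; the interface names are the ones from the topology I describe further down, so adjust as needed):

tcpdump -nn -i ens801f1 tcp port 9000
tcpdump -nn -i ens802f0 tcp port 9000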

I'm still unclear about your situation, including the configuration.

Could you please share the following information with us?

  • pod yaml (service pods and the pods consuming the service)
  • service yaml
  • kubectl get pod (for all namespaces)
  • net-attach-def yaml
  • topology for the cluster (including sr-iov network. is that isolated from cluster network or unified?)

In addition, the following info would also help us troubleshoot (a command sketch follows the list):

  • iptables output in pod (in service pod and service consumer pod)
  • ip route output in pod (in service pod and service consumer pod)
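
For example, something along these lines should collect most of it (placeholders in angle brackets; adjust namespace, pod, and service names to your deployment):

kubectl -n <namespace> get pod <service-pod> -o yaml
kubectl -n <namespace> get svc <service-name> -o yaml
kubectl get pods -A -o wide
kubectl -n <namespace> get net-attach-def -o yaml
kubectl -n <namespace> exec <service-pod> -- ip route
kubectl -n <namespace> exec <service-pod> -- iptables -t nat -L -n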

When deploying the cluster, I originally attached two additional networks to the pod, giving it three networks in total including the default. I am redeploying with just one additional network (two networks in total); once I have tested, I will get back to you today with all the requested information. Thank you for the quick response.

pod_def.yml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: fe5962e5664d644cca9f4cc4fb31865a1469a2a6b44be55d00b1334aeed2338e
    cni.projectcalico.org/podIP: 10.244.168.41/32
    cni.projectcalico.org/podIPs: 10.244.168.41/32
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "default-cni-network",
          "ips": [
              "10.244.168.41"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "minio-tenant/minio-sriov-1",
          "interface": "net1",
          "ips": [
              "10.56.217.101"
          ],
          "mac": "0e:30:54:67:72:8e",
          "dns": {},
          "device-info": {
              "type": "pci",
              "version": "1.0.0",
              "pci": {
                  "pci-address": "0000:b1:12.4"
              }
          }
      }]
    k8s.v1.cni.cncf.io/networks: minio-sriov-1
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "default-cni-network",
          "ips": [
              "10.244.168.41"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "minio-tenant/minio-sriov-1",
          "interface": "net1",
          "ips": [
              "10.56.217.101"
          ],
          "mac": "0e:30:54:67:72:8e",
          "dns": {},
          "device-info": {
              "type": "pci",
              "version": "1.0.0",
              "pci": {
                  "pci-address": "0000:b1:12.4"
              }
          }
      }]
    meta.helm.sh/release-name: minio-tenant
    meta.helm.sh/release-namespace: minio-tenant
    min.io/revision: "0"
  creationTimestamp: "2022-09-13T16:23:34Z"
  generateName: minio-tenant-pool-0-
  labels:
    app: minio
    app.kubernetes.io/managed-by: Helm
    controller-revision-hash: minio-tenant-pool-0-6f7bdb467
    statefulset.kubernetes.io/pod-name: minio-tenant-pool-0-0
    v1.min.io/console: minio-tenant-console
    v1.min.io/pool: pool-0
    v1.min.io/tenant: minio-tenant
  name: minio-tenant-pool-0-0
  namespace: minio-tenant
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: minio-tenant-pool-0
    uid: b2a3798d-cfec-42a8-8740-9fce63e9aa26
  resourceVersion: "6348110"
  uid: f7acdf30-b946-4c4d-8c18-d0f8dbf0de7e
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: v1.min.io/tenant
            operator: In
            values:
            - minio-tenant
        topologyKey: kubernetes.io/hostname
  containers:
  - args:
    - server
    - --certs-dir
    - /tmp/certs
    - --console-address
    - :9090
    - --json
    - --anonymous
    - --quiet
    env:
    - name: MINIO_ARGS
      valueFrom:
        secretKeyRef:
          key: MINIO_ARGS
          name: operator-webhook-secret
    - name: MINIO_CONFIG_ENV_FILE
      value: /tmp/minio-config/config.env
    - name: MINIO_LOG_QUERY_AUTH_TOKEN
      valueFrom:
        secretKeyRef:
          key: MINIO_LOG_QUERY_AUTH_TOKEN
          name: minio-tenant-log-secret
    - name: MINIO_LOG_QUERY_URL
      value: http://minio-tenant-log-search-api:8080
    - name: MINIO_OPERATOR_VERSION
      value: v4.4.28
    - name: MINIO_PROMETHEUS_JOB_ID
      value: minio-job
    - name: MINIO_PROMETHEUS_URL
      value: http://minio-tenant-prometheus-hl-svc:9090
    - name: MINIO_SERVER_URL
      value: http://minio.minio-tenant.svc.cluster.local:80
    - name: MINIO_UPDATE
      value: "on"
    - name: MINIO_UPDATE_MINISIGN_PUBKEY
      value: RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav
    image: localhost:30500/minio:RELEASE.2022-08-22T23-53-06Z
    imagePullPolicy: IfNotPresent
    name: minio
    ports:
    - containerPort: 9000
      protocol: TCP
    - containerPort: 9090
      protocol: TCP
    resources:
      limits:
        intel.com/ens801f1_intelnics_1: "1"
      requests:
        intel.com/ens801f1_intelnics_1: "1"
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /export0
      name: data0
    - mountPath: /export1
      name: data1
    - mountPath: /export2
      name: data2
    - mountPath: /export3
      name: data3
    - mountPath: /tmp/certs
      name: minio-tenant-tls
    - mountPath: /tmp/minio-config
      name: configuration
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-c97bt
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: minio-tenant-pool-0-0
  nodeName: ar09-15-cyp
  nodeSelector:
    storage: minio
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: OnRootMismatch
    runAsGroup: 1000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccount: default
  serviceAccountName: default
  subdomain: minio-tenant-hl
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: data0
    persistentVolumeClaim:
      claimName: data0-minio-tenant-pool-0-0
  - name: data1
    persistentVolumeClaim:
      claimName: data1-minio-tenant-pool-0-0
  - name: data2
    persistentVolumeClaim:
      claimName: data2-minio-tenant-pool-0-0
  - name: data3
    persistentVolumeClaim:
      claimName: data3-minio-tenant-pool-0-0
  - name: minio-tenant-tls
    projected:
      defaultMode: 420
      sources:
      - secret:
          items:
          - key: public.crt
            path: CAs/operator.crt
          name: operator-tls
  - name: configuration
    projected:
      defaultMode: 420
      sources:
      - secret:
          name: minio-tenant-env-configuration
  - name: kube-api-access-c97bt
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-09-13T16:23:49Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-09-13T16:23:54Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-09-13T16:23:54Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-09-13T16:23:49Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://f9bcbfbd48ee04d4191b87ee70ec89594ac8dcbee193f1c881ef06d1a79ae28e
    image: localhost:30500/minio:RELEASE.2022-08-22T23-53-06Z
    imageID: localhost:30500/minio@sha256:878a374b2b87aae0f50f6a98669b603ab9d5f07495d15642756c4ac31c2263dd
    lastState: {}
    name: minio
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-09-13T16:23:53Z"
  hostIP: 10.166.31.74
  phase: Running
  podIP: 10.244.168.41
  podIPs:
  - ip: 10.244.168.41
  qosClass: BestEffort
  startTime: "2022-09-13T16:23:49Z"

service.yml

apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.v1.cni.cncf.io/service-network: minio-sriov-1
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"k8s.v1.cni.cncf.io/service-network":"minio-sriov-1"},"labels":{"service.kubernetes.io/service-proxy-name":"multus-proxy"},"name":"minio-multus-service-1","namespace":"minio-tenant"},"spec":{"ports":[{"port":9000,"protocol":"TCP"}],"selector":{"app":"minio"}}}
  creationTimestamp: "2022-09-13T16:13:37Z"
  labels:
    service.kubernetes.io/service-proxy-name: multus-proxy
  name: minio-multus-service-1
  namespace: minio-tenant
  resourceVersion: "6343597"
  uid: 618fd9e5-113b-4bb8-ba26-a68da608f65c
spec:
  clusterIP: 10.233.46.229
  clusterIPs:
  - 10.233.46.229
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: minio
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

kubectl get pod (all)

kube-system               cadvisor-967cq                                                1/1     Running                  0               10d
kube-system               cadvisor-gt2vj                                                1/1     Running                  0               10d
kube-system               calico-node-94r9z                                             1/1     Running                  1               11d
kube-system               calico-node-n2shl                                             1/1     Running                  1               11d
kube-system               calico-node-z6m4x                                             1/1     Running                  0               11d
kube-system               container-registry-688755fbbd-px6j8                           2/2     Running                  0               11d
kube-system               coredns-74d6c5659f-tsqc5                                      1/1     Running                  0               11d
kube-system               coredns-74d6c5659f-v5w57                                      1/1     Running                  0               11d
kube-system               dns-autoscaler-59b8867c86-jk8hb                               1/1     Running                  0               11d
kube-system               intel-sgx-plugin-p9kc8                                        1/1     Running                  0               11d
kube-system               intel-sgx-plugin-znvp2                                        1/1     Running                  0               11d
kube-system               inteldeviceplugins-controller-manager-59b46b7949-2bzxz        2/2     Running                  0               11d
kube-system               kube-afxdp-device-plugin-e2e-76xdb                            1/1     Running                  0               11d
kube-system               kube-afxdp-device-plugin-e2e-hxfq9                            1/1     Running                  0               11d
kube-system               kube-afxdp-device-plugin-e2e-j87xg                            1/1     Running                  0               11d
kube-system               kube-apiserver-ar09-01-cyp                                    1/1     Running                  0               11d
kube-system               kube-controller-manager-ar09-01-cyp                           1/1     Running                  2 (11d ago)     11d
kube-system               kube-multus-ds-amd64-268tc                                    1/1     Running                  1               11d
kube-system               kube-multus-ds-amd64-p9s8l                                    1/1     Running                  1               11d
kube-system               kube-multus-ds-amd64-x5b56                                    1/1     Running                  0               11d
kube-system               kube-proxy-nhl87                                              1/1     Running                  0               11d
kube-system               kube-proxy-rf48f                                              1/1     Running                  1               11d
kube-system               kube-proxy-v9b78                                              1/1     Running                  1               11d
kube-system               kube-scheduler-ar09-01-cyp                                    1/1     Running                  0               11d
kube-system               kubernetes-dashboard-648989c4b4-pwdqt                         1/1     Running                  0               11d
kube-system               kubernetes-metrics-scraper-84bbbc8b75-tq5hj                   1/1     Running                  0               11d
kube-system               multus-proxy-ds-amd64-ctt2g                                   1/1     Running                  0               11d
kube-system               multus-proxy-ds-amd64-l7rkg                                   1/1     Running                  0               11d
kube-system               multus-proxy-ds-amd64-pgk95                                   1/1     Running                  0               11d
kube-system               multus-service-controller-6676d877ff-dqw6f                    1/1     Running                  0               11d
kube-system               nginx-proxy-ar09-09-cyp                                       1/1     Running                  1               11d
kube-system               nginx-proxy-ar09-15-cyp                                       1/1     Running                  1               11d
kube-system               node-feature-discovery-master-7d8b59cbcf-6mmkk                1/1     Running                  0               11d
kube-system               node-feature-discovery-worker-ttpdh                           1/1     Running                  2 (16h ago)     11d
kube-system               node-feature-discovery-worker-xtsc5                           1/1     Running                  2 (16h ago)     11d
kube-system               tas-telemetry-aware-scheduling-6f6b8988dc-qbvqn               1/1     Running                  0               11d
kube-system               whereabouts-lll8z                                             1/1     Running                  0               24m
kube-system               whereabouts-q4vwl                                             1/1     Running                  0               24m
kube-system               whereabouts-t7fsz                                             1/1     Running                  0               24m
minio-operator            console-75ddccfbb-rp8jh                                       1/1     Running                  0               24m
minio-operator            minio-operator-699d559bf5-8lxhg                               1/1     Running                  0               24m
minio-operator            minio-operator-699d559bf5-k4wd8                               1/1     Running                  0               24m
minio-tenant-ingress      minio-kubernetes-ingress-nginx-ingress-56568fcfc6-2v89k       1/1     Running                  0               24m
minio-tenant              minio-tenant-log-0                                            1/1     Running                  0               20m
minio-tenant              minio-tenant-log-search-api-5bc4645d58-84g78                  1/1     Running                  3 (19m ago)     20m
minio-tenant              minio-tenant-pool-0-0                                         1/1     Running                  0               11m
minio-tenant              minio-tenant-pool-0-1                                         1/1     Running                  0               11m
minio-tenant              minio-tenant-prometheus-0                                     2/2     Running                  1 (9m57s ago)   19m
monitoring                kube-state-metrics-5d7b5d5bfc-tvk6m                           3/3     Running                  0               11d
monitoring                node-exporter-95496                                           2/2     Running                  0               11d
monitoring                node-exporter-flwnq                                           2/2     Running                  0               11d
monitoring                node-exporter-mdlmg                                           2/2     Running                  0               11d
monitoring                prometheus-k8s-0                                              4/4     Running                  0               11d
monitoring                prometheus-operator-68d5d49646-xh92r                          2/2     Running                  0               11d
monitoring                telegraf-s5lwq                                                2/2     Running                  0               113m
monitoring                telegraf-v99ht                                                2/2     Running                  0               7m36s
olm                       catalog-operator-6587ff6f69-mz6c8                             1/1     Running                  0               11d
olm                       olm-operator-6ccdf8f464-c74f7                                 1/1     Running                  0               11d
olm                       operatorhubio-catalog-79swd                                   1/1     Running                  0               44m
olm                       packageserver-5644c586b9-lwz6p                                1/1     Running                  0               11d
olm                       packageserver-5644c586b9-vc9s5                                1/1     Running                  0               11d
sriov-network-operator    sriov-device-plugin-6k2z5                                     1/1     Running                  0               11d
sriov-network-operator    sriov-device-plugin-8hpsx                                     1/1     Running                  0               11d
sriov-network-operator    sriov-network-config-daemon-4x5jd                             3/3     Running                  0               11d
sriov-network-operator    sriov-network-config-daemon-ggwt2                             3/3     Running                  0               11d
sriov-network-operator    sriov-network-operator-69bbd699f8-2hnxn                       1/1     Running                  0               11d

net-attach-def.yml

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/ens801f1_intelnics_1
  creationTimestamp: "2022-09-13T16:13:32Z"
  generation: 1
  name: minio-sriov-1
  namespace: minio-tenant
  resourceVersion: "6343550"
  uid: 7b2485c2-3ec0-42a9-ab00-79df9e35e054
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "minio-sriov-1",
    "type": "sriov",
    "vlan": 0,
    "spoofchk": "on",
    "trust": "off",
    "vlanQoS": 0,
    "capabilities": { "ips": true },
    "ipam": {
        "type": "whereabouts",
        "log_file": "/tmp/whereabouts.log",
        "log_level": "debug",
        "range": "10.56.217.0/16",
        "range_start": "10.56.217.100",
        "range_end": "10.56.217.200",
        "routes": [ { "dst": "0.0.0.0/0" } ],
        "gateway": "10.56.217.1"
    }
    }'

Topology: 1 controller + 2 worker nodes. All nodes are connected to the primary network via ens802f0 and to the secondary network via ens801f1. 10.166.x.x/23 is the primary network, 10.56.x.x/16 the secondary. I think this network is isolated.
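
As a quick sanity check of the secondary network, I can verify the VFs on the PF and try to reach the pod's net1 address from the controller over that network (10.56.217.101 is the pod's net1 IP from the pod yaml above; whether the ping succeeds also depends on the switch/VF configuration, so this is only a rough check):

ip link show ens801f1        # lists the VFs carved out of the PF
ping -c 3 10.56.217.101      # controller's ens801f1 -> pod's net1 over the SR-IOV network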

iptables / ip route output

root@ar09-01-cyp:/opt/cek/temp# kubectl exec -it minio-tenant-pool-0-0  -n minio-tenant -- ip route
default via 169.254.1.1 dev eth0
10.56.0.0/16 dev net1 proto kernel scope link src 10.56.217.101
10.233.16.221 dev net1 proto kernel src 10.56.217.101
169.254.1.1 dev eth0 scope link
root@ar09-01-cyp:/opt/cek/temp# kubectl exec -it minio-tenant-pool-0-1  -n minio-tenant -- ip route
default via 169.254.1.1 dev eth0
10.56.0.0/16 dev net1 proto kernel scope link src 10.56.217.100
10.233.16.221 dev net1 proto kernel src 10.56.217.100
169.254.1.1 dev eth0 scope link
root@ar09-01-cyp:/opt/cek/temp# kubectl exec -it minio-tenant-pool-0-0  -n minio-tenant -- iptables --list
iptables v1.8.7 (nf_tables): Could not fetch rule set generation id: Permission denied (you must be root)

command terminated with exit code 4
root@ar09-01-cyp:/opt/cek/temp# kubectl exec -it minio-tenant-pool-0-1  -n minio-tenant -- iptables --list
iptables v1.8.7 (nf_tables): Could not fetch rule set generation id: Permission denied (you must be root)

command terminated with exit code 4
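
Since the MinIO container runs as a non-root user, iptables cannot be read from inside the pod. As a workaround I can enter the pod's network namespace from the worker node that hosts it (a rough sketch; the jq path into the CRI-O inspect output is a guess and may differ by runtime version):

CID=$(crictl ps --name minio -o json | jq -r '.containers[0].id')
PID=$(crictl inspect "$CID" | jq -r '.info.pid')
nsenter -t "$PID" -n iptables -t nat -L -n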

@Tomo, let me know if you need more information.

From the controller node (ip route):

root@ar09-01-cyp:~# ip route
default via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.30.34 metric 100
10.3.86.116 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.30.34 metric 100
10.22.224.196 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.30.34 metric 100
10.56.0.0/16 dev ens801f1 proto kernel scope link src 10.56.218.201
10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 linkdown
10.166.30.0/23 dev ens802f0 proto kernel scope link src 10.166.30.34 metric 100
10.166.30.1 dev ens802f0 proto dhcp scope link src 10.166.30.34 metric 100
10.244.40.192/26 via 10.166.31.41 dev ens802f0 proto 80 onlink
blackhole 10.244.116.192/26 proto 80
10.244.116.193 dev cali28a02b36326 scope link
10.244.116.194 dev cali87e1d889c4f scope link
10.244.116.195 dev caliec48f1156af scope link
10.244.116.196 dev calic8c6536c2c1 scope link
10.244.116.197 dev cali74b93406f4c scope link
10.244.116.198 dev cali3b92da7edd8 scope link
10.244.116.199 dev cali0cf343a54c9 scope link
10.244.168.0/26 via 10.166.31.74 dev ens802f0 proto 80 onlink
10.248.2.1 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.30.34 metric 100

From worker node#1:

root@ar09-09-cyp:~# ip route
default via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.41 metric 100
10.3.86.116 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.41 metric 100
10.22.224.196 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.41 metric 100
10.166.30.0/23 dev ens802f0 proto kernel scope link src 10.166.31.41 metric 100
10.166.30.1 dev ens802f0 proto dhcp scope link src 10.166.31.41 metric 100
blackhole 10.244.40.192/26 proto 80
10.244.40.194 dev cali0c9b5237240 scope link
10.244.40.195 dev cali9fa2babcb63 scope link
10.244.40.202 dev cali1424a3c2438 scope link
10.244.40.203 dev cali1698d834830 scope link
10.244.40.205 dev calif942acb9419 scope link
10.244.40.207 dev calia198257e573 scope link
10.244.40.208 dev cali997302d0301 scope link
10.244.40.210 dev cali65f5854177f scope link
10.244.40.211 dev calidf31766df1c scope link
10.244.40.216 dev calid089b846f70 scope link
10.244.40.217 dev calicef17201f76 scope link
10.244.40.218 dev cali6041b6a32c7 scope link
10.244.40.219 dev calib9ff7edaaeb scope link
10.244.40.220 dev calid3d342ccb5a scope link
10.244.40.221 dev cali02e7fb2fda8 scope link
10.244.40.223 dev cali08e9ff7688d scope link
10.244.40.224 dev cali21cb355800c scope link
10.244.40.226 dev cali1b6389d94e3 scope link
10.244.40.229 dev calib9c5b04c8de scope link
10.244.40.230 dev caliaa93bedc01f scope link
10.244.40.231 dev cali24bdf59af42 scope link
10.244.40.233 dev cali4185d20e84d scope link
10.244.40.236 dev cali8834c7b3b66 scope link
10.244.40.237 dev cali2b697405db2 scope link
10.244.40.239 dev cali83053a0d275 scope link
10.244.40.241 dev cali17568dcb899 scope link
10.244.40.243 dev calid75abf4f5e0 scope link
10.244.40.244 dev cali868fde1e6d8 scope link
10.244.40.248 dev calic3b160bbf66 scope link
10.244.40.255 dev caliad98619ef29 scope link
10.244.116.192/26 via 10.166.30.34 dev ens802f0 proto 80 onlink
10.244.168.0/26 via 10.166.31.74 dev ens802f0 proto 80 onlink
10.248.2.1 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.41 metric 100

From worker node#2:

root@ar09-15-cyp:~# ip route
default via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.74 metric 100
10.3.86.116 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.74 metric 100
10.22.224.196 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.74 metric 100
10.166.30.0/23 dev ens802f0 proto kernel scope link src 10.166.31.74 metric 100
10.166.30.1 dev ens802f0 proto dhcp scope link src 10.166.31.74 metric 100
10.244.40.192/26 via 10.166.31.41 dev ens802f0 proto 80 onlink
10.244.116.192/26 via 10.166.30.34 dev ens802f0 proto 80 onlink
10.244.168.3 dev cali9cf84c096fe scope link
10.244.168.11 dev calidfaf3085bbf scope link
10.244.168.22 dev cali20682c6ad11 scope link
10.244.168.29 dev calibe9ac94ca25 scope link
10.244.168.39 dev calic309945bd6f scope link
10.244.168.42 dev cali17825f62240 scope link
10.244.168.51 dev cali47a2ee29e61 scope link
10.244.168.60 dev cali0fd976c8b0b scope link
10.248.2.1 via 10.166.30.1 dev ens802f0 proto dhcp src 10.166.31.74 metric 100

Looking at the logs of the multus-service proxies, the two proxies on the worker nodes have error messages like the ones below. I don't know how to interpret them.

root@ar09-01-cyp:~# kubectl get pods -A -o wide |grep multus
kube-system               kube-multus-ds-amd64-268tc                                    1/1     Running                  1               12d     10.166.31.74     ar09-15-cyp   <none>           <none>
kube-system               kube-multus-ds-amd64-p9s8l                                    1/1     Running                  1               12d     10.166.31.41     ar09-09-cyp   <none>           <none>
kube-system               kube-multus-ds-amd64-x5b56                                    1/1     Running                  0               12d     10.166.30.34     ar09-01-cyp   <none>           <none>
kube-system               multus-proxy-ds-amd64-ctt2g                                   1/1     Running                  0               11d     10.166.31.41     ar09-09-cyp   <none>           <none>
kube-system               multus-proxy-ds-amd64-l7rkg                                   1/1     Running                  0               11d     10.166.31.74     ar09-15-cyp   <none>           <none>
kube-system               multus-proxy-ds-amd64-pgk95                                   1/1     Running                  0               11d     10.166.30.34     ar09-01-cyp   <none>           <none>
kube-system               multus-service-controller-6676d877ff-dqw6f                    1/1     Running                  0               11d     10.244.116.199   ar09-01-cyp   <none>           <none>
root@ar09-01-cyp:~# kubectl logs multus-service-controller-6676d877ff-dqw6f -n kube-system
I0901 21:28:53.941331       1 server.go:99] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0901 21:28:53.942953       1 options.go:88] hostname: multus-service-controller-6676d877ff-dqw6f
I0901 21:28:53.942986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/multus-service-controller...
I0901 21:28:53.968512       1 leaderelection.go:253] successfully acquired lease kube-system/multus-service-controller
I0901 21:28:53.968707       1 endpointslice_controller.go:259] Starting endpoint slice controller
I0901 21:28:53.968740       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0901 21:28:54.169653       1 shared_informer.go:247] Caches are synced for endpoint_slice
W0913 22:33:12.101430       1 endpointslice_controller.go:308] Error syncing endpoint slices for service "minio-tenant/minio-multus-service-1", retrying. Error: EndpointSlice informer cache is out of date

I think the multus-service-controller error above points to a problem syncing the endpoint slices.
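
To see whether the controller eventually produced EndpointSlices carrying the net1 (10.56.x.x) addresses, I can check them directly (service name as in the yaml above; the label selector assumes the controller labels its slices the same way the standard endpoint slice controller does):

kubectl -n minio-tenant get endpointslices -l kubernetes.io/service-name=minio-multus-service-1 -o yaml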

Proxy Logs

The proxy log on the controller node seems OK, but the logs on the worker nodes have many errors.

root@ar09-01-cyp:~# kubectl logs multus-proxy-ds-amd64-pgk95  -n kube-system
I0901 21:28:53.278326       1 server.go:186] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0901 21:28:53.288133       1 options.go:71] hostname: ar09-01-cyp
I0901 21:28:53.288156       1 options.go:72] container-runtime: cri
I0901 21:28:53.288465       1 pod.go:123] Starting pod config controller
I0901 21:28:53.288522       1 server.go:172] Starting multus-proxy
I0901 21:28:53.288542       1 shared_informer.go:240] Waiting for caches to sync for pod config
I0901 21:28:53.289080       1 endpointslice.go:89] Starting EndpointSlice config controller
I0901 21:28:53.289128       1 shared_informer.go:240] Waiting for caches to sync for EndpointSlice config
I0901 21:28:53.289201       1 service.go:84] Starting Service config controller
I0901 21:28:53.290906       1 shared_informer.go:240] Waiting for caches to sync for Service config
I0901 21:28:53.389393       1 shared_informer.go:247] Caches are synced for pod config
I0901 21:28:53.391699       1 shared_informer.go:247] Caches are synced for EndpointSlice config
I0901 21:28:53.391756       1 shared_informer.go:247] Caches are synced for Service config



root@ar09-01-cyp:~# kubectl logs multus-proxy-ds-amd64-ctt2g  -n kube-system
I0901 21:28:52.699689       1 server.go:186] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0901 21:28:52.709168       1 options.go:71] hostname: ar09-09-cyp
I0901 21:28:52.709191       1 options.go:72] container-runtime: cri
I0901 21:28:52.709618       1 server.go:172] Starting multus-proxy
I0901 21:28:52.709634       1 pod.go:123] Starting pod config controller
I0901 21:28:52.709684       1 shared_informer.go:240] Waiting for caches to sync for pod config
I0901 21:28:52.709981       1 endpointslice.go:89] Starting EndpointSlice config controller
I0901 21:28:52.710016       1 shared_informer.go:240] Waiting for caches to sync for EndpointSlice config
I0901 21:28:52.710161       1 service.go:84] Starting Service config controller
I0901 21:28:52.710195       1 shared_informer.go:240] Waiting for caches to sync for Service config
I0901 21:28:52.810294       1 shared_informer.go:247] Caches are synced for pod config
I0901 21:28:52.810353       1 shared_informer.go:247] Caches are synced for Service config
I0901 21:28:52.810418       1 shared_informer.go:247] Caches are synced for EndpointSlice config
E0901 21:28:52.844750       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.844800       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.844821       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.844845       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.844868       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.844886       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.844909       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.844959       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.844979       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.844996       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.845027       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.845054       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.845073       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.845094       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.845160       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.845185       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.845202       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.845224       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.845246       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.845264       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.845278       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.845356       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.845378       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.845396       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.845418       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.845438       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.845455       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.845470       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.845545       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.845564       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.845582       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.845614       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.845635       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.845668       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.845690       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.845742       1 server.go:423] cannot get kube-system/inteldeviceplugins-controller-manager-59b46b7949-2bzxz podInfo: not found
E0901 21:28:52.845768       1 server.go:423] cannot get olm/olm-operator-6ccdf8f464-c74f7 podInfo: not found
E0901 21:28:52.845785       1 server.go:423] cannot get cert-manager/cert-manager-758558b8bd-5mqkd podInfo: not found
E0901 21:28:52.845806       1 server.go:423] cannot get monitoring/prometheus-k8s-0 podInfo: not found
E0901 21:28:52.845826       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.845845       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.845860       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
E0901 21:28:52.845914       1 server.go:423] cannot get monitoring/kube-state-metrics-5d7b5d5bfc-tvk6m podInfo: not found
E0901 21:28:52.845932       1 server.go:423] cannot get kube-system/coredns-74d6c5659f-v5w57 podInfo: not found
E0901 21:28:52.845949       1 server.go:423] cannot get minio-operator/minio-operator-699d559bf5-z24mx podInfo: not found
...
E0912 23:23:26.561112       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202": specified container not found: 8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202
E0912 23:23:26.561685       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202": specified container not found: 8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202
E0912 23:23:26.562625       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202": specified container not found: 8da05902f523465e42816e6e21f7f2101ad6ecd2656554e67d6ca4e67397c202
E0912 23:34:47.993847       1 server.go:438] cannot get netns /host//proc/4077720/ns/net(<nil>): failed to Statfs "/host//proc/4077720/ns/net": no such file or directory
E0912 23:34:48.446408       1 server.go:438] cannot get netns /host//proc/4077720/ns/net(<nil>): failed to Statfs "/host//proc/4077720/ns/net": no such file or directory
E0912 23:34:48.448030       1 server.go:438] cannot get netns /host//proc/4077720/ns/net(<nil>): failed to Statfs "/host//proc/4077720/ns/net": no such file or directory
E0912 23:34:49.026313       1 server.go:438] cannot get netns /host//proc/4077720/ns/net(<nil>): failed to Statfs "/host//proc/4077720/ns/net": no such file or directory
E0912 23:34:49.751377       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976": specified container not found: 06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976
E0912 23:34:49.752078       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976": specified container not found: 06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976
E0912 23:34:49.752748       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976": specified container not found: 06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976
E0912 23:34:49.753557       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976": specified container not found: 06730118b98f7c2ec16bf603dbac3163d7fb4910492eb562b98e4e10dd860976
E0912 23:34:57.771982       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:57.788704       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:57.789234       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:58.448770       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:58.450628       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:58.567955       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0912 23:34:59.799537       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "bf6fb6d8c96716fed4b938ca1b2f9139c5675f8f1c39173d73cd426f091b1386": specified container not found: bf6fb6d8c96716fed4b938ca1b2f9139c5675f8f1c39173d73cd426f091b1386
E0912 23:34:59.800589       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "bf6fb6d8c96716fed4b938ca1b2f9139c5675f8f1c39173d73cd426f091b1386": specified container not found: bf6fb6d8c96716fed4b938ca1b2f9139c5675f8f1c39173d73cd426f091b1386
E0913 00:47:47.207570       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-jwrs7) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b": specified container not found: d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b
E0913 00:47:47.246310       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-jwrs7) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b": container with ID starting with d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b not found: ID does not exist
E0913 00:47:47.246971       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-jwrs7) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b": container with ID starting with d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b not found: ID does not exist
E0913 00:47:47.286389       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-jwrs7) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b": container with ID starting with d4b23a0e0e0b2307232b7082d3c6193ddc3bc7cdbb7b7921abd152fbfb2da18b not found: ID does not exist
E0913 00:47:48.873174       1 server.go:438] cannot get netns /host//proc/4092095/ns/net(<nil>): failed to Statfs "/host//proc/4092095/ns/net": no such file or directory
E0913 00:47:49.038779       1 server.go:438] cannot get netns /host//proc/4092095/ns/net(<nil>): failed to Statfs "/host//proc/4092095/ns/net": no such file or directory
E0913 00:47:49.039037       1 server.go:438] cannot get netns /host//proc/4092095/ns/net(<nil>): failed to Statfs "/host//proc/4092095/ns/net": no such file or directory
E0913 00:47:50.216537       1 server.go:438] cannot get netns /host//proc/4092095/ns/net(<nil>): failed to Statfs "/host//proc/4092095/ns/net": no such file or directory
E0913 00:47:50.226059       1 server.go:438] cannot get netns /host//proc/4092095/ns/net(<nil>): failed to Statfs "/host//proc/4092095/ns/net": no such file or directory
E0913 00:58:02.031303       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-p86jk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0": specified container not found: a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0
E0913 00:58:02.072615       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-p86jk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0": container with ID starting with a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0 not found: ID does not exist
E0913 00:58:02.073157       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-p86jk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0": container with ID starting with a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0 not found: ID does not exist
E0913 00:58:02.112238       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-p86jk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0": container with ID starting with a807871a07afd516c02a61a3b76943d34302d9b8bd3bcd1d4f297580473dd3a0 not found: ID does not exist
E0913 00:58:02.707875       1 server.go:438] cannot get netns /host//proc/139105/ns/net(<nil>): failed to Statfs "/host//proc/139105/ns/net": no such file or directory
E0913 00:58:03.738812       1 server.go:438] cannot get netns /host//proc/139105/ns/net(<nil>): failed to Statfs "/host//proc/139105/ns/net": no such file or directory
E0913 00:58:04.118423       1 server.go:438] cannot get netns /host//proc/139105/ns/net(<nil>): failed to Statfs "/host//proc/139105/ns/net": no such file or directory
E0913 00:58:04.118617       1 server.go:438] cannot get netns /host//proc/139105/ns/net(<nil>): failed to Statfs "/host//proc/139105/ns/net": no such file or directory
E0913 00:58:05.040426       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a": specified container not found: d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a
E0913 00:58:05.041414       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a": specified container not found: d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a
E0913 00:58:05.041697       1 server.go:438] cannot get netns /host//proc/139105/ns/net(<nil>): failed to Statfs "/host//proc/139105/ns/net": no such file or directory
E0913 00:58:05.050307       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a": specified container not found: d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a
E0913 00:58:05.051112       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a": specified container not found: d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a
E0913 00:58:05.052077       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a": specified container not found: d3bc41b1ba7f364a23cc760dc573f1b15d2a542409517490a04dc3cc06925e3a
E0913 01:09:21.067396       1 server.go:438] cannot get netns /host//proc/169860/ns/net(<nil>): failed to Statfs "/host//proc/169860/ns/net": no such file or directory
E0913 01:09:22.041364       1 server.go:438] cannot get netns /host//proc/169860/ns/net(<nil>): failed to Statfs "/host//proc/169860/ns/net": no such file or directory
E0913 01:09:23.304548       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a": specified container not found: 091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a
E0913 01:09:23.305526       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a": specified container not found: 091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a
E0913 01:09:23.306305       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a": specified container not found: 091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a
E0913 01:09:23.307082       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a": specified container not found: 091a11a60f728cba3a91009ff5f88c55e004a4219bf79ac299a421d70fdd969a
E0913 01:09:29.324975       1 server.go:438] cannot get netns /host//proc/183632/ns/net(<nil>): failed to Statfs "/host//proc/183632/ns/net": no such file or directory
E0913 01:09:29.325230       1 server.go:438] cannot get netns /host//proc/183632/ns/net(<nil>): failed to Statfs "/host//proc/183632/ns/net": no such file or directory
E0913 01:09:30.342345       1 server.go:438] cannot get netns /host//proc/183632/ns/net(<nil>): failed to Statfs "/host//proc/183632/ns/net": no such file or directory
E0913 01:09:30.885805       1 server.go:438] cannot get netns /host//proc/183632/ns/net(<nil>): failed to Statfs "/host//proc/183632/ns/net": no such file or directory
E0913 01:09:30.886332       1 server.go:438] cannot get netns /host//proc/183632/ns/net(<nil>): failed to Statfs "/host//proc/183632/ns/net": no such file or directory
E0913 01:09:31.247129       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9": specified container not found: 58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9
E0913 01:09:31.248135       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9": specified container not found: 58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9
E0913 01:09:31.248928       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9": specified container not found: 58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9
E0913 01:09:31.249875       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9": specified container not found: 58004cdc87b9b4449f82cd85432b3dbfee3af3011d8411de1b2ebd16e9a677a9
E0913 15:39:15.514169       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:15.515298       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-hgt9d) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e": container with ID starting with a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e not found: ID does not exist
E0913 15:39:15.516105       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-hgt9d) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e": container with ID starting with a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e not found: ID does not exist
E0913 15:39:15.516361       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:15.517123       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-hgt9d) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e": container with ID starting with a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e not found: ID does not exist
E0913 15:39:15.517798       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-hgt9d) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e": container with ID starting with a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e not found: ID does not exist
E0913 15:39:15.518050       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:15.518788       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-hgt9d) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e": container with ID starting with a72b1570b12aaa600712db2d7cf31effdfab92b8245a7a4881fc8ad5840ef80e not found: ID does not exist
E0913 15:39:15.519031       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:15.519140       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:16.222831       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:16.223024       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:16.429713       1 server.go:438] cannot get netns /host//proc/184449/ns/net(<nil>): failed to Statfs "/host//proc/184449/ns/net": no such file or directory
E0913 15:39:17.320882       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e": specified container not found: 17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e
E0913 15:39:17.321863       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e": specified container not found: 17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e
E0913 15:39:17.322678       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e": specified container not found: 17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e
E0913 15:39:17.323608       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e": specified container not found: 17ba3f9b45ae9755ce7191233cd754b8c542475fe2339729f8f7f012fff8196e
E0913 16:05:48.684867       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-g4fvj) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5": specified container not found: 6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5
E0913 16:05:48.685822       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-g4fvj) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5": specified container not found: 6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5
E0913 16:05:48.686486       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-g4fvj) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5": specified container not found: 6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5
E0913 16:05:48.687439       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-g4fvj) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5": specified container not found: 6826bd07acf7bc60c80b56e101c886af9c982bff54fe57230b05276d14852bd5
E0913 16:23:19.372727       1 server.go:438] cannot get netns /host//proc/2808538/ns/net(<nil>): failed to Statfs "/host//proc/2808538/ns/net": no such file or directory
E0913 16:23:20.578791       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b": specified container not found: 88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b
E0913 16:23:20.579606       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b": specified container not found: 88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b
E0913 16:23:20.579869       1 server.go:438] cannot get netns /host//proc/2808538/ns/net(<nil>): failed to Statfs "/host//proc/2808538/ns/net": no such file or directory
E0913 16:23:20.588820       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b": container with ID starting with 88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b not found: ID does not exist
E0913 16:23:20.589422       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b": container with ID starting with 88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b not found: ID does not exist
E0913 16:23:20.589738       1 server.go:438] cannot get netns /host//proc/2808538/ns/net(<nil>): failed to Statfs "/host//proc/2808538/ns/net": no such file or directory
E0913 16:23:20.592380       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b": container with ID starting with 88559b0e3014dfe761bdfc78bd11df954fcfa4fb034a4398c826b24a5a59035b not found: ID does not exist
E0913 16:23:20.592571       1 server.go:423] cannot get minio-tenant/minio-tenant-pool-0-1 podInfo: not found
E0913 16:23:20.593044       1 server.go:423] cannot get minio-tenant/minio-tenant-pool-0-1 podInfo: not found
E0913 16:23:26.721330       1 server.go:438] cannot get netns /host//proc/2834266/ns/net(<nil>): failed to Statfs "/host//proc/2834266/ns/net": no such file or directory
E0913 16:23:27.386977       1 server.go:438] cannot get netns /host//proc/2834266/ns/net(<nil>): failed to Statfs "/host//proc/2834266/ns/net": no such file or directory
E0913 16:23:28.538234       1 server.go:438] cannot get netns /host//proc/2834266/ns/net(<nil>): failed to Statfs "/host//proc/2834266/ns/net": no such file or directory
E0913 16:23:28.638207       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094": specified container not found: 242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094
E0913 16:23:28.638396       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0913 16:23:28.648264       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094": specified container not found: 242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094
E0913 16:23:28.652041       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094": container with ID starting with 242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094 not found: ID does not exist
E0913 16:23:28.652256       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0913 16:23:28.653890       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094": container with ID starting with 242ea85c14c3030733bbce8469fdbb2bc5d1647fddf9fdb72dde72a2597c7094 not found: ID does not exist
E0913 16:42:34.684593       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:34.685543       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-8lxhg) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72": container with ID starting with 331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72 not found: ID does not exist
E0913 16:42:34.686145       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-8lxhg) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72": container with ID starting with 331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72 not found: ID does not exist
E0913 16:42:34.686366       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:34.687208       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-8lxhg) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72": container with ID starting with 331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72 not found: ID does not exist
E0913 16:42:34.687824       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-8lxhg) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72": container with ID starting with 331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72 not found: ID does not exist
E0913 16:42:34.688159       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:34.688741       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-8lxhg) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72": container with ID starting with 331bf8950d5d6d59f80cfca018f727722fb7fdc70b0b07afe7cb2d44e3129b72 not found: ID does not exist
E0913 16:42:34.688842       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:34.689084       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:35.392616       1 server.go:438] cannot get netns /host//proc/2835195/ns/net(<nil>): failed to Statfs "/host//proc/2835195/ns/net": no such file or directory
E0913 16:42:36.108478       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c": specified container not found: a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c
E0913 16:42:36.109552       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c": specified container not found: a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c
E0913 16:42:36.110360       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c": specified container not found: a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c
E0913 16:42:36.111243       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c": specified container not found: a229eb5895ffd9501881c760e94e78468331d702bfee1c4f0bcb924a3945aa5c
E0913 17:05:15.173206       1 server.go:438] cannot get netns /host//proc/2920200/ns/net(<nil>): failed to Statfs "/host//proc/2920200/ns/net": no such file or directory
E0913 17:05:15.742520       1 server.go:438] cannot get netns /host//proc/2920200/ns/net(<nil>): failed to Statfs "/host//proc/2920200/ns/net": no such file or directory
E0913 17:05:16.575745       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca": specified container not found: 01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca
E0913 17:05:16.576909       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca": specified container not found: 01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca
E0913 17:05:16.577738       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca": specified container not found: 01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca
E0913 17:05:16.578618       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca": specified container not found: 01bf53744022b7c5d6611367cffa14847c0314afd221635debafeaa19c10a3ca
E0913 17:05:22.709719       1 server.go:438] cannot get netns /host//proc/2963338/ns/net(<nil>): failed to Statfs "/host//proc/2963338/ns/net": no such file or directory
E0913 17:05:23.380329       1 server.go:438] cannot get netns /host//proc/2963338/ns/net(<nil>): failed to Statfs "/host//proc/2963338/ns/net": no such file or directory
E0913 17:05:24.605089       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d": specified container not found: ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d
E0913 17:05:24.605885       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d": specified container not found: ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d
E0913 17:05:24.606162       1 server.go:438] cannot get netns /host//proc/2963338/ns/net(<nil>): failed to Statfs "/host//proc/2963338/ns/net": no such file or directory
E0913 17:05:24.612935       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d": specified container not found: ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d
E0913 17:05:24.613852       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d": container with ID starting with ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d not found: ID does not exist
E0913 17:05:24.614098       1 server.go:438] cannot get netns /host//proc/2963338/ns/net(<nil>): failed to Statfs "/host//proc/2963338/ns/net": no such file or directory
E0913 17:05:24.619321       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d": container with ID starting with ea977fa3db43ba29c5010ec4fa0db4608fd75050e37d340a808c57be7620642d not found: ID does not exist
E0913 17:33:32.579159       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-r729m) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd": specified container not found: 930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd
E0913 17:33:32.618217       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-r729m) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd": container with ID starting with 930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd not found: ID does not exist
E0913 17:33:32.618768       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-r729m) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd": container with ID starting with 930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd not found: ID does not exist
E0913 17:33:32.657640       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-r729m) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd": container with ID starting with 930da8c3a235ed3bd469c88d2fe1f572cc27ecaecbec2c6487dce0e04f20f9dd not found: ID does not exist
E0913 17:33:33.797950       1 server.go:438] cannot get netns /host//proc/2964063/ns/net(<nil>): failed to Statfs "/host//proc/2964063/ns/net": no such file or directory
E0913 17:33:34.591339       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf": specified container not found: 07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf
E0913 17:33:34.592218       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf": specified container not found: 07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf
E0913 17:33:34.592810       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf": specified container not found: 07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf
E0913 17:33:34.593796       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf": specified container not found: 07f7c7a026d13170f9ef21b93d072430d10f9bd3b9df8b351f0babfa17ae3dcf
E0913 17:44:22.340671       1 server.go:438] cannot get netns /host//proc/3070580/ns/net(<nil>): failed to Statfs "/host//proc/3070580/ns/net": no such file or directory
E0913 17:44:22.340905       1 server.go:438] cannot get netns /host//proc/3070580/ns/net(<nil>): failed to Statfs "/host//proc/3070580/ns/net": no such file or directory
E0913 17:44:23.007922       1 server.go:438] cannot get netns /host//proc/3070580/ns/net(<nil>): failed to Statfs "/host//proc/3070580/ns/net": no such file or directory
E0913 17:44:23.651957       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be": specified container not found: 5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be
E0913 17:44:23.653190       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be": specified container not found: 5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be
E0913 17:44:23.653945       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be": specified container not found: 5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be
E0913 17:44:23.654850       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be": specified container not found: 5ff45c973a263f64b8a3e149cafa2935732101f279dbc30fb6bbad2c392a72be
E0913 17:44:29.670299       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0913 17:44:30.107627       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0913 17:44:30.681175       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae": specified container not found: 9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae
E0913 17:44:30.682011       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae": specified container not found: 9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae
E0913 17:44:30.682311       1 server.go:438] cannot get netns /host//proc/0/ns/net(<nil>): failed to Statfs "/host//proc/0/ns/net": no such file or directory
E0913 17:44:30.685765       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae": specified container not found: 9b731bf673fe752597dd58dcbe95d256a04054976f121ed4f5aa033897d712ae
E0913 17:44:30.686869       1 server.go:423] cannot get minio-tenant/minio-tenant-pool-0-1 podInfo: not found
E0913 17:44:30.687470       1 server.go:423] cannot get minio-tenant/minio-tenant-pool-0-1 podInfo: not found
root@ar09-01-cyp:~#

root@ar09-01-cyp:~/.kube# kubectl get endpointslices -n minio-tenant

NAME                                   ADDRESSTYPE   PORTS   ENDPOINTS                     AGE
minio-4kghr                            IPv4          9000    10.244.40.255,10.244.168.29   3h53m
minio-multus-service-1-jplxm           IPv4          9000    10.244.40.255,10.244.168.29   3h52m
minio-multus-service-1-multus-9dfl5    IPv4          9000    10.56.217.101,10.56.217.100   3h52m
minio-tenant-console-x7q6m             IPv4          9090    10.244.40.255,10.244.168.29   3h53m
minio-tenant-hl-mvhnd                  IPv4          9000    10.244.40.255,10.244.168.29   3h53m
minio-tenant-log-hl-svc-vq7xz          IPv4          5432    10.244.168.31                 3h51m
minio-tenant-log-search-api-vcnjt      IPv4          8080    10.244.168.16                 3h51m
minio-tenant-prometheus-hl-svc-rxsmk   IPv4          9090    10.244.168.15                 3h49m

root@ar09-01-cyp:~/.kube# kubectl get endpointslices minio-multus-service-1-jplxm -n minio-tenant -o yaml

addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
  - 10.244.40.255
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: ar09-09-cyp
  targetRef:
    kind: Pod
    name: minio-tenant-pool-0-1
    namespace: minio-tenant
    uid: 2737c86a-9254-46bb-afbc-f7b8c6dc3a9a
- addresses:
  - 10.244.168.29
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: ar09-15-cyp
  targetRef:
    kind: Pod
    name: minio-tenant-pool-0-0
    namespace: minio-tenant
    uid: 0725c72b-778d-4cec-a54f-4fa892e1b1bc
kind: EndpointSlice
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2022-09-13T17:44:42Z"
  creationTimestamp: "2022-09-13T17:39:14Z"
  generateName: minio-multus-service-1-
  generation: 16
  labels:
    endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
    kubernetes.io/service-name: minio-multus-service-1
    service.kubernetes.io/service-proxy-name: multus-proxy
  name: minio-multus-service-1-jplxm
  namespace: minio-tenant
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: minio-multus-service-1
    uid: fc28c41d-fcda-43d2-9e4c-724518c1a8d0
  resourceVersion: "6380765"
  uid: 16d5c674-5c87-4eab-92fe-beacf48b119d
ports:
- name: ""
  port: 9000
  protocol: TCP

root@ar09-01-cyp:~/.kube# kubectl get endpointslices minio-multus-service-1-multus-9dfl5 -n minio-tenant -o yaml

addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
  - 10.56.217.101
  conditions:
    ready: true
  nodeName: ar09-09-cyp
  targetRef:
    kind: Pod
    name: minio-tenant-pool-0-1
    namespace: minio-tenant
    resourceVersion: "6380665"
    uid: 2737c86a-9254-46bb-afbc-f7b8c6dc3a9a
- addresses:
  - 10.56.217.100
  conditions:
    ready: true
  nodeName: ar09-15-cyp
  targetRef:
    kind: Pod
    name: minio-tenant-pool-0-0
    namespace: minio-tenant
    resourceVersion: "6380757"
    uid: 0725c72b-778d-4cec-a54f-4fa892e1b1bc
kind: EndpointSlice
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2022-09-13T17:44:42Z"
  creationTimestamp: "2022-09-13T17:39:14Z"
  generateName: minio-multus-service-1-multus-
  generation: 7
  labels:
    endpointslice.kubernetes.io/managed-by: multus-endpointslice-controller.npwg.k8s.io
    kubernetes.io/service-name: minio-multus-service-1
    service.kubernetes.io/service-proxy-name: multus-proxy
  name: minio-multus-service-1-multus-9dfl5
  namespace: minio-tenant
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: true
    kind: Service
    name: minio-multus-service-1
    uid: fc28c41d-fcda-43d2-9e4c-724518c1a8d0
  resourceVersion: "6380768"
  uid: bf81a1f0-92ec-4e73-915b-82ec484f1f41
ports:
- name: ""
  port: 9000
  protocol: TCP

iptables --list output from the controller node and the worker nodes is below; I can't run iptables from inside the container due to a permission issue.
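
As a workaround for that in-container permission issue, the pod-side rules can usually be inspected from the worker node by entering the pod's network namespace instead of execing into the container. A rough sketch, assuming crictl, jq and nsenter are available on the node (the .info.pid field layout is an assumption and can differ between container runtimes):

    # on the worker node that hosts the pod
    crictl ps --name minio -o json | jq -r '.containers[].id'   # find a container ID belonging to the pod
    PID=$(crictl inspect <container-id> | jq -r '.info.pid')    # host PID of that container
    nsenter -t "$PID" -n iptables -t nat -L -n                  # nat rules inside the pod's network namespace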

root@ar09-01-cyp:~# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
cali-INPUT  all  --  anywhere             anywhere             /* cali:Cz_u1IQiXIMmKD4c */
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
cali-FORWARD  all  --  anywhere             anywhere             /* cali:wUHhoiAYhphO9Mso */
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT     all  --  anywhere             anywhere             /* cali:S93hcgKJrXEqnTfs */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000
MARK       all  --  anywhere             anywhere             /* cali:mp77cMpurHhyjLrM */ MARK or 0x10000

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
cali-OUTPUT  all  --  anywhere             anywhere             /* cali:tVnHkvAo15HuiPy0 */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             /* minio-tenant-ingress/minio-kubernetes-ingress-nginx-ingress:https health check node port */ tcp dpt:32012
ACCEPT     tcp  --  anywhere             anywhere             /* minio-tenant-ingress/minio-kubernetes-ingress-nginx-ingress:http health check node port */ tcp dpt:32012

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             tcs-metrics-service.tcs.svc.cluster.local  /* tcs/tcs-metrics-service:https has no endpoints */ tcp dpt:8443 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             tac-metrics-service.tca.svc.cluster.local  /* tca/tac-metrics-service:https has no endpoints */ tcp dpt:8443 reject-with icmp-port-unreachable

... (too long)

Thank you for the information. I suspect that the multus-proxy errors on the worker node may be causing the issue, but I cannot find the root cause from the multus-proxy logs.

Could you please let me know how you installed multus-proxy, and share your worker node information (Linux kernel version, Linux distribution and Kubernetes distribution)?

It is simply deployed on Ubuntu 22.04 for test purposes, but we need to support RedHat/Rocky as well. I have created a video for troubleshooting and better understanding: link. If you don't mind, could we have a quick Zoom or Teams meeting?

I am sorry, but I couldn't make it because I do not have the time. I suppose the root cause comes from the configuration between the cluster and multus-proxy (at least, multus-proxy cannot get the pod information and the pod's network namespace information, so that could be the root cause).

I will try to find time to reproduce it locally.

So if you proceed with the troubleshooting, please check the multus-proxy service account and so on. There may be some reason why multus-proxy cannot get this information.
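
One quick way to check that is to impersonate the multus-proxy service account with kubectl auth can-i. A sketch, assuming the DaemonSet runs under a ServiceAccount named multus-proxy in kube-system (adjust to whatever your deploy.yml actually creates):

    SA=system:serviceaccount:kube-system:multus-proxy   # assumed name/namespace
    kubectl auth can-i get pods --all-namespaces --as="$SA"
    kubectl auth can-i list endpointslices.discovery.k8s.io --all-namespaces --as="$SA"
    kubectl auth can-i watch services --all-namespaces --as="$SA"

If any of these answer "no", the ClusterRole/ClusterRoleBinding in deploy.yml is the place to look.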

Fundamentally, I should not be seeing errors like the ones below, since they mean the minio-tenant* and minio-operator* pods are not accessible to multus-proxy. Maybe this is because of service account or namespace issues, right? I will investigate it further then. Thank you.

E0915 11:46:42.550763       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0915 11:46:42.575956       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63": container with ID starting with 09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63 not found: ID does not exist
E0915 11:46:42.576515       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63": container with ID starting with 09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63 not found: ID does not exist
E0915 11:46:42.576789       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0915 11:46:42.619064       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63": container with ID starting with 09c517e2d1dff09976157e9f944c6c4e28e9216a231956b029cf2da04e3c0a63 not found: ID does not exist
E0915 11:48:55.791889       1 server.go:507] failed to remove route: <nil>
E0915 11:54:42.063952       1 server.go:438] cannot get netns /host//proc/1796795/ns/net(<nil>): failed to Statfs "/host//proc/1796795/ns/net": no such file or directory
E0915 11:54:43.070309       1 server.go:438] cannot get netns /host//proc/1796795/ns/net(<nil>): failed to Statfs "/host//proc/1796795/ns/net": no such file or directory
E0915 11:54:43.645097       1 server.go:438] cannot get netns /host//proc/1796795/ns/net(<nil>): failed to Statfs "/host//proc/1796795/ns/net": no such file or directory
E0915 11:54:43.957952       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4": specified container not found: 0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4
E0915 11:54:43.960529       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4": specified container not found: 0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4
E0915 11:54:43.961822       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4": specified container not found: 0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4
E0915 11:54:43.963650       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4": specified container not found: 0471c53e70917ab2d776c0e006543ca3b05be34b2d6046e28362255d35ce7bc4
E0915 11:54:50.096199       1 server.go:438] cannot get netns /host//proc/1813882/ns/net(<nil>): failed to Statfs "/host//proc/1813882/ns/net": no such file or directory
E0915 11:54:50.684245       1 server.go:438] cannot get netns /host//proc/1813882/ns/net(<nil>): failed to Statfs "/host//proc/1813882/ns/net": no such file or directory
E0915 11:54:52.011008       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe": specified container not found: 694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe
E0915 11:54:52.011223       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0915 11:54:52.013949       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe": specified container not found: 694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe
E0915 11:54:52.014820       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe": specified container not found: 694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe
E0915 11:54:52.015091       1 server.go:438] cannot get netns /host/(<nil>): unknown FS magic on "/host/": ef53
E0915 11:54:52.017015       1 pod.go:351] failed to get pod(minio-tenant/minio-tenant-pool-0-1) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe": specified container not found: 694687d7d6ca4e767eb9af2ff79d06cca29a69721ed147577046c00b7ed013fe
E0915 11:59:12.307259       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-c6jvk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a": specified container not found: d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a
E0915 11:59:12.346965       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-c6jvk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a": container with ID starting with d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a not found: ID does not exist
E0915 11:59:12.347750       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-c6jvk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a": container with ID starting with d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a not found: ID does not exist
E0915 11:59:12.387035       1 pod.go:351] failed to get pod(minio-operator/minio-operator-699d559bf5-c6jvk) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a": container with ID starting with d4812714cf90145dde5b9fc4550e6335902358dd964d84e52f0c02cfa04fcf4a not found: ID does not exist

Right. As I mentioned, multus-proxy should be able to see those pods, and in that (successful) case we should not see such error messages.

MinIO Tenant deploys its pods using statefulsets, so I have added:

    resources:
      - pods
      - namespaces
      - nodes
      - services
      - statefulsets

and deployed deploy.yml in the kube-system namespace. Now I don't see as many pod.go:351 errors as before, but I observed the messages below. Any comments or thoughts?

root@ar09-01-cyp:/opt/cek/charts# kubectl logs multus-proxy-ds-amd64-rpkfq -n kube-system

I0915 16:11:34.119641       1 server.go:186] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0915 16:11:34.129168       1 options.go:71] hostname: ar09-01-cyp
I0915 16:11:34.129190       1 options.go:72] container-runtime: cri
I0915 16:11:34.129555       1 pod.go:123] Starting pod config controller
I0915 16:11:34.129591       1 server.go:172] Starting multus-proxy
I0915 16:11:34.129602       1 shared_informer.go:240] Waiting for caches to sync for pod config
I0915 16:11:34.129657       1 endpointslice.go:89] Starting EndpointSlice config controller
I0915 16:11:34.129669       1 shared_informer.go:240] Waiting for caches to sync for EndpointSlice config
I0915 16:11:34.129684       1 service.go:84] Starting Service config controller
I0915 16:11:34.129706       1 shared_informer.go:240] Waiting for caches to sync for Service config
I0915 16:11:34.230540       1 shared_informer.go:247] Caches are synced for pod config
I0915 16:11:34.230576       1 shared_informer.go:247] Caches are synced for Service config
I0915 16:11:34.230546       1 shared_informer.go:247] Caches are synced for EndpointSlice config
W0915 17:00:44.807734       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 17:00:44.807705       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 17:00:44.807879       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 17:00:44.807845       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 17:00:44.807918       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding

root@ar09-01-cyp:/opt/cek/charts# kubectl logs multus-proxy-ds-amd64-np74f -n kube-system

I0915 16:11:34.177730       1 server.go:186] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0915 16:11:34.187238       1 options.go:71] hostname: ar09-15-cyp
I0915 16:11:34.187260       1 options.go:72] container-runtime: cri
I0915 16:11:34.187644       1 pod.go:123] Starting pod config controller
I0915 16:11:34.187684       1 server.go:172] Starting multus-proxy
I0915 16:11:34.187700       1 shared_informer.go:240] Waiting for caches to sync for pod config
I0915 16:11:34.188393       1 service.go:84] Starting Service config controller
I0915 16:11:34.188421       1 endpointslice.go:89] Starting EndpointSlice config controller
I0915 16:11:34.188435       1 shared_informer.go:240] Waiting for caches to sync for Service config
I0915 16:11:34.188452       1 shared_informer.go:240] Waiting for caches to sync for EndpointSlice config
I0915 16:11:34.288485       1 shared_informer.go:247] Caches are synced for pod config
I0915 16:11:34.288575       1 shared_informer.go:247] Caches are synced for EndpointSlice config
I0915 16:11:34.288584       1 shared_informer.go:247] Caches are synced for Service config
E0915 16:13:57.814819       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-p4jpt) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff": specified container not found: 5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff
E0915 16:13:57.855774       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-p4jpt) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff": container with ID starting with 5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff not found: ID does not exist
E0915 16:13:57.856645       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-p4jpt) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff": container with ID starting with 5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff not found: ID does not exist
E0915 16:13:57.897738       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-p4jpt) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff": container with ID starting with 5ebcf0a84e5574f253f6893d0dc80f8e500526b426eb168d8975232cefc3b8ff not found: ID does not exist
E0915 17:13:57.418564       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-qxdpl) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a": specified container not found: 72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a
E0915 17:13:57.704148       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-qxdpl) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a": container with ID starting with 72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a not found: ID does not exist
E0915 17:13:57.704824       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-qxdpl) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a": container with ID starting with 72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a not found: ID does not exist
E0915 17:13:57.745227       1 pod.go:351] failed to get pod(olm/operatorhubio-catalog-qxdpl) network namespace: cannot get containerStatus: rpc error: code = NotFound desc = could not find container "72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a": container with ID starting with 72a7e259e72eff5f5f74e5c882cc12c4d00f65aa6366c7e977838fe48ea22b6a not found: ID does not exist

root@ar09-01-cyp:/opt/cek/charts# kubectl logs multus-proxy-ds-amd64-xvnjd -n kube-system

I0915 16:11:34.592461       1 server.go:186] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0915 16:11:34.601881       1 options.go:71] hostname: ar09-09-cyp
I0915 16:11:34.601907       1 options.go:72] container-runtime: cri
I0915 16:11:34.602250       1 server.go:172] Starting multus-proxy
I0915 16:11:34.602239       1 pod.go:123] Starting pod config controller
I0915 16:11:34.602303       1 shared_informer.go:240] Waiting for caches to sync for pod config
I0915 16:11:34.602562       1 service.go:84] Starting Service config controller
I0915 16:11:34.602619       1 shared_informer.go:240] Waiting for caches to sync for Service config
I0915 16:11:34.602809       1 endpointslice.go:89] Starting EndpointSlice config controller
I0915 16:11:34.602825       1 shared_informer.go:240] Waiting for caches to sync for EndpointSlice config
I0915 16:11:34.702880       1 shared_informer.go:247] Caches are synced for Service config
I0915 16:11:34.703000       1 shared_informer.go:247] Caches are synced for EndpointSlice config
I0915 16:11:34.803360       1 shared_informer.go:247] Caches are synced for pod config
W0915 16:30:14.691833       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:14.691852       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:14.691871       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:14.691896       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:14.691905       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:39.691817       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:39.691859       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:39.691918       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:39.691951       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:39.692104       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:44.692634       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:44.692679       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:44.692727       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:44.692766       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 16:30:44.692804       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding
W0915 17:14:37.690332       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: tls: oversized record received with length 20527") has prevented the request from succeeding

When running iperf between the two pods using the 2nd interface, I was able to send traffic via the net2 interface.

Right, so multus-proxy works like 'kube-proxy': it performs traffic redirection with iptables. In your situation Multus itself works well, so the multus interface can send and receive traffic, but service traffic cannot be delivered because the iptables redirection rules that should be generated by multus-proxy are missing. In your case, multus-proxy hits the error before it can inject the iptables rules.
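
Purely as an illustration of what such redirection looks like (the chain name and the 10.96.100.10 service IP below are made up; multus-proxy's actual chain names and addresses will differ), a kube-proxy-style rule set in the nat table is roughly:

    # hypothetical example only; 10.56.217.101:9000 is one of the real endpoints listed earlier
    iptables -t nat -N EXAMPLE-SVC-REDIRECT
    iptables -t nat -A EXAMPLE-SVC-REDIRECT -p tcp -j DNAT --to-destination 10.56.217.101:9000
    iptables -t nat -A OUTPUT -d 10.96.100.10/32 -p tcp --dport 9000 -j EXAMPLE-SVC-REDIRECT

Note also that the error messages above reference the pod's network namespace (/host//proc/.../ns/net), which suggests the rules are written inside the pod's netns rather than into the host's tables, so a plain iptables --list run on the node would not be expected to show them.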

How do I check that the multus-proxy redirection is generated correctly? When enabling

    args:
    - "--logtostderr"
    - "-v=4"

the multus-proxy log shows that it generates the rules. Which chain in 'iptables --list' should be generated when the redirection is happening? Is there any example of the rules (for redirection)? If they are not generated, what would be the possible causes?

I0916 04:25:45.929430       1 server.go:398] syncServiceForwarding
I0916 04:25:45.929601       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:45.929702       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:45.967191       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:45.967220       1 server.go:295] OnPodUpdate
I0916 04:25:45.967273       1 pod.go:337] pod:observability/jaeger-7f8f665d7f-mjsjh ar09-15-cyp/ar09-15-cyp
I0916 04:25:45.969053       1 pod.go:337] pod:observability/jaeger-7f8f665d7f-mjsjh ar09-15-cyp/ar09-15-cyp
I0916 04:25:45.970776       1 server.go:398] syncServiceForwarding
I0916 04:25:45.970952       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:45.971060       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:46.005163       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:46.005187       1 server.go:295] OnPodUpdate
I0916 04:25:46.005240       1 pod.go:337] pod:observability/jaeger-agent-daemonset-hzbch ar09-15-cyp/ar09-15-cyp
I0916 04:25:46.007040       1 pod.go:337] pod:observability/jaeger-agent-daemonset-hzbch ar09-15-cyp/ar09-15-cyp
I0916 04:25:46.008685       1 server.go:398] syncServiceForwarding
I0916 04:25:46.008833       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:46.008936       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:46.044726       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:46.044753       1 server.go:295] OnPodUpdate
I0916 04:25:46.044806       1 pod.go:337] pod:kube-system/tas-telemetry-aware-scheduling-684cdd97f4-h7jm7 ar09-15-cyp/ar09-15-cyp
I0916 04:25:46.046938       1 pod.go:337] pod:kube-system/tas-telemetry-aware-scheduling-684cdd97f4-h7jm7 ar09-15-cyp/ar09-15-cyp
I0916 04:25:46.048752       1 server.go:398] syncServiceForwarding
I0916 04:25:46.048899       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:46.049003       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:46.087618       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:46.087670       1 server.go:295] OnPodUpdate
I0916 04:25:46.087706       1 server.go:398] syncServiceForwarding
I0916 04:25:46.087838       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:46.087952       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:46.124408       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:46.124438       1 server.go:295] OnPodUpdate
I0916 04:25:46.124474       1 server.go:398] syncServiceForwarding
I0916 04:25:46.124602       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/205901/ns/net
I0916 04:25:46.124700       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0916 04:25:46.160941       1 pod.go:160] Calling handler.OnPodUpdate
I0916 04:25:46.160967       1 server.go:295] OnPodUpdate

I can't seem to find the rules that multus-proxy generates. I'm not sure whether another service in the cluster interferes with multus-proxy's redirections.

root@ar09-01-cyp:~/.kube# iptables --list |more

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
cali-INPUT  all  --  anywhere             anywhere             /* cali:Cz_u1IQiXIMmKD4c */
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
cali-FORWARD  all  --  anywhere             anywhere             /* cali:wUHhoiAYhphO9Mso */
CNI-FORWARD  all  --  anywhere             anywhere             /* CNI firewall plugin rules */
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT     all  --  anywhere             anywhere             /* cali:S93hcgKJrXEqnTfs */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000
MARK       all  --  anywhere             anywhere             /* cali:mp77cMpurHhyjLrM */ MARK or 0x10000

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
cali-OUTPUT  all  --  anywhere             anywhere             /* cali:tVnHkvAo15HuiPy0 */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

 

Chain CNI-ADMIN (1 references)
target     prot opt source               destination

 

Chain CNI-FORWARD (1 references)
target     prot opt source               destination
CNI-ADMIN  all  --  anywhere             anywhere             /* CNI firewall plugin admin overrides */

 

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

 

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

 

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

These are the services running in the cluster:

root@ar09-01-cyp:~/.kube# kubectl get svc -A

NAMESPACE                       NAME                                                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                    AGE
cert-manager                    cert-manager                                                ClusterIP      10.233.60.169   <none>        9402/TCP                                                   11h
cert-manager                    cert-manager-webhook                                        ClusterIP      10.233.2.41     <none>        443/TCP                                                    11h
default                         kubernetes                                                  ClusterIP      10.233.0.1      <none>        443/TCP                                                    11h
intel-ethernet-operator         intel-ethernet-operator-controller-manager-service          ClusterIP      10.233.59.33    <none>        443/TCP                                                    10h
intel-ethernet-operator         intel-ethernet-operator-webhook-service                     ClusterIP      10.233.35.188   <none>        443/TCP                                                    10h
intel-ethernet-operator         intel-ethernet-operators                                    ClusterIP      10.233.24.63    <none>        50051/TCP                                                  10h
istio-system                    istio-ingressgateway                                        LoadBalancer   10.233.5.11     <pending>     15021:30591/TCP,80:30846/TCP,443:30903/TCP                 9h
istio-system                    istiod                                                      ClusterIP      10.233.16.129   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                      9h
kube-system                     cadvisor                                                    ClusterIP      10.233.37.192   <none>        8080/TCP                                                   9h
kube-system                     container-registry                                          NodePort       10.233.58.237   <none>        5043:30500/TCP                                             10h
kube-system                     coredns                                                     ClusterIP      10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP                                     11h
kube-system                     dashboard-metrics-scraper                                   ClusterIP      10.233.61.246   <none>        8000/TCP                                                   11h
kube-system                     inteldeviceplugins-controller-manager-metrics-service       ClusterIP      10.233.11.23    <none>        8443/TCP                                                   10h
kube-system                     inteldeviceplugins-webhook-service                          ClusterIP      10.233.19.209   <none>        443/TCP                                                    10h
kube-system                     kubelet                                                     ClusterIP      None            <none>        10250/TCP,10255/TCP,4194/TCP                               9h
kube-system                     kubernetes-dashboard                                        ClusterIP      10.233.2.204    <none>        443/TCP                                                    11h
kube-system                     node-feature-discovery-master                               ClusterIP      10.233.30.38    <none>        8080/TCP                                                   10h
kube-system                     telemetry-aware-scheduling                                  ClusterIP      10.233.24.94    <none>        9001/TCP                                                   9h
minio-operator                  console                                                     ClusterIP      10.233.5.37     <none>        9090/TCP,9443/TCP                                          9h
minio-operator                  operator                                                    ClusterIP      10.233.10.70    <none>        4222/TCP                                                   9h
minio-tenant-ingress            minio-kubernetes-ingress-nginx-ingress                      LoadBalancer   10.233.28.201   <pending>     80:31811/TCP,443:30605/TCP                                 9h
minio-tenant                    minio                                                       ClusterIP      10.233.48.150   <none>        80/TCP                                                     9h
minio-tenant                    minio-multus-service-1                                      ClusterIP      10.233.21.69    <none>        9000/TCP                                                   9h
minio-tenant                    minio-tenant-console                                        ClusterIP      10.233.42.70    <none>        9090/TCP                                                   9h
minio-tenant                    minio-tenant-hl                                             ClusterIP      None            <none>        9000/TCP                                                   9h
minio-tenant                    minio-tenant-log-hl-svc                                     ClusterIP      None            <none>        5432/TCP                                                   9h
minio-tenant                    minio-tenant-log-search-api                                 ClusterIP      10.233.37.135   <none>        8080/TCP                                                   9h
minio-tenant                    minio-tenant-prometheus-hl-svc                              ClusterIP      None            <none>        9090/TCP                                                   9h
modsec-tadk                     tadk-intel-tadkchart                                        NodePort       10.233.21.14    <none>        8005:30945/TCP                                             9h
monitoring                      kube-state-metrics                                          ClusterIP      None            <none>        8443/TCP,9443/TCP                                          9h
monitoring                      node-exporter                                               ClusterIP      None            <none>        9100/TCP                                                   9h
monitoring                      otel-telegraf-collector-monitoring                          ClusterIP      10.233.45.37    <none>        8888/TCP                                                   9h
monitoring                      prometheus-adapter                                          ClusterIP      10.233.3.208    <none>        443/TCP                                                    9h
monitoring                      prometheus-k8s                                              NodePort       10.233.42.183   <none>        3000:30000/TCP                                             9h
monitoring                      prometheus-operated                                         ClusterIP      None            <none>        9090/TCP                                                   9h
monitoring                      prometheus-operator                                         ClusterIP      None            <none>        8443/TCP                                                   9h
monitoring                      telegraf                                                    ClusterIP      10.233.20.47    <none>        9273/TCP                                                   9h
observability                   jaeger-agent                                                ClusterIP      None            <none>        5775/UDP,5778/TCP,6831/UDP,6832/UDP                        9h
observability                   jaeger-collector                                            ClusterIP      10.233.6.87     <none>        9411/TCP,14250/TCP,14267/TCP,14268/TCP,4317/TCP,4318/TCP   9h
observability                   jaeger-collector-headless                                   ClusterIP      None            <none>        9411/TCP,14250/TCP,14267/TCP,14268/TCP,4317/TCP,4318/TCP   9h
observability                   jaeger-operator-metrics                                     ClusterIP      10.233.8.28     <none>        8443/TCP                                                   9h
observability                   jaeger-operator-webhook-service                             ClusterIP      10.233.62.39    <none>        443/TCP                                                    9h
observability                   jaeger-query                                                ClusterIP      10.233.9.205    <none>        16686/TCP,16685/TCP                                        9h
olm                             operatorhubio-catalog                                       ClusterIP      10.233.55.224   <none>        50051/TCP                                                  10h
olm                             packageserver-service                                       ClusterIP      10.233.36.224   <none>        5443/TCP                                                   8h
opentelemetry-operator-system   opentelemetry-operator-controller-manager-metrics-service   ClusterIP      10.233.8.143    <none>        8443/TCP,8080/TCP                                          9h
opentelemetry-operator-system   opentelemetry-operator-webhook-service                      ClusterIP      10.233.30.218   <none>        443/TCP                                                    9h
tca                             tac-metrics-service                                         ClusterIP      10.233.62.195   <none>        8443/TCP                                                   10h
tcs                             tcs-metrics-service                                         ClusterIP      10.233.53.89    <none>        8443/TCP                                                   10h

The current cluster is deployed with Calico and some settings, including an overlay network, which might interfere with creating the rules in iptables. There are many k8s CNI providers and settings. Can you share your cross-node pod-to-pod network connectivity and settings (in detail: node network (10.2.1.0/24), pod default network (?), pod 2nd network (10.2.128.0/24), default service network, ip route on each node, ip route in each pod), and whether any NAT is used between nodes/pods, if you can? And can you tell me which chain names are created in 'iptables --list' when multus-proxy creates a rule?

For the blog article below, I used OpenShift 4.9.13 for testing, and I also tested with Fedora 36/kubeadm/flannel.

https://cloud.redhat.com/blog/how-to-use-kubernetes-services-on-secondary-networks-with-multus-cni

Of course I don't use NAT for the multus interface (i.e. macvlan).

Unfortunately I don't have access to lab equipment right now due to an annual power shutdown, but I can share some output later.

Here is the sample deployment output.
I don't include the iptables --list command output because multus-service does not touch anything there, nor does it touch the worker node iptables and routing.

Environment:

  • Fedora 36
  • Kubernetes v1.25.1 (by kubeadm)
  • cri-o: v1.25.0

Deployed yaml: https://raw.githubusercontent.com/redhat-nfvpe/multus-service-demo/main/multus-service-demo1.yaml

$ kubectl get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP   3h50m
multus-nginx-macvlan   ClusterIP   10.98.184.163   <none>        80/TCP    34m

ip route output in Pod

[root@fedora-net1 /]# ip route
default via 10.244.1.1 dev eth0
10.2.128.0/24 dev net1 proto kernel scope link src 10.2.128.1
10.98.184.163 dev net1 proto kernel src 10.2.128.1  // this is added by multus-proxy
10.244.0.0/16 via 10.244.1.1 dev eth0
10.244.1.0/24 dev eth0 proto kernel scope link src 10.244.1.135

iptables-save command output in the Pod. Several chains, beginning with MULTUS-, are added (a quick way to check for them from outside the pod is sketched after this output).

[root@fedora-net1 /]# iptables-save
# Generated by iptables-save v1.8.7 on Sun Sep 18 17:18:34 2022
*filter
:INPUT ACCEPT [4:528]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8:477]
COMMIT
# Completed on Sun Sep 18 17:18:34 2022
# Generated by iptables-save v1.8.7 on Sun Sep 18 17:18:34 2022
*mangle
:PREROUTING ACCEPT [9:728]
:INPUT ACCEPT [4:528]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [8:477]
:POSTROUTING ACCEPT [8:477]
COMMIT
# Completed on Sun Sep 18 17:18:34 2022
# Generated by iptables-save v1.8.7 on Sun Sep 18 17:18:34 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:MULTUS-SEP-H4OFMGJN6F4YFTQQ - [0:0]
:MULTUS-SEP-M3TURDBOYYJPJSQ5 - [0:0]
:MULTUS-SERVICES - [0:0]
:MULTUS-SVC-HEXNTD6JIC42P6W2 - [0:0]
-A OUTPUT -m comment --comment "multus service portals" -j MULTUS-SERVICES
-A MULTUS-SEP-H4OFMGJN6F4YFTQQ -p tcp -m comment --comment "default/multus-nginx-macvlan" -m tcp -j DNAT --to-destination 10.2.128.2:80
-A MULTUS-SEP-M3TURDBOYYJPJSQ5 -p tcp -m comment --comment "default/multus-nginx-macvlan" -m tcp -j DNAT --to-destination 10.2.128.3:80
-A MULTUS-SERVICES -d 10.98.184.163/32 -p tcp -m comment --comment "default/multus-nginx-macvlan cluster IP" -m tcp --dport 80 -j MULTUS-SVC-HEXNTD6JIC42P6W2
-A MULTUS-SVC-HEXNTD6JIC42P6W2 -m comment --comment "default/multus-nginx-macvlan" -m statistic --mode random --probability 0.50000000000 -j MULTUS-SEP-H4OFMGJN6F4YFTQQ
-A MULTUS-SVC-HEXNTD6JIC42P6W2 -m comment --comment "default/multus-nginx-macvlan" -m statistic --mode random --probability 1.00000000000 -j MULTUS-SEP-M3TURDBOYYJPJSQ5
COMMIT
# Completed on Sun Sep 18 17:18:34 2022
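
As a quick check in another environment, the same thing can be confirmed from outside the pod with something like the following (a sketch; it assumes iptables-save is available inside the pod image, and the namespace/pod names are placeholders):

$ kubectl exec -n <namespace> <consumer-pod> -- iptables-save -t nat | grep MULTUS
$ kubectl exec -n <namespace> <consumer-pod> -- ip route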

Thank you for the detailed information 👍 It looks like I am getting closer, but I couldn't figure out why I don't see the rules created by multus-service. Even after redeploying with flannel, the iptables-save output shows nothing with "MULTUS-*", only the *mangle, *filter and *nat tables. Looking at the multus-proxy logs, the rules for the pods are created (minio-tenant/minio-tenant-pool-0-0, minio-tenant/minio-tenant-pool-0-1):

kubectl get pods -A -o wide |grep proxy

kube-system                     kube-proxy-g7h6k                                             1/1     Running   0             4h39m   10.166.30.34   ar09-01-cyp   <none>           <none>
kube-system                     kube-proxy-skb4m                                             1/1     Running   0             4h39m   10.166.31.74   ar09-15-cyp   <none>           <none>
kube-system                     kube-proxy-tdskb                                             1/1     Running   0             4h39m   10.166.31.41   ar09-09-cyp   <none>           <none>
kube-system                     multus-proxy-ds-amd64-qz9bv                                  1/1     Running   0             25m     10.166.31.41   ar09-09-cyp   <none>           <none>
kube-system                     multus-proxy-ds-amd64-v9kws                                  1/1     Running   0             25m     10.166.31.74   ar09-15-cyp   <none>           <none>
kube-system                     multus-proxy-ds-amd64-wsc8s                                  1/1     Running   0             25m     10.166.30.34   ar09-01-cyp   <none>           <none>

kubectl logs -n kube-system multus-proxy-ds-amd64-qz9bv

I0919 23:11:55.871954       1 server.go:295] OnPodUpdate
I0919 23:11:55.871949       1 server.go:398] syncServiceForwarding
I0919 23:11:55.872118       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-1 /host//proc/1514841/ns/net
I0919 23:11:55.872231       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-1

kubectl logs -n kube-system multus-proxy-ds-amd64-v9kws

I0919 23:11:55.560802       1 server.go:295] OnPodUpdate
I0919 23:11:55.560933       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/1368437/ns/net
I0919 23:11:55.560946       1 pod.go:337] pod:kube-system/multus-service-controller-67bddb8989-zpnlw ar09-15-cyp/ar09-15-cyp
I0919 23:11:55.561031       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0
I0919 23:11:55.562713       1 pod.go:337] pod:kube-system/multus-service-controller-67bddb8989-zpnlw ar09-15-cyp/ar09-15-cyp
I0919 23:11:55.592780       1 service.go:121] Calling handler.OnServiceUpdate
I0919 23:11:55.592805       1 server.go:369] OnServiceUpdate
I0919 23:11:55.592828       1 server.go:398] syncServiceForwarding
I0919 23:11:55.592943       1 server.go:441] pod: minio-tenant/minio-tenant-pool-0-0 /host//proc/1368437/ns/net
I0919 23:11:55.593064       1 server.go:462] Generate rules for Pod :minio-tenant/minio-tenant-pool-0-0

kubectl logs -n kube-system multus-proxy-ds-amd64-wsc8s

I0919 23:11:52.480604       1 pod.go:337] pod:kube-system/container-registry-688755fbbd-zz8jl ar09-01-cyp/ar09-01-cyp
I0919 23:11:52.482289       1 pod.go:337] pod:kube-system/container-registry-688755fbbd-zz8jl ar09-01-cyp/ar09-01-cyp
I0919 23:11:52.484058       1 server.go:398] syncServiceForwarding
I0919 23:11:52.484167       1 pod.go:160] Calling handler.OnPodUpdate

However, the kube-proxy logs don't show those pod names (minio-tenant-pool*) with rules at all, only other pods that have just the default (single) network.

I0919 22:58:19.730046       1 endpointslicecache.go:358] "Setting endpoints for service port name" portName="minio-tenant/minio-tenant-log-search-api:http-logsearchapi" endpoints=[10.244.1.156:8080]
I0919 22:58:19.730087       1 endpointslicecache.go:358] "Setting endpoints for service port name" portName="minio-tenant/minio-tenant-log-search-api:http-logsearchapi" endpoints=[10.244.1.156:8080]
I0919 22:58:19.730184       1 proxier.go:853] "Syncing iptables rules"
I0919 22:58:19.791862       1 iptables.go:358] running iptables-save [-t filter]
I0919 22:58:19.802340       1 iptables.go:358] running iptables-save [-t nat]
I0919 22:58:19.821793       1 proxier.go:1464] "Reloading service iptables data" numServices=23 numEndpoints=29 numFilterChains=4 numFilterRules=3 numNATChains=58 numNATRules=146
I0919 22:58:19.821836       1 iptables.go:423] running iptables-restore [-w 5 -W 100000 --noflush --counters]
I0919 22:58:19.865490       1 proxier.go:1492] "Network programming" endpoint="minio-tenant/minio-tenant-log-search-api" elapsed=0.865399051
I0919 22:58:19.865582       1 proxier.go:1516] "Deleting conntrack stale entries for services" IPs=[]
I0919 22:58:19.865630       1 proxier.go:1522] "Deleting conntrack stale entries for services" nodePorts=[]
I0919 22:58:19.865678       1 proxier.go:1529] "Deleting stale endpoint connections" endpoints=[]
I0919 22:58:19.865699       1 proxier.go:820] "SyncProxyRules complete" elapsed="135.722457ms"
I0919 22:58:19.865719       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 1h0m0s
I0919 22:59:52.331446       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 58 items received
I0919 22:59:59.570330       1 config.go:336] "Calling handler.OnServiceAdd"
I0919 22:59:59.570409       1 utils.go:211] "Skipping service due to cluster IP" service="minio-tenant/minio-tenant-prometheus-hl-svc" clusterIP="None"
I0919 22:59:59.570514       1 utils.go:211] "Skipping service due to cluster IP" service="minio-tenant/minio-tenant-prometheus-hl-svc" clusterIP="None"
I0919 23:01:06.385421       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 21 items received
I0919 23:04:19.305257       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 15 items received
I0919 23:05:08.334719       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 37 items received
I0919 23:08:48.389220       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 0 items received
I0919 23:11:34.327872       1 reflector.go:536] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0919 23:12:15.979066       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 23:12:15.979091       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 23:12:15.979089       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync

I do see logs like the above for many pods, but I don't see any log entry with the pod names minio-tenant-pool-0-0 or minio-tenant-pool-0-1 associated with 10.56.217.x.

Each pod's ip route

1000@minio-tenant-pool-0-0:/usr/bin$ ip route
default via 10.244.1.1 dev eth0
10.56.217.0/24 dev net1 proto kernel scope link src 10.56.217.101
10.233.61.42 dev net1 proto kernel src 10.56.217.101
10.244.0.0/16 via 10.244.1.1 dev eth0
10.244.1.0/24 dev eth0 proto kernel scope link src 10.244.1.155

I am not sure how multus-service detects the 2nd interface's IPs for the rules. Can you give me some idea of what could cause the issue in my environment? BTW, does each pod need iptables (nft) installed?

FYI, I deployed with Kubespray 2.19.0, Kubernetes v1.24.3

  • Please check your iptables --version output. If it contains "(nft)", you need to use 'deploy-nft.yml' instead of 'deploy.yml'. If your deployment and iptables --version are mismatched, multus-proxy cannot inject the iptables rules, and that may cause this kind of problem.
  • You can see the generated iptables rules in /var/lib/multus-proxy/iptables of the multus-proxy pod. Could you log in to the multus-proxy pod (of the target node) and look at /var/lib/multus-proxy/iptables? <UID>/current-service.iptables contains the iptables rules for the target pod (see the sketch of the check commands after this list).
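
A minimal sketch of those two checks (the multus-proxy pod name and the <UID> directory are placeholders; use the ones from the target node):

# on each node: confirm which iptables backend is in use
$ iptables --version

# inside the multus-proxy pod of the target node: inspect the generated rule files
$ kubectl exec -it -n kube-system <multus-proxy-pod> -- sh
sh-5.1# ls /var/lib/multus-proxy/iptables/
sh-5.1# cat /var/lib/multus-proxy/iptables/<UID>/current-service.iptables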

Because my environment has:

iptables --version
iptables v1.8.7 (nf_tables)

I deployed with deploy-nft.yml.

kubectl get pods -A -o wide|grep proxy

kube-system                     kube-proxy-g7h6k                                             1/1     Running   0              6h35m   10.166.30.34   ar09-01-cyp   <none>           <none>
kube-system                     kube-proxy-skb4m                                             1/1     Running   0              6h35m   10.166.31.74   ar09-15-cyp   <none>           <none>
kube-system                     kube-proxy-tdskb                                             1/1     Running   0              6h35m   10.166.31.41   ar09-09-cyp   <none>           <none>

Worker Node#1

sh-5.1# cat /var/lib/multus-proxy/iptables/616f731d-6d56-4de9-ab93-424a00c59c5e/current-service.iptables
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:14:32 2022
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Tue Sep 20 01:14:32 2022
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:14:32 2022
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Tue Sep 20 01:14:32 2022
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:14:32 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:MULTUS-SEP-6AVDHNQYFHDLXZ6D - [0:0]
:MULTUS-SEP-XQ7QHSGK5R7CHAIE - [0:0]
:MULTUS-SERVICES - [0:0]
:MULTUS-SVC-VHSVTBTC4VIDBW2E - [0:0]
-A OUTPUT -m comment --comment "multus service portals" -j MULTUS-SERVICES
-A MULTUS-SEP-6AVDHNQYFHDLXZ6D -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.100:9000
-A MULTUS-SEP-XQ7QHSGK5R7CHAIE -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.101:9000
-A MULTUS-SERVICES -d 10.233.61.42/32 -p tcp -m comment --comment "minio-tenant/minio-multus-service-1 cluster IP" -m tcp --dport 9000 -j MULTUS-SVC-VHSVTBTC4VIDBW2E
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 0.50000000000 -j MULTUS-SEP-6AVDHNQYFHDLXZ6D
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 1.00000000000 -j MULTUS-SEP-XQ7QHSGK5R7CHAIE
COMMIT
# Completed on Tue Sep 20 01:14:32 2022
sh-5.1# cat /var/lib/multus-proxy/iptables/616f731d-6d56-4de9-ab93-424a00c59c5e/multus_service.iptables
*nat
:MULTUS-SERVICES - [0:0]
:MULTUS-SVC-VHSVTBTC4VIDBW2E - [0:0]
:MULTUS-SEP-6AVDHNQYFHDLXZ6D - [0:0]
:MULTUS-SEP-XQ7QHSGK5R7CHAIE - [0:0]
-A MULTUS-SEP-6AVDHNQYFHDLXZ6D -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.100:9000
-A MULTUS-SEP-XQ7QHSGK5R7CHAIE -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.101:9000
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 0.5000000000 -j MULTUS-SEP-6AVDHNQYFHDLXZ6D
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 1.0000000000 -j MULTUS-SEP-XQ7QHSGK5R7CHAIE
-A MULTUS-SERVICES -d 10.233.61.42/32 -p tcp -m comment --comment "minio-tenant/minio-multus-service-1 cluster IP" -m tcp --dport 9000 -j MULTUS-SVC-VHSVTBTC4VIDBW2E
COMMIT

Worker Node#2

root@ar09-01-cyp:/opt/cek/charts/tenant/temp# kubectl exec -it  multus-proxy-ds-amd64-v9kws -n kube-system -- sh
sh-5.1# cat /var/lib/multus-proxy/iptables/b73a148c-e6df-42ed-a94d-411864abb8d8/current-service.iptables
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:16:02 2022
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on Tue Sep 20 01:16:02 2022
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:16:02 2022
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on Tue Sep 20 01:16:02 2022
# Generated by iptables-save v1.8.7 on Tue Sep 20 01:16:02 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:MULTUS-SEP-6AVDHNQYFHDLXZ6D - [0:0]
:MULTUS-SEP-XQ7QHSGK5R7CHAIE - [0:0]
:MULTUS-SERVICES - [0:0]
:MULTUS-SVC-VHSVTBTC4VIDBW2E - [0:0]
-A OUTPUT -m comment --comment "multus service portals" -j MULTUS-SERVICES
-A MULTUS-SEP-6AVDHNQYFHDLXZ6D -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.100:9000
-A MULTUS-SEP-XQ7QHSGK5R7CHAIE -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.101:9000
-A MULTUS-SERVICES -d 10.233.61.42/32 -p tcp -m comment --comment "minio-tenant/minio-multus-service-1 cluster IP" -m tcp --dport 9000 -j MULTUS-SVC-VHSVTBTC4VIDBW2E
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 0.50000000000 -j MULTUS-SEP-6AVDHNQYFHDLXZ6D
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 1.00000000000 -j MULTUS-SEP-XQ7QHSGK5R7CHAIE
COMMIT
# Completed on Tue Sep 20 01:16:02 2022
sh-5.1# cat /var/lib/multus-proxy/iptables/b73a148c-e6df-42ed-a94d-411864abb8d8/multus_service.iptables
*nat
:MULTUS-SERVICES - [0:0]
:MULTUS-SVC-VHSVTBTC4VIDBW2E - [0:0]
:MULTUS-SEP-6AVDHNQYFHDLXZ6D - [0:0]
:MULTUS-SEP-XQ7QHSGK5R7CHAIE - [0:0]
-A MULTUS-SEP-6AVDHNQYFHDLXZ6D -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.100:9000
-A MULTUS-SEP-XQ7QHSGK5R7CHAIE -p tcp -m comment --comment "minio-tenant/minio-multus-service-1" -m tcp -j DNAT --to-destination 10.56.217.101:9000
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 0.5000000000 -j MULTUS-SEP-6AVDHNQYFHDLXZ6D
-A MULTUS-SVC-VHSVTBTC4VIDBW2E -m comment --comment "minio-tenant/minio-multus-service-1" -m statistic --mode random --probability 1.0000000000 -j MULTUS-SEP-XQ7QHSGK5R7CHAIE
-A MULTUS-SERVICES -d 10.233.61.42/32 -p tcp -m comment --comment "minio-tenant/minio-multus-service-1 cluster IP" -m tcp --dport 9000 -j MULTUS-SVC-VHSVTBTC4VIDBW2E
COMMIT
sh-5.1#

Controller Node (I don't see those files there)

root@ar09-01-cyp:/opt/cek/charts/tenant/temp# kubectl exec -it  multus-proxy-ds-amd64-wsc8s -n kube-system -- sh
sh-5.1# ls /var/lib/multus-proxy/iptables/
sh-5.1# ls /var/lib/multus-proxy/iptables/ -al
total 8
drwx------ 2 root root 4096 Sep 19 22:56 .
drwxr-xr-x 3 root root 4096 Sep 19 22:56 ..

When accessing the service, I port-forward from the controller node:

kubectl --namespace minio-tenant port-forward svc/minio-multus-service-1 9001:9000 --address=10.56.217.201

and then use the AWS client with that target address:

aws --profile=minio --endpoint=http://10.56.217.201:9001 s3 cp DJI_0002.MP4 s3://test1

Is this the wrong way to access the service?

When using this command, once the controller node receives the file content on the 2nd interface, it redirects it via the 1st interface of the controller node toward the 1st interface of the pod on the worker node, as shown in the video.

Instead of the above forwarding command, I also tested with kubectl --namespace minio-tenant port-forward svc/minio-multus-service-1 9000:9000 --address=10.56.217.201, which listens on the same port number 9000 instead of 9001; the result is the same.

Currently we don't support the kubectl port-forward command for multus-service because kubectl port-forward does not use the cluster IP. Currently multus-service only provides access to the service from pod to pod via the cluster IP.

Please verify the connectivity as follows (a concrete example for this thread is sketched after these commands):

$ kubectl exec -it <pod> -- bash (or something else)
$ curl (or some other command) <service cluster ip>
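
For the service in this thread, a minimal concrete check might look like the following, using the cluster IP that appears in the generated rules above (the pod name is a placeholder for any pod that has the net1 interface and the multus-proxy rules injected, and it assumes curl is available in that image):

$ kubectl exec -it -n minio-tenant <pod-with-net1> -- curl http://10.233.61.42:9000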

BTW, regarding your output, multus-proxy does seem to generate the iptables rules for minio-tenant/minio-multus-service-1 now.

curl http://10.233.61.42:9000

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>17167C8C1029ECAE</RequestId><HostId>da46440a-dff4-4089-bfd2-f6a2ecd999ae</HostId></Error>

It looks like I need the AWS client in the pod to check, but it seems the connection works, since the request is denied (I will verify tomorrow; I can't use curl minio-multus-service-1.default.svc.cluster.local, but curl minio-multus-service-1 works). Then, what would be the best practice to expose the multus-service on the 2nd interface outside of the cluster? Or is that not supported yet?

  • I'm not good at AWS; however, I suppose using multus on AWS is still a challenge because AWS manages the network, including access control. Currently multus-service and multus focus on on-prem deployments, from the NPWG point of view. AWS or some cloud vendors support multus in some way, but it is what it is (they support it, but the multus team does not support it).
  • Currently multus-service only supports primitive functionality (we only support cluster IP, inside the cluster, as noted in the README), hence we don't support exposing the service outside of the cluster. We, NPWG, need to discuss how to support exposing multus-services outside the cluster.

Thank you for the update. By AWS I meant the AWS client application, not the AWS cloud. Is there a scheduled meeting to discuss the future plan? If possible, it would be a good opportunity for me to contribute to multus-service, but I am not sure whether I can get full support to be involved in the activity; I need to check with my team.

Currently our bi-weekly NPWG meeting is open to everyone to discuss various topics, and we also have another meeting slot to discuss advanced topics (which should include this one).

Currently this repository is not mature, so the first focus is hardening and adding more helper tools/functions for maintenance and troubleshooting, I guess. In addition, to expose the service we also need to figure out how to expose it (I guess a load balancer could be one candidate, but I don't know how that would interwork with the current Kubernetes load balancer), because the multus network is usually isolated from the Kubernetes cluster network.

So let me close this issue if you don't mind. In addition, I will update the README to add some explanation about this before closing the issue.

When I first read the 'no support for headless service' link, the blog post gave me the impression that multus-service can expose the 2nd interface via either ClusterIP or LoadBalancer. Now I understand that exposing the service outside of the cluster requires extra work. At least clients inside the cluster can send traffic via the 2nd interface, which is a huge improvement. Thank you very much for your hard work. In the meantime, it would be great if you could update the README with this limitation and some clear verification steps. I have learned a lot during this troubleshooting and hope I can join the NPWG meeting. Yes, you can close the issue. Again, many thanks for your passion and support.

In the meantime, I can try an ingress controller to receive requests from outside the cluster and redirect them toward the multus-service. I will get back to you once I have tested it.

README will be updated by #13

I will close this issue. Thank you for your feedback!