telepresenceio / telepresence

Local development against a remote Kubernetes or OpenShift cluster

Home Page: https://www.telepresence.io

telepresence intercept: error: found no service with a port that matches a container in pod

jiangxiaoqiang opened this issue

Describe the bug


$ telepresence loglevel debug
$ telepresence intercept infra-server-service --port 8081:8081 --env-file ./env
telepresence intercept: error: found no service with a port that matches a container in pod .reddwarf-pro

To Reproduce
Steps to reproduce the behavior:

  1. When I run 'telepresence intercept infra-server-service --port 8081:8081 --env-file ./env'
  2. I see 'telepresence intercept: error: found no service with a port that matches a container in pod .reddwarf-pro'

Expected behavior

The intercept should succeed.

Versions (please complete the following information):

  • Output of telepresence version
$ telepresence version
Client         : v2.17.0
Root Daemon    : v2.17.0
User Daemon    : v2.17.0
Traffic Manager: v2.11.1

  • Operating system of workstation running telepresence commands

macOS 14.3.1

  • Kubernetes environment and Version [e.g. Minikube, bare metal, Google Kubernetes Engine]
$ kubectl version
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2

You're using a really old traffic-manager. Would it be possible for you to try a more recent version, say 2.18.0?
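(Assuming the traffic-manager was installed through the Telepresence Helm chart, one way to upgrade it is sketched below; the exact flags may differ for your setup.)

$ telepresence quit -s      # stop the local daemons
$ telepresence helm upgrade # upgrade the traffic-manager in the cluster
$ telepresence connect      # reconnect with the new manager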

I have upgraded the traffic-manager; this is the version info:

$ telepresence version
Client         : v2.17.0
Root Daemon    : v2.17.0
User Daemon    : v2.17.0
Traffic Manager: v2.19.4

The command looks like this:

$ telepresence intercept infra-server-service --port 8081:8081 --env-file ./env
telepresence intercept: error: found no service with a port that matches a container in pod .reddwarf-pro

Take a look at your infra-server-service deployment (or replicaset/statefulset if that's what you're using). Do you have a service with a selector that will find its pod-template's labels? If so, what ports does this service declare? And how do those ports match the container ports in the pod-template?

Telepresence requires a service whose selector matches the pod-template labels of a deployment, replicaset, or statefulset, and whose port declarations match the container ports in that pod-template.
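(For illustration only — the names and ports below are hypothetical — a matching Service/Deployment pair looks like this, with the Service targetPort pointing at a containerPort in the pod-template:)

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example            # must match the pod-template labels below
  ports:
    - name: http
      port: 8081            # port exposed by the service
      targetPort: 8000      # must match a containerPort in the pod-template
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example        # matched by the service selector
    spec:
      containers:
        - name: example
          image: example-image
          ports:
            - containerPort: 8000   # matched by the service targetPort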

It seems sshfs is not installed:

> tail -f /Users/xiaoqiangjiang/Library/Logs/telepresence/connector.log
2024-05-05 09:53:09.2167 error   connector/server-grpc/conn=13 : Tunnel manager.Send() failed: EOF
2024-05-05 10:18:12.1876 error   connector/session/dial-request-watcher : dial request stream recv: rpc error: code = Unavailable desc = error reading from server: EOF
2024-05-05 10:18:12.1876 error   connector/session/intercept-port-forward : manager.WatchIntercepts recv: rpc error: code = Unavailable desc = error reading from server: EOF
2024-05-05 10:18:12.1876 error   connector/server-grpc/conn=13 : Tunnel manager.Recv() failed: rpc error: code = Unavailable desc = error reading from server: EOF
2024-05-05 10:18:12.1878 error   connector/session/remain : rpc error: code = Unavailable desc = error reading from server: EOF
2024-05-05 10:21:06.7782 error   connector/server-grpc/conn=13 : Tunnel manager.Send() failed: EOF
2024-05-05 10:23:04.1227 error   connector/server-grpc/conn=13 : Tunnel manager.Send() failed: EOF
2024-05-05 10:23:20.1518 error   connector/server-grpc/conn=13 : Tunnel manager.Send() failed: EOF
2024-05-05 21:24:41.3721 error   connector/server-grpc/conn=15 : sshfs not installed: exec: "sshfs": executable file not found in $PATH
2024-05-05 21:26:56.8715 error   connector/server-grpc/conn=16 : sshfs not installed: exec: "sshfs": executable file not found in $PATH


It is hard to install sshfs on macOS 14.3.

You don't need sshfs unless you want to mount the container's volumes, and that error is not fatal. You'll get rid of that particular error by passing --mount=false to your intercept. There seem to be other problems though.

This is the newest log output:

> telepresence intercept infra-server-service --mount=false --port 8081:8081 --env-file ./env
telepresence intercept: error: connector.CreateIntercept: found no service with a port that matches a container in pod .reddwarf-pro

See logs for details (8 errors found): "/Users/xiaoqiangjiang/Library/Logs/telepresence/daemon.log"
See logs for details (56 errors found): "/Users/xiaoqiangjiang/Library/Logs/telepresence/connector.log"
If you think you have encountered a bug, please run `telepresence gather-logs` and attach the telepresence_logs.zip to your github issue or create a new one: https://github.com/telepresenceio/telepresence/issues/new?template=Bug_report.md .
> tail -f /Users/xiaoqiangjiang/Library/Logs/telepresence/daemon.log
2024-05-06 22:37:47.1286 error   daemon/session/network : WatchClusterInfo recv: Unavailable: connection error: desc = "transport: Error while dialing: Get \"https://106.14.183.131:6443/api/v1/namespaces/ambassador/services/traffic-manager\": context deadline exceeded"
2024-05-06 22:38:12.1299 error   daemon/session/network : WatchClusterInfo recv: Unavailable: connection error: desc = "transport: Error while dialing: Get \"https://106.14.183.131:6443/api/v1/namespaces/ambassador/services/traffic-manager\": context deadline exceeded"
2024-05-06 22:38:38.1343 error   daemon/session/network : WatchClusterInfo recv: Unavailable: connection error: desc = "transport: Error while dialing: Get \"https://106.14.183.131:6443/api/v1/namespaces/ambassador/services/traffic-manager\": context deadline exceeded"
2024-05-06 22:39:02.4868 info    daemon/session/network : also-proxy subnets []
2024-05-06 22:39:02.4868 info    daemon/session/network : never-proxy subnets [106.14.183.131/32]
2024-05-06 22:39:02.4868 info    daemon/session/network : Adding Service subnet 10.96.0.0/12
2024-05-06 22:39:02.4869 info    daemon/session/network : Adding pod subnet 10.0.0.0/23
2024-05-06 22:39:02.4869 info    daemon/session/network : Adding pod subnet 172.29.128.0/17
2024-05-06 22:39:02.4869 info    daemon/session/network : Setting cluster DNS to 10.0.0.162
2024-05-06 22:39:02.4869 info    daemon/session/network : Setting cluster domain to "cluster.local."
^C

BTW, I can use the telepresence connect command normally. This is the traffic-manager config, which works fine:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traffic-manager
  namespace: ambassador
  uid: d6c05ccf-4457-4a22-890d-d2ae6117a32d
  resourceVersion: '194072611'
  generation: 1
  creationTimestamp: '2024-05-03T03:20:11Z'
  labels:
    app: traffic-manager
    app.kubernetes.io/created-by: Helm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: 2.19.4
    helm.sh/chart: telepresence-2.19.4
    telepresence: manager
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: traffic-manager
    meta.helm.sh/release-namespace: ambassador
  selfLink: /apis/apps/v1/namespaces/ambassador/deployments/traffic-manager
status:
  observedGeneration: 1
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2024-05-03T03:20:27Z'
      lastTransitionTime: '2024-05-03T03:20:27Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2024-05-03T03:20:27Z'
      lastTransitionTime: '2024-05-03T03:20:11Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "traffic-manager-74b6548b67" has successfully progressed.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traffic-manager
      telepresence: manager
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: traffic-manager
        telepresence: manager
    spec:
      containers:
        - name: traffic-manager
          image: docker.io/datawire/ambassador-telepresence-manager:2.19.4
          ports:
            - name: api
              containerPort: 8081
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: grpc-trace
              containerPort: 15766
              protocol: TCP
          env:
            - name: LOG_LEVEL
              value: info
            - name: REGISTRY
              value: docker.io/datawire
            - name: SERVER_PORT
              value: '8081'
            - name: POD_CIDR_STRATEGY
              value: auto
            - name: MUTATOR_WEBHOOK_PORT
              value: '443'
            - name: AGENT_INJECTOR_SECRET
              value: mutator-webhook-tls
            - name: TRACING_GRPC_PORT
              value: '15766'
            - name: GRPC_MAX_RECEIVE_SIZE
              value: 4Mi
            - name: AGENT_ARRIVAL_TIMEOUT
              value: 30s
            - name: AGENT_INJECT_POLICY
              value: OnDemand
            - name: AGENT_INJECTOR_NAME
              value: agent-injector
            - name: AGENT_PORT
              value: '9900'
            - name: AGENT_APP_PROTO_STRATEGY
              value: http2Probe
            - name: AGENT_IMAGE_PULL_POLICY
              value: IfNotPresent
            - name: MANAGER_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: CLIENT_CONNECTION_TTL
              value: 24h
            - name: CLIENT_DNS_EXCLUDE_SUFFIXES
              value: .com .io .net .org .ru
            - name: SYSTEMA_HOST
              value: app.getambassador.io
            - name: SYSTEMA_PORT
              value: '443'
            - name: INTERCEPT_DISABLE_GLOBAL
              value: 'false'
            - name: INTERCEPT_EXPIRED_NOTIFICATIONS_ENABLED
              value: 'false'
            - name: INTERCEPT_EXPIRED_NOTIFICATIONS_DEADLINE
              value: 15m
            - name: AGENT_ENVOY_LOG_LEVEL
              value: warning
            - name: AGENT_ENVOY_SERVER_PORT
              value: '18000'
            - name: AGENT_ENVOY_ADMIN_PORT
              value: '19000'
            - name: AGENT_ENVOY_HTTP_IDLE_TIMEOUT
              value: 70s
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
            readOnlyRootFilesystem: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: traffic-manager
      serviceAccount: traffic-manager
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600


Can you share the port declaration of the service that you're intercepting together with the container port definition in the workload that it references?
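(For reference, both can be dumped with kubectl, e.g.:)

$ kubectl -n reddwarf-pro get service infra-server-service -o yaml
$ kubectl -n reddwarf-pro get deployment infra-server-service -o yaml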

This is how the service is defined:

apiVersion: v1
kind: Service
metadata:
  name: infra-server-service
  namespace: reddwarf-pro
  uid: f5aab062-72b1-4cc6-a5b6-f8f6205566cd
  resourceVersion: '191648087'
  creationTimestamp: '2024-04-21T02:29:45Z'
  labels:
    k8slens-edit-resource-version: v1
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"infra-server-service","namespace":"reddwarf-pro"},"spec":{"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http","port":8000,"protocol":"TCP","targetPort":8000}],"selector":{"app":"infra-server-service"},"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10800}},"type":"ClusterIP"}}
  selfLink: /api/v1/namespaces/reddwarf-pro/services/infra-server-service
status:
  loadBalancer: {}
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8081
      targetPort: 8081
  selector:
    app: infra-server-service
  clusterIP: 10.100.155.34
  clusterIPs:
    - 10.100.155.34
  type: ClusterIP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster


What do the container ports look like in the pod-template of the deployment, replicaset, or statefulset that matches the selector app: infra-server-service in namespace reddwarf-pro?

This is what the infra-server-service deployment looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: infra-server-service
  namespace: reddwarf-pro
  uid: bda646b6-89f3-4a22-9247-738434de17ec
  resourceVersion: '195661072'
  generation: 114
  creationTimestamp: '2024-04-14T13:10:16Z'
  labels:
    app: infra-server-service
    k8slens-edit-resource-version: v1
  annotations:
    deployment.kubernetes.io/revision: '110'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"infra-server-service"},"name":"infra-server-service","namespace":"reddwarf-pro"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"infra-server-service"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"2024-03-31T12:12:40Z"},"creationTimestamp":null,"labels":{"app":"infra-server-service"}},"spec":{"containers":[{"env":[{"name":"TEXHUB_REDIS_URL","valueFrom":{"configMapKeyRef":{"key":"texhub_redis_addr","name":"texhub-server-service-pro-config"}}},{"name":"REDIS_URL","valueFrom":{"configMapKeyRef":{"key":"redis_addr","name":"texhub-server-service-pro-config"}}},{"name":"TEX_DATABASE_URL","valueFrom":{"configMapKeyRef":{"key":"tex_database_url","name":"texhub-server-service-pro-config"}}},{"name":"MEILI_MASTER_KEY","valueFrom":{"configMapKeyRef":{"key":"meili_master_key","name":"texhub-server-service-pro-config"}}},{"name":"ENV","valueFrom":{"configMapKeyRef":{"key":"env","name":"texhub-server-service-pro-config"}}}],"image":"registry.cn-hongkong.aliyuncs.com/reddwarf-pro/infra-server:8c77d9a548a7e7994e09e0f075e3d0b93f3b9542","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/texhub/actuator/liveness","port":8000,"scheme":"HTTP"},"initialDelaySeconds":15,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"infra-server-service","ports":[{"containerPort":8000,"protocol":"TCP"}],"resources":{"limits":{"cpu":"100m","memory":"60Mi"},"requests":{"cpu":"20m","memory":"15Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","imagePullSecrets":[{"name":"hongkong-regcred"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30}}}}
    kubernetes.io/change-cause: >-
      kubectl set image deployment/infra-server-service
      infra-server-service=registry.cn-hongkong.aliyuncs.com/reddwarf-pro/infra-server:7f3e01c70057b4859a4803c8821473d7556c0384
      --record=true --namespace=reddwarf-pro
  selfLink: /apis/apps/v1/namespaces/reddwarf-pro/deployments/infra-server-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: infra-server-service
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infra-server-service
      annotations:
        kubectl.kubernetes.io/restartedAt: '2024-05-07T23:36:36+08:00'
    spec:
      containers:
        - name: infra-server-service
          image: >-
            registry.cn-hongkong.aliyuncs.com/reddwarf-pro/infra-server:7f3e01c70057b4859a4803c8821473d7556c0384
          ports:
            - containerPort: 8000
              protocol: TCP
          env:
            - name: TEXHUB_REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: texhub_redis_addr
            - name: REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: redis_addr
            - name: JWT_SECRET
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: jwt_secret
            - name: DATABASE_URL
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: database_url
            - name: SELLER_ID
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: seller_id
            - name: ENV
              valueFrom:
                configMapKeyRef:
                  name: infra-server-service-pro-config
                  key: env
          resources:
            limits:
              cpu: 100m
              memory: 60Mi
            requests:
              cpu: 20m
              memory: 15Mi
          livenessProbe:
            httpGet:
              path: /infra/actuator/liveness
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 15
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: hongkong-regcred
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600


The service selector matches the deployment correctly.

On the Service config, can you update your targetPort: 8081 to 8000? Your container port is 8000, so if those match, that might be enough to get your Telepresence command working.
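With that change (the rest of the Service kept as-is), the ports section of the Service would read:

  ports:
    - name: http
      protocol: TCP
      port: 8081
      targetPort: 8000   # now matches the containerPort 8000 in the deployment's pod-template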