k8snetworkplumbingwg / multus-service-archived




Headless service is not supported (Original: Seeing it load balance between client interfaces)

dbainbri-ciena opened this issue

I have a setup with two pods, a src pod and a dst pod. Each pod has two interfaces: the default k8s interface and a secondary interface created using multus. The dst pod is listening on all interfaces on a specific port, i.e., :8080.
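For context, a pod with a multus secondary interface is typically wired up like this; all names, the CNI plugin choice, and the CIDRs below are illustrative, not taken from the actual setup:

```yaml
# Hypothetical NetworkAttachmentDefinition for the secondary interface.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: secondary-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }
    }
---
# The dst pod attaches the secondary interface via the multus annotation.
apiVersion: v1
kind: Pod
metadata:
  name: dst
  annotations:
    k8s.v1.cni.cncf.io/networks: secondary-net
spec:
  containers:
  - name: server
    image: example/grpc-server   # assumed image; listens on :8080 on all interfaces
```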

The src pod continually creates a gRPC connection to the dst pod, makes an RPC call, and then closes the connection, at 1-second intervals.

The dst pod is using the gRPC peer package to get the peer IP from which the request comes. What I am seeing is that the peer sometimes reports the default address and sometimes the secondary address, when I would have expected it to only ever report the secondary address. This leads me to believe that the request is not always proxied via the multus-proxy and sometimes goes through the default proxy.

Thoughts?

On a side note: what if the multus-service-controller created a "second" service without a selector and created endpoints that matched that service? This would allow (I think) the default proxies to work. To make this really work we would either need a MultusService resource (which isn't great) or additional annotations to specify the selector.
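The selector-less pattern suggested above would look roughly like this in YAML; all names and addresses are illustrative:

```yaml
# Hypothetical "second" Service with no selector; the default kube-proxy
# would program it like any other Service.
apiVersion: v1
kind: Service
metadata:
  name: hello-server-secondary
spec:
  ports:
  - port: 8080
    targetPort: 8080
---
# Matching Endpoints object, which a controller would populate with the
# pods' secondary (multus) IP addresses instead of their primary IPs.
apiVersion: v1
kind: Endpoints
metadata:
  name: hello-server-secondary   # must match the Service name
subsets:
- addresses:
  - ip: 10.1.1.10                # illustrative secondary-network pod IP
  ports:
  - port: 8080
```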

Thank you for the feedback.

To understand your situation/environment, could you please provide your Pod/Service/NetworkAttachmentDefinition?

On a side note: what if the multus-service-controller created a "second" service without a selector and created endpoints that matched that service? This would allow (I think) the default proxies to work. To make this really work we would either need a MultusService resource (which isn't great) or additional annotations to specify the selector.

To clarify my understanding, could you please provide an example in YAML?

Attached are the resource definitions. The issue seems to be that when I specify clusterIP as None, then when the service is resolved I am getting the IP addresses for both networks. If, after applying the manifests, you do a kubectl logs -f deploy/hello-client, you can see that the host hello-server.default.svc.cluster.local resolves to two IPs (169.244.160.187, 10.244.0.207), where I would expect it to only resolve to the IP of the secondary network.

combined.zip
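The attached manifests are not reproduced here, but the headless Service at the center of the problem presumably looks something like the following sketch; the name is taken from the log output above, and the `multus-proxy` label value is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-server
  labels:
    # Tells kube-proxy to skip this Service so an alternate proxy
    # (here assumed to be multus-proxy) can handle it instead.
    service.kubernetes.io/service-proxy-name: multus-proxy
spec:
  clusterIP: None          # headless: DNS resolves directly to endpoint IPs
  selector:
    app: hello-server      # illustrative selector
  ports:
  - port: 8080
```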

Currently we cannot support headless services (clusterIP: None) in multus-service, because of the Kubernetes controller manager implementation. You can still configure clusterIP: None; however, the Kubernetes controller manager then automatically adds EndpointSlices for the primary network interface. Thanks to the service.kubernetes.io/service-proxy-name label, these primary IP addresses are not programmed by kube-proxy, but the primary IP addresses in the EndpointSlices do affect CoreDNS.
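Concretely, for such a headless Service the controller manager generates an EndpointSlice along these lines; the object name is illustrative, and the address shown is the one from the log output above, presumably the primary (cluster-network) pod IP:

```yaml
# Sketch of the EndpointSlice the controller manager creates automatically.
# CoreDNS answers headless-service queries from objects like this, which is
# why the primary IP shows up in DNS alongside the secondary one.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: hello-server-abc12          # generated name, illustrative
  labels:
    kubernetes.io/service-name: hello-server
addressType: IPv4
endpoints:
- addresses:
  - 10.244.0.207                    # primary (cluster network) pod IP
ports:
- port: 8080
```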

To support headless services, no change is needed in multus-service; the change is needed in Kubernetes upstream. Currently kpng also has this problem and is working out how to fix it, in kubernetes-sigs/kpng#349.

I will update README.md to explicitly mention this and close this issue, thanks.

@s1061123 I think kubernetes-sigs/kpng#349 is closed, so if I understand correctly you can check it.

Unfortunately, it is not.

Currently the issue is tracked in kubernetes/kubernetes#112491 and is still open.

But there is good news: the Kubernetes Multi-Network WG is working on multi-network support, including the Kubernetes Service feature, so the WG will provide a solution for that.
https://github.com/kubernetes/community/blob/master/sig-network/README.md

As far as I know, they are looking for use-cases for Services on multiple networks. I strongly recommend joining the call and sharing your use-cases with them.

Hence I have decided not to continue this development (because it is a prototype and the code is obsolete), and I am looking forward to seeing the above implementation. But this code is open source, so you can implement what you want, of course!