k8gb-io / k8gb

A cloud native Kubernetes Global Balancer

Home Page: https://www.k8gb.io

Help with DNS resolver

uzmargomez opened this issue

Hello! I'm really new to this topic, and I'm trying to translate what I learned from the "Local playground for testing and development" example to my own 3-cluster setup (one of the clusters runs the edgeDNS resolver); the clusters are connected over my company's network. So far, these are the DNSEndpoints for an app I deployed in my two application clusters.

Cluster 1

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  annotations:
    k8gb.absa.oss/dnstype: local
  creationTimestamp: "2023-05-17T09:06:57Z"
  generation: 2
  labels:
    k8gb.absa.oss/dnstype: local
  name: podinfo
  namespace: podinfo
  ownerReferences:
  - apiVersion: k8gb.absa.oss/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Gslb
    name: podinfo
    uid: abea5965-a215-43c7-8d37-d94043ee643d
  resourceVersion: "80741179"
  uid: 60397507-b758-4570-9aff-dab0c1f04944
spec:
  endpoints:
  - dnsName: localtargets-liqo.cloud.testanim.uzmar
    recordTTL: 30
    recordType: A
    targets:
    - 10.181.61.145
    - 10.181.61.146
    - 10.181.61.20
    - 10.181.61.21
    - 10.181.61.22
  - dnsName: liqo.cloud.testanim.uzmar
    labels:
      strategy: roundRobin
    recordTTL: 30
    recordType: A
    targets:
    - 10.181.61.145
    - 10.181.61.146
    - 10.181.61.20
    - 10.181.61.21
    - 10.181.61.22

Cluster 2

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  annotations:
    k8gb.absa.oss/dnstype: local
  creationTimestamp: "2023-05-17T09:07:02Z"
  generation: 3
  labels:
    k8gb.absa.oss/dnstype: local
  name: podinfo
  namespace: podinfo
  ownerReferences:
  - apiVersion: k8gb.absa.oss/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Gslb
    name: podinfo
    uid: 29a4fff1-3ffd-414f-93e3-63bcc5b85b75
  resourceVersion: "156700"
  uid: 6a3f759b-2a49-425a-8473-63ee6700d83d
spec:
  endpoints:
  - dnsName: localtargets-liqo.cloud.testanim.uzmar
    recordTTL: 30
    recordType: A
    targets:
    - 10.171.61.136
    - 10.171.61.148
  - dnsName: liqo.cloud.testanim.uzmar
    labels:
      strategy: roundRobin
    recordTTL: 30
    recordType: A
    targets:
    - 10.171.61.136
    - 10.171.61.148
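
For context, both DNSEndpoints above are owned by a Gslb resource named podinfo, so a minimal sketch of the Gslb that generates them might look like this (the host and strategy are taken from the records above; the backing Service name and port are assumptions):

apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: podinfo
  namespace: podinfo
spec:
  ingress:
    rules:
    - host: liqo.cloud.testanim.uzmar
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: podinfo   # assumption: the Service backing the app
              port:
                name: http    # assumption: named port on that Service
  strategy:
    type: roundRobin
    dnsTtlSeconds: 30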

The "localtargets-liqo.cloud.testanim.uzmar" targets correctly show the node IPs for each cluster. However, as I understand, the "liqo.cloud.testanim.uzmar" targets should show all the IPs of both of my cluster's nodes. Any idea what could be going wrong?

Also, my edgeDNS resolver, running on IP 10.171.61.137 with port 30053 open, shows the following:

dig @10.171.61.137 -p 30053 gslb-ns-eu-cloud.testanim.uzmar +short
10.181.61.146
10.181.61.21
10.181.61.22
10.181.61.145
10.181.61.20

and

dig @10.171.61.137 -p 30053 gslb-ns-us-cloud.testanim.uzmar +short
10.171.61.136
10.171.61.148

but when I try to resolve liqo.cloud.testanim.uzmar directly, I get the following:

dig @10.171.61.137 -p 30053 liqo.cloud.testanim.uzmar
;; communications error to 10.171.61.137#30053: timed out
;; communications error to 10.171.61.137#30053: timed out

; <<>> DiG 9.18.12-0ubuntu0.22.04.1-Ubuntu <<>> @10.171.61.137 -p 30053 liqo.cloud.testanim.uzmar
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 56972
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 075cebd109008789010000006464af20a7c41b2bff774b44 (good)
;; QUESTION SECTION:
;liqo.cloud.testanim.uzmar.     IN      A

;; Query time: 99 msec
;; SERVER: 10.171.61.137#30053(10.171.61.137) (UDP)
;; WHEN: Wed May 17 11:40:04 BST 2023
;; MSG SIZE  rcvd: 82
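
A check along these lines should show whether the edge DNS can actually reach the nameservers it delegates the zone to (the zone cut and port 53 on the node are assumptions on my side):

# which nameservers does the edge DNS delegate the k8gb zone to?
dig @10.171.61.137 -p 30053 cloud.testanim.uzmar NS +short

# does one of those hosts answer DNS itself on plain port 53?
dig @10.181.61.145 liqo.cloud.testanim.uzmar +short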

I wonder if there is something I may be misunderstanding. Thanks for any help or advice you can provide!

I managed to make it work by exposing the k8gb-coredns service through a LoadBalancer service instead of the nginx ingress.
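
In case it is useful to anyone else, the change boils down to exposing the bundled CoreDNS through a LoadBalancer in the k8gb Helm values; a minimal sketch, assuming the chart passes this value through to the coredns subchart (which is how I understand it works):

# values.yaml for the k8gb Helm chart (sketch, not my exact config)
coredns:
  # expose k8gb's CoreDNS on a LoadBalancer IP so the edge DNS can reach it
  # on plain port 53 instead of going through the nginx ingress
  serviceType: LoadBalancer

After upgrading the release, kubectl -n k8gb get svc k8gb-coredns should show the external IP to point the zone delegation at (the k8gb namespace is an assumption).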

Hi @uzmargomez, thanks a lot for trying out the project; I'm happy that it worked! I wasn't fast enough here, but please don't hesitate to ask if you have any more questions :)

Thanks @ytsarev! No problem at all, this is a really useful and great project!