caicloud / loadbalancer-controller

Kubernetes loadbalancer controller to provision ingress controllers dynamically

Update api to v1alpha2

zoumo opened this issue · comments

commented

Following upstream Kubernetes changes:

  • migrate from TPR (ThirdPartyResource) to CRD (CustomResourceDefinition)

API update:

  • move the API to https://github.com/caicloud/clientset
  • automatically generate clients and informers
  • update the API to v1alpha2 and delete unused fields
  • change the API group from net.alpha.caicloud.io to loadbalance.caicloud.io
  • adjust the default ports so they are below 1024
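Concretely, the TPR-to-CRD migration means registering a CustomResourceDefinition under the new API group. A minimal sketch of what that manifest might look like, using the v1beta1 CRD API current at the time; the resource kind and plural/singular names here are guesses for illustration, not taken from the repo:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: loadbalancers.loadbalance.caicloud.io
spec:
  group: loadbalance.caicloud.io
  version: v1alpha2
  scope: Namespaced
  names:
    plural: loadbalancers
    singular: loadbalancer
    kind: LoadBalancer
```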

Repo convention:

  • readjust directory structure

Bug fix:

  • fix #25
  • update nginx ingress controller to 0.9.0-beta.15, fix #33
  • adjust the proxy read/send timeouts to 10 minutes
commented

v1alpha2 api spec

spec:
  nodes:
    names:
    - kube-master-1
    - kube-master-2
    - kube-master-3
    replica: 3
    taintEffect: PreferNoSchedule
  providers:
    ipvsdr:
      scheduler: rr
      vip: 192.168.18.60
  proxy:
    type: nginx
    config:
      proxy-read-timeout: "600"
      proxy-send-timeout: "600"
      use-proxy-protocol: "false"
    resources:
      limits:
        cpu: "1"
        memory: 1000Mi
      requests:
        cpu: 300m
        memory: 256Mi
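For readers mapping the YAML above to code, here is a hedged sketch of Go types that could deserialize this spec. The type and field names are assumptions for illustration; the authoritative definitions live in github.com/caicloud/clientset and may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical types mirroring the v1alpha2 spec above; only a sketch,
// not the real clientset definitions.

type NodesSpec struct {
	Names       []string `json:"names"`
	Replica     int32    `json:"replica"`
	TaintEffect string   `json:"taintEffect"`
}

type IpvsdrProvider struct {
	Scheduler string `json:"scheduler"`
	VIP       string `json:"vip"`
}

type ProvidersSpec struct {
	Ipvsdr *IpvsdrProvider `json:"ipvsdr,omitempty"`
}

type ProxySpec struct {
	Type   string            `json:"type"`
	Config map[string]string `json:"config,omitempty"`
}

type LoadBalancerSpec struct {
	Nodes     NodesSpec     `json:"nodes"`
	Providers ProvidersSpec `json:"providers"`
	Proxy     ProxySpec     `json:"proxy"`
}

// defaultSpec builds the example spec from the comment above.
func defaultSpec() LoadBalancerSpec {
	return LoadBalancerSpec{
		Nodes: NodesSpec{
			Names:       []string{"kube-master-1", "kube-master-2", "kube-master-3"},
			Replica:     3,
			TaintEffect: "PreferNoSchedule",
		},
		Providers: ProvidersSpec{
			Ipvsdr: &IpvsdrProvider{Scheduler: "rr", VIP: "192.168.18.60"},
		},
		Proxy: ProxySpec{
			Type: "nginx",
			Config: map[string]string{
				"proxy-read-timeout": "600",
				"proxy-send-timeout": "600",
				"use-proxy-protocol": "false",
			},
		},
	}
}

func main() {
	// Round-trip the spec through JSON to show the serialized form.
	out, err := json.MarshalIndent(defaultSpec(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```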

@kdada add task labels and track in project APP

@zoumo Are we still using Deployment?

Does spec.nodes.replica equal len(spec.nodes.names)?

commented

@ddysher
I think we need precise control over the number of loadbalancer replicas.

Consider the following case:

  1. HPA for loadbalancer
  2. the user only specifies the replicas and lets the controller choose nodes from a candidate pool

A Deployment is easier to scale to a precise replica count.

commented

@kdada
spec.nodes.replica doesn't take effect yet.

@zoumo can't we dynamically taint / un-taint nodes to control the replica count and which nodes run the pods?

There are benefits of a daemonset that a deployment can't provide: guaranteed scheduling, easier marking as a critical addon, starting before other pods, etc. LB is the exact use case for a daemonset: label/taint the nodes, then run the daemonset.

My reservation about using daemonset was due to internal services; if that's no longer supported, I don't see compelling reasons for using deployment now. WDYT?
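The taint / un-taint idea above can be sketched with plain kubectl; the node name and taint key below are illustrative, not the controller's actual keys:

```shell
# Mark a node as a load-balancer node; PreferNoSchedule matches the
# taintEffect in the spec above (other pods prefer to avoid the node).
kubectl taint nodes kube-master-1 lb.caicloud.io/dedicated=true:PreferNoSchedule

# "Scale down": remove the taint again (the trailing "-" deletes it).
kubectl taint nodes kube-master-1 lb.caicloud.io/dedicated:PreferNoSchedule-
```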

commented

It seems that LB is the exact use case for a daemonset, but there are still some issues to discuss for a future version.

Unfortunately, there is too much work in replacing Deployment with DaemonSet to catch up with this release cycle (11.30).

I will consider it in the next release.

you mean, to not miss the release cycle? :)

Unfortunately, there is too much work in replacing Deployment with DaemonSet to miss the release cycle (11.30).

LGTM if it's under ur radar.

commented

Aha, that is a grammar mistake.

囧 (ノへ ̄、) (facepalm)

I meant that we couldn't catch up with this release date.