caicloud / loadbalancer-controller

Kubernetes loadbalancer controller to provision ingress controller dynamically

FullNat

zjx-caicloud opened this issue

fullnat

Under FullNAT mode, LVS replaces the source ip and port with its own ip and port, and replaces the destination ip and port with the RS's ip and port. In order not to lose the client ip, FullNAT adds an option to the TCP packet to store the client ip. When the real server receives the packet, it recovers the client ip through the toa module of the kernel. It seems to meet the requirement? @zoumo

I will set up FullNAT with Kubernetes in the next couple of days to test whether it works.
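For reference, a rough sketch of how the client address is carried, assuming the TOA option layout defined by taobao/toa (option kind 254, length 8: a 2-byte port followed by a 4-byte IPv4 address). The function and sample bytes below are illustrative only; in practice the toa kernel module parses this for you on the real server.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// parseTOA walks a slice of TCP option bytes and extracts the client
// address carried in a TOA option (kind 254, length 8), as defined by
// taobao/toa. It returns nil if no TOA option is present.
func parseTOA(opts []byte) *net.TCPAddr {
	for i := 0; i < len(opts); {
		kind := opts[i]
		switch kind {
		case 0: // end of option list
			return nil
		case 1: // NOP, single byte
			i++
		default:
			if i+1 >= len(opts) {
				return nil
			}
			length := int(opts[i+1])
			if length < 2 || i+length > len(opts) {
				return nil
			}
			if kind == 254 && length == 8 {
				port := binary.BigEndian.Uint16(opts[i+2 : i+4])
				ip := net.IPv4(opts[i+4], opts[i+5], opts[i+6], opts[i+7])
				return &net.TCPAddr{IP: ip, Port: int(port)}
			}
			i += length
		}
	}
	return nil
}

func main() {
	// Hypothetical option bytes: two NOPs, then a TOA option carrying 1.2.3.4:56789.
	opts := []byte{1, 1, 254, 8, 0xDD, 0xD5, 1, 2, 3, 4}
	fmt.Println(parseTOA(opts)) // 1.2.3.4:56789
}
```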

It's workable in theory. But the real server can't get the real source ip by itself (see taobao/toa).

It puzzles me... Do you mean that FullNAT alone can't ensure that real servers get the real ip, but if we do what taobao/toa does, the real servers can get the real source ip?

I mean the workflow is workable. But the main concern is how to get the real source ip on the real server.

FYI:

  1. For TCP, the toa module can help you get the source ip from TCP packets. You just need to install it into the kernels of the real servers (see the sketch after this list).
  2. For UDP, https://github.com/yubo/ip_vs_ca may help you.
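To illustrate the "just install it" part: a minimal sketch of a real server application, assuming the toa module is loaded on the real server's kernel (the listen port :8080 is arbitrary). Because toa restores the original client address for accepted connections, an unmodified application sees the real client ip; without toa, it would see the LVS local ip used by FullNAT.

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Plain TCP server that echoes the peer address back to the client.
	// With the toa module loaded, RemoteAddr() reports the original
	// client ip:port recovered from the TOA option; without it, the
	// LVS local address shows up instead.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			fmt.Fprintf(c, "you are %s\n", c.RemoteAddr())
		}(conn)
	}
}
```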

Got it. I will try it.

:-D
For FullNAT:
If the ingress controller is running on the container network, I think we should install the toa module into the ingress controller's image.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close