telepresenceio / telepresence

Local development against a remote Kubernetes or OpenShift cluster

Home Page:https://www.telepresence.io

Traffic-agent using incompatible iptables-variant (legacy instead of nft)

glooms opened this issue · comments

The OS we're using for our clusters, AlmaLinux 9.1, doesn't support iptables-legacy, only iptables-nft, which surfaced as an unintelligible error asking us to upgrade the kernel.

The fix was quite simple: we built a derivative of Dockerfile.traffic that symlinks iptables to /sbin/xtables-nft-multi (what iptables originally pointed to), like so:

FROM docker.io/datawire/tel2:2.18.0

RUN ln -sf /sbin/xtables-nft-multi /sbin/iptables
RUN ln -sf /sbin/xtables-nft-multi /sbin/ip6tables

ENTRYPOINT ["traffic"]
CMD []

Although the fix is quite simple, it was very hard to find, so it might be good to solve it upstream or at the very least add it to your troubleshooting section, as it could be useful for others.
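For anyone applying the same workaround, the patched image can be built and wired in roughly like this. The registry name and the agent.image.* Helm values below are assumptions, not taken from this thread; verify the exact keys against the chart version you have installed:

```shell
# Build and push the patched traffic-agent image
# (registry and tag are illustrative placeholders).
docker build -t my-registry.example.com/tel2-nft:2.18.0 -f Dockerfile .
docker push my-registry.example.com/tel2-nft:2.18.0

# Point the traffic manager at the custom agent image.
# The agent.image.* value names are an assumption; check your chart's values.
telepresence helm upgrade \
  --set agent.image.registry=my-registry.example.com \
  --set agent.image.name=tel2-nft \
  --set agent.image.tag=2.18.0
```

After that, new intercepts should inject the nft-based agent instead of the stock image.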

Thanks, @glooms . That's a great bit of feedback and a fix. Much appreciated.

Hi @glooms , do you recall what error you got when running into this? It would be great to know if you still have access to that or remember what you first saw.

Hello @cindymullins-dw, the error I get is that the init-container created when running telepresence intercept <service> crashes with the following logs:

2024-04-02 08:31:52.6682 info    Traffic Agent Init v2.18.0
2024-04-02 08:31:52.6771 error   failed to clear chain TEL_INBOUND_TCP: running [/sbin/iptables -t nat -N TEL_INBOUND_TCP --wait]: exit status 3: iptables v1.8.10 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

2024-04-02 08:31:52.6771 error   quit: failed to clear chain TEL_INBOUND_TCP: running [/sbin/iptables -t nat -N TEL_INBOUND_TCP --wait]: exit status 3: iptables v1.8.10 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
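The "(legacy)" in the log above is the telltale sign: the agent's iptables binary is the legacy variant, while the node's kernel only supports the nf_tables backend. A quick way to check which variant a binary reports is to look at the parenthesized tag in its --version output; the helper below is just a sketch of that string check, using sample strings rather than a live binary:

```shell
# Extract the backend variant from `iptables --version` style output,
# e.g. "iptables v1.8.10 (legacy)" or "iptables v1.8.10 (nf_tables)".
backend_of() {
  case "$1" in
    *"(nf_tables)"*) echo nf_tables ;;
    *"(legacy)"*)    echo legacy ;;
    *)               echo unknown ;;
  esac
}

# On a real node/container you would run:
#   backend_of "$(iptables --version)"
backend_of "iptables v1.8.10 (legacy)"      # → legacy
backend_of "iptables v1.8.10 (nf_tables)"   # → nf_tables
```

If the container reports legacy while the host kernel only has nf_tables, you hit exactly this issue.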

My cluster's worker nodes are having the same issue after we upgraded the kernel from 4.19.91 to 5.10.134.