Container hangs on startup on iptables-legacy system
thesix opened this issue
Hi,
I have been fiddling around with this for a while now and finally found out that the command iptables-nft -L in docker-ipv6nat-compat hangs indefinitely on our Ubuntu system. This is what I see when I log on to the container:
PID   USER     TIME   COMMAND
  1   root     0:00   {docker-ipv6nat-} /bin/sh /docker-ipv6nat-compat -cleanup -debug
  7   root     0:04   iptables-nft -L
  8   root     0:00   grep -q Chain DOCKER
 14   root     0:00   /bin/sh
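I haven't looked at the exact entrypoint script, but judging from the process list the backend check seems to boil down to something like this sketch (not the actual script):

#!/bin/sh
# Sketch only: pick the iptables backend by checking which variant
# can see Docker's DOCKER chain.
if iptables-nft -L 2>/dev/null | grep -q 'Chain DOCKER'; then
    backend=nft      # nft backend owns the Docker rules
else
    backend=legacy   # fall back to iptables-legacy
fi
echo "using iptables-$backend"

On our system the iptables-nft -L in that pipeline never returns, so the entrypoint never gets around to starting /docker-ipv6nat.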
When I kill PID 7, ipv6nat starts properly and everything seems to work. The container then looks like this:
PID   USER     TIME   COMMAND
  1   root     0:00   /docker-ipv6nat -cleanup -debug
 14   root     0:00   /bin/sh
We run Ubuntu 18.04.3 LTS here, with Docker version 19.03.4, build 9013bf583a, installed.
Cheers,
T.
Thanks. Could you try running iptables-nft -L yourself from within the container (docker exec -it ...) to see what's going on?
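Something like this, assuming your container is named ipv6nat (substitute whatever docker ps shows):

docker exec -it ipv6nat sh
/ # iptables-nft -L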
It also hangs. Plain iptables -L spits out the list of rules as expected.
That's strange, I'll have to look into that. As a workaround, you should be able to use --entrypoint /docker-ipv6nat in your Docker run command (or entrypoint: /docker-ipv6nat if you're using Docker Compose). Let me know if that works for you.
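For a plain docker run setup that would look roughly like this (just a sketch, I'm guessing at your caps and mounts here, so adjust it to however you currently start the container):

docker run -d --name ipv6nat \
  --cap-add NET_ADMIN --cap-add SYS_MODULE \
  --network host \
  --entrypoint /docker-ipv6nat \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /lib/modules:/lib/modules:ro \
  robbertkl/ipv6nat -cleanup -debug

With --entrypoint set, the arguments after the image name go straight to /docker-ipv6nat, so the compat wrapper is skipped entirely.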
Actually, are you sure your docker run command is correct? I've just tested, and if you start the ipv6nat container without --privileged or --cap-add=NET_ADMIN --cap-add=SYS_MODULE, the iptables-nft command will hang. Giving the container the correct privileges should resolve the issue. Can you confirm?
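If you want to double-check what the running container was actually granted, something along these lines should show it (again assuming the container is named ipv6nat):

docker inspect --format '{{.HostConfig.Privileged}} {{.HostConfig.CapAdd}}' ipv6nat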
This is the portion of the docker-compose.yml that I use to start the service:
ipv6nat:
  image: robbertkl/ipv6nat
  hostname: "ipv6nat"
  command: "-cleanup -debug"
  network_mode: host
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /lib/modules:/lib/modules:ro
I just added entrypoint: /docker-ipv6nat and now it works like a charm.
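For the record, the service definition with the workaround applied now looks roughly like this:

ipv6nat:
  image: robbertkl/ipv6nat
  hostname: "ipv6nat"
  entrypoint: /docker-ipv6nat
  command: "-cleanup -debug"
  network_mode: host
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /lib/modules:/lib/modules:ro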
Your yml looks good to me, so I don't understand why it would be hanging. I could reproduce the hang with incorrect docker arguments, but that doesn't seem to be the case for you.
Glad to hear the workaround is working. I'll close the issue for now and keep an eye out for similar reports.