moby / libnetwork

networking for containers


IPv4 and IPv6 addresses are not bound by default anymore

terencehonles opened this issue

After the change in #2604, specifically the change:

1. Allocate either an IPv4 and/or IPv6 Port Binding (HostIP, HostPort, ContainerIP, ContainerPort) based on the input and system parameters
2. Update the userland proxy as well as the dummy proxy (inside the port mapper) to specifically listen on either the IPv4 or IPv6 network

The docker containers will no longer respond to requests on an IPv6 address when bound to 0.0.0.0. While this change probably makes sense, because the container is not actually seeing the IPv6 request but the proxy's address instead (i.e. 172.18.0.1), this will be unexpected to users.

This seems likely to be the problem mentioned in the following comments on #2604:

This looks like a breaking change, since until now docker-proxy has been binding to both v4 and v6 addresses if both are available on the system, which I believe made sense.

@vin01 in #2604 (comment)

According to the Changelog [1], this is the only network-related change in 20.10.2.
Now my IPv6 port forwarding is completely broken.
Downgrading to 20.10.1 fixed it.

There are no docker-proxy processes started; 20.10.1 starts a process for each port forwarding.

Example:

sudo docker run \
    --hostname gitlab \
    --env 'GITLAB_PORT=443' \
    --env 'GITLAB_HTTPS=true' \
    --publish [<ip>]:2222:22 \
    --publish [<ip>]:8080:80 \
    --publish [<ip>]:8443:443 \
    --name gitlab \
    --restart always \
    --volume /srv/gitlab/config:/etc/gitlab:Z \
    --volume /srv/gitlab/logs:/var/log/gitlab:Z \
    --volume /srv/gitlab/data:/var/opt/gitlab:Z \
    gitlab/gitlab-ce:latest

[1] https://docs.docker.com/engine/release-notes/

@fbezdeka in #2604 (comment)

and

I confirm, I have the same problem

@pokotiV in #2604 (comment)

While I did see docker-proxy processes when running systemctl status docker:

/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.18.0.2 -container-port 443
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.18.0.2 -container-port 80

I was getting a connection refused by NGINX. Rolling back to 20.10.1 (removing #2604) does change this to what I expect. In the long term I will look at properly exposing IPv6 addresses, but this change probably should not have surprised users without at least a deprecation warning.

@terencehonles by default dockerd's default binding address (--ip) is set to 0.0.0.0, so publishing a port using -p 80:80 should also bind to both the IPv4 and IPv6 addresses (

// Setup a binding to "::" if Host IP is empty and the default binding IP is 0.0.0.0
)

If either the default binding address set for dockerd or the Host-IP specified in -p <Host-IP>:<Host-Port>:<Container-Port> is an IPv4 address, we only bind to the IPv4 address family.

Will document this change in behavior in the docs, thanks
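
To illustrate the rule described above, here is a minimal sketch (the ports and image are arbitrary examples; the expected listeners assume the default --ip of 0.0.0.0 and the userland proxy enabled):

# No host IP given: with the default binding address (0.0.0.0), docker-proxy
# is expected to listen on both 0.0.0.0:8080 and [::]:8080.
docker run -d --publish 8080:80 nginx

# Explicit IPv4 host IP: only an IPv4 listener is created.
docker run -d --publish 127.0.0.1:8081:80 nginx

# Check which address families the proxies actually bound to.
sudo ss -tlnp | grep docker-proxy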

As already posted to moby/moby#41858, IPv6 forwarding seems to be completely broken (even when using an IPv6 address as host address).

So I guess we have two problems here:

  • IPv6 port forwarding broken
  • No IPv6 port forwarding by default (= when no host IP specified)

Maybe the problems have to be solved in different projects, so moby/libnetwork and moby/moby might be affected.

To be complete once again, this does NOT work anymore with 20.10.2 but worked with 20.10.1:
(2a03:dead:beef::1 is one of the host's IPv6 addresses)

sudo docker run \
    --hostname gitlab \
    --env 'GITLAB_PORT=443' \
    --env 'GITLAB_HTTPS=true' \
    --publish [2a03:dead:beef::1]:2222:22 \
    --publish [2a03:dead:beef::1]:8080:80 \
    --publish [2a03:dead:beef::1]:8443:443 \
    --name gitlab \
    --restart always \
    --volume /srv/gitlab/config:/etc/gitlab:Z \
    --volume /srv/gitlab/logs:/var/log/gitlab:Z \
    --volume /srv/gitlab/data:/var/opt/gitlab:Z \
    gitlab/gitlab-ce:latest

@fbezdeka IPv6 port forwarding seems to be working fine

[vagrant@centos8 ~]$ sudo docker version
DEBU[2021-01-06T00:22:23.230790315Z] Calling HEAD /_ping                          
DEBU[2021-01-06T00:22:23.231418389Z] Calling GET /v1.41/version                   
Client: Docker Engine - Community
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        2291f61
 Built:             Mon Dec 28 16:17:40 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8891c58
  Built:            Mon Dec 28 16:15:09 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Dockerd config

[vagrant@centos8 ~]$ cat /etc/docker/daemon.json 
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "userland-proxy": true 
}

Binding to a specific IPv6 host address

ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:72:fe:6e brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
       valid_lft 85299sec preferred_lft 85299sec
    inet6 2001:db8:0:f101::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe72:fe6e/64 scope link 
       valid_lft forever preferred_lft forever

docker run --publish [2001:0db8:0:f101::1]:8080:80 -d nginx

Making sure it works only for IPv6

curl -I http://[2001:0db8:0:f101::1]:8080
HTTP/1.1 200 OK
Server: nginx/1.19.4
Date: Wed, 06 Jan 2021 00:24:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Oct 2020 15:09:20 GMT
Connection: keep-alive
ETag: "5f983820-264"
Accept-Ranges: bytes

curl -I http://10.0.2.15:8080
curl: (7) Failed to connect to 10.0.2.15 port 8080: Connection refused

Now, since the default binding address is not specified in the dockerd config, it is 0.0.0.0 and we would bind to both IPv4 and IPv6 addresses if we did a port publish without specifying the host IP:

sudo docker run --publish 9090:80 -d nginx

curl -I http://[2001:0db8:0:f101::1]:9090
HTTP/1.1 200 OK
Server: nginx/1.19.4
Date: Wed, 06 Jan 2021 00:26:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Oct 2020 15:09:20 GMT
Connection: keep-alive
ETag: "5f983820-264"
Accept-Ranges: bytes

curl -I http://10.0.2.15:9090
HTTP/1.1 200 OK
Server: nginx/1.19.4
Date: Wed, 06 Jan 2021 00:30:11 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Oct 2020 15:09:20 GMT
Connection: keep-alive
ETag: "5f983820-264"
Accept-Ranges: bytes

@terencehonles by default dockerd's default binding address (--ip) is set to 0.0.0.0, so publishing a port using -p 80:80 should also bind to both the IPv4 and IPv6 addresses (

// Setup a binding to "::" if Host IP is empty and the default binding IP is 0.0.0.0

)
If either the default binding address set for dockerd or the Host-IP specified in -p <Host-IP>:<Host-Port>:<Container-Port> is an IPv4 address, we only bind to the IPv4 address family.

Will document this change in behavior in the docs, thanks

@arkodg that's not what I'm seeing. I've not set the default IP address (so it's 0.0.0.0), and as you suggest it should be binding to both IPv4 and IPv6.

One thing to note is that I have no docker daemon config, so I'm running with whatever the defaults are.

[vagrant@centos8 ~]$ cat /etc/docker/daemon.json 
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "userland-proxy": true 
}

When explicitly binding an IPv6 address I see the error @fbezdeka describes:

ubuntu@ip-10-0-1-213:~$ sudo docker run --publish [XXXX::XXX:XXXX:fef8:2eb5]:8080:80 nginx

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration                                                                                       
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

and checking:

ubuntu@ip-10-0-1-213:~$ curl -vv http://[XXXX::XXX:XXXX:fef8:2eb5]:8080                                                                                                                

*   Trying fe80::42f:20ff:fef8:2eb5:8080...                                                                                                                                            
* TCP_NODELAY set                                                                                                                                                                      
* Immediate connect fail for XXXX::XXX:XXXX:fef8:2eb5: Invalid argument                                                                                                                
* Closing connection 0                                                                                                                                                                 
curl: (7) Couldn't connect to server  
ubuntu@ip-10-0-1-213:~$ sudo systemctl status docker

● docker.service - Docker Application Container Engine                                                                                                                                 
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)                                                                                              
     Active: active (running) since Wed 2021-01-06 02:33:54 UTC; 6min ago                                                                                                              
TriggeredBy: ● docker.socket                                                                                                                                                           
       Docs: https://docs.docker.com                                                                                                                                                   
   Main PID: 9151 (dockerd)                                                                                                                                                            
      Tasks: 13                                                                                                                                                                        
     Memory: 211.2M                                                                                                                                                                    
     CGroup: /system.slice/docker.service                                                                                                                                              
             └─9151 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock 

With IPv4 it is:

     CGroup: /system.slice/docker.service                                                                                                                                              
             ├─ 9151 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock                                                                                            
             └─10922 /usr/bin/docker-proxy -proto tcp -host-ip 10.0.1.213 -host-port 8080 -container-ip 172.17.0.2 -container-port 80 

But without a host IP I'm now noticing (and this might be because of the previous commands):

     CGroup: /system.slice/docker.service                                                                                                                                              
             ├─ 9151 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock                                                                                            
             ├─11124 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8080 -container-ip 172.17.0.2 -container-port 80
             └─11189 set-ipv6 /var/run/docker/netns/97f554c6ed48 all false

but I'd need to look at what the set-ipv6 command is doing, since I didn't notice that on runs where I just used the default 0.0.0.0 (I'm not sure if systemctl truncated it, though).

@arkodg
I think the issue is that previously the userland proxy was bound to both IPv4 and IPv6 even if the container network only had IPv4.
(This causes something like [IPv6] -> host -> userland proxy -> [IPv4] -> container)

I think the containerIPv6 == nil check in

func (n *bridgeNetwork) validatePortBindingIPv6(bnd *types.PortBinding, containerIPv6, defHostIP net.IP) bool {
	// Return early if there is no IPv6 container endpoint
	if containerIPv6 == nil {
		return false
	}
	// Return early if there is a valid Host IP, which is a IPv4 address
	if len(bnd.HostIP) > 0 && bnd.HostIP.To4() != nil {

causes the userland proxy to only be created if the container has an IPv6 address (which was not required in the past).
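
If this diagnosis is right, a quick way to see which side of that check a container falls on is to look at whether it has an IPv6 endpoint at all; a rough sketch, with the container name as a placeholder:

# Prints the container's global IPv6 address per network; empty output means
# there is no IPv6 endpoint, so validatePortBindingIPv6 would return false.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{"\n"}}{{end}}' <container>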

yah @bboehmke, I wasn't aware that users were relying on docker-proxy to forward IPv6 host traffic to an IPv4 container interface, and I cannot find any data which backs that intent in the design/implementation of docker-proxy.

The docker containers will no longer respond to requests on an IPv6 address when bound to 0.0.0.0. While this change probably makes sense, because the container is not actually seeing the IPv6 request but the proxy's address instead (i.e. 172.18.0.1), this will be unexpected to users.

That's why I had said ^^^, since I'm not sure if that was explicitly expected to happen (and it may not be documented), but if the stance is "sorry, you're out of luck" it needs to be documented as a breaking change; otherwise it should be allowed and possibly marked as deprecated now that IPv6 support is gaining traction.

I'm guessing the problem @fbezdeka and I are seeing is that neither of us have the daemon config:

{
  "ipv6": true,
  "fixed-cidr-v6": "..."
}

so the containers are not getting IPv6 addresses and are then falling through the check @bboehmke is pointing out.
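
For anyone who wants to test that theory, a sketch of the daemon config involved, reusing the values from @arkodg's example above (fd00::/80 is just the example ULA range from that config, not a recommendation):

cat /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
# then restart the daemon and recreate the containers
sudo systemctl restart docker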

I'm guessing the problem @fbezdeka and I are seeing is that neither of us have the daemon config:

{
  "ipv6": true,
  "fixed-cidr-v6": "..."
}

Right! No daemon config on my systems either, so I'm using the defaults, I guess.

In my eyes that is really a breaking change, so something for the next major release and nothing that should be part of a minor release.

yah @bboehmke, I wasn't aware that users were relying on docker-proxy to forward IPv6 host traffic to an IPv4 container interface, and I cannot find any data which backs that intent in the design/implementation of docker-proxy.

It did work that way by default, so whether or not this was intended in the implementation of docker-proxy isn't really relevant to users. The change makes sense, but it's still a breaking one for users who relied on this default behaviour without knowing it was not intended.

yah @bboehmke, I wasn't aware that users were relying on docker-proxy to forward IPv6 host traffic to an IPv4 container interface, and I cannot find any data which backs that intent in the design/implementation of docker-proxy.

It did work that way by default, so whether or not this was intended in the implementation of docker-proxy isn't really relevant to users. The change makes sense, but it's still a breaking one for users who relied on this default behaviour without knowing it was not intended.

+1

I also see forwarding IPv6 traffic -> container's IPv4 interface as insecure and unexpected when --ipv6 is disabled on dockerd.
I'll raise this issue in the maintainers' meeting tomorrow and share more thoughts on whether to continue supporting this undocumented/unintentional feature in docker-proxy, or to instead print a deprecation warning.

One thing that came to my mind is the option to enable IPv6 by default with a private subnet, like it is already done for IPv4 (maybe only if the host also has an IPv6 address).

I have no idea if this is an option or if it makes things even worse, but maybe it is worth thinking about.


This is a serious issue for me, as I have an IPv6 stack on the main host interface but an IPv4 stack in the containers. All my v6 ports were correctly "proxied" to the v4 containers for years.

Is this an issue that will be fixed, or an architectural change that will not be?

I'm currently downgraded to 20.10.1 to be able to access my v4 containers through the v6 host network.


I also see forwarding IPv6 traffic -> container's IPv4 interface as insecure and unexpected when --ipv6 is disabled on dockerd.
I'll raise this issue in the maintainers' meeting tomorrow and share more thoughts on whether to continue supporting this undocumented/unintentional feature in docker-proxy, or to instead print a deprecation warning.

@arkodg I'm purely a user here, but security is a "generic" argument that might be applied to everything; I could also argue that changing something that has worked out of the box for years into something that requires the (mis)use of other components (proxies, iptables, or other magic) is a clear security disadvantage.

I'm using docker to forward container ports to host ports. This is what I read from the docker CLI, and this is what I expect:

  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports

I also see forwarding IPv6 traffic -> container's IPv4 interface as insecure and unexpected when --ipv6 is disabled on dockerd.
I'll raise this issue in the maintainers' meeting tomorrow and share more thoughts on whether to continue supporting this undocumented/unintentional feature in docker-proxy, or to instead print a deprecation warning.

Could you please elaborate on the associated security risk?

It was actually a useful and desirable thing to forward IPv6 host traffic to IPv4 containers. Given the transient nature of containers and their dynamic IP addresses, having firewall rules/security groups for them can be tricky, especially while IPv6 NAT with docker had issues like moby/moby#41774. Hosts with static IPv6 addresses, on the other hand, do exist and rely on security groups to avoid exposing those IPv6 interfaces to the world.

Now, since the default binding address is not specified in the dockerd config, it is 0.0.0.0 and we would bind to both IPv4 and IPv6 addresses if we did a port publish without specifying the host IP.

I still don't get what the correct way will be to have containers listen on both IPv4 and IPv6 without specifying a certain interface, and what the current workaround is (if any, other than downgrading).

Is there any info from the developer meeting?

Edit: BTW, afaict, docker-compose does not support IPv6 with v3 docker-compose.yml...

Now, since the default binding address is not specified in the dockerd config, it is 0.0.0.0 and we would bind to both IPv4 and IPv6 addresses if we did a port publish without specifying the host IP.

I still don't get what the correct way will be to have containers listen on both IPv4 and IPv6 without specifying a certain interface, and what the current workaround is (if any, other than downgrading).

Is there any info from the developer meeting?

For me, I just worked around this by adding an IPv6 ULA prefix (https://en.wikipedia.org/wiki/Unique_local_address) without a default gateway to the docker network the container is using.
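
For reference, the kind of commands involved look roughly like this; the network name and the ULA subnet are placeholders, and this may not match @bboehmke's exact setup (in particular the "no default gateway" part):

# Create a network with an IPv6 ULA subnet so the container gets an IPv6 address
# and the IPv6 port binding is set up again.
docker network create --ipv6 --subnet fd00:dead:beef::/64 mynet
docker run -d --network mynet --publish 8080:80 nginx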

For me, I just worked around this by adding an IPv6 ULA prefix (https://en.wikipedia.org/wiki/Unique_local_address) without a default gateway to the docker network the container is using.

I tried to enable IPv6 via daemon.json, but failed, maybe because of docker-compose.


I tried to enable IPv6 via daemon.json, but failed, maybe because of docker-compose.

For docker-compose v2 you have to specify an IPv6 subnet for each network manually; v3 doesn't support it at all. This is clearly not ideal.

https://docs.docker.com/compose/compose-file/compose-file-v2/#ipv4_address-ipv6_address
https://docs.docker.com/compose/compose-file/compose-file-v3/#ipv4_address-ipv6_address
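
For reference, a sketch of what that per-network configuration looks like in the v2 file format (the version, names and subnets here are illustrative only; check the linked docs for your compose version):

cat docker-compose.yml
version: "2.1"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - app
networks:
  app:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.28.0.0/24
        - subnet: fd00:dead:beef::/64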

One thing that came to my mind is the option to enable IPv6 by default with a private subnet, like it is already done for IPv4 (maybe only if the host also has an IPv6 address).

I guess that's the only option to provide the same "plug-and-play" experience for IPv4 and IPv6.

Some feedback from containers deployed in production with Ansible 2.10.4.

The Ansible docker_container module sets "HostIp" to "0.0.0.0" by default. See here.
So containers defined with Ansible that used to work with both IPv4 and IPv6 are now IPv4 only.

I reported this here: ansible-collections/community.docker#70

I'm not sure if this is the right place to report this, but with the latest version, published ports are no longer removed after the container has been removed:

docker run --rm -p 3000:3000 alpine:latest && docker run --rm -p 3000:3000 alpine:latest

results in the following error:

docker: Error response from daemon: driver failed programming external connectivity on endpoint laughing_raman (9dbae86597efca06d6a4a6a80f65d36731c6cdd00dc3f1a72e3cf1c56dddad24): Bind for :::3000 failed: port is already allocated.

Same issue here.

# docker run --name foo --publish 8001:80 -d nginx        
253d0e6bf13cd94d09801198c29c9a7946736198d8c464c43c66760d21f8942d

# ss -tulpn|grep docker-proxy                
tcp   LISTEN 0      4096                                 0.0.0.0:8001       0.0.0.0:*    users:(("docker-proxy",pid=1152270,fd=4))    
tcp   LISTEN 0      4096                                    [::]:8001          [::]:*    users:(("docker-proxy",pid=1152279,fd=4))    

# docker stop foo
foo

# ss -tulpn|grep docker-proxy
tcp   LISTEN 0      4096                                    [::]:8001          [::]:*    users:(("docker-proxy",pid=1152279,fd=4))    

# ps aux|grep docker-proxy 
root     1152279  0.0  0.0 1149012 3972 ?        Sl   17:36   0:00 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8001 -container-ip 172.17.0.2 -container-port 80

Why do you bind on both interfaces (and start two processes) instead of binding on *:8001 like before?
I guess this could cause unexpected behavior with sysctl net.ipv6.bindv6only=1.

@seblu it was an intentional change to split up the bindings so that we could support a better port mapping mechanism, using iptables for IPv4 bindings and ip6tables for IPv6 bindings.
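
A way to see that split on a running host is to compare the NAT rules each address family received; these are plain iptables invocations, shown here only as a sketch:

# IPv4 DNAT rules created for published ports
sudo iptables -t nat -L DOCKER -n
# IPv6 DNAT rules (the chain only exists when dockerd manages ip6tables / IPv6 is enabled)
sudo ip6tables -t nat -L DOCKER -n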

Hmmm, so after an hour of googling (after hours of fixing my systems) I finally landed here, just to find out this was reported almost a month ago.

Guys, seriously, this is a BREAKING change! Any sensible project would revert the change immediately and hold it back until the next version bump.

You broke something in a sub-minor version bump (yes, not major/minor) and you're still just debating? This must be a joke...

@rpodgorny just to be fair, I think it's very likely that #2608 fixes the issue, and it is already merged. What we're waiting for is for docker to pull in the changes and then release (which I believe is moby/moby#41908). I'm sure they are trying to get the fix out as soon as possible.

Better communication would definitely be appreciated, but if I were you I'd roll back to 20.10.1 (it sounds like you did) and assess when you can move forward, since there might be other things that could have broken. It definitely wasn't what I was expecting when I upgraded my systems either, but I'm glad @arkodg was able to identify what was likely the issue, and I'll be testing the next release when I can 🙂

The just-released docker 20.10.3 doesn't ship the @arkodg fix. 😢

While the goal is understandable and the fix makes things better, this is still a breaking change, which hurts production environments with IPv6 (and requires deployment tools like Ansible to be updated). Such changes should not occur in a minor version.

Has this still not been fixed?

Or how can I reproduce the old behavior and map IPv6 addresses in the new version?

I made a workaround using socat for my server which seems to work pretty well. It can be automated, for example, as a systemd unit. Is any of you aware of any functional or security implications of doing this?

/usr/bin/socat TCP6-LISTEN:443,ipv6only=1,reuseaddr,fork TCP4:127.0.0.1:443
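
For anyone automating that, a rough systemd unit sketch (the unit name and port are assumptions; adjust the dependency to the unit that actually starts your container):

cat /etc/systemd/system/socat-ipv6-443.service
[Unit]
Description=Forward IPv6 port 443 to the IPv4 published port
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/socat TCP6-LISTEN:443,ipv6only=1,reuseaddr,fork TCP4:127.0.0.1:443
Restart=always

[Install]
WantedBy=multi-user.target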

Has this still not been fixed?

It might be fixed in libnetwork, but it will still need to be packaged and released with docker. The related issue is moby/moby#41858, and you can see it's in the high-priority column on the 20.10.x bugs / regressions board.

The just-released docker 20.10.4 does not ship a fix.

@thaJeztah / @GordonTheTurtle could you let us know why there is no fix for this IPv6 breakage introduced in a minor release?


@seblu I seem to have the same problem with the just-released 20.10.5. I'm still using the workaround from two months ago, which is downgrading docker to 20.10.1. Downgrading is uncomfortable but still works. My concern is that some package dependencies will soon break with the 20.10.1 downgrade, and that I'm missing out on (important/security) updates by staying on the earlier version.

Are there any recommendations on how to fix this using third-party options?

Looking at the thread, and at my own use case (IPv6 on the host network, IPv4 in the containers), the only appropriate way is the one described by @balert at #2607 (comment). This still requires socat installed on the host, and most likely some systemd units to start the socat proxy after the container daemon, and I'm uncertain what the performance overhead is, as socat was not designed for this particular use case. Edit: using socat to forward traffic will most likely change the source IP for any logging you do :-(

Thank you!

I've been using dockerized socat with network=host mode on deployments where a downgrade is not feasible to work around this regression. So no, you don't need socat to be installed on the host.

Still, not having this fixed after several minor version bumps only shows the complete amateurism of the docker core team. :-(

@lvlts

I seem to have the same problem with the just-released 20.10.5.

From the release notes, 20.10.5 is just a docker CLI change, so yes, it would be nice if the fix for this had made the cut for that release, but it looks like it didn't 😕

This still requires socat installed on the host, and most likely some systemd units to start the socat proxy after the container daemon.

You would likely not want to start this after the docker daemon, but instead when you're starting the container in question (which should itself depend on docker). If your container is started with systemd, you could have this as an ExecStartPre step, and it should be pretty straightforward to add.

Edit: using socat to forward traffic will most likely change the source IP for any logging you do :-(

Just to be clear, you would have already lost the source IPv6 address if you were relying on the previous behavior, so this is probably not that bad of a change.

@rpodgorny

I've been using dockerized socat with network=host mode on deployments where a downgrade is not feasible to work around this regression. So no, you don't need socat to be installed on the host.

If you're using --network=host do you even need this? If the container is started on the host's network it shouldn't need the forwarding, because the host's IPv6 address is what the container's service binds to. I may be misunderstanding something, but I'm definitely surprised you have to go down that route and that this bug affects you too.

If you're using --network=host do you even need this? If the container is started on the host's network it shouldn't need the forwarding, because the host's IPv6 address is what the container's service binds to. I may be misunderstanding something, but I'm definitely surprised you have to go down that route and that this bug affects you too.

I meant I'm using network:host only for the socat container; that way I don't need to install anything on the host but I get the same functionality as the socat + systemd service workaround. The actual service containers are running isolated just as before.

So, effectively, almost all my docker-compose files now contain one additional socat container just doing the IPv6-to-IPv4 port mapping...
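
For completeness, a sketch of what such a socat side-container can look like; alpine/socat is one commonly used image for this, and the name and ports are placeholders, so treat this as an assumption rather than the exact setup described above:

# Runs on the host network and forwards IPv6 :8443 to the IPv4 port published by the service container.
docker run -d --name ipv6-proxy --network host --restart always \
    alpine/socat TCP6-LISTEN:8443,ipv6only=1,reuseaddr,fork TCP4:127.0.0.1:8443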

Updating Ubuntu 20.04 hosts that were held back on docker-ce 20.10.1 to 20.10.6 or 20.10.7 still breaks previously working IPv6 connectivity, despite #2608 being included in those releases.

Am I missing something, or is there additional configuration needed?


Updating Ubuntu 20.04 hosts that were held back on docker-ce 20.10.1 to 20.10.6 or 20.10.7 still breaks previously working IPv6 connectivity, despite #2608 being included in those releases.

Am I missing something, or is there additional configuration needed?

I have the same issue. I found the following blog post quite helpful: https://medium.com/@skleeschulte/how-to-enable-ipv6-for-docker-containers-on-ubuntu-18-04-c68394a219a2

There they mention https://github.com/robbertkl/docker-ipv6nat#usage, which automatically configures ip6tables for you, making docker-proxy superfluous.
I hope that helps; it works for me.
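
For reference, running that helper is roughly along these lines (quoted from memory of its README, so double-check the linked usage section; the docker flags themselves are standard):

docker run -d --name ipv6nat --restart unless-stopped --privileged --network host \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -v /lib/modules:/lib/modules:ro \
    robbertkl/ipv6nat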

Updating Ubuntu 20.04 hosts that were held back on docker-ce 20.10.1 to 20.10.6 or 20.10.7 still breaks previously working IPv6 connectivity, despite #2608 being included in those releases.

Am I missing something, or is there additional configuration needed?

Hm... My CentOS systems are working again with 20.10.6 deployed.

I also have Ubuntu 20.04, but it is working as expected.

The guide @comentator posted shows how to set up IPv6 properly, instead of getting IPv6 support only because the host forwarder also happens to listen on IPv6 (which is what caused the original issue).

Using curl -6 <url> to request the IPv6 address of a host running NGINX via docker, you should see that the access log does not record an IPv6 address but the IPv4 address of the docker container's gateway (in this case 172.18.0.1). If you set up IPv6 properly as the guide suggests, you should now be logging an IPv6 address (I've not checked this, as I'm OK with the IP translation and logging the wrong IP address on the host where I have it configured this way).
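
To make that check concrete, something along these lines should do (the URL and container name are placeholders):

# Request over IPv6, then look at the container's access log: without proper IPv6 setup,
# the logged client address is the bridge gateway (e.g. 172.18.0.1) rather than the caller's IPv6 address.
curl -6 -I https://<your-host>/
docker logs <nginx-container> | tail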

The issue was actually linked to my Ansible setup: ansible-collections/community.docker#70

Looking at the docker release notes, this should be solved in 20.10.6. Until AWS updates (still at 20.10.4 at the moment), I'm very grateful for @balert's socat solution :)