moby / moby

The Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

Home Page: https://mobyproject.org/


Publishing a large port range is very slow (Docker 1.6.0)

dougbtv opened this issue · comments

Description of problem: When using docker run including the -p publish flag with a large port range, it's very slow to start a container.

docker version:

Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 350a636/1.6.0
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 350a636/1.6.0
OS/Arch (server): linux/amd64

docker info:

Containers: 2
Images: 209
Storage Driver: devicemapper
 Pool Name: docker-8:6-2623196-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: 
 Metadata file: 
 Data Space Used: 6.377 GB
 Data Space Total: 107.4 GB
 Data Space Available: 101 GB
 Metadata Space Used: 10.33 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.137 GB
 Udev Sync Supported: true
 Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 3.19.7-200.fc21.x86_64
Operating System: Fedora 21 (Twenty One)
CPUs: 8
Total Memory: 7.709 GiB
Name: localhost.localdomain
ID: KPCC:PRCE:75J7:BGVS:UOGT:5H2T:55MW:HWP5:EEKN:AIAE:DRCX:TFSE
Username: [redacted]
Registry: [https://index.docker.io/v1/]

uname -a: Linux localhost.localdomain 3.19.7-200.fc21.x86_64 #1 SMP Thu May 7 22:00:21 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Environment details (AWS, VirtualBox, physical, etc.):

Physical on Fedora 21

[root@localhost docker]# docker -v
Docker version 1.6.0, build 350a636/1.6.0
[root@localhost docker]# cat /etc/redhat-release 
Fedora release 21 (Twenty One)
[root@localhost docker]# uname -a
Linux localhost.localdomain 3.19.7-200.fc21.x86_64 #1 SMP Thu May 7 22:00:21 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

How reproducible: Every time.

Steps to Reproduce:

  1. Start a Docker container using docker run with the -p publish flag and a port range; try 1000 ports, e.g. docker run -p 1000-2000:1000-2000 -id centos:centos6 /bin/bash

Actual Results: It takes a full minute to start a container with a thousand ports published, on a system where the same container normally starts in a couple of seconds.

Expected Results: Just a couple of seconds to start -- preferably almost as fast as publishing only a few ports.

Additional info: Here are some timings for how long it takes to run a container:

######## Starting with a single port published.
[root@localhost docker]# time docker run -p 80:80 -id centos:centos6 /bin/bash
01a14d3b2876196a0b209bcb94e7d86d8c32fed40247ea1d463893cc839bd471

real    0m2.740s
user    0m0.030s
sys 0m0.013s

######## Starting with a thousand ports published.

[root@localhost docker]# time docker run -p 1000-2000:1000-2000 -id centos:centos6 /bin/bash
83a672a4487c16be3f5a3ec04b35ad7a5c3a03c303a74d2b72b37fc31e446a78

real    0m56.049s
user    0m0.036s
sys 0m0.016s

######## Or how about five ports (4000-4004).

[root@localhost docker]# time docker run -p 4000-4004:4000-4004 -id centos:centos6 /bin/bash
d0253005fc198d47dda5ccb43f6a2bb1cd06ea8066130041bc87622a0e98852e

real    0m2.075s
user    0m0.033s
sys 0m0.012s
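
For anyone who wants to reproduce the comparison above, here is a minimal timing-loop sketch (the port base, range sizes, and cleanup step are illustrative choices, not from the original report):

#!/bin/bash
# Time container startup for increasing published port-range sizes,
# using the same image and flags as the examples above.
for count in 10 100 500 1000; do
    end=$((10000 + count - 1))
    echo "== Publishing ${count} ports: 10000-${end} =="
    time docker run -p 10000-${end}:10000-${end} -id centos:centos6 /bin/bash
done

# Clean up the test containers afterwards (removes every container
# created from the centos:centos6 image, so use with care):
# docker rm -f $(docker ps -aq --filter ancestor=centos:centos6)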

Closing this since it is a duplicate, and is essentially fixed.
The issue here is that for each published port Docker has to spin up a userland proxy process to proxy local traffic to the container.

In Docker 1.7 you can disable the userland proxy (which enables hairpin NAT), significantly speeding this up.
Starting and stopping a container with a very large number of published ports still takes longer than usual, but far less time than before:

real    0m 11.75s
user    0m 0.07s
sys 0m 0.00s
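
For reference, a minimal sketch of the two usual ways to disable the userland proxy (the systemctl restart assumes a systemd-based host, and daemon.json only exists on newer releases):

# Option 1: pass the flag when starting the daemon.
# On modern releases the daemon binary is dockerd; on 1.7-era
# releases the same flag was passed to the daemon command.
dockerd --userland-proxy=false

# Option 2: set it in /etc/docker/daemon.json and restart the daemon:
#   {
#       "userland-proxy": false
#   }
systemctl restart docker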

Thanks @cpuguy83, I'll move to Docker 1.7.

This is still a problem in the newest version.
Even with the userland proxy disabled, publishing multiple thousands of ports takes far too long.

I ran the following commands multiple times and always got roughly the same results:

Output of time docker run --rm alpine echo test:

test

real    0m0.848s
user    0m0.008s
sys     0m0.020s

Output of time docker run --rm -p 10000-10100:10000-10100/udp alpine echo test:

test

real    0m9.048s
user    0m0.040s
sys     0m0.012s

Output of time docker run --rm -p 10000-10200:10000-10200/udp alpine echo test:

test

real    0m16.973s
user    0m0.012s
sys     0m0.012s

Part of the output of iptables --list while the container is running:

...
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10100
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10099
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10098
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10097
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10096
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10095
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10094
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10093
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10092
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10091
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10090
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10089
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10088
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10087
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10086
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10085
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10084
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10083
ACCEPT     udp  --  anywhere             172.17.0.2           udp dpt:10082
...

Docker creates one iptables rule per port, so the more ports I forward, the longer it takes.
Can we maybe change the iptables rules to use port ranges instead?
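
For illustration, iptables can match an entire destination-port range in a single rule, which is roughly what this suggestion amounts to. A sketch using the container IP and range from the output above (the DOCKER chain and docker0 interface names follow Docker's usual bridge setup, but these rules are illustrative, not what the daemon currently generates):

# One filter rule accepting the whole UDP range to the container,
# instead of one ACCEPT rule per port:
iptables -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 \
    -p udp --dport 10000:10200 -j ACCEPT

# The matching NAT rule can also take a range; DNAT without an explicit
# port keeps the original destination port, so the range maps 1:1:
iptables -t nat -A DOCKER ! -i docker0 -p udp --dport 10000:10200 \
    -j DNAT --to-destination 172.17.0.2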

Output of docker version:

Client:
 Version:      17.06.2-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 20:00:17 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.2-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 19:59:11 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 30
 Running: 30
 Paused: 0
 Stopped: 0
Images: 35
Server Version: 17.06.2-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 188
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.8.0-58-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 62.72GiB
Name: hedo4
ID: YTAJ:4EZY:WE5J:XBUL:DNKR:ZNE5:5WRL:V6BF:ZR6W:3DLC:3KJS:KPFZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Output of cat /etc/docker/daemon.json:

{
    "userland-proxy": false
}

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical Server with Ubuntu 16.04

Please reopen this issue. Starting a container with a large port range takes forever. For applications like RTP servers, it's nigh on unusable.

I have the same problem publishing RTP ports for an Asterisk container. Creating 10k ports in iptables takes several hours.

We have the same issue, and we're only trying to create 1000 ports.

This is also true for recent Docker versions. It's basically impossible to publish 1000 ports.

We have a similar problem, also running Asterisk in Docker. It takes about 15 minutes to open up 4000 ports on Docker 18.06.1-ce.

Same issue here with Docker 19.03.5. In my case I set up an FTP server in a Docker container and had to forward the port range 40000-40999 for FTP's passive ports. It took around 70 seconds. I like the suggestion of @bdito to use iptables port ranges instead of opening every port one by one.

Same issue with Docker 19.03.8.

Same here: Asterisk needs 10000-30000/udp, but the OS goes OOM.

Same problem; coturn's default port range (-p 49152-65535:49152-65535/udp) is unusable because of this.

Same issue with Docker version 20.10.6, build 370c289.
Trying to open ports 49152-65535 for an ejabberd server consumes all memory (10 GB) and the server hangs.

Any updates on this?

Hard to consider this "closed" when so many folks have so much trouble with it. If it's really fixed, we need better documentation, and ideally Docker warning us when it looks like we're doing it wrong, with a link to helpful resources.

… and then I discovered that this repo here is not the docker repo. I'll mark my post as off-topic.

Still the same issue; can't run coturn with the recommended relay ports.

Same issue here. Installing Mosh and exposing the required 1000 UDP ports adds almost 10 minutes to the container startup time. It would be such a relief for this to be fixed; I would fix it myself if I knew how. Honestly, I could probably save time by learning how and avoiding these absurd startup times, because it takes so long.

+1
This has been open since 2015... VoIP/RTP needs large port ranges, and it's a pain not to be able to get FreeSWITCH/Asterisk running in a container. Yes, there is a workaround with manual iptables modifications, but containers are supposed to get rid of such external dependencies...

Apparently it's unfixable, so just stop wondering when it will be resolved and use a workaround or something. They've had plenty of time to address this, and nothing has happened so far.

@Netzvamp Depending on your use case, there's another workaround that could fit your needs: starting the container in the host netns (e.g. docker run -d --net=host instrumentisto/coturn). It removes network isolation between your container and the host machine, but at least there's no iptables or userland proxy involved, so it's as fast as starting any other container (~300ms on my machine). However, you might not want to do that for production use cases, and that's understandable.

@indywidualny There are a bunch of things that need to be refactored in libnetwork, including how iptables rules are managed. Fortunately, things are starting to move, so hopefully this issue will be properly addressed in the not-so-distant future 🙂

I think all these +1's are pointless since this particular issue is closed :) so probably nobody is seeing them.
#36214 looks like a newer issue where this might get addressed (not holding my breath); it only has 12 votes so far...

The plan is to handle this as part of the discussion here: #45524

To be clear, there is work being done to overhaul port mapping in Docker and solve a lot of related issues, and we are very aware of the problem with mapping large port ranges.