Bandwidth limit not working properly
xoancosmed opened this issue · comments
The bandwidth limit is not working as expected: the measured rate is much higher than the value I set via the label. Here are some examples:
- Without any limitation, as you can see, we get about 10 Gbps:
$ docker run -it mlabbe/iperf3 iperf3 -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
[ 5] local 172.17.0.2 port 49944 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.28 GBytes 11.0 Gbits/sec 12 3.06 MBytes
[ 5] 1.00-2.00 sec 1.20 GBytes 10.3 Gbits/sec 0 3.06 MBytes
[ 5] 2.00-3.00 sec 1.29 GBytes 11.1 Gbits/sec 0 3.06 MBytes
[ 5] 3.00-4.00 sec 1.33 GBytes 11.4 Gbits/sec 0 3.06 MBytes
[ 5] 4.00-5.00 sec 1.33 GBytes 11.4 Gbits/sec 0 3.06 MBytes
[ 5] 5.00-6.00 sec 1.27 GBytes 10.9 Gbits/sec 0 3.06 MBytes
[ 5] 6.00-7.00 sec 1.19 GBytes 10.2 Gbits/sec 0 3.06 MBytes
[ 5] 7.00-8.00 sec 1.13 GBytes 9.68 Gbits/sec 0 3.06 MBytes
[ 5] 8.00-9.00 sec 1.02 GBytes 8.75 Gbits/sec 0 3.06 MBytes
[ 5] 9.00-10.00 sec 882 MBytes 7.40 Gbits/sec 0 3.06 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.9 GBytes 10.2 Gbits/sec 12 sender
[ 5] 0.00-10.04 sec 11.9 GBytes 10.2 Gbits/sec receiver
- With the limit set to 1mbps (which in tc units actually means 1 MB/s), we get around 230 Mbps:
$ docker network create test5-net
7c5df3e208dd7a1faccc909848292c7bd93d24e6c871858fa10ede948f30c54f
$ docker run -it \
--net test5-net \
--label "com.docker-tc.enabled=1" \
--label "com.docker-tc.limit=1mbps" \
--label "com.docker-tc.delay=100ms" \
--label "com.docker-tc.loss=10%" \
--label "com.docker-tc.duplicate=5%" \
--label "com.docker-tc.corrupt=1%" \
mlabbe/iperf3 \
iperf3 -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
[ 5] local 172.23.0.2 port 37802 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.13 GBytes 9.67 Gbits/sec 12 3.10 MBytes
[ 5] 1.00-2.00 sec 1.10 GBytes 9.50 Gbits/sec 0 3.10 MBytes
[ 5] 2.00-3.00 sec 31.2 MBytes 263 Mbits/sec 1 2.32 MBytes
[ 5] 3.00-4.00 sec 23.8 MBytes 199 Mbits/sec 0 2.51 MBytes
[ 5] 4.00-5.00 sec 26.2 MBytes 220 Mbits/sec 0 2.67 MBytes
[ 5] 5.00-6.00 sec 27.5 MBytes 231 Mbits/sec 0 2.79 MBytes
[ 5] 6.00-7.00 sec 27.5 MBytes 231 Mbits/sec 0 2.89 MBytes
[ 5] 7.00-8.00 sec 27.5 MBytes 231 Mbits/sec 0 2.96 MBytes
[ 5] 8.00-9.00 sec 28.8 MBytes 241 Mbits/sec 0 3.02 MBytes
[ 5] 9.00-10.00 sec 30.0 MBytes 252 Mbits/sec 0 3.02 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.45 GBytes 2.11 Gbits/sec 13 sender
[ 5] 0.00-10.04 sec 2.45 GBytes 2.10 Gbits/sec receiver
- With the limit set to 256kbps (which in tc units actually means 256 KB/s), we still get around 230 Mbps:
$ docker network create test7-net
f91ccaaf402391584ba6afb17dfd1260d91df04f24b76df272c62520a1c520f1
$ docker run -it \
--net test7-net \
--label "com.docker-tc.enabled=1" \
--label "com.docker-tc.limit=256kbps" \
--label "com.docker-tc.delay=100ms" \
--label "com.docker-tc.loss=10%" \
--label "com.docker-tc.duplicate=5%" \
--label "com.docker-tc.corrupt=1%" \
mlabbe/iperf3 \
iperf3 -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
[ 5] local 172.25.0.2 port 49744 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.10 GBytes 9.44 Gbits/sec 12 3.02 MBytes
[ 5] 1.00-2.00 sec 806 MBytes 6.76 Gbits/sec 1 3.02 MBytes
[ 5] 2.00-3.00 sec 28.8 MBytes 241 Mbits/sec 0 3.02 MBytes
[ 5] 3.00-4.00 sec 28.8 MBytes 241 Mbits/sec 0 3.02 MBytes
[ 5] 4.00-5.00 sec 30.0 MBytes 252 Mbits/sec 0 3.02 MBytes
[ 5] 5.00-6.00 sec 27.5 MBytes 231 Mbits/sec 0 3.02 MBytes
[ 5] 6.00-7.00 sec 27.5 MBytes 231 Mbits/sec 0 3.02 MBytes
[ 5] 7.00-8.00 sec 28.8 MBytes 241 Mbits/sec 0 3.02 MBytes
[ 5] 8.00-9.00 sec 30.0 MBytes 252 Mbits/sec 0 3.02 MBytes
[ 5] 9.00-10.00 sec 28.8 MBytes 241 Mbits/sec 0 3.02 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.11 GBytes 1.81 Gbits/sec 13 sender
[ 5] 0.00-10.04 sec 2.11 GBytes 1.80 Gbits/sec receiver
Do you know where the problem is? I created a new network for each container; I don't know if that's right. I'm running docker-tc via its Docker Compose YAML, on Ubuntu Server 18.04 with Docker 19.03.8.
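For reference on the units used above: in tc's rate notation, suffixes ending in "bps" mean bytes per second while "bit" suffixes mean bits per second, so the factor between them is 8. A small sketch of the conversion (plain shell arithmetic, nothing docker-tc-specific; the helper name is made up for illustration):

```shell
# tc rate units: "kbit"/"mbit" are bits per second, while "kbps"/"mbps"
# are BYTES per second, i.e. 8 times larger for the same number.
# Helper: convert a byte-based rate (tc's "mbps"/"kbps") to a bit-based one.
bytes_rate_to_bits() {
  echo $(( $1 * 8 ))
}

bytes_rate_to_bits 1     # 1mbps (1 MB/s)     -> prints 8    (Mbit/s)
bytes_rate_to_bits 256   # 256kbps (256 KB/s) -> prints 2048 (kbit/s)
```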
Thanks,
Xoán
Hi again,
I've been doing more tests and I found that the problem is with the upload tests (the ones above). The following are download tests, and they seem to work fine:
- With the limit set to 256kbps (which actually means 256 KB/s) I get around 240 KB/s:
$ docker run -it \
--net test-net \
--label "com.docker-tc.enabled=1" \
--label "com.docker-tc.limit=256kbps" \
--label "com.docker-tc.delay=1ms" \
--label "com.docker-tc.loss=0%" \
--label "com.docker-tc.duplicate=0%" \
--label "com.docker-tc.corrupt=0%" \
mlabbe/iperf3 \
iperf3 -R -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
Reverse mode, remote host 172.20.35.249 is sending
[ 5] local 172.26.0.2 port 50992 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.08 GBytes 9.32 Gbits/sec
[ 5] 1.00-2.00 sec 1.17 GBytes 10.0 Gbits/sec
[ 5] 2.00-3.00 sec 686 MBytes 5.76 Gbits/sec
[ 5] 3.00-4.00 sec 235 KBytes 1.92 Mbits/sec
[ 5] 4.00-5.00 sec 254 KBytes 2.08 Mbits/sec
[ 5] 5.00-6.00 sec 238 KBytes 1.95 Mbits/sec
[ 5] 6.00-7.00 sec 242 KBytes 1.98 Mbits/sec
[ 5] 7.00-8.00 sec 238 KBytes 1.95 Mbits/sec
[ 5] 8.00-9.00 sec 238 KBytes 1.95 Mbits/sec
[ 5] 9.00-10.00 sec 233 KBytes 1.91 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.04 sec 2.93 GBytes 2.50 Gbits/sec 1623 sender
[ 5] 0.00-10.00 sec 2.92 GBytes 2.51 Gbits/sec receiver
- With the limit set to 512kbps (which actually means 512 KB/s) I get around 480 KB/s:
$ docker run -it \
--net test-net \
--label "com.docker-tc.enabled=1" \
--label "com.docker-tc.limit=512kbps" \
--label "com.docker-tc.delay=1ms" \
--label "com.docker-tc.loss=0%" \
--label "com.docker-tc.duplicate=0%" \
--label "com.docker-tc.corrupt=0%" \
mlabbe/iperf3 \
iperf3 -R -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
Reverse mode, remote host 172.20.35.249 is sending
[ 5] local 172.26.0.2 port 50996 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.12 GBytes 9.64 Gbits/sec
[ 5] 1.00-2.00 sec 1.17 GBytes 10.1 Gbits/sec
[ 5] 2.00-3.00 sec 811 MBytes 6.81 Gbits/sec
[ 5] 3.00-4.00 sec 489 KBytes 4.00 Mbits/sec
[ 5] 4.00-5.00 sec 466 KBytes 3.81 Mbits/sec
[ 5] 5.00-6.00 sec 469 KBytes 3.84 Mbits/sec
[ 5] 6.00-7.00 sec 479 KBytes 3.93 Mbits/sec
[ 5] 7.00-8.00 sec 502 KBytes 4.11 Mbits/sec
[ 5] 8.00-9.00 sec 443 KBytes 3.63 Mbits/sec
[ 5] 9.00-10.00 sec 479 KBytes 3.93 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.05 sec 3.09 GBytes 2.65 Gbits/sec 2972 sender
[ 5] 0.00-10.00 sec 3.09 GBytes 2.66 Gbits/sec receiver
- With the limit set to 1mbps (which actually means 1 MB/s) I get around 930 KB/s:
$ docker run -it \
--net test-net \
--label "com.docker-tc.enabled=1" \
--label "com.docker-tc.limit=1mbps" \
--label "com.docker-tc.delay=1ms" \
--label "com.docker-tc.loss=0%" \
--label "com.docker-tc.duplicate=0%" \
--label "com.docker-tc.corrupt=0%" \
mlabbe/iperf3 \
iperf3 -R -c 172.20.35.249
Connecting to host 172.20.35.249, port 5201
Reverse mode, remote host 172.20.35.249 is sending
[ 5] local 172.26.0.2 port 51000 connected to 172.20.35.249 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.14 GBytes 9.77 Gbits/sec
[ 5] 1.00-2.00 sec 1008 MBytes 8.45 Gbits/sec
[ 5] 2.00-3.00 sec 191 MBytes 1.60 Gbits/sec
[ 5] 3.00-4.00 sec 935 KBytes 7.66 Mbits/sec
[ 5] 4.00-5.00 sec 934 KBytes 7.65 Mbits/sec
[ 5] 5.00-6.00 sec 976 KBytes 7.99 Mbits/sec
[ 5] 6.00-7.00 sec 931 KBytes 7.63 Mbits/sec
[ 5] 7.00-8.00 sec 931 KBytes 7.63 Mbits/sec
[ 5] 8.00-9.00 sec 934 KBytes 7.65 Mbits/sec
[ 5] 9.00-10.00 sec 931 KBytes 7.63 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.04 sec 2.32 GBytes 1.98 Gbits/sec 2245 sender
[ 5] 0.00-10.00 sec 2.31 GBytes 1.99 Gbits/sec receiver
From these tests I conclude two things: first, the rule takes about 2.5 seconds to apply (which is not really a problem in my case), and second, the rate is not applied correctly to outgoing traffic.
Taking a quick look at the code, I see that the command executed is qdisc_tbf "$IF" rate "$LIMIT", so the problem may not be in docker-tc itself. Perhaps a parameter or something is missing. I'm a newbie in terms of tc, but I'll try to investigate it.
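For comparison, a hand-rolled sketch (not docker-tc's actual code) of what that command expands to: tbf normally requires a burst and a latency (or limit) alongside the rate, so wherever docker-tc only passes rate, defaults must be filling those in. Shaping a container's host-side veth interface directly would look roughly like this (vethXXXX is a placeholder name; requires root):

```shell
# Hypothetical manual equivalent of: qdisc_tbf "$IF" rate "$LIMIT"
IF=vethXXXX   # host-side veth of the container (placeholder)

# tbf wants a burst (token bucket size) and a latency/limit besides rate.
# A burst that is too large lets early traffic through unshaped.
tc qdisc add dev "$IF" root tbf rate 1mbps burst 32kbit latency 400ms

# Inspect what was installed:
tc qdisc show dev "$IF"

# Remove it again:
tc qdisc del dev "$IF" root
```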
Hello, @xoancosmed
I saw your benchmarks, and even though I don't have advanced knowledge of tc, I noticed two things:
- With iperf you are benchmarking TCP throughput, while tc sets the bandwidth at the IP layer (layer 3) at most, and possibly lower (layer 2). TCP and UDP run on layer 4, so the measured rate will not match the configured rate exactly.
- In your first post you had 10% loss, 5% duplication and 1% corruption; because of that, those speeds are lower than in the upload tests.
I have no idea why you are seeing 2.5 seconds of high bandwidth; it might be because of how the qdisc (queue) handles packets, or it might be TCP buffering. There is also a burst option which might help with that kickstart issue.
My suggestion is to use either UDP or raw IP to measure the raw bandwidth.
Ref: https://www.tldp.org/HOWTO/html_single/Traffic-Control-HOWTO/#o-packets
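Following that suggestion, a UDP measurement would look something like this (the -b value is an arbitrary offered rate chosen well above the shaped limit, so that the shaper rather than iperf3 is the bottleneck):

```shell
# UDP test: -u selects UDP, -b sets the offered rate. iperf3 defaults to
# only 1 Mbit/s for UDP, so -b must be raised to actually exercise the shaper.
iperf3 -u -b 100M -c 172.20.35.249
```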
Hi @pvtmert
Thanks for your reply. What you said makes sense; I'll try it with iperf in UDP mode. However, the biggest problem for me is those 2.5 seconds where there seems to be no limit. I'll have to investigate this further.
Hi @xoancosmed
I stumbled on this issue. The reason it does not work in both directions is that tc only does egress shaping. If you want to limit both directions, you have to apply it on both sides, each shaping its own egress (traffic leaving the container): a limit on the server container acts on the download direction, and a limit on the client container acts on the upload direction.
If you do not use a client container, you have to apply the shaping on the interface the client uses; this, however, will affect all clients connected to that interface.
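Sketched with the labels from this thread (the server IP is a placeholder), shaping both directions would mean labelling both endpoints:

```shell
# Server container: its egress limit shapes the client's DOWNLOAD direction.
docker run -d --net test-net \
  --label "com.docker-tc.enabled=1" \
  --label "com.docker-tc.limit=1mbps" \
  mlabbe/iperf3 iperf3 -s

# Client container: its egress limit shapes the UPLOAD direction.
docker run -it --net test-net \
  --label "com.docker-tc.enabled=1" \
  --label "com.docker-tc.limit=1mbps" \
  mlabbe/iperf3 iperf3 -c <server-ip>
```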
The reason for the 2.5 s delay may be that the docker-tc container polls the Docker API for changes, so it takes some time until the rules are applied. It might be better to test against an already running container.
I hope this helps