tylertreat / comcast

Simulating shitty network connections so you can build better systems.

Setting latency influences bandwidth heavily

JelteF opened this issue

I'm running two VMs that have to communicate with each other. When I enable latency with Comcast, the bandwidth drops drastically: with nothing enabled, iperf measures about 1.1 Gbit/s, but with 200 ms of latency I only get about 100 Mbit/s.
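For reference, roughly how a measurement like this would be taken (a sketch only; the Comcast flag names are assumptions based on its usual CLI and were not copied from this report):

# on the server VM (192.168.56.111, the address that appears in the rules below)
iperf -s

# on the client VM: baseline run, then a run with latency applied
iperf -c 192.168.56.111
sudo comcast --device=eth1 --latency=200 --target-addr=192.168.56.111   # assumed flag names
iperf -c 192.168.56.111
sudo comcast --stop                                                     # assumed flag name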

These are the rules that it outputs:

sudo tc qdisc add dev eth1 handle 10: root htb
sudo tc class add dev eth1 parent 10: classid 10:1 htb rate 1000000kbit
sudo tc class add dev eth1 parent 10:1 classid 10:10 htb rate 1000000kbit
sudo tc qdisc add dev eth1 parent 10:10 handle 100: netem delay 200ms
sudo iptables -A POSTROUTING -t mangle -j CLASSIFY --set-class 10:10 -p tcp --match multiport --dports 1:22,23:65535 -d 192.168.56.111
sudo iptables -A POSTROUTING -t mangle -j CLASSIFY --set-class 10:10 -p udp --match multiport --dports 1:22,23:65535 -d 192.168.56.111
sudo iptables -A POSTROUTING -t mangle -j CLASSIFY --set-class 10:10 -p icmp -d 192.168.56.111
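To see which qdisc and class the traffic actually hits, the standard tc statistics commands can be used (a general sketch, not output from this issue):

sudo tc -s qdisc show dev eth1
sudo tc -s class show dev eth1

The drop counter on the netem qdisc is the interesting part: if it climbs during an iperf run, packets are being discarded by netem's internal queue rather than merely delayed.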

I think it's a matter of how the kernel handles arbitrary latency: perhaps a buffer fills up, which caps the achievable bandwidth. I believe someone pointed out that this would be an issue, and it's not something within the purview of this project, since all we're doing is writing glorified configuration files. What actually happens to the packets is in the hands of the kernel subsystems.
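That buffer hypothesis is consistent with how netem behaves: delayed packets sit in its queue, which by default is limited to 1000 packets. At 200 ms of delay that allows roughly 1000 × 1500 B / 0.2 s ≈ 60 Mbit/s before netem starts dropping, the same ballpark as the ~100 Mbit/s observed (TCP windowing at a 200 ms RTT pushes in the same direction, since the bandwidth-delay product at 1 Gbit/s is about 25 MB). One way to test it, outside Comcast itself, is to raise the limit on the generated netem qdisc; the value below is just an illustrative guess:

sudo tc qdisc change dev eth1 parent 10:10 handle 100: netem delay 200ms limit 20000

If throughput recovers with a larger limit, the ceiling really is the netem queue rather than anything Comcast does.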