Limit incoming and outgoing bandwidth and latency in Linux

Solution 1:

I finally settled on just setting the outgoing bandwidth/latency on the server, and then doing the same on the client, which effectively achieves the same result.

These are the commands I ran on the server and client respectively to reach my goals:

Server: 4 Mbit 50 ms

    tc qdisc add dev eth0 handle 1: root htb default 11
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1000Mbps
    tc class add dev eth0 parent 1:1 classid 1:11 htb rate 4Mbit
    tc qdisc add dev eth0 parent 1:11 handle 10: netem delay 50ms
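
The first command attaches an HTB root qdisc whose default class is 1:11, the next two create a parent class at a rate far above the link speed (note that tc reads "Mbps" as megabytes per second) and the 4 Mbit leaf class under it, and the last one hangs a netem qdisc off that leaf to add the 50 ms delay. To check that the chain is actually in place (assuming eth0 really is the interface you shaped), the usual tc show commands with statistics will do:

    # list qdiscs and classes on the interface, with packet/byte counters
    tc -s qdisc show dev eth0
    tc -s class show dev eth0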

Client: 512 kbit 50 ms

    tc qdisc add dev vmnet1 handle 1: root htb default 11
    tc class add dev vmnet1 parent 1: classid 1:1 htb rate 1000Mbps
    tc class add dev vmnet1 parent 1:1 classid 1:11 htb rate 512kbit
    tc qdisc add dev vmnet1 parent 1:11 handle 10: netem delay 50ms
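
To undo either setup, deleting the root qdisc removes the whole chain (HTB root, classes and the attached netem) in one go:

    # remove all shaping from the server- and client-side interfaces
    tc qdisc del dev eth0 root
    tc qdisc del dev vmnet1 root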

Solution 2:

Some 80-90 kByte/s is about what you can expect from

    tc filter add ... police rate 1.0mbit ...

You are asking for incoming data to be thrown away once it arrives faster than 1 Mbit/s, which is about 125 kByte/s. The remote server sees those drops, interprets them as congestion and backs off to considerably less than that (maybe half, I'm not sure). After that all packets get through again, so the remote end slowly picks up speed until 125 kByte/s is reached once more. You end up with an average throughput considerably below 125 kByte/s, which is typical of this kind of ingress policing.
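
For reference, a complete ingress policer of the kind that fragment hints at usually looks something like the sketch below; the interface name, the burst size and the match-everything u32 filter are placeholders to adapt to your setup:

    # attach the special ingress qdisc, then police all IP traffic arriving on eth0
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 \
        match u32 0 0 \
        police rate 1mbit burst 10k drop flowid :1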

I'm a bit surprised that the speed reached 2 MByte/s with the ingress policing filter already in place. Where did you measure it: at the downstream client (the program itself) or at some upstream router? Or did you perhaps start the connection first and only put the ingress policing filter in place afterwards?