tc is a Linux command (part of iproute2) for configuring qdiscs (queueing disciplines), which queue and shape packets before the kernel hands them to the network interface. The Kubernetes CNI bandwidth plugin is also built on tc, and can limit a pod's bandwidth with the following annotations.
annotations:
  kubernetes.io/ingress-bandwidth: 1M
  kubernetes.io/egress-bandwidth: 1M
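For context, a complete Pod manifest using these annotations might look like the following sketch. The pod name and image are placeholders, and the cluster's CNI configuration must actually chain the bandwidth plugin for the annotations to take effect:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod        # hypothetical name
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
```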
I tried running it on an Amazon Linux 2023 t2.micro EC2 instance.
$ ip link
...
2: enX0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
$ sudo yum -y install iproute-tc
$ tc -json qdisc show dev enX0 | jq
[
  {
    "kind": "fq_codel",
    "handle": "0:",
    "root": true,
    "refcnt": 2,
    "options": {
      "limit": 10240,
      "flows": 1024,
      "quantum": 9015,
      "target": 4999,
      "interval": 99999,
      "memory_limit": 33554432,
      "ecn": true,
      "drop_batch": 64
    }
  }
]
fq_codel is an algorithm widely used as the default in systemd-based OSs and in router firmware; it keeps the minimum queue delay close to the target value in options even when traffic bursts. limit is the maximum number of queued packets, and memory_limit is the maximum number of bytes; packets arriving beyond these limits are dropped. handle is the qdisc identifier used when specifying a parent.
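The units in this JSON are easy to misread. The following is a sketch of how to decode them, assuming (per the tc-fq_codel man page) that target and interval are in microseconds, and inferring from the values above that quantum defaults to the interface MTU plus the 14-byte Ethernet header:

```shell
# Decode the fq_codel values reported above.
mtu=9001                  # MTU of enX0 on this instance
quantum=$((mtu + 14))     # default quantum = MTU + 14-byte Ethernet header
echo "quantum=$quantum"   # prints quantum=9015, matching the JSON above
target_us=4999
interval_us=99999
# target ~5 ms and interval ~100 ms are the fq_codel defaults
echo "target=${target_us}us interval=${interval_us}us"
```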
Next, let's add delay with netem (Network Emulator) and limit bandwidth with tbf (Token Bucket Filter).
$ sudo tc qdisc add dev enX0 root handle 1: netem delay 100ms
$ sudo tc qdisc add dev enX0 parent 1: tbf rate 50mbit burst 512kbit latency 400ms
$ tc -json qdisc show dev enX0 | jq
[
  {
    "kind": "netem",
    "handle": "1:",
    "root": true,
    "refcnt": 2,
    "options": {
      "limit": 1000,
      "delay": {
        "delay": 0.1,
        "jitter": 0,
        "correlation": 0
      },
      "ecn": false,
      "gap": 0
    }
  },
  {
    "kind": "tbf",
    "handle": "800b:",
    "parent": "1:",
    "options": {
      "rate": 6250000,
      "burst": 65531,
      "lat": 400000
    }
  }
]
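The tbf numbers look different from what was passed on the command line because tc -json reports rate in bytes per second, burst in bytes, and lat in microseconds. A sketch of the conversions (the inference here is that tc treats rates as decimal units and sizes as binary units; the small gap between 65536 and the reported 65531 appears to come from the kernel's rate-table rounding):

```shell
# Convert the tbf command-line arguments into the units tc -json reports.
rate_bits=50000000                # "rate 50mbit" (rates use decimal units)
rate_bytes=$((rate_bits / 8))
echo "rate=$rate_bytes"           # prints rate=6250000, as in the JSON
burst_bits=$((512 * 1024))        # "burst 512kbit" (sizes use binary units)
burst_bytes=$((burst_bits / 8))
echo "burst=$burst_bytes"         # 65536; tc reports 65531 after rounding
lat_us=$((400 * 1000))            # "latency 400ms" reported in microseconds
echo "lat=$lat_us"                # prints lat=400000
```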
By sending traffic to another instance with iperf and ping, you can confirm that the bandwidth limit and the delay are in effect. When the tbf burst was too small, the transfer rate dropped to 0 partway through, so I increased it.
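A likely explanation for the stall is the rule of thumb from the tc-tbf man page: burst must be at least rate / HZ, or the bucket cannot refill fast enough between timer ticks. A quick calculation for this setup (HZ=250 is an assumed kernel tick rate, not something verified on this instance):

```shell
# Minimum tbf burst: the bucket must hold at least one timer tick's worth
# of tokens, i.e. burst >= rate / HZ (rate in bytes per second).
rate_bytes=$((50000000 / 8))      # 50mbit in bytes per second
hz=250                            # assumed kernel timer frequency
min_burst=$((rate_bytes / hz))
echo "min_burst=$min_burst bytes" # 25000 bytes; burst 512kbit (65536 B) clears it
```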
# iperf3 -s # another instance
$ sudo yum install -y iperf3
$ iperf3 -c 172.31.25.78
Connecting to host 172.31.25.78, port 5201
[ 5] local 172.31.16.172 port 51834 connected to 172.31.25.78 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 5.50 MBytes 46.1 Mbits/sec 0 1.11 MBytes
[ 5] 1.00-2.00 sec 5.38 MBytes 45.1 Mbits/sec 0 1.41 MBytes
[ 5] 2.00-3.00 sec 6.75 MBytes 56.6 Mbits/sec 0 1.70 MBytes
[ 5] 3.00-4.00 sec 5.38 MBytes 45.1 Mbits/sec 0 2.00 MBytes
[ 5] 4.00-5.00 sec 6.62 MBytes 55.6 Mbits/sec 0 2.30 MBytes
[ 5] 5.00-6.00 sec 5.50 MBytes 46.2 Mbits/sec 0 2.59 MBytes
[ 5] 6.00-7.00 sec 5.25 MBytes 44.0 Mbits/sec 0 2.88 MBytes
[ 5] 7.00-8.00 sec 6.62 MBytes 55.6 Mbits/sec 0 3.16 MBytes
[ 5] 8.00-9.00 sec 5.25 MBytes 44.0 Mbits/sec 0 3.16 MBytes
[ 5] 9.00-10.00 sec 6.50 MBytes 54.5 Mbits/sec 0 3.16 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 58.8 MBytes 49.3 Mbits/sec 0 sender
[ 5] 0.00-10.50 sec 58.5 MBytes 46.7 Mbits/sec receiver
$ ping 172.31.25.78
PING 172.31.25.78 (172.31.25.78) 56(84) bytes of data.
64 bytes from 172.31.25.78: icmp_seq=1 ttl=127 time=101 ms
64 bytes from 172.31.25.78: icmp_seq=2 ttl=127 time=100 ms
64 bytes from 172.31.25.78: icmp_seq=3 ttl=127 time=100 ms
Deleting the qdisc restores the original behavior.
$ sudo tc qdisc del dev enX0 root
$ ping 172.31.25.78
PING 172.31.25.78 (172.31.25.78) 56(84) bytes of data.
64 bytes from 172.31.25.78: icmp_seq=1 ttl=127 time=0.461 ms
64 bytes from 172.31.25.78: icmp_seq=2 ttl=127 time=0.476 ms
64 bytes from 172.31.25.78: icmp_seq=3 ttl=127 time=0.626 ms
References
I tried controlling network bandwidth and delay with the tc command - Qiita
How to Use the Linux Traffic Control
Chapter 32. Linux traffic control Red Hat Enterprise Linux 9 | Red Hat Customer Portal