This is a really nice feature: you can run iperf3 directly on a FortiGate to speed-test your network connections. It’s basically an iperf3 client. Using some public iperf servers you can test your Internet bandwidth; using some internal servers you can test your own routed/switched networks, VPNs, etc. However, the maximum throughput of the test is CPU dependent, so please be careful when interpreting the results. Here we go:
I am using a FortiGate FG-90D with FortiOS v6.0.10. I don’t know whether this iperf implementation is present in all FortiOS releases on all FortiGate models. On mine, it is. ;) Here is more information about iperf3.
You have to set at least the iperf client and server interface on the FortiGate in order to run it. The server interface is NOT used when testing the bandwidth to an external server; however, you have to specify it anyway, otherwise you’ll get an error. (You can also test internal paths within the FortiGate, which is why both the client and the server interface must be set. However, I don’t know whether these tests have any value.) To test your ISP connection, you have to find a public iperf server, e.g., here: https://iperf.cc/. Note that the FortiGate implementation of iperf accepts only IP addresses, not hostnames.
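By the way, if you want to test an internal path or a VPN tunnel, you can simply run your own iperf3 server on a host at the far end. A minimal sketch, assuming a Linux machine with iperf3 installed (the 192.168.1.50 address is just a placeholder for your own server):

linux$ iperf3 -s -p 5200

fg2 # diagnose traffictest port 5200
fg2 # diagnose traffictest run -c 192.168.1.50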
Test, Test, Test
A basic run looks like this, using port 5200 (in my example) and testing in both directions:
diagnose traffictest client-intf wan1
diagnose traffictest server-intf wan1
diagnose traffictest port 5200
diagnose traffictest run -c 213.209.106.95
diagnose traffictest run -R -c 213.209.106.95
That is:
fg2 # diagnose traffictest client-intf wan1
client-intf: wan1

fg2 # diagnose traffictest server-intf wan1
server-intf: wan1

fg2 # diagnose traffictest port 5200
port: 5200

fg2 # diagnose traffictest show
server-intf: wan1
client-intf: wan1
port: 5200
proto: TCP

fg2 # diagnose traffictest run -c 213.209.106.95
Connecting to host 213.209.106.95, port 5200
[  8] local 194.247.4.10 port 1489 connected to 213.209.106.95 port 5200
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  8]   0.00-1.03   sec  15.6 MBytes   127 Mbits/sec    0    359 KBytes
[  8]   1.03-2.00   sec  16.2 MBytes   140 Mbits/sec    0    410 KBytes
[  8]   2.00-3.05   sec  18.8 MBytes   150 Mbits/sec    0    385 KBytes
[  8]   3.05-4.01   sec  16.2 MBytes   143 Mbits/sec    0    392 KBytes
[  8]   4.01-5.06   sec  18.8 MBytes   149 Mbits/sec    0    380 KBytes
[  8]   5.06-6.04   sec  16.2 MBytes   140 Mbits/sec    0    389 KBytes
[  8]   6.04-7.04   sec  17.5 MBytes   146 Mbits/sec    0    387 KBytes
[  8]   7.04-8.05   sec  16.2 MBytes   135 Mbits/sec    0    404 KBytes
[  8]   8.05-9.06   sec  17.5 MBytes   145 Mbits/sec    0    386 KBytes
[  8]   9.06-10.06  sec  17.5 MBytes   148 Mbits/sec    0    386 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  8]   0.00-10.06  sec   171 MBytes   142 Mbits/sec    0         sender
[  8]   0.00-10.06  sec   171 MBytes   142 Mbits/sec              receiver

iperf Done.
iperf3: interrupt - the server has terminated

fg2 # diagnose traffictest run -R -c 213.209.106.95
Connecting to host 213.209.106.95, port 5200
Reverse mode, remote host 213.209.106.95 is sending
[  8] local 194.247.4.10 port 1491 connected to 213.209.106.95 port 5200
[ ID] Interval           Transfer     Bandwidth
[  8]   0.00-1.00   sec  8.02 MBytes  67.0 Mbits/sec
[  8]   1.00-2.00   sec  8.13 MBytes  68.4 Mbits/sec
[  8]   2.00-3.00   sec  8.27 MBytes  69.5 Mbits/sec
[  8]   3.00-4.00   sec  8.19 MBytes  68.7 Mbits/sec
[  8]   4.00-5.00   sec  8.51 MBytes  71.2 Mbits/sec
[  8]   5.00-6.00   sec  8.46 MBytes  71.1 Mbits/sec
[  8]   6.00-7.00   sec  8.08 MBytes  67.7 Mbits/sec
[  8]   7.00-8.02   sec  8.32 MBytes  68.7 Mbits/sec
[  8]   8.02-9.03   sec  8.32 MBytes  69.1 Mbits/sec
[  8]   9.03-10.01  sec  7.96 MBytes  68.1 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  8]   0.00-10.01  sec  83.7 MBytes  70.1 Mbits/sec    0         sender
[  8]   0.00-10.01  sec  82.4 MBytes  69.0 Mbits/sec              receiver

iperf Done.
iperf3: interrupt - the server has terminated
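Side note: the show output lists “proto: TCP”, and you can presumably switch to UDP via the proto keyword as well. I have not verified the exact values on every FortiOS build, so treat the following as an assumption and check the inline help (a trailing “?”) on your own box first:

fg2 # diagnose traffictest proto 1    <- assumption: 1 = UDP, 0 = TCP
fg2 # diagnose traffictest run -c 213.209.106.95 -b 100M
fg2 # diagnose traffictest proto 0    <- back to TCP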
Other useful commands are:
diagnose traffictest show
diagnose traffictest run -v
diagnose traffictest run -h
The first one shows the current configuration on the FortiGate, while the others reveal some more details about iperf itself:
fg2 # diagnose traffictest show
server-intf: wan1
client-intf: wan1
port: 5200
proto: TCP

fg2 # diagnose traffictest run -v
iperf 3.0.9

fg2 # diagnose traffictest run -h
  -f, --format    [kmgKMG]  format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #         seconds between periodic bandwidth reports
  -F, --file name           xmit/recv the specified file
  -A, --affinity n/n,m      set CPU affinity
  -V, --verbose             more detailed output
  -J, --json                output in JSON format
  -d, --debug               emit debugging output
  -v, --version             show version information and quit
  -h, --help                show this message and quit
  -b, --bandwidth #[KMG][/#] target bandwidth in bits/sec (0 for unlimited)
                            (default 1 Mbit/sec for UDP, unlimited for TCP)
                            (optional slash and packet count for burst mode)
  -t, --time      #         time in seconds to transmit for (default 10 secs)
  -n, --bytes     #[KMG]    number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]   number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len       #[KMG]    length of buffer to read or write
                            (default 128 KB for TCP, 8 KB for UDP)
  -P, --parallel  #         number of parallel client streams to run
  -R, --reverse             run in reverse mode (server sends, client receives)
  -w, --window    #[KMG]    TCP window size (socket buffer size)
  -C, --linux-congestion <algo>  set TCP congestion control algorithm (Linux only)
  -M, --set-mss   #         set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay             set TCP no delay, disabling Nagle's Algorithm
  -4, --version4            only use IPv4
  -6, --version6            only use IPv6
  -S, --tos N               set the IP 'type of service'
  -L, --flowlabel N         set the IPv6 flow label (only supported on Linux)
  -Z, --zerocopy            use a 'zero copy' method of sending data
  -O, --omit N              omit the first n seconds
  -T, --title str           prefix every output line with this string
  --get-server-output       get results from server

[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-
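Based on this help output, you can refine the tests a bit. For example, a 30-second run with four parallel streams, omitting the first three seconds (TCP slow start) and reporting in Mbits, would look like this (a sketch on my part, but all of these flags are listed above):

diagnose traffictest run -c 213.209.106.95 -t 30 -P 4 -O 3 -f m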
Caveats, Caveats, Caveats
Unfortunately, here are some (major!) caveats: First, the iperf implementation on the FortiGate is heavily CPU bound. My FG-90D has a 1 Gbps uplink to the Internet, but running iperf3 on the Forti revealed only about 150 Mbps (see above), while the CPU usage immediately peaked at 100 %. Ouch!
Testing my ISP speed *through* the FortiGate from a Linux system behind it, iperf3 showed about 900 Mbps, while the CPU usage on the Forti stayed at about 3-5 %. The bandwidth widget on the Forti nicely visualized this difference during my tests.
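For reference, this through-the-box measurement is nothing FortiGate specific but plain iperf3 on the Linux client. A minimal sketch, assuming iperf3 is installed and using the same public server and port as above:

linux$ iperf3 -c 213.209.106.95 -p 5200
linux$ iperf3 -R -c 213.209.106.95 -p 5200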
Certainly this behavior differs on other FortiGate hardware. To be fair, my FG-90D is neither the newest nor the biggest model. I have tested the traffictest feature on a FG-501E with FortiOS v6.2.5, which was able to receive 900 Mbps while only one out of eight cores peaked at about 25 %.
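By the way, to keep an eye on the CPU while a test is running, a second CLI session with the standard FortiOS commands should do the trick, e.g.:

get system performance status    <- one-shot CPU/memory/session summary
diagnose sys top 2               <- top-like process view, refreshing every 2 seconds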
Second caveat: it does not work with IPv6, but only with legacy IP. :(
fg2 # diagnose traffictest run -c 2a02:2028:ff00::f9:2
iperf3: error - unable to connect to server: Invalid argument
iperf3: interrupt - the server has terminated

fg2 # diagnose traffictest run -6 -c 2a02:2028:ff00::f9:2
iperf3: error - unable to connect to server:
iperf3: interrupt - the server has terminated
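Until Fortinet fixes this, the only workaround I can think of is to run the IPv6 test from a host behind the FortiGate with a normal iperf3 client. A minimal sketch on Linux (assuming the server at this address listens on the default iperf3 port 5201; adjust with -p otherwise):

linux$ iperf3 -6 -c 2a02:2028:ff00::f9:2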
Conclusion
Uh, that’s hard. In theory, this is a cool hidden feature. If you keep track of your CPU usage, you can probably use it to get realistic results, especially on links with small bandwidth.
However, if you really want to test your big ISP connection, you shouldn’t rely on it. Or to put it differently: if you’re getting the expected results with iperf on the Forti, you’re ok. If not, you don’t know why. ;(
PS: Happy Birthday Nicolai!
Photo by Harley-Davidson on Unsplash.