I've started experimenting with network port bonding, using an unmanaged switch and two Dell PowerEdge servers: one R710 and one R720.

The switch is a Zyxel MG108, an 8-port 2.5 Gbps switch, and I was curious whether it could work at all with bonded network ports on the servers (each server has 4x 1 Gbps network ports).

The servers run Proxmox 8, which has a GUI for setting up network bonding.

After setting up the bonding on one server, I needed a way to check network throughput.

Enter iperf3.

 

Assumptions

Two Proxmox 8.2.4 servers.

Cyndane5
192.168.0.4

Dragonborn3
192.168.0.5

Below is the original bridged network setup; it looks the same on both servers.

root@cyndane5:/etc/network# cat /etc/network/interfaces.2024-07-19
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno3 inet manual

iface eno2 inet manual

iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.4/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

root@cyndane5:/etc/network#

 

Below is the bonded network config from Cyndane5. The config was done via the Proxmox web GUI.

root@cyndane5:/etc/network# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno3
iface eno3 inet manual

iface eno2 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno3
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.4/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

root@cyndane5:/etc/network#
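Before benchmarking, it's worth confirming that the kernel actually brought the bond up. The authoritative place to look is /proc/net/bonding/bond0 on the host itself. The snippet below works on a trimmed sample of that file (abbreviated from a real 802.3ad bond; the exact text on your host will differ) and greps out the fields that matter.

```shell
# Sample of /proc/net/bonding/bond0 (abbreviated); on the host itself you
# would simply run:  cat /proc/net/bonding/bond0
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eno1
Slave Interface: eno3'

# Count the interfaces the kernel has actually enslaved to the bond:
echo "$sample" | grep -c 'Slave Interface'
```

If the count doesn't match the number of bond-slaves in /etc/network/interfaces, one of the links never came up.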

 

iperf3

Installation

First, install iperf3 on both servers.

root@dragonborn3:~# apt install iperf3
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
libiperf0
The following NEW packages will be installed:
iperf3 libiperf0
...

 

Confirmation

Confirm the installation.

root@dragonborn3:~# iperf3 -v
iperf 3.12 (cJSON 1.7.15)
Linux dragonborn3 6.8.8-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-3 (2024-07-16T16:16Z) x86_64
Optional features available: CPU affinity setting, IPv6 flow label, SCTP, TCP congestion algorithm setting, sendfile / zerocopy, socket pacing, authentication, bind to device, support IPv4 don't fragment

 

Testing

On Dragonborn3, run the command below. This server acts as the listening server that the client-side iperf3 will connect to.

root@dragonborn3:~# iperf3 -s -B 192.168.0.5
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------

 

On Cyndane5, run the command below. Cyndane5 acts as the client and connects to Dragonborn3.

root@cyndane5:/etc/network# iperf3 -c 192.168.0.5
Connecting to host 192.168.0.5, port 5201
[ 5] local 192.168.0.4 port 50028 connected to 192.168.0.5 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   115 MBytes   961 Mbits/sec    0    426 KBytes
[  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec    0    464 KBytes
[  5]   2.00-3.00   sec   113 MBytes   948 Mbits/sec    0    533 KBytes
[  5]   3.00-4.00   sec   112 MBytes   942 Mbits/sec    0    559 KBytes
[  5]   4.00-5.00   sec   111 MBytes   935 Mbits/sec    0    559 KBytes
[  5]   5.00-6.00   sec   113 MBytes   946 Mbits/sec    0    559 KBytes
[  5]   6.00-7.00   sec   112 MBytes   939 Mbits/sec    0    559 KBytes
[  5]   7.00-8.00   sec   112 MBytes   936 Mbits/sec    0    559 KBytes
[  5]   8.00-9.00   sec   112 MBytes   942 Mbits/sec    0    559 KBytes
[  5]   9.00-10.00  sec   113 MBytes   944 Mbits/sec    0    559 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0   sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec        receiver

iperf Done.
root@cyndane5:/etc/network#

 

And hey presto, we get some numbers on the throughput!

Please note that I only got about 1 Gbps throughput: only Cyndane5 is using a bonded network interface, so Dragonborn3's single 1 Gbps port caps the test.
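For scripting these tests, iperf3 can emit machine-readable JSON with -J, and the summary bitrate is easy to pull out of it. Below is a sketch using a trimmed sample of that JSON (the real -J output is much larger, but end.sum_sent.bits_per_second is the field iperf3 actually uses for the sender summary).

```shell
# Sketch: extract the sender bitrate from iperf3 JSON output.
# 'json' is a trimmed sample here; on a live run you would capture it with:
#   iperf3 -c 192.168.0.5 -J > result.json
json='{"end":{"sum_sent":{"bits_per_second":943000000.0}}}'

bps=$(echo "$json" | python3 -c 'import sys, json; print(int(json.load(sys.stdin)["end"]["sum_sent"]["bits_per_second"]))')
echo "$((bps / 1000000)) Mbits/sec"
```

Handy for logging repeated runs while changing bond modes.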

 

 

Wrapping up

The next phase will include finding a second network cable, bonding the network ports on Dragonborn3, and running the test again.
I am very curious as to what the result will be!
The likely result is still 1 Gbps, because the switch isn't that advanced: it's unmanaged to start with, and the specification does not mention 802.3ad/LACP capability.
Still, as a lab experiment I guess it's worth doing, if for nothing else than curiosity.

 

Follow-up

I was too curious to wait. I set up the bonding on Dragonborn3 as well, enabled LACP, and tested the performance.
No go; it didn't help at all. In hindsight that makes sense: 802.3ad requires the switch to take part in the LACP negotiation, and an unmanaged switch simply ignores the LACPDUs, so no aggregate ever forms. If anything, both servers now felt sluggish on the CLI!

So I changed the bond mode to round-robin (balance-rr), and behold: speeds are now upwards of 1.4 Gbps, up from the earlier ~940 Mbps. Better than I had hoped for, actually! Unlike 802.3ad, balance-rr needs no cooperation from the switch; it simply alternates outgoing packets across the slave interfaces.

root@dragonborn3:~# iperf3 -c 192.168.0.4
Connecting to host 192.168.0.4, port 5201
[ 5] local 192.168.0.5 port 48460 connected to 192.168.0.4 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   154 MBytes  1.29 Gbits/sec  511    103 KBytes
[  5]   1.00-2.00   sec   172 MBytes  1.44 Gbits/sec  483    103 KBytes
[  5]   2.00-3.00   sec   163 MBytes  1.37 Gbits/sec  429   58.0 KBytes
[  5]   3.00-4.00   sec   167 MBytes  1.40 Gbits/sec  375   65.0 KBytes
[  5]   4.00-5.00   sec   165 MBytes  1.39 Gbits/sec  329   69.3 KBytes
[  5]   5.00-6.00   sec   170 MBytes  1.42 Gbits/sec  579    110 KBytes
[  5]   6.00-7.00   sec   174 MBytes  1.46 Gbits/sec  647   67.9 KBytes
[  5]   7.00-8.00   sec   167 MBytes  1.40 Gbits/sec  634   62.2 KBytes
[  5]   8.00-9.00   sec   171 MBytes  1.43 Gbits/sec  703   59.4 KBytes
[  5]   9.00-10.00  sec   175 MBytes  1.47 Gbits/sec  581   58.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.64 GBytes  1.41 Gbits/sec  5271   sender
[  5]   0.00-10.00  sec  1.64 GBytes  1.41 Gbits/sec         receiver

iperf Done.
root@dragonborn3:~#

 

One thing worth noting in the output above: the Retr column went from 0 to over 5,000. Round-robin sprays consecutive packets across both links, so they arrive out of order, and TCP interprets that as loss; the extra throughput comes at the cost of retransmissions.

I'll leave the bond type as is, and see about a LACP-capable switch later.
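For reference, the round-robin variant only changes the bond-mode line in /etc/network/interfaces (a sketch based on the 802.3ad config shown earlier; the Proxmox GUI writes the equivalent):

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno3
        bond-miimon 100
        bond-mode balance-rr
```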

 

 

Sources

https://www.zyxel.com/global/en/products/switch/5-8-port-2-5gbe-unmanaged-switch-mg100-series
https://chrisjhart.com/Install-iperf3-on-Ubuntu-22.04/
https://forum.proxmox.com/threads/bond-configuration.97864/
https://pve.proxmox.com/wiki/Network_Configuration
https://manpages.org/iperf3
https://iperf.fr/iperf-doc.php

 

 

 

 

 

 

 

 
