How to best benchmark / find the bottleneck of my NAS on 2x SFP+? disk, iperf?

I would like to run an experiment: bond my new NAS's dual SFP+ uplinks and see how long I can sustain 20Gbit with ZFS + an NVMe write cache enabled.

In the past I have benchmarked local hard disk ‘read’ speeds with Linux pv, but I haven’t done random write tests over the network… what do you guys suggest, and how would you approach this experiment?
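
Before touching the bond, it helps to baseline the two halves separately: raw network throughput with iperf3, and raw pool write speed with fio run locally on the NAS. A rough sketch, assuming Linux on both ends with iperf3 and fio installed; the hostname and dataset path are placeholders:

```bash
# Network-only baseline (no disks involved).
# On the NAS:
iperf3 -s

# On the client (several streams, 60 seconds):
iperf3 -c nas.example.lan -P 4 -t 60

# Disk/ZFS-only baseline, run directly on the NAS so the network is out of
# the picture. Random-write workload against the pool; path is a placeholder.
fio --name=randwrite --directory=/tank/bench --rw=randwrite \
    --bs=128k --size=10G --numjobs=4 --iodepth=16 --ioengine=libaio \
    --group_reporting --time_based --runtime=60
```

Whichever of the two numbers is lower is roughly the ceiling you'll hit once you test writes over the network.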

Info:

  • NAS/server has 2x SFP+ NIC
  • Client has 2x SFP+ NIC
  • On the same L2 network (USW-Aggregation)
  • Jumbo frames enabled on all ports (quick end-to-end check below)
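
Since the test hinges on jumbo frames actually passing end to end, a quick sanity check is worth doing first; the address and interface name below are placeholders:

```bash
# 8972 bytes of ICMP payload + 8 bytes ICMP header + 20 bytes IP header = 9000.
# "-M do" forbids fragmentation, so this fails loudly if any hop is still at 1500.
ping -M do -s 8972 -c 4 192.168.1.10

# Confirm the NIC itself is at MTU 9000 (interface name is a placeholder):
ip link show enp1s0f0
```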

I'm thinking of setting up bonding in UniFi for each port but haven't done so yet. Does bonding also require configuration on the OS side? Any tips here would be appreciated.
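
(For what it's worth: yes, the bond has to exist on the OS side too; the switch-side LAG only handles its half of the LACP negotiation. A non-persistent sketch with iproute2, assuming 802.3ad/LACP and placeholder interface names and addresses; netplan, systemd-networkd, or nmcli can make it permanent.)

```bash
# Create the bond and set LACP parameters.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast \
    xmit_hash_policy layer3+4

# Interfaces must be down before they can be enslaved.
ip link set enp1s0f0 down
ip link set enp1s0f1 down
ip link set enp1s0f0 master bond0
ip link set enp1s0f1 master bond0

ip link set bond0 mtu 9000
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# Verify both links negotiated into the aggregate:
cat /proc/net/bonding/bond0
```

Keep in mind that with LACP each individual TCP flow still rides one 10Gbit member; layer3+4 hashing just lets different flows land on different members.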

What aggregation protocols does the switch support?

Only balance-rr lets a single TCP connection use both links at the same time on a single IP address, but you’ll need to increase your TCP reassembly buffers on both the server and client ends to handle the out-of-order packets it produces.
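
A minimal sketch of the knobs usually meant by that, applied on both machines; the values are starting points rather than tuned recommendations:

```bash
# Tolerate far more packet reordering before TCP treats it as loss.
sysctl -w net.ipv4.tcp_reordering=127
sysctl -w net.ipv4.tcp_max_reordering=1000

# Raise the socket buffer ceilings so a 20Gbit flow has room to reorder.
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
```

Persist whatever ends up working in /etc/sysctl.d/ once you've settled on values.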

Alternatively, set up Multipath TCP (MPTCP) and you don’t need an L2 aggregation protocol at all.
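
For reference, a rough sketch of what that looks like on a recent Linux kernel (5.6+ has the in-kernel MPTCP stack); run the equivalent on both the NAS and the client, and treat the addresses, interface name, and hostname as placeholders:

```bash
# Make sure MPTCP is on (most current distros ship it enabled).
sysctl -w net.mptcp.enabled=1

# Allow extra subflows and announce the second NIC as an additional path.
ip mptcp limits set subflow 2 add_addr_accepted 2
ip mptcp endpoint add 192.168.1.11 dev enp1s0f1 subflow

# Wrap a regular TCP tool so it opens MPTCP sockets (mptcpize ships with mptcpd).
mptcpize run iperf3 -s                        # on the NAS
mptcpize run iperf3 -c nas.example.lan -t 60  # on the client
```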


I think Unifi switches are limited here and only support 802.3ad LACP, according to https://community.ui.com/questions/balance-rr-on-Unifi-Switch/a99e1951-84f6-4d86-ac05-2a38f3a69104
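
If it really is LACP-only, a single iperf3 stream will top out around 10Gbit no matter what, since each flow is hashed onto one member link. A sketch of working around that with parallel flows, assuming a layer3+4 hash policy on the bond and a placeholder hostname; older iperf3 builds are single-threaded, so running two separate processes also sidesteps a CPU bottleneck:

```bash
# On the NAS, listen on two ports:
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# On the client, drive both at once with several streams each:
iperf3 -c nas.example.lan -p 5201 -P 4 -t 60 &
iperf3 -c nas.example.lan -p 5202 -P 4 -t 60 &
wait
```

Add the two reported throughputs together; if the total stays near 10Gbit, check the bond's xmit_hash_policy and the switch's hashing, since a layer2-only hash pins every flow between the same two hosts onto one link.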