10GbE performance issues, need help tuning. (FreeNAS-Win10)

Hi all,

I'm having a really hard time getting my 10GbE network to perform.
I've tried to tune my systems to play nicely, but I don't seem to be getting it right.

My SMB performance is utter shit most of the time, and I think it's down to my lack of knowledge about how to tune these systems properly.

So… my systems are as follows.

- Network -
Switch: Netgear XS708E
Cables: Cat6

- NAS -
OS: FreeNAS-11.1-U4
Case: SuperMicro SC825 server chassis
MLB: SuperMicro X9SRL-F
CPU: Intel Xeon E5-2620 v1
RAM: 32GB ECC
Storage: 6x Seagate Constellation ES.3 2TB in RAIDZ2
NIC: Intel X540-T1

- Workstation -
OS: Windows 10 Pro
MLB: Asus Z170M-Plus
CPU: Intel Core i7-6700K @ 4.7GHz
RAM: 16GB
Storage: All SSD based (5x Samsung 850 Evos)
NIC: Intel X540-T1

- Software tuning -

Network:

  • Nothing done, no jumbo frames.

FreeNAS:

Tunables (all sysctl)

hw.ix.enable_aim = 0
kern.ipc.maxsockbuf = 2097152
kern.ipc.nmbclusters = 2097152
net.inet.tcp.delayed_ack = 0
net.inet.tcp.mssdflt = 1448
net.inet.tcp.recvbuf_inc = 524288
net.inet.tcp.recvbuf_max = 16777216
net.inet.tcp.recvspace = 131072
net.inet.tcp.sendbuf_inc = 16384
net.inet.tcp.sendbuf_max = 16777216
net.inet.tcp.sendspace = 131072
vfs.zfs.arc_max = 30800904320
vfs.zfs.l2arc_headroom = 2
vfs.zfs.l2arc_noprefetch = 0
vfs.zfs.l2arc_norw = 0
vfs.zfs.l2arc_write_boost = 40000000
vfs.zfs.l2arc_write_max = 10000000
vfs.zfs.metaslab.lba_weighting_enabled = 1
vfs.zfs.zfetch.max_distance = 33554432

Windows:
NIC Driver: Intel 4.0.215.0
Performance Options:

  • Receive / Transmit buffers: 512
  • Interrupt Moderation Rate: Adaptive

Behavior of network transfers:

  • On freshly rebooted systems I see speeds up to about 300-350 MB/s, but this decreases over time. Longer uptime seems to result in worse performance?!? In some cases it drops as low as 130-150 MB/s.

  • Smaller files perform worse over the 10GbE network than over the 1GbE network.
    The 1GbE network is on its own hardware: NICs, switches, etc.

Goal:

  • To get “good”, consistent performance; I'm not asking for lightning speeds here. If I can reach 400+ MB/s, that would be a big win.

So if anyone can lend a helping hand with this, it would be most welcome.

Best Regards
Daniel

Be aware that if all those disks are in a single vdev, your write IOPS will be roughly that of a single disk (as each individual write is striped across all the disks with variable stripe width), i.e. say 75-150 IOPS depending on the RPM of your drives.

IOPS x IO size = transfer rate, so small transfers may well be slower because of that…
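
As a rough back-of-the-envelope example (assuming ~100 write IOPS for a 7200 RPM vdev and the IO sizes below, which are just illustrative numbers):

  100 IOPS x 128 KiB per IO ≈ 12.5 MiB/s (small / random writes)
  100 IOPS x 1 MiB per IO   ≈ 100 MiB/s  (large sequential writes)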

In theory 4 data disks should be able to get what… 600 MB/sec of sequential throughput (at a guess; I haven't benchmarked spinning rust for a while). But if there's other stuff going on with the pool at the same time… spinning disks don't multi-task well.

I'd try to benchmark the network and the disks independently, starting when the performance problem is actually apparent.

What happens if you run some disk benchmarks on the local host (the NAS - if need be, spin up a FreeBSD jail on it and install bonnie++ or something)? i.e., can you confirm that performance looks “as expected” on the local machine (or have you already confirmed this)?
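
A quick sketch of a sequential test from a shell on the NAS itself (assuming the pool is mounted at /mnt/tank - adjust the path; note that lz4 compression makes a zero-filled file meaningless unless the target dataset has compression off, and the read test needs a file bigger than your 32 GB of RAM or it will just come back out of ARC):

  # sequential write of a 64 GiB test file
  dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=65536
  # sequential read of the same file
  dd if=/mnt/tank/testfile of=/dev/null bs=1M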

Can you confirm that you're getting good bandwidth over the NIC with, say, iperf?
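
A sketch with iperf3, assuming it's available on both ends and 192.168.1.10 stands in for the FreeNAS box's address:

  # on FreeNAS (server side)
  iperf3 -s
  # on the Windows client, 30-second test with 4 parallel streams
  iperf3 -c 192.168.1.10 -t 30 -P 4

Anything much below roughly 9 Gbit/s here points at the network path rather than the disks.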

Have you checked whether or not any virus scanning on the host is perhaps scanning files in flight?

I'd definitely look at enabling jumbo frames; 10 GbE is where you start seeing a benefit from them, but I'm not sure that's your issue.
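
If you do try it, a rough sketch (the interface names are assumptions, and the switch and everything else in the path need jumbo frames enabled as well):

  # FreeNAS side, temporary test (persist it via the interface options in the GUI)
  ifconfig ix0 mtu 9000
  # Windows side, PowerShell - the exact keyword/value depend on the Intel driver,
  # so check Get-NetAdapterAdvancedProperty first
  Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*JumboPacket" -RegistryValue 9014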

Do you have network port stats on your switch (or on either the server or the client) to check for errors (maybe it's a dodgy cable or port)? And is your 10 GbE NIC (on the client) actually capable of running the full 10 gig with the PCIe slot/lane count it has?
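
On the FreeNAS side at least, you can check both of those from a shell (a sketch; ix0 is an assumption for the X540's device name):

  # per-interface error/drop counters - non-zero Ierrs/Oerrs suggests a cable or port problem
  netstat -i
  # negotiated PCIe link for the NIC - look for something like "link x8" on the ix device
  pciconf -lvc

On the switch side, the XS708E's management utility should show per-port error counters as well.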

This could actually be something. If you have a bunch of SSDs driven by the chipset, and the NIC also driven by the chipset, then if I'm not mistaken you could be hitting a PCIe bandwidth limit on the client. Maybe someone else more knowledgeable on that aspect could pipe up.

I’m not aware of how to monitor PCIe bandwidth throttling, but I’d wager that a bunch of SSDs plus 10 gig driven off the chipset could see contention. i.e., your problem may well be on the CLIENT, and not in FreeNAS.

edit:
one more thing… this shouldn't be an issue anymore, but what is the ashift on your drives in the pool? It should be 12 (i.e., a 4096-byte sector size for the disk), and recent FreeNAS versions should set this correctly automatically most of the time. But if it is 9 (512-byte sectors), it could cause IO problems on your disks.
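
A sketch for checking it from a FreeNAS shell ('tank' stands in for your pool name):

  zdb -C tank | grep ashift
  # FreeNAS keeps its zpool.cache in a non-default location, so you may need:
  zdb -U /data/zfs/zpool.cache -C tank | grep ashift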

Try adding a SLOG and L2ARC; that improved performance on my system quite a bit - though that was from ~500ish to ~800ish with the same drives.
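
For reference, from the command line that would look roughly like this (normally you'd do it through the FreeNAS GUI instead; 'tank' and the device names here are just placeholders):

  # separate log device (SLOG) - only helps with synchronous writes
  zpool add tank log /dev/ada6
  # L2ARC read cache device
  zpool add tank cache /dev/ada7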

Thanks for your replies, and sorry for my late reply; I have been away from home for work-related things.

I don't believe I will benefit at all from adding a SLOG (separate ZIL device), as it only helps with synchronous writes.

SMB uses asynchronous writes, so there would be no, or close to no, benefit from adding one to my system.

L2ARC, on the other hand, might help, but I haven't seen any indication that I am running “out” of ARC.
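
One way I plan to sanity-check that is to look at the ARC hit/miss counters (a sketch; the ratio interpretation is just my own reading of the standard FreeBSD arcstats sysctls):

  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
  # hit ratio = hits / (hits + misses); if it stays high, an L2ARC probably won't add much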

I will give jumbo frames a second go; last time I tested them I didn't see any benefit.

As mentioned by thro, I might be running out of system resources on my Z170 board. But I have only one GPU and one 10GbE NIC installed in the PCIe slots. The motherboard manual states that I should get x8 PCIe on both slots when set up like this.

All the drives I use are SATA-based.

And yes, all disks in the server are in the same vdev (6 drives / RAIDZ2). When I first set up the server, 10GbE LAN was not a thing. If I am unable to get good results with the zpool set up as it is, I will redo it.

I will test some suggested tweaks this weekend and post my results and findings.

Best Regards
Daniel

Did you find any answers on this? I have a system I am struggling to get decent speed out of, and I was wondering if there was a solution.

Have you run through the suggestions I posted above in the thread, and what were the results?

I’m aware you’re not the OP, but the diagnostic procedure will be the same…

My guess is it was drive limited; I can't saturate my 10GbE on 8x 3TB WD Reds. I could probably tweak more, but yeah, it's a PITA to saturate it.
