TrueNAS SCALE

Well, a couple of extra fans and a good clean, Hyper M.2 installed, BIOS sees all the drives. Just waiting to check the 100GbE NIC's firmware in Windows before installing TrueNAS SCALE, probably Monday when the NICs arrive. The Hyper M.2 is a good bit of kit, glad I looked about. Reading up, I should get around 6800MB/s in RAIDZ1. That will do until the new Gen 5 Samsung NVMe drives come out for my workstation. The plan is to get another Hyper M.2 for another 4 drives, which should get me up to around 12000MB/s. That will pretty much saturate the 100GbE link and give me close to Gen 5 performance to a 25TB volume over LAN. :grin:
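For anyone curious where those numbers come from, here's the back-of-envelope maths I'm using. It's a crude model: each RAIDZ vdev contributes roughly (drives minus parity) times the per-drive sequential speed, and vdevs stripe in parallel. The per-drive figure below is an assumption for a Gen4 NVMe drive; real-world results vary with record size, CPU, and compression.

```python
def raidz_stream_estimate(vdevs: int, drives_per_vdev: int, parity: int,
                          per_drive_mbps: float) -> float:
    """Crude sequential-throughput model for a pool of RAIDZ vdevs:
    each vdev contributes (drives - parity) * per-drive speed, and
    vdevs stripe in parallel."""
    return vdevs * (drives_per_vdev - parity) * per_drive_mbps

# One 4-drive RAIDZ1 vdev (assumed ~2200 MB/s per drive):
print(raidz_stream_estimate(1, 4, 1, 2200))  # 6600.0

# Two such vdevs striped in one pool:
print(raidz_stream_estimate(2, 4, 1, 2200))  # 13200.0
```

Treat these as ceilings, not promises; parity calculation and sync behaviour will eat into them.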


It's alive :slight_smile: I had a few issues with the Mellanox 100GbE NIC but it eventually installed :slight_smile: No cable to test with yet :frowning: It should have arrived today but didn't :frowning:

I'm a bit of a noob and I'm getting similar results. I have 4x NVMe in RAIDZ1 with one vdev. Can you add more vdevs to RAIDZ1 to increase performance?

Kinda, but not really. Mirrors are mainly where speed is achieved. When you add more vdevs to a RAIDZ pool, it will prefer the fastest vdev on writes, and then you're at the mercy of that vdev on reads.
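For reference, the difference looks like this at the `zpool` level. This is only a sketch; the pool name and device names are hypothetical, and you'd substitute your own:

```shell
# Striped mirrors: each extra mirror vdev adds read/write bandwidth,
# which is why mirrors are the usual choice when speed matters.
zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# By contrast, adding a second RAIDZ1 vdev to an existing pool stripes
# *new* writes across both vdevs, but data already written stays on the
# old vdev, so reads of old data don't speed up.
zpool add tank raidz1 nvme4n1 nvme5n1 nvme6n1 nvme7n1
```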

Additional cache can mitigate some of the performance woes. There might also be a benefit from a separate SLOG device.

This gives a good overview of how and why a separate SLOG works to increase ZFS performance.

1 Like

Any idea how I can configure the 100GbE NIC on TrueNAS for MTU/jumbo frames?

I had too many issues with SCALE so I don't use it, but odds are you can do it the hard way under the hood.

https://wiki.archlinux.org/title/Network_configuration
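If it helps, SCALE also exposes an MTU field in the web UI (Network → Interfaces → edit the interface), which is the supported route. For a quick temporary test from the shell, something like the following should work; the interface name and target IP here are examples, so check `ip link` for yours:

```shell
# Temporary change, lost on reboot -- interface name is an example.
ip link set dev enp65s0 mtu 9000

# Verify the MTU took effect.
ip link show enp65s0 | grep mtu

# Confirm jumbo frames actually pass end-to-end without fragmentation:
# 8972 = 9000 minus 28 bytes of IP + ICMP headers. This fails if any
# hop in the path (switch, client NIC) is still at MTU 1500.
ping -M do -s 8972 192.168.1.10
```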

1 Like

FWIW, I don't think jumbo frames are necessary anymore, but maybe you know something I don't.
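Easiest way to settle it for a given link is to benchmark both MTUs and compare. A sketch with iperf3 (the server address is an example):

```shell
# On the TrueNAS box:
iperf3 -s

# On the client, run once at MTU 1500 and once at MTU 9000,
# with several parallel streams to give 100GbE a fair chance:
iperf3 -c 192.168.1.10 -P 4 -t 30
```

If the numbers come out the same, jumbo frames aren't buying you anything on that path.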