ZFS: 8x SATA SSD vs NVMe

I am going to build a ZFS server with 128GB RAM, probably a used Xeon server. My question is: what is going to be faster, 8x Samsung 870 EVO 4TB or 2x Micron 9300 15TB NVMe in a mirror? Reads are not my problem, both will do fine there; my problem is random writes. There will be 40 client machines connected over iSCSI with 1Gb NICs, served from image files used as backing storage. What will be faster for random writes: 8x SATA SSD in RAIDZ1, or the NVMe mirror?

Are we talking about the 9300 MAX or the standard 9300 (Pro)?

8x SATA SSDs should be better on random writes. But putting them into RAIDZ throws all of that write performance into the dumpster: a RAIDZ vdev delivers roughly the random-write IOPS of a single drive, because every write touches every disk in the vdev. Never use RAIDZ if the workload is random-IO-centric.
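Roughly what the two layouts look like, with placeholder device names (adjust for your disks):

```
# 4x 2-way mirrors: 4 vdevs, so roughly 4 drives' worth of random-write IOPS
zpool create -o ashift=12 tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh

# RAIDZ1 over the same 8 drives: 1 vdev, so roughly 1 drive's worth
# zpool create -o ashift=12 tank raidz1 sda sdb sdc sdd sde sdf sdg sdh
```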

8x in 4 mirrors is the better comparison, because then it's mirror vs mirror. I'd put my money on the EVOs against the 9300 Pro, and on the 9300 if it's a MAX.

The 9300 will handle your use case better. Consumer drives drop off in performance under sustained load, once their SLC write cache fills. I'd try to get 4x 8TB drives instead (9400 if you like Micron and want PCIe 4.0): better €/TB and double the performance.

Both will be fine and very likely limited by the 1Gbit interface. I'm going to guess that the 9300 will perform better, simply because of lower latency, overhead, etc.
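If you want numbers instead of guesses, a quick fio random-write run against each candidate pool makes them directly comparable. A rough sketch, with a placeholder path; keep in mind ZFS's ARC can inflate the results unless the test file is well beyond RAM:

```
# 4K random writes at queue depth 32 against a test file on the pool
fio --name=randwrite --filename=/tank/fio-test --size=256G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```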

I could even run the NVMe drives in a stripe, since backups of this array are not my problem, but I wouldn't stripe the 8x SATA drives. So: 2x Micron 9300 NVMe in a stripe. The server will probably have a dual-port Intel X520 SFP+, and I use Linux tgt for iSCSI.
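A minimal tgt target definition backed by an image file looks roughly like this (the IQN, path, and subnet here are made-up placeholders):

```
# /etc/tgt/conf.d/client01.conf
<target iqn.2024-01.lan.storage:client01>
    backing-store /tank/images/client01.img
    initiator-address 192.168.10.0/24
</target>
```

Reload with `tgt-admin --update ALL` and the target becomes visible to that subnet.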

Also, in the future the client PCs will use 2.5Gb NICs. The NVMe drives can be Kioxia or Micron; my only problem is that with used servers it's hard to find ones compatible with PCIe 4.0.

Okay, from what I recall FreeBSD's iSCSI stack performs better than Linux's, so you might want to look at that too.


I am so used to Linux that it's hard for me to use FreeBSD.

I agree on this one, judging from my experience. TrueNAS Core FTW. iSCSI was the final nail in the coffin for abandoning TrueNAS Scale, though mostly because of bugs, not just the worse iSCSI performance.

It’s 40x 1G with dozens of targets. You really want all the IOPS you can get, because it doesn’t sound like 40 VM disks that idle most of the time but targets with actual traffic.

Check out the Micron 7400/7450/9400 Pro 8TB drives; 4 of those run circles around the EVOs. PCIe 4.0 isn't really that important for most servers. I think the Kioxia equivalents are the CD6/CD7/CD8 line of datacenter SSDs, but I'm not sure and don't know the details.


I use ZFS from the terminal; I have no problem with ZFS and the terminal. I use Debian.

Ok, thanks for the info, I will look into the Micron 7400 etc. So there should be no issue if they run at PCIe 3.0.
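If you want to check what link a drive actually negotiates in a given slot, lspci will show it (the PCI address below is a placeholder, take it from the first command's output):

```
lspci | grep -i 'non-volatile'                        # find the NVMe controller's address
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'   # capability vs negotiated link
```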

Also, iSCSI backed by a zvol is slow on Linux, but from a lot of testing, mounting image files as iSCSI targets with Linux tgt is quite fast. I have tested zvols repeatedly; they are still slow, while .img images work really well.
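For anyone comparing the two backends, they're created like this (pool and volume names are placeholders; the zvol path is how Linux exposes it):

```
# zvol-backed target (tested slow here):
zfs create -V 100G tank/client01
# -> backing-store /dev/zvol/tank/client01

# file-backed target on a plain dataset (tested fast here):
zfs create tank/images
truncate -s 100G /tank/images/client01.img
# -> backing-store /tank/images/client01.img
```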

There are differences, for sure, but you'll get accustomed quite fast in most cases, and things are usually well documented. But you can always try Linux first.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.