NVMe Drive Recommendations for NAS

So the Terramaster F8 SSD Plus has piqued my interest and I've decided to build one, but I'm undecided on which NVMe drives to get.
It'll be used as a media server (Jellyfin) and for storage, running over a 2.5 Gbps LAN.

I almost settled on the WD Red SN700 but have seen mixed reactions since SanDisk got involved.
Was also thinking of the TeamGroup T-Create Classic C47; I heard Wendell had some information on them, but I can't locate it :confused:

Are you running a NAS with NVMe drives? How’s your experience been so far with drives you’ve selected?

Thanks,
Z

Kinda. But that’s off-topic :slight_smile:

For a 2.5G NAS…just get drives with good/best price-per-TB value while keeping an eye on TBW as a secondary concern (I wouldn't buy <500 TBW per TB of storage on consumer drives).
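If you want to sanity-check that rule of thumb against a spec sheet, a trivial sketch (the numbers below are made up, not real drive specs):

```shell
# Hypothetical drive: 600 TBW rated endurance, 1 TB capacity.
rated_tbw=600
capacity_tb=1
# Endurance normalized per TB of capacity.
per_tb=$((rated_tbw / capacity_tb))
# Rule of thumb from above: want at least 500 TBW per TB on consumer drives.
if [ "$per_tb" -ge 500 ]; then
  echo "passes: ${per_tb} TBW/TB"
else
  echo "fails: ${per_tb} TBW/TB"
fi
```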

Performance isn't important. CPU, memory, and 2.5G won't allow for much anyway. Any NVMe drive is basically overkill.

edit: 4x "economy/discount business class" >>> 3x "first class". Capacity and strength in numbers are the name of the game in most of storage.

2 Likes

A concrete recommendation (but only 2 datapoints):

I had a Kingston NV2 2TB at first in my mixed NAS (mostly HDD and some SSD for more latency-sensitive data), now a Lexar NM790.

The Kingston would suffer from read speed degradation and use more power (neither ASPM nor the lower power states are supported), and it had pretty bad write consistency when transferring more than about 10 GB.

The Lexar drive doesn't show read speed degradation (at least over the 12+ months of use so far), and it has better write consistency and lower power use.

Both are very cheap per TB (successor models are available for both; the NV2 is no longer sold, but the NM790 still is). The Lexar drive is TLC; the Kingston might be TLC or QLC via lottery. Both are DRAM-less, but I haven't noticed issues, especially with the Lexar.

If you are planning bigger writes, avoid QLC or potentially-QLC drives. They can get very slow on big writes (like a larger backup), to the point that they might not even saturate 2.5 Gbps. They're also more likely to develop read degradation.

Maybe take a look at the read degradation thread on the forum for more info. It could be a concern if you use SSDs for long-term cold storage, as one might on a NAS.

Personally I run monthly btrfs scrubs to check for bit rot, which is basically a sequential read of all data. The NV2 would drop below 1 GB/s after a while (I think it was 6-9 months), so I'd run a btrfs balance periodically (rewriting the whole filesystem) to remediate. The NM790 still scrubs at 4-5 GB/s after more than a year, as does the Kingston KC3000 in my desktop.
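For anyone wanting to copy this routine, a minimal sketch as commands (the mountpoint /mnt/nas is a hypothetical example):

```shell
# Monthly scrub: sequentially reads all data and verifies checksums.
# -B runs in the foreground so you see the final throughput/error report.
sudo btrfs scrub start -B /mnt/nas
# Periodic balance: rewrites block groups, which also refreshes the NAND.
# usage=100 filters select data/metadata chunks regardless of how full they are.
sudo btrfs balance start -dusage=100 -musage=100 /mnt/nas
```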

But basically: get a cheap TLC drive, ideally one with no read degradation if you can find data on this, and you're good to go. PCIe 3 vs 4 shouldn't matter.

3 Likes

Indeed. I'll add my usual caution not to get the Crucial P3. It ends up at ~80 MB/s read speed once the data is a few months old, despite weekly ZFS scrubs.
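One way to check a drive for this kind of degradation (the file path below is a made-up example) is to drop the page cache and time a raw read of a file that has been sitting untouched for months:

```shell
# Flush dirty pages and drop the page cache so dd measures the drive, not RAM.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
# Sequential read of an aged file; status=progress prints live throughput.
dd if=/mnt/nas/some-old-file.mkv of=/dev/null bs=1M status=progress
```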

1 Like

^^^^^^^

Which is good for power consumption, as long as they don't throttle in practice because of it. I have Samsung 980 (non-Pro) PCIe 3.0 drives that write-throttle around the 35 GB mark, but they're super power-efficient. They aren't cheap, though, and nothing >1 TB is sold.
And distributing data across potentially 8 drives alleviates a lot of cache concerns when talking 2.5G (and even maybe 10G later).

I know they are the very bottom of today's NVMe lineup, but I didn't expect it to be that bad. Is this a TRIM/discard thing? That's some janky-ass controller, ffs!

Otherwise good stuff what @quilt said. :+1:

I guess ZFS scrubs only read the data? You'd actually need to rewrite the data to get speeds back up.

1 Like

Yeah I regret getting it. The nm790 is very cheap too but so far performing immaculately.

One of the issues with the NV2 (and it might apply to the NV3 as well) is that Kingston would swap out NAND and even controllers to keep them cheap. You wouldn't know whether you'd get TLC or QLC, or which controller.

DRAM-less is not really an issue nowadays AFAIK; HMB (Host Memory Buffer) uses system memory instead, and over PCIe that works quite well. A lot of the bad rep comes from DRAM-less SATA SSDs, where the missing buffer would completely destroy performance.

ZFS scrubs are exclusively sequential reads (both data and metadata). That's why they're very fast even on HDDs; btrfs is the same. A scrub is a good sequential read benchmark. If it turns into a random write benchmark, you're in deep trouble :wink:
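A minimal sketch of both halves, assuming a pool named tank and a dataset named tank/media (both hypothetical): a scrub for the sequential read check, and send/receive to actually rewrite the data if speeds have dropped:

```shell
# Scrub sequentially reads and verifies all data; status shows throughput.
sudo zpool scrub tank
sudo zpool status tank
# Reads alone don't refresh the NAND; rewriting does. One blunt option is
# snapshot + send/receive into a fresh dataset, then swap the names over.
sudo zfs snapshot tank/media@refresh
sudo zfs send tank/media@refresh | sudo zfs receive tank/media_new
```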

Yeah I think we can agree on non-trash “economy/cheap midtier class”.

Got the NV2-equivalent of a SATA SSD running here. I made it work with ZFS and ARC tuning…"kind of", but not really. Wouldn't buy again either, 9.99 per drive :wink:

Yep. There was some discussion in the SSD data retention thread that maybe reads would refresh the data, if I recall correctly. Well, just like you say, it doesn’t. Not on the Crucial P3 at least.

This would only be true on a virgin dataset; otherwise fragmentation will break the sequential access.
The only (unlikely) caveat would be newer versions of ZFS on a system with at least as much RAM as the entire dataset being scrubbed: those can run a "healing"-equivalent scrub, doing a sequential read of the drive and sorting out the fragmentation in memory.

ofc this is more an issue for hdds and not ssds.

1 Like

Totally agree. I usually run extensive hygiene (large TXGs, periodically restoring stuff from backup, tighter snapshot policies, etc.) and tuning on my pools, so I got a bit misled by my own practice. I learned from my 80% FRAG, 10k-snapshot experiences years ago.

But to not derail further…I'm not sure this Terramaster will run this kind of stuff (but 8x N305 "E-cores" are totally up to the task).

Non-trash (so no NV2, P3, etc.), TLC, cheap €$/TB. All agreed on that?

2 Likes

Amen.

Off topic, but it seems that level1techs has its own Godwin’s law:

As a level1techs thread grows longer, the probability of a comparison involving ZFS approaches one.

4 Likes

I have run some 4 TB SN700s for a few years now in a business web server, since at the time they were the best high-endurance M.2 NVMe. These things are flawless, but they still cost vastly more than a typical "good quality" 4 TB M.2 NVMe like the 990 Pro. I'd personally recommend the SN700 Red drives if your concern is endurance.

Edit: Just checked, I've been running them in mdadm RAID (mirrored) since mid-2022. Zero SMART errors, though granted it's not read/write heavy in my case: ~800 TB read, ~300 TB written, 7% endurance used.
Honestly though, any good drive is fine… I run dozens of 990 Pro 4TB and SN850X 4TB drives in servers and these two models have basically never failed on me.
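For reference, those array and wear numbers come from checks like these (device names are hypothetical examples):

```shell
# Array state and resync progress for all md arrays.
cat /proc/mdstat
sudo mdadm --detail /dev/md0
# NVMe SMART counters: data units read/written, percentage used, errors.
sudo smartctl -A /dev/nvme0
```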

Mostly not, yeah. If it's an n-1 or current-gen HMB drive (so basically the SN770, its replacement the SN7100, or a competitor, of which there are increasingly many), there's little to no penalty, maybe even an edge, for a lot of workloads. But, for example, I have some file repair workloads which read at ~2 GB/s per thread and immediately turn the buffer with its fixed-up contents around for write. Last I checked, on the SN770 the total transfer rate was 2 GB/s per thread; on the SN850X it's 2 GB/s read + 2 GB/s write = 4 GB/s per thread. Not a big deal, but if I have like 50 GB of files to process I do notice.

Also, not the use case here, but if it's a drive accessed over USB: last I checked, it didn't seem like USB mass storage had HMB support.

1 Like

You may want to read this before getting an SN770 (or at least before formatting it to 4 KiB sector size):

1 Like

Having read through it, that's more like "you want to read this before using ZFS with NVMe drives". Not much is left if you exclude all HMB drives as suggested in the issue and then start crossing off the DRAM-equipped drives people mention problems with.

Personally, seems like it’d be easier just to exclude zfs.

I won't pretend to know the actual issue here, but it seems to me to be more related to some SN770 bug with 4 KiB sectors than to ZFS. A couple of quotes from GitHub:

I guess using 512 B sectors is a workaround, but it's annoying not to be able to use the most performant format. (Unless performance is the same between 512 B and 4 KiB on the SN770; I have no idea, really.)
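For anyone wanting to check or change this themselves, nvme-cli does it like so (device name is a hypothetical example, and formatting erases the namespace):

```shell
# List the namespace's supported LBA formats; "(in use)" marks the current one.
sudo nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"
# Switch formats (the index after --lbaf comes from the list above).
# WARNING: this destroys all data on the namespace.
sudo nvme format /dev/nvme0n1 --lbaf=1 --ses=0
```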

Edit: To be clear, there are likely multiple issues represented in that thread, both ZFS ones and a plethora of different NVMe controller issues. One of them (and the original topic) seems to be a SN770 bug with 4 KiB sectors.

I have been playing around with a Beelink ME mini 6-slot, with six SM961 1 TB SSDs in RAID 6. Because MLC NAS. :smiley:

SM961 1 TB SSDs have really gone up in price recently. I got them for under $50 a few months ago.

Thank you all for the replies. A lot of good information here!

1 Like

Is there any user with a Terramaster F8 SSD who got it working? I tried to connect mine to the 10 Gb port on my Supermicro Epyc machine and it does not recognize the 24 TB on my NAS. I can connect it to my internet router, but that is only 1 Gb.