8TB+ SATA SSD recommendations?

Today one of my HDDs in RAID1 died and I’m looking for a replacement. But since it was an old-ass 2TB RAID array, I’m planning to replace both drives and either get 20TB Seagate Exos HDDs or some big-capacity SATA SSDs.

Unfortunately there don’t seem to be many 8TB SATA options. Enterprise drives are typically U.2 or NVMe, and I don’t have spare PCIe connectivity (since I already have 2x 4TB FireCuda 530 + 2x 2TB 990 PRO).

The use case is not very demanding, as you can guess from the fact that it used to live on HDDs - it’s for files, pictures, just a generic Linux home drive. Mostly read-only. The reason I’m considering SSDs is that I don’t feel I really need 20TB HDDs in the workstation, since I have an external HDD-based NAS with a 10G connection - but its latency is a bit high and its random read a bit low to use it directly as a home drive. Buying smaller local HDDs wouldn’t make sense at all for me, since they’d just be deprecated junk in a few years.

I see that my local shop offers:

  • QVO 8TB ~$600
  • Kingston DC600M ~$750
  • Micron 5400 PRO ~$750

The QVO seems affordable but also feels like a bit of junk. At the same time I’m not really sure if I need non-junk SSDs here. Are there any other recommended high-capacity SATA SSDs?

I’m not considering 4TB SATA drives since they will soon be junk and I’ll end up with 14 SSDs, just like my HDD RAID array, which is 14x 2TB now because I kept running out of space…

Well, if you have ports you could go for RAID 0+1 which would give you more options and be quite a bit cheaper.

Like 4x SanDisk Ultra 3D SSD 4TB (SDSSDH3-4T00-G26) or something…

Try these guys:

Their ExaDrive series offers up to 100TB capacity.

Just don’t look at their price list :roll_eyes:

:exploding_head:

Oh, you did :stuck_out_tongue:

1 Like

Jokes aside, it looks like a terrible deal. Seagate makes way cheaper 16TB U.2 SSDs.

This Kingston looks kinda good tbh, now that I look at it. Too bad they don’t offer 16TB ones.

I’ve searched this space exhaustively.

Sounds like the SAMSUNG QVO 8 TB you found is the best fit. You quoted prices, so I assume you also want cost efficiency. There are no 16 TB SATA SSDs with reasonable pricing, and there weren’t any even during the temporary dip in SSD prices in 2023. Even at the SAMSUNG QVO 8 TB’s lowest price point ($320), the cheapest 16 TB SATA SSD was $1,700: five times the price for twice the capacity per drive.

But even after the price increases this past quarter, SAMSUNG’s 8 TB SSD is still in the sweet spot.

Everything eventually becomes deprecated junk. What’s the horizon you are setting as your cutoff? And what’s your definition of “deprecated”?

  • Physical storage density? (SATA beaten by Solidigm’s 61.44 TB SSD)
  • Bandwidth? (SAS was already there)
  • Latency? (lol)

If you can find a way to add more PCIe lanes (if only just to connect the SSDs - you won’t get more bandwidth), I highly suggest PCIe NVMe SSDs. Not only do they have better stats than SATA, you’re also not paying much of a premium for it: 8 TB SATA SSD @ $600 ($75/TB) vs. 64 TB PCIe 4.0 NVMe SSD @ $3,800 ($59/TB). A fairer comparison might be 8 TB vs. 8 TB, in which case you can find even SAMSUNG’s enterprise PCIe NVMe SSDs for $600 and Micron’s even lower at $470.
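
For the curious, the cost-per-TB math from the prices floating around this thread works out roughly like this (a quick sketch; I’m assuming 7.68 TB for the enterprise SATA models, and these are spot prices that move fast):

```python
# Rough cost-per-TB from the prices quoted in this thread (assumed spot prices).
drives = {
    "Samsung 870 QVO 8TB (SATA)":      (600, 8.0),
    "Kingston DC600M 7.68TB (SATA)":   (750, 7.68),
    "Micron 5400 PRO 7.68TB (SATA)":   (750, 7.68),
    "64TB PCIe 4.0 NVMe (enterprise)": (3800, 64.0),
}

for name, (price_usd, capacity_tb) in drives.items():
    print(f"{name:33s}  ${price_usd / capacity_tb:5.1f}/TB")
```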

This doesn’t leave much room for SATA SSDs to thrive in the future. I really tried to make the math work for SATA SSDs when I built two systems in the past year, but good high-density ones are outclassed by PCIe NVMe in just about every metric. My ICY DOCK SATA enclosures are sitting idle and I’ve found it better just to pay a little more for an M.2-to-U.2 adapter kit, forgoing SATA altogether.

If you are about to invest in that, why not a full-blown NAS for $449 and then populate it with M.2 SSDs?

You could start with two mirrored 8TB drives to prepare for the future, or do a full 6x4TB for 20TB redundant storage. Both options would set you back around $1.5k-$2k.
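
For reference, the usable capacities of those two layouts work out like this (a minimal sketch, assuming single parity for the 6-bay option and ignoring filesystem overhead):

```python
# Usable capacity for the two layouts mentioned above (filesystem overhead ignored).
def mirror_usable_tb(drive_tb):
    # RAID1 / mirror: one drive's worth of space, no matter how many copies
    return drive_tb

def raid5_usable_tb(drive_tb, n_drives):
    # RAID5 / RAID-Z1: one drive's worth of capacity goes to parity
    return drive_tb * (n_drives - 1)

print(f"2x 8TB mirrored:       {mirror_usable_tb(8):.0f} TB usable")
print(f"6x 4TB, single parity: {raid5_usable_tb(4, 6):.0f} TB usable")
```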

And for those of you who think I’m preaching this way too much:

Sorry, you are right. I am coming across as a fanboy here. At the same time, the Flashstor is a mind-blowingly good and disruptive product in this market. Seriously, 6-bay HDD units cost like $300-$400 extra, and 4TB M.2 drives are already pretty cheap; imagine what will happen in 12 months, or 36 months, when 16TB M.2 drives are pushed below $400… In my opinion the Flashstor is almost an iPhone-level disruptor in the SOHO NAS space. Competitors based on the same idea are coming soon. I think we may finally be witnessing the end of SATA proper. And that is both scary and wonderful.

Well, I ended up ordering 2x Kingston DC600M 8TB. Unfortunately, while I rarely write to those drives, when I do it’s usually 2 TB at once or so when migrating backups. I looked at a few benchmarks and the QVO in the 8TB variant looks pretty pathetic. Plus it’s probably gonna depreciate quickly. And it feels off to buy a QVO for a WRX80 workstation, in a sense… Though if I could find the QVO for $300 like you mentioned, I probably would have gone for it. But I couldn’t, so welp.
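
For a sense of scale, here is roughly how long a 2 TB migration takes once a drive’s SLC cache is exhausted (the sustained-write speeds are ballpark assumptions, not benchmarks of these exact models):

```python
# Rough time to write 2 TB at sustained (post-cache) speeds.
# The MB/s figures are assumed ballpark numbers, not measurements of specific drives.
def hours_to_write(tb, mb_per_s):
    return tb * 1_000_000 / mb_per_s / 3600

print(f"consumer QLC   (~160 MB/s sustained): {hours_to_write(2, 160):.1f} h")
print(f"enterprise TLC (~500 MB/s sustained): {hours_to_write(2, 500):.1f} h")
```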

By “junk” I mostly meant rapid depreciation and commoditization. I literally just found a bunch of spare 2TB HDDs in the wardrobe because they’re quite useless nowadays. You can only stack up so many of them before you run out of space, both physically and in terms of connectivity (and SATA power). Same with 256GB or 512GB SSDs: nobody wants them, you end up with lockers full of them and nobody can find a use for them. I wouldn’t like to end up in a situation where, if I wanted to buy another 2 drives, it’d make more sense to scrap the whole thing and just buy bigger drives instead of extending the array.

Those SATA drives have a slightly different use case for me since, like I mentioned, I already have high-performance 2x 2TB 990 PRO NVMe and 2x 4TB FireCuda 530. There’s a bit of a conflict when it comes to NVMe storage because, for now, 4TB drives seem to be faster than 8TB ones (which usually target capacity at the cost of performance), so I went for 4TB NVMe instead of 8TB.

At some point I considered going full NVMe for the bigger storage, but it’s really complicated. PCIe is not all that easy or cheap to split, so it’s not that easy to build, for example, an 8-bay RAID array based on NVMe storage. I already have a WRX80 platform so there’s a ton of PCIe, but it’s still… you know, limited. It’s not like SATA, where you have 8 ports on board, buy one IBM HBA, and boom, you have 16 ports. Just like my 14-bay HDD NAS built from an old PC with one IBM card and the onboard SATA controller on a Sandy Bridge platform…

1 Like

Who in their right mind looked at a 6x NVMe RAID and decided “yeah, 2x 2.5G NICs will be fine for that”? My HDD NAS has a 2x 10G Intel X710-DA2 to avoid bottlenecks lol.

1 Like

NAND storage products depreciate extremely quickly, but the SATA segment is a literal dead end.

Good call on avoiding the Samsung QVO, it is not even in the same ballpark as the other two.
While it is slightly cheaper, it is not cheap enough to justify its performance, as it is a consumer QLC drive with extremely bad performance characteristics.

Those reviews actually understate how bad it is, since they don’t do sustained performance testing.

Either the Kingston DC600M or the Micron 5400 PRO will serve you well. If you want the best bang for the buck, look for used ones on eBay.
You can even get enterprise NVMe drives for nearly the same price. Longevity is not a factor there, as they are extremely overprovisioned and usually lightly used anyway.

You can get a Kioxia CD6 7.68TB for pretty much the same cost as a used Micron 5300/5400. The only cons are power consumption and having to solve the U.2/U.3 connection somehow.

Perf values for reference here and related discussion on l1 here.

2 Likes

Now you are thinking high end; this runs circles around your average HDD NAS, but not datacenter SANs. :slight_smile:

An all-NVMe NAS requires a dual 40 Gbps fiber connection to not bottleneck. That shit gets expensive fast, however. There are a bunch of performance concerns with the Flashstor, some are valid and some less so, and yes, I am not going to pretend it is the best and most performant NAS ever. Those will come too, and will cost quite a bit more. Read the reviews to get a sense of what parts are important for you.

For the SOHO market though, where capacities generally stay sub-50TB and 1G or 2.5G switches are common… it is more than good enough. This might change; in 5 years you might need the Flashstor 4+ Pro Turbo with quadruple 80 Gb fiber-optic interfaces to serve your needs in the prosumer market. But for the needs of today? The Flashstor, surprisingly enough, given how shitty its performance is, works just fine. Not fantastic. But fine.

If this is still not your cup of tea though, then at least now you know what is happening in the market, and just knowing is a good thing no? :slight_smile:

I was mostly referring to the fact that 2.5G is barely enough for an HDD NAS, and this is an NVMe NAS. A single NVMe drive already saturates 2.5G; it’s disproportionately crappy and defeats the whole point of having NVMe when even HDDs would saturate the uplink. If it had a single 10G interface I’d say oh well, it’s a consumer-grade device, it should be fine. But 2.5G is a bit of a stretch xD 10G copper is not THAT uncommon and there are consumer-grade 10G switches out there (ironically made by ASUS). I think they goofed up a bit this time xD
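
Back-of-the-envelope numbers, with assumed typical sequential throughputs and ~94% link efficiency:

```python
# Network link ceilings vs. rough drive throughput.
# Drive figures are ballpark sequential numbers, not measurements of specific models.
def link_mb_per_s(gbps, efficiency=0.94):
    return gbps * 1000 / 8 * efficiency

for name, gbps in {"2.5GbE": 2.5, "5GbE": 5.0, "10GbE": 10.0, "2x 10GbE": 20.0}.items():
    print(f"{name:9s} ≈ {link_mb_per_s(gbps):5.0f} MB/s")

for name, mbps in {"single HDD (seq)": 250, "SATA SSD": 550, "PCIe 4.0 NVMe": 7000}.items():
    print(f"{name:17s} ≈ {mbps} MB/s")
```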

Though I see it has USB 3.0, soooo if you were reaaaaaaaaaally determined you could go for an Aquantia 5G dongle, which would be… just slightly faster, since they’re not actually 5G. I have them connected to a laptop docking station and they realistically achieve like 3.5G.

1 Like

Eh, if you want 10 GbE you can always go with the 12-bay Pro version for $799 :slight_smile: But the CPU only has 6 PCIe 3.0 lanes, so you won’t be saturating anything with those NVMe speeds anytime soon in any case.

1 Like

That sounds reasonable actually, especially since you can’t expect RAID5/6 to scale speed linearly, so this one is probably quite legit for SOHO.

6 PCIe lanes for 12 NVMe SSDs is a similar vibe to those mining motherboards that support 16 GPUs, each one via PCIe x1 XD
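
The worst-case math, assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane and all 12 drives busy at once:

```python
# Worst-case share of a 6-lane PCIe 3.0 uplink when all 12 drives are busy.
GB_S_PER_PCIE3_LANE = 0.985   # ~1 GB/s usable per lane after 128b/130b encoding

aggregate_gb_s = 6 * GB_S_PER_PCIE3_LANE     # ~5.9 GB/s total uplink
per_drive_gb_s = aggregate_gb_s / 12         # ~0.49 GB/s per drive, worst case

print(f"aggregate ≈ {aggregate_gb_s:.1f} GB/s, per drive ≈ {per_drive_gb_s:.2f} GB/s")
# vs. roughly 3-3.5 GB/s that a single Gen3 x4 NVMe drive can do on its own
```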

Exactly. Though when I looked into it, it makes sense; they’re using PCIe switches and clever tricks inside that thing to overcome some bottlenecks, so it is performant enough. But will it beat an EPYC setup with 12 NVMe drives? No, not even remotely. :slight_smile:

But then again, it doesn’t really have to, now does it? Not with the competition it is facing in the niche it is trying to fill.

I’m not here to nitpick (being new and all), but I’ve only seen Micron 32TB disks for that price. The other issue with these huge enterprise disks is power; the Microns are all around 25 watts. Granted, this drops to a tenth of that when idle, but you can’t plan for idle…

True. SAMSUNG’s specs state that their SSDs have a 4.0 W max power draw, which is an incredible advantage despite all the other shortcomings.

I, in fact, bought an inferior (performance-wise) Optane 800P just for the lower power draw alone, even though it’s surpassed in every other way by the P1600X series. I wouldn’t want my laptop to be unavailable at an inopportune moment because the SSD sucked up all of the juice. Power draw is a legitimate attribute worth considering.

1 Like

I mean, tbh 25W sounds comparable to spinning rust under load, at least for older generations. HDDs in RAID arrays don’t park anyway, so their idle power draw is not all that low.
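
If you want to put a number on it, the running-cost difference is small but not nothing (a sketch; the wattages and electricity price are assumptions, adjust for your own numbers):

```python
# Yearly electricity cost delta between a ~25 W enterprise SSD and a ~4 W SATA SSD
# running 24/7. Wattages and $/kWh are assumptions.
def yearly_cost_usd(watts, usd_per_kwh=0.30):
    return watts / 1000 * 24 * 365 * usd_per_kwh

delta = yearly_cost_usd(25) - yearly_cost_usd(4)
print(f"~${delta:.0f} per drive per year at $0.30/kWh")
```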

After longer contemplation of what @wertigon mentioned, and a look at those relatively unattractive off-the-shelf M.2 RAID solutions, I started taking a closer look at PCIe-switch-based 3.5" solutions like this OWC shuttle:

I did say that I don’t have spare PCIe, but that doesn’t mean I can’t split some of what I currently use for less performance-critical stuff. I’m on a WRX80 platform which has 2 onboard M.2 slots (and one U.2 that I use for a USB controller). I use them for the OS (nothing performance heavy) - one shared with a bunch of stuff on the chipset and one CPU-direct. I could attach an M.2 → U.2 card and the OWC PCIe switch to that CPU-direct one. That way it’d provide additional I/O AND it probably wouldn’t trash performance all that badly, since I have RAID1 between the chipset-based M.2 and that CPU-based one, so if both of them were shared with other crippling stuff it’d probably just equalize the crappiness a bit XD And it shouldn’t matter that much for casual storage. Maybe I could even daisy-chain such a thing with my M.2 USB controller so that one shuttle would carry 3 drives + the USB controller and the other 4 drives.

My biggest concern tho is how those switches behave with the IOMMU, since I’m passing that USB controller through to VMs and I certainly wouldn’t want to pass NVMe-based storage along with it.
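
One way to sanity-check that before committing is to dump the IOMMU groups on the host and see what gets lumped together (a quick sketch; whether the switch splits devices into separate groups depends on its ACS support):

```python
#!/usr/bin/env python3
# List which PCI devices share an IOMMU group. Devices behind a switch without
# (working) ACS usually land in the same group and would have to be passed
# through to a VM together.
from pathlib import Path

groups = sorted(Path("/sys/kernel/iommu_groups").iterdir(), key=lambda p: int(p.name))
for group in groups:
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"group {group.name:>3}: {', '.join(devices)}")
```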

But that’s a very far-fetched future. For now, SATA should be fine.

I am running an Odroid H3 with two “internal” HDDs and 4 more in a USB enclosure; it suffers the same 5 Gbit/s limit.
I should have an AMD Kria dev board by mid-February, which has 10G + 4x 1G networking but is USB-only for storage.

1 Like