Best bang for buck prosumer/enterprise SSDs for NAS & Home Media Server

Ignoring the WD marketing above me, what sort of system will be hosting this? Do you have the PCIe slots and bifurcation support to support a handful of either M.2 or U.2 drives via PCIe breakout cards?

NVMe SSD value has improved drastically in recent history, might be worth considering an array of them if you have the hardware to support it, but that’s gonna be a pretty beefy system to have that many PCIe lanes.
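For a rough sense of the lane budget involved, here’s a back-of-envelope sketch; the lane counts are approximate and platform-dependent, and it assumes every drive gets a full x4 CPU-attached link:

```python
# Rough PCIe lane budget for an all-NVMe array (illustrative numbers only).

def lanes_needed(num_drives: int, lanes_per_drive: int = 4) -> int:
    """Total lanes consumed by the drive array, assuming x4 per drive."""
    return num_drives * lanes_per_drive

print(lanes_needed(6), "CPU lanes just for six x4 NVMe drives")  # 24 lanes

# Approximate CPU-attached lane counts; real platforms vary by generation/SKU.
platforms = {"mainstream desktop": 20, "HEDT (Threadripper)": 64, "EPYC SP3": 128}
for name, total_lanes in platforms.items():
    # Keep a x16 slot free for a GPU/HBA, spend the rest on x4 NVMe drives.
    spare = max(total_lanes - 16, 0)
    print(f"{name}: room for ~{spare // 4} x4 drives alongside a x16 slot")
```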

If you’re hosting Minecraft and/or game servers, I’d look into getting at least one NVMe drive specifically for hosting them; it made a huge difference on my own NAS that was also running ARK + Minecraft + Project Zomboid servers simultaneously for our little group of a dozen or so people.

1 Like

WD sells SSDs too, but I do not suggest them; my post is all about HDDs over SSDs. SSDs are not worth it imo: their cost is too high, and they do not offer a significant improvement for the type of data storage in question here.

1 Like

I’m about to start flagging your thesis on WD hard disks for being off-topic when the thread was specifically asking for SSD recommendations to help decide on a configuration.

SSDs absolutely offer a significant performance improvement in a NAS, and NVMe over SATA is a noticeable upgrade if OP is going to be using this system to host game servers which stream world assets to the client or distribute large files as part of configuration/mod distribution to the users, etc. Which is a lot of them.

That’s why I just asked if they have the resources to support such a setup.

4 Likes

Flag away. This is a technology discussion, I stand behind my statements, and I see no reason why a discussion about the various technologies available could possibly be off-topic. The man came in here asking for advice about storage options, and we are giving it. So far I see nothing off-topic about anything anyone has posted, aside from your baseless threat just now.

The one thing I do see is people who are passionate about technology, and that is not a bad thing at all.

You do raise good points about streaming data. If he is planning on offering server access to many people, and this is going beyond any sort of very small group use, then integrating a few SSDs would be useful, as MisteryAngel also mentioned in their earlier post.

For greater clarity, it should be mentioned that my own perspective is that of a single user, and waiting a second for a drive to spin up from sleep mode every once in a while does not bother me at all. I have never personally noticed any significant delays during regular use of HDDs. The one slight exception is with more modern games: SSDs do help greatly reduce load times on larger titles, or with something significantly intensive like restarting a computer entirely. But the original poster’s mention of Minecraft did not fit that requirement, and this being a server, it will very rarely be starting from a shutdown.

Also, my very early experiences were with loading data off tape drives on a Commodore 64. Compared to that, everything we have these days is beyond 7th-level sorcery. So waiting a second every once in a great while for data to be pulled from an HDD does not bother me in the slightest.

2 Likes

Yeah I’m not saying there should be no hard disks, but my recommendation would be to run about a 5-10TB array of NVMe drives and run spinning rust for the rest of the ‘cold’ storage of things that don’t get accessed much or don’t need much random I/O.

Just an example since they’re in stock in the same place, something like this in RAIDZ2 would give you 5.7TB of NVMe storage and ~28TB of HDD for somewhere around $2000. It would depend on having HEDT/server platform amounts of PCIe available though.
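If anyone wants to sanity-check that capacity figure, here’s a minimal sketch; the drive counts and sizes are my guesses to match the numbers above, not the actual listing:

```python
# Back-of-envelope RAIDZ2 usable capacity (ignores ZFS metadata/padding overhead).
# Drive counts/sizes below are illustrative guesses, not the original listing.

def raidz2_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAIDZ2 keeps two drives' worth of parity: usable = (n - 2) * size."""
    return (num_drives - 2) * drive_tb

print(raidz2_usable_tb(5, 1.92))   # ~5.76 TB usable from a 5-wide NVMe vdev
print(raidz2_usable_tb(4, 14.0))   # ~28 TB usable from a 4-wide HDD vdev
```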

5 Likes

One of the reasons I pound HelioSeal tech so much is that it really is another step up from traditional HDDs. My personal experience has supported that conclusion, but I don’t like saying ‘just trust me’, so I tried to offer supporting evidence. I did not mean it to come across as marketing. I have no personal stake in Western Digital; I don’t even put money into their stock. But that particular product is very good. It’s surprising to me sometimes that few people seem to know about it, although that does make sense too, because those drives tend to be on the very large/more expensive end of the HDD market.

3 Likes

The QVO is a QLC drive and it will get killed in a ZFS array. You’ll very easily end up with huge write amplification and the drive performance will tank. See /r/DataHoarder for anecdotes.

Take a look at:

  • WD Red SA500 4TB, fairly cheap, 2500 TBW
  • Samsung PM893 3.84TB, around 50% more expensive but you get an insane 7000 TBW

SSDs and ZFS are tricky because SSDs lie a lot to the controller. Usually more expensive drives lie less and can handle more abuse. The more you pay, the more predictable the worst-case performance gets. That’s what you pay for on the PM893 and above.
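If you want to see how badly a drive is being amplified, the estimate is simple when the drive reports both host writes and NAND writes; here’s a sketch with made-up numbers (SMART attribute names and units vary by vendor, so treat this as illustrative only):

```python
# Estimate the write amplification factor (WAF) from SMART-style counters.
# Real drives report these under vendor-specific attributes and units;
# the values below are made up for illustration.

def write_amplification(host_tb_written: float, nand_tb_written: float) -> float:
    """WAF = data actually written to NAND / data the host asked to write."""
    return nand_tb_written / host_tb_written

host_tb = 10.0   # what the pool wrote
nand_tb = 45.0   # what the flash actually absorbed (small random writes on QLC)
print(f"WAF = {write_amplification(host_tb, nand_tb):.1f}x")  # 4.5x
```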

Edit: I’ve personally tried Patriot Burst Elite (cheap TLC) in two separate mirrors and they died after 3-4 months. The WD Red SA500s are still going strong after 6 months in the same chassis, so you do get what you pay for.

Edit 2: For less than the price of 1 SA500 at 4TB I bought 2 Toshiba MG08 14TB helium drives, so the price difference is definitely still huge.

5 Likes

Excuse me while I spout some blasphemy

At this point in time, it might be best to treat high-capacity HDDs more like fast-access tape drives. Magnetic platters are still pretty good at holding data offline, which is something SSDs aren’t so good at. And the ways in which HDDs fail tend to be mechanical in nature, so if your drives aren’t even mounted, and aren’t being spun, they tend to last a whole lot longer than if they are.
Something I’ve been tossing around but not doing because, tbqh, I just don’t really have the money for it, is a RAID 5 SSD array using enterprise castoffs off eBay, plus a couple of standalone, non-parity HDDs as a cold backup. Not even mounted: let the drives sit idle, no spinning for the spinning rust, except at a regular interval to make a cold copy of the SSD array.

SSDs, even SATA SSDs, can rebuild an array very quickly. Enterprise-grade drives are very, very good at tanking a heavy number of suboptimal writes, while consumer-grade SSDs have a tendency to let the controller roast at unsafe temperatures compared to server drives. And if a drive goes down, the odds of the other three going down while the replacement is being written to are very low, because reads don’t really hurt SSDs, whereas for a hard drive they’re not that different from writes.
For the hard drive, if it’s sitting offline 95% of the time, you’ve basically extended its useful life by an order of magnitude. If you use 4x 3.84TB SSDs, you end up with an 11.52TB array, and 12TB HDDs are cheap enough that you can buy two and swap between them for backups. The odds of losing a bit in two months with an HDD are very low.
If you put them in RAID, though, the drives not only spin more often, since accessing one requires accessing the other, but the strain they’re under while spinning is also greater, due to the vibration of their companion drives.

If you have 4x 3.84TB server-castoff SSDs per 2x 12TB HDDs, I think that’s a decent spot to be in for a very resilient, fast-enough home server setup optimized for overall low TCO. HDD power consumption is, after all, a non-issue while the drive isn’t spinning, and adding the backup HDD, even if it’s just one, counteracts one of NAND flash’s biggest weaknesses in the storage space: cold-storage integrity.
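For what it’s worth, the capacity math in that layout works out; a quick sketch using the same numbers as above, with RAID 5 modelled as one drive of parity:

```python
# Does one 12 TB cold-backup HDD hold the usable capacity of the SSD array?

def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 5 sacrifices one drive to parity: usable = (n - 1) * size."""
    return (num_drives - 1) * drive_tb

ssd_usable = raid5_usable_tb(4, 3.84)   # 4x 3.84 TB castoffs -> 11.52 TB usable
backup_hdd_tb = 12.0                    # one of the two rotated cold HDDs
print(f"{ssd_usable:.2f} TB usable; "
      f"{'fits' if ssd_usable <= backup_hdd_tb else 'does not fit'} "
      f"on a single {backup_hdd_tb:.0f} TB backup drive")
```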

But it’s probably terrible for some reason I don’t understand ¯\_(ツ)_/¯

4 Likes

You know, if this were 2015 I would totally expect your response. But it’s 2022, damn near 2023. SSDs have come a long way since the days of the SandForce controllers and the Samsung 840 woes.

Now, while I certainly see a use for HDDs in both the enterprise and the home lab, I also see that line slowly shifting towards all-flash. In my professional life, I have a Pure SAN, a Hitachi Vantara SAN, NetApp E-Series, Lenovo DE-Series, a whole slew of ZFS boxes, and a couple of small all-flash vSANs.

The Hitachi is older, from 2014, and even it employs a substantial number of SSDs for caching. Even in datacenters that require tons of bulk storage, SSDs in a tiered storage architecture have a purpose.

The NetApps and Lenovos are just “cheap and deep” dumb block storage. I’m of the mindset that if it ain’t broke, don’t fix it. LSI created a marvelous design with what became SANtricity, and if IOPS don’t matter, then heck yes, HDDs are great.

But HDDs have moving parts. They are inherently less reliable than SSDs because of basic physics. They wear out over time whether there is a heavy workload on them or not. Saying unequivocally that you wouldn’t trust your important data on SSDs is just plain ignorant of the facts.

Which brings me to Pure. I have no affiliation with them as a company and I am in no way preaching their good graces. They are just as evil as any other enterprise vendor out there. But they make a fantastic product: QLC flash for bulk storage, at a fraction of the electrical requirements and rack space of spinning disks. It’s certainly an idea worth looking at. You can fit half a petabyte inside 3U and use a fraction of the power. Granted, I am not using my Pure in this sense; it is Tier 1 storage. But I don’t think you are arguing against using flash in that role.

Give this a watch some time. Maybe hearing out people who do this for a living will soften your heart a little bit.
Better Science: HW & SW Co-Design | Pure Storage

3 Likes

Hi OP.

I would suggest a practical approach, where you are using both hard disks and solid state ones. To each we would give the roles which best leverage their strengths.

I think a first-generation EPYC-based system may be a good option. I’ve bought from this seller twice and had good experiences both times. Registered ECC DDR4 RAM is cheap and will allow you to invest in other areas.
AMD epyc 7261+Supermicro H11SSL-i 8 Core 2.50 GHz 64 MB sp3 | eBay

Alternatively, you can spend a bit more and buy an Alder Lake system like @wendell featured in the “Son of the Forbidden Router” series and put something like a 12600T in it. But availability of LGA1700 server/workstation boards that support ECC is limited at best.

Personally I think the difference between a CPU with a TDP of 65 watts and one that can consume 200 watts is greatly exaggerated on one’s power bill. But to each his own.

In any case, invest some money in nice quiet Noctua fans and a Noctua cooler for whatever you choose. Other brands like be quiet! or similar are also good, I just tend to stick with what I know, and I’ve never had a bad experience with anything Noctua. My current custom server has the not-so-quiet Noctua industrialPPC 140mm 3000 RPM fans, and they are loud, but still a lot quieter than the fans on my old Dell R720 at a given workload.

As far as power is concerned, buy Seasonic. And don’t skimp. Buy the higher-end units where you may get fewer watts per dollar but you gain efficiency and build quality.

Case is dealer’s choice. @wendell has been talking up the Sliger 3/4U cases and from what I can see they look rather nice. My current custom server is in a Fractal Define R3 I’ve had for like 7 years. I can’t complain about anything with that case, and I’ve had 2.

As far as storage, which is really the point of contention, I see no reason you couldn’t have both spinning rust and solid-state storage. You can have a pool with two mirrored NVMe drives like the P4610s (rated for mixed workloads) for a relatively good price. Intel DC P4610 Series 1.6TB 3D TLC PCI Express 3.1 x4 NVMe SSDPE2KE016T8 #H-8006 (ebay.com)

You can run your VMs or your Docker containers on that storage.

Then you can get some super inexpensive hard drives. I have bought 10 from this seller and they are all working great
HUH721010AL4200/42C0 HGST Ultrastar He10 10TB 7200RPM SAS 12Gbps 256MB 2018 | eBay

Under load, such a system will draw less power than a gaming PC and do a whole heck of a lot more for you. Throw in a graphics card like a Quadro P400 if you want to get crazy with Plex transcoding. :slight_smile:

Flash and NVRAM are the ONLY way to handle production workloads in a business setting. No way around it.

“Mimimi price per TB” → On the fly De-Dupe and Compression.
Hosting 200 identical Windows VMs? → Write 1 to storage
Hosting 12000 nearly identical user profiles? → write the equivalent of 1200 to storage
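The de-dupe effect is easy to picture with a toy content-addressed store: blocks are keyed by their hash, so identical blocks only ever hit the media once. This is a simplification for illustration, not how any particular array implements it:

```python
import hashlib

# Toy content-addressed block store: identical blocks are only stored once.
class DedupStore:
    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}   # hash -> stored block
        self.logical_bytes = 0               # what clients think they wrote

    def write(self, block: bytes) -> str:
        key = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(key, block)   # physically stored only if unseen
        self.logical_bytes += len(block)
        return key

    @property
    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
golden_image = b"windows-base-image-block" * 1024
for _ in range(200):                         # 200 identical Windows VMs...
    store.write(golden_image)
print(store.logical_bytes // store.physical_bytes, ": 1 dedupe ratio")  # 200 : 1
```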

Wholeheartedly, this!

There are some pretty juicy SAN solutions on the market.

Yep. HDDs are fine for writing the nightly backup as one gigantic sequential write from production storage. In production, use them only for a surveillance server (= known input data rate), or maybe for that dumb log-writer server sitting in everyone’s rack somewhere.

1 Like

In theory, yes; in practice, SSDs are actually about as likely to fail as HDDs. The biggest difference, though, is that a NAND chip dying on an SSD is irrecoverable, while a head or controller dying on an HDD can almost always be fully recovered.

QLC is only good for write-and-forget type workloads. QLC cells endure only a fraction of TLC’s writes, and even consumer-tier drives in active use should be expected to last 2-4 years; most HDDs that I’ve had have lasted longer than that.

1 Like

First off, repairing physically failed hard drives costs hundreds of dollars, if not low thousands depending on the issues. And there is no guarantee that the head crash didn’t cause irrecoverable damage to sectors of the drive resulting in data inconsistency.

Second, if the loss of any one disk results in the loss of irreplaceable data, you’re doing a lot more wrong than the choice of the type of hard drive you are using. Any important data should be stored on redundant pools or RAID arrays and should also be backed up.

Finally, recovering a failed SSD is absolutely possible in certain circumstances, just like failed HDDs. Just watch Wendell’s video
You did WHAT with the dead SSD? And the data? - YouTube

PS: HDDs and SSDs both have their merits. And SSDs don’t require lasers or microwaves to purposely make them hotter just to pack the data in. HAMR | Seagate US

No one is claiming that QLC write endurance is any good, but for a workload where you are storing large quantities of data that you read often but rarely change, it makes sense. This is exactly the type of workload the OP is asking about. Once you write your media files to a NAS, how often are you changing them? If you’re doing it right, it’s WORM: write once, read many.

I also think your assertion that they won’t last more than a couple of years is exaggerated at best. Samsung 870 QVOs, a consumer drive, are rated for nearly 1.5 PB of writes on the 4TB model. Samsung 870 QVO SATA 2.5" SSD | Samsung Semiconductor Global
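To put that rating in perspective, here’s a quick lifetime estimate; the daily write rate is an arbitrary assumption for a media NAS, not anything from the spec sheet:

```python
# How long does a TBW rating last at a steady write rate?

def endurance_years(tbw_rating_tb: float, writes_tb_per_day: float) -> float:
    return tbw_rating_tb / (writes_tb_per_day * 365)

# 870 QVO 4TB is rated for 1440 TBW; assume 100 GB of new media written per day.
print(round(endurance_years(1440, 0.1), 1), "years of rated writes")  # ~39.5 years
```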

There is absolutely a case to be made that, in certain workloads, QLC-flash-based NVMe drives are a better choice than spinning hard drives. Again, let’s look at a market leader like Pure. This pricing of $384k for 768TB is before any of the discounts or contracts that anyone spending this kind of money on a SAN would leverage to get better pricing.
Pure aims to kill hybrid storage with FlashArray//C – Blocks and Files

Now let’s look at a Lenovo DE-Series-based SAN. This pricing is based on what Lenovo has on their site. The “head-end” of the SAN, filled with drives, would cost about $14k and take up 2U (pricing screenshot omitted).

That’s 216TB. To get to that same 768TB we would need 43 drives, so we would need the controller above, with 3 additional shelves.

So our total is around $70k for an HDD-based offering. This is about $0.10/GB, or about 1/5 the upfront cost of the NAND option. To be clear, this is not an apples-to-apples comparison in terms of features, as the Pure does far more, like deduplication and compression, file storage, and a whole bunch of things that SANtricity does not do.

So in addition to getting those bells and whistles, what else does one get for a 5-fold increase in price? A single Seagate IronWolf 18TB drive will hit at maximum 60,000 IOPS. Seagate IronWolf Pro 18TB Review - StorageReview.com

If we took all 43 drives and built one massive RAID 0, the maximum theoretical read IOPS of our ridiculously unsafe array would be 2.6 million or so, not accounting for overhead.

What about our SSD array? Since Pure doesn’t publish information on their individual super-fancy DirectFlash modules, we can use Intel’s drives as a substitute for this conversation. Intel P5316 SSD Review (30.72TB) - StorageReview.com
All of the sizes are rated for 800,000 read IOPS (spec table omitted).

So our 31-drive array in a silly giant RAID 0 would manage roughly 25 million read IOPS, or just shy of 10x.

And the endurance ratings are pretty impressive, if you ask me (endurance table omitted).

Now, let’s talk about power. The QLC Intel drives have the following power specs (spec-sheet table omitted), and the hard drives have the following (spec-sheet table omitted).

Since there is no “average” power consumption listed for the SSDs, we’ll split the difference between idle and full load and call it 15 watts. With 31 drives, that’s 465 watts, or about the same as a single RTX 4090 :stuck_out_tongue:

For the HDDs we’ll take 8 watts x 43 drives and get 344 watts.

But for the HDDs we are using 1 controller and 3 shelves, each with other components drawing power and losing electrical efficiency with multiple power supplies. So we’ll call this close to a draw.
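Pulling the back-of-envelope numbers above into one place (same assumptions as in this post: vendor spec-sheet IOPS, naive RAID 0 scaling, and rough per-drive wattage):

```python
# Back-of-envelope comparison using the figures quoted above.
usable_gb = 768_000
hdd = {"drives": 43, "iops": 60_000,  "watts": 8,  "capex": 70_000}
ssd = {"drives": 31, "iops": 800_000, "watts": 15, "capex": 384_000}

for name, a in (("HDD", hdd), ("SSD", ssd)):
    print(f"{name}: {a['drives'] * a['iops'] / 1e6:.1f}M theoretical read IOPS, "
          f"{a['drives'] * a['watts']} W, ${a['capex'] / usable_gb:.2f}/GB")

# Roughly what the post arrives at: ~10x the IOPS, similar power, ~5x the price.
```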

So what does buying SSDs get you? 10x performance, in 3U vs 8U of space, and about the same power consumption, all for about 5x the cost. Doesn’t sound like a raw deal to me. It sounds like an option worth considering for the right workload.

TL;DR: Don’t tell people not to buy flash.

3 Likes

Do any of the cheap consumer QLC drives do wear leveling by relocating used blocks in the background (as opposed to simply round-robining around available blocks)?

Umm, what scale are we talking about? If you have a single HDD with a dead head, just get another one and swap the head over; it costs like $20 to do that. If you have larger issues, well yeah, it can cost thousands, but if it’s worth getting the data back for that money, then you’ve gone waaay wrong with your backup strategy. At the same time, if you make the same mistake with an SSD, the data is gone for good.

It very much depends on the issue you’re facing. Most HDD issues are recoverable without huge expense, although a professional will probably do it better. At the same time, the most common SSD hardware failures are a dead NAND chip or a dead controller: if it’s the controller, just replace it; if it’s a NAND chip, you’re out of luck unless it’s an enterprise drive with built-in redundancy…

Good, now go check the 1TB drives; most of them will be out of writes in 2 years with COD:MW and a few other games like that updating regularly. But tbf, Samsung has some of the better QLC rn.

My point is exactly that: get the right tool for the right workload.

They are supposed to, but nobody is really testing how well they manage.

1 Like

I’m just going to say it again for those in the back with bad hearing: hard drives having recoverable data is irrelevant when you have to spend a minimum of a thousand dollars or do a platter swap on your dining room table. Louis Rossmann is the cheapest I know of, and he still starts at like $900 USD (plus shipping costs and damage). You are much better off as a consumer assuming that you won’t be able to recover any data from either in the event of a failure, and instead planning in fail-safes and redundancies. A storage solution that costs thousands to recover isn’t a very economically feasible one.

2 Likes

I agree that flash is preferable, if not inevitable, for a bunch of NAS use cases in 2022. I’d just like to draw attention to a type of issue that is sometimes seen with SSDs but typically not with HDDs: more than one drive failing in the same instant. Here is a recent example from another thread:

Two drives in a mirror that fail at the exact same time. One known cause of this type of failure is loss of power at the wrong instant (on drives without PLP), though there seem to be other, more obscure causes too.

Regardless of cause, it’s a failure that pool redundancy won’t guard against. RINAB and so on and so forth, but it’s relevant to mention given this thread’s discussion. Whereas HDD failures are usually due to stochastic wear and tear with a known and estimable (though not predictable) progression, SSDs have that plus some additional unknown unknowns (at least the cheaper models).

2 Likes

OP wants affordable / not too expensive Flash drives.
But also wants enterprise.

I love flash, and the price is improving over time, but it seems to go against standard hardware trends that older/used parts are not dropping in price as fast as new, cheaper parts are coming to market.

Like, some U.2 SSDs are trickling out, but they are not as drop-in as SAS and SATA drives.
Newer Gen 3 NVMe drives hit the price/capacity sweet spot at the consumer level.

All drives will die, rust or flash.
OP requested flash, regardless of whether rust might fit better.

For WORM needs, I also think consumer drives, redundant + backed up, would be better; perhaps QVO or similar from Samsung?
Crucial would be my next go-to, lastly Kingston.
I would not go WD or Seagate, as their offerings are overpriced.

But I only know the consumer side, and generally buy flash new.

I have not seen compelling prices on used enterprise drives, except PCIe add-in flash cards, which scale even worse than U.2/NVMe.

Even though OP specified enterprise, I am not sure the extra endurance will be tested in this use case. But perhaps, if OP needs scratch space to work on projects, a U.2/NVMe mirror would be good to work off of before writing out to QLC for warm storage?

1 Like

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.