NAS HDDs vs regular HDDs

Anyone have any opinions on using NAS drives vs regular HDDs for a home NAS server? If I am not hammering the drives all day, is there any reason to buy the more expensive drives? I have heard that stacking drives close together can cause long-term issues with the drives rattling each other, and that NAS drives are designed to handle that better. Specifically, the drives I am looking at are the WD Blue 6TB and the WD Red 6TB. The former currently has a $90 price tag, and the latter a $100 price tag. I intend to put them in a RAIDZ2 with 7 drives total. If it matters, they are going into a more traditional case as opposed to a NAS- or server-oriented case, specifically this one: Cooler Master N400 ATX Mid Tower Case (NSE-400-KKN2) - PCPartPicker.

After doing some more research, I learned that both of those drives are SMR and thus don’t work well with ZFS. So I am now looking at the WD Red Plus equivalents, which are $110 per drive. They are also the cheapest non-SMR 6TB drives I could find, so there really isn’t another cheaper non-NAS-rated drive I am considering atm. I am still curious about everyone’s general thoughts on the subject though. :slight_smile:
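
For reference, a quick back-of-the-envelope check of that layout (just a sketch, using the $110 Red Plus price above):

```python
# Quick sanity check of the planned pool: 7 x 6TB drives in RAIDZ2.
drives, drive_tb, parity = 7, 6, 2
price_per_drive = 110                       # WD Red Plus 6TB, per the post above

usable_tb = (drives - parity) * drive_tb    # parity drives don't add capacity
total_cost = drives * price_per_drive
print(f"{usable_tb} TB usable (before filesystem overhead), ${total_cost} total")
# -> 30 TB usable, $770 total
```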


Hmmm, so 30TB usable, 2 drives of redundancy over 5 data drives, and a $700 budget growing towards $800.

Hmm, how much do you care about read iops?


I’m asking because you might want to consider newer bigger enterprise drives instead, and then rely on some kind of raidz expansion once you run out of space:

e.g. 14TB exos from Disk Prices (US)

If you get 3 drives and do raidz1, you get 28TB usable for $630 … if you do 4 drives in raidz1, you get 42TB usable for $840, but that’s only 1 disk of redundancy, which is theoretically slightly less reliable than raidz2.
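
Rough cost per usable TB for the layouts being compared, using the prices quoted in this thread (a sketch; street prices obviously drift):

```python
# Cost per usable TB for the layouts discussed above.
# (drives, TB per drive, parity drives, price per drive)
layouts = {
    "7 x 6TB raidz2 (Red Plus)": (7, 6, 2, 110),
    "3 x 14TB raidz1 (Exos)":    (3, 14, 1, 210),
    "4 x 14TB raidz1 (Exos)":    (4, 14, 1, 210),
}
for name, (n, tb, parity, price) in layouts.items():
    usable = (n - parity) * tb
    cost = n * price
    print(f"{name}: {usable} TB usable, ${cost}, ${cost / usable:.2f}/TB")
```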

The 14TB drives will be more useful to you a few years down the road, they come with a longer warranty, and you might save a bit of electricity along the way.

6TB drives, beyond the whole SMR aspect, are relatively old tech, and it’s hard to predict when they were actually manufactured, how long they’ve been sitting, and where.

The 14TB enterprise-binned drives such as those Exos, on the other hand, probably aren’t older than a year.



So I am relatively new to this, but my understanding was that I won’t be able to expand a raidz vdev by adding more drives to it, so I hadn’t actually considered expanding the system. I also hadn’t looked into the price difference for just going with larger capacities. Speed isn’t much of an issue. I did have some concerns about losing a drive and then effectively having a RAID0 until the array was rebuilt, but on the other hand, I do like to save money…

The Exos X16 14TB drives aren’t the newest; they’re reasonably new, but not so new that there’s no track record, and you get about a 2% chance of 1 of the 3 drives failing (according to Backblaze), which is about average.

The odds of 2 drives failing are much, much smaller, but it’s not without tradeoffs.

Odds of something else screwing up your data are about the same no matter how many drives you have.
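
A back-of-the-envelope way to see why (the per-drive failure rate and the resilver window below are assumptions for illustration, not actual Backblaze figures):

```python
# Back-of-the-envelope failure odds. The per-drive annual failure rate (AFR)
# and the resilver window are assumptions for illustration, not Backblaze data.
afr = 0.007            # assumed ~0.7% annual failure rate per drive
n = 3                  # drives in a 3-wide raidz1

p_any_one = 1 - (1 - afr) ** n
print(f"P(at least 1 of {n} drives fails in a year): {p_any_one:.2%}")  # ~2%

# Chance that a second drive dies while the first is being resilvered,
# assuming a (hypothetical) 2-day resilver window and 2 surviving drives.
resilver_days = 2
p_second = 1 - (1 - afr) ** (2 * resilver_days / 365)
print(f"P(losing a 2nd drive during the resilver): {p_any_one * p_second:.6%}")
```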


That’s true today but there’s hope, depending on when you’d expect to run out: RAIDZ Expansion feature by ahrens · Pull Request #12225 · openzfs/zfs · GitHub

… Needs a spreadsheet with different scenarios - incl. factoring in the cost of money and cheapening of drive space over time.
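
Something like this toy version, maybe; the drive prices are the ones from this thread, while the 10%/year price decline and the 2-year expansion point are purely made-up assumptions:

```python
# Toy version of that spreadsheet: buy everything up front vs. start smaller
# and add a drive later. Prices are the ones quoted in this thread; the 10%/yr
# price decline and the 2-year expansion point are made-up assumptions.
price_6tb = 110          # WD Red Plus 6TB
price_14tb = 210         # ~$630 / 3 for the 14TB Exos
annual_price_drop = 0.10 # assumed
years_until_expand = 2   # assumed

cost_a = 7 * price_6tb                                   # 7 x 6TB raidz2, 30TB usable
price_14tb_later = price_14tb * (1 - annual_price_drop) ** years_until_expand
cost_b = 3 * price_14tb + price_14tb_later               # 3 x 14TB now + 1 later, 42TB usable

print(f"A: 7 x 6TB raidz2 up front       -> 30TB usable, ${cost_a:.0f}")
print(f"B: 3 x 14TB raidz1, expand later -> 42TB usable, ${cost_b:.0f}")
```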


Thank you for your input. I will take this into consideration.

Just for regular HDD vs NAS HDD


You probably know this already @Tex , but you can also increase storage by replacing all the drives in a RAID with larger ones. For instance, I have 8 x 4TB in a RAIDZ2; if needed, I’ll replace them all with 8TB drives, and the usable space goes from 17.5TB to 35TB (20% knocked off both to avoid performance issues).
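
Those figures line up if you count 6 data drives, convert TB to TiB, and knock 20% off; a quick sketch (the exact accounting is my guess, but it lands on the same numbers):

```python
# Usable-space estimate for an 8-drive RAIDZ2 before/after swapping in bigger
# drives. Counting 6 data drives, converting TB to TiB, and leaving 20% free
# is a guess at how the 17.5 / 35 figures above were derived.
def usable_tib(n_drives, drive_tb, parity=2, headroom=0.20):
    data_tb = (n_drives - parity) * drive_tb   # parity drives don't add capacity
    data_tib = data_tb * 1e12 / 2**40          # decimal TB -> binary TiB
    return data_tib * (1 - headroom)           # keep ~20% free for performance

print(f"8 x 4TB RAIDZ2: {usable_tib(8, 4):.1f} TiB usable")   # ~17.5
print(f"8 x 8TB RAIDZ2: {usable_tib(8, 8):.1f} TiB usable")   # ~34.9
```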

You can also grow the pool by adding another identical disk set as a new vdev. For example, an existing 2-drive mirror can be combined with another 2-drive mirror… though the knowledgeable @risk would have to check me on that.

Just my personal preference, but I tend to go with IronWolf these days, or Toshiba for larger capacities (14TB).

Also, I’ve broken up my data into cold and active sets and put them on two different servers. The active server is ‘always on’ and lower capacity (30W), while the cold server is much larger (110W) and only turned on every few weeks to be scrubbed and to receive snapshots via replication from the active server. That saves a lot of money on electricity.
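
To put a rough number on that, with the wattages quoted above and an assumed $0.15/kWh rate:

```python
# Rough yearly electricity cost of keeping each box powered on 24/7.
# The $0.15/kWh rate is an assumption; the wattages are the ones quoted above.
rate_per_kwh = 0.15
hours_per_year = 24 * 365

def yearly_cost(watts):
    return watts / 1000 * hours_per_year * rate_per_kwh

print(f"110W always on: ${yearly_cost(110):.0f}/yr")   # ~$145
print(f" 30W always on: ${yearly_cost(30):.0f}/yr")    # ~$39
# Powering the 110W box up only for occasional scrubs and replication keeps
# most of that ~$105/yr difference.
```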

I’ve got a third server (low power) that then receives snapshots via replication from the larger server. Because I’m not into tech for a living, I plan on having a 3 x 14TB RAIDZ1 as a fourth destination for snapshots via replication.

Yes…it’s a rabbit hole! :slight_smile:

