Least noisy reliable HDD

Henlo,

Really need to upgrade to a second NAS since I cannot fit more drives in my current enclosure, and unfortunately I have to run the whole thing in my office at home.
Is there any way to get almost enterprise reliability and reduced noise these days, or will it just not matter? Minimum size would be 8TB per drive, but I would prefer larger drives. Everything will be thrown into a QNAP TS-673A which I “found at the side of the road”, so cooling is “okayish” I guess.

Any suggestions are welcome.

1 Like

I’d go for large-capacity drives with low RPM, something like the WD Red Plus 12–18TB. They’re quieter than enterprise drives like the Exos or Ultrastar but still solid for 24/7 NAS use.

Also, throw in some Noctua fans if your unit allows for fan swaps; it makes a surprising difference in both acoustics and airflow. With some airflow tuning and careful drive choice, you can get pretty close to enterprise-grade reliability without the jet-engine effect.

1 Like

Exos 20

2 Likes

These kinds of NAS, which are basically cages with hard drives in them, will generally be louder than a good case. They might rattle a bit, and there is not much material around the drives to dampen the noise. The quality of the case and properly mounting everything can be as important for noise as the drives themselves.

Hardwareluxx does reviews including noise measurements. Those don’t tell you the whole story, because the ‘kind’ of noise can be as important as (or more important than) the volume in dB. It’s in German, but that should not be an issue in 2025. Sadly they don’t list all measurements in all reviews, so you might need to search through it a bit to piece everything together.

There aren’t too many 5400rpm drives around any more, and they cap out at 8TB as far as I know. But they are significantly quieter. 7200rpm goes up to 24+ TB. Fewer, larger drives will be better than more, smaller ones (unless 5400rpm is okay). Helium-filled drives are also supposed to have noise, power, and heat advantages.

Any modern enterprise drive should be plenty reliable (Exos, Toshiba MG, Ultrastar, or the WD Red Pro/Gold). The quietest of that bunch does indeed seem to be the Exos X20. The IronWolf Pro is even quieter, but much more expensive for what is basically the same hardware. I believe they are identical to the Exos, just with firmware tuned more to home/NAS use.

2 Likes

Agreed, but let’s say I put this thing on eBay and look for something else: then I most likely will not achieve the same power efficiency, or I will pay an extreme premium for it.

Ironwolf Pro “NT” models. 20/26 dBA idle/seek

3 Likes

Hard drives, even the 7200rpm ones, don’t make as much noise as they used to in the bad old days.

I recently fired up an old WD 1000BB IDE drive, and oh boy, the whine. I had totally forgotten about that loud whine!

I have lived with a large quantity of 4TB WD Red (pre-Pro, back when they were CMR) 5400rpm drives, 4TB 7200 RPM HGST SAS drives, 10TB 7200rpm Seagate Exos x10 and later 16TB 7200rpm Seagate Exos x18 SATA drives.

The WD Reds were probably the quietest for within-earshot operation, but the Seagate Exos drives were not bothersome either.

The loudest of the bunch were the HGST drives, but not in the rotational-whine sense. They have this habit of frequently emitting low-grade read noises even when there is no disk activity. I gather the drives are using idle time for error-correction algorithms or something. The Exos drives do this as well, but they do it less, and are quieter when doing so.

My take is that while 5400rpm drives are slightly quieter these days, the difference is much smaller than I’d expect, and I probably wouldn’t even bother going 5,400rpm for the noise. I’d just stick with 7,200rpm.

They might measure higher, but at idle (thus excluding head movement) it is a low-grade white noise that is - at least to me - not very bothersome. At least not compared to the bad old days.

If you really don’t want to ever hear the drives, stick the NAS in another room or in a closet.

If you do decide to go with WD Reds, make sure you get the “Pro” models. After that scandal a few years back when WD silently switched their Red drives from CMR to SMR, they relented, and now regular Reds are SMR and “Pro” drives are CMR.

In most cases you really want the CMR variety.

1 Like

Exos, IronWolf, IronWolf Pro, MG08, MG09, MG10, N300, N300 Pro, Red, Red Pro, and Ultrastar are all enterprise lines with many drives idling at 20 dB(A). Just see the datasheets; generally 20 dB(A) means 12+ TB, though IIRC there are a few 10s. Seagate, Toshiba, and WD all rate operating noise differently, and in my acoustic spectrometry experience drive noise structure varies a good bit with workload. Different drives of the same model’ll put up different harmonic structures on the same workload, too.

So… it depends. I haven’t worked with recent Toshiba drives, but out of Seagate and WD I’ve had the best overall luck with the IronWolf Pro. The Exos 2X14 and 2X18 are known for being quiet, and the 2X18 I run is mostly quieter than the 18 TB IronWolf Pro. But get the 2X18 onto the wrong workload pattern and it’ll put nails into your skull. Also, 12 W active is a lot to ask of small-box NAS thermals even with better airflow than the 673A, and last I checked QTS dual-actuator support looked pretty iffy.

1 Like

Since no one has said it yet…
Go SSD.
You can buy up to 8TB SATA SSDs, 30TB SAS SSDs, and 122TB NVMe drives.

Silent, with up to 3 DWPD (higher endurance than enterprise HDDs).

1 Like

See my comments from yesterday here: Advice on building a quiet NAS using NVMe and bifurcation.

tl;dr: I have a stack of Seagate Exos and WD Gold drives (manufacturer recerts from Server Part Deals) in a Fractal Design Define 7 case, and the noise from the drives is nearly inaudible.

1 Like

SSDs are great and more responsive, and long gone are the days when write endurance and reliability were huge concerns, but there are still two downsides in an application like this.

1.) Cost. For mass storage, nothing beats the price per TB of hard drives.

OP’s minimum disk size is 8TB, and there are six bays in his QNAP that I presume he is looking to fill.

The cheapest 8TB SSDs I can find are $550 each, so six of these would be $3,300. And these are consumer drives, which really aren’t intended for or heavily tested in RAID applications, not enterprise ones, which would cost even more, often $1,600 or more each.

So, it is certainly possible, and they would perform great but man, it’s going to cost a pretty penny.

By contrast, an 8TB enterprise hard drive is likely going to cost you just over $200, and if you want to take the risk and go consumer (like with those consumer SSDs), you can get them for just over a hundred bucks apiece.

So we are talking ~$600 to fill that QNAP vs $3,300, or more.

2.) Risk of data degradation if SSDs are left unpowered for long periods of time (six months to a year).

During normal operation the controller monitors cell charge levels and re-writes cells that are getting close to voltage thresholds when needed. If the drive is disconnected for long periods of time, it is not doing this, which can result in corruption. Note that this applies to all NAND flash, even that in USB sticks and the like. They are not the most reliable long-term archival solution for data storage unless they are left connected to power.

1 Like

Generally, only SAS and a subset of enterprise NVMe SSDs perform cell-refresh operations when power is applied (with some notable consumer exceptions like the MX500). Most SSDs will happily let the data degrade over the course of months to years even when powered up.

2 Likes

Are you sure about that?

If this were the case, the known degradation rate of NAND cells, especially when used in TLC and QLC configurations where the voltage ranges per state are smaller (due to cramming many more bits per cell), would result in catastrophic levels of corruption in most consumer applications where people write their data and let it sit, often for years.

…yet we don’t see that.

Personally I have yet to try any QLC drives, but I have several TLC (and older MLC) drives, and they have retained data for years without manual re-writing or any sign of corruption.

Conventional wisdom would have it that they would start degrading after 6 months to a year if the controller were not in there refreshing the cells automatically.

I was under the impression that just about every modern SSD controller - consumer or enterprise, and even the ones in decent USB sticks - refresh their cells automatically when powered on if they get too close to the boundaries for comfort.

It seems to be the general trend the past several years, but I can’t say it’s a hard and fast rule; it’s possible this could change in the future too.

The issue was sort of explored in this thread (although not as deeply as I’d like):

It’s somewhat hard to model the degradation rates of the NAND because they change wildly with wear level; also, 3D NAND has some strange effects on charge decay rates because of the floating body (detailed in post #14).

1 Like

Yeah, no, this is an absolute no-go for me; I will just dampen a standard case like crazy. I will never put important data on consumer-grade SSDs, and the price per TB is terrible. So thanks all for the input; it seems that going with a custom build, throwing everything into a damped case, and taking care of vibrations will be key. So now I am looking at boards and CPUs…

Thank you for linking that. When I have some time I will have to read up on it.

That makes sense, but still something isn’t adding up here for me.

If nothing were being done at all on the controller side to combat decay, we should be seeing catastrophic levels of corruption on SSDs out in the field, and we just aren’t.

My own personal sample size is pretty small compared to the industry as a whole, but much larger than the typical user’s. (I have something like 40 SSDs in active use right now, and over the past 15 years the total is well over a hundred.)

I did have some early corruption on OCZ drives back in the day, but those were known to be terrible in that regard. Ever since I stopped buying OCZ drives (~2012?) I have never seen any corruption on any SSD I own, despite them containing old data, and some of them (notably 128GB Samsung 850 Pros, initially used as cache devices and later in laptops) have been beaten to death with writes and still retain data without corruption for a long time.

To be clear, I have had two SSD issues since discontinuing use of OCZ drives, but neither of them were corruption. I had a Sabrent Rocket 4 which just bricked itself between two boots once, and I have one Samsung 980 Pro which I used in a ZFS boot mirror with another identical drive, which would randomly just disappear from the host once every 6 months of uptime or so, and stay missing until power cycled.

Other than that I’ve never had an SSD issue since 2012 in over a hundred drives. If cell degradation were not actively mitigated, I really ought to have seen at least some corruption.

Now, I know, maybe I just missed some of the corruption, because I don’t constantly monitor most drives for bit rot, but in many cases these SSDs have been in ZFS pools, and if there were corruption it would have resulted in checksum errors, and I haven’t seen those either.
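
(For reference, this is how I’d surface that kind of silent corruption on a ZFS pool: a scrub re-reads every allocated block and verifies it against its checksum, and any errors show up in the status output. The pool name below is just a placeholder.)

```sh
# Re-read every allocated block and verify it against its checksum
zpool scrub tank

# Afterwards, check the per-device READ/WRITE/CKSUM error counters and scrub result
zpool status -v tank
```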

For what it is worth, I’d highly recommend using some form of software-defined storage pool solution rather than a hardware RAID card. It tends to be more reliable.

If you are comfortable with the Linux/Unix command line, you can roll your own using Linux or FreeBSD and OpenZFS. If you are not quite as comfortable with this, TrueNAS (the old FreeNAS) has an appliance OS release with a web management interface which is pretty nice and usable.
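
To give a rough idea of the roll-your-own route, a minimal OpenZFS setup is only a few commands. The pool name, raidz2 layout, and device paths below are placeholders for illustration, not a recommendation for any specific setup:

```sh
# Six-drive raidz2 pool (two-disk redundancy); ashift=12 suits 4K-sector drives.
# Use stable /dev/disk/by-id/ paths so the pool survives device renumbering.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
    /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
    /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6

# A dataset with lightweight compression for general file storage
zfs create -o compression=lz4 tank/storage

# Sanity check the pool layout and health
zpool status tank
```

TrueNAS does essentially the same thing under the hood, just behind the web UI.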

The “most right” way to do it from a reliability perspective is to use proper server boards and enterprise SAS HBAs in a server case with a backplane that supports hot-swapping, but as a starter you don’t really need to go that nuts. (You can gradually migrate there over time like I have.)

From a platform perspective, however, I recommend making sure whatever you get supports ECC RAM.
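
If you want to double-check that a running Linux box is actually using ECC (boards and firmware settings can silently leave it disabled), here is one quick way to look; it’s just a spot check, not the only method:

```sh
# "Multi-bit ECC" under the Physical Memory Array section means ECC is active
sudo dmidecode --type memory | grep -i "error correction"

# If the EDAC driver is loaded, corrected-error counters are exposed here
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```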

A couple of years back I upgraded my router build using a Supermicro X12STL-F board and an 11th Gen Rocket Lake-based Xeon E3 CPU. It wound up being a pretty cost-effective solution for me, and I remember thinking that if I put a SAS HBA in it, it could be a nice lightweight ZFS box.

Don’t be afraid of used server pulls from good recyclers on eBay. It’s a great way to get a fantastic deal on enterprise hardware, and I have never had an issue. The hardware tends to be well taken care of (compared to used consumer stuff, which is often mistreated by idiots).

But there are many many options, and configuring and building them is part of the fun!

Just a slight caution: there are many “barebones used servers” out there that are terrific deals for what you get. Just keep in mind that these are business machines that are usually at home in server rooms. The drive noise is going to be the least of your concerns in those things. The fans alone can often wake the dead.

I got a barebones server like this years ago based around a Supermicro SC846 case, and it has been great, but it took some modding to make it “home compatible” from a noise perspective, and even so I keep it in a rack far away from my office.

I once got an HP DL180 G6 back in 2014 and attempted to do the same, and I utterly failed. That thing sounded like a jumbo jet taxiing on a runway. With it in my basement at the time, with two doors closed in between, the noise was still loud enough to be bothersome in my bedroom two stories up :sweat_smile:

Seriously, not kidding about the jumbo jet thing. Watch this video to see what I mean:

(Not my video or my exact server)

In my case, because I populated one of the PCIe slots with a card (a SAS HBA) that the server’s BMC/IPMI didn’t recognize, it went into 100% fan mode permanently to make sure everything stayed cool enough, and it had 8x little 80mm fans that went up to 18krpm, I think. It was totally nuts.

So, you are probably going to want to build something yourself when it comes to managing noise.

My backup workstation in my office has 6 HGST 7200rpm SAS drives in it, and it is pretty good from a noise perspective. I built it into an OG Phanteks Enthoo Pro case. But there are many options out there.

For what it is worth, here is what it sounds like with 6x 4TB HGST drives in it. Most of the noise is fan noise.

This one is only sporadically powered on, though, so I didn’t get obsessive about noise. If you take a noise-focused build approach you can probably do better.

Also, of course I realize that with no baseline, it is impossible to tell exactly how loud that is. For a qualitative description, if I am sitting right next to it with the cover put back on, it is audible, but not really bothersome. From across the room ~10ft away it is only borderline audible.

In general, apart from occasional head seeks, I have found that modern hard drive rotational vibration is pretty low-key. Usually it’s the fans needed to keep the hard drives at reasonable operating temperatures that create most of the noise.

Good luck. I’m curious to see what you come up with :slight_smile:

Since I am also trying to go low power, enterprise boards are not really an option because of the management chip. There is no reason not to run AM4 consumer boards with PRO CPUs and ECC RAM.
With the relatively low load, even running 24/7, I do not see any issues with that. Since I will be running ZFS, I do not care for RAID controllers and all that jazz, as you already mentioned.

edit: for virtualisation I have a dedicated system.

That is certainly a good alternative, though I think you are making a little too much of the power use of the BMC. They are usually very low power and only add single-digit watts in most cases, in my experience.

And they are very convenient. I have come to love them so much that I wish all of my systems had them.

Single-digit watts is enough to bother me, since energy and heat are a concern for me… This thread was about drives, and it seems the consensus was to use consumer-grade NAS drives over enterprise for reduced noise, but the chassis will matter a lot as well.