Shopping for NAS drives, what to look out for?

Hey there,

As the title says, I’m currently shopping for NAS drives and I need some advice because I’m not into the HDD space at all. In the past I just bought whatever was cheap and would hold my data, but those were single drives bought as needed, not drives for a NAS/RAID.

So to start off, these drives will be going into a self-built NAS, probably FreeNAS/TrueNAS, OpenMediaVault, or (less likely) a custom Linux install with btrfs or OpenZFS.
The files are mainly going to be video files (largely my own Blu-ray rips and other media) and music (mostly a FLAC collection, some MP3s here and there).

I know that SMR drives are not a good idea, but that’s about the end of what I know about NAS drives.
I know there are NAS-specific drives that are better suited due to vibration resistance and other firmware optimisations, which is why I’m looking into those primarily.

While looking up drives I did stumble on some datacenter drives that seem oddly cheap, though. From this topic it seems the consensus is that if the datacenter drives are cheap(-ish), they can be used in NASes just as well.

Just for comparison, I saw these 2 drives:

  1. Toshiba 14 TB (MG07ACA14TE) starting at 307 Euro (was 311 yesterday) = 21,90-ish per TB
  2. Seagate IronWolf NAS 10TB (ST10000VN0008) starting at 306 Euro = 30-ish per TB
  • virtually the same price, but 4 TB difference in size
  • both are CMR, both are 7200 rpm, both have 256MB cache, both are helium filled
  • the Toshiba though is rated for 2.5x the MTTF and has 2 more years of warranty

What I don’t understand though is that the Toshiba is listed as “4K with emulation (512e)”, and I don’t know what that means in practical terms.
I know that is the sector size (which from my understanding limits the maximum number of files, since even files smaller than 4 KB take up at least one sector?). With that limited knowledge, and seeing as my files won’t be smaller than 4 KB for the most part, it seems this won’t matter. But I don’t know enough to say what the smaller emulated sector size means in practical terms for the file system.

The second thing is the Persistent Write Cache, but from the name alone I would assume that just means the cache remains intact in case of a power outage and the data will be written the next time the power turns on.

So TLDR: Why is the Toshiba so cheap in comparison? It just doesn’t make sense to me.

And are there any other things I should look out for?
Note these are probably not going to be the final drives (sizes) I will go with, but they are very conveniently close in price for comparison’s sake.

Thanks in advance!

2 Likes

I have 2x WD Red SMR (I’m one of the people that got screwed over by WD, unfortunately), 2x WD Red CMR, 4x Seagate IronWolfs, and I used Toshiba NAS drives for a few weeks.

From what I observed, the main difference is that the lower-capacity IronWolfs run at lower RPM, consume less power and are quieter, making them suitable for those tower NASes that live in your living room. The Toshiba ones are rattly and loud as HELL, spin at 7200 RPM even at low capacities, and if you don’t have mounting hardware with anti-vibration inserts you’re going to have a bad time.

IronWolfs also have 3 years of data rescue included with the warranty, which might make them more costly.

In my opinion, just avoid Western Digital like the plague. They’re ripping you off by making you pay more for CMR drives, and they will not honor your warranty if you use a NAS drive in a NAS and it fails, because, and I quote what I got in my support ticket when trying to RMA: “WD Red internal hard drives sometimes produce noise or report errors when used in third party NAS devices, without actually being defective.”

Also, if you are buying multiple drives at once, get them from different distributors to avoid getting drives from the same manufacturing batch. This reduces the risk of all drives failing at once in case you get drives from a bad batch.

5 Likes

Yeah, quieter is what I saw too, hence the living-room thing, but from what I’ve seen all the higher-capacity ones seem to be 7200 rpm. If there were a 5400 rpm one that I could get, I’d consider that though.

Thanks for that, that’s definitely a consideration for me because they are going to have to live in my living room…

Was already doing that before the SMR debacle, but thanks for re-affirming that :grinning_face_with_smiling_eyes:
That support experience sounds fun…

I was thinking about this actually, and would using different brands/models make sense too? I’d have concerns regarding compatibility though (I mean it’s all SATA, but I’m referring to different write-behaviours etc). So any thoughts on that?

2 Likes

Not all vendors’ drive pricing has fully recovered from the XCH (Chia) craze.

I like the WD warranty handling. You go online, enter a serial, and they ship you a new drive; you use the same box to ship back the old one. WD Gold 16 TB is the sweet spot. WD Gold 18 TB was 18 EUR/TB before the craze; somewhere around 16-20 EUR/TB should be the normal pricing.

Seagate enterprise drives (Exos) have weird warranties (weird start dates and registration stuff), and retailers need to be on board.

Helium filled is good - more efficient and more durable.

Toshiba benchmarks lower for mixed workloads (MG07ACA and MG08ACA). They’re closer to their usual price at the moment.

If I were buying two drives, I’d take WD Gold, based on personal experience. If I were buying 5-10 drives, I’d consider Seagate Exos or Toshiba.

Based on your use case, price per TB and density sound like the things to optimize for. I’d start with 2 drives and btrfs in RAID 1, then add a 3rd/4th/5th/6th drive to the array as time goes on (see the sketch below). I wouldn’t bother with ZFS for your use case; too little data (<100 TB, I’m guessing).
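For reference, a minimal sketch of that growth path, assuming whole-disk btrfs and made-up device names/mount point:

```bash
# Create a two-drive btrfs RAID1 (data and metadata both mirrored)
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
mount /dev/sda /mnt/tank

# Later: add a third drive and rebalance so RAID1 spans all devices
btrfs device add /dev/sdc /mnt/tank
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/tank
```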

1 Like

Yeah, they’re pretty quiet. I got 4x 2TB IronWolf drives in my DIY server and the whole thing is quiet.

Go with the same model if possible for maximum compatibility. Different models of drives are not going to be exactly the same capacity. When I added two more drives into my pool at a later date, they had a gig more than the previous two drives.

I actually had a bad experience with WD warranty handling. At my previous job we bought two WD Reds in the EU for a customer, but when they failed at a later date the serials were registered as US drives, and that caused issues trying to get an RMA on them (you need to ask support to change the region, and then after a few days you can try to RMA again). Same crap with my current WD Reds, where they’re labeled as US drives even though they’re from amazon.co.uk.

2 Likes

If you use ZFS, you’ll want to manually set ashift=12 (2^12 = 4096 bytes) when creating a pool (it cannot be changed after the fact). This is because 512e means the drive lies about its sector size for compatibility, and ZFS has the horrible default of trusting the lie, which bites a lot of people with needless read/write amplification.

Basically, 4096 byte sectors is the real size, so prefer that when possible.
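A minimal sketch of what that looks like, assuming a two-drive mirror and made-up pool/device names:

```bash
# Create a mirrored pool, forcing 4K (2^12 byte) sectors
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Verify what the pool actually got
zdb -C tank | grep ashift
```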

2 Likes

That’s what I thought, but what is the 512-byte emulated sector size used for then? Why emulate smaller sectors than are actually available? I don’t really get the point of that feature (and I say feature because the same line of drives exists without it).

Yeah, that sounds reasonable. Originally I was planning on getting 4 drives right away and going RAID5-ish, but I’m not really sure about that, especially since 4 drives of a reasonable size for a NAS would land me at a roughly 4-digit price…

The thing with btrfs is: is there a NAS system that I can just install, similar to FreeNAS, but based on btrfs? I would like a couple of plugins if possible, mainly a BitTorrent client (probably Deluge), NextCloud (not right away but later on), and Jellyfin (although I know that is not available as a native FreeNAS plugin yet). So maybe go CentOS/Rocky (just because I’m used to Fedora) from the start? Not sure what to use as a management interface in that case though; Cockpit?

Check out what Backblaze use.

They publish their reliability stats and buy huge numbers of off the shelf drives for their storage pods so have detailed metrics.

I don’t have direct experience with it, but my understanding is that older hardware (like raid cards) and various software were poorly coded to assume 512 byte sectors, and are unable to handle 4K sectors properly. Basically 512 byte emulation was also meant as a transition step to 4K. Older operating systems without the proper drivers loaded may not even be able to see 4K native disks.

But frankly I don’t really understand the use case of this “feature” in the year 2021 either, because emulating 512-byte sectors on top of 4K actual sectors results in read-modify-write amplification, which fucks over performance in a big way. That makes it a poor idea anyway for things that really need 512-byte sectors; the real solution is to upgrade the hardware and/or software. Conversely, it also causes havoc on newer systems that can’t tell it’s really 4K underneath.

I wish this nonsense would just go away. It wouldn’t be such an issue if you could use a utility to force the drive to report one way or the other, but that’s not possible outside of a few datacenter SSDs.
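If you want to see what a given drive reports, a quick sketch (the device name is an example):

```bash
# Logical vs. physical sector size; a 512e drive shows 512 / 4096
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC

# Or per drive, via the SMART identity info
sudo smartctl -i /dev/sda | grep -i 'sector size'
```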

Note that flash-based storage generally does fine with 4K or 8K (ashift=13) sectors. The drives are fast with lots of parallelization, and have enough black-magic optimization going on that you typically need to benchmark to see any differences. Underneath, they are speculated to use 16K sectors or larger, or even weird non-power-of-2 sizes. Manufacturers generally refuse to say, and it doesn’t matter too much since they are meant to work best with 4K sector formatting for most purposes.

2 Likes

If you’re used to Fedora and Cockpit, use Fedora and Cockpit.
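Getting Cockpit going on Fedora is about this much (a sketch; it serves its web UI on port 9090):

```bash
sudo dnf install cockpit
sudo systemctl enable --now cockpit.socket
# then browse to https://<your-nas>:9090
```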

I come from Debian/Gentoo/Arch/…, and I’d do Docker, and inside Docker I’d do Portainer as a Docker management UI (e.g. linuxserver.io for containers). I’d manage the base system over ssh (storage/accounts/crons/postfix for notifications). Looking at Cockpit screenshots, it seems useful for a base OS: keeping track of updates and telling you the status of systemd services and things.
The main thing you get from Docker containers vs. just running things on the base OS is that you get to decouple container updates from the OS and from each other (reduces the probability of issues). A Portainer sketch is below.
I don’t know why you’d go with CentOS/Rocky over Fedora for this; it’s just some disks and Samba and Docker on a machine at home.
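For what it’s worth, standing up Portainer CE usually looks something like this (a sketch; names follow the current Portainer docs, which serve HTTPS on 9443):

```bash
docker volume create portainer_data
docker run -d \
  --name portainer \
  --restart unless-stopped \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```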

1 Like

I know their stats but I’m honestly not a big fan of those. They use drives way out of spec so their usefulness is questionable at best IMO.

Ah well that explains that, thanks. I assumed 512 byte was a newer thing and not legacy.

The main reason was update frequency. Package updates can probably be automated (or already are on CentOS, I don’t know), but distro upgrades every half a year don’t strike me as feasible for a system that’s just supposed to be set up and run forever, hence the idea of CentOS or Rocky.
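From what I’ve read, the route for automating package updates would be dnf-automatic, something like this (untested by me):

```bash
sudo dnf install dnf-automatic
# set apply_updates = yes in /etc/dnf/automatic.conf, then:
sudo systemctl enable --now dnf-automatic.timer
```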
I haven’t used Cockpit, only heard of it, so I don’t know what it can do exactly.

The thing FreeNAS has going for it is that it seems to be point-and-click instead of having to worry about using the command line. While the command line is cool and useful, it’s annoying when you don’t know what you’re doing (which would be the case for a server/NAS for me).

I hate distro upgrades/updates too.

FreeNAS also gets updates once in a while, as the FreeBSD it is based on revs.

I run Debian “testing”, which I believe is somewhat similar to e.g. Fedora Rawhide, for my storage at home. In practice, for me this means I get whatever versions of packages the upstreams have declared stable.

I use the LTS kernel, just because that’s where the filesystem stuff lives and I care more about it; I can afford to read the release notes relevant to my filesystem once a year when e.g. I go from 5.4 -> 5.10. But I do upgrade within the LTS version regularly… and mostly blindly (not unattended, but when I have time).

systemd, openssh, bash, docker, libc, tmux, postfix, nginx, gzip and what have you… I’m OK getting those as soon as upstream says they’re good; it’s hard for me to remember ever having issues.

I can appreciate this being a different experience on desktops, where e.g. GPU drivers that work with a new kernel haven’t been patched/fixed the same way as older GPU drivers, and crash or corrupt memory when e.g. KDE tickles some bug through some weird Plasma widget feature.

A rolling release on a server (which is simpler than a desktop) is much easier to live with, and Docker/Portainer keeps everything else rolling independently of the base OS and of each other.

Yeah of course it has updates, it’s just that they are less frequent and from what I can tell mostly automated through the GUI.
Fedora 34 has been out for almost 3 months now and I still haven’t gotten around to upgrading, which should tell you something :sweat_smile:

The thing is, with a custom install I still need to worry about whether everything fits together, which is something I kinda want to avoid with a “set it and forget it” system, and why FreeNAS seems appealing. But as was stated before, ZFS seems way overkill for the use case (and not being able to easily add drives is a bit of a pain).

That being said, I just found a Linux/BTRFS-based NAS system called Rockstor; any experiences with that? It uses Docker for the plugin system too, which seems pretty much what I would do manually anyway…

Fedora’s update system, unlike Ubuntu’s, really doesn’t feel like going to a new release, but just like a normal update. Of course, there is stuff going on that may break, but it’s rarer on Fedora. Even on my “production systems” at home, I run Void, which is a rolling release. I never had issues with it on my Prometheus+Grafana VM, nor on my Samba VM. And at my old company (I was helpdesk back then), we ran Gentoo on servers, with all kinds of services from the Asterisk PBX, to Apache web servers, to Samba, to Nagios+Grafana and others (what absolute mad lads).

If that’s a worry, TrueNAS Core (FreeNAS) is basically as point-and-click as it gets, with the awesome FreeBSD underlying system.

Cockpit is more of a friendly way to present the command line and not really a solution that holds your hand like TrueNAS Core or pfSense (I’m having a real hard time with pfSense because of its stupid hand-holding; when did *BSD become Windows?). It’s just a simple GUI and won’t come with any special configurations.

But when you know what you’re doing (and it takes time), you can do things better, like automate stuff. You could probably try automating mouse clicks (people have done that in Windows as a hacky way to get around proprietary software and licensing restrictions), but it’s just easier to do stuff with the CLI, especially when you want to perfectly replicate environments en masse and quickly (which is why scripting exists). For home use, point-and-click solutions are generally fine, but they become really cumbersome when you want to do things better (and arguably easier, because sometimes, albeit intuitive, GUIs can be harder when presented with many options and menus).

Which is why I want to run Void as the host OS, Void inside LXD, and OCI containers inside LXD (nested containers). LXD will take care of HA without the added resource requirements of VMs and also make the setup portable, while Podman will just run whatever programs I need if they are not available as simple services for Void (one example is bitwarden_rs). I still prefer the classical way of handling programs by installing them via a package manager and configuring manually, but I like the separation that VMs offer, so LXD is a great tool for me.

Never heard of it. If you want easy BTRFS, go with Fedora (it’s in the default installer as of Fedora 34), or try OMV; I think they offer BTRFS as well (maybe).

Unraid will cover you… But it ain’t free.

Fully agree on that, as I run Fedora as a daily system.
In the past ~2.5 years I only had 1 package that would not work after an upgrade, and that is/was an upstream issue. Other than that I had a couple of issues with (presumably) the configuration of the audio system getting messed up over time or on upgrades, but it’s manageable.
Still though, what bothers me (only a bit) is that a distro upgrade is only possible through the command line. Suppose I had my dad use Fedora as a daily system: while the “daily” updates are just a widget in the taskbar, there doesn’t seem to be an equivalent for distro upgrades. But anyway, that’s getting a bit off-track :smiley:

Yes, which is why I was looking at it initially, but ZFS worried me a bit from the start as it seems a bit of a headache adding drives.

I agree on that and it’s not like I don’t use it. I use it on my daily system all the time but that is also not hosting terabytes of data and doesn’t require a whole lot of intricate configuration. Kinda the point of those “one-click NAS” solutions is that the user (i.e. me) doesn’t have to worry about the configuration most of the time.
I don’t know how FreeNAS handles things regarding command line (I think SSH is available though), but from what I’ve seen Rockstor at least has a command line you can get to from the WebUI.
Basically what it boils down to (for me): Command line is a cool thing to have if I want it, but it shouldn’t be required for basic setup of a NAS.

I looked into OMV a while ago and I couldn’t really find a list of available plugins, maybe I’m just blind.

Kinda forgot about Unraid, but yeah not sure about the cost…

To be fair, the cost isn’t too bad. Just looked it up to see what it’s currently at, and it’s only $59 for up to 6 storage devices… You can always upgrade for a small fee if you need more, and it’s not a recurring fee either, which is nice. Additionally, the license follows the USB stick, so you can migrate to different hardware whenever you wish. And, not for nothing, but their Docker and plugin integration is top notch. You could always try it out with full access for free for a month. Just a thought.

Yeah I saw that, might look into it later.

It’s just annoying that for some reason I can’t create VMs on my system; that’s a whole different rabbit hole, but it makes testing different systems a bit tedious :smiley:
And now I also just saw that the motherboard I was going to get is not available at any sellers right now; hope it comes back :confused:

I recall doing the jump from 30 to 31 or 31 to 32 via KDE Discover on the Fedora KDE Spin, but my memory could be failing me. I’m more of a fan of the CLI anyway, so all other upgrades were done via dnf in a terminal.
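In case it helps, the CLI path is the dnf system-upgrade plugin; roughly this (the release number is just an example):

```bash
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=34
sudo dnf system-upgrade reboot
```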

The audio in Linux is hit or miss anyway, but on a server, Fedora Server is rock solid.

That is an issue if you plan on a big array, like 6 drives in RAID-Z2, but if you only have 4 bays to spare, you can just buy 2 drives now, do a mirror, and add 2 more in the future as a second mirror striped with the first (initially you won’t benefit from the speed of striped vdevs, but as more data gets written, it will eventually level out).
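A sketch of that expansion, assuming made-up device names (and the ashift advice from earlier in the thread):

```bash
# Today: a pool with a single mirrored vdev
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DRIVE_A /dev/disk/by-id/ata-DRIVE_B

# Later: stripe a second mirror onto the pool
zpool add tank mirror /dev/disk/by-id/ata-DRIVE_C /dev/disk/by-id/ata-DRIVE_D
```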

I believe FreeNAS also has a webGUI command prompt. So do pfSense, Proxmox, and I think Cockpit as well. What I don’t like about that is that usually, commands are run as root (eek).

OMV is just Debian. It may or may not give you a BTRFS option at install. Interesting; I never used OMV, but now, trying to look for a plugins webpage, I can’t find any that lists more than a dozen. However, being just Debian, you can install Portainer, which is an easy GUI to run OCI containers (Docker) on it, so you should be able to run anything, similar to how TrueNAS Core offers Jails with all kinds of services. It’s just that you will use a different port on the same IP in your web browser to access Portainer and deploy other services (which you will do anyway if you want to add other web services, like NextCloud or bitwarden_rs and such).

For that, you can just enable SSH and use Virt-Manager from another Linux PC to add VMs on your NAS (if you run OMV or Fedora or other Linux distros). Virt-Manager is not as straightforward as, say, Oracle VirtualBox when you run it remotely; however, it’s pretty easy to figure out how to add a storage location, and after that you’re off to the races. If you use Fedora on your desktop/laptop, you’re good to go. But that implies you are using Linux and not Windows. You can even use an original RPi to run Virt-Manager, but that implies the small inconvenience of not being able to access all your services from a web browser on your Windows machine, if you run Windows; it will probably take some more time until you can run GUI programs via WSL2 to be able to use Virt-Manager on Windows. Reading about it, apparently some people managed to run Virt-Manager on MobaXTerm on Windows.
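The remote connection itself is just a libvirt URI; a sketch with a placeholder user and hostname:

```bash
# From your Linux desktop, manage the NAS's VMs over SSH
virt-manager --connect qemu+ssh://user@nas.local/system
```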

So, you could make OMV pretty competitive with TrueNAS Core if you fiddle around with Portainer a little. And you could do a similar setup on Fedora using Cockpit. I never used it, but it does appear that Cockpit has options for containers, VMs and other stuff. And you can run Cockpit on OMV too, so maybe that makes Virt-Manager moot if you can control VMs directly through Cockpit. And libvirt is more polished than bhyve, or so I hear.

1 Like

Sounds like a good alternative I can look into, even though an all-in-one solution would be preferable. But if it’s better that way, why not.

I should have been more clear on that: the NAS is not yet built, I’m just shopping for parts.
The VM issue is on my main PC, where I would have liked to try various NAS OSes to see what they offer.