Advice on building a quiet NAS using NVMe and bifurcation.

First-time poster, but I have been lurking around Level1Techs since the Tek Syndicate days and have finally decided to seek some advice.

I need a NAS, and it has to be quiet. Since I live in a small apartment, the NAS will be only 3 meters away from my bed. Considering my noise requirement and my current modest storage capacity need of 4TB, I believe SSDs are the ideal solution.

I assume that SATA SSDs are becoming less common, while M.2 SSDs will remain popular and readily available. I plan to keep the NAS for at least 10 years, so flexibility in adding more storage is a key priority for me.

Currently, I do some amateur editing with 8-bit 1080p 60fps footage, but I’m planning to transition to more demanding video formats (10-bit 4K 60fps). I aim to edit full video files rather than proxies.

I have the option to purchase either the SP3 Supermicro H12SSL-i for 629 euros, which would provide enough PCIe lanes to support 20+ M.2 NVMe SSDs through bifurcation. However, I’m unsure about the rest of the system configuration and would greatly appreciate advice on the following:

  1. Should I go with TrueNAS Scale? It seems to be the default choice.

  2. Is my assumption correct that an EPYC Rome 8-core/16-thread CPU would be sufficient?

  3. How much RAM should I start with, and does RAM speed matter?

  4. Is it challenging to add more RAM later due to compatibility concerns?

  5. Is a vdev of mirrored 2x4TB SSDs a good starting point?

  6. Would the Crucial P3 Plus 4TB Gen 4 SSD for 210 euros suit my application?

  7. Are there affordable PCIe x16 to 4x M.2 adapters that maintain Gen 4 speeds?

  8. Is excessive power draw from PCIe sockets something to worry about?

  9. Would 10GbE networking be sufficient for my workflow?

Thanks in advance for any advice you can offer!

Look at the balance of CPU-to-memory bandwidth vs. all these PCIe lanes to see if you’ll be able to keep as many as 20 NVMe devices busy. I think that mirrors and RAID parity striping can eat that bandwidth quickly with little gain in array throughput.
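Rough back-of-envelope on that balance (all figures are rounded theoretical numbers I’m assuming, not measurements):

```python
# Back-of-envelope: can the memory system keep ~20 NVMe drives busy?
# All figures are rounded theoretical numbers, not measurements.

PCIE4_X4_GBS = 8.0            # ~GB/s per PCIe 4.0 x4 drive
DDR4_3200_CHANNEL_GBS = 25.6  # ~GB/s per DDR4-3200 memory channel

def write_amplification(drives: int, layout: str) -> float:
    """Bytes hitting the drives per byte of user data written."""
    if layout == "mirror":
        return 2.0                         # every block stored twice
    if layout.startswith("raidz"):
        parity = int(layout[-1])
        return drives / (drives - parity)  # data plus parity columns
    return 1.0                             # plain stripe

drives, channels = 20, 8
print(f"aggregate drive bandwidth: {drives * PCIE4_X4_GBS:.0f} GB/s")
print(f"{channels}-channel DDR4-3200: {channels * DDR4_3200_CHANNEL_GBS:.1f} GB/s")
print(f"mirror write amplification: {write_amplification(drives, 'mirror'):.1f}x")
print(f"raidz2 write amplification: {write_amplification(drives, 'raidz2'):.2f}x")
```

On these assumed numbers, even 8-channel DDR4 doesn’t leave much headroom once mirroring doubles the writes; a dual-channel platform (~51 GB/s) wouldn’t come close.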

I don’t think so; I’ve seen prices at a few hundred euros for a retimer. But I also don’t think you need PCIe 4.0, since consumer devices can’t really sustain speeds faster than PCIe 3.0 x4 bandwidth (and enterprise devices get so much more expensive). Maybe you need bursty speed for scrubbing through video that can’t be delivered by striping your devices, though.

K3n.

This board supports x4 bifurcation on all PCIe slots, so you can use cheap passive carrier boards.

Like this one https://www.asus.com/motherboards-components/motherboards/accessories/hyper-m-2-x16-gen-4-card/.

~65 USD per 4x M.2 carrier card with decent cooling.


Pick one.

If you’re planning just a mirror or striped mirror of 4 NVMe drives, don’t bother with EPYC. Technically a FriendlyElec CM3588 board would be enough for now, but if you want to scale to 4K later on, and maybe higher in the future, it wouldn’t be enough.

While I’d like to recommend the Asustor 12-NVMe-drive NAS, that thing isn’t really meant for speed, but more for bulk storage. Which is fine: on a 12-drive striped mirror with backups to a spinning-rust backup server, you should see more than plenty of speed, and even an equivalent raid-z3 on 11 drives would still be plenty fast with so many NVMe drives, even the cheaper low-end ones.
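For a rough sense of the capacity trade-off between those two layouts, a simplistic estimate (ignores ZFS metadata, padding and slop space; the 4TB drive size is just an example):

```python
# Simplistic usable-capacity comparison; ignores ZFS metadata, padding
# and slop space. 4TB drives are just an example.
def usable_tb(drives: int, size_tb: float, layout: str) -> float:
    if layout == "striped-mirror":
        return drives // 2 * size_tb        # half the drives hold copies
    if layout.startswith("raidz"):
        parity = int(layout[-1])            # raidz1/2/3
        return (drives - parity) * size_tb  # parity capacity is lost
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(12, 4, "striped-mirror"))  # 12 drives -> 24.0 TB usable
print(usable_tb(11, 4, "raidz3"))          # 11 drives -> 32.0 TB usable
```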

But, for the sake of completeness, I’d suggest you go with something like an Intel Core Ultra 225. Its base TDP isn’t too bad and it wouldn’t see a lot of bursting; I expect its idle power draw to not be very high either. Then you’d need to find a decent mobo that can bifurcate the x16 slot. You should find a motherboard with maybe 3 M.2 slots, so you can go with at least 6 NVMe drives (and 1 for the OS). If it doesn’t already have 10G Ethernet, you should still have one more PCIe x4 slot for that.

Such a NAS should last you a long time. As for the OS, I’m biased, I recommend raw, bare, unadulterated FreeBSD.

  1. I would look at vanilla FreeBSD
  2. Yes
  3. 32GB will be more than enough, as you’re not going to have much cached data to begin with
  4. No, but TR/EPYC platforms tend to be more picky than consumer platforms. You should also note that DDR4 is legacy, so in a few years it’s probably going to be very expensive unless you go for used.
  5. Probably? You want drives as large as possible without breaking the bank, as PCIe lanes are limited
  6. It depends? QLC flash drives are the bottom of the barrel, but the P3 Plus drives seem to be fairly decent and you’ll be fine as far as 10GbE goes. I would however likely go for the T500 due to better (more reliable) flash.
  7. Any passive ones from Asus/Asrock/Gigabyte etc will do fine
  8. Unlikely
  9. Probably

I would also actually try to compare iSCSI to SMB for your workflow.
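A quick-and-dirty way to compare the two is a large sequential write and read against each mount. A minimal sketch; the mount points are hypothetical, and for trustworthy numbers use fio and drop caches between passes, since the read here can be served from the client’s page cache:

```python
# Minimal sequential write/read timing for two mounts (e.g. an SMB share
# vs. a filesystem on an iSCSI LUN). Mount points below are hypothetical.
# Caveat: the read pass may be served from the client page cache; use fio
# (or drop caches between passes) for trustworthy numbers.
import os
import time

def seq_test(path: str, size_mb: int = 1024, chunk_mb: int = 4) -> None:
    fname = os.path.join(path, "bench.tmp")
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out to the server
    t1 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    t2 = time.perf_counter()
    os.remove(fname)
    print(f"{path}: write {size_mb / (t1 - t0):.0f} MB/s, "
          f"read {size_mb / (t2 - t1):.0f} MB/s")

seq_test("/mnt/smb_share")  # hypothetical SMB mount
seq_test("/mnt/iscsi_vol")  # hypothetical iSCSI-backed mount
```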

As ThatGuyB mentioned, you’ll probably struggle a bit to keep it quiet.

FWIW I have multiple systems sitting here in my office packed with HDDs, and the only noise comes from the system fans.

I have 4x Western Digital Gold in this enclosure

The fan is a little grindy after running 24/7 for five years but I have a replacement fan (ordered from OWC) that I’m just too lazy to install

I also have 11x Seagate Exos in my Plex server in the Fractal Design Define 7 case and the entire system is dead silent even under full load (all disks active and CPU saturated)

So the gist here is to not overestimate the noise from HDDs. My experience has been that high-capacity enterprise drives are only noisy during the first write to a brand-new disk; after that you can’t hear anything unless your face is on the disk.


I have a semi-passive HTPC system with a 5750G/128GB ECC DDR4-3200/ASUS ProArt X570-CREATOR WIFI/10GbE in a Streacom FC10 V2 with 4 M.2 NVMe SSDs and 6 SATA SSDs. It works…

I don’t think something with EPYC can really be operated silently without adding a lot of complexity, for example water cooling.

After refining the workflows I plan to use, I see limited advantages in actively editing directly on a NAS, except for the redundancy it offers and the lower local storage requirements. However, I’m still exploring options to integrate a NAS workflow effectively. I do not want to pay Apple or Dell for 2TB of laptop storage.

I have not yet excluded EPYC as an option.

I found that Puget Systems recommends “about 4x the bitrate of your media”. That would give me a need of only 170 MB/s (4x the 340 Mbps of the Canon R6 Mark II), or about 1 GB/s for heavy RAW video. That is, if I can figure out the workflow that uses a NAS with Resolve. For 20 drives, the total potential PCIe bandwidth would be 160 GB/s (PCIe 4.0). DDR4-3200 in dual channel offers around 51.2 GB/s, while 8-channel DDR4-3200 is 204.8 GB/s. I am unsure about what this all means, but this might be a non-issue, or a problem for future me to resolve.
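Sanity-checking those numbers (the ~2 Gbit/s bitrate for heavy RAW is my assumption):

```python
# Puget's "about 4x the bitrate of your media" rule, sanity-checked.
def required_mbs(bitrate_mbps: float, factor: float = 4.0) -> float:
    """Sustained throughput target in MB/s for a codec bitrate in Mbit/s."""
    return bitrate_mbps * factor / 8  # Mbit/s -> MB/s

print(required_mbs(340))   # Canon R6 Mark II, 340 Mbit/s -> 170.0 MB/s
print(required_mbs(2000))  # heavy RAW video, ~2 Gbit/s assumed -> 1000.0 MB/s
```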

Will go that path; it seems silly to worry about spending 30 dollars more on an adapter once the whole system cost is considered.

When I built my former overclocked Haswell gaming PC, the Noctua cooler was quiet while the WD Blue 1TB was audible. Should I reconsider my phobia of HDDs?

I planned to use a Noctua SP3 cooler with some Arctic P12s to keep the chassis cool. Other than coil whine, I do not see how AMD EPYC would be worse than any other PC. Am I missing something?


It’s also not clear to me that your stated use case actually necessitates the type of hardware setup you are describing. Some considerations:

  • do you actually need the storage to be accessible across the network? If you are in a small apartment then you can probably just use an external drive enclosure and plug in over Thunderbolt to whatever system you are working from

  • the original post is phrased as if you intend to do video editing on your local workstation directly from the mounted network storage; why is this a requirement instead of just copying the files you want to work on to the local system disk, editing, exporting, then copying the results back? Using the NAS as a “hot storage” device that you are actively doing work on imposes a lot more requirements that could be avoided if you just copy your files locally first (you get better performance this way too)

  • 4TB is such a small capacity it’s not worth building an entire system just to host it

  • you can get both 4TB SATA and M.2 NVMe drives pretty easily, but why stop there, why not get 8TB disks? I just bought a WD Black 8TB Amazon.com: WD_BLACK 8TB SN850X NVMe Internal Gaming SSD Solid State Drive - Gen4 PCIe, M.2 2280, Up to 7,200 MB/s - WDS800T2X0E : Electronics for the hell of it, since WD is discontinuing their entire line of SSDs and the price ($600 USD) is a reasonable deal at that capacity (full disclosure: 8TB SSDs were as low as $400 in late 2023). There are also enterprise-grade drives (examples: Solid State Drives | Enterprise Grade — ServerPartDeals.com ) which you can get new or used (e.g. eBay), which occasionally dip to low prices and can be used in most any system with a simple adapter.

  • you mention 10GbE networking, but it’s not clear if you already have a 10Gb network available? Keep in mind that 1Gb = ~125MB/s network speed, 2.5Gb = ~312MB/s, 10Gb = ~1250MB/s (see the sketch after this list). Keep these in mind re: network requirements, because the network will be the bottleneck on your data access speeds depending on which disks you get. SATA III speed tops out around 500MB/s-ish, PCIe Gen 3.0 x4 is ~4GB/s ( PCI Express - Wikipedia )

  • TrueNAS is definitely not the “default” solution for this. If anything the “default” would be any standard Linux distro. Examples include Debian, Ubuntu, etc…

  • do you actually need ZFS? Not a rhetorical question; I don’t use it because I don’t do any actual work directly on my “NAS” file server systems. I use my file storage systems simply to hold the files (and run Plex etc… in some cases). So if this is how you use your systems then you may or may not need ZFS, and if you don’t need that then other complexity goes away as well.
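Here’s the conversion cheat sheet mentioned above (line rates to rough MB/s; real-world throughput lands lower once protocol overhead is paid):

```python
# Line rate (Gbit/s) -> rough throughput (MB/s); real numbers land lower
# once protocol overhead is paid.
def mbs(gbit_per_s: float) -> float:
    return gbit_per_s * 1000 / 8

for name, rate in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0),
                   ("SATA III (effective)", 4.8),
                   ("PCIe 3.0 x4 (effective)", 31.5)]:
    print(f"{name:>24}: ~{mbs(rate):.0f} MB/s")
```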


I have to second this. Seriously, for the cost of the NAS hardware you’re looking at, you could just get an 8TB flash drive and a cheap high-capacity HDD for cold backups and still save money.


I am sitting at my desk, with that listed OWC external enclosure and the described Fractal Define 7 system both sitting about 5’ away from me, and using the Decibel X app on my phone it’s recording an average 28 dB noise level in the room. Subjectively, most of the noise comes from the OWC since it’s got a bare-metal open-grill construction; the Fractal Define 7 case is actually thick and padded on the inside, which cuts the sound considerably. Also note that I have the version without glass; it’s plain metal on all sides.

this has been my experience, ymmv

Thanks everyone for the feedback. It’s greatly appreciated!

Allow me to provide some context for my belief that 128 PCIe lanes might be the solution to my problems, at least my material ones :slight_smile:.

My current setup involves sharing a folder from my Ryzen 3600 system over SMB, with Tailscale enabling remote access. As I often spend months away from my apartment, I upload files regularly to ensure my videos and images are protected from theft or physical damage.

However, storage space on my gaming PC is running out. As a result, I began looking into a Synology 4-bay system with HDDs. Concerned about noise levels, I started exploring SSD-based solutions. From my research, SATA SSDs appear to be becoming outdated, so I shifted my focus to M.2 SSDs.

The Asustor 12x M.2 is a fine alternative. However, an EPYC-based system seems to offer even greater flexibility for only a slightly higher investment. In fact, the cost-per-M.2-slot ratio appears to be more favorable with this setup.

As I delved into exploring EPYC systems with NVMe storage, I realized that I might as well take advantage of the ability to do video edits directly on the storage system. While this isn’t a primary objective, it does present itself as a convenient bonus feature.

No, I do not need ZFS specifically. I’ve encountered issues with corrupted images, which I suspect were caused by bit rot. As a result, I want a system that actively checks against this. The only file systems I know of that do this are Btrfs and ZFS; I am open to alternatives.

I can’t speak to the rate of future accumulation, but I am looking at a 10-year horizon with high-bitrate 4K.

I don’t really consider SATA SSDs to be “outdated”. The real issue IME is the slow decline of available models; Crucial recently discontinued the MX500, which has been a go-to for a long time, and has not yet released a replacement, and as mentioned Western Digital just sold off their entire SSD division (both SATA and NVMe) to SanDisk, so stock of those could fluctuate as well. I still use SATA SSDs in almost all of my systems for two main reasons:

  • as OS boot disks, so I can keep slots free for the M.2 NVMe / PCIe disks as cache and app drives

  • as mass storage (4TB, 8TB) that does not eat up PCIe lanes

keep in mind that with SATA you can also use something like an HBA, if you ever need to, in order to attach a large number of drives while still keeping most of your PCIe lanes free

I also don’t really have any use cases that take advantage of the speed increase of e.g. PCIe 3.0+ NVMe vs. SATA. You might though, maybe.

another benefit to SATA SSD is that it should be a drop-in replacement for HDD in many situations when you have gear already designed for 3.5" HDD

so I would not disregard SATA SSDs right off the bat; I think there are still great usages for them. This is my opinion though.

You did not mention if you have a 10Gb network to take advantage of higher speed drives. All the devices, and cables, in your network would need to be compatible (router, switches, ethernet cables, and client / server PC network cards).

I am still not really convinced that you need an EPYC system for this either. I think you should be able to put something together using standard desktop platforms (AM4, AM5, whatever Intel is doing). You might reference this thread here [Guide] NAS Killer 6.0 - DDR4 is finally cheap - [LGA1151,LGA1200] NAS Killer 6.0, Plex QSV Builds - serverbuilds.net Forums

And OWC has a variety of extremely high-quality external disk enclosures for all different types of disks too, in case you want to skip the network and just attach the disk directly OWC External Storage Drives for Any Creative Workflow they might have a pre-built NAS unit in there as well

Regarding noise, modern HDDs with helium (basically everything 18TB+) have become pretty silent. Only the heads can be a little bit loud sometimes (Exos).


Yeah, it’s the amount of heat that EPYC will push out, which is why I recommended a lower-ish core count Intel. I could have easily recommended the Asustor Flashtor, not even the Pro one, if not for the future requirements (in 10 years 8K will probably be standard and 4K a bare minimum, kinda like 1080p is now).

Keep in mind that RAM is always faster (lower latency and response time) than flash storage. As for the actual bandwidth, you’ll be limited by the 10G Ethernet, which is about 1 GB/s with the overhead (unless obviously you go big, like 100G, so ~11 GB/s). You’d be nowhere near even the throughput of DDR4.

When it comes to network storage, I wouldn’t really bother optimizing the hardware; I’d just go as cheap as I can for a given performance tier and invest the rest of the budget into the storage itself (my own NAS is a ThinkPenguin 4-bay NAS with 4x 4TB SATA SSDs in a striped mirror, so ~8TB usable space).

I assume because OP mentioned this, later ITT.

If it’s going to be a laptop, storage will come at a premium inside it (particularly thin ones and not gaming bricks).

I agree, but I assumed they’d go for a mirror, then stripe it later with more disks (yes, I know about performance balancing).

I have to agree with this, assuming OP would be using an NVMe USB enclosure with an 8TB drive. The question is just how reliable the storage needs to be (since you can’t have good redundancy over USB).

That doesn’t sound like a good application of a NAS. Anything that involves access over internet is literally useless when it comes to editing stuff straight from the NAS.

RIP. That’s what I’ve got in my NAS. I would normally buy a spare, but I have backups, at this point I don’t really care if I have to wipe my pool and zfs-send my dataset back to it with fewer drives.

Even then, the point is moot if OP is editing files on his laptop and then copying the data over the internet through Tailscale. This would only apply, like, what, 2 months per year in total, or something? Not even worth it.

My helium IronWolf Pro drives are kinda loud, but it’s not that atrocious (writing is the worst; on zfs-scrub it’s sufferable). The reason I suggest SSDs is that they kinda tend to pay for themselves if you run them 24/7, compared to HDDs (I only power on my spinning-rust NAS and my backup servers on demand, when I need something from the archive or to take a backup).
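Rough per-drive power math behind the “pay for themselves” claim (the idle wattages and electricity price are assumptions; plug in your own):

```python
# Per-drive 24/7 idle power cost, per year. Wattages and the electricity
# price are assumptions; plug in your own.
IDLE_WATTS = {"HDD (spinning)": 5.0, "NVMe SSD": 0.5}
EUR_PER_KWH = 0.30

for drive, watts in IDLE_WATTS.items():
    kwh_year = watts * 24 * 365 / 1000
    print(f"{drive}: {kwh_year:.0f} kWh/year "
          f"~= {kwh_year * EUR_PER_KWH:.2f} EUR/year")
```

At these assumed numbers the gap is roughly 12 EUR/year per drive, so over a 10-year horizon it lands in the ballpark of the SSD price premium.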

“Good” SATA SSDs are slowly getting phased out; I don’t even know if you can still get ones using the Marvell 88SS1074 controller, which has been flawless in my experience (the MX500 was SiliconMotion-based). You’ll also struggle to saturate 10GbE with them unless you build an array.

NVMe over USB is poor at best; it’s not something you want to use, especially for reliability. If you want a “fast” USB flash drive, go for one using a dedicated controller such as the SiliconMotion SM2320.

That being said, I can see the value of an NVMe-based NAS, but I do think a simple AM5-based solution would do fine, although it can’t compete on PCIe lane count.

Beware! I’ve got a Crucial P3 4TB (non-Plus) and it drops to about 80 MB/s sequential read speed on data that is a few months old, due to NAND charge decay. They don’t tell you that in the data sheet! (And I do a scrub on this drive every week, so it isn’t as if the data goes unread for very long.)
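A crude way to check a drive for this symptom is to time a sequential read of a file that hasn’t been rewritten in months (the path is hypothetical; on Linux, drop the page cache first or the number will be inflated):

```python
# Time a sequential read of a file that hasn't been rewritten in months.
# The path is hypothetical; on Linux, drop caches first
# (echo 3 > /proc/sys/vm/drop_caches) or the result will be inflated.
import time

def read_speed_mbs(path: str, chunk: int = 8 * 1024 * 1024) -> float:
    total = 0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            total += len(data)
    return total / (time.perf_counter() - t0) / 1e6  # MB/s

print(f"{read_speed_mbs('/tank/old_footage.mov'):.0f} MB/s")
```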

When it comes to SATA SSDs there’s also the Kingston DC600M (up to 7.68TB). Jim Salter (the ZFS Ars Technica writer) recommends them… and I just got two used 3.84TB ones at about 50 €/TB. Can’t vouch for them otherwise though. And they are expensive!

So irrelevant information?

The MX500 is the drive type I use in my desktop. I hope they will give us an updated version. I will still have access to two high-end SATA drives:

  • KC600 85 euro/tb
  • 870 evo 70 euro/tb

Then there are enterprise SATA drives at around 120 euro/TB.

P3 Plus might not be the drive to get, but there are multiple highly rated NVMe SSDs for 60 euros/TB and higher.

My internet connection is DSL, 250 Mbit/s down / 15 Mbit/s up. I can get 5G mmWave for the same monthly cost, so I am very tempted to change provider.
The LAN is currently running at 1 Gbps. I only have the ISP router connected to my Thunderbolt dock (1Gb) and desktop (2.5Gb).

My understanding is that I could directly connect the NAS to any computer and achieve 10GbE, 25GbE, etc., depending on which network cards or Thunderbolt adapters I choose to invest in. I could still connect the NAS to my ISP router at 1Gb to make it accessible from “the road”, albeit at a very low throughput.

I only upload/ingest the footage while I am away from home.
Editing occurs within the comforts of my own home.

The current Intel socket does not have motherboards that support ECC. I don’t fully understand the risk of omitting the feature, but I would sleep better at night knowing there was ECC.

I see the benefit of going with AM5 in terms of power consumption, and I would still have ECC with the right motherboard. For 348 euros I can get the ASRock B650D4U. I could get 6 NVMe drives with a direct CPU connection, 4 SATA ports, and an x4 Gen 4 slot over the chipset. Is idle consumption or the TDP ceiling your main concern for the noise level?

To escalate the project further: how stupid would it be to combine a NAS with scientific computing? In a few months, I will no longer have access to my university computer cluster, and I still have a bunch of computational fluid dynamics projects I would like to develop. My plan was to use cloud computation, but used high-core-count EPYCs on eBay look rather tempting.

Fan noise is constant, while the moving head of a hard drive is unpredictable. Shouting neighbors vs. a busy road might be a good comparison. There might be an easy psychological explanation for why HDDs have bothered me before.

Is no one bothered by the 120 Hz hum? Constant head-seek noise could be seriously annoying if you’re trying to sleep in the same room, but even an idle drive hums constantly, and that bass note is the noise that really is intolerable to me. :face_with_spiral_eyes: Is it just me?

(This is for modern helium drives with lots of platters; on older drives the sharp whining noise kinda droned out the hum. And there was less hum overall since there was less rotating mass.)