Heya,
Long-time lurker, first-time poster (kinda), so hopefully this is in the right category; if not, I sincerely apologise!
I’ve been wanting to build a NAS for a long time and had researched the subject numerous times, but never ended up taking the plunge.
After my 5th drive died on me 2 weeks ago (2 HDDs, 3 SSDs over the past decade or so), I'm miffed enough to fork out for a proper NAS build.
I have set some requirements for myself, which I am very much aware will be considered silly by some, so please bear with me!
Requirements:

- ITX motherboard
I want to put my Streacom DB4 passive case to use; it's been collecting dust ever since I upgraded my workstation. Also, living space in central EU comes at a premium, so there's no basement or similar to hide all the equipment in, and the DB4 is a pretty sleek-looking case, so I don't mind it being a centerpiece next to the TV.
And I know some of the hardware needs proper cooling and passive won't do the job, but I'm quite comfortable modding cases and making custom heatsinks, with heatpipes redirecting the heat to one of the case's side panels (see my original build gallery on PCPartPicker).
- All-SSD storage
With the Streacom DB4 being a chunky aluminium cube, it's an echo chamber, and any HDDs attached to the mounting brackets would make even a banshee shudder (but also, since this will be in the living room, I simply want this thing to run quiet).
And plenty of it: I know this is expensive, but I don't want to find myself running out at a time when I can't afford to expand, so I'm probably looking at a minimum of 40TB usable capacity (see below for what I had in mind).
Considering the use cases further down, I expect SSDs to be a necessity (yes, the NIC is part of that equation).
- Low power consumption
EU electricity is expensive, and dynamic contracts make that even worse, with regular spikes of 40-50 eurocents/kWh (and a record high of €1.20/kWh earlier this year); see the quick cost math after this list.
- Reliable
I'd love for this to be a set-and-forget kind of thing, until a drive dies on me and I get some sort of notification. And before you tell me to go buy an off-the-shelf NAS unit: I don't mind tinkering with hardware, in fact I enjoy it, but I have other hobbies too.
Under this I'd also like to exclude any form of RAID that isn't recoverable on a completely new system (i.e. all-new hardware other than the drives), but I'm not sure if this is realistic?
Basically I want to guard against motherboard, CPU or OS drive failure, with none of them killing the entire storage array (I think this is why people avoid hardware RAID these days?).
And yes yes, I know, NAS/RAID is not a backup, I am aware.
- Compatible with macOS, Linux & Windows clients
Not sure if this is even relevant, as I imagine there are cross-platform protocols (SMB?).
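To put a number on that low-power requirement, here's the quick cost math referenced above; a minimal sketch, assuming €0.40/kWh as a pessimistic average and some guessed idle draws for an all-SSD ITX build (none of these wattages are measurements):

```python
# Rough yearly running cost at a given average draw; all wattages are
# illustrative guesses, not measurements of any specific build.
def yearly_cost_eur(avg_watts: float, eur_per_kwh: float = 0.40) -> float:
    return avg_watts / 1000 * 24 * 365 * eur_per_kwh

for watts in (15, 30, 60):
    print(f"{watts:>3} W average -> ~{yearly_cost_eur(watts):.0f} EUR/year")
# 15 W -> ~53, 30 W -> ~105, 60 W -> ~210 EUR/year: idle draw adds up fast.
```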
Use cases are always asked, so:
- Storage with redundancy for important documents and some media
- A NAS drive to work on directly for various workloads: video editing (lossless 1440p, occasionally 4K), photo editing (Capture One, RAW), programming (large-scale, highly templated C++ projects; read: compiling game engines and hosting code/data repositories)
- Storage for large media where redundancy is less important for certain types of content (e.g. raw recorded media once a project is finished)
- Potentially to store games, although for the time being, this is simply handled with local drives, redundancy here is irrelevant
Future use cases:
- Using it as a media server might be desirable (music mainly), but I’ve little experience in this area and don’t quite see the need for it just yet
- Setting up, at most, a handful of user accounts that run directly off of the NAS, with only light workloads (web browsing, mail, maybe online media consumption (youtube/twitch/…))
This would be for a completely separate NAS, to be fair, one I'd put at my parents' place, as managing tech is becoming more difficult for them. Something I'd set up with firewalls and SSH keys so I could remote into the machine from my own home.
So this last one isn't a requirement for the current build; I'd just really like to get people's input on it, in case I want to pass this NAS build on to them later down the line.
So, the hardware, i.e. to ECC or not:
I currently still have an ASRock Z390 Phantom Gaming-ITX, together with an Intel i9-9900 (non-K), installed in the Streacom DB4. However, this system does not support ECC memory, and I've read many posts swearing by ECC, but also many from people who've been running their NAS systems without it for years without any issues.
And I'm conflicted on this: my current workstation doesn't have ECC either, so... why would it matter for the NAS? Does the error rate go up significantly once you've segregated storage to a separate machine?
Would ECC have saved my ass 2 weeks ago when my 4TB SSD died? (Samsung 870 EVO bad batch FYI, winner winner, NAND dinner)
Not being very experienced with ECC, how does it actually help if a drive is starting to corrupt? Does it start throwing write errors, and is it then on you to investigate? Or is it smarter: does it negotiate with the drive and start using reserve blocks, with SMART monitoring software handling the reporting side of things (e.g. running out of reserve blocks, I/O errors, ...)?
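On the reporting side, my understanding is that this is what smartd (from smartmontools) automates for you; as a toy sketch of the idea, here's a poll of smartctl for a couple of the scarier SATA attributes (the device path and the "anything non-zero is suspicious" rule are placeholders, not gospel):

```python
# Toy SMART poller: shells out to smartctl and flags non-zero raw values on a
# couple of worrying SATA attributes. Attribute choice/threshold is illustrative.
import subprocess

WATCHED = ("Reallocated_Sector_Ct", "Reported_Uncorrect")

def check_drive(dev: str) -> list[str]:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    warnings = []
    for line in out.splitlines():
        cols = line.split()
        # ATA attribute rows: ID NAME FLAG VALUE WORST THRESH TYPE ... RAW_VALUE
        if len(cols) >= 10 and cols[1] in WATCHED and cols[9].isdigit():
            if int(cols[9]) > 0:
                warnings.append(f"{dev}: {cols[1]} raw value = {cols[9]}")
    return warnings

if __name__ == "__main__":
    for w in check_drive("/dev/sda"):  # placeholder device node
        print("WARNING:", w)
```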
If this system is not sufficient for the use cases, and/or ECC is worth the premium (i.e. splurging on an entirely new system), then what are the recommendations for ITX boards & CPUs?
I had a look already, and the ASRock Phantom Gaming B850I Lightning WiFi still seems quite affordable; the other option I saw was the ASUS ROG Strix B850-I Gaming WiFi, at nearly double the price.
The reason I have AM5 in mind here is the power efficiency and ECC support, as well as the integrated GPU and a Gen 5.0 x16 slot (as opposed to older boards capped at Gen 3.0, which is relevant for the intended card below).
Not sure if better/other options exist in this form factor?
But mister, ITX only has 1 PCIe slot and limited storage options, what on earth are you thinking?!
Aha! I’m glad you asked!
There are really 2 options here:

- The tried and tested LSI/Broadcom HBA cards. These aren't transparent to the underlying OS though, which I'm not super keen on, because I've read quite a few posts about SMART monitoring issues for drives going through the 94xx and newer cards.
The 9305 and below appear to handle this fine, but they in turn are quite power hungry and lack PCIe ASPM support, preventing the CPU from reaching the deeper (higher-numbered) C-states, which clashes with the power efficiency goal.
- A PCIe switch. For this I had the Broadcom PEX88048 in mind, a 48-lane PCIe 4.0 switch on an x16 card.
More specifically, one of these: https://nl.aliexpress.com/item/1005010161840654.html
Or something similar (although the others I've found are priced at double or even triple that).
Does anyone have any experience with these sorts of cards?
From researching, the main advantage of this approach is that all devices are completely transparent to the system and should thus be fully supported, SMART and all.
On the other hand, I can't find reliable information on the power consumption of these kinds of chips, so I'm not sure how they compare to the HBA solution mentioned above. Considering the linked card has active cooling, it probably won't win any prizes, but that's still a guess at this point.
I'm still leaning towards the second solution, even with the potentially higher power consumption compared to the HBA, mainly because it gives me more options for what to do with the PCIe lanes. Where the HBA is purely for storage, the PCIe switch could also be used to attach other devices (M.2 → PCIe adapters), and it makes optimal use of the x16 slot, in contrast to the HBAs running at x8 and wasting half the bandwidth of the already very precious PCIe lanes on ITX.
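A quick sanity check on the lane math, assuming the card really runs a full Gen4 x16 uplink (the per-lane figures are the standard PCIe 4.0 numbers):

```python
# Usable PCIe 4.0 bandwidth per lane: 16 GT/s with 128b/130b encoding.
GBPS_PER_GEN4_LANE = 16 * 128 / 130 / 8      # ~1.97 GB/s per lane, per direction

uplink = 16 * GBPS_PER_GEN4_LANE             # x16 to the host: ~31.5 GB/s
downstream = (48 - 16) * GBPS_PER_GEN4_LANE  # 32 lanes left: 8 drives at x4
print(f"uplink     ~{uplink:.1f} GB/s")
print(f"downstream ~{downstream:.1f} GB/s (2:1 oversubscribed, fine for storage)")
```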
So, taking into account the above, M.2 vs SATA SSDs:
Quite self-explanatory really: would you populate that PCIe switch card with 8x 8TB NVMe drives?
Or would you instead go for the ASM1166 M.2 → SATA expansion boards?
There's a great thread on this last one, with mostly positive reception and proven long-term reliability (albeit fragile due to the M.2 form factor).
Or perhaps it should be a mix of both, for optimal storage capacity & speed?
e.g. 4x 8TB M.2 + 4x ASM1166 SATA expanders, for a total of 4x4 SATA SSDs (4 per expander, or 6 each if one doesn't mind the potential contention).
Unfortunately the ASM1166 is only PCIe 3.0 at x2, and therefore not optimal in terms of bandwidth usage, but if it were x4, that would be at least 8 drives per M.2 slot, meaning 32 drives, for which the Streacom doesn't have space anyway (nor do I have that kind of money haha :D).
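For completeness, that contention in numbers (assuming the usual ~550 MB/s real-world ceiling per SATA SSD):

```python
# ASM1166 uplink vs. what its six SATA ports could demand at once.
GBPS_PER_GEN3_LANE = 8 * 128 / 130 / 8  # ~0.98 GB/s per Gen3 lane
uplink = 2 * GBPS_PER_GEN3_LANE         # x2 link: ~1.97 GB/s
print(f"uplink ~{uplink:.2f} GB/s; 4 drives ~{4 * 0.55:.2f}, 6 drives ~{6 * 0.55:.2f} GB/s")
# Four drives already roughly saturate the link; six oversubscribe it ~1.7:1.
```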
I think I'm leaning towards a split configuration here, with the 4x M.2 and 4x SATA expanders, as this would allow me to create 2 NAS drives: one high-performance, the other for workloads where writes aren't that relevant.
Which brings me to the RAID configuration.
I'm thinking RAID6 is probably the best option to go with here? Two-drive redundancy gives peace of mind, but it also takes quite the hit on capacity for the 4x M.2 pool.
So maybe I should look at this differently, with perhaps all SATA SSDs in RAID10 to boost write speeds?
Or perhaps different RAID configurations for the M.2 vs the SATA pool?
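For comparison, the raw capacity math for the drive counts above (the 4TB SATA size is just an assumption for illustration; filesystem overhead ignored):

```python
# Usable capacity of the candidate pools; sizes in TB.
def raid6(n, size):   # survives any 2 drive failures, n-2 drives of capacity
    return (n - 2) * size

def raid10(n, size):  # striped mirrors: half the raw capacity
    return n // 2 * size

print("4x 8TB NVMe,  RAID6 :", raid6(4, 8), "TB")    # 16 TB (a 50% hit)
print("16x 4TB SATA, RAID6 :", raid6(16, 4), "TB")   # 56 TB
print("16x 4TB SATA, RAID10:", raid10(16, 4), "TB")  # 32 TB, faster writes
```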
All that speed is of course no good without a proper NIC, so:
What to do here?
I have an Intel X550-T2 dual 10Gb NIC (RJ45), as well as an ASUS XG-C100C for the workstation, lying around.
The idea here is to connect the workstation directly to the NAS, no switches or anything in between, but I keep reading that RJ45 is quite power hungry when it comes to 10Gb and above
Add to that that 10Gb is kind of underwhelming for an all-SSD NAS, especially if M.2 drives are involved, so I'm thinking I should find something that's at least 25Gb, or even 40/50 (100 seems absurd, right? Or is it?).
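Line rates in storage terms, before protocol overhead, to make that comparison concrete:

```python
# Raw line rate of each Ethernet speed next to the drives behind it.
for gbit in (10, 25, 40, 100):
    print(f"{gbit:>3} GbE ~ {gbit / 8:.2f} GB/s")
# 10 GbE (~1.25 GB/s): even a striped SATA pool can exceed this sequentially;
# 100 GbE (~12.5 GB/s) is still below a striped pool of Gen4 NVMe drives.
```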
However, these higher speeds only seem to come in SFP and QSFP, making things somewhat less compatible with conventional hardware (in case I want to hook up other things later down the line)
So I'm wondering what people's recommendations are here: considering all of the above, would 10Gb be sufficient? Or are the power savings and extra speed of SFP/QSFP worth it?
And about those power savings: are they even relevant if one goes for direct attach? Researching SFP vs RJ45 cards, the power consumption of the cards themselves doesn't really differ all that much; the savings instead come from the connectors, with SFP at 1-2W per port and RJ45 at 3-4W (correct me if I'm wrong on that, please).
But that means, in my case, I'd save at most 4W: 2W for each SFP connector on either end (1 NAS, 1 workstation). So maybe this is really only relevant at scale??
For more power savings:
Having done quite a bit of low-level socket programming, part of me is also keen to write a small script that sends a magic packet on workstation login, for Wake-on-LAN. I haven't investigated this too much yet, but I'm sure it's doable, so I am indeed keen on the system being off/sleeping when it's not in use. If people have better suggestions, I'm all ears.
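For reference, the magic packet really is trivial: 6 bytes of 0xFF followed by the target NIC's MAC repeated 16 times, sent as a UDP broadcast (port 9 by convention). A minimal sketch of what such a login hook could send (the MAC is a placeholder):

```python
# Wake-on-LAN: broadcast the "magic packet" for the NAS NIC's MAC address.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # 6x 0xFF, then the MAC (sans separators) repeated 16 times = 102 bytes.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("AA:BB:CC:DD:EE:FF")  # placeholder: the NAS NIC's MAC
```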
==============================================
The idea of a quick little post on the matter has me sitting here at 4AM, so I think I'd best wrap it up, because holy post, Batman!
I'm sorry this became quite a bit longer than I anticipated, and I don't blame you if you couldn't be bothered; to those who made it through, I sincerely appreciate it.
Thanks in advance!
Alex
PS: Considering the time, I need to go to bed (work in a couple of hours), so apologies to those in other time zones if I'm not able to reply immediately!
PPS: Forgive any spelling mistakes; I'm 6 edits in on fixing them, but may have missed some still.
