Rackmount NAS Build - Current Options - Build Suggestions - Am I Dumb?

Hello!

I’m new here, first-time poster but a regular viewer of Wendell’s YT content. Such a valuable resource! The last 6 months have been an absolute ride for me and my debut in my newfound home networking/homelab hobby. To make it short, let’s just say I went from a crappy ISP-provided all-in-one ONT/router/switch/Wi-Fi box to completely overkill 10G/25G home networking along with an extensive camera system (details at the end of the post for those who need them/are interested). I have a 42U, 39"-deep rack ready to fill with additional projects. I consider myself quite tech savvy and a quick learner, but I’ve never had a need or desire to jump into other operating systems like Linux before, so I’m very new to the space in that regard. Don’t let my username fool you: I’ve never used TrueNAS or ZFS before, but I have watched and read a lot.

I’m going to cut to the chase because I don’t want to scare anyone off with the wall of text I just wrote. You can read all that stuff just below if you need more context for the use case and the environment this is being set up in. I don’t really have a budget in mind; it really depends on what I’m getting out of the deal and what doors it will open for possibilities. I’m thinking option #1 will probably be around $3000 CAD/$2100 USD diskless. I imagine option #2 will be closer to $10,000 CAD/$7000 USD, diskless. The S-45 chassis barebones with redundant 1200W PSU is $4800 CAD/$3300 USD alone, but the lifelong “buy once, cry once” build quality of the 45Drives units is seductive. I’ve heard disk shelves can be full of headaches and more trouble than they’re worth when it comes time to reboot your setup. I’m open to being convinced otherwise; I’m not married to that thought by any means.

Option #1 - A 45Drives HL-15 chassis, with a maximum mobo size of ATX and PCIe card length of 10-ish inches. AM5 platform? TrueNAS (SCALE?)/ZFS on bare metal. The sole purpose of this unit would be to host the Plex media library (not the Plex Media Server itself). No VMs, no bullshit. Slow writes would be acceptable, as 95%+ of use would be reads after the initial transfers in. LSI 9300-16i HBA (incorrect choice?). Something with an iGPU so I can use the remaining PCIe lanes for ZFS special devices or other enhancement drives running on M.2 NVMe or carrier-card high-endurance SSD solutions. Do I even need any of these for this use case? If I don’t, then I wouldn’t mind one or two U.2 NVMe drives on a carrier card for if I ever decide to run all my Linux ISO remuxes through FFmpeg out of desperation to free up storage; that’s a thing, right? Maybe I’d just use it as a “whatever” networked SMB share. I like the idea of running ECC memory. raidz2, 5 wide x 3? raidz2, 7 wide x 2 with 1 spare? raidz1, 3 wide x 5, sounds risky for 24TB drives. I’m thinking z2 is where I want to be with 15 bays for option #1 under this use case?
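For my own sanity I put rough numbers to those layouts with a quick Python scratchpad (assuming 24 TB decimal drives and ignoring ZFS slop/metadata/padding overhead, so real usable space will come in a bit lower):

```python
# Back-of-the-napkin usable capacity for the raidz layouts I'm weighing.
# Assumes 24 TB (decimal) drives; ignores ZFS slop, metadata, and padding overhead.
DRIVE_TB = 24

layouts = {
    "raidz2, 3 vdevs x 5-wide":         {"vdevs": 3, "width": 5, "parity": 2, "spares": 0},
    "raidz2, 2 vdevs x 7-wide + spare": {"vdevs": 2, "width": 7, "parity": 2, "spares": 1},
    "raidz1, 5 vdevs x 3-wide":         {"vdevs": 5, "width": 3, "parity": 1, "spares": 0},
}

for name, l in layouts.items():
    data_disks = l["vdevs"] * (l["width"] - l["parity"])
    bays = l["vdevs"] * l["width"] + l["spares"]
    usable_tb = data_disks * DRIVE_TB
    usable_tib = usable_tb * 1e12 / 2**40
    print(f"{name}: {bays} bays, ~{usable_tb} TB (~{usable_tib:.0f} TiB) usable, "
          f"{l['parity']} drive(s) of parity per vdev")
```

If that math is roughly right, all three land at 216-240 TB usable in 15 bays, so it mostly comes down to resilver risk: raidz1 leaves zero redundancy while a 24TB drive rebuilds, while the z2 layouts still have one drive of parity left.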

Option #2 - A unit I can grow into and expand upon over time without having to build an entirely new storage unit and burn additional rack space after outgrowing the HL-15 and Lockerstor 10. It would be a 45Drives S-45 barebones chassis (again, ATX board max, 10-ish inch PCIe card max length): 45 bays for various pools, configurations, and use cases, including the Plex library. It would not be the actual Plex server, just storage. It would most definitely be a unit to grow into over time as my experience and use cases expand and I learn more and do more with this new hobby of mine! Zen 3 EPYC like a 7443P on something like an ASRock ROMED8-2T/BCM, or a Threadripper Pro 5000/3000 on an ASRock WRX80D8-2T. I have no idea if I should be running TrueNAS on Proxmox for something like this to utilize the VM capabilities a setup like this allows. I have just a smidge above zero experience with Windows Hyper-V, let alone Proxmox. I’m inexperienced and my thoughts on this are probably inaccurate and worthless, but virtualizing a storage server seems risky and introduces complexity that makes me nervous about the integrity of my data should anything go wrong. I guess I’m just not sure how much CPU power I need in the event I want to put all those PCIe lanes/slots to use with U.2 or M.2 NVMe devices, or how many of the 45 bays will ultimately have SSDs in them over the years. Do I keep this unit purpose-built as a bare-metal TrueNAS (SCALE?) hybrid storage unit? I have no problem building a completely new server, once I actually start to dive into that stuff, whose sole purpose would be hosting VMs that I wouldn’t be afraid to break, so to speak.
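To convince myself the lane question even matters, I sketched a hypothetical budget in the same scratchpad (the card list is purely illustrative, not a locked-in plan; I’m assuming the usual 128 usable lanes on an SP3 EPYC / WRX80-class platform):

```python
# Hypothetical PCIe lane budget for Option #2 (illustrative card choices only).
TOTAL_LANES = 128  # assumed for SP3 EPYC / WRX80-class platforms

cards = [
    ("HBA for the 45 bays",           8),
    ("25G NIC",                       8),
    ("4x U.2 NVMe via carrier card", 16),
    ("4x M.2 NVMe via carrier card", 16),
    ("boot / misc M.2",               4),
]

used = sum(lanes for _, lanes in cards)
for name, lanes in cards:
    print(f"{name:<30} x{lanes}")
print(f"Used {used} of {TOTAL_LANES} lanes; {TOTAL_LANES - used} to spare")
```

Even that fairly loaded build only eats about 52 lanes, so on these platforms the practical limit looks more like the seven physical slots on an ATX board than the lanes themselves, if I’m reading the ROMED8-2T spec right.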

Background / context (for those who want it):

My wife loved the Plex server I was hosting on my main rig that I run 24/7, but my hodgepodge of various storage devices quickly filled up and I needed another solution to keep her happy. I had heard about TrueNAS and ZFS only shortly before ordering my current solution, the Asustor Lockerstor 10 Gen 3, along with 10 x 24TB Seagate Exos drives and 4 x 4TB NVMe drives. I thought I was going to love the unit, but as it turns out there is a severe lack of customization, plus hardware/software limitations, that leave me indifferent towards it. I opted for a 10-drive mdadm/btrfs RAID 10. The 24TB drives left me worried about running a RAID 5 of any size. Setting it up as two separate 5-drive RAID 6 arrays seemed silly as well and would have lacked the write speeds I wanted for mixed use. “Accelerating” the RAID 6 arrays with Asustor’s RAID 1 NVMe read/write cache option left me worried about reliability, as Asustor hadn’t exactly instilled confidence in me with their software and design choices thus far. I currently just use NVMe slots 1 & 2 as a RAID 1 boot volume and slots 3 & 4 as SMB temp storage. They run at PCIe 4.0 x1 per slot, roughly 1.2 GB/s over SMB.

So here I am, disappointed, with 240TB of raw storage chopped in half to 120TB (109TiB). It pains me, because I know a TrueNAS/ZFS solution would most probably result in more efficient storage while achieving similar or even better performance, along with all the other benefits. I had a need and I thought the Lockerstor would be fine, and to a point, yes, it is indeed quite fine. It just isn’t expandable in line with the TrueNAS/ZFS approach that I’m now educated on and want to adopt going forward for new builds.

Given my current use case of dropping 50-100GB sequential Linux ISO remuxes onto the Lockerstor, I’d imagine my performance issues probably have something to do with the 64k chunk size ADM decided to use automatically when I specified the creation of such a massive array. I guess Asustor thought that all those 24TB drives would be used to store pictures of cats, grandma’s recipes, or hosting an SQL database or some shit. I probably sound super ignorant right now; I’m just a bit sour about the whole thing, and there is probably a good reason they just set it to 64k, right? At no point was a chunk size option presented in ADM’s GUI, still my own fault of course! Perhaps my expectations were too high and performance is where it should be for a 10-disk RAID 10, but I have severe doubts: 450MB/s reads and writes to a PCIe 4.0 x4 M.2 (on my PC) over dual-10G SMB multichannel with 50GB+ files. I can max out the M.2 slots on the Lockerstor over SMB at around 1.2 GB/s, which I’d imagine is pretty close to the max after overhead; 2GB/s is the theoretical max for the PCIe 4.0 x1 slots the unit uses.
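Here’s the napkin math that makes me suspicious (all assumed figures, not benchmarks; I’m guessing around 270 MB/s sustained sequential per Exos-class drive):

```python
# Rough sequential ceiling for a 10-drive RAID 10, using assumed per-disk numbers.
per_disk_mbs = 270      # assumed sustained sequential MB/s for an Exos-class drive
mirror_pairs = 5        # 10 drives in RAID 10

write_ceiling = per_disk_mbs * mirror_pairs       # each mirror pair writes the data once
read_ceiling = per_disk_mbs * mirror_pairs * 2    # best case, if reads balance across both mirrors
single_10g_link = 1250                            # MB/s before SMB/TCP overhead

print(f"Sequential write ceiling: ~{write_ceiling} MB/s")
print(f"Sequential read ceiling:  up to ~{read_ceiling} MB/s (optimistic)")
print(f"Single 10G link:          ~{single_10g_link} MB/s before overhead")
```

If those assumptions are even in the ballpark, 450 MB/s is nowhere near what the spindles should manage sequentially, which is why I’m pointing the finger at the chunk size, btrfs, or SMB tuning rather than the disks themselves.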

So here I am asking for advice so I don’t make the same mistake a second time! I have decided I will reconfigure and repurpose the Lockerstor for another use once I build something new from the ground up that has more buttons and knobs to push and turn. Right now I’m leaning towards these two ideas and I need a “dumbass” check!

Okay, network gear! Be nice! I know UniFi gear is seen as limiting or inadequate in some circles, and their “L3” switches are supposedly “L2” switches masquerading as “L3”. I haven’t hit the point where I feel those limitations yet, as I’ve just started this new, lifelong networking/homelab hobby. I know my setup is laughably overkill for its current use case/workload; I have a problem, I guess.

42u 39" depth rack
UDM Pro Max (10G cloud gateway)
USW Pro Aggregation (10G/25G)
USW EnterpriseXG 24 (10G/25G)
USW Pro Max 24 PoE (2.5G/10G)
USW Ultra
USW Flex
2 x U7 Pro Max APs
1 x E7 AP
2 x AI Pro cams
2 x G5 Pro + enhancers
4 x G5 Turret Ultra
2 x G5 PTZ
2 x G5 Bullet
1 x G4 Instant
1 x G4 Pro
1 x G4 Pro Doorbell
2x 1500VA/1000W UPS
Asustor Lockerstor 10 Gen 3 - 10x 24TB Seagate Exos - 4x 4TB Samsung 990 Pro NVMe - (purchased for TBW rating and future use outside unit, also best sale/value purchase at the time)
USW Aggregation - not racked or in use, replaced by Pro Agg
UNVR Pro - populated with 3 x 24TB drives - A little peeve of mine: if I had it my way these would be YEET’ed into a RAID 0 for maximum scrub-through performance and retention duration, even though it means no redundancy. Any footage we’re interested in keeping would have already been archived! All other footage inquiries would be purely for curiosity or creative use and wouldn’t necessarily be missed. The fact that UniFi only offers RAID 1, RAID 5, and RAID 10 on this thing seems silly to me.

So…ya! Any input would be greatly appreciated!


Thoughts on Option 1

AMD iGPUs are not as capable as Intel iGPUs when it comes to media encoding/decoding (media engines). For an AMD CPU setup, something like an RTX A3000 could be a great choice to handle that workload. Additionally, AM5 might not be the best platform if you need lots of PCIe lanes for expansion. I also believe ECC memory is essential for TrueNAS to ensure data integrity.

Thoughts on Option 2

Personally, I wouldn’t virtualize a storage server. I know opinions are divided on this topic, but I prefer keeping storage dedicated and bare metal.

If you’re open to suggestions, I’d actually recommend splitting your setup into two (or even four) servers. For instance, you could build one storage server running TrueNAS and another running Proxmox. You could even create a cluster using something like three Minisforum MS-01 workstations with their excellent iGPUs or set up an Intel NUC cluster. There are even 1U rack mounts available for setups like this.

With this approach, you’d have a dedicated cluster for Plex and other workloads, while your storage server provides the media files.

When dealing with projects of this scale, it becomes harder to find a true “all-rounder” system that does everything well. Separating roles might be a more efficient and flexible solution.

Lastly, a tip: Avoid Windows Server with Hyper-V. Its performance is noticeably worse compared to VMware. I can’t comment on Proxmox in direct comparison, but I’ve heard good things about it as well.

The Plex database, metadata, etc. should run on a pure SSD pool for performance reasons. I’ve heard of cases where Plex became extremely slow because certain data wasn’t in the SSD cache and had to be read from the disks repeatedly.