Mini ITX PC - NAS Conversion Suggestions

Hello,

I have an NZXT H1 case with the following hardware in it:

  • Gigabyte X570i Aorus Pro
  • AMD Ryzen 9 3950X
  • 32GB 3600MHz DDR4 RAM
  • 1TB NVMe Gen4 SSD
  • Nvidia GTX 1080Ti

It was built from some leftover parts from when I upgraded my main PC and was used for Linux development (RHEL7). However, I'd rather have it as an all-in-one server that can be used for storage, dev VMs and some light media streaming within the home network.

The first thing I'd do is upgrade the network from 1GbE to 10GbE, so that every PC on the network has a 10GbE link to the NAS. That's the straightforward part.

The not so straightforward part is the actual OS I'd be running, as well as the storage that will be used and its layout. As for my requirements: I'm not much of a hoarder, so I don't expect to need a lot of storage space, but I'd like the hardware to be able to saturate the 10GbE link when needed. I don't need redundancy per se - it would be nice to have, but I'd rather have the performance, since the data on the server isn't critical and I don't care if it's lost a few years down the road (that's just my view at the moment, it might change later).
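Here's the back-of-the-envelope math I'm working from (just a rough sketch - the throughput numbers below are ballpark assumptions, not measurements):

    # Rough check of what it takes to saturate a 10GbE link.
    # All throughput figures are ballpark assumptions, not measurements.
    link = 10e9 / 8      # 10 Gbit/s ~= 1250 MB/s
    sata_ssd = 550e6     # ~550 MB/s sequential per SATA III SSD
    nvme_gen4 = 7000e6   # ~7000 MB/s for a fast Gen4 NVMe drive

    print(f"10GbE link:   {link / 1e6:.0f} MB/s")
    print(f"1x SATA SSD:  {sata_ssd / 1e6:.0f} MB/s ({sata_ssd / link:.0%} of the link)")
    print(f"2x SATA SSDs: {2 * sata_ssd / 1e6:.0f} MB/s ({2 * sata_ssd / link:.0%} of the link)")
    print(f"1x Gen4 NVMe: {nvme_gen4 / 1e6:.0f} MB/s (saturates it on its own)")

So a single SATA SSD won't keep the link busy on its own, two of them striped get close, and any decent NVMe drive covers it easily - hence my focus on performance rather than parity.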

Since the H1 can only fit two 2.5'' SATA drives in its drive bay (a third one could most likely fit somewhere and dangle around) and I'd like to avoid spinning rust, I'm leaning towards two 4TB SATA III SSDs. Here are the contenders:

  • Samsung 870 EVO - $400
  • Crucial MX500 - $400
  • Micron 5300 PRO - $600
  • SanDisk Ultra 3D - $450
  • Seagate IronWolf 125 - $900
  • WD Blue SSD - $500
  • WD Red SA500 SSD - $580
  • Intel DC S4510 - $450

From the price alone I’d choose either Samsung 870 EVO or Crucial MX500, but I’m not sure if they’d play nice with whatever OS I’d use.

From what I’ve gathered I have a bunch of options, but the 3 most common ones would be:

  • UnRaid - Either with the ZFS plugin or XFS/BTRFS natively. As far as I can understand (and I've seen a bunch of conflicting reports) I can't really use SSDs in an array (due to TRIM being unsupported). I can use them as a cache pool where TRIM will work fine, although I'm not sure of the implications - will that data be non-volatile and remain there after shutdowns or power losses? Can cache pools be striped for better performance? What if I decide that I want parity and add a third drive (I know that it has to be an HDD)? Would I need an NVMe SSD as a cache… for the cache pool?

  • TrueNAS Core/Scale - From everything I've seen I should be okay with it for my intended use case (not completely sure though), although I'd prefer UnRaid due to its UI and user-friendliness.

  • Any Linux distro with simple SMB shares configured - Easiest for the simple use case I have (a rough sketch of the share config is below), but I'd have to do most of the work manually when it comes to VMs and media streaming.
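Just to illustrate the last option - the path and group name are placeholders I made up, and the stanza still has to be pasted into /etc/samba/smb.conf by hand, followed by restarting smbd and adding users with smbpasswd -a:

    # Minimal Samba share stanza for the "plain distro" option (sketch only;
    # path and group are placeholders). Printed here rather than written to
    # /etc/samba/smb.conf so nothing gets clobbered by accident.
    share = """
    [tank]
        path = /srv/tank
        browseable = yes
        read only = no
        valid users = @nas
    """
    print(share)

The harder part, like I said, is everything around it (VMs, media streaming), which Unraid and TrueNAS give you out of the box.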

I’m still very new to this and could use some suggestions, either based on the stuff I mentioned or something completely different.

Thanks.

Here’s a curve ball for consideration:

Purchase a PCIe to 4x M.2 NVMe card (about USD 40-50 on AliExpress), then populate it with 4x 2TB NVMe drives. Use LVM to create two pairs, which are then put in a RAID1. This gives you redundant storage for just a little more money than a single drive from your list. Bonus: your SATA bays remain available! Obviously, said M.2 drives are Chinese; these are on sale ATM:

(I got one 2TB drive on order, will be a few weeks before it arrives)
You'll need x4/x4/x4/x4 bifurcation on the x16 slot for all four drives to be seen/registered. I'd suggest using JFS for the arrays: performance-wise it's almost on par with XFS, but with better (more complete) tools.
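Roughly what I have in mind, as a sketch - the device names and sizes below are placeholders (check lsblk for yours), it assumes root plus the lvm2 and jfsutils packages, and it will wipe the drives:

    # Sketch: 4x 2TB NVMe -> two mirrored pairs using LVM RAID1, JFS on top.
    # Device names and sizes are placeholders - adjust before running.
    import subprocess

    drives = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1", "/dev/nvme4n1"]

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("pvcreate", *drives)            # initialise the drives as LVM PVs
    run("vgcreate", "nas", *drives)     # one volume group across all four

    # two RAID1 logical volumes, each mirroring one pair of drives
    run("lvcreate", "--type", "raid1", "-m", "1", "-L", "1.8T",
        "-n", "pair_a", "nas", drives[0], drives[1])
    run("lvcreate", "--type", "raid1", "-m", "1", "-L", "1.8T",
        "-n", "pair_b", "nas", drives[2], drives[3])

    # JFS on top, as suggested above
    run("mkfs.jfs", "-q", "/dev/nas/pair_a")
    run("mkfs.jfs", "-q", "/dev/nas/pair_b")

If you'd rather have one big volume instead of two separate pairs, LVM's raid10 type gives you the striped-and-mirrored layout in a single lvcreate.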

HTH!

PS: welcome to the L1T forums!

Forgot to mention that I'd like to use GPU passthrough for VMs (to continue working on RHEL7 or other distros where I need the Nvidia GPU for CUDA development).

I already had to figure out how to get a 10GbE NIC into that machine, since the only PCIe slot is already occupied; my current approach is an M.2 to PCIe riser that the NIC slots into.

Thanks for the welcome!

Sorry, I thought it was an ATX mainboard. My bad! :heart_hands:

No problem! It’s definitely an interesting idea and if I decide against using the GPU for whatever reason I might give it a shot.

it might not be a terrible idea getting a cheap, like $50, ATX case, even if that's all you get, just to get everything put together

let me think on it

This, or perhaps you could even consider getting something like this:

It measures 310 x 305 x 221 mm or 21 liters, which is larger than the NZXT H1 (405 x 196 x 196 mm), but not that much larger.

I would also like to mention the Fractal Design Node 304, but this one sadly seems to have gone out of print. :frowning: Being able to mount up to 12 SSD drives (though, you need to leave them hanging), four per hanging enclosure, is definitely awesome, and could be a great case to transfer to. The bigger mATX sibling Node 804 is available though and it, too, makes an awesome NAS case - but it is probably too big for your taste.

Unfortunately, due to the space constraints the case needs to be a vertical one. :frowning:

I know it’s not much to work with, but I already made peace with having only 2 or 3 drives available. That’ll have to do until it can all be migrated into a bigger case.

305x221 is too big of a footprint then, I take it? That case can be flipped so the right side faces down; just make sure the air exhausts on the back are free to ventilate.

If vertical is a requirement, perhaps the Jonsbo N1 fits the bill? Linus Sebastian from LTT did a build in it a couple of years ago.

It is; I chose the H1 as something that can fit on the table while still leaving enough space for other random accessories.

The Jonsbo N1 looks nice, but it doesn't seem to be able to fit a GPU that isn't single-slot.

Thought this post deserved an update. It was quite an adventure.
(skip to TLDR if you don’t like fun)

There were some hardware changes that were initially unplanned, but were needed for one reason or another. I ordered two 870 EVO 4TB SSDs and found a Kingston KC3000 4TB NVMe for a really good price (same as the 870 EVO), as well as some PSU cables. The extra cables were needed because the M.2 to PCIe x16 riser is powered by a SATA power cable and the NZXT H1 only came with two of those (both needed for the SATA SSDs).

I also figured that I could buy a smaller GPU since prices are finally back to normal, at least for the RTX 30 series. So I bought an RTX 3060Ti as well to replace the enormous 1080Ti. The main reason for it was to have some space above the GPU where I could mount the 10GbE NIC (initially it was supposed to hang out of the case, since the riser is like 50cm long).

Fast forward a few weeks, when most of the stuff had arrived - I decided to clean the case and the built-in 140mm AIO so I could do a partial build and play around with Unraid. The cables weren't here yet, so I used a single SATA SSD and the other SATA power cable went to the 10GbE NIC's riser. Once everything was assembled (just hanging all over the place, since the case couldn't be closed) I entered the BIOS just to check that everything was set up properly and noticed the CPU temps were at ~80°C. No big deal, I thought - the 140mm AIO is at low RPM and isn't really cooling anything at that point… and then I got a hard shutdown before managing to exit the BIOS. Tried again and got the same shutdown within a few minutes. Note that this PC was used over the past two years for long code compiles (C++ and templates, eh) and never once overheated or even felt hot (it sits on the table, 30cm away from me).

After a bunch of troubleshooting, deciding to call it a day and disabling turbo boost to keep the thermals in check, I found some references to the H1's AIO getting clogged with some random shit after being tilted for maintenance. I decided to open the pump and check if that was the case, and in the meantime stress test the CPU with a spare NH-D15 (it obviously can't fit in the H1, so it was an open test bench). And lo and behold, the random shit was there and the pump was fully clogged. No reason to clean it up and refill it - I just threw it in the trash, since that's what it is. The CPU stress tests passed and everything was okay with the hardware.

At that point, since 140mm AIOs were nowhere to be found and I couldn't really deal with an RMA, I thought about getting a new case with a similar footprint to the H1 that could fit a larger AIO… or I could just bolt a larger AIO to the back of the H1. I ordered a 240mm AIO, figuring that if I couldn't hack something together I'd order the case as well and be done with it.

Fortunately, installing the 240mm AIO was super easy, with the radiator bolted to the back of the case. The holes on the back panel were the perfect size and even lined up with the radiator's mounting holes, so no case modifications were needed. It doesn't even look that bad and it has the same footprint as before. The GPU sits securely in its chamber and the NIC fits above it (super happy about that, although it's a bit janky at the moment and I might revisit it). I also routed an Ethernet cable extension from the NIC to the back of the case for easier maintenance. The panels could be closed and everything had great thermals (well, the NIC could use a bit more air).

TLDR: Upgraded some parts. Everything went well in the end, but there were problems every step of the way. Never buying NZXT AIOs again (the built-in 140mm AIO in the H1 is awful and gets clogged with goo).

In the end, the hardware is as follows:

  • Gigabyte X570i Aorus Pro
  • AMD Ryzen 9 3950X (cooled by Corsair H100i Elite)
  • 32GB 3600MHz DDR4 RAM
  • Nvidia RTX 3060Ti
  • Kingston KC3000 4TB NVMe (first M.2 slot)
  • 2x Samsung 870 EVO 4TB
  • Asus XG-C100C 10GbE NIC (second M.2 slot, through an M.2 to PCIe x16 riser)

Didn’t get much time to set up Unraid, but the current plan is to have two pools of storage:

  • Slow(er) storage (2x 870 EVO 4TB) - BTRFS, mirrored and used for long term storage of whatever I decide to keep
  • Fast storage (4TB NVMe) - BTRFS, used by VMs and Docker containers, as well as a cache for the SATA SSDs

Thanks for coming to my TED talk. Oh and here are some pics.


