[Build review] Planning AM4 home server, NAS + light virtualization

I haven’t tried hAP ac3 … I did use hAP ac2 for a while, it’s a similar chip, wifi is … not as good as one might expect.

I’ve been using a pair of U6-LRs in my house for the last year-ish … comparatively they’re amazing. Even with 2x2, and even with 802.11ac clients, I was routinely seeing anywhere between 500 and 700 Mbps, whereas the hAP ac2 managed more like 200-320 Mbps with that same client.

  1. the ac3 promises an improved antenna over the ac2 (3 dB, which is roughly double the power)
  2. if I ever get 10 gig networking I will probably get MikroTik switches, so it’s same ecosystem
  3. I’m a CLI guy (seriously, I don’t even use a file manager on Linux) and the console interface in RouterOS speaks to me

All Ryzen CPUs without an iGPU support ECC fully. The motherboard is the only determining factor. I was running ECC in my home server with a 3600 and then a 3900X on a consumer Asrock B550 board, and it worked fine.

One thing I would be concerned about is that if you want to use a GPU in the build for anything, you’d be close to using all of the PCIe lanes. Might be worth going for a consumer X570S board instead? ASRock supports ECC on all of their consumer boards. You’ve then got a bit of extra budget for a better CPU.

Also been using this for the last year and a half. Got proper Wifi 6 support in a firmware update a few months ago, which has made it amazing value. Don’t have any complaints about it.

I ended up changing my mind to X470 later on. Those two NICs are not worth the inability to mirror the boot pool.

Plex is not in the foreseeable future. If I wanted to set up some sort of screen-wall setup… nah, I’d need a second machine. Outside of those two, is there any other reason for a GPU? Maaaaybe if I got into GPU compute coding, I could see that one actually happening. But eh, that’s what my main rig is for.

As for consumer boards… hm. I kinda like the idea of IPMI. And I probably won’t be saving much. There are absolutely no good mATX boards, and I don’t really want to move up to a bigger case. Half the issue is that the Ryzen platform is simply limited in the number of PCIe lanes (and thus slots).

Ooooh! Cool! That’s great news. Although that IPQ-4019 seems like a pretty old chip (or Qualcomm is just skimping by using ancient CPU cores). I wonder if the WiFi will beat my ISP’s Arris. I think the only WiFi 6 client in my house is my phone, which is usually off WiFi. Still cool though.

I’d suggest you skip the motherboard RAID support (on consumer boards). It’s simply software RAID with hardware lock-in, and Linux software RAID is better in basically every way (I’ve only had bad luck with Intel’s equivalent). Use ZFS mirroring or mdadm instead if you want software RAID; you get far better manageability and monitoring.
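In case it helps, a minimal sketch of both routes; the pool name, array name and device paths below are just placeholders for whatever disks you end up with:

```bash
# ZFS mirror ("tank" and the disk IDs are placeholders)
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
zpool status tank            # health, errors and resilver progress in one place

# mdadm RAID1 equivalent (again, device names are just examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
cat /proc/mdstat             # array state, no vendor tooling needed
```

Either way the array moves with the disks to any Linux box, which is exactly what the motherboard fakeraid won’t give you.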

“allows me to mirror the boot drive” as in it has two M.2 slots, so I can have two drives without wasting a SATA port or a 2.5" position in the chassis.

Are there really folks out there who’d use motherboard RAID in this situation?

So, now my plan has changed - it turns out TrueNAS SCALE is more locked down than I’d want it to be. So, I’m going with Proxmox as the host. I still want TrueNAS for the NAS part of the server. Also, I’m starting to have my doubts about the Hyper card - it is nice, but it would be much nicer if I had PCIe lanes to spare. Hyper card, NIC, and I’m completely out of PCIe slots.

But:

  1. Apparently keeping VMs in the same zpool as Proxmox itself isn’t the best idea; gotta figure out how bad, though
  2. Putting VMs on a file share via NFS seems monumentally stupid in this setup
  3. Setting up TrueNAS’ storage pool in virtual drives seems stupid as well.
  4. I’d need more drives?

So, my current train of thought is:

  • Two of those Kioxia drives in the slots off the X470, in a mirror, for VMs and possibly Proxmox itself
  • Some combination of MX500s connected to SATA ports off the X470 for TrueNAS. Or heck, maybe even (eww) rust?
  • If I really need something separate for Proxmox, it’s getting two 2.5" SSDs connected to the ASM1061

Not that ASRock Rack tells me which SATA ports are which… Cause fuck you. Hopefully I will be able to tell by the traces on the PCB. Or somehow.

I’d only describe that as not-optimal. It limits flexibility, and if you run a disk-heavy workload it may impact Proxmox. However, nothing you’re running looks disk-heavy (think large database, or ELK), so you’ll be fine.

And if you want to change your mind later, you can perform a storage migration with Proxmox while the VM is on (aside from the downtime to install the new disks).
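For what it’s worth, that’s a one-liner from the CLI too; the VM ID, disk name and target storage here are made-up examples:

```bash
# Move a running VM's disk to another storage and drop the old copy afterwards
qm move_disk 100 scsi0 target-storage --delete
# newer Proxmox releases spell this "qm disk move"; the old form remains as an alias
```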

Maybe set up a ZFS dataset on the Proxmox host, and export it to TrueNAS via NFS or SMB. (Can TrueNAS SCALE use virtiofs as a client?)
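Something like this is probably all it takes on the host side; the dataset name and subnet are placeholders, and it assumes nfs-kernel-server is installed on the Proxmox node:

```bash
# Create a dataset on the host's pool and publish it over NFS
zfs create rpool/share
zfs set sharenfs="rw=@192.168.1.0/24" rpool/share
showmount -e localhost       # confirm the export actually shows up
```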

You could also consider a regular linux distro. With VMs and Containers you can keep everything relatively well contained.

I’m running Proxmox and I have a TrueNAS VM with the SATA controllers passed through. The boot drive is a cheap external USB 2.5" SSD, so the host doesn’t need any SATA ports.

You could install NFS on Proxmox and use Proxmox as a fileserver. Proxmox has ZFS after all. But TrueNAS for storage is much more straightforward. I prefer keeping Proxmox hypervisor-only and don’t install any software on the host.

I’m pretty much resigned to just having separate physical drives for Proxmox and TrueNAS

Yeah, I can’t really pass through the SATA controller, as I’m using chipset ports. Hopefully passing through just the drives will work fine.
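For anyone landing here later, this is roughly what passing just the drives looks like; the VM ID, bus slot and disk ID are placeholders:

```bash
# Find stable identifiers for the drives
ls -l /dev/disk/by-id/ | grep -v part

# Hand a whole physical disk to the TrueNAS VM
qm set 100 -scsi1 /dev/disk/by-id/ata-CT1000MX500SSD1_SERIAL
```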

Didn’t work well for me: no SMART or other drive data, and degraded performance. This is why people choose proper boards or an HBA to make this work. The X570D4U has excellent IOMMU groups, super easy.

Otherwise, there are PCIe SATA controllers available (HBAs or similar devices) if all else fails.

So, I need an HBA? No Hyper card for me, sadly. Urgh, the PCIe situation is atrocious. Speaking of, I’m still taking the X470D4U over the X570D4U - the x4 slot is more usable than the x1, even if it sacrifices some speed on the M.2s.

I spent the last hour looking at various Intel options, and the nice Supermicro boards are either Skylake, so too old, or Alder Lake and use DDR5. I don’t want to know the pricing for ECC DDR5.

So, a quick update to the parts list, in light of needing an HBA.

Base platform

VM pool

Storage pool

  • LSI 9211-8i, most likely (thanks @Log for pointing me in the right direction)
  • 3x1TB MX500s

On the other hand, I have started having doubts about the case - sure, it’s super small (22L), but it will be a pain to work in. Plus, if I tap out those spaces inside, I’m done for. Silverstone SG11 or SG12 seem tempting right now, but I wonder, with those SSDs packed as tightly as they will be, won’t they run hot?

Another thought: imagine posting a freaking NAS build to r/SFF? People will freak out :stuck_out_tongue:

For the money you’re going to spend, I’d look at the used server market, where you’d get a lot more for your money. You can find great bargains in auctions, especially by bidding in the last few seconds so the price can’t jump too much. I just purchased a Supermicro FatTwin 4-node, 36-bay server today: 4 nodes, each with 2 CPUs and 128 GB of ECC memory, no HDDs, for a tad under $1026, with a 100K+ PassMark score. That’s 80 cores, or 160 virtual CPUs. You get redundant power supplies and typically multiple Ethernet ports. Each node in what I just purchased has 2x 10Gb ports as well as a dedicated RJ45 IPMI LAN port. Each node also has a SAS9207-8 included.

You certainly don’t need to go that big. You could be really happy with a nice score on a Dell R730, for example. Enterprise gear gives you functionality you wouldn’t otherwise have, such as iDRAC or IPMI, which lets you manage the computer even when it’s off: it’s a small management computer that also provides remote drive mapping and KVM functionality. I couldn’t imagine using servers without IPMI anymore.

I’m sure it’s easier to purchase equipment off eBay in the States than in Poland, but a lot of UK equipment is available and it never hurts to look. I’d explore this for no other reason than to have enterprise gear rather than consumer gear in your lab, especially when thinking about virtual machines or containers.

That motherboard actually has IPMI.

And I want something relatively silent, as this server would be in my bedroom. So rack units, unless low power, are a bit out of the question.

The used server market in Poland… plain sucks. For the price of a Dell T420 here, I can get a T430 on eBay from within the EU, so no import taxes.

So, I came back to necro my own thread, as I’m much closer to being able to afford the server.

After looking over all of my options, it turns out that bifurcation just won’t work out (I couldn’t find a platform supporting enough PCIe slots and bifurcation at the same time within my budget). So, my primary storage will be 2.5" SATA drives.

Software stack

This has been, more or less, decided.

Host OS - Proxmox

Two reasons:

  • first class LXC support
  • native ZFS

As for Docker, I actively do not want it on the host - had some issues with it mucking with iptables, so I’ll throw it into LXC or a VM.
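For reference, running Docker inside a Proxmox LXC container mostly just needs nesting (and keyctl) enabled; the container ID below is a placeholder:

```bash
# Allow nested containers in an existing LXC, then restart it
pct set 200 --features nesting=1,keyctl=1
pct reboot 200
# Docker's iptables rules then live inside the container's own network namespace
# instead of touching the host firewall
```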

file sharing - TBD

What I want is not a full NAS distro (like TrueNAS), but rather a share management solution I can run in a container. Maybe I can do it with openmediavault, maybe Nextcloud or ownCloud - doesn’t really matter (suggestions welcome), as long as

  • it doesn’t want to manage the drives
  • has a nice web GUI for managing users and shares
  • can be mounted under Windows, preferably as a drive
  • works in a container
  • supports Backblaze or a similar cloud backup solution

Storage

Right now it’s looking like this:

Data (files and VMs): 6x1TB Samsung 870 Evo in RAID-Z2

  • I’ve recently learned that MX500s tend to run hot and overheat in such environments
  • yes, I know I’ll lose a third of it to redundancy, but it’s at the right balance point between my needs and budget (rough pool sketch below)
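Rough shape of that pool, for the record (pool name and drive IDs are placeholders); with RAID-Z2, two of the six drives’ worth of capacity go to parity, hence roughly 4 TB usable out of 6 TB raw:

```bash
# 6-wide RAID-Z2; ashift=12 for 4K-sector SSDs, disk IDs are placeholders
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_A \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_B \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_C \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_D \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_E \
  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_F
zfs list tank                # should report roughly 4 TB usable
```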

OS: 2x250GB Samsung 870 Evo in mirror

Large enough, that’s about it

High IOPS

If I do decide to run builds on this machine, it’s storage I won’t mind losing, so I can just throw a single M.2 NVMe drive in there somewhere.

Platform

Now, to the meat of the issue.

First, let me preface this by saying that I do not want a rack server - no matter what, it won’t be quiet enough to put in a bedroom.

I’m currently hemming and hawing between two solutions, a refurb T630 or a new AM4 build.

AM4

This is the interesting part:

  • ASRock Rack X470D4U - it’s cheap, has IPMI and ECC support, does what I want, even if I’m limited to PCIe3
  • R5 3600
  • 2x16GB 3200 MHz ECC DDR4 UDIMMs, or maybe 2x32GB, depending on budget
  • Silverstone SG11 - I wrote more about it here
  • whatever SFX PSU I can get, either SST or Corsair

AM4 Pros

  • small (270 mm (W) x 212 mm (H) x 393 mm (D), 22.5 liters)
  • starting spec is faster than what I’ll be able to get with T630 in my budget
  • whenever I upgrade my personal PC there’s a 5900X I can put in here
  • x4x4x4x4 bifurcation
  • this is what I really want

AM4 cons

  • more expensive
  • specifically, the memory is expensive
  • much less IO
  • not rack mountable

T630 16xSFF

This probably doesn’t need much comment, does it?

  • waaaaaaaay more IO
  • I can start it out cheaper
  • quad channel memory
  • redundant PSU (do I even need it?)

Cons

  • huge and heavy
  • won’t ever reach the performance of a 5900X (even dual 2690v4 will be a bit slower)
  • 99% chance I won’t get the front fan assembly, so no passively cooled cards

My thought process

Case and size

First of all, I’m really in love with that tiny Silverstone case. Also, while I have the room now, who knows how much space I’ll have in the future - so I’d prefer something easy to fit in a small space.

IO

At the beginning this may seem like a big deal. But unless my pay rises dramatically in the future, I won’t have the money to actually utilize it. And if it does, leaving this box behind won’t be an issue. What’s happening here is that with AM4 I’m maxing out the on-board SATA ports, but at worst I can throw in an HBA later on. What would this box really need apart from an HBA or two and a NIC? And x8+x8+x4 should be enough for that.


TLDR: I’m probably just rationalizing going with AM4 because that’s what I want. But maybe this will help someone, and maybe someone can comment on this.

AM4 certainly has a lot of perks. It’s what I’m running. I have mine rackmounted in a Sliger case, so I wouldn’t worry if you’re toying with racking it.

I would change your processor to something with onboard graphics, otherwise you will need a dGPU. I’m currently shopping eBay for a Ryzen 5000G-series processor so that I can free up my Radeon 5700 for VM use.

The Ryzen will eat the T630 for lunch. If you actually plan on using this thing, go for the Ryzen or, since you are going to be buying all of this stuff anyway, 1st-gen EPYC is also a great route. PCIe 3.0, but for what most home gamers are doing that is plenty, as you want the lanes more than the speed.

Hah, thanks for the reply, but I actually ended up going with EPYC. A friend alerted me to relatively cheap EPYC Rome CPUs on eBay. I’d link the other thread but I’m on mobile.

That said, I wouldn’t go with a non-Pro Ryzen G because of the lack of ECC support. And there’s no need for a GPU if you buy a board with IPMI/BMC - the BMC fulfills that function.
