Help with first EPYC Homelab Build

Hey guys, sorry, this will be kind of long. I currently have a few pieces of hardware: an old PC running OpenBSD as my router/DNS/DHCP/VLAN box, a 5820K PC running my FreeBSD/ZFS file server (8 disks in mirrors) plus Plex, and a newish 2700X/16GB mini-ITX build I use as a dev box and Proxmox playground.

My network infrastructure, aside from the router, is all UniFi APs and managed switches.

Looking to get my first piece of enterprise server gear for a massive Proxmox lab, and potentially consolidate away my FreeBSD file server, by getting an EPYC-based server. Primarily looking at Newegg's pre-assembled 32-core/H11SSL-i package along with 256GB of RAM.

Have run into several questions that make me nervous before I lay down that type of cash.

  1. I have zero experience with Supermicro and have no idea what RAM is compatible. Newegg sells a Nemix kit that I think would work, but I'm not sure.

  2. I'm familiar with ZFS on BSD, but less sure of the viability of ZFS on Linux/Proxmox. On BSD it's a first-class citizen; I'm not sure where it stands in the Linux world.

  3. If I were to consolidate my file server, I have two options. One is to pass through my hard drives (do I need an HBA, or can I just use the SATA ports on the motherboard?) and run FreeBSD as a VM. The other is to use Proxmox's native ZFS and map the drives into whatever VMs need the data. Which is safer? Should I just keep the file server as a separate bare-metal box?

  4. If I do use Proxmox's native ZFS support, the default recommendation is to give ZFS half the system RAM. Do I really need to hand it 128GB? The system it's in now has 32GB total, most of which is available for ZFS to do its thing.

  5. Is a Noctua NH-U14S enough cooling for an EPYC part in this role? I want to build a silent-ish server; it's going to sit under my desk in a Fractal Design Define R6 that I'm repurposing.

  6. I love my OpenBSD router, but I'd like to shrink it from a PC mid-tower to something smaller and more energy-efficient. Only real requirements: silent or near-silent, able to route/firewall a 1Gbps line, and dual Intel NICs. I also run dhcpd and unbound as a resolver, but those are lightweight too. Suggestions welcome.
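On point 4, for reference: on Linux the ARC ceiling is just a module parameter (`zfs_arc_max`, in bytes), so you aren't locked into the half-of-RAM default. A minimal sketch, assuming a hypothetical 16GiB cap (pick whatever fits your workload):

```shell
# Compute the byte value for a 16 GiB ARC cap (hypothetical size).
arc_gib=16
arc_bytes=$((arc_gib * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${arc_bytes}"
# Put that "options zfs ..." line in /etc/modprobe.d/zfs.conf so it persists
# across boots (on Proxmox/Debian, run update-initramfs -u afterwards),
# or apply it live without a reboot:
#   echo ${arc_bytes} > /sys/module/zfs/parameters/zfs_arc_max
```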

Supermicro publishes memory compatibility lists: go to https://www.supermicro.com/en/products/motherboard/H11SSL-i and click "Tested Memory List".

From what I understand, in most distros ZFS is a second-class citizen, although on Proxmox and Ubuntu it is first class. The issues are not about the stability of the filesystem itself; it's that in most distros the kernel module is compiled on the user's machine (via DKMS), and that can fail a lot more easily than installing a binary. It also makes running a custom kernel more difficult, since you have to make sure the ZFS module compiles against it.
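To see whether that compile step actually worked on a given box, the usual checks look something like this (illustrative only; `dkms status` applies on distros that ship ZFS as a DKMS package, while Proxmox and Ubuntu ship prebuilt modules):

```shell
# Did the ZFS module build for the running kernel, and does it load?
dkms status zfs                              # built module versions per kernel (DKMS distros)
modprobe zfs                                 # fails loudly if the build didn't happen
cat /sys/module/zfs/version                  # version of the module actually loaded
```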

I hope it’s not too late to reply.

I have been running my NAS in an Ubuntu VM with ZFS (Proxmox as the host). It is stable and flexible, and I have yet to have any issues. The disks are passed through following the method detailed in the Proxmox docs. It works, and per the little bit of fio testing I did, performance is the same as when I ran it on bare metal some time ago. Best of all, since you pass the disks through by ID, it is entirely possible to export the zpool and import it into a bare-metal setup in the future if you really want to.

One thing to note: I observed that ZFS performance drops drastically if memory ballooning is enabled on the VM. I have 6×4TB in a striped mirror setup (12TB usable storage). I assigned 12GB to the VM and see no difference in performance between 8GB, 12GB, and 16GB of RAM. I have also modified the ZFS options to use 75% of the RAM available to the OS.
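The by-id passthrough mentioned above boils down to one command per disk. A sketch with a hypothetical VMID and disk serial (list your real IDs with `ls -l /dev/disk/by-id/`); this version only prints the command it would run:

```shell
vmid=100                                                 # hypothetical VM ID
disk="/dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE_SERIAL"   # hypothetical serial
# Attach the whole disk to the VM on SCSI slot 1 (drop the echo to run it for real):
echo qm set "$vmid" -scsi1 "$disk"
```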

Regarding the other points you raised: I will be setting up an EPYC box in the next day or two with an ASRock Rack EPYCD8, an EPYC 7451, and 256GB of Samsung ECC RAM. I plan to stuff it all into a 2U rack-mountable case, so I will be using a low-profile cooler (Dynatron A26). I will share my experience. If my tiny Dynatron can cool it, a Noctua definitely can too. I won't be running it at 100% load 24/7 anyway; your requirements may vary.

I will update as soon as I have my EPYC Box up and running.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.