One-box homelab

I’m planning on building a one-box-does-it-all homelab server using Proxmox, and I just want to post here before I hit go on anything, to make sure I’m not being an idiot:

Budget is around £5k

| Category | Part |
| --- | --- |
| Case | 45HomeLab HL15 |
| CPU | AMD EPYC 7402P |
| CPU cooler | Noctua NH-U14S TR4-SP3 |
| RAM | 8× 32 GB DDR4 ECC |
| Motherboard | ASRock Rack ROMED8-2T |
| HDDs | 7× Seagate Exos 28 TB |
| HBA | LSI 9300-16i |
| ZFS SLOG | Intel Optane 900P |
| ZFS special vdev | 2× Samsung 990 Pro 2 TB |
| Proxmox boot drive | 1× Crucial P3 Plus 1 TB |
| UPS | APC SMT2200RMI2U |

The plan with the storage is as follows:
7× 28 TB RAIDZ2 vdev
1× Optane SLOG vdev
2× 2 TB NVMe mirror special vdev
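
Roughly the commands I have in mind for building it (a sketch; the /dev/disk/by-id names are placeholders for whatever the real drives show up as):

```bash
# 7-wide RAIDZ2 vdev from the Exos drives (placeholder device IDs)
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-exos-1 /dev/disk/by-id/ata-exos-2 \
  /dev/disk/by-id/ata-exos-3 /dev/disk/by-id/ata-exos-4 \
  /dev/disk/by-id/ata-exos-5 /dev/disk/by-id/ata-exos-6 \
  /dev/disk/by-id/ata-exos-7

# Optane 900P as SLOG (only accelerates sync writes)
zpool add tank log /dev/disk/by-id/nvme-optane-900p

# mirrored special vdev on the 990 Pros for metadata/small blocks
zpool add tank special mirror \
  /dev/disk/by-id/nvme-990pro-1 /dev/disk/by-id/nvme-990pro-2
```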

Is this reasonable?

that MB can run 8 SATA drives without needing an HBA. your ZFS layout makes me think this will be the tank used by Proxmox, so you do not need the HBA unless you know you need it for something else?

the case has a lot of HDD bays so maybe you are going to do an ‘all-in-one’ home server build. if that is the case you will want to have your ZFS for DATA separate from your ZFS for VMs (probably).

the server build hardware looks ‘good’ depending on the goal of the build.


The idea with that case is that with 15 bays I can fill 7 now, and then in a few years, if I need to, I can put in another 7; hence the HBA.

My plan was to have one massive zpool and use ZFS datasets to separate out VM data and “normal” data. Is this a bad idea?
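
Concretely, something like this is what I was imagining (a sketch; the pool/dataset/storage names are just placeholders):

```bash
# one pool, split into dataset trees by role
zfs create tank/vms    # VM disks (zvols) go under here
zfs create tank/data   # "normal" shared files go under here

# register tank/vms with Proxmox as a VM disk store
pvesm add zfspool vmstore --pool tank/vms
```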

And yeah, the goal of the box is just to do everything (fileserver, Plex, self-hosted services, sometimes things like Minecraft servers, etc.).

no, or maybe yes? i guess it depends…
how do you plan on using this for ‘normal’ data, and what is your definition of ‘normal’ data?

that layout looks good on paper, but home server builds would not typically do that for a few reasons. it might be difficult to troubleshoot, it might make ‘normal’ data difficult to manage, and there’s the argument that you really shouldn’t use the hypervisor as a NAS. it is not wrong or bad, it is just less common in this use case.

usually you’d have a fast pool for VMs to reside on and a slow pool for ‘normal’ data; that gives you the ability to handle sharing out any way that makes it easy to manage for you, whether that is passing an HBA to a VM, or building SMB and NFS shares on the hypervisor.
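
for example, sharing from the hypervisor can be as simple as letting ZFS manage the exports itself (a sketch; assumes the NFS/Samba services are installed, and the dataset/subnet are placeholders):

```bash
# ZFS-managed NFS export of the data dataset
zfs set sharenfs="rw=@192.168.1.0/24" tank/data

# ZFS-managed SMB share (goes through Samba usershares on Linux)
zfs set sharesmb=on tank/data
```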

still though, your build is your build.

This is how you do things.

I’m personally not a fan of a single-vdev RAIDZ2 (slow), and I’d rather take drives with better TB/£ (16-20 TB enterprise drives) and put the money elsewhere.
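
For scale (a rough worked example; the alternative drive counts are hypothetical): 7× 28 TB in RAIDZ2 leaves 5 data disks, so about 140 TB usable from a single vdev’s worth of IOPS. Two 6-wide RAIDZ2 vdevs of 18 TB drives leave 8 data disks, about 144 TB, with double the vdev IOPS, at the cost of 12 bays instead of 7.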

EPYC Rome and DDR4 are dirt cheap… good bang for the buck.

A UPS isn’t really necessary… nothing bad will happen on power loss. It’s always nice to have, though; just make sure your network is covered too. You can’t talk to the server if the switch is powered down.

Just keep 8-16 lanes ready for NVMe expansion down the road. You may want to complement the storage with a fast SSD pool later on. EPYC certainly is ready for expansion; make sure your case is too.

A simple container with Cockpit for SMB/NFS is all you need in Proxmox. VMs and passthrough are a lot of trouble and troubleshooting. Keep it simple.
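
The plumbing for that is minimal (a sketch; the container ID and paths are placeholders):

```bash
# bind-mount the host's data dataset into LXC container 101;
# Cockpit with the file-sharing plugin inside the container then
# serves it over SMB/NFS (unprivileged containers may also need
# uid/gid mapping on top of this)
pct set 101 -mp0 /tank/data,mp=/mnt/data
```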

[quote=“Zedicus, post:4, topic:225235”]
there’s the argument that you really shouldn’t use the hypervisor as a NAS
[/quote]

Hyperconverged is a proven concept, and Proxmox explicitly supports and promotes it by shipping ZFS and Ceph out of the box. I run my pool directly on Proxmox and have containers for SMB/NFS/whatever. It reduces complexity and makes administration much easier.

TrueNAS does the same and even runs Kubernetes on a single node. It’s fine.

The distinction in ZFS is recordsize, compression, and all the other dataset properties ZFS offers. Beyond that there are datasets and zvols. ZFS was built for this.
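
A sketch of what that tuning looks like (dataset names and sizes are placeholders):

```bash
# bulk file data: large records, cheap compression
zfs set recordsize=1M tank/data
zfs set compression=lz4 tank/data

# VM disks are zvols; e.g. a 32 GiB zvol with a VM-friendly block size
zfs create -V 32G -o volblocksize=16k tank/vms/test-disk
```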


there’s a lot of assumption in that sentence. depending on all the other things connected, it can reduce or increase complexity. my point was just that either way can be fine, as long as you think through all of those connection points.