Home server lab spec for OpenShift homelab


I need help speccing out an OpenShift homelab cluster.

My ideal deployment is 3 control plane nodes, 2 worker nodes, and 1 service node.
I already have a mini PC as my current homelab (i7 10th gen, 32 GB RAM, 512 GB NVMe SSD), so it will double up as my deployment node.

Based on my requirements and budget (USD 1100), I have decided to go for a system with the following key specs:

  1. i7-14700K CPU (20c/28t) with UHD graphics
  2. 64 GB DDR5 G.Skill (32 GB x2)
  3. LGA 1700 B760M micro-ATX motherboard with 4 RAM slots, Wi-Fi, 2.5 GbE NIC, 2x HDMI, and at least one PCIe x16 slot, from either the ASUS Prime WIFI or MSI Pro WIFI lines
  4. Add-on 1 GbE NIC
  5. Crucial 1 TB NVMe SSD
    Note: most of these motherboards support 2 M.2 drives, and I could expand storage later via the SATA ports using 2.5" SSDs or NVMe drives somehow
  6. Deepcool Assassin cooler
  7. Proxmox VE hypervisor

I have the following concerns and need clarification:

  1. Since I am planning to run Proxmox VE: are the P and E cores of the newer Intel chips prone to any Linux kernel issues, and will Proxmox run on this device? The IT shop doing the build won't install Linux as part of their service, so I cannot validate any driver issues before committing. I ask because on my homelab mini PC, Fedora and CentOS refused to install due to an Intel SSD issue, and I had to learn Arch to get a working build in time for a project deadline…

  2. I have enough vCPUs for my build, but it's a bit tight on the RAM front; I ideally need around 96 GB. My thinking was to add 2x16 GB sticks, or a 1x32 GB stick, in the vacant slots on top of the planned 2x32 GB. Is this feasible? I could overcommit on VM memory, but I have received advice that it may cause issues and was encouraged to AVOID that option. I did consider 2x48 GB kits, but they are much more expensive for some reason down under, so that's off the table; that route would also limit future expansion to a maximum of 2x16 GB, which is also weird.

  3. If I wanted to add some additional SATA storage, either SSD or NVMe, what should I consider adding to my case to achieve this? I am thinking of enclosures that take NVMe drives and connect to SATA, or is there anything else to consider apart from 2.5" SSDs?

  4. I was told that I need good cooling for the i7, so I am going for a high-end cooler from Deepcool or Noctua… For my use case I am not sure I need this, but at this point I saw the Deepcool Assassin IV and decided to add the extra budget for aesthetics.

  5. I wanted to know whether I should consider adding a dedicated GPU for Windows VM passthrough, or even some LLM testing; is Intel Arc worth considering?

  6. Finally, I am being overly cautious in my queries since I already spent USD 500 on a refurb mini PC, which fulfilled my short-term need to run CRC. I recently upgraded that machine to 32 GB RAM to test an OCP SNO deployment with separate, additional worker nodes. If this test works, I am not sure I really need to invest in another new homelab server, apart from justifying the crazy number of hours I have spent on this research and build validation, and the sense of achievement in having a server system that won't be any good for AAA gaming…
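On point 2, here is the rough RAM math I'm working with. The per-VM sizes are my own assumptions (roughly the documented minimums plus a little headroom), so treat this as a sketch:

```shell
# Rough guest RAM budget for 3x control plane + 2x worker + 1x services.
# Per-VM sizes are assumptions (approx. OCP minimums plus headroom).
cp_gb=$((3 * 16))        # control plane VMs @ 16 GB each
worker_gb=$((2 * 16))    # worker VMs @ 16 GB each
svc_gb=8                 # services VM
total_gb=$((cp_gb + worker_gb + svc_gb))
echo "guest total: ${total_gb} GB"   # 88 GB before any host overhead
```

So even before the PVE host's own overhead, 64 GB doesn't cover it, which is why I'm eyeing ~96 GB.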

Apologies if this post is all over the place…

Please note:

a. Not interested in RPi (no other use case), used Xeon tower workstations (not enough space in my apartment and I cannot deal with noisy systems), or an additional used mini PC setup

b. Not planning to game on this machine since I would rather invest in a console…

c. I have not considered AMD, since the 14th-gen i7 gives more cores for the budget


Did you consider the other components too? Load balancer / DHCP / DNS / image repository, etc.

yeah, unless you have some serious SSDs for swap… it'll just slow down the bootstrap and likely cause the whole setup to time out

yes, this will be an SSD-only build… I could see CRC using quite a bit of swap after I started adding operators to it…

Optane or enterprise SSDs?

the consumer SSDs I've tried (in the past) to use for swap were painfully slow

In 2022, I was able to create a single node OKD with 7 VMs, but to get it to reliably bootstrap, I needed 128GB RAM on the host to divide among the VMs.

My setup was a 5900X, 128 GB RAM, 1x 1 TB NVMe, 4x 2 TB NVMe.

And to be honest, setting up the supporting stuff before OKD was more involved than deploying OKD.

Load balancer, DNS, DHCP, internal registry (if your internet isn't fast enough to keep bootstrap from timing out), etc.

It's the Intel Optane H10 on the refurb mini PC… it's PCIe Gen3, not too bad… so I will mostly go for consumer ones… the Samsung ones may be a bit better, so I will try for one of those once I find a good deal…

So you have 1 TB for the host and 8 TB for the guests/storage etc.?

If I am being honest, I only had a quick think about the support services… but Proxmox supports LXC containers, so I am trending towards that…
But I am planning for a support VM; good point on the internal registry, maybe Quay or something.

I need to have my lab PC hooked up to wired Ethernet directly to the router to get reliable speeds… I think 80 Mbps should be good enough; it's all I get on the max tier…

Do you have a topology for your 2022 OKD deployment?


It was basically this (all Fedora CoreOS, except for the Fedora 35 VM that I used for the DNS / LB / iptables duties, aka services).



What solutions are you using for the services?
DNS, DHCP, LB, NFS, registry, firewall, etc…
Any operators installed?

I have only tried GitOps on CRC… but something fishy happened in my CRC setup, so I will redo everything in SNO…


I think I used bind with dhcpd, and allowed DDNS in bind so that DNS was auto-configured for dhcpd clients.
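For anyone searching later, that setup usually looks something like this. A sketch only: the zone name and key are made up, and the exact syntax should be checked against the bind/dhcpd docs:

```
# /etc/dhcp/dhcpd.conf (fragment) -- dhcpd sends signed DDNS updates to bind
ddns-update-style interim;
ddns-domainname "lab.example.com.";
key ddns-key { algorithm hmac-sha256; secret "REDACTED"; }
zone lab.example.com. { primary 127.0.0.1; key ddns-key; }

# named.conf (fragment) -- the zone accepts updates signed with the same key
key "ddns-key" { algorithm hmac-sha256; secret "REDACTED"; };
zone "lab.example.com" {
    type master;
    file "lab.example.com.db";
    allow-update { key "ddns-key"; };
};
```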

I watched and pulled ideas from these videos for my setup.



Installing a user-provisioned bare metal cluster on a restricted network - Installing on bare metal | Installing | OpenShift Container Platform 4.10

I think I used haproxy or MetalLB for the LB… (see the video link above)
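If it helps, the haproxy side of a UPI cluster is usually just TCP passthrough on the standard OpenShift ports. A sketch with placeholder backend IPs:

```
# haproxy.cfg (fragment) -- TCP passthrough, placeholder backend IPs
listen api
    bind *:6443
    mode tcp
    server cp0 192.168.1.10:6443 check
    server cp1 192.168.1.11:6443 check
    server cp2 192.168.1.12:6443 check

listen machine-config
    bind *:22623
    mode tcp
    server cp0 192.168.1.10:22623 check

listen ingress-https
    bind *:443
    mode tcp
    server worker0 192.168.1.20:443 check
    server worker1 192.168.1.21:443 check
```

During bootstrap, the bootstrap VM also goes into the api and machine-config backends, then comes out once bootstrap completes.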

I haven't read about or tried to set up any version since 4.10… so I'm not up to date on what has changed.

Like I said… deciding on and setting up the supporting things before OKD is more work than actually deploying OKD.


Yes… I can appreciate that…

VMs used to crash in PVE with E-core Intel CPUs (12th and 13th gen). There was a firmware update that you can apply with fwupd which fixes it to some degree.

Jeff from Craft Computing did some experimentation with it recently; you can check out his channel. It was in the past 20 videos or so at the time of writing this, should be an easy find.
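Another workaround people use in the meantime is pinning latency-sensitive VMs to the P-cores only, which PVE supports via the CPU affinity option. A dry-run sketch; the VM IDs and the `0-15` P-core range are assumptions, so check your host's topology first:

```shell
# Dry-run sketch: pin VMs to P-cores only via Proxmox CPU affinity.
# On a 14700K the 8 P-cores usually expose threads 0-15; confirm on the
# host (e.g. /sys/devices/cpu_core/cpus) before using these values.
QM=${QM:-echo qm}   # defaults to a dry run; set QM=qm on a real PVE host
pin_vms() {
  pcores="0-15"
  for vmid in 101 102 103; do   # hypothetical VM IDs
    $QM set "$vmid" --affinity "$pcores"
  done
}
pin_vms
```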

No. Stick with 64 GB until you can afford another 2x32 GB kit. If you can, just set memory limits on your containers and don't overcommit. If you need to, automate your infrastructure to stop the containers you don't need and start the ones you do, to save on resources. Containers start up so fast it's barely even noticeable.
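On Proxmox that automation can be as simple as a script around `pct` (dry-run sketch; the CT IDs and sizes are hypothetical):

```shell
# Dry-run sketch: cap LXC memory and swap which containers are running.
PCT=${PCT:-echo pct}   # defaults to a dry run; set PCT=pct on a real host
swap_workloads() {
  $PCT set 201 --memory 8192 --swap 0   # hard-cap CT 201 at 8 GB, no swap
  $PCT stop 202                         # stop the container you don't need...
  $PCT start 203                        # ...and start the one you do
}
swap_workloads
```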

If your mobo supports PCIe bifurcation, go with NVMe if the budget allows. SATA SSDs are always an easy expansion, even with a cheap SATA card or a decent SAS HBA card. If you don't want to fork out the money for a hot-swap caddy, only a few people will judge you for dangling SSDs in your case.

Hardware Canucks do a lot of cooler reviews. I don't remember which one they were recommending lately, the Deepcool Assassin or the Thermalright Peerless Assassin.

Why not? Particularly if you know you’re going to use this as your main system and not do split systems, like myself.

If you can get away with your old hardware and it doesn't consume too much power, keep what you have. If you're into clustering and your software stacks work well that way, go for a split small-PC setup rather than a big power hog that can run everything by itself without breaking a sweat.


I went through the Craft Computing videos… it was an interesting watch, but he ran Geekbench on libvirt Windows VMs, which was more of an edge case… the takeaway was still that it's not 100% reliable… :face_with_head_bandage:

Ok… this is a great tip… I would like the platform to be scalable. But it looks like the current B760 motherboard may not support a PCIe-bifurcated storage controller, and I would need to upgrade to a Z690-class motherboard…
I am not sure about the HBA card… I recall that the current motherboard has 4x SATA 6 Gbps ports for SATA expansion via either M.2 SATA SSDs or 2.5" SSDs, which may be sufficient for additional expansion… I saw a few builds that add more drives into the internal 3.5"/2.5" storage bays…

Yeah, the Peerless Assassin is actually a bit cheaper and a bit more performant than the Deepcool… but the Deepcool looks way cooler…

I saw the ServeTheHome piece where they talk about adding used mini PCs to a cluster setup… but I think I was second-guessing myself a bit too much… At this point I want to do the build, since I have spent nearly 2 months on the research and want to see the project through…

Thanks for the feedback…

I also found this video…


When I last built 4.x, I had to use around 100 GB of RAM (for the 7 VMs) to get bootstrap to complete reliably.

Yeah, my personal experience with OCP SNO was that 16 GB RAM on the node is not adequate… even though that's the stated minimum memory requirement for the control plane…

Have you tried CRC?

It claims much less RAM is needed.

yes, it's pretty useful… tried it… much less RAM needed… but it provides more of a developer-oriented experience…


Are you gathering more hardware to build the lab?

I'm thinking I might build mine again (since the last version of OKD I set up was 4.10).

I have a makeshift lab running on a mini PC.

I might need to uplift the home networking at some point… the mini PC Wi-Fi is really slow, not sure why… so I have connected it to the LAN…

But for now it will be the home server build only…

I have a Syno NAS; maybe that gets added to the build…
but the Syno just runs on demand for personal data backups… maybe I will use that for network storage…

If you find anything that hosts NVMe M.2 and interfaces to SATA (and isn't a scam), please make a post with massive amounts of noise publicising it. These don't exist because NVMe is raw PCIe signaling; you'd convert to SATA and back again, losing performance from 3000+ MB/s down to 600 MB/s. I've spent a lot of hours searching and can't find anything better than dumb wiring without conversion: 'if the M.2 is SATA B-key, it's SATA; if it's NVMe M-key, it's U.2 NVMe'. I'm sad that consumer motherboards just don't have enough PCIe lanes to add many NVMe drives.
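The 600 MB/s ceiling falls straight out of the link math: SATA III runs at 6 Gbit/s on the wire with 8b/10b encoding, so each payload byte costs 10 line bits:

```shell
# SATA III ceiling: 6 Gbit/s line rate, 8b/10b encoding = 10 line bits/byte
line_rate_mbit=6000
sata_mb_s=$((line_rate_mbit / 10))
echo "SATA III payload ceiling: ~${sata_mb_s} MB/s"   # ~600 MB/s
```

Real-world SATA SSDs top out around 530-560 MB/s once protocol overhead is included, which is why even a cheap NVMe drive on a spare PCIe slot beats any SATA route.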

