My New Proxmox Homelab Cluster Build with Erying MODT Boards and Intel 11980HK CPUs
I wanted to share my new Proxmox homelab cluster build featuring Erying MODT boards with Intel 11980HK CPUs. I’m currently in the testing phase with one system, but I plan to add two more to form a redundant cluster. The build was inspired by Craft Computing’s video on this topic. Here’s the breakdown:
Hardware Overview
- Motherboard: Erying MODT with Intel 11980HK CPUs (Engineering Samples)
- PCIe Slot: 1x PCIe 4.0 x16 (currently unused, reserved for future expansion). This slot was originally planned for an Intel Arc card for transcoding, but the onboard graphics have been more than enough.
- M.2 Slots:
  - 4x4 PCIe 4.0 for a 256GB boot drive
  - 4x4 PCIe 3.0 with an M.2-to-SFP+ Intel NIC
    (Note: I discovered during testing that none of my Mellanox ConnectX cards would work under Proxmox VE 8; a quick way to check what the kernel sees is sketched after this list.)
- RAM: 32GB per node
- Cooling: Intel stock coolers I had lying around from past projects. One node running all my services hasn’t driven temperatures past 90°C.
- Chassis: 2U Rosewill cases for each system
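Side note on the NIC situation: if anyone hits similar compatibility weirdness, this is roughly how I check what the kernel actually sees. These are generic commands, not tied to my exact cards:

```
# List network-class PCI devices and which driver (if any) is bound to them
lspci -nnk | grep -iA3 'ethernet\|network'

# Quick view of which interfaces actually came up
ip -br link

# Driver messages from Mellanox (mlx4/mlx5) or Intel 10GbE (ixgbe) NICs
dmesg | grep -iE 'mlx|ixgbe'
```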
Storage and Networking
- Container Storage: NFS-based, hosted on a TrueNAS Scale server
- Backups: Stored on a separate SMB dataset that is replicated off-site for redundancy
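For anyone replicating the storage layout, the Proxmox side ends up being two storage entries, roughly like this. The storage IDs, IP, share names, paths, and username below are placeholders, not my actual config:

```
# NFS export from the TrueNAS Scale box for container/VM disks
pvesm add nfs truenas-nfs --server 192.168.1.10 \
    --export /mnt/tank/proxmox --content rootdir,images

# Separate SMB/CIFS dataset used as the backup target (replication off-site happens on the TrueNAS side)
pvesm add cifs truenas-backup --server 192.168.1.10 \
    --share proxmox-backups --username backupuser --content backup
```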
Services
I run all my services as Docker containers inside LXC containers, with the iGPU passed through for transcoding (a sketch of the LXC config follows the list below). Here’s what I’m running:
- Sonarr
- Radarr
- SABnzbd
- Emby
- Code-Server
- Tdarr (one node per Proxmox system)
- Homepage
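For anyone curious about the GPU passthrough piece: the iGPU gets handed to the LXC container, and Docker inside just sees /dev/dri. Here is a minimal sketch of the relevant lines in /etc/pve/lxc/<vmid>.conf, assuming a privileged container and the usual device numbers (unprivileged containers additionally need the render/video group IDs mapped):

```
# Allow access to the host's DRI devices (card0 is 226:0, renderD128 is 226:128)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
# Bind-mount /dev/dri so the Docker containers inside the LXC can use Quick Sync
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```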
All services (except the Tdarr nodes) are set up for high availability (see the sketch below).
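For the HA piece, this is the kind of thing Proxmox’s built-in HA manager handles; a rough sketch with placeholder group names and VMIDs:

```
# Define an HA group spanning the cluster nodes (names are examples)
ha-manager groupadd media --nodes node1,node2,node3

# Register a container as an HA resource so it gets restarted/relocated on node failure
ha-manager add ct:101 --group media --state started
```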
Transition from Old to New
Old System:
- CPU: 14-core Xeon (X99-based) in a 4U chassis
- Networking: SFP+ NIC
- GPUs: a 1660 Ti and a 1080 for transcoding (each could handle ~300 fps, but they drew 150W apiece!)
- OS: Running Unraid
New System:
- Performance: Each node achieves ~320 fps in transcoding with Intel GPU passthrough while drawing only ~25W total per system (CPU + GPU, not including NIC, SSD, etc.); a quick way to spot-check numbers like this is sketched after this list.
- Efficiency: With three nodes, I expect to exceed the performance of my old system, with the added benefits of redundancy, high availability, and significantly lower power consumption.
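If you want to compare numbers on similar hardware, here’s a rough way to spot-check Quick Sync throughput and utilization. Filenames are placeholders, and I’m not claiming these are the exact flags Tdarr or Emby use internally:

```
# Watch iGPU engine utilization and frequency while a transcode runs (package: intel-gpu-tools)
intel_gpu_top

# Rough QSV throughput test: hardware-decode H.264 and re-encode to HEVC, discarding the output
ffmpeg -hwaccel qsv -c:v h264_qsv -i sample.mkv -c:v hevc_qsv -preset medium -f null -
```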
Why the Upgrade?
- Reduced Power Consumption: From two GPUs drawing 300W total to three nodes drawing a combined 75W.
- Smaller Footprint: 2U cases vs. a 4U chassis.
- Improved Redundancy and Scalability: The cluster setup ensures no single point of failure, and services remain highly available.
Next Steps
- Finalize testing and bring in two more nodes to complete the cluster (joining them is sketched after this list).
- When I build the rest of the systems, swap the fans in the test system for Noctuas. The Molex fans are louder than they should be.
- Consider future expansion using the empty PCIe slots (open to suggestions here!).
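For the cluster expansion itself, joining the new nodes is straightforward once they’re built. The cluster name and IP below are placeholders:

```
# On the existing (first) node: create the cluster
pvecm create homelab

# On each new node: join it using the first node's IP
pvecm add 192.168.1.20

# Verify quorum and membership afterwards
pvecm status
```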
I’m really excited about how this build is coming together, especially given the lower power consumption and improved performance. I’d love to hear your thoughts, feedback, and ideas for further optimizations!