Hi All,
I am preparing a Black Friday shopping list for a small DIY server for my company. This is not critical infrastructure: if something breaks, I have a cloud backup solution (both for VMs and storage) kept in sync, so I do not have to replace parts quickly to keep an SLA.
What I need this server for:
- Proxmox as a main system
- TrueNAS Scale (VM) - accessed via LAN only
- W11 (VM) with Remote Desktop - accessed via LAN only
- Ubuntu (3 x VM) - services accessed from outside (heavy use)
- Some Docker containers - accessed both via LAN and from outside (low use)
- Daily storage/data rsync with other servers
- HomeLab Kubernetes testing
- In the future: storage for IP camera surveillance and another 3 Ubuntu VMs
So far my list looks like this:
- Case - Node 804
- Motherboard - ASRock Rack X570D4U
- CPU - AMD Ryzen 9 3900 OEM (12c/24t, 65W)
- Cooling - some low-profile Noctua (I want to get something on sale)
- RAM (ECC) - 2 x KSM32ED8/32ME, run without OC @ 3200MHz (in the future I will add another 2 sticks and run all 4 @ 2666MHz, since that mobo only supports that speed with 4 x dual-rank sticks)
- Boot Disks - 2 x 120GB SSD (Proxmox RAID1)
- NVMe - 2 x 970 EVO Plus (2TB) or 980 PRO (2TB, PS5 edition with heatsink) - for the VM pool, mirrored
- HDD - 6 x 8-10TB in ZFS RAID-Z2 (passed through to the TrueNAS VM), probably HUS728T8TALE6L4 or some other enterprise HDDs, maybe Seagate
- PSU - EVGA SuperNOVA 750 G3 (the fan stays off at idle and under light/medium load)
- UPS - some simple pure sine wave UPS, only to be able to shut the server down cleanly
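For the clean-shutdown part, I would hook the UPS up to NUT (Network UPS Tools) on the Proxmox host; a minimal config sketch, assuming a USB-connected UPS (the `ups` name and the monitor credentials are placeholders):

```
# /etc/nut/ups.conf -- declare the UPS (usbhid-ups covers most USB models)
[ups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf -- shut the host down cleanly when the battery is low
MONITOR ups@localhost 1 upsmon secret master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```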
I chose the Node 804 for its cube form factor and because I can add another 6 HDDs later for a second ZFS pool. The server will sit in a ventilated space, so temperatures should not be an issue. There is also room behind the front door for 2 x SSD, so I would put the boot disks there.
I need IPMI and 24/7 operation (non-critical infrastructure), so, to not break the bank, I chose the X570D4U. It has most of the features I need: 8 x SATA (6 x HDD + 2 x SSD); 2 x M.2 - one via CPU and one via chipset, but both PCIe 4.0 x4 without compromises (unless you count going through the chipset as a compromise); max 128GB ECC RAM; and PCIe 4.0, so I can later add 4 x M.2 and/or an HBA for a storage upgrade. I do not need more LAN ports, and 1Gb is enough, since I am limited by the internet connection speed and the current LAN hardware. If that changes in the future, I can add a 10Gb card via PCIe or M.2.
I want to run the CPU in Eco mode so idle power consumption is lower. 12c/24t should be plenty.
The NAS would be accessed rarely, by up to 5 users plus the rsync jobs, so I plan to give it max 4 vCPU + 16GB RAM. I do not want to use special vdevs, ZIL/SLOG, or L2ARC (this NAS does not need to be very fast, and I can always give the VM an extra 16GB RAM). The other VMs would run with the remaining vCPUs and RAM and be stored on the mirrored NVMe disks - I estimate I need max 1TB, so I still have some buffer before hitting 75% capacity on the NVMe pool.
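For reference, the NVMe headroom math against the usual ~75-80% ZFS fill guideline (the GB figures are my estimates from above):

```shell
#!/bin/sh
# A 2-disk mirror's usable capacity is one disk's size, so the 2TB NVMe
# mirror gives ~2000GB usable; check my estimated use against the
# common ~75% ZFS fill guideline.
MIRROR_GB=2000     # usable space of the 2TB NVMe mirror
EST_USE_GB=1000    # my estimated VM storage need
LIMIT_GB=$((MIRROR_GB * 75 / 100))

echo "75% limit: ${LIMIT_GB} GB, estimated use: ${EST_USE_GB} GB"
if [ "$EST_USE_GB" -lt "$LIMIT_GB" ]; then
    echo "headroom before the 75% mark: $((LIMIT_GB - EST_USE_GB)) GB"
fi
```

So roughly 500GB of buffer before the pool starts getting uncomfortably full.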
I still have some questions and doubts:
- Will I be able to run W11 without a GPU - no gaming, of course, just remote desktop for simple apps? I know the mobo has an ASPEED AST2500 with a basic GPU, but I am not sure how this works: is it only for IPMI/BIOS, or is it also available to the OS?
- As for boot drives, I want to use mirrored (RAID1) consumer 120GB SSDs. I know I would be better off with some DC SSDs from Intel, but do you think that installing two SSDs from different vendors, with different TBW ratings (<100TB though), could be a safe-ish solution? I am OK with replacing these SSDs every couple of years and rebuilding the mirror. I would also have on-site backups on another NAS, so even if both SSDs die at the same time, I can handle the server being offline for 24h, replace both SSDs, and restore from backup.
- Wouldn’t it be better in my scenario to use the NVMe disks for both boot and VM storage? Saving 2 SATA connectors would be great (I could have hot spares for the HDDs without adding an HBA), but I am worried that system logs would wear out the NVMe drives fast - they would also be consumer-grade.
- Since TrueNAS would run in a VM (on NVMe) but the HDDs would be passed through and put into ZFS with no caching other than RAM, do I need to configure anything so that all ZFS/system logs end up on the HDD pool rather than on the VM boot disk (NVMe)?
- Do you see any issues with the hardware I have chosen, based on my needs?
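On the consumer boot SSD question: whatever drives I end up with, I would track their wear with smartmontools and replace them before they get close. A sketch of what I have in mind (the `wear_attrs` helper is mine; attribute names vary by vendor, which is partly why mixing vendors appeals to me):

```shell
#!/bin/sh
# Filter smartctl output down to the wear-related SMART attributes, so
# the two mirrored boot SSDs can be compared over time.
# Attribute names differ per vendor; the pattern covers common ones
# (Wear_Leveling_Count, Total_LBAs_Written, Percentage Used, etc.).
wear_attrs() {
    grep -Ei 'wear_leveling|total_lbas_written|percent(age)?[ _]+(lifetime|used)'
}

# Intended use on the host (run as root, needs smartmontools):
#   smartctl -A /dev/sda | wear_attrs
#   smartctl -A /dev/sdb | wear_attrs
```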
I would really appreciate your input. Thanks!


