Home server build - feedback welcome

After relying on a QNAP consumer NAS (TS-559 in RAID6… I know… :exploding_head:) for many years, I finally decided to go down the path of building my own machine.

Budget: ~2200€ (without HDDs)
Country: Germany

Main use cases:

  • ZFS-based storage for media library and backups
  • VM/Container capabilities for additional services like NextCloud, Plex, general homelab tinkering
  • (maybe transcoding capabilities via Plex; I’ll add a dedicated GPU later if necessary)
  • no AI, software development, gaming or rendering

For Pi-hole, smart home and home automation I plan to use Raspberry Pis or something like a NUC for vastly improved always-on efficiency.

Reading up on a lot of builds, suggestions, reviews and recommendations, I zeroed in on the following setup:

Mainboard: Supermicro H12SSL-i (~540€)
CPU: EPYC Milan 7203P, 8 cores @ 2.8 GHz (~380€)
Cooler: SilverStone XE03-SP3 (~100€)
RAM: 4x 32GB DDR4 ECC (~320€)
NIC: Intel X540 (~85€)
HBA: Broadcom 9207-8i (~90€)
Chassis: Inter-Tech IPC 4U-4416 4HE (~390€)
M.2: 2x Kingston SNV2S/500G M.2 as boot drives and VM/Container storage (~85€)
PSU: be quiet! Straight Power 12 750W (~140€) or Pure Power 650/550W (~85€)

  • point of contention, details below

HDDs: at least 4x Exos X20 16-20TB, maybe even 6 (doesn’t count toward the budget!)

Software:

  1. Proxmox for VMs and containers
  2. TrueNAS Core for main ZFS storage

Reasoning behind this setup:

  • stable and assured ECC
  • IPMI
  • IOMMU for passing HBA to TrueNAS
  • more than enough PCIe slots/lanes for future shenanigans (GPU, PCIe for M.2, USB4)

Negative aspects:

  • higher power consumption due to enterprise gear (limited ACPI modes / power states)
  • maybe even missing HDD spindown option (not sure, haven’t properly researched this yet)

Main question: Am I making obvious mistakes or missing overall red flags here?

Specific questions:

  • TrueNAS inside Proxmox:
    Are there major drawbacks to running Proxmox as the OS (container/VMs) and TrueNAS inside a VM (as ZFS storage)?
    Is there anything besides IOMMU and allocated cores/RAM that I need to take into consideration?
    Is there any bottleneck or performance penalty with regards to the expected network throughput?
    My thoughts: As far as I understood, passing through the HBA is mandatory (see the IOMMU-group check sketched after this list). Passing through a NIC would only work if I bought an additional one just for this VM.
    Or can I split the 2 ports of the X540? One for Proxmox, one for TrueNAS?
    If all else fails, there seem to be protocol tweaks for reduced overhead.

  • Electrical bill:
    0.35€/kWh ain’t cheap, so depending on the overall power consumption, I’m leaning towards implementing a scheduled power-off at night and startup via IPMI/WOL

  • 10 GbE NIC:
    Do I get significant benefits from upgrading to the H12SSL-CT/NT (BCM57416 onboard) and ditching the X540?
    My take so far: the H12SSL-CT/NT isn’t worth the ~150€ extra. The X540 only supports 1 or 10 GbE, no 2.5/5 GbE modes in between, but that doesn’t matter in my case.

  • HBA: SAS2008 vs. SAS2308 vs. SAS3008?
    SAS2008 is only PCIe 2.0 x8, i.e. ~4 GB/s max throughput.
    SAS2308 is PCIe 3.0 x8, i.e. ~8 GB/s, and would leave a future option open for some SSDs.
    SAS3008 is also PCIe 3.0, but brings increased temps and power draw.
    My conclusion: SAS2308 is the sweet spot (see the bandwidth back-of-the-envelope after this list).

  • PSU and H12SSL:
    I’m thinking 750 watts is probably overkill, especially with no dedicated GPU (rough power budget sketched after this list).
    The H12SSL has 2x 8-pin 12V (P8) CPU power connectors. Can I get away with only using one? Or are both mandatory?
    The above-mentioned PSU has 1x 8-pin plus 2 additional 4-pin connectors, combinable into an 8-pin.
    Other PSUs with less power don’t have this option.
    If I only need to use one 8-pin, I could buy a PSU with less power (650 or even 550 W: Pure Power 550W) for nearly half the price.
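
For the passthrough question, this is the quick check I plan to run on the Proxmox host before assigning anything to the TrueNAS VM: a read-only sketch, assuming a stock Proxmox/Debian install with IOMMU enabled in BIOS and on the kernel command line, that lists the IOMMU groups so I can verify the HBA sits in a group of its own.

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each (read-only, safe to run)."""
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")
if not groups_root.is_dir():
    raise SystemExit("No IOMMU groups exposed - check BIOS and kernel IOMMU settings.")

for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    # A device you want to pass through should ideally be alone in its group.
    print(f"IOMMU group {group.name}: {' '.join(devices)}")
```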
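
For the HBA question, my back-of-the-envelope numbers, assuming an x8 card and roughly 270 MB/s sequential per Exos-class drive (an estimate, not a spec):

```python
# Usable PCIe bandwidth per lane: PCIe 2.0 = 5 GT/s with 8b/10b encoding,
# PCIe 3.0 = 8 GT/s with 128b/130b encoding.
PCIE2_GBS_PER_LANE = 0.5
PCIE3_GBS_PER_LANE = 0.985
LANES = 8

DRIVES = 8            # assumed number of drives hanging off the HBA
MBS_PER_DRIVE = 270   # assumed sequential throughput per HDD

print(f"PCIe 2.0 x8 (SAS2008): {LANES * PCIE2_GBS_PER_LANE:.1f} GB/s")
print(f"PCIe 3.0 x8 (SAS2308): {LANES * PCIE3_GBS_PER_LANE:.1f} GB/s")
print(f"{DRIVES} HDDs aggregate : {DRIVES * MBS_PER_DRIVE / 1000:.1f} GB/s")
```

Even PCIe 2.0 x8 wouldn’t bottleneck eight spinning disks, so the SAS2308’s headroom mostly matters if SSDs end up behind the HBA later.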
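
And for the PSU question, a very rough power budget; every figure below is an assumption (TDPs and typical draws, not measurements), so treat it only as a sanity check.

```python
# Assumed steady-state draws in watts (not measured).
parts = {
    "EPYC 7203P (120 W TDP)": 120,
    "H12SSL board + BMC":      40,
    "4x DDR4 RDIMM":           20,
    "HBA + X540 NIC":          25,
    "6x 3.5in HDD, active":    60,
    "2x NVMe, fans, misc":     25,
}
steady = sum(parts.values())
spinup_peak = steady + 6 * 25   # assume ~25 W extra per HDD during spin-up

print(f"steady state : ~{steady} W")
print(f"spin-up peak : ~{spinup_peak} W")
```

On those assumptions even a 550 or 650 W unit has plenty of headroom, which still leaves the question of whether both 8-pin connectors need to be populated.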

Already discarded setups (from what I learned, please prove me wrong!)

  • Any cheap enterprise gear older than 2020: too inefficient, too much noise, consumes too much power, less potent microarchitecture, you name it…

  • B650 Workstation with Ryzen 8700G:
    ECC support got removed / is a flimsy coin toss at best
    slim choice of mainboards (H13SAE-MF, MSI D3051, ASRock B650D4U-2T/BCM, Gigabyte MC13-LE1) at a steep price (>500€) compared to the feature set
    reduced expandability due to the µATX form factor, too few PCIe slots/lanes

  • Desktop X670/E with Ryzen 7/9
    ECC support: maybe? Depends on board and BIOS
    better availability and more brand choices
    kinda the same reduced expandability due to limited slots/lanes
    No IPMI
    IOMMU support?

Thank you so much for reading and also for any suggestions!

I’d considered that myself (though electricity here is very cheap), until I stumbled upon this; you might find it interesting (particularly since night is the best time to do backups and offsite synchronization - you’ve planned to keep a copy of important data offsite, yes?):

Though I am obligated to warn you that spinning disks up and down is vastly worse for their longevity than leaving them running 24/7.

The Asus ProArt X670E-CREATOR WIFI works fine with ECC, and if you don’t need a lot of I/O, B650E boards might also be of interest.
A Ryzen 7900 will be much more efficient and faster (Zen 4 vs Zen 3); you would very likely be fine with a 7700 (keeping TDP at 65W) too.
No IPMI, but it’s not the end of the world…

https://www.idealo.de/preisvergleich/OffersOfProduct/203482676_-32gb-ddr5-4800-cl40-mtc20c2085s1ec48br-micron.html (much cheaper than buying from Micron/Crucial directly)
Micron 32GB DDR5-4800 ECC UDIMM 2Rx8 CL40 | MTC20C2085S1EC48BR | Crucial EU

Intel X540(-T2) cards are around 200-250 EUR, not 85 EUR, unless you want to gamble on fake ones, which is very likely a bad idea.
Broadcom BCM57416 cards are about the same price-wise but a bit newer, at least if you’re going for 10Gbit copper Ethernet.

The Broadcom 9207-8i is ancient and probably a waste of money; the motherboard you linked supports 8 SATA disks out of the box. The linked ASUS board supports 4 out of the box; if you need more, a simple AHCI card will do fine for spinning rust and you can use the onboard ports for SATA SSDs, although this seems to be less common as time passes.

As for NVMe SSDs I would at least go for TLC-based ones, such as https://www.idealo.de/preisvergleich/OffersOfProduct/201512030_-p5-plus-500gb-crucial.html

PSU: Something that’s ATX 3.0 and Gold-rated; 750 W is probably a bit overkill but that’s usually where they start…

Unless you really need a rack, a Fractal Design Define XL (or something similar) is going to be more than fine
https://www.idealo.de/preisvergleich/OffersOfProduct/3807166_-define-xl-r2-black-fractal-design.html

You can usually catch pretty good deals on both Toshiba MG-series and Exos here: Festplatten Angebote ➡️ Jetzt günstig kaufen | mydealz

I would also be very careful about spinning drives up and down longterm…

At the moment my important stuff is on two local copies and one extra offsite/synched via schedule once a week.

I totally get what you are saying with regard to the wear and tear, and I will probably refrain from sending the drives to sleep while the machine is running. On the other hand, I already know there will be a lot of days where the whole system won’t be used for extended periods of time. That’s why I’m considering an explicit startup signal combined with an automated nightly shutdown.
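
If the startup signal ends up being WOL rather than IPMI, the wake-up side is trivial to script. A minimal sketch (the MAC address is a placeholder) that sends a standard Wake-on-LAN magic packet:

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a WOL magic packet: 6x 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder: MAC of the server's onboard NIC
```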

That board was on my list, yeah! I scratched the idea because I got very conflicting information with regard to ECC support.

My plan was to buy the RAM, HBA and NIC refurbished/used from sites like these:

With ZFS builds I mostly found recommendations to use an external HBA and refrain from using any native ports on the motherboard. Since I plan to use a rackmount case with a backplane/hot swap, I thought an HBA with a SAS2308 was the way to go?

Oh, I totally bonked that one, yeah. I remember now that QLC drives aren’t that great with regard to longevity.

Thank you for these recommendations, those look great :slight_smile: Probably will go for the Corsair.

OK, I’ll work some numbers with regard to money spent on power compared to money spent on replacing worn-out drives.
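
Here’s the rough math I’m starting from; the 80 W average draw is purely an assumption until I can measure the real system:

```python
PRICE_EUR_PER_KWH = 0.35
AVG_DRAW_W = 80            # assumed average wall draw, to be replaced by a real measurement
HOURS_OFF_PER_NIGHT = 8

kwh_always_on = AVG_DRAW_W / 1000 * 24 * 365
kwh_night_off = AVG_DRAW_W / 1000 * (24 - HOURS_OFF_PER_NIGHT) * 365

print(f"24/7             : {kwh_always_on * PRICE_EUR_PER_KWH:6.0f} EUR/year")
print(f"nightly shutdown : {kwh_night_off * PRICE_EUR_PER_KWH:6.0f} EUR/year")
print(f"saved            : {(kwh_always_on - kwh_night_off) * PRICE_EUR_PER_KWH:6.0f} EUR/year")
```

At that draw the nightly shutdown would save on the order of 80€ per year, which I can then weigh against the extra wear from daily spin-ups.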

I understand the wish for all the bells and whistles, but you might want to take a step back and consider whether it’s really needed, as it adds more complexity and cost.

Most if not all(?) motherboards these days support hot swapping (or at least offer it). However, since most neither support SFF connectors nor offer enough ports for large disk arrays, the recommendation indirectly falls onto an HBA/RAID adapter because that’s the only option. There are SATA port multipliers etc., but they have limited compatibility with host controllers, limited bandwidth and are usually quite quirky as it is. That being said, is it really the end of the world if you needed to power down your system for 10 minutes to swap a drive in the worst case?

In my box at home I have an LSI 9211-8i adapter (an IBM M1015 crossflashed, to be exact) and while it runs fine and all, I’m kinda on the fence about recommending it for a new setup. It still runs great, but you need “special cabling”, some kind of cooling, it’s end of life, it requires an x8 slot (it could possibly work in an x4, but I haven’t tried and those are kinda rare) and it struggles with compatibility issues. There are of course newer variants, however they need even better cooling and are more expensive. Personally I’m likely going to replace mine with an ASM1166 card because I can live with 2 fewer SATA ports and that will also remove the need for a separate fan etc.

I would also raise a concern about fake LSI cards. I would honestly try to avoid “genuine branded” ones as they’re quite uncommon, and go for a vendor variant from, let’s say, Dell, HP, Lenovo or Fujitsu if I were to buy one. The one you linked to is a variant that I would sort into the “likely a Chinese clone” category, especially since they don’t mention it being pulled from something, just “refurbished”, and there doesn’t seem to be any sticker or information listing its origin.

The same goes for Intel NICs; there are lots of clones and other odd variants out there. Just Google it and you’ll find lots of posts regarding the topic.

I’m currently looking at a very similar build. However, I also intend to run a desktop session on it.

  1. I see the EPYC 7303P with 16 cores is also available in Germany and costs the same

  2. The current gen consumer CPUs are too good to ignore.

CPU performance is important for a VM/container lab. The aging EPYC 7003 platform has significantly less performance per € compared to them.

There is no 7203P/7303P on OpenBenchmarking, but there is the EPYC 7313P with 16 cores and a bigger cache, which costs €770 in Germany. It’s significantly slower than a Ryzen 9 7950X or i9-13900K, which are below €600. Even the €250 non-overclockable i5-14500 looks better.

There might be use cases where the 8 memory channels of EPYC 7003 provide better performance. However, in most tasks the available desktop processors are much better, including in multi-core benchmarks. The €250 i5 has 14 cores, the €440 i7 has 20 cores (even if many are E-cores).

  3. ECC and 128 GB on consumer CPUs.

There is so much confusion here that I spent quite a lot of time figuring it out.

(UPDATE: I stand corrected, ECC reporting is not well supported on Linux for consumer CPUs. This is only about actual ECC operation in RAM. See later posts.)

First of all, there is stable ECC on both Intel and AMD. Some motherboard producers don’t mention it and don’t test for it. They must be avoided. But some of them test and guarantee it in specs. The chips themselves don’t have inherent limitations, they have the same core design as server ones.

The biggest trap laid down by marketing is that using all 4 DIMMs will limit DDR5 memory to 3600 MT/s, on both Intel and AMD, because consumer CPUs have only 2 memory channels, so 4 DIMMs means running 2 DIMMs per channel. That’s the source of frustration for folks who spent money on fancy XMP/EXPO modules with speeds over 6000: once you plug in 4 of them, they will be limited to 3600, the same limit as the cheapest DDR5-4800.

Some boards advertise “up to 192 GB RAM, up to DDR5-9999 whatever”. That’s a lie. Either you get the high speed, or you can use all 4 slots. Never both.

However, a 4-DIMM setup with DDR5-4800 ECC memory will give you a confirmed DDR5-3600 with ECC, which is still a bit faster than DDR4-3200 (see the quick bandwidth comparison below).
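
For reference, the theoretical peak bandwidth in dual-channel mode; a quick sketch, real-world numbers will of course be lower:

```python
def peak_gb_s(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak: MT/s x 8 bytes per 64-bit channel x channel count, in GB/s."""
    return mt_per_s * bytes_per_transfer * channels / 1000

for name, rate in [("DDR4-3200", 3200), ("DDR5-3600 (4-DIMM limit)", 3600), ("DDR5-4800", 4800)]:
    print(f"{name:<26} {peak_gb_s(rate):5.1f} GB/s")
```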

If 64 GB of RAM were enough, faster speeds are also confirmed with ECC (there is a thread on this forum; I can’t include links).

To summarize, for consumer platforms:

  • UPDATE: EDAC reporting in Linux might be missing. Even when ECC is working in the system, there might be no way to observe it. See later posts.
  • 2x 32 GB ECC works with DDR5-4800, also confirmed at higher speeds
  • 2x 48 GB ECC should work too, but is not readily available yet
  • 4x 32 GB ECC works, limited to DDR5-3600
  • 4x 48 GB ECC will probably also work at DDR5-3600

  4. Now back to the impressive benchmarks.

I wonder how much they are influenced by over-spec RAM modules with only 1 DIMM per channel and no ECC enabled. I would love to see them with 4x DIMMs and ECC.

  5. There is a vacuum on the market for cheaper CPUs with 4+ memory channels.

The EPYC 7003 parts are unfortunately still expensive despite being 3 years old. The faster Zen 3 Threadrippers were limited to OEMs and not available at retail. The new EPYC 9004 and Threadrippers don’t have cheap models either. The EPYC 8004 brought them, but there are still no boards, and they are hard-limited by low frequencies.

Maybe it will get better with time. 48 GB ECC x2 = 96 GB will come to the market for consumer CPUs. And EPYC 7003 might eventually get cheap enough to be a good option.

  6. IPMI is available on some boards for consumer CPUs, as well as good IOMMU groupings.

One example is ASUS W680 IPMI.


I stumbled over that, too, yeah. :frowning: Pretty frustrating.

In my past research I completely disregarded Intel’s CPU lineup because I thought I wouldn’t get away with so few PCIe lanes (same with B650 / X670 + external GPU). Shame on me!

Let’s see if I got things right with the Asus W680 IPMI:

  • i5-13500 or i5-13600 (i7-13700 is probably too much)
  • decent core count with reasonable TDP
  • 4x 32 GB DDR5 ECC, though only at 3600
  • no need for a dedicated GPU, thanks to BMC, IPMI and on-CPU UHD Graphics 770
  • the top two PCIe slots in x8/x8: one for a 10 GbE NIC and one for an HBA

So far, that setup would suit my needs just fine:
enough available cores with huge single-core boost potential :white_check_mark:
IPMI :white_check_mark:
128 GB of ECC :white_check_mark:
1 PCIe slot with enough lanes for a 10 GbE NIC :white_check_mark:
1 PCIe slot with enough lanes for an HBA :white_check_mark:
at least 2x M.2 :white_check_mark:
and as a cherry on top, I’ll get power-saving potential and USB 3.2
:partying_face:

Or did I miss anything?

I’m also now reading more about the W680. It looks like Linux cannot report ECC errors on it yet. The error correction works, but there is no way to see it.

The W680 chipset was released just 2 years ago, and it was the first time Intel unlocked ECC on consumer CPUs. AMD never locked out ECC.

I need to read more. Maybe this is the annoying detail where server hardware would just work even if it’s 3 times slower.

ECC is supported on multiple consumer CPUs (multiple generations of Pentium and i3 CPUs), not all but many.

@HomlabHomer
I did look at the W680 initially, but the general bugs plaguing the platform and the power-hungry non-uniform cores made me look elsewhere. Unfortunately it didn’t seem to be isolated to one specific vendor, even if some fared better than others. You probably want to research that a bit…
The Gigabyte MW34-SP0 comes to mind as a total trainwreck, and people did post issues with the Supermicro board too; I don’t think the Asus mobo got much attention at all.

Looking at productivity benchmarks, higher memory clock speeds don’t seem to do much at all for overall performance.

I’m getting the same vibe from different postings:

No ECC reporting on W680 depending on the linux kernel:
https://www.reddit.com/r/homelab/comments/18lf855/intel_w680_ddr5_and_ecc_reporting/

Presumably it works on TrueNAS (direct link to post; the description is off):

I found a similar workstation board from Supermicro:
https://www.supermicro.com/de/products/motherboard/x13sae-f

Can’t see the picture, I’d need an account for that. I wonder how it works. From what I see, Intel’s W680 chipset is not supported by Linux EDAC yet.

I found this in linux-edac mail lists:
https://marc.info/?l=linux-edac&m=168269726103330&w=2

Somebody replied last year that the i9-12950HX is not supported by the reporting system. It’s about the WM690 chipset, which is the mobile counterpart of the W680. Error correction is still being performed, just without reporting. A quote:

I’m sorry that there isn’t an EDAC driver for your system. Most
of the effort here goes to EDAC for server systems.

I also found that EDAC support for recent Ryzen 7000 chips made some news: https://www.phoronix.com/news/AMD-EDAC-Ryzen-7000-Series (however, it’s only merged in Linux kernel 6.5, and Debian 12 is based on 6.1).
I couldn’t find information like that about Intel CPUs.
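
For anyone who wants to check their own box, this sysfs interface is what the EDAC reporting discussion is about. A small read-only sketch:

```python
from pathlib import Path

mc_root = Path("/sys/devices/system/edac/mc")
controllers = sorted(mc_root.glob("mc*")) if mc_root.is_dir() else []

if not controllers:
    print("No EDAC memory controller registered - ECC may still work, but errors won't be reported.")
for mc in controllers:
    corrected = (mc / "ce_count").read_text().strip()    # corrected error count
    uncorrected = (mc / "ue_count").read_text().strip()  # uncorrected error count
    print(f"{mc.name}: corrected={corrected} uncorrected={uncorrected}")
```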

You would be surprised how much USB 3.0+ covers now. The only reason you want a ton of PCIe lanes today is if you truly plan to go balls deep with four GPUs or 20+ NVMe drives or similar. Otherwise, you really only need three PCIe slots and very few of those require a full x16 setup.

A GPU is fine with 4.0 x8, a NIC is usually overkill even at 4.0 x4, and for a disk controller connecting 20 SATA drives over SAS, x4 is enough too. This is with buying everything brand new, of course.

AM5 does have one potential cost saving with the ASRock Rack B650D4U at $359. There are some other options available, but prices only go up from there. Pair the B650D4U with a Ryzen 9 7900 and you have a powerful but low-cost and low-power server foundation.

That said, you do you here! :slight_smile: