Recommendations on a Ryzen build used as a home server

I'm trying to get a build up and going, researching parts and such. I'm looking at making an all-in-one home server. I'll be hosting a few game servers, a voice server, likely a couple of websites, home automation things, TrueNAS, Plex/Jellyfin, a few other home services, and a set of VMs I'll be running to learn more sysadmin kinds of things. While I know most of these could live in one large VM with a few Docker containers, I'm probably going to have most of them in their own VM.
I plan on running Proxmox, then having everything run on top of that.
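Side note: since everything will sit on Proxmox, the per-service VMs can be created over its API instead of by hand. A minimal sketch, assuming the proxmoxer Python client; the host, credentials, VM ID, and sizes are all placeholders:

```python
from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

# Placeholder host and credentials -- adjust for your install.
proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

# Create a small VM for one service (e.g. a game server).
# All IDs and sizes here are made up for illustration.
proxmox.nodes("pve").qemu.post(
    vmid=101,
    name="game-server",
    cores=4,
    memory=8192,                  # MB
    net0="virtio,bridge=vmbr0",
    scsi0="local-lvm:32",         # 32 GB on the default LVM-thin storage
)
```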
The list of parts I'm looking at:

Ryzen 9 5950X

Gigabyte X570 Aorus Xtreme (this seems really overkill for me, but it's one of the few boards I've looked at that comes with 10 gig and 1 gig ports out of the box, and I've heard this board has really nice IOMMU groupings). If anyone has any other recommendations, I'm all ears.

I would like to use ECC RAM, mostly for TrueNAS. I know it's a bit slower, but having the error checking there for my data is a nice-to-have. Not sure which ECC RAM I'm going to buy though; that stuff is not cheap, so I might buy used here.

Drives:
A small <1TB M.2 drive as Proxmox's boot drive.
Maybe a second one for the TrueNAS VM. Both of these drives are probably going to be Samsung EVOs of some flavor.
I see most of the VM disks sitting in the storage that TrueNAS is going to be hosting. To start with it's going to be 8x 2TB drives, because I have those already, but after the HBA card upgrade talked about below that will be expanded.

3ware 9650-8: I have this card right now and will use it from day one, but I will upgrade to the 24-port version sometime in the future.

Future upgrades may include:
Add-in graphics card for transcoding and/or a video editing VM, if I get into that.
Add-in TV tuner card, or maybe a dual 10 gig SFP+ card once I get a few other SFP ports in my home. The real limitation I see here is that there are only 3 PCIe slots on the current mobo pick.

Going to shove all of this in a Fractal Meshify 2 XL for all those sweet, sweet drive slots.

The mobo here is really the only hitch in the plan, I think, because it only has the 3 PCIe slots. However, all the other mobos I've seen that have PCIe x1 slots don't have the 10 gig network built in.

I looked at going EPYC gen 1 and Threadripper gen 1, but for my uses that seems WAY overkill and a bit too pricey for my blood, even looking at used stuff. I chose the 5950X here for Zen 3 and its 3.4 GHz base clock, boosting up to 4.9 GHz.

Any thoughts on this build? Recommendations to make it better?


I had similar considerations when building my home server. I went with the X570D4U-2L2T for my board: dual 10 Gbit Ethernet, ECC, IPMI, great IOMMU groups. ASRock Rack has several server-like boards for AM4 with different configurations. You need DDR4 UDIMMs for ECC on AM4.
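If you want to verify a board's groupings yourself before committing to passthrough, here is a minimal sketch (plain Python over standard sysfs paths; needs a Linux host with IOMMU enabled in BIOS and kernel). Devices that land in the same group generally have to be passed through together:

```python
#!/usr/bin/env python3
"""List each IOMMU group and the PCI devices it contains."""
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found; is IOMMU enabled in BIOS/kernel?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"Group {group.name:>3}: {', '.join(devices)}")
```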

I bought a super cheap $20 240GB SATA SSD for Proxmox. It's been serving me well for almost a year now. Don't waste resources on a boot drive.

Welcome to consumer-grade land! Unless you go full server board like Xeon or EPYC, you will have to pick and choose. In practice you have two x8 slots (or one x16) as expansion to work with on pretty much all boards, and maybe some x1 or x4 slot off the chipset with a questionable use case. I have on-board 10 Gbit NICs and 8x SATA ports, so I got away with not needing a 10 Gbit card or HBA. I went with a quad Gbit NIC for the pfSense VM and a GPU (for a gaming VM).

You can get an SFP+ → RJ45 transceiver if you have SFP infrastructure and RJ45 on your board. So you can still get an SFP switch and don't need to spend a precious expansion slot on the server.

You can go server platform, but expect a multiplication of power draw, heat, and noise, and you're basically stuck with large form factors and cases. But buying old server stuff can be rather cheap initially, and I'm rather jealous of used DDR3 RDIMM prices compared to DDR4 UDIMMs :slight_smile:


I was afraid of this. I have seen the board you talked about a few times here in the forums. I'm not sure I like the x16, x1, x4/8 slot layout on that one. The dual 10 gig NIC is very nice though; I've not seen another consumer board with that at any reasonable price.

Do you have a recommendation on RAM? Newegg seems to be sparse with the stuff, only offering a handful of sticks, all at 3200 or below and 16 gigs per stick or below.

Good idea on the SATA SSDs; I'll take a look into those. I figured I wasn't going to use the M.2s for much other than maybe a cache drive for TrueNAS, but even that wouldn't have to be very big.

Oh, I had forgotten about wanting to try pfSense, so I've got to add a NIC card for that. I had thought about the dual 10 gig SFP+ card just so I could always have a dedicated line from the server to my desktop, but that might not be an actual NEED, especially if I just put an editing VM on the server.

I thought hard about going the EPYC route, and looked at some kits on eBay and such, but to get it in budget I would have had to go for gen 1. Then, some of the prebuilt servers I saw were mostly 1U, and this thing is going to be sitting beside me in the office; I don't want to have to constantly hear a tornado :slight_smile: Then I got to thinking about the power as well, and I'm not sure it's beneficial to have a second heater in the room!
Man, I just want to have my cake and eat it too hahaha. :rofl:

I understand your desire to get a home server that makes full use of the AM4 platform, since TR/EPYC is a bit too “large” if you don't really use all of their PCIe.

I have had many, many AM4 motherboards since 2018, starting with X470, and looking back at them from the present I'd choose between two models:

  1. ASUS Pro WS X570-ACE: The only (!) X570 motherboard where you can use 3 PCIe slots at x8, since its chipset PCIe slot is connected with 8 PCIe lanes to the X570 chipset, which itself is connected to the CPU via PCIe Gen4 x4. This means up to a PCIe Gen3 x8 card runs at close to its full performance in the chipset PCIe slot (see the quick bandwidth sketch after this list).

  2. ASUS ProArt X570-CREATOR WIFI: Advantages: Thunderbolt 4, 10 GbE onboard (but only from Marvell). Disadvantages: only a PCIe x4 slot from the X570 chipset, since the Thunderbolt 4 controller uses up 4 PCIe lanes.
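As a quick sanity check on point 1, a rough per-direction bandwidth calculation (a sketch that only accounts for line encoding, not protocol overhead) shows why a Gen3 x8 card behind the chipset's Gen4 x4 uplink barely loses anything:

```python
# Approximate usable bandwidth per lane in GB/s per direction,
# after 128b/130b line encoding (Gen3: 8 GT/s, Gen4: 16 GT/s).
LANE_GBPS = {3: 0.985, 4: 1.969}

def link_bw(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return LANE_GBPS[gen] * lanes

print(f"Gen4 x4 chipset uplink: {link_bw(4, 4):.1f} GB/s")  # ~7.9 GB/s
print(f"Gen3 x8 add-in card:    {link_bw(3, 8):.1f} GB/s")  # ~7.9 GB/s
# The uplink matches a Gen3 x8 device almost exactly, minus whatever
# the chipset's other consumers (SATA, USB, NVMe) pull at the same time.
```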

I use both of these motherboards as daily drivers, the ProArt for a desktop system and the Pro WS for an “experimental” home server build.

ASUS has excellent support for PCIe bifurcation (CPU PCIe slots as well as the chipset x8 slot), and you could split one PCIe slot to serve two low-profile x4 add-in cards, for example.

The only reason I write “experimental” home server is that I have issues with newer Broadcom HBAs and their buggy firmware and drivers, but I can replicate the issues on motherboards from other manufacturers, so it has nothing to do with ASUS. :frowning:

I tested both motherboards with 128 GB DDR4-3200 ECC UDIMM from Micron, working like a charm (3700X, 3900 Pro, 3950X, 5900X, 5950X).


I briefly looked at the Pro WS X570-ACE but knocked it out of the running when I noticed it didn't have onboard 10GbE. Now that I'm looking at it again, that might not be such a bad thing: an x8 GPU, x8 HBA, x1 tuner, and x8 NIC card seems like it would be a pretty solid setup that would work. I wouldn't be able to do the pfSense VM any more, as the dual 10GbE wants x8 and the 24-port HBA wants x8 as well. That's not a deal breaker though. This one actually has all the slots I want, assuming I can use them all at the same time. I'm going to look into this one some more I think, especially on the IOMMU side of things, because I have the feeling I'm going to be passing a lot of stuff through to different VMs.

I looked into the Creator as well; Thunderbolt 4 and onboard 10GbE is pretty nice. That would allow me to add the quad NIC later on down the line and still have 10 gig for the TrueNAS trunk.

I'm really glad I made this post now, as these comments are helping me tease out exactly what I need. Now I'm thinking I'm going to be using the Pro WS X570-ACE.

Now that you've said that, I think I'm going to pick up 2 sticks of Micron 32GB 2Rx8 DDR4-3200 to start with (I wish I could just buy all 4 right now, but those are going to be rolling upgrades based on cost).

I have to agree with Exard3k here. I went with an ASRock Rack board myself for my Ryzen build, after almost a year of issues using consumer boards.

I had to RMA 3 ASUS consumer boards in a row, because as soon as I put ECC UDIMMs in them the board died within a month or so. As soon as I got the server board it worked flawlessly, but I RMA'd the CPU as well in case the motherboard damaged it on its way out.

This is a sample size of 1, but there are plenty of accounts of people having trouble using ECC UDIMMs in consumer boards. It's really not guaranteed it'll work, unfortunately. Best of luck with the WS Pro Ace model; I hope it's better validated than ASUS's cheaper stuff.

Should you choose one of ASRock Rack's Ryzen server boards, you'll save money on the 10GbE NIC and leave more PCIe lanes open for activities.


That’s what forums are for :wink:

ALWAYS check the mainboard's QVL for recommended memory. Most of the ASRock Rack board users here in the forums, including myself, use the Kingston Server Premier line. KSM32ED8/32ME and KSM32ES8/16ME are the SKUs for 32GB and 16GB respectively. 3200 is the fastest ECC UDIMM speed available.

Keep in mind that the 5950X has no iGPU and you probably want some display option, so an old GPU will be needed. The ASRock board has IPMI with on-board ASPEED AST2500 graphics, yet another “slot saver”, and the KVM option via the web interface is just so convenient.
But I really like the built-in U.2 option on the ASUS; I'd like to see more of this on other boards.

I have two NVMe drives in my server; one is used as L2ARC for the ZFS pool and the other is passed through to my gaming VM. Having a good cache on top of a reasonable amount of memory increases pool performance by quite a bit. I couldn't live without my precious L2ARC anymore.
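If you want to check whether the cache actually earns its keep, here is a small sketch that reads the hit rates from the OpenZFS kstats (assumes ZFS on Linux, which exposes them under /proc/spl/kstat/zfs/arcstats):

```python
#!/usr/bin/env python3
"""Print ARC and L2ARC hit rates from the OpenZFS kstats."""

stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f:
        parts = line.split()
        # Data lines look like: "hits  4  123456" (name, type, value).
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])

def hit_rate(hits: int, misses: int) -> float:
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

print(f"ARC hit rate:   {hit_rate(stats['hits'], stats['misses']):.1f}%")
print(f"L2ARC hit rate: {hit_rate(stats['l2_hits'], stats['l2_misses']):.1f}%")
print(f"L2ARC size:     {stats['l2_size'] / 2**30:.1f} GiB")
```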

And ASRock uses quality chips for the NICs. Both the 1 Gbit and 10 Gbit NICs are Intel across the board. I was just playing point-and-click adventure in Proxmox while seeing others with e.g. Broadcom chips having trouble with their NICs. Although pricey, I'm still happy with all the features I paid for, as the board has everything built in, in a well-rounded package. Wendell used the X470 version too in some of the L1 videos, and I can't blame him.

There is a reason why board manufacturers usually don't do these things. There is already a potential bottleneck in the chipset in terms of bandwidth, and loading potentially 8 more GByte/s of traffic onto 4 lanes also shared by 2x NVMe and I/O just further overbooks an already troublesome bottleneck. But for (old) low-bandwidth x8 cards, it'll probably be fine. In the end you plug something like a dual 10 Gbit NIC in there instead of buying a board with 10 Gbit in the first place.

All X570 boards have exactly 24 lanes to work with. You can play some tricks to expose something more to the user, but it always comes at a price.

Well, that's anecdotal evidence for you: I, on the other hand, came to ASUS after several ASRock Rack X470D4U units that had been highly unreliable.

@Alupus

I use the Pro WS with an ASRock Rack PAUL IPMI GPU in a chipset x1 slot to not give up any “proper” PCIe slots.

In the chipset x8 slot I use an Intel XL710 dual 40 GbE Ethernet adapter, and it's doing fine.

That's why I prefaced that part with “This is a sample size of 1”.
To be taken with a grain of salt, but it's not the first consumer board I've had trouble with, and I'm not alone.

Either way, you’re building a kickass server @Alupus

I mean, we both know how our servers feel. I wanted a low-power, low-noise all-in-one server, and the AM4 platform was able to deliver. I wouldn't mind another 8 CPU lanes along with a corresponding slot, but I managed to get everything I wanted. If you can make do with limited lanes and limited memory bandwidth, AM4 is a great platform. There are many flavours, but this one is mine :slight_smile:

Same here. I have a 2600X in an X470D4U, and despite the limitations I don't think I'll ever be limited by that board unless I find a need for passing a GPU through to my VMs. The only things my PCIe lanes are used for right now are a single M.2 SSD and the Intel 10GbE NIC I got for free. Still plenty of room for activities! I wanted a low-power, low-heat-output server, and Ryzen just over-delivers in that regard.

I might end up selling this combo and upgrading to AM5 when those reach real people prices. Might be a while though.


I'm keeping my beauty for a long time. I was about to wait another year for AM5, but that means new boards in limited variations with teething issues, and 12 Zen 3 cores is more than enough. And I certainly wouldn't want to buy DDR5 ECC memory during the first year or so. But second-generation AM5 might have proven boards and reasonable prices.


OK, so I've taken another look at all of this stuff and seem to have flipped back to the X570D4U-2L2T. It's got 2 onboard 10 gig and 2 onboard 1 gig ports. Since my ISP is never going to give me more than a gig to my house, I think I can just give pfSense the two 1 gigs if I go that route, one 10 gig from TrueNAS to my workstation, and one trunk 10 gig for everything else.

I think the PCIe slots are fine: the one x16, which will run at x8, for a GPU; the x8 slot for the HBA; and the x1 slot for the TV tuner if I ever get that. I'll have to run a PCIe riser to get the GPU out of the way. The only thing I'm unsure about is whether the HBA will clear the chipset heatsink on this board. Does anyone have any answers on that?

Totally agree with this… this server is going to be around for a very long time.

This is a problem for me: new tech on the horizon will always be nicer to have than current end-of-life generation stuff, but waiting forever doesn't get anything working right now :slight_smile: It's an issue I wrestle with a bit.

Thanks for all the input so far, guys; it's been pretty eye-opening.


An x16 card DOES block the heatsink. I temporarily have a GPU installed there. It fits, but it prevents using a fan. I recommend some degree of airflow or a 40mm fan, because the chipset gets rather toasty without any of that.

I used the 1 Gbit ports for pfSense before; that worked very well. I now pass through single ports to VMs, and pfSense got its own quad Gbit card. Just be careful not to assign the IPMI port to anything else.
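In case it helps with the single-port passthrough: a small sketch (plain Python over standard sysfs; Linux only) that maps each physical interface to its PCI address, so you know which function to hand to a VM:

```python
#!/usr/bin/env python3
"""Map network interfaces to PCI addresses for passthrough planning."""
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    device = iface / "device"
    if device.is_dir():  # skips virtual interfaces (lo, bridges, taps)
        pci_addr = device.resolve().name  # e.g. '0000:03:00.1'
        print(f"{iface.name}: {pci_addr}")
```

Each .0/.1 function is one port; whether a port can go to a VM on its own still depends on the IOMMU grouping. In Proxmox, the printed address is what goes into something like `qm set <vmid> -hostpci0 0000:03:00.1`.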


How plug-and-play is ECC RAM? It seems like the 32 gig Kingston sticks have gone EOL and been replaced by the KSM32ED8/32HC. While it would be logical to just assume it works since Kingston says the new part replaces the old, it's not on the QVL for the mobo.

Retailers in Germany still have the Kingston 32ME SKU in stock. I obviously can't comment on whether the new Hynix C-die SKUs are compatible (the previous ones were Micron E-die; ME = 32GB Micron E, 32HC = 32GB Hynix C). But checking the X570D4U or X470D4U forum threads here, or other forums/Reddit/whatever, may help in finding other SKUs from different vendors that work. Otherwise there is ASRock Rack support by mail if you get desperate.
I know those boards are kind of quirky; I've seen users with non-QVL non-ECC memory who didn't get their memory to work at all (X570D4U-2L2T thread here on L1T).

Wasting $600+ on the wrong memory is not a trivial matter. In my country I can send a product back without giving any reason within the first 14 days after delivery. Trial and error might be an option for the Kingston Hynix SKUs if that applies to you as well.
But just spam all the ASRock Rack board threads here; people are usually helpful :slight_smile:
