1U Ryzen Server Proxmox build for a Test Server at work

Been wanting to build something like this for a long time and finally got approval from the higher-ups. Basically, our existing hyperconverged VM server nodes are EOL. We got a quote from the same company (Scale Computing), and the cost for some mediocre hardware plus their HC3 hypervisor-in-a-box was right around $80k for a 3-node cluster. Altogether it's 48c/96t with 6 Xeon 6244s, 768GB of RAM, and a decent amount of NVMe storage. So not terrible; the actual hardware is about $50k. The software licensing is a lot though, and due to COVID, that price is no longer easy to swallow.

We don't want to spend that money and not see a tangible improvement with our ERP software, most of which runs on a Microsoft SQL Server, with some old .NET code as the client (most of it's single-threaded, sadly). So my idea was to build a test server to see what kind of improvements we would get with high-end hardware, but at a much, much cheaper price. This way, we could see if paying the big bucks would pay off in terms of a speed improvement. Maybe if it turns out well, we could build multiple of them and cluster them in Proxmox. Won't know until it's built and we're able to play around with Proxmox. With that said, here's the build idea:

Processor: Ryzen 9 3950X - Extremely good cores per dollar at high clocks and IPC.
$699.99

Motherboard: X570D4I-2T - Newish board from ASRock Rack that can be placed in a small rack chassis, has IPMI, and apparently supports PCIe bifurcation for NVMe SSDs.
Roughly $500, hard to find.

RAM: 128GB (4x32GB) Samsung DDR4 ECC memory, 2666MHz - Listed as M474A4G43MB1-CTDQ on the memory QVL for that board. Very hard to find; if it's unavailable, I'd just do a non-ECC G.Skill kit.
$760

Storage: 4x 2TB Inland PCIe 4.0 NVMe SSD - Extremely high endurance at 3600TBW, and a very good price as Micro Center's in-house brand. Could also go with the Corsair MP600, roughly the same price and no buy limit on that. In fact, I'm betting these are nearly identical SSDs from the same manufacturer, just branded for each company.
$1560

PCIe Adapter - ASUS Hyper M.2 x16 to x4/x4/x4/x4 adapter: Needed to connect all of the M.2 drives via the single PCIe x16 slot. This machine won't use a GPU at all and will only be managed via the board's onboard graphics, so the slot is free to use. I've read in a forum post that ASRock does in fact support x4/x4/x4/x4 bifurcation on this board, so the card with 4 SSDs should work (there's a quick drive-enumeration check sketched after the build total below).
$70

Chassis - PlinkUSA 1U 9.84" Deep Rackmount Chassis - This is the one area where I'm sure some improvement could be made. This is a chintzy, cheap case, but I believe it should work. I would gladly take suggestions; I am slightly worried about fitment of the next piece of the build…
$55

CPU Cooler - Dynatron L3 1U AIO Water Cooler - The special sauce. How else can you cool a 3950X in a 1U? My only concern is whether it fits in the case I selected. Like I said, case suggestions would be great.
$100

PSU - Depends on the chassis, but if I were to go with the chosen case, I'd use an Athena Power 1U 500W 80 Plus Silver PSU. It fits the SFF chassis, is server grade (non-redundant; I'd gladly get a redundant one if I could find one that fits), and has ample power for the 3950X. Probably overkill honestly, but better to go too large than too small on a PSU.
$110

That's the build. Altogether it comes out to $3855 for what I would consider a very powerful system that will give us insight into higher-end server performance (it should actually perform better due to the high CPU clocks and PCIe 4.0 SSDs). Let me know what you guys think and what I should change.
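On the bifurcation point from the adapter entry above: once the x16 slot is set to x4/x4/x4/x4 in the BIOS, a quick way to confirm all four drives actually enumerate on the Proxmox host is something like this minimal sketch (standard sysfs paths only; `lspci` or `nvme list` would tell you the same thing):

```python
#!/usr/bin/env python3
# Sanity check after enabling x4/x4/x4/x4 bifurcation: list every NVMe
# controller the kernel enumerated and its model string.
import os

NVME_SYSFS = "/sys/class/nvme"

controllers = sorted(os.listdir(NVME_SYSFS)) if os.path.isdir(NVME_SYSFS) else []

for ctrl in controllers:
    with open(os.path.join(NVME_SYSFS, ctrl, "model")) as f:
        print(f"{ctrl}: {f.read().strip()}")

print(f"{len(controllers)} NVMe controller(s) found (expecting 4 on the Hyper M.2 card)")
```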

Thanks!

P.S. I’m new here, so I can’t post links for each item, I’m sorry!


Welcome to the Forum. Sounds like an interesting build you have on order.

Have you considered off the shelf, instead of roll your own?

Something like this:
HPE ProLiant DL325 Gen10+ EPYC 7302P 16-Core 32GB 8SFF P408i-a 500W, 3-year NBD onsite warranty, $2257 as of today. If you can live without PCIe 4.0, the Gen10 (last year's model) is $1500.
https://www.provantage.com/hpe-p18604-b21~7HPE96FN.htm

Here’s a peek at the innards


Absolutely, I've thought of Epyc for sure. My only concern is the relatively low clock speeds of the Epyc platform. Our ERP is running some very old code, most of which we are pretty sure is single-threaded. Everything we've been told by the developers is that you want a CPU with the fastest single-core performance possible. That's why we want to see if a much faster, higher-IPC CPU would provide a benefit. We are positive the NVMe drives will provide a huge benefit either way, as our existing server barely scores 75MB/s linear reads on CrystalDiskMark. WITH 16 SAS HDDs. I honestly think there's more going wrong under the hood of the hypervisor that's causing such terrible disk I/O, similar to how a Ceph implementation is very computationally heavy and tends to sabotage IOPS due to the CPU and network load from replicating storage across the cluster.
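To get an apples-to-apples comparison on the new box, that same kind of linear-read number is easy to reproduce inside a VM. Here's a rough sketch; the file path is a placeholder for any multi-gigabyte file on the disk under test, and for serious numbers you'd want fio with direct I/O, since buffered reads like this get inflated by the page cache on repeat runs:

```python
#!/usr/bin/env python3
# Rough sequential-read throughput check, to compare against the ~75MB/s
# CrystalDiskMark result from the existing cluster.
import time

TEST_FILE = "/tmp/testfile.bin"   # placeholder: any multi-GB file on the target disk
CHUNK = 1024 * 1024               # 1MiB reads, roughly CrystalDiskMark's SEQ1M test

total_bytes = 0
start = time.monotonic()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.monotonic() - start

print(f"Read {total_bytes / 1e6:.0f} MB in {elapsed:.1f}s -> {total_bytes / 1e6 / elapsed:.0f} MB/s")
```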

At any rate, yes, Epyc would be a great choice for the safe route. We just really want to test the ERP performance difference with a very fast CPU under a single-threaded load, and I'm doubtful Epyc will be able to compete as well there.

How many cores are you already licensed for?

Imo, if there isn't a specific feature you need from the X570 chipset… and the availability/price of X470 is more attractive… I'd go X470.

Be cautious about CPU cooler and RAM clearance… Is this 1U cooler rated for that TDP on AM4?

If your company plans to lean more towards Epyc Rome… I'd build your test/PoC server based on it also.

The only way to know is to test.

Epyc has lower clocks but more cache and much more memory bandwidth…
It also won't be limiting your nodes to 128GB of RAM over time, either.

…only way to know is to test.

This is the exact 1U cooler for my first-gen Epyc (7351P) server on an HPE G10.

It's rated for 180W, and I have run a sustained F@H load overnight on all cores; it never went above 62°C in my temperature-controlled environment (70°F), and the fans stayed under 50%.


Hey, thanks for the reply. We're currently only licensed for 4 cores but will be testing at 8 cores to see if there is any measurable improvement. We believe our main issue with SQL Server is file I/O, as we don't have close to enough RAM for the whole database to sit in memory, so it's swapping a lot.

The main reason for going with X570 is PCIe 4.0, which allows for the SSDs I've chosen. These SSDs can hit 5GB/s linear reads, whereas a PCIe 3.0 SSD is limited to half of that at best. The price difference between the PCIe 3.0 and 4.0 versions of the drives is small, and the motherboard price is actually close enough to not matter in our case.

I'm confident the cooler will work, as this board's memory and CPU socket are oriented 90 degrees from standard consumer boards, and the cooler appears to have a very small surface area outside the cold plate. I'd go with an air-cooled solution if there were any AM4 coolers that had the fins facing the correct way and I thought I could actually get enough airflow for it to work. The 3950X is not exactly cool, unfortunately. This cooler is rated for 165W TDP, so it should be great.
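Since the working theory is buffer-pool pressure, it's also worth snapshotting SQL Server's own counters before and after the hardware change. Here's a minimal sketch (server name and credentials are placeholders): a low Page Life Expectancy combined with a high Page reads/sec is the classic sign the data doesn't fit in memory and SQL Server is hammering the disks.

```python
#!/usr/bin/env python3
# Snapshot SQL Server buffer-pool pressure counters (Buffer Manager object).
# Requires `pip install pyodbc` and the Microsoft ODBC Driver for SQL Server.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=erp-sql-test;DATABASE=master;"   # placeholder server name
    "UID=monitor;PWD=changeme"               # placeholder credentials
)

QUERY = """
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Page reads/sec');
"""

conn = pyodbc.connect(CONN_STR)
for counter_name, value in conn.cursor().execute(QUERY):
    # counter_name is space-padded in the DMV, hence the strip()
    print(f"{counter_name.strip():<25} {value}")
conn.close()
```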

Not sure this would work on AM4, however. Could work on a Threadripper or Epyc, though.

Yeah, this is for SP3, but if you wanted feedback about Dynatron stuff, it's pretty banging.

I believe you should be able to now and going forward. If not, let me know.

Yes, I was originally going to do a 2U build with the Dynatron A24 (https://www.dynatron.co/product-page/a24), but then I found that liquid cooler from them. I'm pretty much exclusively going with them, as their coolers are usually made for boards with a server layout. They make a 1U cooler, but I've read it's pretty garbage for anything with a remotely high TDP (https://www.dynatron.co/product-page/a18).


Yeah I don’t think those work too well but I’ve never seen a proper review.

The issue is that the ones with blowers don't really work for servers; not all servers have an available fan header near the CPU, so the short cable on this particular heatsink is probably not what you're looking for. Most servers come with robust fans anyway.

https://www.alphacool.com/enterprise-solutions-sets

Why would you show me this on pay day?

Sick. I've never done custom water cooling; it seems straightforward, but I wouldn't want to risk it in a work environment. At home, though…

It's becoming more common on server stuff.
