Been wanting to build something like this for a long time and finally got approval from the higher-ups. Basically, our existing hyperconverged VM server nodes are EOL. We got a quote from the same company (Scale Computing), and the cost for some mediocre hardware plus their HC3 hypervisor-in-a-box was right around $80k for a 3-node cluster. Altogether it's 48c/96t with 6 Xeon 6244s, 768 GB of RAM, and a decent amount of NVMe storage. So not terrible; the actual hardware is about $50k. The software licensing is a lot, though, and due to COVID that price is no longer easy to swallow.

We don't want to spend that money and not see a tangible improvement in our ERP software, most of which runs on Microsoft SQL Server, with some old .NET code as the client (most of it single-threaded, sadly). So my idea was to build a test server to see what kind of improvement we'd get from high-end hardware, but at a much, much cheaper price. This way, we could see whether paying the big bucks would pay off in terms of speed. If it turns out well, maybe we could build several of them and cluster them in Proxmox. Won't know until it's built and we're able to play around with Proxmox.

With that said, here's the build idea:
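Since the ERP client is mostly single-threaded, what matters most is per-core throughput (clock × IPC). Before the real SQL Server tests, a crude first sanity check could be a tiny single-threaded loop run on both an old Xeon node and the 3950X box. This is just an illustrative sketch (the function name and iteration count are made up), not a substitute for benchmarking the actual ERP workload:

```python
import time

def single_thread_score(iterations=2_000_000):
    """Tight integer loop as a rough stand-in for single-threaded work.

    Python's GIL keeps this on one core, so iterations/second loosely
    tracks clock speed x IPC. Run the same script on old and new
    hardware and compare the numbers.
    """
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += (i * i) % 7
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # iterations per second

if __name__ == "__main__":
    print(f"{single_thread_score():,.0f} iterations/sec")
```

Obviously the real test is restoring a copy of the production database and timing the slow ERP screens, but a toy loop like this at least confirms the per-core uplift is real before going further.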
Processor: Ryzen 3950X - Extremely good cores/$ at high clocks and strong IPC.
Motherboard: X570D4I-2T - Newish board from ASRock Rack that fits in a small rack chassis, has IPMI, and apparently supports PCIe bifurcation for NVMe SSDs. Roughly $500, hard to find.
RAM: 128GB (4x32GB) Samsung DDR4 ECC memory, 2666 MHz - Listed as M474A4G43MB1-CTDQ on the memory QVL for that board. Very hard to find; if it's unavailable, I'd just do a non-ECC G.Skill kit.
Storage: 4x 2TB Inland PCIe 4.0 NVMe SSDs - Extremely high endurance at 3,600 TBW, and a very good price as Micro Center's in-house brand. Could also go with the Corsair MP600, which is roughly the same price with no purchase limit. In fact, I'm betting these are nearly identical SSDs from the same manufacturer, just branded for each company.
PCIe Adapter: ASUS Hyper M.2 x16 to 4x4x4x4 adapter - Needed to connect all four M.2 drives via the single PCIe x16 slot. This machine won't use a GPU at all and will only be managed via the onboard graphics, so the slot is free to use. I've read on a forum post that ASRock does in fact support 4x4x4x4 bifurcation on this board, so the card with four SSDs should work.
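Once the OS is up, it's easy to confirm the bifurcation actually took: each of the four M.2 drives on the Hyper card should enumerate as its own NVMe controller. A quick check on Linux (assumes pciutils is installed; safe to run anywhere) might look like:

```shell
#!/bin/sh
# Count PCIe NVMe controllers; with 4x4x4x4 bifurcation enabled in the
# BIOS/UEFI, this should report 4 on this build (one per M.2 drive).
nvme_count=$(lspci -nn 2>/dev/null | grep -ci 'non-volatile memory controller' || true)
echo "NVMe controllers seen: ${nvme_count:-0}"

# The matching block devices, if the drives enumerated:
ls /dev/nvme?n1 2>/dev/null || echo "no NVMe block devices visible"
```

If only one drive shows up, the slot is almost certainly still running as a single x16 link and the bifurcation setting needs another look.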
Chassis: PlinkUSA 1U 9.84" Deep Rackmount Chassis - This is the one area where I'm sure some improvement could be made. It's a chintzy, cheap case, but I believe it should work. I would gladly take suggestions; I'm slightly worried about fitment of the next piece of the build…
CPU Cooler: Dynatron L3 1U AIO water cooler - The special sauce. How else can you cool a 3950X in a 1U? My only concern is whether it fits in the case I selected. Like I said, case suggestions would be great.
PSU: Depends on the chassis, but if I go with the chosen case, I'd use the Athena Power 1U 500W 80 Plus Silver PSU. It fits the SFF chassis, is server-grade (non-redundant; I'd gladly go redundant if I could find one that fits), and has ample power for the 3950X. Probably overkill, honestly, but better to go too large than too small on a PSU.
That's the build. Altogether it comes out to $3,855 for what I would consider a very powerful system that will give us insight into higher-end server performance (it should actually perform better per core, thanks to the high CPU clocks and PCIe 4.0 SSDs). Let me know what you guys think and what I should change.
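For the storage side of the comparison, something like the fio job file below could sanity-check the PCIe 4.0 drives once the box is built, before trusting vendor numbers. This is just a sketch, not a tuned benchmark; the device path is an assumption, and it's read-only so it won't wipe anything:

```ini
; nvme-randread.fio - rough 4K random read check on one NVMe drive
; run with: fio nvme-randread.fio
[global]
ioengine=libaio
direct=1
time_based
runtime=60
group_reporting

[randread-4k]
rw=randread
bs=4k
iodepth=32
numjobs=1
filename=/dev/nvme0n1
```

Running the same job against the old cluster's storage would give a concrete before/after number to put next to the ERP timings.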
P.S. I’m new here, so I can’t post links for each item, I’m sorry!