AMD Epyc Rome - Proxmox Server Build (WIP) With Pictures

So it’s been a little while, but I posted a picture of an Epyc 7402P processor in the What you acquired recently thread https://forum.level1techs.com/t/post-what-new-thing-you-acquired-recently/149881/6240 and said that a Proxmox build was incoming. Well, I finally have most of the pieces, so I began the build yesterday.

I started off with the case I knew I wanted: the iStarUSA D-400L-7. Industrial Chassis | iStarUSA Products | D-400L-7 - 4U High Performance Rackmount Chassis This roughly equates to the iStarUSA D Storm Model 400 (4U chassis). It has 7 total 5.25-inch bays up front (hence the 7 in its model number), and it’s the long version (L), which has a mid-chassis fan wall. I figured this would be a good case for a number of reasons, but perhaps most importantly it can grow and accommodate a fair amount of hardware and cooling. I did not expect to find hot-swap fans in the fan wall.

I decided, early on, that I wanted to go EPYC with this build because of the sheer number of CPU options available. I contemplated a first-gen Epyc chip because of its lower cost (2nd gen was already out), but I ultimately decided that I wanted PCIe 4.0 for future-proofing, plus the IPC and power efficiency improvements. So the Epyc 7402P was an easy choice. It’s not the biggest, baddest chip on the planet, but it’ll be more than enough for running a ton of Docker containers and VMs. It was also a lower-cost option because it’s a “P” SKU, meaning a single-socket-only CPU.

The choice of motherboard was easy. I’m a big fan of ASRock Rack stuff, and they have a platform specifically for Epyc Rome: the ROMED8-2T. Now, those who are in the know know that there are a number of dual-socket as well as smaller-form-factor single-socket ASRock Rack solutions for EPYC processors. ASRock Rack > Products. I knew that with all of the lanes that one chip provides and the number of cores I could get on a single CPU, I didn’t need a dual-socket solution like I would have with the Xeons of yesteryear. I also didn’t want too small a board that lacked PCIe expansion slots, because I wanted the ability to expand, do some passthrough, and maybe even try bifurcating out some lanes from a single slot. You know… all of the fun stuff that home-labbers want to try out.
Now, there is also the EPYCD8 board from ASRock Rack. While less expensive, the reasons I didn’t go with that one are that it’s PCIe 3.0 only, doesn’t have seven x16 PCIe slots, and, very importantly, doesn’t have dual Intel 10GbE Ethernet built in. Once I started the build, I did run into a weird issue between the front panel connectors on the ROMED8-2T and this case, but I’ll get to that a little later.

Memory choice was pretty easy too. The Kingston KSM32RD8/32MER are awesome 32GB registered ECC DDR4 DIMMs at 3200MHz, and they’re compatible with the ROMED8-2T.
https://www.provantage.com/kingston-technology-ksm32rd8-32mer~7KIN93T4.htm

PSU is a Corsair HX1200 1200W 80 Plus Platinum power supply. I went with a Platinum PSU because this will be an always-on system and efficiency is important to me. It is unfortunately also one of the PSUs from the Corsair recall https://www.tomshardware.com/news/corsair-issues-hx1200-hx1200i-psu-recall I checked, and this unit does fall into the serial number range of the recalled units. However, it is working and delivering power to the board without issue, so I think I’ll leave it be unless it acts up.

Cooler is the Supermicro SNK-P0064AP4-EPYC SP3 cooler https://www.amazon.com/Supermicro-SNK-P0064AP4-EPYC-Socket-Brown/dp/B078ZJSM65 Now, why this and not the Noctua NH-U9 TR4-SP3, you might ask. Simple… airflow direction. The socket orientation on this board is such that the Noctua cooler blows side to side, whereas the Supermicro cooler is designed to move air front to back in this orientation. Everything else in the server case is front to back, so I figured this would be the best way to go. It may not perform as well as the Noctua can, but I’m not running a 32- or 64-core CPU with an exorbitant TDP, so I think it’ll suffice.

While I intend to add storage to this server as time goes on, I thought I’d start off with a 2TB Intel 660p NVMe M.2 as the main drive and a 2TB Samsung 860 EVO as a secondary drive for VM storage.

And that’s where I am thus far. Now to the Pictures. I didn’t take a whole lot of pictures but I know you guys want to see the hardware and for me to quit yapping.

The standoffs going in:

The 5mm hex driver from the iFixit kit drives in standoffs real nice.

The CPU Install. Finally another reason to whip out that awesome orange Threadripper Torque Driver.



Storage time. Just the NVMe for now.


Got the RAM and the cooler in and gave it a test turn-on. I didn’t snap a picture in time, but it POSTed about a minute after the initial boot. PSA: don’t panic and don’t let those long server boot times fool you… it is working. Just give it a little bit of time to train all 128GB of that RAM you installed. Oh, and the Samsung drive is just hanging out for the photo; it’s not attached to anything.

Those hot-swap fans in the fan wall I was talking about. They have some sort of repurposed serial connection screwed into the base of the chassis acting as a proprietary fan interface. It doesn’t bother me; everything is removable, so if I wanted to install some PWM fans in there I could do that in the future.

For future storage purposes (probably a small ZFS array), I added an iStarUSA BPN-340SS black 3x 5.25-inch to 4x 3.5-inch hot-swap drive bay to the front of the chassis. The open spot on the bottom is getting an IcyDock 3.5-inch to 2x 2.5-inch tool-less hot-swap bay for the Samsung SSD, plus another future SSD if needed.

Added the IcyDock 3.5-inch to 2x 2.5-inch tool-less hot-swap bay https://www.amazon.com/gp/product/B07F22926M/ and the Samsung 860 EVO 2TB, and wired up the SATA drives to the Mini-SAS HD connectors on board. I love breakout cables; they really make cable management clean and easy.


Now to a stupid little issue that I ran into while installing everything: the damn front panel headers. Because this board has PCIe slots galore and the entire purpose of the platform is to maximize connectivity, ASRock, in their infinite wisdom, did something smart: they angled the front panel connectors down 90 degrees. This gives clearance for PCIe components so that the connectors aren’t running into expansion cards. Unfortunately, in some cases (pun intended) this interferes with the side wall of the chassis itself, and that’s the issue I ran into. I was pretty pissed that this stupid issue had befallen me in the middle of trying to get everything built. So, ingenuity to the rescue. I happened to have some single-stack Dupont connector blocks from a Raspberry Pi project a long time ago and thought to myself, maybe these would work with some modification. I pulled out the flush cuts, a pair of flat pliers and those strips, turned the headers into right-angle connector blocks, then wired up the front panel connectors. It ain’t super pretty, but it works to turn on and reset the server, which is the main thing. You’ll notice the Kapton tape in the background on the chassis. That was out of an abundance of caution so I don’t get any weird shorting of pins in the future, especially as this server sits in a rack for years.



Reserved

Oh, my goodness this looks beautiful.

For me, a build is not complete unless there’s kapton tape inside.

There should be a sticker.

What’s it running for? You’ve said what you’re going to put on it and what’s going in it, but I’m interested in the why. Was it in the other thread?

Are you solving Big Problems? Are you running a Minecraft server? Are you running Microsoft SQL server for a really really really silly SSRS report?


Great questions. So it’s my foray into having an always-on virtualization server. I would like to begin running some self-hosted services, like my own Nextcloud instance, an always-on Pi-hole VM, a self-hosted knowledge base, a virtualized gaming system, and a Windows 10 VM (cuz you know I don’t want to run it on bare metal anymore). I would also like to get some home automation services and a home surveillance/NVR instance running. I’m getting into CAD design, so that might be something I run in a VM. I’m also trying to wrap my head around coding, because I have a few software ideas and I’d like to see if I can create something. To be honest, I don’t want to know everything that I want to run on it yet. It’s about the journey and experimenting with software, hardware and technology in general for me. Some will probably say that this machine is overkill for getting into home labbing and running a few services, and I agree, but I had the disposable income at the time and building is something I enjoy immensely.

The TL;DR is that it’s basically for self-hosted services, home labbing and fun. I’m not really planning on running any crazy databases or trying to develop the next big thing. Just serious fun.

I follow this guy on YouTube. Every week he comes out with a new video on a container or service to try out on Docker. So I do intend on having an Ubuntu Server VM set up with Docker and Portainer to stand things up and test them out. Probably one VM for running services and another just for testing… because, you know, I have the hardware horsepower to do so.
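For reference, the bootstrap for that Docker/Portainer VM would be something along these lines (a rough sketch based on the stock Ubuntu docker.io package and the Portainer CE image, not a final plan):

    sudo apt install docker.io                  # Docker from the Ubuntu repos
    sudo systemctl enable --now docker          # start it now and on every boot
    sudo docker volume create portainer_data    # persistent storage for Portainer itself
    sudo docker run -d --name portainer --restart=always \
        -p 9000:9000 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce                  # web UI ends up on port 9000

From there it’s just standing containers up through the Portainer UI as he covers them.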


That’s a great setup, and the solution for that connector at the edge of the case is great!


One thing to note is that Epyc Rome has an Infinity Fabric limit of 2933MHz. Matching the RAM speed to that should supposedly give slightly better latency for things that really need it; RAM set to 3200MHz is better for throughput.
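If you want to sanity-check what the DIMMs actually ended up running at versus their rated speed, dmidecode will show you (exact field names vary a bit between BIOS versions):

    sudo dmidecode -t memory | grep -i speed
    # typical output per DIMM:
    #   Speed: 3200 MT/s                      <- what the module is rated for
    #   Configured Memory Speed: 3200 MT/s    <- what the BIOS actually set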

If you do GPU passthrough to a VM, you’ll also probably want to figure out which PCIe slots map to which cores/CCX group, so a pinned VM has the closest access to the GPU. Bug me this weekend and I can dig up my notes on how to do that, but the short answer is use lspci (and maybe something else, I forget).
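From memory, the rough shape of it is something like this (the PCI address below is just a placeholder, and on a single-socket Rome board you may need to set NPS2/NPS4 in the BIOS before the OS shows more than one NUMA node):

    lspci | grep -i vga                              # find the GPU's PCI address
    cat /sys/bus/pci/devices/0000:41:00.0/numa_node  # placeholder address; which NUMA node that slot hangs off (-1 = no affinity reported)
    lscpu | grep -i numa                             # which CPU cores belong to each node, for pinning the VM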


Thank you for the input. I do intend on doing some passthrough, at least for the gaming VM, so I would appreciate any help on getting around those pitfalls.

I’ve only just mostly gotten the build together, so I haven’t done any OS/software installation yet. I did intend to follow this guide from @wendell for updating to a newer kernel on Proxmox to get better hardware and driver support for this platform. Did you do this in your case?


Proxmox is actually providing an updated kernel now for “testing”. It’s planned to be the default for version 7, I think.

We just uploaded a 5.11 kernel into our pvetest repository. The 5.4 kernel is still the default on Proxmox VE 6.x series, 5.11 is an option.

How to install:

  • enable the pvetest repository (pvetest is not recommended for production setups); see the sketch after this list
  • apt install pve-kernel-5.11
  • reboot
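Concretely, on a PVE 6.x (Buster-based) install that looks something like this (the repo line is from memory, so double-check it against the Proxmox wiki):

    echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
    apt update
    apt install pve-kernel-5.11
    reboot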

Edit: coldfire7’s instructions are a bit better.

That said, Wendell’s post is still great if you want to try future kernels. I currently use the Wendell method, and may continue to do so, haven’t decided.


That certainly streamlines the process a bit. :+1:


A little update:

Finally got around to racking the server. It’s at home in my rack with my FreeNAS server.

Also, I was quite surprised to find a deal on some WD 8TB Red Plus drives on Newegg. Especially in this climate, they weren’t the cheapest I’ve ever seen them, but they were reasonable. They’re in the trayless hot-swap bays and will be my ZFS storage array for Proxmox.
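I haven’t locked in the pool layout yet, but assuming all four trayless bays end up populated with these drives, a single RAIDZ1 vdev is the likely shape of it. Roughly this (the pool name and device paths are just placeholders; I’d use the real /dev/disk/by-id entries):

    # placeholder device names - substitute the actual /dev/disk/by-id entries for the Reds
    zpool create -o ashift=12 tank raidz1 \
        /dev/disk/by-id/ata-WD_RED_PLUS_8TB_1 \
        /dev/disk/by-id/ata-WD_RED_PLUS_8TB_2 \
        /dev/disk/by-id/ata-WD_RED_PLUS_8TB_3 \
        /dev/disk/by-id/ata-WD_RED_PLUS_8TB_4
    zfs set compression=lz4 tank    # cheap win for VM and bulk storage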

