AMD EPYC Milan Workstation Questions

@udmh I’m starting to “feel at home” with my new EPYC machine so I thought I’d go back to your original post and try and answer as many of your questions as I can.

Regarding virtualization: the motherboard's BIOS exposes all the relevant settings, including SR-IOV. I've only used VMware Workstation on Windows 10, but there's nothing special to report.

The KSM32RD4/32MEI sticks are as nicely generic as you could want: nothing fancy, they don't get hot, and I haven't had the slightest issue with them. They're good value, I'd say.

You are right to want to use a PCIe card for NVMe drives: the placement and routing of the on-board M.2 sockets isn't ideal. Heatsinks would definitely be required, and those would interfere with at least three of the PCIe slots. A bad idea if you want to use long GPUs directly on the motherboard.

Speaking of which, you'll need to be extra careful if you're building into a tower case: those PCIe slots are not built to support heavy GPUs at all. Make sure to tighten those I/O brackets, or find a more "creative" way like I did.

Also, keep in mind that if your GPUs cover the X550's heatsink, you will definitely end up with thermal throttling issues. That heatsink is barely sufficient on an unpopulated board; I think ASRock could have done a better job. I doubt it would see much air even in a bona fide server chassis with high-RPM fans.

Speaking of the downsides mentioned by @wendell: I didn't run into most of them. The one that really got me in the feels is the total lack of ACPI S3 and S4 support. I got an e-mail back from ASRock saying they have no plans to support it. Which makes sense for server hardware, but still, it would have made this board the perfect workstation board.

However, it's not necessarily a big issue. I've been using it for a while now, and if you leave the machine alone (as in: turn off the monitor and go to bed) it only draws 65 W. What's more, at that load the PSU fan doesn't spin, and neither do the fans on an RTX 3090 FE. If you can live with the extra cost a constant 65 W draw represents, you may not need to go for ThreadRipper Pro.
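
To put that 65 W in perspective, here's a quick back-of-the-envelope calculation. The idle draw is the figure measured above; the electricity price and the idle hours per day are made-up placeholders you'd substitute with your own numbers.

```python
# Rough yearly cost of leaving the machine idling at 65 W instead of
# shutting it down (since S3/S4 suspend isn't available on this board).
# Price and idle hours are hypothetical placeholders.

IDLE_DRAW_W = 65          # measured idle draw from the post above
IDLE_HOURS_PER_DAY = 14   # hypothetical: nights plus idle daytime
PRICE_PER_KWH = 0.25      # hypothetical electricity price per kWh

kwh_per_year = IDLE_DRAW_W / 1000 * IDLE_HOURS_PER_DAY * 365
print(f"~{kwh_per_year:.0f} kWh per year, "
      f"about {kwh_per_year * PRICE_PER_KWH:.0f} in electricity")
```

At those assumed numbers it works out to roughly 330 kWh a year: real money, but hardly a dealbreaker.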

Perhaps another issue is that the IPMI's remote KVM can't show you the desktop unless its own VGA output is part of your monitor setup. This comes with its own set of annoyances: that port is not 4K, obviously, and if you use a KVM switch, Windows will mess up your windows' size and placement any time you switch away from the EPYC workstation. I've had to disable that VGA adapter in the BIOS (by selecting "external GPU" instead of "auto").

I don't have any AMD GPU so I can't test that reset bug you mentioned, but I've had no issues with Nvidia cards.

All in all, the machine you're envisioning is a very viable workstation. It's as fast as you'd imagine a 16-core / 128 GB / NVMe platform to be.

As for me, I think I'll keep it as my workstation until ThreadRipper Pro is a little easier to get my hands on. I originally went with EPYC mainly because it's cheaper and the motherboards are smaller, but that Gigabyte WRX80 motherboard looks like it does at least everything the ROMED8-2T does, and it's not too much bigger. We shall see.

If you have any questions, don't hesitate to ask.

Really weird to get PCIe / memory channel interference!

Did you test the GTX 460 in the mischievous PCIe slot? What was the result?

Looks like the perfect thing for my mountain hikes in late August. I've always wondered how many warm-blooded (hopefully) creatures are watching me from nearer than I think :slight_smile:

Wow, didn’t know such cases existed! That was one of the ideas I dreamed about in the distant past when I wanted to run Linux and Windows in parallel, before GPU passthrough was a thing…

I fear something similar for the Broadcom chip on Supermicro's H12SSL boards (the 10GbE versions); their heatsinks are in roughly the same position. Actually, I'm looking more at the 1GbE versions lately, as I can always add a PCIe NIC later as needed.

Is it the hibernate issue that makes you consider TR Pro at this point? Or will you rather wait and see what shows up in terms of TR Pro / Zen3 within the next year? I can see how you're no longer in a hurry if you're satisfied with Rome! For me, the lack of definite knowledge on when/if Zen3 for sWRX8 will come, and at what price, is part of the equation. If I choose e.g. a 3955WX now, I'd like to know there's a Zen3 upgrade path at some future point. I kind of expect there to be one, but there would be the matter of price and gains.

I might also end up waiting until September or so to decide, as there are a couple of summer months when I won't have much time for my rigs.

AM4 and SP3 have been around for a while and are long in the tooth. There are rumors of a Zen3+ "refresh" being planned, but nothing is actually confirmed, so I'm not holding my breath. Zen4 will almost certainly mean new sockets (AM5).

It kinda depends on whether AMD can get DDR5 and PCIe 5.0 working as quickly as expected on Zen4. I will be surprised if PCIe 5.0 doesn't have some bad growing pains. If it's a nightmare, we might see a refresh of a refresh, or a weird Zen3+ on AM5 situation with all the drama that comes along with that.

If you can find what you need at a decent price, I’d personally just pull the trigger and not play the waiting game.

It’s been a long day, guys… very busy between software installs on the new workstation and dataset replication from old NAS to new NAS. I haven’t had much time to play with the hardware. Sorry about that.

Before I go to bed, I just wanted to let you know that ASRock has no idea what could be causing PCIe slot 1 to run at x8 instead of x16, and has taken refuge behind the cheap "this board isn't meant to run Windows 10" excuse…
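
For anyone who wants to check this on their own board: under Linux (e.g. from a live USB), the kernel exposes each device's negotiated versus maximum link width in sysfs, so it's easy to read out. A minimal sketch, assuming the standard sysfs paths; note that many GPUs legitimately downtrain their link at idle to save power, so check under load before blaming the slot.

```python
# List each PCIe device's negotiated link width/speed next to its maximum,
# using the standard sysfs attributes exposed by the Linux kernel.
# Requires a Linux boot; not every device populates these files.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_w = (dev / "current_link_width").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
        cur_s = (dev / "current_link_speed").read_text().strip()
        max_s = (dev / "max_link_speed").read_text().strip()
    except OSError:
        continue  # no PCIe link attributes for this device
    note = "  <-- downtrained" if cur_w != max_w else ""
    print(f"{dev.name}: x{cur_w} @ {cur_s} (max x{max_w} @ {max_s}){note}")
```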

PCIe slot stability and case choice [Milan option]

I’m making a separate post about this, it will be a wall of text.

TL;DR: Upside-down motherboard mounts seem better suited for heavy video cards, given weak (server-motherboard) PCIe slots.

The final platform-related issue I see with Milan (vs. TR Pro) is the one @Nefastor pointed to: the stability of PCIe slots.

I had not considered this at all before, and it is definitely an issue that needs to be solved. Supermicro is no better than ASRock or others in this department.

I am not so keen on vertical-mount solutions such as @Nefastor's, as I need at least two cards, I don't want to shadow any slots, and I don't want an overly big case for an ATX board. I'd prefer the cards to sit in the motherboard slots, together with various USB cards and other simple stuff for passthrough, making up for the lack of a southbridge (it will be like the olden days: one card in almost every slot for various mundane functions, plus graphics :slight_smile: ).

I would add that I seldom prioritize GPUs in my rigs, which also means I know comparatively little about them. I don't track the market, wattage, or what different performance classes require. So my knowledge and expectations are probably off at times, and I'm happy for input.

GPU requirements
On the positive side, I don't need absolute top-tier GPU cards. For now I'm thinking of:

  1. Lower-tier Radeon Pro WS card (2100-4100) for the Linux host (like @udmh)
  2. Mid-range Nvidia (2060, 3060, non-OC) for the Windows guest

(the Nvidia card will come in a year or so, when the market has hopefully cleared up)

I don't worry about the Radeon card. But even mid-range Nvidia cards get quite heavy heatsinks nowadays. So, how much of an issue will it be? How limited in terms of GPU model would I be?

Mitigation

I have two main strategies.

  1. Avoid the longest and heaviest cards. It stands to reason that a longer card will put more stress on the slot than a shorter one, even at the same weight, given how the card is fastened. That is, barring cards long enough to reach a support structure at the front of the chassis; but non-OEM cases never have those nowadays.

  2. (and this is the more advanced idea) Use a reverse-mount computer case: one where the PCIe slots are at the top, and the CPU and memory are at the bottom.

The reasoning behind (2) is the orientation of the GPU's mounting bracket. In the normal orientation, the bracket's fastening point is below the card's center of gravity. This makes the card pull away from the back wall and puts a lot of stress on the far end of the PCIe slot. In the upside-down orientation, however, the fastening point is above the center of gravity. This makes the card press against the back wall, absorbing some of the force that would otherwise have stressed the PCIe slot.

In this calculation I take into account that there are usually two fastening screws. Even though one is nearer the card's base, the "average" fastening point (across the different screws) sits closer to the middle of the mounting bracket.
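
To put rough numbers on that lever-arm argument, here's a toy moment calculation. Every figure in it (card mass, distance from bracket to center of gravity) is a made-up placeholder, not a measurement of any real card.

```python
# Toy estimate of the moment a heavy GPU exerts about its mounting bracket.
# All numbers are hypothetical placeholders.

G = 9.81                 # gravitational acceleration, m/s^2

card_mass_kg = 1.5       # hypothetical heavy GPU
cog_offset_m = 0.14      # horizontal distance, bracket to center of gravity

weight_n = card_mass_kg * G
moment_nm = weight_n * cog_offset_m
print(f"Weight: {weight_n:.1f} N, moment about the bracket: {moment_nm:.2f} N*m")

# Normal mount: the fastening point sits below the center of gravity, so this
# moment rotates the card away from the back wall and levers the far end of
# the PCIe slot. Inverted mount: the fastening point sits above the center of
# gravity, so the same moment presses the bracket against the back wall and
# the case absorbs part of the load instead of the slot.
```

Whether roughly 2 N·m is a problem depends entirely on the slot's construction, which is exactly the uncertainty being discussed here.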

This solution leaves me with fewer case choices; however, there are definitely viable options out there. The be quiet! Silent Base 802 and the Seasonic Syncro both seem to fit the bill.

Mounting details
I would get a case with a top mesh (like the two mentioned above), and then put the larger and hotter (Nvidia) GPU in the PCIe slot farthest from the CPU. Its fans will face the mesh, where I'd add a suitable number of intake fans blowing down. That way the stronger GPU gets its own cooling zone, largely independent of the rest of the case (I'll aim for a blower-style cooler first, a regular one second). The rest of the case, the CPU, and the other cards would be cooled by front-mounted fans plus a single exhaust fan. This would help maintain positive air pressure in the case, giving me control over where the dust goes.

This seems to require a front-to-back-blowing CPU cooler, which leaves me with something like Supermicro's noise monsters until Noctua or be quiet! gets their act together and invents something suitable. But it would work with a front radiator too, if that turns out to be needed.

I really don’t know whether this sufficiently solves the issue. Does it sound like a viable mitigation of the GPU weight problem?

I definitely agree with that! I think my concern is that I want to end the DDR4 era on Zen3, so I would not get TR Pro unless I knew I could replace the CPU with a Zen3 part at some point (if I knew I could, upgrading in a couple of years would be perfect). These rigs live long for me, at least judging by past experience.

That makes Milan the clearer path, as there I kind of know what I will get. However, that implies either waiting a couple of months for the P-series to become available, or paying the early-bird tax for the 7313 (non-P). (Or getting a cheap 8-core Rome as a placeholder for now.)

(My main reason to speed up the process is that I have the most time to build and set up the rig during May-June. The "need" issues will surface in the fall, but not yet.)

@Nefastor of course, take your time! Looking forward to the next update, whenever it’s time :+1:

If you're handy with Blender and 3D printing, you can make a fan mount right above the VRMs and have the mount's attachment points be the ATX screws that hold in the motherboard.

Indeed. A laser cutter and some acrylic can also do the job. I did that recently to integrate an SoC motherboard into a CNC machine tool. I'll try to find the photos to give you a clearer picture of what I mean. If you have access to a fablab or hackerspace, you should have no trouble finding a laser cutter.

I couldn't find photos of the complete system, but this one should be enough:

This is an ITX board. I laser-cut two acrylic plates for it: one to adapt the ITX mounting-hole pattern to the mounting holes in the electrical box, and one to go over the board and carry a fan just above the CPU heatsink (while also providing mounting points for other hardware). The "fan mount" has the same ITX hole pattern as the motherboard and attaches to it using simple nylon risers. It's a much simpler alternative to 3D printing. Here's the first plate screwed to the bottom of the electrical box:

The whole installation is rated IP64.

Runs Windows 10.

I'm partial to twist ties and zip ties for mounting fans, but you guys seem to have more resources available.

Is there no hackerspace near where you live?

Intel's 28-core 5 GHz demo engineer: "Hold my beer!"

LOL, is that the one with the cryo cooler that eats 750 W on its own? :rofl:

Well, it had a 1500 W chiller attached :slight_smile:

BFE Kentucky; we have meth heads instead.

Ah, I see. I live in "communist Europe" :crazy_face: no "war on drugs", and environmentalism means lots of corporations gift their old machine tools to various hacker communities because it's easier than disposing of them legally. Cheaper, too. You wouldn't believe the stuff we get.

I don't know where you stand on environmentalism, and I certainly don't want to derail this thread with politics, but this is one of the less obvious aspects of "going green": less waste means lots of stuff gets a second life, usually in the community. And then we close the loop by helping the community: since the plague hit, we've used the laser cutter and other machines to produce face masks for medical professionals.

I hope someone starts one near you. Maybe you could start one? :wink:

That is quite the collection.

Your reply is made all the funnier by your title (Garbage Collector): a lot of the "donations" we receive are quasi-junk, and almost all of it requires enthusiastic retrofitting and oceans of elbow grease to turn from "garbage" back into a "useful, productive member of society" :grinning:

(And yes, I do suppose you were referring to a process that is part of a lot of programming languages, not to the actual collection of physical garbage, but it was still funny to me.)

You got it backwards. While Garbage Collector is a thing in CS, in my case it refers to me giving old tech a home (I don't think I've posted my small collection anywhere here yet).

That's awesome! I'm a fervent believer in maintaining the history of technology, especially as it grows more fragile with every generation. And in upcycling. Do you collect just for the pleasure of it, or do you use your collection?