
AMD Epyc Milan Workstation Questions

Hey all. I am currently planning a new workstation build to replace my faithful E3-1240v3 system (hopefully at the start of the summer). I mainly use the system for scientific research (simulations of quantum systems, some commercial RF simulation tools, and standard stuff like web browsing and document preparation).

One of my major goals for the new PC is to move full-time to Linux, with the ability to run other OSs in virtual machines. In particular, I will want a Windows VM with GPU passthrough for some CAD software, which I would swap back and forth with a Linux VM with GPU passthrough for GPU compute. I will either use Fedora as the host OS, or use Debian as a hypervisor and run my main Fedora desktop as a VM on my second GPU.
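For that swap-back-and-forth setup, the usual first step is to bind the passthrough GPU to vfio-pci at boot so the host driver never claims it. A minimal sketch; the vendor:device IDs below are examples (they're the pair commonly reported for the Radeon VII and its audio function), so read yours from `lspci -nn` before using anything like this:

```shell
# Emit the modprobe.d line that claims the given PCI IDs for vfio-pci
# at boot, so the host driver never grabs the passthrough GPU.
# The IDs below are examples -- read yours from `lspci -nn` first.
vfio_ids() {
    printf 'options vfio-pci ids=%s\n' "$(IFS=,; echo "$*")"
}
vfio_ids 1002:66af 1002:ab20
# -> options vfio-pci ids=1002:66af,1002:ab20
# (redirect that into /etc/modprobe.d/vfio.conf and rebuild the initramfs)
```

Both PCI functions of the card (GPU and HDMI audio) have to go to the VM together, which is why the helper takes a list.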

After some memory issues in the past, I would like to use registered ECC memory in high capacities, so I think Threadripper Pro and Epyc Milan are my two choices. Based on what I perceive to be better virtualization support on Epyc (no chipset) and what I expect will be a long delay until we get a Milan TR Pro, I currently plan to go with something like:

  • AMD Epyc 7443P (24c)
  • Asrock Rack ROMED8-2T
  • 4x32 GB 3200 MHz ECC RDIMMs (Kingston KSM32RD4/32MEI; later upgraded to 8x)
  • Mix of M.2 NVMe drives in a quad card and maybe a Kioxia CM6-V U.3 drive or two
  • A USB controller card or two to assign to the various VMs

I already have the GPUs:

  • AMD Radeon VII (passed through)
  • AMD WX 3100 (for the normal desktop)

although I would love a prosumer version of the Instinct MI100.

In Wendell’s recent video on Threadripper Pro (Threadripper Pro: First Look at the ASUS Pro WS WRX80E-SAGE SE WIFI + 32 Core TR Pro 3975WX - YouTube), he mentioned that Epyc had some downsides for a workstation build like this. Does anyone know what he meant or have some advice for my build?

  • I guess one possible concern would be the reset bug on the Radeon VII.

I would probably go TR Pro just to have the support. (I also bumped your TL, so you should be able to link URLs now.)


Hi @udmh ! I just happen to be in a very similar position. I’m replacing a 7-year-old Core i7-5820K (X99 platform) used as a workstation, and after careful deliberation, EPYC Rome/Milan appears to be the only way to go. I’ve already bought the exact motherboard and RAM you’ve listed, and I will eventually be using the 7443P as well. I don’t know you (yet) but clearly we must be great minds, since we think alike :grin:

In my opinion, people who keep their workstations for many years should always consider server hardware, for a few reasons. One that is important to me is that a server motherboard contains no “flavor of the year” hardware. The Asrock Rack board may lack fancy bleeding-edge 20 Gb/s USB-C connectors, but guess what, it’s likely that this interface will be obsolete years before we retire our workstations. It’s been my experience that it’s much better to have a simpler board (fewer things to go wrong) and more PCIe slots, so you can make your machine evolve with the times.

Note that one of the interfaces you’ll be missing is audio because, well, servers don’t usually require Dolby Atmos surround sound and a clean microphone input. That is something you’ll need to buy separately, if you want to listen to Wendell tell you how you probably shouldn’t have used a server board in a workstation :wink:

Of course, airflow is a concern for a board that wasn’t meant to be installed vertically in a typical ATX tower case. You also need to worry about the PCIe slots. The Asrock comes with surface-mount plastic slots that have no reinforcement whatsoever. I’m working on a solution to both problems.

I will be reusing an RTX 3090 FE in this machine; mounted directly, it would not only strain the motherboard but also block a ton of slots, so I’m hoping a riser will work. That is, of course, a PCIe 4.0 extension. This is a “gamer” tower case, so the whole front can be filled with fans. I’ll be installing some 140 mm Noctuas, and this should hopefully keep everything cool and quiet.

I’m waiting on an EPYC 7282 to finish the build. I’ll post all the details on my own website but also here, if anyone’s interested. Most importantly, I intend to stress-test this machine to see if indeed there are any compatibility issues. I’ll keep you posted.


So glad you started this topic! I think I’m 50:50 split between TR Pro and a Milan 7443P on a ROMED8-2T :wink:

I have been running Linux VMs with FPGA and GPU passthrough on top of Proxmox/KVM for a couple of years now on my current Epyc 7281 box, and it’s pretty good, with a couple of small pain points:

Running in a VM, I now have two sets of kernels to worry about, and I have been stuck for a few days once or twice after an update broke PCIe passthrough.
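On the “update broke passthrough” pain point: a quick sanity check after every kernel/initramfs rebuild is to confirm the device is still bound to vfio-pci and hasn’t been grabbed back by the host driver. A hedged sketch (the PCI address is a made-up example; the sysfs layout is standard Linux):

```shell
# Report which driver a PCI device is bound to, and whether it matches
# what we expect. Takes the sysfs base dir as an argument so it is easy
# to test; on a real system that is /sys/bus/pci/devices.
check_driver() {  # check_driver SYSFS_BASE PCI_ADDR EXPECTED
    link=$(readlink "$1/$2/driver" 2>/dev/null)
    got=${link##*/}
    if [ "$got" = "$3" ]; then
        echo "$2: bound to $3 (ok)"
    else
        echo "$2: bound to ${got:-nothing}, expected $3"
    fi
}
# Usage on a real host (0000:0a:00.0 is a placeholder address):
check_driver /sys/bus/pci/devices 0000:0a:00.0 vfio-pci
```

Running that from a post-update hook (or just by hand before starting the VM) turns a few days of head-scratching into a one-line diagnosis.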

GPU performance on my CentOS 7 VMs seems a lot worse than on bare metal, and is also a lot lumpier when running benchmarks etc. The OpenCL performance seems pretty good though - I did a fair bit of [email protected] last year.

Unfortunately I also don’t think my home-brew server case has enough airflow - for the I/O cards, not the CPU/VRM. I think my next build will be in either a rack-mountable wall-of-fans 4U case (e.g. a Supermicro tower chassis with a rack-kit) or a super breezy, but totally deafening 2U chassis, probably from Gigabyte.

Let me know how this goes. I’m doing basically your build on desktop parts, because 1000 bucks (got half the shit free or as a gift). Only I have a GRID K2 that my board can’t read or use, lol.

Will be watching.

Looks like a nice build. I did consider the 7282 as an upgrade to my 7281, but IMHO it’s still a bit pedestrian at 3.2 GHz boost, unless your applications are really heavily multi-threaded, in which case it will fly.

Also, you might find that you don’t have much airflow with big, slow Noctuas - I have trouble getting air out the back of my FPGAs/GPUs without a nasty high-static-pressure 10K+ rpm, 20 W server fan in front of them. I have seen a few more ear-friendly workstation cases with puller fans that go on the outside - not sure how you’d plug in any cables though!


Indeed, I’ve experienced that on some builds, and I’m also worried about cooling PCIe cards. That’s why I bought that Enthoo Pro 2 case: as you can see in my photo, it has a triple vertical PCIe slot above the normal PCIe slots. If I have cooling issues, I’ll place a fan bracket there, pointing straight at the PCIe cards.

I really want to see how quiet I can make this machine.

As for the 7282, well, there’s a “story” behind that choice. I actually need to build two different EPYC machines: the second one is a file server that may also run VMs. I’ve never used EPYC before, so I’m going to first evaluate it in this workstation build and then decide if I want to use it for my server. It may end up upgraded to a Milan in the near future; that will mostly depend on availability (which is not a given in these “troubled times”).

EDIT: I’m new to this forum; it seems I should have hit “reply” instead of directly quoting you… sorry about that.


I think I’m too far down the noisy route now - my UPS runs at 51 dB! But silence is indeed golden! In a “real” server the airflow is often managed with baffles - as well as fans, you might want to ensure that the air can only go across the hot cards and out of the case.

I think you’ll be happy with a 7282 for VMs - I struggle to keep my 7281 busy even with Jenkins running jobs for me on my half-dozen VMs, but for single-threaded work it could be better. A make -j32 will however make you smile, assuming your storage is up to it!


Yeah I know what you mean, at some point I also decided that I could live with a jet engine of a PC as long as my Bose noise-cancelling headphones had power. But now I’m working from home, things are different, I want to enjoy my nice Philips aluminum cone loudspeakers :grin:

Besides, I do have a couple of secret weapons: a FLIR camera for my smartphone to help me pinpoint hot spots, and a 3D printer to make some baffles and fan mounts. I’ll probably end up with an ad-hoc cooling solution that’s the perfect balance between temperature and noise. At least that’s the plan.


Thanks for this @udmh. This is exactly my current question too: Milan vs. TR Pro for a workstation in the near future. To me as well, Milan seems the best choice currently. Milan’s relatively high boost clock speeds were an interesting surprise; I had expected lower speeds across the board. Given the IPC gains of Zen3 over Zen2, the 4 GHz boost of the 7443P, and even the 3.7 GHz boost of the 7313(/P), they seem like reasonable alternatives to the current (Zen2-based) TR Pro. (I am considering both the 3955WX and the 7313; I’m after the PCIe lanes more than the cores, though I feel 16 cores is the minimum.)

@Nefastor Good point about the lack of slot reinforcement on server boards; I had not thought of that aspect at all. I won’t get the heaviest GPUs, so it will probably be OK for me - but it’s an important point, especially as consumer chassis seldom have the “rails” in the front to support GPUs that OEM workstation chassis often have.

How do you guys reason about CPU cooling, air or liquid? I always preferred air, but it seems hard to find good air coolers with the correct orientation for front-to-back airflow (this problem is the same for Milan and TR Pro, as both sockets are oriented differently from regular TR). I like the air being able to flow along the length of the CPU, which seems better in terms of heatpipe placement. Still, most workstation-friendly air coolers that fit are built for regular TR, and thus have to blow upwards, forcing the airstream through a weird angle. I see people recommend Supermicro coolers that blow front-to-back, but they are meant for servers and have fan speeds around 3800 rpm - too loud for a workstation, it seems.

Here are some additional issues and decisions I’ve been thinking about, I don’t know which will turn out important or not.

  • Spec vs. realistic boost speeds: OK, the 7443P boosts to 4 GHz, but will it run that high in relevant scenarios? Epyc since Rome has, afaik, no such thing as an all-core boost; instead, it uses the entire range between base and boost dynamically. What speeds would we see with, say, 8 of the cores (a third) loaded? I have seen no discussion of core speeds versus number of loaded cores in the benchmarks we have seen so far.

  • Upgrade paths. It seems more likely that TR Pro motherboards will swallow a future Zen3-based TR Pro than that we get to see a Zen3+ (or whatever) Epyc for SP3. So while Epyc seems better now, next-gen TR Pro will probably outcompete Milan in 1T performance. But that assumes one bothers upgrading, and I’m not sure about that given what these CPUs tend to cost.

  • Availability. I’m told that most single-socket (“P”) models are a couple of months away, at least here in Scandinavia. While a smaller range of models is available “now”, such as the 7313 (non-P), I’m not aware that anyone who is not a server OEM has started getting them.

  • Motherboard. Any thoughts on Supermicro H12SSL-NT? I had considered that one, but then you guys mention the Asrock Rack board that I have so far not looked closer at.
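On the spec-vs-realistic-boost question above, one low-tech way to answer it on your own machine is to sample the kernel’s reported per-core frequency while a partial load (say 8 threads) is running. This is just a sketch of the idea; turbostat or cpupower frequency-info give much better data if you have them installed:

```shell
# Print the N highest per-core clocks the kernel currently reports
# (field 4 of the "cpu MHz" lines in /proc/cpuinfo). Run it while
# your 8-thread job is active to see real, not spec-sheet, boost.
top_clocks() {
    grep -h '^cpu MHz' /proc/cpuinfo | sort -k4 -nr | head -n "${1:-8}"
}
top_clocks 8
```

Sampling this once a second during a run would give a rough picture of how far up the base-to-boost range a 7443P actually sits with a third of its cores loaded.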

I could add that I’m planning to go with some Linux-friendly AMD card for the host system that will run Linux, and then run one main work/play Windows VM with an Nvidia card assigned. I have fairly modest demands in terms of GPUs, so I will run on old stuff while waiting for better days, and then get something mid-range.

This is about how far my plans have progressed. I’m leaning a lot towards Epyc for cost, architectural and tdp reasons.


I’ve got the Noctua 3U SP3 cooler on my 7281. It does indeed have fans facing at 90 degrees to the front->back case fan airflow, but doesn’t seem to cause any issues - assuming your DIMMs aren’t super tall.

I’m half tempted to go water-cooled, but I do have one worry about water cooling: if your CPU is nice and cool, what are you going to use to regulate the case airflow that’s vital for the VRMs etc.? I guess you could set up some fancy IPMI script to read the board temp sensors, but the BIOS fan control on my H11DSi is very basic.
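For the “fancy IPMI script” idea, the usual shape is: read a temperature with ipmitool, map it to a fan duty cycle, and push the duty back to the BMC with a vendor-specific raw command. A hedged sketch only; the sensor name and raw bytes below are placeholders (every BMC does this differently, and sending wrong raw bytes can cause real harm), so check your board’s documentation first:

```shell
# Map a board temperature (degrees C) to a fan duty cycle (percent).
# Thresholds are arbitrary example values; tune them to your build.
duty_for_temp() {
    t=$1
    if   [ "$t" -lt 40 ]; then echo 25
    elif [ "$t" -lt 55 ]; then echo 50
    elif [ "$t" -lt 70 ]; then echo 75
    else                       echo 100
    fi
}

# Example control loop (commented out: needs a real BMC, and the
# `ipmitool raw` bytes are placeholders, NOT a real command):
# while sleep 10; do
#     temp=$(ipmitool sdr get 'MB Temp' | awk -F': *' '/Sensor Reading/ {print int($2)}')
#     ipmitool raw 0x00 0x00 "$(duty_for_temp "$temp")"   # placeholder bytes!
# done

duty_for_temp 60   # -> 75
```

The step curve (rather than a linear map) keeps the fans from hunting every time the temperature wiggles by a degree.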

As for motherboard choice - I have no issues with my Supermicro H11DSi - except that I shouldn’t have bought a two-socket board: I’ve still got an empty socket, but would much rather have 7 PCIe slots. Oh, and mine came without the OCuLink connectors fitted, so I’ve never had U.2 support.

Personally I’m also after some motherboard U.2/U.3 support, as I’d rather have those than M.2s. I don’t care about 10GBase-T; I’d rather have SFP+ or OCP.

But if you want lots of PCIe slots for GPUs, you have to be careful that your CPU socket isn’t behind any of them, otherwise you’ll be limited to half-length cards which rules out a lot of (non-server) GPUs.


Regarding the Asrock’s PCIe slots: they don’t even have a locking mechanism, or pegs going through the circuit board. My advice, if you have to use heavy PCIe cards, is to mount the board horizontally.

I’ve looked at all the Rome boards, and if you are after PCIe lanes, the Asrock is the best choice. You should download its manual; it has a nice schematic. Basically, it’s the only board I’ve seen where you can actually use seven x16 cards, which is 112 lanes out of the EPYC’s 128. The remainder is used by one of the NVMe slots, some of the SATA ports, the on-board 10 Gb NIC and the IPMI.

I’m an air-cooled guy. My current rig uses an AIO water cooler which, thankfully, has never had any sort of problem. But here’s the thing: that rig’s second career (like all my old machines) will be as a server, and I don’t think I can (or should) fit a water cooler inside a 19" rack. All in all, water and electronics don’t really mix. And EPYCs have lower TDPs than Threadripper, so there’s less heat to dissipate.

I’m planning on using a Noctua NH-U14S TR4-SP3: the heatsink itself is so narrow it’ll easily fit between the DIMM banks. However, I’m replacing the original fan (a weird 140 x 25 mm) with a 120 x 15 mm one, so I can still access all the DIMMs once it’s mounted. This cooler is rated for 280 W chips, so it should be good for EPYC even with a smaller fan. I’ll let you know.

As you can take the fans off the cooler, I can still access the inner DIMMs on my board with the Noctua TR4-SP3 cooler. You can fit the DIMMs and then clip the fans back on.


Yes indeed, but if there’s a chance I don’t have to mess with my cooler once it’s installed, I’ll take it. I can be lazy sometimes :sweat_smile:

@oegat - I don’t think there will be an SP3 upgrade beyond Milan - Genoa is going to be DDR5 + more goodies. If you’re worried about upgrades, you might find it better to buy a big-name box that’ll be easier to resell when you want to upgrade. Those used Dell R7515s don’t seem to have dropped much in price at all…


Mainboards and memory are the breaking points in the TR Pro vs. Epyc debate.

These server-esque boards have a lot of hardware omitted, seen as frivolous to their intended workflows. The likes of audio, [multiple] USB controllers, etc. are sometimes not even given soldering points → if they’re deemed necessary for your rig, that’s an expansion slot occupied for what is a menial bandwidth pull.

Servers expect ECC. Strong chance [if not certainty] that the mainboard producer ain’t gonna budge on opening up memory options [as @wendell put it, his Epyc chip would n-e-v-e-r POST when exposed to non-ECC memory].

Milan is definitely the swansong for socket SP3… a far longer ride compared to sTR4.

Good point about VRM cooling. Fan control is probably a part best not complicated; let the BIOS do its job, as you say. On the other hand, perhaps the idle airflow is enough for the VRMs, given that the CPU heat is moved out of the way. That’s an issue to keep in mind.

I agree with your point about two sockets; I made that mistake with my last workstation like this, a dual-socket G34 that would have been better with one. The single-core clock of the 6140s became a bottleneck long before the core count did :slight_smile:

About upgrades - right, your conclusion is pretty much mine too. I don’t actually see myself upgrading this rig; I tend to want to replace more than just the CPU when I finally decide to do it.


Why no love for ECC? Looking at my usual supplier, a 32 GB 3200 MHz DDR4 ECC server DIMM is £133, while 32 GB of non-ECC consumer stuff is more like £180-199. So I get 72 bits for less than the price of 64! :slight_smile:

Thanks for the update on the Asrock and PCIe lanes. Although I think most boards I’ve looked at will provide the lanes I’m likely to use.

The Noctua sounds like one of the best options currently, though I recently saw someone complain that the U14 would not allow full boost headroom on a regular TR 3960X, even with a second fan installed, in a Fractal Design 7XL (non-mesh-front, but otherwise reasonably cooled case). And that’s a 280 W TDP chip. But as you say, the relevant Epycs are lower TDP, so it should probably be fine! Do report your findings after testing; I’m definitely interested, thanks.

I have also been considering testing the more cube-shaped Noctua NH-U9 TR4-SP3 by putting the fans on the sides that are 90 degrees off, to get front-to-back airflow nevertheless, and comparing with the regular mounting. However, I strongly doubt my “solution” would be better than the standard one; I’m mostly curious.


Actually I think that might be the cooler I have - looks like it might flow, though you’d need new fan mounts.