AMD Epyc Milan Workstation Questions

I’m running one of the 4-channel-optimized parts at the moment: an EPYC 7252. I got it as a placeholder while waiting to order what will probably be a 7313P, which is a full-bandwidth part.

The 7252 has 2 CCDs, CCD3 and CCD5, both on the same half of the MCM (same side of the IO die), but on opposite sides of that half. So they should preferably use the 4 memory channels located on the same side as the CCDs. As a consequence of the layout, these CCDs cannot be configured as separate NUMA nodes (not that it matters for most use cases).

(@Log this is also the reason I haven’t gotten back to you about the PCIe slot <-> chip section mapping. I could not configure NPS > 1 on my current hardware, so I don’t know how to expose the topology.)
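For anyone unfamiliar with the NPS (NUMA-per-socket) BIOS option: it maps directly to the number of NUMA nodes the OS sees. A trivial sketch of the relation (the NPS values are the standard EPYC BIOS choices; the helper name is mine):

```python
# NPS (NUMA nodes Per Socket) is an EPYC BIOS option: 1, 2, or 4.
# The OS then sees sockets * nps NUMA nodes. On 4-channel-optimized
# parts like the 7252, NPS2/NPS4 are typically unavailable, so the
# whole socket stays a single node.

def numa_nodes(sockets: int, nps: int) -> int:
    """Number of NUMA nodes the OS should expose for a given NPS setting."""
    if nps not in (1, 2, 4):
        raise ValueError("EPYC supports NPS1, NPS2, and NPS4 only")
    return sockets * nps

# Single-socket 7252: only NPS1 works, so one node.
print(numa_nodes(1, 1))  # 1
# A dual-socket full-bandwidth part at NPS4 would show 8 nodes.
print(numa_nodes(2, 4))  # 8
```

On Linux, `numactl --hardware` or `lscpu` will show what the firmware actually exposed.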

3 Likes

I just read about this, and since I am not a server guy I am a bit confused about the best course of action for my 3D/DaVinci Resolve workstation.

For an AMD EPYC 7443P 2.85 GHz (24C/48T), what kind, size, and number of RAM sticks is the best option to start with? I am trying to keep it as cheap as possible, but I don’t want the CPU to run at half speed in Resolve or while 3D rendering. As I understand it, even with all channels fully populated, the difference in real-world applications is only a few percent - if I understand this correctly.

1 Like

I don’t know what kind of 3D rendering you’re doing, however I can tell you that Resolve won’t be a problem. I’m editing 4K ProRes files off my NAS with blazing speed, and Resolve 17 renders on the RTX 3090 at something like 3-4 times faster than real time. That’s for 4K video encoded in h.265. If you’re using the free version of Resolve you will be limited to CPU rendering, however on my 7282 the same type of video renders at close to real time. That’s with 4 sticks of PC3200 (the recommended ECC sticks mentioned in the mobo’s manual). With a 24 core chip, you shouldn’t feel limited unless you’re rendering feature films day in, day out.

Trust the Wendell, EPYC is a monster :grinning:

2 Likes

:slight_smile: Even now I am running 8K RED RAW and 6K BRAW at 1:1 on 6K timelines with full debayer on my dino-old Z800, all thanks to the RTX 3090, and I am running the Studio version with a dongle and a DeckLink 4K Mini out to a grading monitor. The main reason I am upgrading my OLD OLD OLD DINO Z800 with dual Xeons is that I feel the RTX 3090 is being held back maybe 10-15% by RAM speed. For 3D rendering in V-Ray or Redshift, for example, I don’t see any slowdowns, because once the rendering starts the CPUs and RAM are almost not used at all. I wanted to upgrade the Z800 to a Z840 for years, but then I just kept upgrading the GPU, adding a second power supply, and my Z800 kept growing like a monster outside the case. So now I feel it’s a good time to build a new workstation, and I like this ASRock board because I can see it growing with GPUs in a “mining rig” type situation along with the RAIDs I have; I think I can keep going with it for many, many years.

2 Likes

From my understanding, having 4 channels rather than 8 is only a bottleneck if your workload needs the larger memory bandwidth to saturate all cores. With a 7443P and 4 channels, for example, you would have 6 cores per channel - fewer cores per channel than a 77xx chip (64c) with 8 channels of memory.

4-channel “optimized” means the cheaper CPUs won’t get better bandwidth from populating more than 4 channels; it does not mean that the stronger CPUs are meaningfully bottlenecked by only 4 channels (unless the workload is very bandwidth-sensitive).
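To put rough numbers on the cores-per-channel argument, here is a back-of-the-envelope calculation (theoretical peak only; real sustained bandwidth is lower, and the function name is mine):

```python
# Theoretical peak DDR4 bandwidth: transfers/s * 8 bytes per 64-bit channel.
def peak_bw_gbs(channels: int, mts: int = 3200) -> float:
    """Peak memory bandwidth in GB/s for DDR4 running at `mts` MT/s."""
    return channels * mts * 8 / 1000  # MT/s * 8 B = MB/s -> GB/s

# 7443P (24 cores) on 4 channels of DDR4-3200:
bw4 = peak_bw_gbs(4)           # 102.4 GB/s total
print(bw4, bw4 / 24)           # ~4.27 GB/s per core

# A 64-core part on all 8 channels:
bw8 = peak_bw_gbs(8)           # 204.8 GB/s total
print(bw8, bw8 / 64)           # 3.2 GB/s per core
```

So even on “only” 4 channels, the 24-core part actually has more headroom per core than a fully populated 64-core config.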

3 Likes

Naturally, with this CPU I would render more in hybrid mode, and in CPU-only mode for certain things. Right now I am all GPU, and I know everything is going GPU for rendering anyway; as I see it, the CPU will mainly matter in After Effects and compositing situations. I guess I will go with 4 sticks of 32 GB at 3200 for now and then see where it goes.

1 Like

That sounds right. Milan runs Infinity Fabric at 1600 MHz, so DDR4-3200 is the optimal speed for Milan.
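The reason DDR4-3200 is the sweet spot: DDR transfers twice per clock, so 3200 MT/s means a 1600 MHz memory clock, which lines up 1:1 with Milan’s 1600 MHz fabric clock. A quick sanity check (the constant and helper are mine, for illustration):

```python
# DDR is double data rate: effective MT/s = 2 * memory clock (MHz).
def memclk_mhz(mts: int) -> float:
    """Memory clock in MHz for a DDR module rated at `mts` MT/s."""
    return mts / 2

FCLK_MILAN = 1600  # Milan's coupled Infinity Fabric clock, MHz

for mts in (2666, 2933, 3200):
    mclk = memclk_mhz(mts)
    coupled = mclk <= FCLK_MILAN
    print(f"DDR4-{mts}: MEMCLK {mclk:.1f} MHz, 1:1 with FCLK: {coupled}")
# DDR4-3200 gives MEMCLK = 1600 MHz, exactly matching FCLK, so the
# fabric and memory run in lockstep with no extra crossing latency.
```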

2 Likes

You might be on to something. Whenever I render videos GPU usage never goes to 100 %. I’m not sure why. I guessed it’s because I’m “only” doing 4K, leading to some inefficient use of the hardware, but it could also be a bandwidth issue. Especially since I keep my media pools on my NAS and am therefore limited to 1 GB/s on that path.

I’ll try again soon and see if I can maximize GPU usage. If I find anything useful I’ll let you know.

2 Likes

Looking at EPYC and the way it works, I feel this ASRock motherboard is like a “plugin” that lets us tap into all its powers :slight_smile: Down the road I can see 7x RTXs mounted on my studio wall like a sculpture :slight_smile: - why keep everything in a box? Times have changed - look at the layout, it begs for risers 1.5 m long spreading out like the octopus that it is :slight_smile:

2 Likes

Yes, it depends on the codecs you use and the speed of your drives. But to be honest, with BRAW from Blackmagic, for example, you can get away with a 400-500 MB/s RAID and an RTX 3090; I even ran 8K and 12K BRAW on 4K/6K timelines at 1:1. Everything is about the GPU - DaVinci is held back a bit by the CPU/RAM with compressed codecs, but with RAW there is almost no difference when I compare my benchmarks against those from people with 20K rigs. That is the main reason I kept putting off upgrading CPUs and RAM: everything was being shifted to the GPU for me, from 3D rendering to color grading to finishing. Even simulation now works on the GPU - that’s where the trends are. I guess these CPUs are now only for data centers, and for those of us who want to use legacy RAIDs for massive storage on the cheap.

1 Like

That’s exactly what I noticed. I’ve used a Core i7-5820K and an EPYC 7282 side by side, and when it came to Resolve there was no perceptible difference in responsiveness or rendering time. BTW, I shoot with a ZCAM E2-M4, and for my needs ZRAW makes no sense (just a waste of storage). That helped me make the decision to get a 6850K and extend the life of my old X99 machine.

1 Like

Yes, that is correct, but with ZRAW you could work even faster if you have the storage, and have better quality at the end - you can then get rid of that data when you finish the project and just keep the final uncompressed 444 and the stuff you “might” need. I wanted to go with the new Ryzen, but I can’t be so limited on PCIe lanes - it’s crazy limited in my book. EPYC, especially this P variant I am looking at, is just 300 EUR more, but then you get a massive number of PCIe lanes and a proper way to expand down the road. I honestly think this new build can last 10+ years, just like the Z800 did (and it still keeps going) - I could even get one more RTX 3090, have 2x GPUs in the Z800, and rock on for years. But I want a new platform for CPU rendering/VFX work and these fast SSDs for caching VFX work. 1000 MB/s is all I need, to be honest, unless I start with multi-cam work beyond 12K.

1 Like

The technical term is a “breakout board” and yes, that’s exactly what that Asrock board is. Personally I love it. I’ve said it before in this thread, I come from an era where motherboards were pretty feature-poor and you even had to buy discrete audio cards (Sound Blasters were in every single computer back then).

Meanwhile, today’s motherboards are full of “flavor of the day” interfaces that will become obsolete long before the motherboard is retired from service. But you’ll still have to feed them power for nothing, and they will still cause problems if they break. So when I saw the ASRock board, the choice was obvious. No chipset, just an IPMI and a NIC, and I get access to all the PCIe lanes? That’s perfect flexibility, and it should also prove very durable.

Actually, I’ve talked about that in this thread. There are potential complications in achieving something like this. Here’s the exact post. The FLIR photos really illustrate the problem:

Basically, I do not think you can feed enough power to the 3090 through the PCIe slot if you use a very long riser, and the 3090 does use all 75 W of power out of the slot in addition to the 12 V connectors. They would need to make risers with really heavy gauge wires for power, and that would be prohibitively expensive.
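To illustrate why long risers and slot power don’t mix, here is a rough voltage-drop estimate, assuming the slot’s 5.5 A on the 12 V rail has to travel through copper conductors of a given cross-section (the numbers are illustrative, not measurements of any specific riser):

```python
# Voltage drop on a riser's 12 V power conductors: V = I * R, with R from
# copper resistivity, wire length (out and back, so 2x), and cross-section.
RHO_CU = 1.68e-8  # ohm*m, copper resistivity at ~20 C

def vdrop(current_a: float, length_m: float, area_mm2: float) -> float:
    """Round-trip voltage drop over a copper pair of given cross-section."""
    r = RHO_CU * (2 * length_m) / (area_mm2 * 1e-6)
    return current_a * r

# PCIe x16 slot 12 V rail: up to 5.5 A. A 1.5 m riser with thin
# 0.5 mm^2 conductors:
print(vdrop(5.5, 1.5, 0.5))   # ~0.55 V lost, ~5% of 12 V
# Heavy 2.5 mm^2 wire (roughly 13 AWG) cuts that to ~0.11 V:
print(vdrop(5.5, 1.5, 2.5))
```

A 5% drop at full load is already outside comfortable territory for the 12 V rail, which is why a long riser carrying full slot power would need unusually heavy conductors.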

But if you have the money and some time, I happen to have the expertise to make it happen :wink:

2 Likes

Oh yes, that is what I wanted to ask you about: the power to the board, so I can be sure the PCIe slots don’t melt if I plug in 7 GPUs :slight_smile: Is the additional power connector good for that - what is the limit? I guess we can make these riser cables with their own power, like the powered risers for mining?

Well the motherboard is a real thick one, 12 or 16 layers, I’d say. And I have some FLIR imagery that indicates there’s a LOT of copper in there, definitely some beefy power planes. In addition, there’s an extra 6-pin PCIe 12V connector near the last PCIe slot that is specifically intended for bringing extra power when all PCIe slots are used at the same time. They really didn’t skimp, the price is justified despite how bare the board looks.

2 Likes

Yes, I think that power connector for the PCIe slots would be OK for GPUs - I used to run a mining farm, so I am sure it won’t be a huge problem to power 7 GPUs from this board with that 12 V connector and good-quality cables. What riser cable did you use in the rig I see on here? What was the price and length?

It’s this one:

Mine is 30 cm long. A bit expensive, but I wanted something built and rated for PCIe 4.0. There aren’t many options.

My case also came with a plastic mount for a PCIe riser: that part isn’t sold with the riser. I was lucky that the two were mechanically compatible, though assembling the whole shebang required some finesse.

I’ve verified with GPU-Z that the link does indeed run at PCIe 4.0 speed, and I’ve never had any issue, glitch, or otherwise. So, money well spent.
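For Linux users, the same check works without GPU-Z by reading sysfs (`/sys/bus/pci/devices/<addr>/current_link_speed`). The strings it reports map to generations roughly like this (a sketch, covering only the common rates; the helper name is mine):

```python
# PCIe per-lane signalling rates as reported by Linux sysfs,
# mapped to the corresponding PCIe generation.
SPEED_TO_GEN = {
    "2.5 GT/s PCIe": 1,
    "5.0 GT/s PCIe": 2,
    "8.0 GT/s PCIe": 3,
    "16.0 GT/s PCIe": 4,
    "32.0 GT/s PCIe": 5,
}

def pcie_gen(sysfs_speed: str) -> int:
    """Translate a sysfs current_link_speed string into a PCIe generation."""
    return SPEED_TO_GEN[sysfs_speed.strip()]

# A 3090 on a good riser should report gen 4 under load:
print(pcie_gen("16.0 GT/s PCIe"))  # 4
```

Note that idle GPUs often downshift to a lower rate to save power, so check while the card is actually busy.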

1 Like

Yes, that is quite expensive. I actually have one of these that I used without problems with a 2080 Ti, and now with the 3090 when I want to add another card that would be blocked by the first one. It is only 19 EUR - I don’t know how to post links on here, so look it up: Kolink Riser Cable PCIe x16 - x16 Mainboard. It has a Molex power connector you can use, but it’s only 19 cm :slight_smile: I will look for cheap solutions up to about 1 m.

Look up - Thermaltake PCIe x16 Extender 1m

If you have the budget for 7 GPUs, then wouldn’t you consider a chassis designed to take a shed-load of GPUs, like the beast Wendell reviewed: https://www.gigabyte.com/uk/Enterprise/GPU-Server/G482-Z54-rev-A00 ?

Otherwise, unless you really have no noise constraints, wouldn’t a simpler option be water-cooled GPUs? Without the heatsink/fans they’re only one slot wide, and you might find it easier to cool 2 kW+ of GPU with water…

1 Like