Threadripper (Pro) vs Unreal Engine. Epyc? Cook & package times

https://level1techs.com/node/2918

Amazing recent video

@wendell could you post up those results here?
I need to show them to some “budget holders”

I’ve worked on some smaller and some very large Unreal titles. Some thoughts:

Iterative compiles
Workstations and build nodes are never cleared of their intermediate build artifacts, so they are always doing iterative compiles. Only rare events force us to compile from scratch.
Number of C++ objects needing compilation per change from a dev: usually sub-100. The biggest holdup here is link time.
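For reference, a typical iterative editor build goes through UnrealBuildTool rather than a full UAT run. A minimal sketch on Windows, assuming a hypothetical project named MyProject:

[UE4Root]/Engine/Build/BatchFiles/Build.bat MyProjectEditor Win64 Development -project=MyProject.uproject -waitmutex

UBT only recompiles what changed, which is why the link step dominates per-change times.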

Clang
Sony consoles use the Clang compiler, and so does the Switch (ARM). If you were blessed by the relevant authorities with SDK/compiler access for these, the numbers would be interesting.
Unreal can also target native Linux (the path less trodden).
Personally I’ve not seen any major difference between this and Windows, but I’ve not gone deep with multiple different CPUs like in this video.
The Windows MSVC compiler is a good middle ground that everybody uses, so the data in this video is still great.
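For anyone wanting to try the Linux path, it’s largely a platform swap on the same UAT entry point (sketch only, MyProject is a placeholder, and you need the Linux cross-compile toolchain installed):

[UE4Root]/Engine/Build/BatchFiles/RunUAT.bat BuildCookRun -project=MyProject.uproject -platform=Linux -build -cook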

I do wonder how EPYC CPUs factor in here :thinking:, for build-farm nodes.


Cook and package
Biggest bane of my existence is the cooking and packaging steps of Unreal. These need to be done per platform when any piece of content changes. Testing games for performance needs to be done on cooked data, and consoles can only run cooked data.

The initial run takes an age, as it’s compiling shaders and compressing textures/models/sounds… etc., filling up the local or remote DDC cache. Very game dependent.
But after the first run, cooking still takes an age each time. This is a purely disk-IO-bound task (I think), as it’s essentially just reading and saving out every asset with some metadata. This can take upwards of 40 minutes.
I would love :pray: data on how this scales with better hardware.
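If anyone wants to benchmark this, the shared DDC is the usual lever for skipping the first-run cost across machines. A minimal DefaultEngine.ini sketch (the UNC path is a placeholder, adjust to your network):

[DerivedDataBackendGraph]
Shared=(Type=FileSystem, ReadOnly=false, Clean=false, Flush=false, Path=\\MyServer\DDC, EnvPathOverride=UE-SharedDataCachePath)

With that populated, the second machine’s initial cook pulls shader/texture derived data from the share instead of rebuilding it.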

[UE4Root]/Engine/Build/BatchFiles/RunUAT.bat BuildCookRun -project=MyProject.uproject -platform=Win64 -cook

A ‘-multiprocess’ cook is a thing, but it isn’t supported yet by some often-used middleware, so it hasn’t fully made its way into the light.
An ‘-iterative’ cook also exists, but it’s untrustworthy and I don’t use it unless I’m desperate.

The package step is a final compression pass that takes the cooked assets and bundles/encrypts them with Oodle. This is a CPU melter: more cores = better. >15 min is not unheard of.

[UE4Root]/Engine/Build/BatchFiles/RunUAT.bat BuildCookRun -project=MyProject.uproject -platform=Win64 -skipcook -stage -pak -package -configuration=Development

Thanks !

Any other Unreal devs in the house got the cook madness?


Do you have any new info on this?
I’m also interested to learn about the experience, especially since I’m looking to build a new system, and considering the price, I’m really looking to optimize.
So far, a 7960X with 256GB looks to be the sweet spot for getting more PCIe, but for a build machine, a 7985WX seems the sane choice. 1TB of RAM looks unobtainable; 96GB modules are also twice as expensive as 64GB ones, so that won’t fly I guess…

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.