I am running a 7950X with 64GB of RAM (6000MHz, CL30 or something) - my main workload is software development via WSL2, using Docker containers, NodeJS/TypeScript compilation in large monorepos, games programming (prev. Unity, now Godot) and general art & music stuff (currently using the Affinity Suite, which runs smoothly).
With all the current talk about Zen 5 performance and how the latest Win11 update changes "things", I am curious if my setup can be improved:
1. I am still running Win10 22H2 (I don't like W11), and I don't know if an update would downgrade performance. So far I'm happy with stability, but the 24H2 stuff makes me curious…
2. For some reason, whenever I boot up my PC I have to wait about a minute. No idea why, but the memory training doesn't seem to stick? It's not a biiig deal, but I basically have to do other things while it boots up (and restarts a couple of times doing so).
3. The WSL2 performance isn't bad, but Linux and Windows are constantly fighting for RAM; the 64GB is not enough at all, and I already upgraded earlier this year. I am constantly hovering around the 63GB mark, with audio stuttering, until I have to do a hard restart (which takes forever again, see #2).
4. The large TypeScript monorepos take forever to auto-complete whenever I code. I know this isn't quite related to hardware, but I did notice a big performance uplift when I upgraded to the 7950X (from a 2nd gen Threadripper), since it has a bigger cache. I was wondering if an upgrade to the 9950X would make sense? Are there any coders who could tell me if a CPU upgrade helped them in their daily work? Any benchmarks I've seen either used games or the default 7zip/Chrome compile tests, and I don't know how those would compare…
5. I haven't done any BIOS optimizations yet, and those that I did somehow reset themselves. Every couple of months I have to re-enable things like EXPO, tweak fan curves and whatnot. I tried Curve Optimizer in the past, but I don't know if that would help with performance? The CPU isn't at 100% load all the time, so it generally runs quite cool (I use an air cooler, but it all works, no heat issues).
6. Would a switch to Linux help? I guess I can run the couple of apps that actually need Windows in a VM or something, but I am unsure if there'd be a difference. I do have a couple of things that might require me to use Windows, so I am just testing the waters here (not a newbie to Linux though).
It sounds like you would benefit more from better configuration/Linux and more RAM than from a CPU upgrade.
Is Memory Context Restore enabled? It should help with boot times.
If your PC needs multiple attempts for memory training, something might be wrong with your settings.
Why are your BIOS settings resetting? After a failed boot, perhaps? My board does this on failed memory training… Or due to a BIOS upgrade?
Are you on the latest BIOS?
Switching to Linux could help. I don't have too much experience with WSL2, but I believe it is more memory-hungry than native Linux. But if you need to run a Windows VM constantly, that will eat away the difference.
You could get 2x48GB to run at similar speeds, or go for 4x32GB or 4x48GB. Four sticks will run slower, but that always beats running out of memory.
Memory Context Restore could help with that, as said above.
WSL2 uses a VM underneath, so I guess it’s expected that you’ll be constrained on RAM.
Yeah, I believe that's mostly a Windows issue, plus the overhead of filesystem sharing between Windows and WSL2.
Nah, I don't think an upgrade from 7000 to 9000 is worth it for your case; my bet is that you'll barely see any gains for what you mentioned.
I guess EXPO, fan tuning and enabling Memory Context Restore should have you covered. I don't think messing with other settings is worth it for minimal gains and a chance of instability.
Oh, for sure, especially regarding the part where you mentioned that auto-complete is slow. As long as your Windows stuff doesn't really rely on GPU acceleration, you should be fine running it inside a VM.
Thanks a bunch for the fast and detailed responses!
I think it is, but I'll check the next time I reboot. I am cutting power to the PC completely at the end of the day, but I assume the data should be kept alive by the CMOS battery anyway?
With the restarts I mainly meant the regular “boot up, do the training, shut down, now boot up properly” procedure. It’s been that way since day one for me. Should it work differently?
I honestly don't know why it happened, and no, it wasn't due to an update. I am also not on the latest BIOS as I haven't had the time; it's probably one from February/March or even older.
I definitely want my RAM to run at its fastest speeds. Most of the time it's not a big problem, only after having it running for too long with bigger tasks. For some reason, Resource Monitor shows a total of less than 63GB for all the claimed RAM, so I don't know what's going on. I limited my WSL2 to 42GB of RAM, but I can def. try tweaking that setting a bit more.
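For anyone else fighting the same thing: that limit sits in .wslconfig in the Windows user profile, roughly like the sketch below (the swap line is just an example value I haven't tuned), and WSL needs a "wsl --shutdown" before it picks up changes:

```ini
# %UserProfile%\.wslconfig - applies to the whole WSL2 VM
[wsl2]
# hard cap on how much RAM WSL2 can claim (the 42GB mentioned above)
memory=42GB
# example value: swap file inside the VM, as a buffer before things get OOM-killed
swap=8GB
```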
I don't access any files on Windows. All my files are in the Linux distro, not using any mounts whatsoever. VSCode is using the remote access thing where even the VSCode server runs inside of WSL. There shouldn't be a performance issue, and usually it's quite fast, just not with bigger projects. Might just be a JavaScript thing since the codebase is quite large in my case.
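One thing still on my to-try list is giving the TypeScript language server more headroom; the default cap is around 3GB if I remember right, which might be tight for a monorepo this size. Something roughly like this in the VSCode settings (the 8192 is just a number I picked, not something I've benchmarked):

```jsonc
// VSCode settings.json (the Remote [WSL] settings, since tsserver runs inside WSL)
{
  // raise the TS language server memory cap (value is in MB)
  "typescript.tsserver.maxTsServerMemory": 8192
}
```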
Thanks, I kind of figured, but good to have confirmation.
I do rely heavily on GPU-driven apps since the Affinity Suite is still not out natively for Linux. There are other things like USB lock devices that might only have Windows drivers, so I might not be able to switch in the first place.
Yes. Something weird is happening if your BIOS spontaneously resets. Either something's not stable and it resets after a memory training failure, or the CMOS battery is not seated properly, or…?
Mine doesn't do that. I have a 7950X on an ASUS ProArt X670 board. I have only observed that behaviour with bad/unstable memory settings.
I would try the latest BIOS and keep everything on auto (at least PBO and memory settings) and see if you still get this weird behaviour. If it's okay for a while, try EXPO/XMP again and see if the weirdness comes back (the reboots during boot and the BIOS resetting). If you're unlucky with your CPU, EXPO/XMP may not be stable; you might need a bit more voltage or a bit lower speed. Run memtest and other stability tests. Maybe reseat the CPU & memory.
You could jump to 96GB. If it helps you not run out of memory, it should significantly boost performance.
WSL2 within Windows is a virtualized environment; it is effectively running in a VM. I'm not too experienced with it (I've only used it to run some Docker containers or to ssh into a Linux machine), but the file system is virtualized somehow and probably isn't quite as fast as native. I'm also not a JS dev, so I don't know what the performance expectations are here. Since you probably know enough Linux via WSL, it could be worth trying native Linux to compare (if you have the time).
I run the Affinity apps and Capture One in a VM with GPU passthrough and it works great. I can't distinguish the performance from native, and benchmarks are within 5% of native. It can be tricky to set up properly and a bit of a headache if you're not well versed/don't like fiddling. But if you're up for it, it works great once set up.
There is a registry setting called ClearPageFileAtShutdown which is off (0) by default; try turning it on (1).
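If you don't want to hunt for it in regedit, something like this from an elevated prompt should do it (path written from memory, so double-check it; a reboot is needed for it to take effect):

```bat
:: sets ClearPageFileAtShutdown to 1 (on); needs an elevated prompt
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f
```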
What kind of storage device are those projects sitting on? Could be as simple as the SSD behind it being a bit sluggish.
When I built my new machine, I ditched Windows 10 for Debian 12 (and then Fedora 40 because…).
The 3 pain points:
- Some bug in Steam on Fedora 40 means it only launches correctly from the terminal
- DaVinci Resolve is VERY picky about GPU drivers and libraries
- The Affinity suite needs Windows
Yay, my machine reboots in 20-ish seconds, but then I also have to boot the Windows VM, and I had to invest a lot of time into getting a select few pieces of software to run properly, either in Wine/Proton or in the Windows VM.
Just something to be aware of when switching from one OS to the other.
Update your board's UEFI to the latest available first. Training on every boot means MCR is disabled, and that was the default setting for early AM5 UEFI versions.
Enabling MCR on that old firmware quickly leads to extreme instability.
And on newer versions MCR is both stable and on by default.
Also yes, switching to Linux is night and day vs Windows, if your workload and toolset allow it.
Hardware performance isn't your limiting factor here; it's the software and the perf limits inherited from OS design.
I think those repo autocompletes run on a single thread. 2nd gen Threadripper is Zen+. Zen+ had awful single-threaded performance, and Threadrippers were running at lower clocks as well. Single-threaded, the 7950X is more than twice as fast.
Okay, yeah, I think I had it enabled at first, but it wasn't sticking. This could explain that behavior. I've wanted to do a BIOS update for quite some time, but I can't afford to mess up my computer at the moment; maybe I'll get some free time over the weekend and see - fingers crossed - if it helps with some of these issues.
I figured it's Windows, but from what you all wrote I don't think a switch can be done. I have a 4090, and it seems NVIDIA support is still a bit wonky. I also use Adobe Lightroom + Photoshop from time to time (I'd love to use Affinity instead, but their RAW photo handling isn't the best). It's these little things that pile up.
That doesn't sound great for my use case. Since this is a work machine, it needs to simply work. I can't afford fiddling with drivers for days when a "simple" Windows install would do the trick.
Yeah, I've been trying to optimize the TypeScript setup, and there's definitely room for more. The new Rust-based linter oxlint runs in like 60ms over the whole codebase (using all 16 cores), whereas ESLint takes about 2 minutes. I'd call that a big difference.
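For anyone wanting to try it, the setup is nothing fancy; roughly this (assuming oxlint is added as a dev dependency, and keeping ESLint around for the type-aware/plugin rules oxlint doesn't cover yet):

```sh
# one-time: add oxlint as a dev dependency
npm install -D oxlint

# quick pass over the whole repo (runs on all cores)
npx oxlint .

# the slower, full rule set still goes through eslint
npx eslint . --cache
```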