Dude, nice comparison! For straight‑up workstations, the 9950X3D is a sweet single‑socket option: awesome boost clocks (up to ~5.7 GHz) and way cheaper to run, especially if you’re just using one GPU and don’t need crazy PCIe lanes. Threadripper 7000 offers beefier memory bandwidth and more lanes if you’re stacking GPUs or doing dev work, but it’s expensive and power‑hungry. EPYC shines for mega‑scale setups: dual‑socket, tons of lanes, tons of memory channels, but single‑core boost is meh unless it’s newer Genoa/Turin silicon. To keep the bookkeeping straight: GPU count + PCIe needs and memory demand define your sweet spot.
Sounds like someone spent big bucks on a Threadripper thinking it was the best at everything and now feels the need to defend that purchase no matter what.
1.) None of the benchmarks I was pointing to were ancient. The video I linked above was part of the launch reviews of the Threadripper 7000 series.
2.) Starfield only came out in 2023, and suffered from poor GPU utilization on high-end GPUs even at high resolutions and max settings (4k Ultra) because it taxed the platform so hard. Theories were that it was mostly RAM bandwidth and latency related, but I’m not sure anyone ever got to the bottom of it.
3.) Stalker 2 came out in late November 2024, 7 months ago. Hardly an “ancient title”. It is actually running on Unreal Engine 5.
4.) If you don’t understand why CPU benchmarks in games are always run at low resolution, then you don’t deserve to claim to understand the technology. It is done intentionally to isolate the GPU out of the performance benchmark.
A CPU game benchmark at high resolution would - for most titles - be a completely useless activity, as you would essentially be benchmarking nothing but the GPU.
GPU performance scales with resolution and quality settings. With the exception of a small number of quality settings (like the quantity of AI NPCs in large open world titles), CPU load is mostly constant across graphics settings at the same framerate.
When you use a low resolution to ensure the GPU is not the limiting factor, you can then look at independent reviews for both GPUs and CPUs to roughly assess how they will perform when used together.
If - in the GPU review, at the resolution you play at - the GPU can achieve 85fps average with 65fps minimums, and the CPU can achieve 95fps averages with 55fps minimums, your actual experience when combining the two is going to be roughly 85fps with lows dipping down to 55fps.
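If it helps, here’s that rule of thumb as a quick sketch (Python; the function name is just something I made up, and real frametime behavior is messier than a simple minimum, so treat it as a rough estimate only):

```python
def estimate_combined_fps(gpu_avg, gpu_low, cpu_avg, cpu_low):
    # Whichever component is slower at any given moment sets the framerate,
    # so a rough estimate just takes the minimum of each pair of numbers.
    return min(gpu_avg, cpu_avg), min(gpu_low, cpu_low)

# Numbers from the example above: GPU 85/65, CPU 95/55
avg, low = estimate_combined_fps(85, 65, 95, 55)
print(avg, low)  # roughly 85 average with lows dipping to 55
```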
You are correct that in most titles out there, at high resolutions and settings, you are going to hit the GPU limit before you hit the CPU limit, but as I have pointed out there are some exceptions (like Starfield and Stalker 2). I don’t deny that Stalker 2 is an outlier here, but if I am going to spend big bucks on a system, I’ll want it to handle everything. And Stalker 2 has been a title I have been waiting for for 15 years, so it happens to be of very high importance to me personally.
But once you factor in that resolution upscaling is very much a thing these days, that it works pretty well, and that the average gamer demands higher framerates than they did in the past, even at the expense of resolution and quality, there are now many more titles where the CPU becomes the limit.
Let’s challenge an assumption here: that 4k is a desirable modern resolution, and that just because you have a fancy and expensive CPU you are also going to demand high resolution and high quality settings in games.
Firstly, according to Steam’s hardware survey as of June 2025, fewer than 5% of people who regularly log in play at 4k resolutions or above.
Secondly, the majority of modern PC gamers care more about framerate than they do about quality or resolution.
Yes, us old guys still love the ultra settings at high resolutions for the most eye candy, but the current generation of kids that play games will set everything to minimum and reduce their resolution down to 720p if it means they can go from 120fps up to 240fps. From their perspective, the more framerate the better. They don’t let the graphics get in the way of achieving their goal, which is as high a framerate as possible. The kids these days are paying big bucks for ridiculous 320Hz and 500Hz 1080p monitors, and see them as the holy grail for games.
But all of that is beside the point.
In the example I used above, Stalker 2 - a recent title, only 7 months old, running on Unreal Engine 5 - a 7000 series Threadripper would only average about 62fps, with lows dipping down to 25fps.
At the settings I was selecting (4k, Ultra settings, Transformer model scaling at “Balanced” level or CNN model scaling at Quality, not sure yet) my GPU would have outperformed that, and the CPU would have held me back. It would have been particularly annoying during the dips down to 25fps.
This is probably correct. I haven’t seen any numbers on it though. But I would like to. The registered RAM is likely still going to be an issue, but doing something like this (albeit a bit extreme) is still likely to help.
I don’t think anyone has suggested buying low end CPUs for games. I have however suggested that workstation CPUs may not be the best fit for that workload, and that even mid-level previous gen CPUs (like the Ryzen 5 7600x) will outperform Threadripper 7000 in most titles.
GN showed this conclusively in a large suite of titles in Steve’s review video. But you can go searching. No one has posted any conflicting reviews. Pretty much all reviews I have seen slot Threadripper 7000s in somewhere under a Ryzen 5 7600x in game performance. There are some titles where it does slightly better, and some where it does slightly worse, but the general pattern is pretty much the same.
But you are correct that in most titles, at old school enthusiast level resolutions and graphics settings (max resolution, max quality) you’ll likely hit GPU limits before the Threadripper 7000 becomes the CPU limit.
It’s just that there are a growing number of titles for which this is no longer the case, and that’s not how many people play games anymore. They prioritize high framerate at lower settings.
That, and, once you have invested $6000 (or however much the combination of a Threadripper 7000 CPU, a compatible motherboard, and RAM costs once added up these days), you are going to be reluctant to turn around and upgrade that behemoth every generation.
At that price, unless you have some sort of unlimited budget, it is going to have to last a while. It’s much cheaper and easier to just turn around and pop in the next generation’s Ryzen 7 x800X3D CPU when it launches for $479.
Meanwhile that Threadripper 7000 is going to fall further and further behind with each generation while you try to justify spending another few thousand on a replacement.
That’s where I am now. I learned my lesson with my Threadripper 3960x. I don’t need anything newer or faster for my work, but it is wholly inadequate to keep up with the games I want to play. And the Threadripper 3960x wasn’t even as far behind its consumer contemporaries in games when it launched as the Threadripper 7000 was at its launch.
And it is significantly cheaper and easier to maintain a separate system dedicated to games than to try to regularly upgrade a pricy workstation platform.
Did you even read the original question in this thread? Refresher, it was:
I’ve run so many game benchmarks over the years I can still visualize most of the Doom III timedemo. For years my job was to tune CPU performance of graphics drivers.
You are reading benchmarks and Steam surveys and not understanding what they actually mean. Yes, a V-Cache part can outperform a Threadripper in some, possibly most, games. The margin should not be large, but it’s not zero either. No one is arguing that point.
However, that wasn’t the question. It was about workstation use as the primary workload. The real question here is: what are the demands of the workstation workloads? Once you solve that, it will run games fine. The goal isn’t to post scores to the web, it’s to be able to enjoy the game.
Now as to:
Do you seriously believe that people are spending $5-15k on a workstation to pair it with a 1080p monitor? That same Steam survey shows the vast majority of users having 32GB or less of RAM; do you want to challenge the assumption that a TR will be built with more memory?
I’ve had to think about this question as we do hardware refreshes. These days I do game development. The problem with an AM5 chip is twofold. First, it is just too limited in the amount of RAM it can reasonably hold. Our older 128GB machines run out of RAM regularly. Just loading the latest UE level (in the editor) caused my machine to use 130GB, just for UE. Second, you run out of PCIe lanes really, really fast. That kills it, full stop, for most workstation uses.
I didn’t see a clear description of the workload in this thread. As a result I won’t guess at the requirements for RAM or add-in cards. However, the lack of expandability on AM5 is a very, very large con.
Gaming performance being slightly lower, mostly at resolutions unlikely to matter, is a con of TR, one the OP explicitly said they were not concerned with. Understand the problem before arguing solutions.
Baffling as it may sound: yes, in many cases. Well, at least for high end consumer models. Workstation clientele are a little bit of a different population.
My kid was actually disappointed when I got him a decent 1440p monitor as a gift, and then proceeded to use it at 1080p, centered in the middle of the screen. He had sufficient hardware to get good performance at 1440p, but someone out there is telling these kids that they need as low a resolution as possible and as high a framerate as possible all the time.
No. But resolution preference and RAM needs are not the same thing.
To be clear, I’m pretty sure AM5 platforms support 256GB of RAM, but it can require dropping the clocks a bit.
I’m totally with you on this.
I’ve been using HEDT platforms for a long time now.
2008-2011: Intel x58 (Core i7-920) with 36 PCIe lanes.
2011-2019: Intel x79 (Core i7-3930k) with 40 PCIe lanes.
2019-Present: Threadripper 3960x with 64 PCIe lanes.
I can’t imagine going back to a consumer platform with just 28 lanes for my work. Heck, even for my main home use. That’s just unusable to me.
I’d only consider those platforms for games, and even then the 28 lanes become very constricting, especially if I want multiple NVMe drives and an x8 high-bandwidth networking card to maximize my use of my NAS, all without dropping the lanes assigned to my GPU below x16.
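Just to make that lane math concrete, here’s a rough sketch of the kind of budget I mean (Python; the device list and lane counts are illustrative assumptions, not any specific board’s layout):

```python
# Hypothetical lane budget on a 28-lane consumer platform (AM5-class).
# The exact split varies by board; these numbers are assumptions for illustration.
total_lanes = 28

devices = {
    "GPU": 16,                # keep the full x16 for the graphics card
    "NVMe drive #1": 4,
    "NVMe drive #2": 4,
    "High-bandwidth NIC": 8,  # x8 card for fast NAS access
}

used = sum(devices.values())
print(f"Requested {used} of {total_lanes} lanes")
if used > total_lanes:
    print("Over budget: something has to drop (e.g. the GPU falls back to x8)")
```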
I could have sworn the OP in the thread I was posting included some light game needs in addition to pure workstation use. Maybe I got my threads confused.
On that topic, depending on what you consider light gaming, I guess my point is that it can either make a difference or not.
For me it did.
My Threadripper 3960x is primarily a home work and home productivity machine. I run several VMs, do some rendering and encoding, and I play with some CUDA stuff. I have excessive local storage needs and use high bandwidth networking to my NAS for even more storage.
I also use it for general purpose IT support type of work for my own systems and those of family/friends. The PCIe lanes are invaluable in this application.
I play an occasional game, but it is not a huge priority. I DID want to play Stalker 2 when it launched, as the original old Stalker series was a huge favorite of mine and I had been looking forward to a sequel for 15 years, but I quickly discovered that while my Threadripper 3960x still met all of my work and productivity needs, and seemingly will for a long, long time to come, it was wholly inadequate for this game I was really looking forward to, which was a bummer.
The whole point of my original response was that when I did my research, I found that since the Threadripper 3960x, the already expensive platform had become even more expensive, and chasing upgrades on a Threadripper platform was going to be prohibitively expensive, especially considering what I have already meets all of my productivity needs as-is.
For me, it was cheaper to pursue a dedicated consumer class machine for games and keep what I have for productivity than it was to start chasing Threadripper upgrades that in many cases wouldn’t even be adequate for the game I was interested in.
That is all I was conveying.
But yeah, to your point, the game requirements are apparently irrelevant to this OP’s question. AgenAnon was sending me essentially the same responses in two different threads, and I decided to reply in just one, and I must have accidentally chosen the wrong one.
That is my bad.
Probably correct; however, 256GB was what I put into the 2023 build, not what I would buy today (512GB). Then there is that whole ECC thing.
As to resolution, look at the popularity of video cards in the same survey. It explains a lot about why 1080p is popular. Sadly I’m getting a link error at the moment. Last time I looked, the vast majority of the video cards were not going to run high res well.
I’m sure there are a handful of people chasing the highest possible framerate, just like some people put stickers on their race car and think it makes them faster. However, I’d bet that those people don’t start by buying for “workstation use”.
Yeah,
Consumer ECC is iffy at best.
Some motherboard manufacturers claim they support ECC UDIMMs on AM5, but who knows if they actually mean fully functional ECC reporting or if they mean “well, it will work if you insert ECC sticks”. They have played that shitty game of words in the past.
ASRock and Asus do. Last I checked MSI still didn’t have support.
For historic values of “they”, that definitely equals the fine print in Gigabyte’s manuals, but I’m not sure about any other mobo manufacturer. So far as I know this doesn’t apply to AM4 or AM5.
Sounds like someone can’t afford HEDT and wants to convince himself that his low end desktop CPU isn’t weak. No one has to go for the best, but elaborate mental gymnastics to justify your low end CPU with ancient 1080p benchmarks, which do not matter at all because that resolution is so easy to run you will have an extreme FPS surplus, is silly. Especially while ignoring the benchmarks that actually do matter for real world use:
Nope. The Steam hardware survey is commonly cited by people to justify terrible hardware, but it’s misleading because it isn’t remotely accurate, isn’t vetted, and includes things like millions of PC cafe machines in places like Asia and India.
Correct. There isn’t much more to say here.
Says it well. To add to this, the technically illiterate myth that the CPU meaningfully matters “for gaming” is something that was started by marketing departments as a way to manipulate buyers.
I wouldn’t call the 9950X3D a workstation chip, honestly. It’s a 170W part or something like that. The performance just isn’t there and you are severely limited on AM5.
16 cores isn’t a lot and I realized this when trying to procedurally generate cities for Unreal Engine, which would take hours.
I frequently run our terrain generation: 56 cores (AVX-512) plus a 5090 doing CUDA, and it still takes 72 hours. Next machine, I swear I am building a custom case to tame the noise while doing that. Turns out it’s hard to cool 1100W quietly. Last gen I went Intel because the AVX-512 was poor on the older TRs.
Depends on the game. For us, where we are doing a lot of simulation (not an FPS), it can matter. That said, we shouldn’t have the issues with high thread counts that some games have.