Understanding computer builds for gaming performance

How do you guys figure out what’s causing bad performance in games? Not obvious IO/GPU bottlenecks, but the small, infrequent stutters that make a game feel fidgety instead of smooth.
For me the solution has always been to buy the best hardware I could afford and tinker with graphics settings, but since I plan on building my first high-end PC and I want it to last a few years, I was wondering how you guys do it. Is there a method to build a PC that doesn’t have bottlenecks?
Of course some games are just badly optimized, like WoW or Cyberpunk, but I’m talking about a methodical approach to building a PC and choosing which parts to use. Is there a way for me to know for a fact that my RAM isn’t a bottleneck for my GPU/CPU, instead of just guessing by experience?

Feel free to point me towards more academic/theoretical material, I would love to read it.

RAM usually does not matter much on Intel systems; on AMD systems it all depends on the chip and mobo combo. The best is a 1:1 clock ratio between the Infinity Fabric and the RAM (what this magic number is changes with each generation of the chip),
and the lower the CAS latency the better. 4000 MHz RAM is the absolute top for Ryzen 5000 and even that can be a bit touch and go.
Wendell did some RAM speed and timing tests on Ryzen 5000 (not digging through videos tonight to find all of the timestamps).
And a system with zero bottlenecks does not exist. There will be a bottleneck somewhere, be it GPU, CPU, or even game engine limits. Where it appears all depends on resolution: at 1080p the CPU is going to be the bottleneck in most cases; at 1440p it switches to the GPU, and at 4K it is even more GPU-bound. With modern AAA games going over 30 GB, and some over 100 GB, loading speed on spinning drives is just awful, so an SSD/NVMe drive is required.
So with this knowledge, get the best GPU you can and pair it with a mid to upper-mid-range chip. For AMD, the 5900X is the highest I would go for a pure gaming build, and the lowest I would go is a 5600X.
For Intel you can go nuts on the RAM speed; it has the advantage of being monolithic, so it plays better with 4000+ MHz RAM. As far as which GPU to buy, that boils down to how much that extra 2-3% is worth to you, going from a 3080 to a 3090 or a 6800 XT to a 6900 XT (both top-end cards are nearly pointless).

2 Likes

Inconsistent performance IMHO is normally either IO (including bandwidth between card and system) or software related.

Which means you can look at fixing the software, improving the IO, or reducing the requirement for IO.

Kill background services, update drivers, install game on faster storage.

RAM capacity (both system and video memory) can be a factor in performance consistency: when either needs data purged or loaded in from storage, there’s often a pause to wait for that. If more can be held in system/video memory, it needs to be paged in or out less frequently. Ideally, for video, you want all the content for the game’s current state held in VRAM without the need to swap in/out during gameplay.

So think either more video memory, more RAM or reducing texture sizes to fit into the memory better (e.g., don’t run ultra quality textures if you don’t really have the VRAM capacity for it because the card will be constantly thrashing the PCIe bus to load/purge texture data).

As soon as a video card needs to hit the PCIe bus for bulk amounts of data (e.g., textures), performance tanks - VRAM is much, much faster than even PCIe 4.0.
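
To put rough numbers on that VRAM point, here is a sketch of texture footprint math. The formulas (4 bytes per pixel for uncompressed RGBA8, roughly +1/3 for a full mipmap chain, block compression at around 1 byte per pixel for BC7) are standard, but treat the figures as illustrative, not as any specific game's budget.

```python
# Back-of-the-envelope VRAM footprint for a texture, to see why
# "ultra" texture packs blow past VRAM budgets so quickly.

def texture_bytes(width, height, bytes_per_pixel, mipmaps=True):
    base = width * height * bytes_per_pixel
    # A full mipmap chain adds roughly one third on top of the base level.
    return base * 4 // 3 if mipmaps else base

# One uncompressed 4096x4096 RGBA8 texture (4 bytes per pixel):
mb = texture_bytes(4096, 4096, 4) / 2**20
print(f"{mb:.0f} MiB for a single uncompressed 4K texture")

# Block compression (e.g. BC7 at 1 byte per pixel) cuts that to ~21 MiB:
mb_bc = texture_bytes(4096, 4096, 1) / 2**20
print(f"{mb_bc:.0f} MiB with block compression")
```

Even compressed, a few hundred such textures add up to several GiB, which is where the PCIe thrashing described above starts.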

Every system will be bottlenecked by something. Whether or not that limits performance in a perceptible disruptive way (e.g., a hitch or stutter) is another thing, but everything will have a limiting factor.

1 Like

I don’t mean debugging my specific system or removing bottlenecks as a factor. What I’m trying to find is more of a theoretical background on how to measure and reduce bottlenecks methodically, instead of just guessing at the probable cause: reducing them to the lowest possible level given a certain requirement, or at least pinpointing what specifically is causing the bottleneck, with something as simple as opening htop and seeing if I’m using 100% of the CPU.

For example, if we have an application, we can trace all the individual functions in that application and measure how long each takes to run, then direct our optimization efforts towards the slowest functions in the system. Is there a similar approach for measuring how a game is running? Of course not pinpointing individual API calls, but just monitoring what piece of hardware is being stressed and when.

I mention this because on @wendell’s 6800XT videos i saw some pretty detailed reports on game fps throughout a period of time and that got me thinking that maybe there’s a better way to do this than trial and error.

To summarize
1 - Can I measure how stable a system will be in any way besides trial and error? How do I know a PC build is stable before buying it? Know as in methodically know, instead of just inferring by experience.
2 - Given an existing system, how can I measure what is causing a specific bottleneck? Are there any tools that track that for me? Can I launch a game and get a report on what parts of my hardware lagged behind and when? I’d like to specifically know if I’m running out of VRAM or if my normal RAM is too slow, things like that.
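
For context on question 2: the FPS-over-time reports mentioned above are usually built from per-frame time logs, which capture tools can record (MangoHud on Linux, PresentMon or CapFrameX on Windows, for example). The familiar metrics then fall out of a short script. A minimal sketch, with made-up frame times:

```python
# Sketch: turning a log of per-frame times (milliseconds) into the
# metrics reviewers report. The frame-time data here is invented for
# illustration; real logs come from capture tools such as MangoHud.

def frame_metrics(frame_times_ms):
    """Return average FPS, 1% low FPS, and indices of stutter frames."""
    n = len(frame_times_ms)
    avg_ms = sum(frame_times_ms) / n
    avg_fps = 1000.0 / avg_ms

    # "1% low": average FPS over the worst 1% of frames.
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000.0 / (sum(worst) / len(worst))

    # A common stutter heuristic: any frame taking > 2x the average.
    stutters = [i for i, t in enumerate(frame_times_ms) if t > 2 * avg_ms]
    return avg_fps, low_1pct_fps, stutters

# 98 smooth ~16.7 ms frames with two big hitches mixed in.
times = [16.7] * 98 + [80.0, 120.0]
avg_fps, low_fps, stutters = frame_metrics(times)
print(f"avg: {avg_fps:.0f} FPS, 1% low: {low_fps:.1f} FPS, "
      f"stutter frames: {len(stutters)}")
```

Note how two hitches barely dent the average but crater the 1% low; that gap between the two numbers is exactly the "fidgety instead of smooth" feeling made measurable.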

I think that’s a tough one. Keep in mind that you are also always participating in the silicon lottery, which could also affect your deduced values.

Generally speaking though, I think you can methodically avoid bottlenecks by throwing money at it. So, to eliminate a GPU bottleneck as thoroughly as possible you “should” go for a 3090; for the CPU, go for a 5950X, and so on…

However, when it comes to not wanting to spend as much money as possible, it becomes really hard to tell.

Take VRAM for instance: many games allocate more VRAM than they actually use, which makes deducing how much VRAM a GPU needs pretty hard. Furthermore, AMD uses different memory compression than Nvidia, which leads to different needs.

My methodical approach is to have a look at various reviewer suggestions and estimate what I want for my PC and purchase accordingly.

Afaik, both cards offer more gains than 2-3%. The 3090 is on average 10-15% faster. This doesn’t justify double the price, but it shouldn’t be downplayed either. :wink:

1 Like

Let me rephrase that for you:

“How can I know, in advance, that one of a bazillion random combinations of components will give me the gaming experience that only I can subjectively rate — and which I am unable to communicate to others — before spending any money?”

or

“Guess what I’m thinking, then guess what I want to buy, and guess what I want to use it for, but be damned sure you scientifically make sure I don’t spend a cent more than I have to.”

You see where I’m going with this, I hope?

Computers are complex things. There are tens of thousands of permutations of CPU+Mobo+RAM alone. Forget GPU and storage. Forget software. You may as well call that number infinity. An infinite number of combinations of hardware and software. And you think it is even remotely possible to optimise that problem space ‘methodically’, without inference or experience? It is not. You are asking an impossible question.

To be quite clear: There is an answer to your question, but it is impossible to practically arrive at that answer within a human lifespan. The problem space is just too large.

If your question was, instead: “I want to build a computer which runs OperatingSystem version X, for the sole purpose of playing Game Y, with this list of graphics settings: [ … , … , … , … , … ], and I want a 1% low of no less than 75 FPS on ThisSpecificDisplay at ThisSpecificResolution. I have pre-selected RAM to MakeModelCapacity, CPU to MakeModel and Mobo to MakeModel with UEFI version V. What GPU will make me happy without spending more than I need to?” … then an answer to that question could be ‘methodically’ arrived at in between 3 and 36 hours.

In the real world you don’t know how a system (combination of components) will perform for sure until you build it… and then use it for precisely the task(s) you want to use it for. That’s just the way it is.

The closest practical way to get even remotely close to an answer without spending any money is to watch a YouTuber playtest and benchmark a specific system, look at the results they got, and then build the exact same system they did. In short, let others do the testing, then just copy what they did. Don’t concern yourself with why it works, just be happy that it works.

It’s not a terrible approach. The only real issue you will have is finding the right YouTuber that plays the same games as you do, at the same resolution you want to, with the same graphics settings you would like, and who happens to test all those games with a single system.

Alternatively, get comfortable researching, analysing, prioritising, compromising and taking risks. Accept that the world is rarely black and white, that you rarely have full knowledge at your disposal, and that no course of action will ever be the perfect one. That will serve you well for the rest of your life.

3 Likes

I’m not asking you to guess what I need specifically. I’m asking: when you’re in the same situation and also need to build a system, how do you do it? Is there a better approach than mine (inferring by experience)?

If there isn’t, sure, that’s life.

What I’m trying to say is “how do I arrive at something better than wildly guessing based on previous experience”, not “how do I factually get the best value possible”. Sorry if I came across as a smarmy asshole dismissing people’s advice; maybe it’s just a language barrier, English isn’t my first language. If how I’ve always built systems is how you guys do it too, sure, no problem.

The second issue I’m still grasping at straws on, though: in an existing system, how do I debug what is causing a specific bottleneck? Are there tools you guys use to trace bottlenecks to their source? Especially when they’re not huge freezes, just enough to create an unpleasant experience.
Once again, sorry if I sound like a demanding asshole or something, that is definitely not the intention.

Doing YouTube searches, but limiting to a particular controlled element [most obvious example: a GPU at a set resolution], can help in seeing the combination variations [Intel/AMD processor, RAM, etc.]. These folks run through multiple games at listed game settings, either creating a [repeatable] benchmark sequence or making use of an included benchmark… Occasionally these benchmarks may incorporate additional comparable GPUs [see below for an example] or show off generational improvements [like “Vega 56 vs 5700 XT vs 6800”].

… Don’t ignore the possibility that it could very well be the game itself at fault [ex. Metro 2033’s engine requiring a rehash via Redux]. Do check if your game(s) have any history of issues [large and small alike].

2 Likes

Your English is excellent.

Previous experience allows you to develop a reliable “gut feel” for how system components generally work together. In the absence of a laboratory, and lots and lots of time and $$$ for testing, it is likely to give you an ‘acceptable’ outcome. Probably not ‘optimal’, but also very unlikely to be ‘unsatisfactory’.

You may or may not know that pretty much all of the world’s ‘Elite’ Special Forces are trained to operate within an inevitable “Fog of War”. Regardless of how much training is performed, or intelligence is gained, in advance of a mission, there will always be ‘unknowns’ that crop up when the mission is underway. The soldiers are trained to expect, accept, and adapt to these unanticipated events or factors. Missions are never ‘perfectly’ executed. They are, however, usually ‘satisfactorily’ executed. Complete and perfect situational awareness is neither possible nor necessary for satisfactory mission completion.

You are in the same sort of boat. You have a goal, a lot of complex ‘moving parts’ (components) which are only partially understood, and the interactions between those moving parts will inevitably produce unanticipated/unexpected behaviours. Some of those behaviours will be beneficial. Some will not be. Some may seriously jeopardise your chances of reaching your goal.

Due to the very nature of those components, there is very little that you can do to actually ‘fix’ anything. You can’t ‘re-wire’ anything to correct architectural or design flaws. About the best you can do is ‘tweak settings’.

So, since there is little opportunity to ‘correct’ problems, a common strategy that folks use when building systems is to simply overengineer/overspec everything. If you accept that unknown problems will result in a 40% drop in FPS (for example), simply spec a GPU that delivers 40% more FPS than you actually want. Then if the problems do manifest you are still happy, but if the problems don’t manifest you end up really happy. Such excess provisioning can also be called a ‘buffer’.
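
One detail worth noting about sizing such a buffer: the arithmetic is asymmetric. To still hit a target after a fractional drop you need target / (1 - drop), so absorbing a 40% drop actually takes about 67% extra headroom rather than 40%. A tiny sketch (the FPS figures are just examples):

```python
# Sizing an FPS "buffer": if unknown problems can cost you a fraction
# `tolerated_drop` of your frame rate, the spec you need to buy is
# target / (1 - tolerated_drop), which grows faster than the drop itself.

def required_fps(target_fps, tolerated_drop):
    return target_fps / (1.0 - tolerated_drop)

spec = required_fps(75, 0.40)
print(f"to still hit 75 FPS after a 40% drop, spec for {spec:.0f} FPS")
```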

Of course, large buffers cost a lot of money to create. If money is limited (and it usually is), you can invest time into researching components with the specific goal of better understanding how they will interact. The more you know, the fewer unknowns will remain, and the smaller your buffers need to be.

Only prior experience will tell you how good you are at understanding systems and judging/minimising buffers (‘right-sizing’ your system).

If you specced a system to get (for example) 1% lows of 75 FPS in GameX, then purchased the components and built it, take the time to benchmark the game and see how close you got to your goal. If you were close, great! If you missed by a significant margin (in either direction) spend some time working out why you got it wrong (low or high, you still got something wrong). In 3–5 years when you upgrade or build your next system you won’t make the same mistake again, you should get closer to your goal, and the size of any miss should be smaller. Rinse and repeat for the rest of your life. After 20-30 years you’ll be able to nail the process pretty consistently.

You’re only ‘wildly guessing’ if you don’t reflect on past attempts and don’t keep researching and learning. If you reflect, research and learn then you are making ‘educated guesses’ — a completely different thing, and absolutely fine.

Control variables. That’s it. Control variables. Apply the scientific method. Lock down every variable you possibly can (e.g. eliminate the human from the equation, eliminate undocumented ‘features’ like energy saving from the equation) and then change one and only one variable. Run your tests. Record the results. If the problem persists, then restore your system back to the controlled base-line, and change the same variable by a different amount, or change a different variable. Run your tests. Record the results. Rinse. Repeat. Expect to spend hundreds/thousands of hours doing this before you get even a slight response/improvement.

Remember: 99.999…% of all the moving parts are hidden from view and beyond your control. You are, essentially, just hoping that you will stumble upon some setting that — for whatever reason — makes a positive difference.

There is no magic tool or set of tools that make diagnosis of bottlenecks possible. There are an infinite number of combinations of hardware and software and thus an infinite number of possible conflicts and thus an infinite number of possible reasons why you are experiencing that one, very specific symptom that is irking you on your unique system.

Even if you can find a way to make it go away, the same problem may not exist (and even if it does, your solution might no longer work) the moment any of your software (operating system, drivers, game code) gets updated. All bets are off, of course, the moment you change any of your hardware. You then start from scratch and do it all over again.

All-in-all optimising an existing system is, to a great extent, a massive waste of time. The Law of Diminishing Returns kicks in instantly. You will probably get 90% of all of the improvements (you are ever going to get) in the first hour of tweaking. You probably will only get another 9% in the next hour. Another 0.9% in the hour after that. Note that these are just percentage improvements, not percentages toward perfection or your goal — those are completely different. A fun way to pass time during a COVID lockdown, but hardly a productive use of your time.

Possibly the best bang-for-buck strategy? Move your graphics quality slider down by 1 notch. Takes all of 10 seconds. Gets you over 90% of the way to where you want to go. Makes the majority of ‘typical’ problems ‘go away’. Call it done and get on with life. IMHO a few extra ‘pretty pixels’ just isn’t worth the grief.

2 Likes

With the risk of sounding anticlimactic, the way to “understanding computer builds for gaming performance” is knowledge. Researching PC components for a few years and keeping up to date with the latest stuff (even though I did not buy anything) made me knowledgeable enough to guesstimate which components would go well together, i.e. not have (noticeable) bottlenecks at a certain price point. I’ve been in this “business” since 2012 (back when older 2nd-hand Core 2 Duos were the best bang for the buck). You don’t need years of study and research behind your back to build a PC, but you do need at least an intensive month of research (at least in your budget / price range).

Sometimes you can guess which components would go together based on their prices, but you still need some prior knowledge of the performance of these components (especially in these times of inflation, price hikes, artificial scarcity, and hoarding). It’s better to use MSRP rather than actual prices, but you still need to check the prices to see what you can actually afford.

To get knowledge of these components without buying them, you basically need to watch and read reviews from different sources.

2 Likes

If you can give us some games you play, it’s easier to figure out what your system issue is.

Gamers Nexus has some guides for optimizing for games


They also have some older game coverage if you want to read



My personal opinion

I think new mid tier hardware is a good starting point. Alternatively you can get last gen’s top tier, which usually matches current mid tier prices/performance, as stock or deals come up.

If you are willing to wait to get current-generation higher-tier hardware, wait to your heart’s content. It’s much better to have an overkill system than an underwhelming system.

However, if there’s a high opportunity cost (potential opportunities to make money with your rig or being unhappy for a long period of time), it doesn’t hurt to buy now.

A side story

In fact, when I was picking parts for my build, I bought a 5820K before the Ryzen launch and an R9 Fury X when RX 480s were being bought up by miners. I was perfectly happy with my build until I started gaming and streaming Overwatch.

Initially I learned that the Fiji hardware video encoders couldn’t handle 1080p60, so I upgraded my graphics card to a used RTX 2070. There was an improvement, but I would still occasionally get stuttering and BSODs. I later learned my power cables weren’t properly seated in the graphics card, which explained the BSODs while playing Overwatch.

I was a happy gamer until I started doing video production for a summer gig. My 5820K took a while to encode videos, so when I encountered an Amazon deal for the 2700X and Crucial RAM, I upgraded, buying a B450 Tomahawk from Newegg. When my parts came in, before taking the old system out, I decided to test the new faster RAM (3200, CL16) to see if it would solve the stuttering in Overwatch. The RAM I previously had with my 5820K was only 2400, and I found that Overwatch had more consistent framerates with the faster 3200 CL16 RAM.

At that point, I decided to keep the 2700X and B450 Tomahawk, since the 2700X was going to be more power efficient than my 5820K and would perform better in video encoding. The performance in video encoding saved me time and allowed me to be more productive, getting more work done for my summer gig.

1 Like

benchmarks

as has been stated above, without testing the software you care about on the actual hardware you’re considering, it is a crap shoot.

you can make educated guesses, and make trades in one system area (i.e. spread your budget so you have a balanced system which doesn’t suck in any single area***) to mitigate potential issues in another based on experience and test results on similarly engineered hardware. but that’s it.

benchmark, develop a theory on what the software in question bangs on pretty hard, and buy more of that.

there’s no shortcut. as to predicting future performance - well that’s a case of observing industry trends and taking a punt on what direction things are headed - then buying hardware good at that type of thing.

me? i’ve been building my own boxes since 1995. i still get it wrong (guessing what will perform well for the longer term) occasionally.

edit:
*** it is worth spreading the money for better overall spec rather than dumping everything into a single component, because the further up the stack you go, the fewer gains per dollar spent. so spread those dollars to all areas and get “good” spec all round instead of awesome in one area and suck in another, for equal spend. it helps avoid getting caught out when something you skimped on suddenly becomes more important for new software.

1 Like

Don’t do what I did :joy: really shouldn’t be laughing but it makes me feel better :roll_eyes::wink:

I wanted (not needed) to build a new computer (it’s been 10 years) and I revisited some of the forums I had gone to last decade, those that still exist.

I did a very poor job at my research, other than to conclude that AMD was the new champion in multithreaded CPUs, and mistakenly concluded that a Ryzen 3900X would be all I needed and that Threadripper would be overkill, because I only looked at CPU performance and did not look at the system holistically. What caught me out was the PCIe lane limitations on the X570 chipset; the CPU itself was plenty powerful.

I needed more PCIe lanes for storage, and so after a couple of months I had to start again with a Threadripper system build, and now I need to sell off the parts I can’t reuse… I guess my point to you is:

Look at the system as a whole… what are your constraints? Then start adding components together while checking for inter-compatibility.

It was an expensive mistake to make… for me at least. Ask lots of questions.

4 Likes

Yeah, I agree. I’m trying to figure out what the bottlenecks on my current system are so I can build the next one with that info in mind, since I plan to continue with AMD GPUs/CPUs. For example, I’ve never seen my CPU above 50% usage, so I’m thinking of keeping the mobo/CPU, maybe changing the mobo if I need something specific from newer gens.

Some strange behavior pops up sometimes, though. In Civilization 6, for one, I get a constant 20-40 FPS even on lower settings. Civ6 specifically I wrote off as just the game’s fault, since changing settings doesn’t meaningfully change the FPS, but in other scenarios (like Cyberpunk) I can directly trace GPU usage to lag spikes, and that’s great. What I can’t trace, however, are IO bottlenecks. I never know if I’m at maximum IO for a given scenario or not, and since I always blame the GPU/CPU first, that lack of metrics bothers me a little. But it’s alright, I’ll just follow your advice and buy an M.2 drive for my next PC; if anything it will remove IO as a variable.

Thanks!

Yep, the first PC I ever had the money to build is a solid mid-tier (5700 XT + 1800X) and it served me well for a few years. Now that I’ve upgraded to a 1440p monitor it’s showing its age, though. I can still play games like The Witcher 3 on near-maximum settings at 60 FPS easily, but newer games sometimes won’t run smoothly even on the lowest possible settings. In some games it even seems like the problem is my read/write speed, which is really weird.

Yeah, I can imagine. I’m just planning to build a PC for gaming on Linux in the next 2 or 3 years, since my current last-gen mid-tier can still last a while. I’ll make sure to ask plenty of questions!

2 Likes

That’s the way to go.

Civ6 is usually CPU-bound, so tweaking GPU settings is pretty much a waste of time.

Well, since you later say:

…I assume that you are already running Linux? If that’s correct, then something simple like:

$ iostat -zx 5

…will let you see the avg-cpu %iowait figure, as well as I/O stats for your various devices.

iowait is the percentage of time that your CPU was idle because it was waiting for I/O operations to complete.

Sounds like a metric you could monitor to determine if I/O is actually the (or at least ‘a’) bottleneck.
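
If you’d rather script it than eyeball iostat, the same iowait figure can be derived from two snapshots of the aggregate `cpu` line in `/proc/stat` (iowait is the fifth counter after the label). A sketch with hard-coded sample lines; on a live system you’d read `/proc/stat` twice, a few seconds apart:

```python
# Sketch: computing an iowait percentage like the one iostat reports,
# from two samples of the aggregate "cpu" line in /proc/stat.
# The sample lines below are invented for illustration.

def iowait_percent(sample_a, sample_b):
    # Fields after "cpu": user nice system idle iowait irq softirq ...
    a = [int(x) for x in sample_a.split()[1:]]
    b = [int(x) for x in sample_b.split()[1:]]
    delta = [y - x for x, y in zip(a, b)]
    # iowait ticks as a share of all ticks elapsed between the samples.
    return 100.0 * delta[4] / sum(delta)

before = "cpu 1000 0 500 8000 100 0 0 0 0 0"
after  = "cpu 1100 0 550 8500 300 0 0 0 0 0"
print(f"iowait: {iowait_percent(before, after):.1f}%")
```

A sustained iowait in the double digits during gameplay would be a strong hint that storage, not the GPU or CPU, is what is lagging behind.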

I’ve had an M.2 1TB Samsung 960 Evo as the primary drive in my daily driver since early 2019. Ridiculously fast. Never going back to SATA for anything except backups. There’s not much that regular folks do that can even challenge a good M.2 SSD. Overkill, but in a nice way. :wink:

1 Like

Yeah, that’s what bugs me: an 1800X should be more than enough for Civ6, and during the benchmark my CPU usage doesn’t even go above 50%. No other game has this issue. I just found this video, however, and it may be the Linux port that’s causing the weird performance. I’ll try running the Windows version through Wine just in case.

That’s exactly what i was looking for, thanks a lot!

1 Like

Just because your CPU never goes to 100%, doesn’t mean the CPU can’t be a factor.

If a game can only use, say, 4 threads, and you have 8, then it may peg those 4 threads at 100% and be hitting a clock speed wall. The total overall consumption may be 50%, but it’s 100% of half your cores with no way to make use of more… for example.
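
A quick illustration of that averaging effect. The per-core numbers here are hypothetical; on Linux, `mpstat -P ALL` or pressing `1` in top shows the real per-core view that reveals it:

```python
# Why aggregate CPU% can hide a bottleneck: a 4-thread game pegging
# 4 of 8 cores shows up as "50% CPU" in a simple overall reading.

per_core = [100, 100, 100, 100, 0, 0, 0, 0]  # hypothetical per-core usage

overall = sum(per_core) / len(per_core)
pegged = sum(1 for c in per_core if c >= 95)
print(f"overall: {overall:.0f}%  cores pegged: {pegged}/{len(per_core)}")
```

So when checking whether the CPU is the limit, look at the busiest core, not the average across all of them.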

There’s heaps of (mostly older, but still some current) software like this out there, but as time moves forward, the trend is toward higher core count processors.

So, looking to the future, more cores is probably better to aim at than higher clocks with fewer cores. Just be aware that older less-core-aware software will suffer.

Again, it comes down to WHAT SOFTWARE you want to optimise for. My argument would be that older software runs “fast enough” for the most part on a current generation “potato” these days, so build for the future more so than the past.

But if older, or harder to make multi-threaded software is more important for you, you need to build for that.

Be aware that there is no machine that is best at everything. Know what your particular workload(s) are and make compromises for that.

e.g., I’m a gamer, but I do a lot of virtualisation, and don’t want to spend enough for a Threadripper box (I have other hobbies/vices to spend my money on).

Two years ago, for me, that meant the Ryzen 2700X. Yes, it didn’t run games as well as, say, an 8700K (in 1080p, with a 1080 Ti, blah blah), but I wanted more cores (for lots of VMs), and I wanted the AM4 platform for ease of future upgrades. It was “close enough” for me for gaming, but had benefits everywhere else that mattered (for me).

Plus, I (correctly) surmised that core count would be more important moving forward than clock speed, as clocks have been stuck around 4-5 GHz for the best part of a decade now. They simply aren’t getting much faster, while core counts are growing quickly, so new software will be heavily biased towards taking advantage of core count for better performance.

I took a small hit to single threaded/game performance to get that.

3 Likes

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.