DX12, Vulkan, Performance, and the Thermal Nightmare

I would really enjoy hearing Wendell and Logan discuss this on an episode of The Tek, and if anyone else has any opinions on this subject feel free to throw some words.....

So I have been loosely following the claims being made by Khronos and Microsoft respectively.

I have many times throughout the years donated CPU/GPU cycles to the Berkeley Open Infrastructure for Network Computing (BOINC), and one of the pitfalls of doing this is that you absolutely have to keep track of your temperatures, because of the nature of how these BOINC applications can literally hammer your CPU or GPU at 100% for hours on end...

It is my opinion that when all of this untapped performance gets unlocked, a large portion of hardware, especially the Xbox One with its standardized cooling solution, will start getting hammered to the max. Won't there be many situations where people who are accustomed to hours-long gaming sessions, but who aren't too knowledgeable about temperatures, run into one of two hypothetical scenarios:

-thermal protections kick in and force the CPU/GPU to throttle back, hampering performance, or, in a nightmare scenario,
-the performance envelope gets pushed so hard that the hardware simply dies a horrible smoky death

??

Did Microsoft have a premonition about the underlying AMD APU being hammered at near 100% usage, and use an over-the-top cooling solution in preparation for squeezing out every last drop of performance with thermal control in mind? If you hammer the hardware so hard that thermal protections kick in, that pretty much defeats the purpose of the gains...

Supposedly DX12 has been in the works for longer than the Xbox One has been out in the wild, so I am wondering if they are already aware that with great usage comes great thermals... yeah, that sounded cheesy as hell, haha.

Why would there be more usage? Isn't the point of the new APIs to reduce overhead, or at least reduce its influence when running the code? So wouldn't usage actually go down, or rather, things get executed quicker, not necessarily with more usage?

Side note: if you're running BOINC, why do you constantly monitor temps, unless your cooling solution is not up to par? I used to fold and do coin mining now, so I know what kind of heat gets created. Just wondering, because after the initial trial runs I don't monitor temps anymore.
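For what it's worth, the kind of temperature babysitting people do alongside BOINC/folding can be scripted. Here's a minimal, hypothetical sketch: it assumes you feed it the plain-number output of something like `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader`, and the 85 C limit is just an illustrative threshold, not a vendor spec.

```python
# Toy temperature watchdog for long compute runs (BOINC, folding, mining).
# The input format and the 85 C limit are assumptions for illustration.

def parse_temp(sensor_output: str) -> int:
    """Parse a temperature reading (degrees C) from plain query output like '78\n'."""
    return int(sensor_output.strip())

def should_throttle(temp_c: int, limit_c: int = 85) -> bool:
    """Return True when compute work should back off to protect the hardware."""
    return temp_c >= limit_c

# A reading of 78 C is fine; 91 C would trip the watchdog and pause work.
```

In practice you'd poll this in a loop every few seconds and suspend the compute client when it trips, instead of eyeballing a monitoring tool for hours.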

Why wouldn't there be more usage? Please understand that I am here with many questions and zero answers....

Well, I pretty much am as well. From what little I have read, the whole point of DX12 and Vulkan is to reduce the overhead that comes with drivers and get execution to the speed/efficiency of a console.

Also, it will provide better multi-core management: instead of one core doing a lot of work while the others just sit there, it will shift some of the work to the other cores. Usage would be the same, just distributed better so it can run faster. For example:

You have a barrel of water,
and one person holding a hose letting water out. While this works, and the barrel will empty,
say we add another person who also has a hose letting water out; now it will empty in half the time.

So it effectively lowers the time it takes to talk to the hardware and execute the code. (This is just a guess.)
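The barrel-and-hoses analogy can be sketched in code: the same total work is split across more workers, so total "usage" (litres drained) is unchanged and only wall-clock time shrinks. This is a toy illustration, not how a graphics driver actually schedules work; the names are made up.

```python
# Toy version of the barrel analogy: same total work, more workers draining it.
from concurrent.futures import ThreadPoolExecutor

def drain(litres: int) -> int:
    """Pretend to drain some litres of water; returns the amount drained."""
    return litres

def empty_barrel(total_litres: int, hoses: int) -> int:
    """Split the barrel evenly across `hoses` workers and drain in parallel."""
    per_hose = total_litres // hoses
    with ThreadPoolExecutor(max_workers=hoses) as pool:
        return sum(pool.map(drain, [per_hose] * hoses))

# One hose or two, the same 100 litres of work gets done in total.
```

The point the analogy is making: adding hoses doesn't create more water to drain, it just empties the barrel sooner.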

this article has some info that may help

In some games (The Witcher 3 being the latest example) the GPU is taxed to the max almost all the time, which makes things run hotter. It only drops below 100% GPU usage in the inventory menu and the map, so some people have to manually set their GPU fans to reduce the heat.

OP thinks that cards might overheat if they're suddenly working at full capacity due to the increased efficiency of DX12 and Vulkan.

I don't think that's going to be a problem. Surely that's something that software engineers and GPU manufacturers would factor in and find solutions to prevent that from happening.

As far as I know, a GPU should be near 100% usage, or rather utilization (are these different?), unlike a CPU. For instance, in DayZ (yes, it's in alpha, not done, blah blah), when GPU usage is 99% the game runs perfectly; when usage drops down, the game is shit and unplayable. Thoughts?

Usually a game uses 100% in very graphically intensive situations and drops down when the situation doesn't require full usage. In The Witcher 3 it's a constant 100% unless you're in a menu somewhere. Maybe it's different if you cap the framerate to 30, I need to check. But there isn't a place in the entire playable world where the usage drops below 100%, as far as I'm aware. I guess OP thinks that's what Vulkan and DX12 would make GPUs do, among other things. Which would lead to more heat. Which would be the opposite of efficient, as far as I'm aware.

Hmm, I never have heat issues with my GPU, even when playing BF4 for hours at 99% usage (never seen it hit 100%, even when coin mining), and I don't change the fan manually. It is odd that others would have to, unless they are just worried about the heat.

but we are off topic now

I am in no way right, I am probably wrong, as I am just going by what I read on the www.....

PC hardware using DX11, or DX11.x in the case of an Xbox One, was designed with an operating thermal envelope in mind. It is very rare that a CPU/GPU gets utilized to its fullest potential (like you see when mining or running BOINC).

DX12/Vulkan, however it ends up being spelled, purportedly go "closer to the metal" with a low-level API, which will utilize the hardware more efficiently, which, if I understand correctly, will allow MORE of the hardware to be pushed harder.

High-level APIs have not pushed the hardware to its maximum performance due to the very nature of their design (again, I don't know this to be fact; I am speculating based on what I have read).

It is my opinion that when it comes time to tap this untapped hardware potential to its fullest extent, what you or I perceive to be adequate cooling solutions may fall short. The hardware will be used more efficiently (instead of just 2 or 4 cores being used at capacity, all cores could be used at capacity), and the more the hardware gets used, the more heat is generated. I am talking in general, and there is no reason for anyone to flex their liquid cooling solution in my/your face. That's great, but I am not concerned about "power users".

The Xbox One, and any other APU solutions in the wild right now, will blatantly see more heat generated when the hardware begins to be used more efficiently.

^ That post is a chaotic mess of inner thoughts slapped onto my screen via keyboard. Yes, it's a mess; thanks for understanding and forgiving me for it.

From how I understand it, currently the hardware has to run a lot of overhead, which takes raw power away from your hardware that could be used to run your games. So with the lower-level APIs, devs can bypass this and use less to get more. This won't help older games unless they are updated to take advantage of DX12.

Your hardware should behave the same as it does now, except for performance in games that take advantage of the new APIs. If they are optimised right, they should use less of your GPU power.

So, when they say "more efficiently" this means....

all of this is hypothetical

Game X using DX11 maxes CPU usage at 50% for Y performance.
Game X gets a DX12 patch, which allows CPU usage to drop under 50% for the same Y performance.

^ Is that what is meant by "more efficiently"? My concern comes into play with this:

Game X using DX11 maxes CPU usage at 50% for Y performance.
Game X using DX12 unlocks full usage of the CPU, pushing usage above the hypothetical 50% and resulting in above-Y performance.

It is my understanding that these two new, admittedly not-even-released APIs, DX12/Vulkan, will allow game developers to utilize every last drop of available compute resources; in a perfect-world example, 100% CPU and GPU usage.

Anyone who isn't using an aftermarket cooling solution, or who is locked into a cooling solution, would be dealing with more usage from the components, which will in turn generate more heat than was anticipated in the pre-DX12/Vulkan world.

Maybe this is my entire problem for understanding the situation....

When they say "more efficiently", they mean that more resources will be available for use. Performance gains will come from tapping more of the available cores on a CPU, and tapping those cores to their fullest potential (above whatever was standard with DX11) will generate more heat, because more of them are exposed to the developers.

If DX11 was a pizza, and we were only allowed to have half of the pizza at any given moment, and our bellies were full with just that half, will there be room for the entire pizza in our bellies once DX12/Vulkan arrives?

Pizza divides really nicely, so using it as an example for hardware being utilized and bellies being the example of thermal capacity seems like a great way for me to understand this. I really like pizza btw. If you can craft an analogy for me that uses pizza I think it would make more sense.

I only have two hands to hold two slices of pizza at any given moment. If I am understanding correctly, the new APIs are going to give me extra hands for consuming more pizza, but my belly will remain the same (the thermal part of the equation).

My head might explode arrggg

You are looking at this the wrong way. Currently you have high-level APIs, which means a lot of things have to run on the hardware that don't actually need to be using it while a game is being played. The new lower-level APIs allow the devs to bypass this and use more of the hardware for the game, freeing up more power to be thrown at games that are made for DX12.

A game would have to be remade for DX12 to take advantage of this extra horsepower that is now available. It was always there; it just couldn't be used before. It's basically taking away restrictions that were in place.

For your analogy: you have a plate in each hand, and currently each plate has one slice of pizza on it. However, each plate is actually big enough for two slices if you move the first one over slightly. The new API allows you to do this, and now you have two slices of pizza on each plate.

And there won't be an increase in thermal output due to these extra slices?

Why would there be?? The hardware is not being pushed any harder than it was before; it is just able to allocate its resources better.

So CPU/GPU utilization won't increase; it will remain the same, just with more performance.

proverbially "More Juice for the same amount of Squeeze"

This is maybe the best way to put it: you load A+B onto the hardware and deliver them. A gets off, but B stays and comes back with you. You then load C, and B stays where it is. You deliver C and repeat.

Here, say the CPU/GPU is at 100% output; B is holding up how much you can do, though.

Now imagine you have A+B, you deliver A+B, done. Now you can load C/D, and you are done. Then you can load E/F/G/H, as they are the same load as A/B or C/D.

Here again the CPU/GPU is at 100%, but you are getting more work done because of how you use the hardware.
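The delivery analogy above is essentially about fixed per-trip overhead: a higher-level API pays a round trip per command, while a lower-level API lets you batch commands so the same work costs fewer trips. Here's a toy sketch of that idea; the numbers and names are made up purely for illustration, not taken from either API.

```python
# Toy model of per-submission overhead: delivering N commands in batches.

def round_trips(commands: int, batch_size: int) -> int:
    """Trips needed to deliver `commands` when each trip carries `batch_size`."""
    return -(-commands // batch_size)  # ceiling division

# Eight commands delivered one at a time cost 8 trips (the A-then-B model);
# batched four at a time, the same eight commands cost only 2 trips.
```

Same total work delivered, fewer trips spent on overhead; that's where the "more performance at the same utilization" claim comes from.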

Thanks! That makes a bit more sense.