TDP/Heat Created/Electricity Consumed by the CPU

Continuing the discussion from Best CPU with the Best TDP for 24/7 server:

To keep the mods happy as we are going off topic again,

I was ONLY talking about the CPU. Yes, electricity is also converted into heat by your motherboard, but that is a separate number. My point is that 99.9% of the electrical power that enters the CPU via its power-supply pins is converted into heat and leaves the silicon via the thermal interface material / heat spreader / heatsink. Yes, some of the heat will leave the CPU silicon via the socket connection and go into the motherboard ... this heat is still counted when you calculate the TDP.

The only measured figures for a system that you'll see in a review is the overall system power consumption. This is because they cannot physically place their power meter on the pins to the CPU. Of course the overall system power consumption will be greater than the power that the individual CPU consumes.

As an aside, the power consumed by the CPU (electrical) should not exceed the "Peak" TDP. This is not the figure that Intel prints on the box (84W) but something like 1.5*TDP for milliseconds, 1.25*TDP for 10 seconds, etc. Again, there will be huge power draw during transients ... but this lasts a tiny amount of time, and burst usage will be supplied by the decoupling capacitors on the motherboard. We only need to know the average power.

Also Heat (Power) != Temperature.

Computers are effectively space heaters ... 99.9% of the power in the wall socket leaves as heat (save maybe your LEDs).

It is my opinion that system builders can continue to use the TDP figure as a good guide to how much power they need to make available for their CPU. Intel's turbo-boost nonsense just means that TDP must be multiplied by 1.25 before you can use that figure, and you'll still need to estimate the power for other components (Motherboard, GPU, RAM, HDD, etc), and multiply that final figure by a 1.25 again before you pick your PSU.
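The PSU-sizing rule of thumb above can be sketched in a few lines. This is a minimal sketch, assuming the 1.25x turbo margin and 1.25x overall headroom described in the post; the 200 W figure for the other components is a made-up example, not a measurement:

```python
# Hypothetical PSU-sizing helper following the rule of thumb above.
# The factors and the 200 W "other components" figure are assumptions.

def psu_estimate(cpu_tdp_w, other_components_w,
                 turbo_factor=1.25, headroom_factor=1.25):
    """Scale CPU TDP for turbo, add the rest, then add PSU headroom."""
    cpu_budget_w = cpu_tdp_w * turbo_factor
    return (cpu_budget_w + other_components_w) * headroom_factor

# 84 W CPU plus an assumed 200 W for motherboard, GPU, RAM, HDD:
print(psu_estimate(84, 200))  # -> 381.25, so a ~400 W PSU
```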

0.1% efficient CPU design. Seems legit.

7 Likes

Your statement suggests that power can be stored in processed data??

Your HDD is not a battery. It would be awesome if we could run a laptop off the power stored by the data in its hard drive. It would save the need for a battery.

Efficiency with CPUs is measured in Operations (of some sort) per Watt. I don't think that anyone has managed to analyse the energy contained in processed data.

Think about it this way: if you did lock energy inside data when you processed it, then would your CPU draw more power (and get hotter) when doing something pointless? Would performing the reciprocal operation cause your CPU to get colder? It just doesn't make sense.

Say 10% of the CPU's power was "stored" in the data inside a computer ... say a server at ~100% load all the time for 3 years (say it's rendering or something) ... 100 Watts * 10% * 60 (seconds) * 60 (minutes) * 24 (hours) * 365.25 (days) * 3 (years) =~ 947 MegaJoules. One stick of dynamite is approximately 1 MegaJoule, so that computer would have stored roughly 950 sticks' worth somehow.
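The arithmetic above is easy to check; a minimal sketch, assuming a 100 W CPU, the hypothetical 10% "storage" rate, and ~1 MJ per stick of dynamite:

```python
# Sanity-check of the stored-energy thought experiment above.
# 100 W CPU, 3 years at full load, hypothetical 10% of input "stored".
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25
stored_power_w = 100 * 0.10                  # 10 W supposedly locked into data
stored_energy_j = stored_power_w * SECONDS_PER_YEAR * 3
dynamite_stick_j = 1e6                       # ~1 MJ per stick
print(stored_energy_j / dynamite_stick_j)    # -> ~947 sticks of dynamite
```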

Maybe the NSA is stealing our electrical power (work) with their wire-tapping? That explains everything

Also, we are researching how to store data in the spin of electrons. If this works, we would be storing data in something many, many times (10,000 or more) smaller than current multi-atom storage systems like hard disks. So a computer emitting heat for 99.9% of its input power is perfectly reasonable.

Please don't get confused with the last thread. The last thread mentioned that the power into the CPU will exceed the TDP value either for short amounts of time, or when the manufacturer misrepresents the TDP. I am saying that the electrical power in is approximately equal to the thermal power out of the CPU.

That one sentence argument. Seems like you got your feathers ruffled mate, go have a coffee

Wot m8

Throwing numbers doesn't make your argument conform with the laws of thermodynamics.

I don't know what all this other anecdotal fluff around the topic is (NSA? Storing power in a magnetic interface? I'll leave that aside), but here are the facts.

You have an electron, you feed it through a wire, into the board, and directly up into the CPU. Now your electron goes through a predetermined pipeline and is pushed through the base of a transistor (which at this point is reading value x). The electron charges the transistor, flipping it to a new phase (we'll call this value y). In this process there is leakage: energy that is spent in the charge and is converted from electric potential energy into the thermoelectric effect, as well as mass as the transistor changes phase. This follows the first law of thermodynamics. The amount lost in this instance, which mind you is a best-case scenario, would be 1%.

Your model of 99.9% loss to the thermoelectric effect is completely and totally the worst case of efficiency. I didn't even have to explain all of the above, as you can simply look at the workings of a standard BJT transistor (which has the peak efficiency of 99% as I modeled) or go look up Intel's own research on the Tri-Gate transistor to see their efficiency ratings are around 83%.

Ok, so you are saying that 1% of the power into the chip is lost as heat?

But the TDP (which we all agree refers to thermal power, or heat, as it is Thermal Design Power) is still ~84 Watts. So if only 1% of the input leaves as heat, and this 1% is 84W on a Haswell, then I would be running an 8400W (8.4 kW) power supply JUST to run my CPU.

Take your more conservative estimate of 83%: then 17% = 84W, so I'd still need a ~500W power supply just for the CPU. I run a 95W CPU (at full load sometimes) off a 450W PSU. So that doesn't add up.
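The implied power-supply sizes in the two posts above fall straight out of dividing the TDP by the claimed heat fraction; a quick check with the thread's own numbers:

```python
# If the 84 W TDP were only X% of the electrical input, the total
# input power would have to be TDP / X.  Numbers from the posts above.
tdp_w = 84.0
input_if_1pct_heat_w = tdp_w / 0.01    # 8400 W PSU just for the CPU
input_if_17pct_heat_w = tdp_w / 0.17   # ~494 W, i.e. the "500 W" figure
```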

Talking about individual transistors is NOT relevant. Neither is talking about efficiency in electrical terms. "Efficiency" for a transistor might refer to how much current it can switch at a certain base current or gate charge. That efficiency doesn't even relate to power: if I double the voltage across my load, I can usually still turn on my BJT with the same base current, hence pulling the same collector current, but at twice the voltage I've switched twice the power. Yet that 99% figure for the transistor would not change, because it does not relate to power.

Everyone on here seems to think that efficiency of a computer has to relate to heat. It Doesn't!!! Efficiency when we are processing measures Operations Per Watt.

Or how much Math you can do per watt of power you consume. You'll never see a manufacturer advertise that their product is "90%" efficient because it is not a useful measurement. Intel was advertising their Transistors, NOT their CPUs when they claimed 83% efficiency.

Look at the NVidia adverts: http://www.nvidia.com/object/nvidia-kepler.html

All measurements were based on the previous model, as in the improvement over last year.

I simplified it to a single transistor because you can scale the model. Take our 83% rating and scale it up to 4.8 billion transistors. Now we can say we have a CPU that takes in 1.205V (which comes from dividing the transistor count by the proposed efficiency rating). Now we'll run that 1.205V with the normal operating amperage of a 450W PSU, which would be 37A. This gives our CPU a total power income of 44.6W. Now we apply that 17% loss of electric potential to the model. If that percentage is lost directly to the thermoelectric effect, we would see a TDP of 7.6W, i.e. a heatsink applied to our model would need to conduct and convect 7.6W of thermal energy wasted by the CPU. We could also factor in resistance, leakage across silicon, and frequency, which would raise the TDP, as these all contribute to thermal dissipation by the CPU.

Arguing that TDP (i.e. the amount of energy lost in the conversion of electric potential into thermoelectric effect and the final outcome of work) has nothing to do with thermal characteristics is completely asinine. You argue that a thermal output rating does not correlate with the laws of thermodynamics.

Everyone on here understands that efficiency is dependent on energy to produce work, rather than be converted into thermoelectric effect. If your CPU is producing heat at 99.9% of its wattage intake, you are not converting energy into work, you are converting it into a different energy that is discarded!

That 0.1% of energy cannot have enough electric potential to manipulate the phases of the transistors, meaning your CPU now has ZERO operations per watt.

THAT IS EFFICIENCY.

I posted this in the other thread, not realizing that the conversation had been moved here, so allow me to post it here as well.
"
I talked to a physics professor here at my university about
the efficiency of computer chips and how much of the electricity that
they use is turned into heat. He was absolutely no use whatsoever. What
he said was basically, these things are very controlled and the people
who design them know exactly how much heat they put off. When I tried to
explain TDP and how generically it was assigned to chips, and it's used
in the real world with regard to heatsinks as well as that I was
looking more for how much overall heat was actually being put out into
the environment, he didn't really didn't have much more to say. I
honestly don't think that he has a good enough understanding of computer
hardware to be able to answer my question. I am guessing at this point
that the closest thing that we can get right now is that the we can use
the chip's TDP and the average electricity use of the chip while at
stock and maxed out in order to find the overall efficiency. I say that
because the closest thing that I can come up with about all of this is
that the TDP is the average heat output (hence its use with heatsinks)
while maxed at stock. Or maybe it is the heatoutput while under a
"typical" workload, whatever that means. IT could also be the maximum
heat output while at stock (as opposed to the average).

The thought that these things are just really well regulated and
understood by manufacturers (which was his point) seems completely wrong
considering the performance issues related to thermal throttling of the
core m processors which leads me to believe that either TDP isn't a
good real world number to use, or that they are much dumber than we are
giving them credit for (or they are playing us for fools by hoping that
no one will notice the thermal throttling problem and will just buy
things because "it has the best of the best")."

So, going off of what I was saying in that comment, and applying numbers as you guys have been keen to do here, we can find the approximate efficiency. Just taking a random, popular processor, let's look at the 4790K. It has a rated TDP of 88W. If we assume that that is the average amount of heat output while maxed out at stock, then we are looking at 88W of heat while using 109W of power (from Guru3D). That results in an efficiency of 19% (that is to say, roughly 81% of the electricity used by the CPU is turned into heat). You might be able to find better power-use numbers than that, as I wasn't too satisfied with their methodology (it didn't necessarily isolate the CPU), but from what I am guessing, more accurate numbers would show a lower power consumption (when isolating the CPU) and would then decrease the efficiency even more.
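The 4790K arithmetic above, written out; the 88 W and 109 W figures are the post's own assumptions (TDP taken as average heat output, Guru3D's quoted draw):

```python
# Efficiency estimate from the post above: heat out vs. power in.
tdp_heat_w = 88.0          # assumed average heat output at stock, maxed out
measured_draw_w = 109.0    # power figure quoted from Guru3D
heat_fraction = tdp_heat_w / measured_draw_w   # ~0.81 turned into heat
efficiency = 1.0 - heat_fraction               # ~0.19, i.e. ~19%
```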

1 Like

This is perfectly acceptable. I gave a model of 83% efficiency on a transistor, scaled as if each transistor held that efficiency. The model that 19% of electric potential is being collected as a work output, and 81% is lost in energy conversion is very much a real world example of scaled architecture. I assume that the CPU could potentially be more efficient, but resistance and capacitance are not factored in here. The CPU could very well be operating at much higher efficiency while we can only observe 19% at the other end.

Good input.

^this

TDP is not at all a good metric for power draw. The purpose of TDP is to create a dissipation metric for heatsink performance. By definition TDP is the THEORETICAL OVERESTIMATED MAXIMUM amount of heat (in watts) that a CPU could dissipate under NORMAL operating conditions (which prime95 does not fall under). Nothing about the TDP will tell you about the power draw because they aren't related. TDP also means that at ANY power draw the CPU will need at least the rated TDP worth of thermal dissipation to operate properly without undergoing thermal runaway.
So that means that a CPU with a TDP of 100W running with a power draw of 60W will need a heatsink with a minimum dissipation of 100W. If the same CPU was running at a power draw of 45W, it would still require a heatsink with a minimum dissipation of 100W.

All of this is leaving me with the idea that the only way to get a solid answer to the whole heat-output problem is to get some real-world numbers. The problem there is figuring out a way to actually test this reliably, in a manner that actually means anything.

The best that I can come up with is a water-cooling rig: pump fresh water in constantly and collect the heated water throughout the test. Imagine it this way: the reservoir is massive (think a giant bucket), and instead of going to a radiator and back into a loop, the heated water is collected in another giant bucket. Then we find the difference in temperature between the water in and the water out (measure the temperature of the two buckets) and find the volume (and thereby the mass) of the water heated, in order to find the total heat put off by the CPU (or GPU; if you have the water block, this could work on anything). The calculation would be a bit of a hassle, but I am sure it wouldn't be too hard. It would be something like Joules = mass x specific heat x delta T (for all of the water that was heated), then divide the Joules by the time the test took to complete.

Obviously, you would need to start collecting water AFTER the CPU was maxed out and had reached its max temperature, or else risk including the error of heating other components during the test. Likewise, you would want to minimize the heat the water loses between being heated and being measured, so collect it in an insulating container (a Styrofoam cooler with one hole for water input and another for air output, and the rest sealed), and then assume (wrongly, but whatever) that no heat is lost to the environment.
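The calorimetry step above ("Joules = mass x specific heat x delta T, then divide by time") can be sketched like this; the mass, temperature rise, and duration are made-up example values:

```python
# Flow-calorimetry sketch for the water-bucket experiment above.
# Ignores heat lost between the water block and the collection bucket.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def cpu_heat_watts(mass_kg, delta_t_k, duration_s):
    """Average heat power = m * c * dT / t."""
    joules = mass_kg * SPECIFIC_HEAT_WATER * delta_t_k
    return joules / duration_s

# Example: 2 kg of water warmed by 1 K while collected over 60 s.
print(cpu_heat_watts(2.0, 1.0, 60.0))  # -> ~139.5 W
```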

I hope that Wendell or someone on the Tek teams finds this an interesting enough topic to want to cover. I would love to see this finally tested one day. I don't have the water cooling equipment necessary to actually test this.

Not many people would have the necessary equipment. This is almost a challenge for the Mythbusters!!

Simple calorimetry equipment is not very accurate: errors can easily reach 50%.

If we are all able to accept that the power through a simple resistor is Voltage * Current, then we could put a resistor inside a CPU heat spreader and generate different, known thermal powers to compare against the CPU. Then we could tell if the CPU produced more or less thermal power than the resistor. Even then, the results could include errors which cloud the verdict.
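The reference-resistor idea above in numbers; the 12 V and 7 A values are hypothetical, chosen to match the 84 W TDP discussed earlier:

```python
# A resistor dissipates a known thermal power P = V * I, so it can act
# as a calibration heat source inside the heat spreader.
voltage_v = 12.0
current_a = 7.0
resistor_power_w = voltage_v * current_a   # 84 W of known heat
resistance_ohm = voltage_v / current_a     # ~1.7 ohm part required
```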

^^ If your CPU only has a power draw of 45W, then you only need a 45W heatsink to dissipate that power. You'd never do this, as sooner or later your CPU's workload would change, and you'd then be drawing 100W and dissipating 100W.

Due to conservation of energy, your CPU is NEVER, EVER going to produce more heat energy than the electrical input.

Regarding TDP ... again the acronym is Thermal Design Power. Reading this figure has turned into a bit of a science in itself. http://en.wikipedia.org/wiki/Thermal_design_power

These days, the Peak TDP of a part can be 1.5 * the advertised TDP. This only lasts a short amount of time though (10 ms on an Intel Haswell); over a longer period the input power averages out to a lower figure. So the figure you should use for the peak TDP is 1.5 * 84W.

TDP is not in any way related to power consumption; it's an ass-backwards way of saying you need a heatsink with x dissipation coefficient to operate this CPU.

Well, Guru3D was measuring the ENTIRE computer system's power consumption, not just the CPU's.

Even then, the TDP (and therefore the input power) can exceed the advertised value by 1.25 times for extended periods, making the ACTUAL TDP 84W * 1.25 = 105W ... which is what Guru3D observed.

So Guru3D really established that our processor is consuming much less than 109W of power: (load minus idle) = 104W - 35W = 69W.

So recalculating efficiency: 84W thermal (which won't be the case, but it was your argument) out of 69W electrical in (the actual CPU draw once the other PC components are removed) gives an efficiency of (69W - 84W) / 69W =~ -22%. A negative efficiency ... that doesn't make sense.
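Spelled out, the contradiction above looks like this, using the thread's own figures (84 W of heat claimed, 69 W of isolated CPU draw):

```python
# If the CPU really emitted 84 W of heat while drawing only 69 W,
# "efficiency" would come out negative, violating conservation of energy.
heat_out_w = 84.0    # the TDP-as-heat claim
power_in_w = 69.0    # load minus idle, per the Guru3D numbers above
efficiency = (power_in_w - heat_out_w) / power_in_w   # ~-0.22
```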

Power does not necessarily mean electrical power; it could be thermal.

TDP refers to thermal, not electrical, power: the heat that needs to be dissipated via a heatsink to ambient. Jesus Christ, why is that so hard to understand? The thermal power/losses are caused by the resistance of the circuit.

Go take a physics class before you spout such misinformation.

Here is a great intro to thermodynamics:

1 Like

TDP is the heat energy output by a processor under a "normal" load. It is used to determine what an adequate cooling solution would be for that processor, i.e. a 150W-TDP processor should have a cooler that can dissipate 150W of heat, and that will be enough for "normal" loads. That's how TDP is defined.

BUT, TDP is also an indicator of approximately how much power a CPU will pull under that same "normal" load.
If 50 watts of power enters the CPU, 50 watts must exit the CPU (anything else would imply that we're constantly building up a huge amount of energy in the CPU, which obviously isn't the case). So if 50 watts of electrical power enters the CPU, where does it all go? Other than the outputs from the CPU to other components, no significant electrical power leaves, and that can reasonably be offset by the power arriving on inputs from those components. This doesn't leave many options for what the remaining energy is converted to. Kinetic energy? Maybe a little bit. Electromagnetic waves? Surely some, but how significant is that, really? Thermal energy is the obvious answer. The other forms of energy output obviously exist, but compared to the thermal output they're insignificant, which means the thermal output is approximately equal to the electrical input. Unless someone wants to argue that energy is not in fact conserved.
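The energy-balance argument above as arithmetic; the non-thermal figures are illustrative guesses, not measurements:

```python
# Conservation of energy: whatever enters must leave in some form.
power_in_w = 50.0
em_radiation_w = 0.01      # assumed tiny (EM emissions, etc.)
net_signal_out_w = 0.0     # I/O roughly offset by inputs, per the post
thermal_out_w = power_in_w - em_radiation_w - net_signal_out_w
# thermal_out_w is ~49.99 W: essentially all input becomes heat.
```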

1 Like

You have no idea what you are talking about.

Why do you think that work is required to do a mathematical operation?? You cannot turn electricity into math ... it just doesn't work that way.

For reference: http://en.wikipedia.org/wiki/Reversible_computing

On a simple scale, I can take a logic signal, pass it through two NOT gates (inverters), and get exactly the same signal out of the output ... but I've used power to do this. I have generated a circuit which, by your definition, is 100% inefficient ... many algorithms can be reversed in a similar way, so the act of processing the data MUST NOT change the energy in the data.

Sure, your transistor might be 83% efficient, but we have thousands of transistors connected in series, each only 83% efficient on its input ... so after 2 cascaded transistors we have 69% efficiency ... and after a thousand we have 1.19x10^-79 % efficiency ... OH WAIT, that's many, many times less than 0.1% efficiency.
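The cascade arithmetic above checks out; a quick verification, taking the 83% per-stage figure at face value:

```python
# If every cascaded stage really lost 17% of its input power,
# efficiency would collapse exponentially with depth.
per_stage = 0.83
after_two = per_stage ** 2        # ~0.69 (69%)
after_thousand = per_stage ** 1000
print(after_thousand)             # ~1.2e-81, i.e. ~1.2e-79 percent
```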

Now, with digital circuits, our transistors are configured in such a way that zero current is required to drive the input of a gate. So if an output of a transistor never has to drive any current, then efficiency (power out / Power in) is always (0/0) which is indeterminate. Which proves my point that EFFICIENCY IS NOT RELEVANT.

Repeating again, Efficiency quoted on a product is always given in number of operations the CPU can perform per watt, or as a percentage increase over the previous generation of products. It will NEVER be given as an absolute value as in (My processor converts 10% of its electrical power into mathematics) as mathematics/processing is not a structure which can contain physical power or work.

I asked on the EEVBlog forums, and at least those people seem to agree with me. http://www.eevblog.com/forum/chat/moore's-law-is-50-years-old/15/

In that case, I don't see why I should agree with you. Perhaps you would prefer it if I didn't continue arguing? (I'm not intending to be condescending; I just want to know whether you'd like me to stop arguing or whether we need to investigate this further.)

1 Like