Best CPU with the Best TDP for 24/7 server

"Total energy in = Total electrical energy out + Waste Heat...."
Yeah, but where is the energy out going?
The electrical energy leaving a CPU is so small you might as well ignore it; it's just the interface through which it talks to other components.

Power in is always going to be slightly higher than the amount of heat the heatsink needs to dissipate, yes: some is used by the interface, some is given off as EMI, and some is dissipated through the socket and motherboard.

I don't want temperature sensors. I know how to read degrees. What I want to know is the actual amount of heat output by the chip which is then dumped into the environment. Heat transfer is measured in watts. Temperature sensors tell me degrees. Not terribly useful in answering my question.

I think that you are misunderstanding MisteryAngel here. She is saying that the 5960x uses more electricity than its TDP, which is what you are saying as well, from what I can tell.

Agreed, but how much electrical energy does a CPU put out relative to the energy it takes in? The main energy output of a silicon device is thermal. At sustained full duty cycle, the TDP sits high in the spec, as with Xeon processors; non-Xeon parts reduce the duty cycle enormously to meet the TDP. So across the operational envelope, it's pretty safe to say that for the i7-5960X, energy usage and TDP will be roughly equal, don't you think? The TDP will not be significantly lower on any modern CPU, because the electrical energy and non-thermal emissions combined are very low compared with the total electrical energy input. Silicon is a semiconductor and not very efficient... sadly, there is no marketable alternative at this point in time.

I do agree that a modern CPU can draw considerably more electrical energy than its rated TDP, but only in bursts, and the duty cycle has to be reduced radically to compensate. A Xeon part is made to run at full duty cycle for sustained periods, basically 100% of the time, which is why it has a reduced duty cycle but essentially the same litho as an Extreme Edition part. Both will hit the same average TDP target and consume the same average electrical energy, and those values will again be almost equal. The Xeon will only permit very short bursts but will hardly throttle; the Extreme Edition will maximize the burst-and-throttle action. The application is different, which is exactly why an Extreme Edition or consumer/gaming SKU is not a good choice for a 24/7 server.

The TDP sets the operational envelope. A Xeon part will fill it with a very small standard deviation; an Extreme Edition (basically the same as a Xeon part, but with an operational program aimed at burst performance instead of constant performance) or a lesser-quality litho part (i.e. a consumer/gamer part, basically the same as the corresponding Xeon but with reduced strain tolerance) will fill that TDP with a very large standard deviation. Over time, though, even if they differ in peak power consumption, their root-mean-square power consumption will not differ significantly, and the average electrical power draw will be almost exactly the TDP.

When energy consumption is measured, the question is what exactly is being measured. Where are the measuring points on a CPU? That is part of the problem. Another part is that CPUs don't all consume electrical energy the same way: some have more functionality than others, and some have power regulators on the die while others have them on the back of the package.
So a power-consumption specification isn't really useful anyway. It's better to go with the TDP as a guideline, since it determines the thermal envelope and the maximum power envelope at the same time: the silicon turns electrical power into heat, and it can't produce more heat than the power put in, unless it somehow went exothermic.
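The burst-versus-sustained argument above can be sketched with a toy model. All the wattage numbers below are made up for illustration (they are not from any datasheet); the point is only that a steady profile and a burst-and-throttle profile can land on the same average:

```python
# Toy duty-cycle model: a steady "Xeon-style" power profile and a
# bursty "Extreme-Edition-style" profile with the same average.
# All wattages are illustrative, not measured values.

def average_power(samples):
    """Mean power over a series of instantaneous samples (watts)."""
    return sum(samples) / len(samples)

# Steady profile: sits near the 140 W envelope the whole time.
xeon = [138, 140, 139, 141, 140, 140, 139, 141, 140, 142]

# Bursty profile: spikes above the envelope, then throttles back.
burst = [180, 180, 100, 100, 180, 180, 100, 100, 180, 100]

print(average_power(xeon))   # 140.0 -> small standard deviation
print(average_power(burst))  # 140.0 -> large deviation, same average
```

Both profiles average to the same envelope, which is the "very small versus very large standard deviation" distinction the post is making.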


Again TDP is just heat output.

If the provided video is not good enough, then I give up lol.

Okay, guys, the TOTAL energy output of the CPU is equal to the input. However, not all of that energy is included in the TDP, because not all of it becomes thermal energy; a significant portion is still electricity.

@thelonewanderer To complete a circuit, the energy out goes via the other side. Input, output, on a circuit, for crying out loud... Anyway, @Zoltan, true, but I am merely correcting the misconception. No, she did not say it uses more electricity than the TDP in total. It uses more electricity than the TDP, but it puts out 140 W of that electricity as heat. For crying out loud, people... geesh. You're talking to an electrical engineer here, one with experience :P

So let me state this

TOTAL ENERGY IN = Electrical Energy Out + Heat Energy Out... PERIOD. There's no way around it. Now, whether or not the majority of that electricity is dumped as extra heat is a slightly different subject.
@Zoltan Whether or not the majority of that energy goes out as heat (which determines TDP) does not affect the fact below... I just wished to point that out:
TOTAL ENERGY IN = Electrical Energy Out + Heat Energy Out

LET ME DO THIS A THIRD TIME
TOTAL ENERGY IN = Electrical Energy Out (your processed data) + Heat Energy Out (heat; the average or nominal loss is your TDP)
Go do some research, or hell, take a silicon or digital labs class... then come back and talk to me.


Exactly :P


I got your back ;)

I also hope it was educational to some folk :)

lol well I give up on it, I provided an explanatory video.


Sometimes you just have to let things be :D

You realize that's a self-defeating argument, right? If Electrical Energy In = Electrical Energy Out + Heat Energy Out, and the electrical draw of the CPU is Electrical Energy In − Electrical Energy Out, then the electrical draw of the CPU equals the heat energy output. Which is actually an accurate approximation, given that CPUs produce negligible amounts of other forms of energy.
That said, TDP for Intel parts is rated as the required heat dissipation for a "standard" workload, IIRC. Obviously a nonstandard workload can push the heat output and energy draw well above that rating.
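The algebra in that first paragraph can be made concrete with a quick worked example. The 90 W draw and 0.1% signal figure below are illustrative assumptions, not measurements:

```python
# Energy-balance sketch: E_in = E_out_electrical + E_heat, so the
# net electrical draw (E_in - E_out_electrical) equals the heat out.
# The 90 W / 0.1% numbers are illustrative assumptions.

e_in = 90.0            # W drawn by the CPU (assumed)
e_out_signals = 0.09   # W leaving as electrical signals, ~0.1% (assumed)

heat_out = e_in - e_out_signals   # everything else becomes heat
heat_fraction = heat_out / e_in

print(heat_out)        # 89.91 -> heat out ≈ net electrical draw
print(heat_fraction)   # ~0.999 of the input leaves as heat
```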


I'd look at another C2750 board. They're inexpensive (comparatively), support ECC RAM, and they're designed to be extremely low power. They're basically ideal for your application, which I'm assuming is a FreeNAS box running a Plex app, because that's essentially what they were designed for. Even if you're skipping FreeNAS and just making a Linux box with a bunch of hard drives to run Plex, it's still remarkably powerful for a remarkably low power draw. Supermicro makes one with quad Ethernet ports too, I believe, at around the same price as the ASRock or cheaper.
An i7 or i5 T-series part could work, but they'll still use more energy, they don't support ECC memory (which makes good redundancy difficult), and quite frankly their performance is wasted in what is basically a NAS build. Using an ATX board instead of micro-ATX or mini-ITX is also kinda shooting yourself in the foot as far as energy use goes.
All that being said, even running 24/7, neither of those builds should make a very noticeable difference in your power bill, since most time will be spent idling.

+1

Going for a dedicated low power server motherboard is the best option. It has been designed from the start to conserve power.

I would still be more concerned with getting an efficient power supply and HDD/etc.

If you already have the z87 board, then all of the Haswell processors idle at virtually the same power. Take a look at this graph: http://www.anandtech.com/show/8774/intel-haswell-low-power-cpu-review-core-i3-4130t-i5-4570s-and-i7-4790s-tested (second from the bottom). Only when 2 of the cores are disabled do we see a drop in idle power ... by 1 watt. If you want the 4770 (for instance) to use similar power to the S (low power) version, just limit the frequency in BIOS or your OS. Remember to also use DDR3L RAM and such.

As for TDP, the Intel datasheet for Haswell states that by default the TDP can be exceeded by 25% for a period of 10 seconds. System builders can choose to change this (for desktop, that would be the motherboard manufacturer). All this means is that the effective limit is 125% of the labelled TDP, so an 84 W part should read 105 W. Furthermore, even that limit can be exceeded for a period of 10 ms (http://www.mouser.com/pdfdocs/4thgencorefamilydesktopvol1datasheet.pdf, page 80). Fortunately, that is short enough for the extra power to come from the decoupling in the power supplies, so hopefully you wouldn't need to allow power-supply headroom for it.
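The datasheet arithmetic above is trivial but worth writing out. A sketch, assuming only the 125% default multiplier described in the post:

```python
# Short-term power limit per the behaviour described above:
# by default the TDP can be exceeded by 25% for a short period.

def short_term_limit(tdp_watts, multiplier=1.25):
    """Effective short-term power limit as a multiple of labelled TDP."""
    return tdp_watts * multiplier

print(short_term_limit(84))   # 105.0 W for an 84 W desktop part
print(short_term_limit(140))  # 175.0 W for a 140 W part
```

So when sizing a power supply, the 125% figure (not the label on the box) is the sustained worst case to plan around.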

Please note, very little power gets transferred between components like the CPU and RAM. Conservatively, 99.9% of the electricity used in the CPU leaves as heat, so heat out is approximately equal to average power in (instantaneous peaks are "stored" in the CPU die and package). Very little power leaves in the signals to other components (chipset, RAM), because: 1. there are only ~2,000 output transistors (2 per pin) versus ~200 million other transistors inside the CPU, and 2. any power lost through an output pin will likely come back to us on an input pin.
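One way to sanity-check the "signal power is tiny" claim is the standard CMOS dynamic-power formula, P = C·V²·f, applied per output line. Every value below (pin capacitance, voltage swing, toggle rate, activity factor) is an assumed round number for illustration, not taken from any datasheet:

```python
# Rough I/O power estimate via CMOS dynamic power: P = C * V^2 * f
# per line, scaled by how often lines actually toggle.
# All input values are assumed round numbers for illustration.

pins = 2000          # output drivers, ~2 transistors per pin (from the post)
c_load = 5e-12       # 5 pF load per line (assumed)
v_swing = 1.2        # 1.2 V I/O swing (assumed)
f_clock = 100e6      # 100 MHz bus clock (assumed)
activity = 0.05      # fraction of cycles a line toggles (assumed)

p_io = pins * c_load * v_swing**2 * f_clock * activity
cpu_power = 90.0     # W total draw (illustrative)

print(p_io)                    # ~0.07 W of signal power
print(p_io / cpu_power * 100)  # well under 1% of total draw
```

Even with generous assumptions, the signal power comes out to a tiny fraction of the package power, which is consistent with the 99.9%-as-heat figure.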

Also, we don't need to exceed TDP to heat up a chip. Our thermal junctions need a temperature delta to start transferring heat, so the chip simply heats up (gains energy) until heat exits through the heatsink at the same rate that it enters via the power supply.
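That equilibrium can be sketched with a simple lumped thermal-resistance model, T_junction = T_ambient + P × R_θ. The R_θ value below is an assumed example, not a real heatsink spec:

```python
# Lumped thermal model: at steady state the die sits above ambient
# by (power in) * (junction-to-ambient thermal resistance).
# R_theta here is an assumed example value, not a real spec.

def junction_temp(ambient_c, power_w, r_theta_c_per_w):
    """Steady-state junction temperature in degrees C."""
    return ambient_c + power_w * r_theta_c_per_w

# 84 W part, 25 C ambient, assumed 0.5 C/W cooler:
print(junction_temp(25.0, 84.0, 0.5))  # 67.0 C at equilibrium
```

This is why a hotter-running chip isn't necessarily drawing more power: a worse cooler (higher R_θ) raises the equilibrium temperature at the exact same wattage.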

Streaming for home? The neighbourhood? Etc.? Because the more media streams you supply, the more grunt you need.
If it's just for home, even a quad-core FM2 chip (760K) will suffice for a few 1080p streams. They're also pretty good on power consumption. If budget permits, though, an i5 or greater (or a Xeon) would be nice.
Also, for a Plex server that mobo is way overkill.
Haven't watched this vid yet >> https://www.youtube.com/watch?v=quXrAW6gAl8

(grabs bag of popcorn)

So I decided to think about this on a purely scientific level. The amount of energy given off is equivalent to the input; this can be proven by the law of conservation of energy. However, the heat given off would then be less than the total power consumed. Why? Part of the energy is used for communication with other components such as the motherboard, so the chip requires more energy than its heat output alone. That energy is later emitted as heat from other components, not the CPU itself, so the draw is greater than the TDP (the heat output of the silicon).

Yes, but the amount used when communicating (say to the RAM) is tiny compared to the overall power, and we get it right back again when the RAM sends an electrical signal back.

But during transfer, heat is emitted from the motherboard. The heat output of the silicon is always less than the wattage required for an operational system. I would say there could easily be a deviation of 10 to 15 W between the actual TDP and the wattage required.

I talked to a physics professor here at my university about the efficiency of computer chips and how much of the electricity they use is turned into heat. He was absolutely no use whatsoever. What he said was basically: these things are very controlled, and the people who design them know exactly how much heat they put off. When I tried to explain TDP, how generically it is assigned to chips, how it's used in the real world with regard to heatsinks, and that I was looking for how much heat was actually being put out into the environment, he really didn't have much more to say. I honestly don't think he has a good enough understanding of computer hardware to answer my question.

I'm guessing at this point that the closest we can get right now is to use the chip's TDP and its average electricity use while at stock and maxed out in order to find the overall efficiency. I say that because the closest I can come up with is that the TDP is the average heat output (hence its use with heatsinks) while maxed at stock. Or maybe it's the heat output under a "typical" workload, whatever that means. It could also be the maximum heat output at stock (as opposed to the average).

The thought that these things are just really well regulated and understood by the manufacturers (which was his point) seems completely wrong considering the performance issues related to thermal throttling of the Core M processors. That leads me to believe that either TDP isn't a good real-world number to use, or that they are much dumber than we give them credit for (or they are playing us for fools, hoping no one notices the thermal throttling problem and just buys things because "it has the best of the best").