Best CPU with the Best TDP for 24/7 server

+1

Going for a dedicated low power server motherboard is the best option. It has been designed from the start to conserve power.

I would still be more concerned with getting an efficient power supply and HDD/etc.

If you already have the Z87 board, then all of the Haswell processors idle at virtually the same power. Take a look at this graph: http://www.anandtech.com/show/8774/intel-haswell-low-power-cpu-review-core-i3-4130t-i5-4570s-and-i7-4790s-tested (second from the bottom). Only when two of the cores are disabled do we see a drop in idle power ... and only by 1 watt. If you want the 4770 (for instance) to draw similar power to the S (low power) version, just limit the frequency in BIOS or your OS. Remember to also use DDR3L RAM and such.
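If you go the OS route on Linux, a quick sketch like this does the job through the standard cpufreq sysfs interface (the cap value here is just an arbitrary example, not a recommendation for any particular chip):

```python
# Minimal sketch: cap the maximum CPU frequency through Linux's cpufreq
# sysfs interface so a full-fat 4770 behaves more like the "S" part.
# The 2,900,000 kHz figure is only an example value.
import glob

MAX_KHZ = 2_900_000  # example cap in kHz

for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
    with open(path, "w") as f:   # needs root, and a cpufreq-enabled kernel
        f.write(str(MAX_KHZ))
    print(f"capped {path} at {MAX_KHZ} kHz")
```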

As for TDP, the Intel datasheet for Haswell states that by default the TDP can be exceeded by 25% for a period of 10 seconds. System builders can choose to change this (for desktop, that would be the motherboard manufacturer). All this means is that the short-term limit is 125% of the labelled TDP ... so for an 84W part, read 105W. Furthermore, that limit can itself be exceeded for bursts of up to 10ms (page 80 of http://www.mouser.com/pdfdocs/4thgencorefamilydesktopvol1datasheet.pdf). Fortunately, 10ms is short enough for the extra power in that case to come from the decoupling capacitance in the power supplies ... so hopefully you wouldn't need to allow power supply headroom for it.
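Just to make the arithmetic explicit, a tiny sketch using the 1.25x / 10 s figures quoted above (the list of parts is my own example):

```python
# Short-term power limit implied by "TDP can be exceeded by 25% for ~10 s".
PL2_MULTIPLIER = 1.25   # from the datasheet quote above
BURST_SECONDS = 10      # default window before power must fall back to TDP

for label, tdp_w in [("84W part", 84), ("65W 'S' part", 65), ("45W 'T' part", 45)]:
    burst_w = tdp_w * PL2_MULTIPLIER
    print(f"{label}: {tdp_w} W sustained, up to ~{burst_w:.0f} W for ~{BURST_SECONDS} s")
```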

Please note, very little power gets transferred between components like the CPU and RAM. Conservatively, 99.9% of the electricity used in the CPU leaves as heat, so heat out is approximately equal to average power in (instantaneous peaks get "stored" in the CPU die and package). Very little power leaves in the signals to other components (chipset, RAM) because: 1. there are only ~2000 output transistors (2 per pin) versus the ~200 million other transistors inside the CPU, and 2. any power we lose through an output pin will likely come back to us through an input pin.
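As a rough sanity check on how small the pin power really is, here is a back-of-the-envelope using the standard CMOS switching-power formula P = a·C·V²·f per pin; every number in it (pin count, load capacitance, toggle rate, activity factor) is my own ballpark guess, not a datasheet figure:

```python
# Rough estimate of power leaving the CPU through its I/O pins.
# All values below are ballpark assumptions for a DDR3-era desktop part.
ACTIVE_PINS = 150      # data/address/control pins toggling at once (guess)
LOAD_CAP_F = 5e-12     # ~5 pF load per pin (guess)
VOLTAGE_V = 1.5        # DDR3 signalling voltage
TOGGLE_HZ = 800e6      # DDR3-1600 data lines toggle at 800 MHz
ACTIVITY = 0.5         # fraction of cycles a pin actually switches (guess)

per_pin_w = ACTIVITY * LOAD_CAP_F * VOLTAGE_V**2 * TOGGLE_HZ
total_io_w = per_pin_w * ACTIVE_PINS
print(f"~{per_pin_w*1000:.1f} mW per pin, ~{total_io_w:.2f} W total")
print(f"as a share of an 84 W part: {total_io_w / 84:.1%}")
```

With those guesses it comes out well under one watt, a fraction of a percent of an 84W part, which is why treating "power in" as "heat out" is a safe approximation.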

Also, we don't need to exceed TDP to heat up a chip. Thermal junctions need a temperature difference before heat will flow across them, so the chip simply heats up (storing thermal energy) until heat exits through the heatsink at the same rate that electrical power enters from the power supply.
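Here is a toy model of that equilibrium, assuming a single lumped thermal resistance and thermal mass (both values invented); the point is just that the temperature settles where heat out equals power in:

```python
# Toy model: die temperature rises until heat flowing out through the
# heatsink equals electrical power flowing in. R_TH and C_TH are made-up
# values; only the shape of the curve matters.
POWER_W = 84        # steady power dissipated in the die
T_AMBIENT = 25.0    # deg C
R_TH = 0.4          # deg C per watt, die-to-ambient (assumed)
C_TH = 20.0         # joules per deg C, die + heatsink thermal mass (assumed)
DT = 0.1            # simulation step, seconds

temp = T_AMBIENT
for step in range(int(120 / DT)):             # simulate two minutes
    heat_out = (temp - T_AMBIENT) / R_TH      # watts leaving via heatsink
    temp += (POWER_W - heat_out) * DT / C_TH  # net energy warms the die
    if step % int(20 / DT) == 0:
        print(f"t={step*DT:5.0f}s  T={temp:5.1f}C  out={heat_out:4.1f}W")

print(f"steady state ~ {T_AMBIENT + POWER_W * R_TH:.1f} C (heat out == power in)")
```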

Streaming for home? The neighbourhood? etc. etc.? Because the more media streams you supply, the more grunt you need.
If it's just for home, even a quad-core FM2 chip (760K) will suffice for a few 1080p streams. They're also pretty good on power consumption. If budget permits, though, an i5 or greater (or a Xeon) would be nice.
Also, for a Plex server that mobo is way overkill.
Haven't watched this vid yet >> https://www.youtube.com/watch?v=quXrAW6gAl8

(grabs bag of popcorn)

So I decided to think about this on a totally scientific level. The amount of energy given off is equivalent to the amount put in; this follows from conservation of energy. However, the heat given off by the CPU would then be less than the total power consumed. Why? Because some of the energy is used for communication with other components such as the motherboard, so the power draw has to be greater than the heat output. That energy is later emitted as heat from the other components, not the CPU itself, and thus the draw is greater than the TDP (the heat output of the silicon).

Yes, but the amount used when communicating (say to the RAM) is tiny compared to the overall power, and we get it right back again when the RAM sends an electrical signal back.

But during transfer, heat is emitted from the motherboard. The heat output of the silicon is always less than the wattage required for an operational system. I would say there could easily be a deviation of 10 to 15W between the actual TDP and the wattage required.

I talked to a physics professor here at my university about the efficiency of computer chips and how much of the electricity they use is turned into heat. He was absolutely no use whatsoever. What he said was basically: these things are very controlled, and the people who design them know exactly how much heat they put off. When I tried to explain TDP, how generically it gets assigned to chips, how it's used in the real world with regard to heatsinks, and that I was looking for how much heat is actually being put out into the environment, he really didn't have much more to say. I honestly don't think he has a good enough understanding of computer hardware to answer my question. I am guessing at this point that the closest thing we can get right now is to take the chip's TDP and its average electricity use while maxed out at stock and use those to work out the overall efficiency. I say that because the closest I can come up with is that the TDP is the average heat output (hence its use with heatsinks) while maxed at stock. Or maybe it is the heat output under a "typical" workload, whatever that means. It could also be the maximum heat output while at stock (as opposed to the average).

The thought that these things are just really well regulated and understood by the manufacturers (which was his point) seems completely wrong considering the performance issues related to thermal throttling of the Core M processors. That leads me to believe that either TDP isn't a good real-world number to use, or that they are much dumber than we are giving them credit for (or they are playing us for fools by hoping that no one will notice the thermal throttling problem and will just buy things because "it has the best of the best").