Big Little servers

Would a multi-socket server platform ever look at doing big.LITTLE, but by socket? i.e. a regular beefy 32-core SMT CPU in one socket and then a low-power, say, 8-core SMT CPU in another?

Not on current technologies as far as I am aware, but I think Intel is investigating this for a future generation. Wait and see!


I’m having a hard time thinking of an enterprise workload that might benefit. Since you can scale replicas of your servers (k8s pods) up and down with the apparent/expected load, why would you even consider having low-power/slow cores? When demand/processing pressure shrinks, you can tune down the number of replicas and make room for your background/batch processing work. Right before morning, or whenever utilization climbs, you turn the replica count back up.
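For what it’s worth, the scale-with-load approach described above is usually just a HorizontalPodAutoscaler. A minimal sketch, assuming a deployment named `web` and a 70% CPU target (both names and numbers are illustrative, not from any real cluster):

```yaml
# Hypothetical HPA: scales the "web" deployment between 2 and 20 replicas
# based on average CPU utilization. Names and thresholds are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

At `minReplicas` the extra pods are gone, but the nodes themselves are still powered and drawing idle wattage, which is where the rest of this thread picks up.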

I can see why it would make sense for battery-operated devices where you can turn off cores entirely to save the last little bit of power, but if you’re running a server farm, why would you want to spend silicon/PCB/datacenter space on slower CPUs?

It may make sense for edge nodes where loads are bursty and a big.LITTLE architecture can save real power over the lifetime of the device. If you can eliminate dedicated HVAC, it can add further savings.

However, I doubt you would use a multi-socket device in such a use case; you are more likely to see chiplets and heterogeneous compute on the chip.


Because workloads can actually vary quite a bit in performance/watt depending on the core and ISA. Putting tasks that are I/O-bound, or that otherwise don’t benefit from out-of-order execution, on simple in-order cores could save a good chunk of power. Some research is even being done on heterogeneous systems where different ISAs can operate in the same cache-coherent memory space.

Right now SMT is the go-to for improving performance/watt in the server space, but it comes with security and determinism downsides.


I had no particular direction with my question. Just a thought for situations where, say, no one works weekends and you’re in a hot climate, so being able to bring power usage waaaay down may make sense. Or a situation where you’re on backup power: allow mostly normal workloads, but during a lull drop to near-zero energy consumption.

Good points so far, for sure.

For small enterprises (e.g. up to a few racks, up to xxk cores total), one can turn nodes off over the weekend or overnight (e.g. when you don’t have cheap solar) and leave only a couple of mostly idle machines running (etcd, and so on).

The trouble is usually with high-performance storage: if it happens to live attached to the same hardware (and is not disaggregated), you can’t really ensure it’s available when needed for turn-up (Monday morning).

How much power does idle storage use relative to an idle CPU?

Don’t modern CPUs have power states to ramp down usage when not active? And I’m sure operating systems spin down parts that have been idle for a while?

Laptops are notorious for needing as little power draw as possible, and I think those benefits work their way back up the stack?

Oh, and don’t “server” motherboards have BMCs (iLO, iDRAC, or IPMI on Intel platforms), so there’s effectively already a little CPU on the board (or as part of the CPU)?


You’re already going to want to locate your datacenter somewhere electricity is as cheap as possible. And servers already scale down their power consumption by a factor of five or so between maximum load and idle.
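To put that idle-versus-off trade-off in numbers, here’s a back-of-the-envelope sketch. The 400 W maximum and the ~5x idle scale-down are assumptions for illustration (the 5x roughly matches the factor mentioned above), not measured figures:

```python
# Back-of-the-envelope energy comparison for one server over a weekend.
# Wattage figures are illustrative assumptions, not measurements.
MAX_W = 400          # assumed power draw at full load
IDLE_W = MAX_W / 5   # ~5x scale-down when idle, per the estimate above
HOURS = 48           # one weekend

idle_kwh = IDLE_W * HOURS / 1000   # energy burned just idling
off_kwh = 0.0                      # powered off entirely (e.g. via IPMI)

print(f"idle: {idle_kwh:.1f} kWh, powered off: {off_kwh:.1f} kWh")
# -> idle: 3.8 kWh, powered off: 0.0 kWh
```

So even with aggressive idle scaling, each machine left on still burns a few kWh per weekend, which is why powering whole nodes off tends to beat slow cores for this use case.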

Cluster managers (e.g. VMware vSphere) have the option to use IPMI to power entire servers up and down as loads demand, so on a large scale you can minimize power usage that way.

Your IT department probably already does something useful with the quiet times, like full backups or patrol reads, or other maintenance tasks.

With “cloud”-type computing methods, you can always put idle CPU time to use on other number-crunching tasks that aren’t on a tight deadline the way normal operations are.


Socket-based design would largely be thrown out. You would likely see chiplet and BGA designs, in what AMD and ARM have both called HSA (Heterogeneous System Architecture).

To see why, you have to look at the benefits such a system offers at low power levels.

BGAs are a solution for producing miniature packages for integrated circuits with hundreds of pins. BGAs don’t suffer the common problem of adjacent pins bridging when the solder is factory-applied to the package, which allows higher density. This is what you would like in an HSA. Compared to packages with discrete leads (i.e. packages with legs), BGAs have lower thermal resistance between the package and the PCB. This lets heat generated by the integrated circuit inside the package flow more easily into the PCB, preventing the chip from overheating, especially in the presence of limited, small/passive cooling solutions. And with the very short distance between the package and the PCB, BGAs have low inductance and therefore far superior electrical performance to leaded devices, which is necessary for the efficiency and reliability of really low-power chiplets and products. Again, a necessity in HSA and the architecture you speak of.

So why would you want to use sockets?


This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.