Power Architecture in enterprise datacenters

Also… IBM has blades designed for Linux on Power.

https://www.ibm.com/it-infrastructure/power/os/linux

My workplace has used, and is still using, IBM Power servers for Oracle databases on AIX and for InterSystems Caché.

2 Likes

Ahh yup. Fucking Oracle.

Can you give any more detail as to what type of security customers these would be?

Do any of them already use me_cleaner?
Are these more like AV vendors, who (in my opinion) would be more willing to trust proprietary code?
Or something more like an HSM manufacturer? Or a Certificate Authority?


My opinion on security companies is pretty much “snake oil until proven otherwise”, so I would be curious what your experience with them is.

As someone who has run POWER (an IBM p5-550, if I remember correctly) in the DC, I have trouble seeing why anyone would consider it inappropriate for the DC. For iSeries (AS/400) workloads there’s not a lot of choice, and for long-running jobs that require a very reliable host, POWER is probably a better choice than x86: hot-pluggable processors and memory, for a start.

Some specific points:

  1. “Everything is compiled for x86.” If you’re talking about Linux, then almost everything in a Linux distribution is available for POWER (and for IBM Z and ARM).

  2. Of the top 10 supercomputers, numbers 1 and 3 are POWER-based (and all of the top 10 run Linux).

  3. “POWER isn’t a general-purpose processor.” Since when?

1 Like

From tests I have seen the DBAs run (Cisco UCS M4 vs POWER7):

Single-threaded performance is much better on the POWER7.

[drooling] I want that so bad (but definitely can’t afford it)

How does something like that work, anyway? Do you have to tell the OS, “hey, shift everything off of chip N” and then just pull it out?


You would never want to do it, since it’s in no way designed for it, but I wonder: if the capability is still there, could you theoretically hot-swap the modules in a Talos II?

Last I checked, datacenters deal mostly in storage, not so much in general compute, and general compute is where I think POWER fits. I also think POWER would work in consumer products if the 4- and 8-core parts were a bit more power efficient. The problem is convincing companies that people would have an interest, and that’s where stuff like the Talos comes in, and future AmigaOne machines too.

I also see a lot, A LOT, of energy for PPC and POWER on the Arch Linux, Gentoo, and Void Linux channels on IRC. Almost every night two weeks ago I was talking to people who were working with POWER machines at their jobs and had POWER-based desktops at their desks that they wanted at home, but who had little incentive to drop a minimum of 4000 dollars on one of the IBM-provided POWER6 or POWER7 machines that were being sold to other companies.

POWER has a lot of potential, but using it in a storage environment and only in servers / workstations… Mmmmm, not so much.

To anyone interested in general discussion about POWER or PPC, have a thread.

All datacenters are different. It really depends on the application. My company’s datacenter uses a lot of compute, more so than storage (if you were to assign a dollar figure to everything), so it really stands to reason that POWER would do well in it.

Eh, I see it differently. But I’ve only done so many things.

POWER has more uses than just servers. People just need to actually try it; most instead wheeze over some shitty dual-core laptop rather than something that would stomp it to dust.

Eh.

Actually @olddellian @SgtAwesomesauce, shouldn’t this technically be in the POWER/PowerPC discussion thread? :3 I actually don’t know tho

If you have certain posts you would like moved I will oblige.

1 Like

Can’t remember that level of detail about the p5-550, but the big Sun servers I was more familiar with had the same feature. You would tell the hardware supervisor which processor/memory board you wanted to remove; it would tell the operating system to migrate stuff off that board, and eventually the hardware supervisor would be told to switch off power.

At which point you carefully slid the board out of its slot and a colleague would fit a blanking plate, quickly, before the whole machine shut down because of over-temperature.

Inserting a card was simpler - insert the card, and the hardware supervisor would wake up (“Cool! A new toy to play with”), fire up the board, run diagnostics, and tell the operating system it has some new processors.

This was done at the board level and not the processor level, so there’s not much hope this feature will be there in the Talos II.
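
For anyone curious what the OS-visible half of that dance looks like today: Linux has a logical CPU hotplug interface in sysfs that covers the “migrate stuff off” step. A minimal sketch, assuming a Linux host with root access; the CPU numbers are just examples, and the firmware / hardware-supervisor side of physically pulling a board is platform-specific and not shown.

```python
# Minimal sketch: evacuate logical CPUs via Linux's sysfs hotplug interface
# (/sys/devices/system/cpu/cpuN/online). Writing "0" asks the kernel to
# migrate tasks and IRQs off that CPU and stop scheduling on it; "1" brings
# it back. This is only the OS-side step -- powering off a physical board
# is handled by platform firmware and is not shown here. Needs root.

from pathlib import Path

SYSFS_CPU = Path("/sys/devices/system/cpu")

def set_cpu_online(cpu: int, online: bool) -> None:
    """Ask the kernel to online/offline one logical CPU."""
    (SYSFS_CPU / f"cpu{cpu}" / "online").write_text("1" if online else "0")

def evacuate(cpus):
    for cpu in cpus:
        set_cpu_online(cpu, online=False)   # kernel migrates work away
        print(f"cpu{cpu} offlined")

if __name__ == "__main__":
    # Example: take logical CPUs 8-15 offline before (hypothetically)
    # removing the board they live on.
    evacuate(range(8, 16))
```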

1 Like

I’ve played with POWER9 in a DC-like environment.

Compared to a more typical Skylake box, you get more I/O rather than just pure CPU compute, and the cost is software… In practice, pretty much every library can cause a problem that might not get caught in pre-deployment testing, and you basically need a full-time team of developers to maintain a typical stack of microservices and keep it from breaking over time.
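
To make the “catch it before deployment” part concrete, here is a minimal sketch of the kind of smoke test you might run inside a ppc64le CI image (or under QEMU) before shipping: import every third-party dependency of the stack and fail loudly if any native build or wheel is broken. The dependency names below are placeholders, not a statement about which libraries actually misbehave.

```python
# Toy pre-deployment smoke test: try to import each third-party dependency
# on the architecture you actually deploy to, and report anything that
# fails (missing ppc64le wheel, broken native extension, etc.).

import importlib
import platform

# Placeholder list -- substitute whatever your microservices actually import.
DEPS = ["numpy", "cryptography", "grpc"]

def main() -> int:
    print(f"machine={platform.machine()} python={platform.python_version()}")
    failures = []
    for name in DEPS:
        try:
            importlib.import_module(name)
            print(f"ok   {name}")
        except Exception as exc:
            failures.append((name, exc))
            print(f"FAIL {name}: {exc}")
    return 1 if failures else 0

if __name__ == "__main__":
    raise SystemExit(main())
```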

I don’t think Intel/AMD will have anything to worry about before POWER10 / 2020 in typical DC environments. HPC environments, where much of the software is written from scratch, are pretty much where the buck stops for now.

Every time I’ve tested, say, a server-side Java web app on POWER and compared it to the same app running in a VM on VMware, the POWER server wins. This has been true going all the way back to POWER5.

A lot of the time it comes down to allocating enough resources to the VMs, but one big thing POWER has in its favor is those enormous cache sizes. Cache data hit, data hit, data hit.

Having said all of that: “everything is compiled for x86 on Linux”? Huh? If you have some software you want to run on another architecture, and you have the source, port the stuff over. It’s not rocket science, really. I’ve ported lots and lots of GNU software over to POWER (Linux and AIX) when binaries of the latest source simply weren’t available. It often comes down to wading through dependency hell, but again, not rocket science. It’s barely even computer science.
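
For anyone who hasn’t done it, the “port the stuff over” loop for a typical autoconf-based GNU package is mostly mechanical. A rough sketch below, assuming a ppc64le Linux box with a toolchain installed; the prefix, -j value, and the make check step are illustrative rather than a universal recipe, and the real work is usually the dependency chasing mentioned above.

```python
# Rough sketch of the configure/build/test/install loop for an autoconf-based
# package. config.guess in reasonably current packages detects the
# powerpc64le-*-linux-gnu triplet on its own, so no special flags are needed
# in the common case.

import os
import platform
import subprocess
import sys

def run(cmd, cwd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def build_autoconf_package(src_dir, prefix="/usr/local"):
    src_dir = os.path.abspath(src_dir)
    print(f"building {src_dir} on {platform.machine()}")
    run([os.path.join(src_dir, "configure"), f"--prefix={prefix}"], src_dir)
    run(["make", "-j4"], src_dir)
    run(["make", "check"], src_dir)    # many GNU packages ship a test suite
    run(["make", "install"], src_dir)  # may need root depending on prefix

if __name__ == "__main__":
    build_autoconf_package(sys.argv[1] if len(sys.argv) > 1 else ".")
```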

I have lots and lots of experience with POWER and PowerVM, HA, porting apps from other UNIX systems to AIX, and all of it was big business / Fortune 500 type stuff. If that’s not Data Center, I don’t know what is.

Some people think that if it’s not running in AWS / Azure / Oracle Cloud Infrastructure, it’s not a data center. Well, I’ve got news for you… plenty of colocation providers would like a word with you.

1 Like

Fortune 500s that already spend a lot of money on computing might be ok hiring someone to do this.

Your typical web shops / telcos / banks and other skimpy integrators (there are exceptions in those industries too), which usually don’t really maintain things past the initial deployment, aren’t as likely to jump ship to POWER, IMHO anyway.

Also, a more technical question: OpenCAPI, how is that different from a driver giving you an mmapped handle to PCIe buffers?

Many of those “smaller” shops are running IBM i on POWER, especially when they have specific business application needs such as JD Edwards. BTW, the IBM i platform can also host an AIX or Linux LPAR (“VM”) or two (or more), since it’s identical hardware with the same firmware, but most companies are just going to run the IBM i OS with its integrated DB2 and call it a day.

As far as OpenCAPI goes, well, it’s technically more secure than a memory-mapped handle into a PCIe buffer, for starters. There’s no kernel or device-driver overhead, and it can be implemented by a hardware accelerator. I mean… one is a mechanism within an OS and the other is an open standard that isn’t tied to one specific hardware architecture. I’m not really sure what angle you’re coming from with this question.
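
For contrast, here is roughly what the “mmapped handle into a PCIe buffer” baseline looks like on Linux: mmap a device’s BAR through sysfs and read registers from userspace, with no cache coherence between the device and the CPU. The PCI address and register offset are placeholders, it needs root, and the BAR has to be memory-mapped rather than an I/O-port BAR.

```python
# Sketch of plain userspace MMIO over PCIe: map BAR0 of a device via its
# sysfs resource file and read a 32-bit register. This is the non-coherent
# baseline that CAPI/OpenCAPI-style coherent attachment is being compared
# against in this thread.

import mmap
import os
import struct

BDF = "0000:01:00.0"   # placeholder PCI address -- substitute your device
BAR0 = f"/sys/bus/pci/devices/{BDF}/resource0"

def read_reg32(offset: int) -> int:
    """Read a 32-bit little-endian register from BAR0 via plain MMIO."""
    fd = os.open(BAR0, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size   # sysfs resource file size == BAR size
        with mmap.mmap(fd, size, mmap.MAP_SHARED, mmap.PROT_READ) as bar:
            return struct.unpack_from("<I", bar, offset)[0]
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(hex(read_reg32(0x0)))   # whatever your device exposes at offset 0
```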

Like, there’s already an IOMMU for that?

I’m wondering what value-add it actually brings?

Things like DPDK, which basically split the TCP/IP stack between the NIC and userland, have been around for a while. Is this just another name for more or less the same features, coming from a different set of vendors?

It sounds like the CAPI system treats the attached device more like an additional processor than like a mere peripheral. If normal PCIe just accesses memory, while CAPI/OpenCAPI and NVLink interface directly with the processor’s cache system, that could explain the speed improvement.

The presentation I linked is talking about CAPI 1 on POWER8 chips, while POWER9 supports both CAPI 2 and OpenCAPI 3. CAPI is run over physical PCIe, while OpenCAPI uses BlueLink/PowerAXON as the physical connection.

As far as I know, [Open]CAPI, NVLink, CCIX, and probably Gen-Z as well, are all focused on interfacing directly with the processor cache as a faster alternative to plain PCIe.


If you’re curious, here’s an article that covers some OpenCAPI-related information:

https://www.nextplatform.com/2018/08/28/ibm-power-chips-blur-the-lines-to-memory-and-accelerators


This marketing video also has a “visualization” of CAPI, although it’s a small part of the whole video:

1 Like