The POWER and PowerPC General Discussion / News Thread

I know these might move slower due to price, so I will keep you in mind. I’m currently saving for a G5 case mod for the ‘Modern PowerMac’; I need a Raptor POWER-based system.

Looks like the itemised talks from the recent North America summit were uploaded about a day ago to the OpenPOWER YouTube channel:

Much nicer than having to wade through the multihour videos they uploaded several weeks ago.

Raptor’s Condor announcement (what @Nawrocki is considering) is here:

1 Like

Do you have something in particular planned for the OpenCAPI/NVLink/Bluelink/PowerAXON slot/connector?

The Sforza modules actually have more PCIe lanes than LaGrange, and both are limited to six PCIe endpoints (called “root complexes” in section 3.3.2 of both datasheets); so if you were looking for PCIe expandability, a dual-CPU Talos II might be better, with its 5 PCIe slots compared to the Condor’s 4.

I would be curious what the block diagram looks like for Condor, and how they manage to get those four endpoints; both Blackbird and Talos II use up four endpoints for BMC, NIC, USB, and storage controller, and have only two free PCIe endpoints left on CPU 1. For Condor, they can probably safely assume that users will want NVMe, so no need for a storage controller, but will they cut out USB to make room for that extra fourth slot? BMC and NIC seem pretty non-negotiable.
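As a side note, the endpoint budget is easy to eyeball on a running system; here is a minimal sketch (Linux-only, and the sysfs path layout is the assumption) that counts host bridges, since on POWER each PHB shows up as its own PCI domain:

```python
#!/usr/bin/env python3
# Rough sketch: count the PCI host bridges (root complexes) a running
# Linux kernel sees. On POWER each PHB gets its own PCI domain, so each
# /sys/devices/pciDDDD:BB entry approximates one endpoint in use.
import glob
import os

phbs = sorted(glob.glob("/sys/devices/pci*"))
print(f"{len(phbs)} host bridge(s):")
for phb in phbs:
    # Direct children named DDDD:BB:DD.F are devices on this root complex.
    children = [d for d in os.listdir(phb) if d.count(":") == 2]
    print(f"  {os.path.basename(phb)}: {len(children)} device(s) attached")
```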

I do realize the issue of fewer PCIe lanes compared to Sforza. I didn’t have a particular idea yet for OpenCAPI, but wanted to have access to it if and when I want to use it.

I considered just sticking with Talos II… but frankly I’m not sure I care for the EATX form factor. It’s a bit too big for my taste, and I don’t quite need two CPU sockets. The Lite model is also rather wasteful of PCB area, in my mind. Finally, I’d like to see whether Raptor has the sense to incorporate a 10 GbE NIC and NVMe right on the board itself, without having to resort to an expansion card.

alright, now figure out how to make it boot and then tell me what i need to fix :stuck_out_tongue:

i don’t own a ps3, so…

2 Likes

there’s a PCIe switch on the board, as is otherwise common

the onboard peripherals are probably all behind the switch; that way they can get away with using a commonly available PCIe 3.0 switch chip and still expose PCIe 4.0 via the slots
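once real boards are out, you could verify which peripherals actually sit behind the switch by counting bridge hops in sysfs; a rough Linux-only sketch (the path layout and the two-colon address heuristic are assumptions):

```python
#!/usr/bin/env python3
# Rough sketch: for each PCI device, count the bridges between it and
# the host bridge. 0 hops = directly on a root port; more = behind a
# switch or bridge. Assumes the usual Linux sysfs layout.
import os

for dev in sorted(os.listdir("/sys/bus/pci/devices")):
    real = os.path.realpath(os.path.join("/sys/bus/pci/devices", dev))
    # Path components shaped like DDDD:BB:DD.F are PCI functions; every
    # one above the device itself is a bridge it hangs off.
    hops = [p for p in real.split("/") if p.count(":") == 2]
    print(f"{dev}: {len(hops) - 1} bridge(s) upstream")
```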

It’d be insane to go on the site and see “Now supporting PS3, PowerPC, and OPENPOWER”

1 Like

I am pretty sure that it is trying to claim more than 248 MiB of RAM. The PS3 is not entitled to all 256 MiB of system RAM.

I will have to work on it. May need to do different stages like Rene does with T2 Linux.
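If the kernel really is claiming too much, my first experiment would be capping it explicitly with the mem= kernel parameter. A quick check of what a booted kernel actually grabbed might look like this (the 248 MiB budget is my assumption):

```python
#!/usr/bin/env python3
# Quick sanity check: compare MemTotal with the RAM the hypervisor is
# assumed to leave to the guest (248 MiB here -- my assumption).
BUDGET_KIB = 248 * 1024

with open("/proc/meminfo") as f:
    mem_total_kib = int(f.readline().split()[1])  # "MemTotal:  N kB"

verdict = "over" if mem_total_kib > BUDGET_KIB else "within"
print(f"MemTotal = {mem_total_kib} KiB ({verdict} the {BUDGET_KIB} KiB budget)")
```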

On-board 10 Gb NIC and NVMe

They very much want to avoid proprietary code on their motherboard, so I imagine that limits the selection of 10 Gb Ethernet chips available.

The wiki has links to a coreboot mailing list thread about why they chose the hardware they did; specifically about the Broadcom NIC, there is this reply from Timothy Pearson:

On 09/08/2017 10:38 PM, Taiidan at gmx.com wrote:

Also want to add why broadcom? I heard they didn’t have a good attitude to open source and as a large company I imagine they have a lot of institutional inertia preventing that from changing? - why not one of the smaller NIC makers such as atheros, mellanox, solarflare etc?

Cost was the main driver for this design. Our focus was on getting the product to market at a reasonable price point, and Broadcom’s NetXtreme series already has excellent Linux driver support, meaning that we didn’t need to invest additional resources into driver development. Depending on what the uptake of Talos II is (and therefore how much “nice to have” development we can justify vs. simply keeping a functional, RYF-certifiable product on the market) *we may consider changes to the NIC supplier in the future*.

As to why not Intel, in addition to the obvious issues in relation to sourcing a critical system component from a direct competitor, we have experienced issues with Intel GbE NICs in the past under Linux related to the on-chip firmware locking up and requiring a host reboot. The NetXtreme devices appear to be quite stable, and their internal operation has already been partially documented, meaning development of a true open firmware port is at least possible. The same cannot be said for the other players in this space.

I have added the emphasis on the NIC supplier sentence

So they are not married to Broadcom specifically, but with the Project Ortega work on the BCM5719, I imagine a similar chip for 10 Gb might be their first choice, since it would reduce the amount of reverse-engineering necessary.

Theoretically though, it would be cool to see an on-board Mellanox NIC connected using CAPI 2, but I imagine there is a fair amount of proprietary code on those chips, and it probably has not had many firmware hackers trying to reverse-engineer it, especially the CAPI versions.


For NVMe, I cannot imagine Raptor adding a dedicated U.2 or M.2 connector, as this would either:

  1. Reduce PCIe slot count, and therefore user flexibility
  2. Add latency or bandwidth issues by using a PLX-style PCIe switch

Condor PCIe switch?

Is that a guess, or are you sure of that?

Does a PLX switch affect the CPU’s ability to securely isolate devices via IOMMU?

If it does not, I imagine a Condor board would tuck BMC and USB together behind the switch, since the workload being accelerated by OpenCAPI probably cares much more about NIC latency and bandwidth than about whether BMC or USB latency is a bit worse.
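For reference, the resulting grouping is easy to inspect on a running Linux system; a minimal sketch (standard sysfs layout assumed) that lists each IOMMU group and its members:

```python
#!/usr/bin/env python3
# Minimal sketch: list Linux IOMMU groups and their member devices.
# Devices sharing a group cannot be isolated from one another, which is
# what matters for passthrough and DMA containment.
import glob
import os

for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p))):
    members = sorted(os.listdir(os.path.join(group, "devices")))
    print(f"group {os.path.basename(group)}: {', '.join(members)}")
```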

1 Like

i’m working on getting my wii and gamecube running void now

i’m lazy and haven’t started

soon

2 Likes

Big gay.

Definitely not a guess; I was in a Twitter thread where Raptor directly said there is a PCIe switch. I don’t know exactly how it is switched (i.e., what is on which lanes), but there is definitely a switch :slight_smile: There was also a switch on the original, never-released Talos 1 POWER8 board.

I don’t think it has any effect on IOMMU. The enumeration is recursive and every endpoint should still be protected just the same.

A switch also has barely any effect on performance/latency/etc., so you can pretty much safely disregard it. But yeah, I’d expect the onboard peripherals to be switched and the rest of the lanes broken out directly, since Gen4 switches are not a thing yet (and when they are, they’ll be expensive), while all of the onboard peripherals are likely to be Gen2 or Gen3.

2 Likes

On Void news: I finally put up a new site at https://voidlinux-ppc.org/ and docs at https://docs.voidlinux-ppc.org/ - based on the official jekyll+mdbook sources, just altered for the fork.

Should be a lot prettier and easier to navigate now, plus we got an actual domain, so I don’t have to host it on my game engine’s domain anymore. The $20 for the two years won’t kill me, and I hope that by then it will already be official…

I’ll also be speaking about my experiences porting Void to POWER/ppc at OpenPOWER Summit Europe next month

so make sure to watch my talk :stuck_out_tongue:

6 Likes

Duh o:

You are a gentleman and a scholar!

2 Likes

I did a bit of searching (which you can actually do without an account, despite Twitter heavily pressuring account creation), and I think I found the tweet in question:

I also found talk about future silicon possibly having more endpoints; as an aside, this probably referred to POWER10, not the POWER9’/Axone/AIO, since I imagine the latter will be like Monza and focus on “spending” socket bandwidth on OpenCAPI rather than PCIe.

If you could indulge my curiosity, what is this endpoint limitation, then?
I was assuming it meant that the CPU (and therefore, I assumed, its IOMMU) could not internally handle more than six permission groups, so two PCIe devices behind a PLX-style switch would be treated as one IOMMU group and could peer into each other’s address space. If this is not the case, that raises the question: why do x86 chipsets have such a problem with IOMMU grouping? Why not give each device its own group, nipping DMA-style attacks in the bud as well as helping VM passthrough?
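One guess I do have: Linux widens IOMMU groups when the bridges in the path lack ACS (Access Control Services), so a rough way to survey that on an existing machine is something like the following (pciutils assumed installed; run as root so the capability lists are visible):

```python
#!/usr/bin/env python3
# Rough sketch: flag PCI bridges that lack ACS by scanning `lspci -vvv`.
# Bridges without ACS are a classic cause of oversized IOMMU groups.
# Requires pciutils; run as root so capability lists are visible.
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True,
                     text=True, check=True).stdout
for block in out.strip().split("\n\n"):
    header = block.splitlines()[0]
    if "PCI bridge" in header:
        mark = "ACS" if "Access Control Services" in block else "no ACS"
        print(f"[{mark:>6}] {header}")
```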

Perhaps a better question might be, is there a good overview of PCIe somewhere that would explain this, or should I really just buckle down, grab a copy of the PCIe spec, and Read The Manual?

Possibly this PCIe discussion belongs in its own thread (in the #wiki section, maybe), but IOMMU differences between x86 and Power still seem at least slightly on topic.

no clue, this is not my area of expertise

i guess we’ll see once raptor has actually released the board…

2 Likes

OK, NVMe doesn’t work with the G5 so far, and my guess as to why is how NVMe communicates with the chipset; I am just too stupid to figure out what is being done. Getting PCIe boot as the main source works, but only at PCIe 1.0 x16 speed (4 GB/s), and it would be nice if running the drive as SATA weren’t the bottleneck… still need more testing to confirm speeds… will get back to this
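for anyone checking my numbers, the bandwidth math is just transfer rate times encoding efficiency; an illustrative sketch (the helper is mine, not anything official):

```python
#!/usr/bin/env python3
# Illustrative per-direction PCIe bandwidth math:
# transfer rate (GT/s) x line-code efficiency / 8 bits = GB/s per lane.
GENS = {1: (2.5, 8 / 10),     # Gen1 and Gen2 use 8b/10b encoding
        2: (5.0, 8 / 10),
        3: (8.0, 128 / 130),  # Gen3 and Gen4 use 128b/130b
        4: (16.0, 128 / 130)}

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    gt_s, efficiency = GENS[gen]
    return gt_s * efficiency * lanes / 8

print(f"Gen1 x16: {bandwidth_gb_s(1, 16):.1f} GB/s")  # 4.0 -- the G5 figure
print(f"Gen3 x4 : {bandwidth_gb_s(3, 4):.1f} GB/s")   # ~3.9 -- typical NVMe
```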

1 Like

OH MY GOD HE WAS STILL WORKING ON THAT LOLOL why

2 Likes