I’m wondering if there are any generic patches for the “hangs on shutdown” variant of the AMD reinit bug. There are client-side workarounds, but I’d really like to see a set-and-forget fix.
I am very happy with AMD’s success, but (talking out of my butt here) I think it’s pretty disingenuous to say that Threadripper has a lot of PCI Express lanes going directly to the CPU. Did you mean direct lanes to the “Infinity Fabric”? Because really, what is a CPU? I am thinking about all those cores… What happens when two different cores want data from the same (or different!) PCIe devices?
Sooo… If I understand it right, there is just a UEFI module in the BIOS/UEFI image on X299 and other Intel chipsets with TB that needs extracting and inserting into an X399 BIOS.
UEFITool shows three such modules it could be, I guess: TbtDxe, TbtSmm and TbtPei.
Though I doubt the whole binary blob would be only ~30 KB?
But from what I gather, I’d also need to grab a bit of stuff from the TB add-in card and from an active Intel board, so I’m out of luck since I don’t have either?
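If anyone else wants to poke at their own dump first: UEFI module names like the ones above are stored as UTF-16LE strings (in each module’s UI section), so you can sanity-check whether a raw BIOS image even contains them before firing up UEFITool. A minimal sketch, assuming a plain binary dump; the module list is just the three names from this thread, and the reported offsets are where the name string sits, not where the module starts:

```python
# Search a raw firmware image for UEFI module UI names, which are
# stored as UTF-16LE strings inside the image. This only tells you
# the modules are present -- use UEFITool for actual extraction.

def find_module_names(image: bytes, names: list[str]) -> dict[str, list[int]]:
    """Return every byte offset at which each module name occurs."""
    hits: dict[str, list[int]] = {}
    for name in names:
        needle = name.encode("utf-16-le")
        offsets = []
        pos = image.find(needle)
        while pos != -1:
            offsets.append(pos)
            pos = image.find(needle, pos + 1)
        hits[name] = offsets
    return hits

# Demo on a synthetic blob; a real run would read your dump instead,
# e.g. image = open("X299_bios.bin", "rb").read()  (hypothetical name).
blob = b"\x00" * 64 + "TbtDxe".encode("utf-16-le") + b"\x00" * 64
print(find_module_names(blob, ["TbtDxe", "TbtSmm"]))
# -> {'TbtDxe': [64], 'TbtSmm': []}
```

No hits usually just means the volume is compressed, in which case you need UEFITool (or UEFIExtract) to unpack it first.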
Infinity Fabric is part of the CPU, so what’s your point?
Also, even if you don’t count the IF as part of the CPU for whatever reason, those are still native lanes, and that’s the point: they are not bottlenecked by a slow connection in between.
They just access them? It’s not like the access is exclusive.
Xeon dual- (or quad-) socket systems also access each other’s PCIe devices; same thing, not a big deal. And the connection between those sockets is even slower AFAIK (the old one is a ring bus (WTF?), the new one is a mesh that is totally-not-similar to IF).
Lol, well since I don’t have a TB add-in card I can’t test it; just inserting the modules would be no problem. Reminds me of the NVMe BIOS mod for boards whose UEFI didn’t support NVMe, that was fantastic.