I'd like to use them in my ASUS Pro WS X570-Ace server. However, dmesg shows:
What I've already tried:

- A bunch of live CDs, some Arch, some Debian: same issue.
- Two CPUs, an R5 5500 and an R9 5950X: same issue.
- Above 4G Decoding is enabled.
- pci=realloc (set as in the sketch after this list): the MMIO "no space" messages still show up quickly, and it just crashes during boot. I only get the BusyBox shell, and since neither more nor less seems to be included, I can't scroll through the dmesg…
- Chipset lanes and CPU lanes: makes no difference.
- A couple of the recent BIOS versions: no difference.
- Under FreeBSD the cards may be working; nothing stands out in its dmesg, though I'm not a BSD pro.
- I only ordered the OCuLink to SFF-8639 cables today, so I can't test with SSDs yet.
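For reference, this is roughly how I've been setting the kernel options (a sketch assuming a Debian-style GRUB setup; the file path and regen command differ on other distros):

```
# /etc/default/grub - append the option to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc"

# regenerate the GRUB config, then reboot
sudo update-grub
```

For a one-off test it also works to press e at the GRUB menu and append the option to the linux line.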
Unfortunately I don't have any experience with Intel PCIe Switch AICs for NVMe SSDs, but I've tested Broadcom x16-to-x32 PCIe Gen3 and Gen4 switches on the very same ASUS Pro WS X570-Ace without such issues.
Looking at your photos, my DANGER sense is tingling: did the cards actually come without any heatsink?! You can see the intended mounting holes in the PCB.
ANY active PCIe switch I've encountered so far required a heatsink plus active cooling. I don't know about the Intel one specifically; I couldn't find a TDP or typical power draw figure on their website.
Another gut rumbling comes from Intel's website mentioning that the AICs need to be connected to their server motherboards with a special cable to function.
But I doubt that's the problem.
I can also test it with other boards… a Z690 Tachyon or an X10QBI, but I was hoping to avoid that…
May I ask which BIOS you are using?
Have you tried running Windows with the card (see the Windows drivers on Intel's website) and checked whether the latest public firmware version is installed?
The Broadcom PCIe switches I got new from a big distributor shipped with horrible non-public pre-production firmware, with which they didn't even function properly.
It is not “right”, but it's unrelated to your cards: this is the nouveau driver crashing.
Your two cards (PMC-Sierra Inc. PM8533 PFX 48xG3 PCIe Fanout Switch) are detected and drivers are loaded (pcieport).
The cards don't seem to be identical. One reports Memory behind bridge ... [size=8M] and a single memory controller as a subdevice; the other reports it as disabled (Memory behind bridge: [disabled]) but lists 7 subdevices. All in their own IOMMU groups - nice!
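If you want to double-check the grouping yourself after any BIOS changes, something like this walks the standard sysfs layout (it assumes the IOMMU is actually enabled, otherwise the directory is empty):

```
# list every IOMMU group and the PCI devices assigned to it
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```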
I’d say you have two (differently) working PCIe cards. Report what you find when the cables arrive.
My guess is that a proper install of the NVIDIA driver will fix the nouveau crashes.
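Until then, blacklisting nouveau should at least keep the kernel from crashing while you test the switches; a minimal sketch, assuming a distro that reads /etc/modprobe.d/ and boots with an initramfs:

```
# stop nouveau from loading at boot
printf 'blacklist nouveau\noptions nouveau modeset=0\n' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
sudo update-initramfs -u    # Debian/Ubuntu; on Fedora use 'dracut -f' instead
```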
The latest dmesg and lspci outputs are with just one card.
With both cards and the kernel command line options pci=realloc or pci=assign-busses,realloc, the kernel crashes during boot.
With one card it seems to work, although the BAR initialization needs a couple of tries. But adding PCI devices also causes the kernel to crash during boot.
But I can't figure out how to check the dmesg in BusyBox…
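(The best workaround I can think of is chunking the log by hand, assuming the dmesg, head, and tail applets are compiled into this BusyBox build; corrections welcome:)

```
dmesg | head -n 40                  # first 40 lines
dmesg | tail -n +41 | head -n 40    # next 40 lines, and so on
dmesg > /tmp/dmesg.txt              # or dump it to a file and open it with vi, if included
```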
CSM is disabled; however, I just found out that the CMOS battery died… So some tests may have been run with Above 4G Decoding off and SVM disabled. More testing needed.
Also, I don't have an IOMMU option in the BIOS, just AMD SVM.
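For what it's worth, once SVM is re-enabled I'll verify from Linux that the IOMMU actually comes up, rather than trusting the BIOS label, by grepping the usual AMD-Vi boot messages:

```
dmesg | grep -i -e 'AMD-Vi' -e 'iommu'
```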