I think American Digi-Key is trying to test out how much of a cuck a German customer can be…
I’ve been ordering stuff from around the globe since 1999 and have never experienced such treatment by an online shop. I have never f’d over any seller with forced PayPal or credit card refunds and my item return rate in general is < 1 %.
Trying to find an alternative, trustworthy source for the 1200P-32i.
Were you trying to get the IX2S version of the card or the older IXS version?
I’d hardly think the card warrants any kind of export control.
Funny story: I recently bought some test equipment from a Cologne company, and apparently a significant share of the semiconductors inside it were produced in Russia; upon import, US customs became unhappy.
A small positive intermission: got my first little AM5 system with a 7800X3D and an ASUS ProArt X670E-CREATOR WIFI. It works fine with quality passive PCIe Gen4 x16-to-4xM.2 Bifurcation adapters like the Delock 89017, with 0 PCIe Bus Errors after quite intense loads.
For me personally PCIe Gen5 SSDs aren’t that desirable at this point in time, an active PCIe Gen5 Switch HBA on the other hand would pique my interest but they don’t seem to be a thing, yet.
The passive Delock 90091 PCIe Gen4 x8-to-2xU.2 Bifurcation adapter also works perfectly with old-AF Samsung PM1733 7.68 TB U.2 SSDs on their original, buggy firmware.
Since it was said that the Adaptec HBA 1200p-32i would also handle firmware updates, I was a bit hopeful about using it to update the PM1733 SSDs, which are “bulk” units and don’t get firmware updates from Samsung directly. Firmware update files can be found, but Samsung’s own update tools don’t detect the SSDs, even when they are connected directly to CPU PCIe lanes.
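In case it helps anyone: on Linux, generic nvme-cli can sometimes push a firmware image to drives a vendor tool refuses to detect. A hedged sketch only (the device path and filename are placeholders, and whether a bulk PM1733 actually accepts the image is a separate question):

```shell
# Illustrative only: push a firmware image with generic nvme-cli (placeholder paths).
nvme fw-log /dev/nvme0                          # inspect the current firmware slots first
nvme fw-download /dev/nvme0 --fw=pm1733.bin     # transfer the image to the controller
nvme fw-commit /dev/nvme0 --slot=1 --action=1   # action 1: commit to slot 1, activate at next reset
```

Drives can and do reject images via fw-commit status codes, so this is worth trying but nothing more.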
Tested every user-accessible PCIe interface on the ASUS ProArt X670E-CREATOR WIFI simultaneously with spare M.2 NVMe SSDs, and it works fine with 9 NVMe drives connected electrically straight to the motherboard (only a 4xM.2 PCIe Bifurcation adapter in the main PCIe x16 slot, no active PCIe Switch HBAs).
0 stability issues with parallel load on all 9 drives
PCIe AER only seems to work on CPU PCIe, not chipset PCIe (similar situation to AM4)
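For anyone wanting to check the same thing on Linux: per-device AER support shows up in lspci and in the kernel's standard sysfs counters (output obviously depends on your platform):

```shell
# Devices that expose AER error counters (standard kernel sysfs interface)
ls /sys/bus/pci/devices/*/aer_dev_correctable 2>/dev/null
# Show the AER capability per device, and any errors the kernel has logged
sudo lspci -vv | grep -A2 "Advanced Error Reporting"
sudo dmesg | grep -i aer
```

On boards where only CPU-attached ports list the capability, that matches the behavior described above.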
With a 7800X3D and dual-channel DDR5-5600 ECC, the combined read or write throughput across all drives tops out at about 39 GByte/s (I only have PCIe Gen4 and Gen3 SSDs for testing). Depending on the test data (9 instances of CrystalDiskMark), the poor CPU can’t keep up and sits at 100 % load. I’m curious how the hypothetical AM5 Zen 5 flagship with one 8-core Zen 5 3D V-Cache chiplet and one 16-core Zen 5c chiplet would perform here.
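To put the 39 GByte/s in perspective, here is the back-of-the-envelope peak bandwidth of dual-channel DDR5-5600 (assuming the usual 8 bytes per channel per transfer):

```shell
# Theoretical peak = transfers/s * channels * bytes per transfer
awk 'BEGIN { printf "%.1f GByte/s\n", 5600 * 2 * 8 / 1000 }'
# prints: 89.6 GByte/s
```

So raw DRAM bandwidth alone isn’t the wall at 39 GByte/s, which makes the 100 % CPU load the more plausible limiter.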
Nice improvement of AM5 over AM4: in this max-load scenario Windows keeps working completely fine. When I did similar tests on AM4, trying to find the absolute maximum of the platform, Windows became janky (temporarily frozen mouse cursor, navigating through File Explorer with forced pauses, etc.).
Well, I’m a GUI Normie-Pleb, that’s why I’m using CrystalDiskMark.
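For the record, the rough CLI equivalent on Linux would be fio. A sketch for a parallel sequential-read pass over two drives (device names are placeholders; reads on raw block devices are safe, writes are not):

```shell
# Options before the first --name are global; each --name/--filename pair is one parallel job.
fio --ioengine=libaio --direct=1 --rw=read --bs=1M --iodepth=32 \
    --runtime=60 --time_based --group_reporting \
    --name=nvme0 --filename=/dev/nvme0n1 \
    --name=nvme1 --filename=/dev/nvme1n1
```

Add one --name/--filename pair per additional drive to reproduce the 9-drive load.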
Testing AMD’s motherboard RAID is a good idea. AMD acted like a dick and decided to stop updating the AM4 RAID drivers (even though they have been shitty for years), so I haven’t looked at AM5’s “new” motherboard RAID yet.
I was looking at mdadm on kernel.org recently (it acts as the VROC “driver” for Linux) and the last release is 2 years old, with a fair number of commits from Intel since, but none of them meaningful. So AMD is not alone in not pushing the softraid angle hard.
A bit off topic: I don’t understand what these Intel/AMD-provided solutions do that isn’t already offered by any operating system or dedicated RAID AIC. What exactly do they bring to the table that keeps them on people’s radar?
It’s RAID that doesn’t require extra hardware and you can boot from it.
Regular softraid (like whatever’s built into Windows, or mdadm without the hardware key) won’t let you boot because the BIOS can’t see the volume. You’d have to put a boot partition off-device for Linux (on Windows you can boot from mirrored dynamic-disk (LDM) volumes, but not RAID0 or 10).
I doubt anything would change performance-wise if I removed the VROC key from the motherboard. (Things might stop working completely in Windows if the Intel driver said “no” but there’s no way Linux would have any issues.)
You can easily boot Linux from mdadm RAID1 or any other software RAID if you leave the EFI partition out of the array. I’m not sure whether Windows Boot Manager can do the same if you only leave the EFI partition out.
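A minimal sketch of that layout, assuming two disks and placeholder device names: a small unmirrored ESP per disk, and everything else in the md array:

```shell
# Illustrative partitioning sketch (placeholder devices; destructive, do not paste blindly!)
# Each disk gets its own EFI System Partition (ef00) outside the array...
sgdisk -n1:0:+512M -t1:ef00 -n2:0:0 -t2:fd00 /dev/sda
sgdisk -n1:0:+512M -t1:ef00 -n2:0:0 -t2:fd00 /dev/sdb
# ...and the second partitions (fd00, Linux RAID) form the RAID1 the OS actually lives on.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```

The firmware only ever reads the plain ESPs, so nothing in the boot path depends on the board or vendor RAID.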
If anything it’s the safer choice because you’re not bound to any vendor in case something breaks, like the motherboard or the CPU.
Windows is completely hosed in that case; bootmgr can un-BitLocker stuff, but that’s as far as it goes. It can’t do anything with RAID volumes because there’s no kernel yet, so there are no drivers either; at that point it’s forced to still rely on the BIOS.
I’m pretty sure it’s a similar situation with Linux too: you need /boot to be mountable with just firmware I/O. The stuff on /efi is super thin by default; even the grub.cfg there just points to the real grub.cfg on the /boot partition via a filesystem UUID and the file path. Mdadm itself lives with the kernel in the initramfs image on /boot.
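For illustration, the stub grub.cfg on the ESP of a Fedora-style install looks roughly like this (the UUID is a placeholder for the /boot filesystem UUID); all it does is redirect GRUB to the real config:

```
# Stub grub.cfg on the EFI System Partition (sketch)
search --no-floppy --fs-uuid --set=dev 1234abcd-5678-ef90-1234-abcdef123456
set prefix=($dev)/grub2
configfile $prefix/grub.cfg
```

Everything interesting (kernel, initramfs with mdadm, the full grub.cfg) then lives on /boot, which can sit on the md array itself as long as GRUB's mdraid modules can assemble it read-only.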