A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

I’d be happy to investigate; I still haven’t been able to actually order one from Digi-Key. :frowning:

  1. Their system automatically flags me
  2. I have to contact them, send them document copies & stuff
  3. They say “Everything is fine now, but your order was automatically canceled, you can just make a new order”
  4. Back to 1.

Their customer service could be coming from the same people that brought you Broadcom’s.


Are you residing in North Korea or something? :joy:

Here is how to connect a huge amount of NVMe drives on the cheap. It’s a pity this kind of mining board is no longer in production.

7 of these slots are Gen 1 x1, wanna know which ones? xD
And even the ones that are Gen 3 x1 are only marginally faster than SATA, so there’s that…
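For reference, the gap between those slot types and SATA is easy to put into numbers. A back-of-envelope sketch (line encoding only, NVMe/AHCI protocol overhead ignored):

```python
# Back-of-envelope link bandwidth from line rate and encoding only;
# real-world protocol overhead is ignored.
def pcie_gbps(gt_per_s, lanes, enc=128 / 130):
    """Usable GByte/s for a PCIe link (Gen3+ uses 128b/130b encoding)."""
    return gt_per_s * lanes * enc / 8

def sata3_gbps():
    """SATA III: 6 Gbit/s line rate with 8b/10b encoding."""
    return 6 * (8 / 10) / 8

gen1x1 = pcie_gbps(2.5, 1, enc=8 / 10)  # Gen1 uses 8b/10b -> 0.25 GB/s
gen3x1 = pcie_gbps(8, 1)                # ~0.98 GB/s
sata = sata3_gbps()                     # 0.60 GB/s

print(f"Gen1 x1: {gen1x1:.2f} GB/s, Gen3 x1: {gen3x1:.2f} GB/s, "
      f"SATA III: {sata:.2f} GB/s")
```

So a Gen 1 x1 slot is less than half of SATA III, and even Gen 3 x1 tops out below 1 GByte/s before overhead.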

I admit that is ugly. :rofl:

I think American Digi-Key is trying to test out just how much a German customer will put up with…

I’ve been ordering stuff from around the globe since 1999 and have never experienced such treatment by an online shop. I have never f’d over any seller with forced PayPal or credit card refunds and my item return rate in general is < 1 %.

Trying to find an alternative - trustworthy - source for the 1200P-32i.


Were you trying to get the IX2S version of the card or the older IXS version?

I’d hardly think the card warrants any kind of export control.
Funny story: I recently bought some test equipment from a Cologne company, and apparently a significant number of the semiconductors inside it were produced in Russia, so US customs became unhappy upon import.

I got mine from mouser.

The current (only?) version available with the part number 1200UP32IX2S.

Listed as not actually in stock at mouser.de :frowning:


A small positive intermission: got my first little AM5 system with a 7800X3D and an ASUS ProArt X670E-CREATOR WIFI. It works fine with quality passive PCIe Gen4 x16-to-4xM.2 bifurcation adapters like the Delock 89017; 0 PCIe Bus Errors after quite intense loads:

For me personally, PCIe Gen5 SSDs aren’t that desirable at this point in time. An active PCIe Gen5 switch HBA, on the other hand, would pique my interest, but they don’t seem to be a thing yet.

The passive Delock 90091 PCIe Gen4 x8-to-2xU.2 bifurcation adapter also works perfectly with old-AF Samsung PM1733 7.68 TB U.2 SSDs with their original, buggy firmware.

Since it was said that the Adaptec HBA 1200p-32i would also handle firmware updates, I was a bit hopeful I could use it to update the PM1733 SSDs, which are “bulk” drives and don’t get firmware updates from Samsung directly. Firmware update files can be found, but Samsung’s update tools themselves don’t detect the SSDs, even when they are connected directly to CPU PCIe lanes.

So I figured out why setting primary and secondary boot mode for the drives screwed everything up so badly:

It’s funny that there is a more detailed description of the options and their effects in the RAID card’s BIOS than in the full-fledged Windows GUI.

Windows GUI description of same operation:


Oh, I was thinking you might have hit the page for the old version of the card and that this was the cause of the problem.
https://www.digikey.de/de/products/detail/microchip-technology/1200UP32IXS/16549308

I should mention I hadn’t actually tested this functionality out, but I saw that it was there, played around with it, and it didn’t seem broken.


Tested every user-accessible PCIe interface on the ASUS ProArt X670E-CREATOR WIFI simultaneously with spare M.2 NVMe SSDs, and it works fine with 9 NVMe drives that are electrically connected directly to the motherboard (only a 4xM.2 PCIe bifurcation adapter in the main PCIe x16 slot, no active PCIe switch HBAs).

  • 0 stability issues with parallel load on all 9 drives

  • PCIe AER only seems to work on CPU PCIe, not chipset PCIe (similar situation to AM4)

  • With a 7800X3D and dual-channel DDR5-5600 ECC, the combined maximum read or write throughput across all drives is about 39 GByte/s (I only have PCIe Gen4 and Gen3 SSDs for testing). Depending on the test data (9 instances of CrystalDiskMark), the poor CPU can’t keep up and sits at 100 % load. I’m curious how a hypothetical AM5 Zen 5 flagship with one 8-core Zen 5 3D V-Cache chiplet and one 16-core Zen 5c chiplet would perform here.

  • Nice improvement with AM5 over AM4: in this max-load scenario Windows keeps working completely fine. When I did similar tests on AM4, trying to find the absolute maximum of the platform, Windows became janky (temporarily frozen mouse cursor, navigating through File Explorer with forced pauses, etc.)
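For scale, a quick sanity check of that ~39 GByte/s against the theoretical ceilings (the drive mix and lane widths here are my assumptions, not the actual test configuration, and chipset-uplink sharing is ignored):

```python
# Rough ceilings; 128b/130b encoding for Gen3/Gen4, protocol overhead ignored.
def pcie_gbps(gt_per_s, lanes):
    """Usable GByte/s for a Gen3+ PCIe link."""
    return gt_per_s * lanes * (128 / 130) / 8

gen4_x4 = pcie_gbps(16, 4)  # ~7.88 GB/s per Gen4 x4 drive
gen3_x4 = pcie_gbps(8, 4)   # ~3.94 GB/s per Gen3 x4 drive

# Hypothetical mix of 9 drives (assumption): 7 Gen4 + 2 Gen3, all x4.
drive_ceiling = 7 * gen4_x4 + 2 * gen3_x4

# Dual-channel DDR5-5600: 2 channels x 8 bytes x 5.6 GT/s.
mem_bw = 2 * 8 * 5.6

print(f"drives: ~{drive_ceiling:.0f} GB/s, memory: ~{mem_bw:.1f} GB/s, "
      f"measured: ~39 GB/s")
```

At ~39 GByte/s both the drives and the memory interface still have headroom, which is consistent with the CPU pegging at 100 % being the actual bottleneck.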


Very impressive. Can you do RAID-0? I’m curious to see how the AMD version of softraid fares. VROC is trash and will decimate your IO numbers, no matter how many times Wendell says “VROC is good now”.

As for the CPU utilization: if polling is used for IO, that’s kind of understandable and doesn’t necessarily mean you’re actually capped.

What’s wrong with DiskSpd? CrystalDiskMark is just a (fugly) shell over it.

Well, I’m a GUI Normie-Pleb, that’s why I’m using CrystalDiskMark.

  • Testing AMD’s motherboard RAID is a good idea. AMD acted like a dick and decided not to update the AM4 RAID drivers anymore (even though they have been shitty for years), so I haven’t looked at AM5’s “new” motherboard RAID yet.

I was looking at mdadm on kernel.org recently (it acts as the VROC “driver” for Linux), and the latest release is 2 years old, with a fair number of commits from Intel since, but none of them meaningful. So AMD is not alone in not pushing the softraid angle hard.

A bit off topic: I don’t understand what these Intel/AMD-provided solutions do that isn’t already offered by any operating system or dedicated RAID AIC. What exactly do they bring to the table that keeps them on people’s radar?

It’s RAID that doesn’t require extra hardware and you can boot from it.

Regular softraid (like whatever’s built into Windows, or mdadm without the hardware key) won’t let you boot because the BIOS can’t see the volume. You’d have to put the boot partition off-array for Linux (on Windows you can boot from mirrored RAID1 dynamic (LDM) volumes, not RAID0 or 10).

I doubt anything would change performance-wise if I removed the VROC key from the motherboard. (Things might stop working completely in Windows if the Intel driver said “no” but there’s no way Linux would have any issues.)


You can easily boot Linux from mdadm RAID1 or any other software RAID if you leave the EFI partition out of the array. I’m not sure whether Windows Boot Manager can do the same if you just leave the EFI partition out.
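The layout being described looks roughly like this for a two-drive mdadm RAID1 setup (device names and sizes are illustrative, not from an actual system in this thread):

```
nvme0n1
├─nvme0n1p1  512M  EFI System (FAT32)  → mounted at /boot/efi, NOT in the array
└─nvme0n1p2  rest  Linux RAID member  ┐
nvme1n1                               ├→ /dev/md0 (RAID1) → / (incl. /boot)
├─nvme1n1p1  512M  EFI System (spare, synced manually or via a hook script)
└─nvme1n1p2  rest  Linux RAID member  ┘
```

Keeping a second, manually synced ESP on the other drive means the machine can still boot if the first drive dies, which is the whole point of mirroring in the first place.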

If anything it’s the safer choice because you’re not bound to any vendor in case something breaks, like the motherboard or the CPU.

For me personally it’s this situation:

  1. Be able to boot Windows off of the volume*.

  2. Compatible with BitLocker in software AES, not TPM mode.

  3. NO interruption of operation of a running system or data loss in case of a physical drive defect.

  4. Would be great if the performance of modern NVMe SSDs wasn’t completely wasted but this point is at the end of my priority list.

*I would love to boot Windows over the network from a potent but efficient ZFS server (which would handle all the complex stuff, so Windows wouldn’t have to) via a 40 or 100 GbE ethernet adapter. But I am still too ignorant regarding Linux/ZFS, and my little NAS project for learning the necessary skills, at least for my specific use case, unfortunately doesn’t seem to be something that attracts the people with practical knowledge of this stuff.

Windows is completely hosed in that case. It can un-BitLocker stuff from bootmgr, but that’s as far as it goes. It can’t do anything with RAID volumes because there’s no kernel yet, so there are no drivers either; at that point it’s forced to still rely on the BIOS.

I’m pretty sure it’s a similar situation with Linux, too: you need /boot to be mountable with just BIOS IO. The stuff on the EFI partition is super thin by default; even the grub.cfg there just points to the real grub.cfg on the /boot partition via a filesystem UUID and a file path. Mdadm itself lives with the kernel in the initramfs on /boot.
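For illustration, the stub grub.cfg that Debian-style installs drop on the EFI partition looks roughly like this (the UUID is a placeholder), with everything substantial living on the pointed-to partition:

```
search.fs_uuid 1234abcd-5678-90ef-aaaa-bbbbccccdddd root
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
```

When /boot is its own partition (e.g. on top of mdadm), the UUID would be that of the /boot filesystem and the prefix would become ($root)'/grub' instead.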