Maybe a cheap alternative to expensive PCIe Switch Solutions?

Hello everyone,

Long-time follower of LVL1Techs on YT, but I only joined the forum today. I’m searching for information with this post, but let me start from the beginning.

I have an existing “server” setup and am quite happy with it. My server consists of:

  • Mainboard Supermicro A2SDi-8C+-HLN4F
    (Mini-ITX with soldered Intel Atom C3758 - 8-core/8-thread CPU)
    The mainboard has 4 SATA ports plus 2 MiniSAS HD ports providing another 8x SATA directly onboard; all storage is directly attached.
  • 64 GB ECC RAM

Storage “OS”
2x Supermicro SATADOM 64GB (mirrored OS)

Storage “Data Archive”
Data HDDs: 8x 18TB HDDs in ZFS RAIDZ2 (WDC WUH721818ALE6L4)
ZFS ZIL/Log: 1x Intel Optane M10 32GB SSD (MEMPEK1J032GA) in the M.2 slot on the mainboard
L2ARC: 1x Samsung 860 Pro 256 GB

Storage “Proxmox VMs and Docker Containers”
1x Samsung 860 Pro 1TB
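
For reference, the data pool layout corresponds roughly to commands like these (pool name and device paths are placeholders, not copied from my actual setup):

    # 8x 18TB in a single RAIDZ2 vdev (placeholder /dev/disk/by-id names)
    zpool create tank raidz2 \
      /dev/disk/by-id/wwn-disk1 /dev/disk/by-id/wwn-disk2 \
      /dev/disk/by-id/wwn-disk3 /dev/disk/by-id/wwn-disk4 \
      /dev/disk/by-id/wwn-disk5 /dev/disk/by-id/wwn-disk6 \
      /dev/disk/by-id/wwn-disk7 /dev/disk/by-id/wwn-disk8

    # the 32GB Optane as dedicated SLOG (log vdev) for sync writes
    zpool add tank log /dev/disk/by-id/nvme-optane-m10

    # the 256GB 860 Pro as L2ARC (cache vdev)
    zpool add tank cache /dev/disk/by-id/ata-860pro-256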

This server quite literally does everything I need - and the list of services I’m hosting after 8 years is surprisingly long.

This server lives in a “U-NAS” 8-bay case - I bought it with the predecessor Supermicro mainboard from eBay around 7-8 years ago. I’m planning to migrate it into a 45HomeLab HL15 to have more space for PCIe cards that need good ventilation - my HBA is not really happy right now.

“Sadly” I now found a new use case: ripping all our DVDs to the server and hosting a Jellyfin instance. BUT Jellyfin would need a GPU for transcoding, and the Intel Atom does not have an iGPU. Sadly I only have one PCIe slot on this mainboard, and it is already occupied by an external SAS HBA for my Quantum Superloader 3 tape bot (yes, I do tape backups xD).

So the problem is: one PCIe 3.0 x4 slot, but I want to use two PCIe cards.

I already found the PCIe switch solutions from One Stop Systems and others - sadly, they are far too expensive for me :frowning:

So, like the tech nerd I am, I let this problem bounce around in my head for the last few months and I think I have found a solution - but I don’t want to buy all the parts only to find out it does not work.

My proposed solution for a cheaper “PCIe Switch”:

  • Dell 0P31H2 - “NVMe U.2 PCIe x16 Controller Extender Expansion Card”

  • Delock 85694 - MiniSAS-HD to OCuLink cable

  • SFF-8611 (OCuLink) to PCIe x16 adapter board
    /SFF-8612-SFF-8611-PCI-Express-Mainboard-Grafikkarte/dp/B0BP1ZWHHV

So: I’m looking for someone who might know the answers to the following questions.

  1. Does someone have hands-on experience with one of these Dell 0P31H2 cards?
  2. Does someone have a manual for these Dell Expansion cards?
  3. Are they really only PCIe expanders? The really big heatsinks suggest otherwise.
  4. Would someone who has this Dell expander card be willing to get the cable and the Delock adapter to try this combination? I would of course pay you for the cable and adapter from Amazon if you are willing to help me out here :slight_smile:

And for everyone who made it this far: big thanks just for reading all of this :slight_smile:

Greetings
CPUMiner

Hi, as far as I can see from the mainboard specs there is an M.2 slot with PCIe 3.0 x2. So if this speed is enough for one of your PCIe cards, you only need something like this:
https://a.co/d/6cJONLs

@maTTko-gusgus
Sadly the M.2 slot is already occupied by the Intel 32GB Optane SSD. I need this SSD as the ZFS log device in front of the big HDD pool so sync writes get finished much quicker. I did not have this SSD from the start and had really big iowait problems on the HDD pool.
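
If someone wants to see the effect, a rough way to check it (pool name and dataset path are placeholders) is to run a sync-heavy write and watch the log vdev soak it up:

    # sync-heavy write test: fsync after every write, like NFS or databases do
    fio --name=syncwrite --directory=/tank/test --rw=randwrite \
        --bs=16k --size=1G --fsync=1

    # watch per-vdev I/O while the test runs - with the Optane attached as SLOG,
    # the log device should take the brunt of the sync writes
    zpool iostat -v tank 5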

So, answering my own thread. Found this thread here:
https://forums.servethehome.com/index.php?threads/dell-ypnrc-questions.26795/page-2

Someone mentions that the MiniSAS(-HD) connector on these Dell cards seems to be proprietary, aka non-standard. Sad, so not a solution for me.

I think the issue might be the PCIe enumeration scheme that particular board’s BIOS uses.
Quite a few people using the more generic SSD7120 run into the same problem because they are pairing it with motherboards whose BIOS was not designed for this.
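
If you want to see what the firmware and kernel actually enumerated, plain pciutils is usually enough, something like:

    # print the PCIe topology as a tree; a switch card should appear as one
    # upstream bridge with several downstream bridges hanging off it
    sudo lspci -tv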

First off: what I meant was a proprietary pinout of the MiniSAS(-HD) connector on these Dell cards - according to the ServeTheHome forum thread.

But yeah, BIOS could be another factor in this whole mess :smiley:

But: I found a much cheaper card which should do the same job:
Intel AXXP3SWX08040 - a PCIe x8 to 4x OCuLink card. I found mentions on the internet that this card worked out of the box for some people on a variety of Linux distros (RHEL, SLES, but also Ubuntu). It cost me 125 €, so I already bought the card.

Will follow up here when I have the hardware and can test it.


If I understand what you want to do correctly: without one of those much more expensive PCIe switches, your board would need to support bifurcation to address more than one card in a physical slot. With only an x4 slot I wouldn’t expect that to be possible, and a quick look at your motherboard manual supports my suspicion.

On second look you might be right - the photo of the Dell card shows attached cables that look odd; usually those grey cables are reserved for the sideband connection because they are lower speed.

Yeah, nope :smiley: that’s exactly my problem - I know my mainboard does not support bifurcation. But I bought the aforementioned Intel card (it was “only” 125 €) and it is a PCIe switch card. They seem to have been a stopgap from server vendors for using multiple PCIe SSDs in one PCIe slot in servers that don’t support bifurcation.

So, follow-up: sadly my solution does not work. I can see the Intel card in my Linux OS - the card itself seems to work. But I don’t see any PCIe device that I plug into the “OCuLink to PCIe x16” adapter board. Maybe one of the connections has a non-standard pinout, or the Chinese OCuLink to PCIe x16 adapter board just doesn’t work. And no, I also tested non-GPU cards, so it can’t be a lack of power from the SATA power port.
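
For anyone who wants to debug the same combination, checks along these lines should show whether the switch’s downstream ports see anything at all (the 0000:xx:yy.z address is a placeholder - take it from the tree output; nothing here is specific to the Intel card):

    # force the kernel to re-scan the PCIe bus after plugging a card
    # into the OCuLink adapter, in case it was missed at boot
    echo 1 | sudo tee /sys/bus/pci/rescan

    # the switch's downstream ports show up as PCI bridges even with nothing
    # attached - the card on the adapter should hang off one of them
    sudo lspci -tv

    # check whether a given downstream port trained a link at all
    sudo lspci -vv -s 0000:xx:yy.z | grep -i lnksta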