M.2 layout for Rackintosh

I'm about to finish up my Rackintosh build once I get the two 1TB Samsung PM981s. The idea is to run them in RAID 0 as a scratch drive at max speed, which is why I want one of them in the main M.2 slot and one in the second full-length PCIe slot of my X470 Taichi Ultimate.

That leaves me with the system drive (970 EVO) in the other onboard M.2 slot, which goes through the chipset. Now, I know that's not the fastest way to run a drive (because chipset), but so far I think it should be fine.


Sounds fine to me. If bifurcation is supported on the board, couldn't you put both scratch disks on a single riser?


Oh damn, have to check. That would be great! I could have those two drives in the second PCIe slot running 2x4 lanes, the 970 EVO in the main M.2, and use the chipset lanes for something else.


So, I wanna use a dual M.2 to PCIe x8 card, but there don't seem to be many of those.
Does anyone know if a Dell 0JV70F would be the right part for that?
(Still have to check the BIOS for bifurcation, but anyway.)

AOC-SLG3 from Supermicro
PEX8M2E2 from Startech

The Supermicro card is suspiciously cheap; it's probably proprietary and meant for SM motherboards only.


The StarTech seems to have a switch controller on board instead of letting the motherboard do the bifurcation. That's why it's so expensive.

Found the Supermicro on Amazon. It was the last one; insta-ordered. :stuck_out_tongue:
So, we'll see if / how this works out.


OK, got the drives and the Supermicro dual NVMe card. Installed everything and switched the PCIe mode from x16 to x4x4x4x4, which enables the BIOS RAID options. I didn't create an array because I want to manage the drives in the OS. A quick Kubuntu install shows all drives. One thing left is to make sure the GPU is running at x8 and not x4. I'm hoping the board is smart enough for that.
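For managing the array in the OS, a minimal sketch with mdadm could look like the following. The device names are assumptions (check lsblk for the real ones on your system), and --create wipes the member drives:

```shell
# Sketch: striped (RAID 0) scratch array across the two PM981s.
# /dev/nvme1n1 and /dev/nvme2n1 are assumed names -- verify with lsblk first!
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme1n1 /dev/nvme2n1
sudo mkfs.ext4 /dev/md0            # any filesystem works; ext4 as an example
sudo mount /dev/md0 /mnt/scratch   # mount point is arbitrary
```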

@wendell? You know a good way to check that? sudo lspci -vv wasn't really helping because it shows width x16, which is impossible.

Quick sitrep:
ASRock X470 Taichi Ultimate (3900X, 64GB RAM)
970EVO 500GB in primary onboard M.2
Vega 56 in primary PCIe slot
2x PM981 1TB in AOC-SLG3 dual NVMe to x8 PCIe card in second full slot

Asus PCIe USB 3 card in bottom slot (chipset lanes)
(for when stupid USB thingies go all iffy)


That’s awesome that the card is working! I might be grabbing one so I can fan out an x8 slot into x4 slots using NGFF riser cables.

Try dmesg | grep link or something along those lines. At boot, the kernel will complain about PCIe devices that are connected at less than their optimal bandwidth. Sorry, I'm not at my machine right now to recall the exact command.

According to Stack Exchange, lspci run as root will give you the info as well.
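If lspci isn't handy, the same information is exposed under sysfs; a minimal sketch (no root needed for these attributes on recent kernels) might be:

```shell
# Sketch: negotiated vs. maximum PCIe link speed/width for every device,
# read straight from sysfs. Prints nothing on systems without PCI devices.
for d in /sys/bus/pci/devices/*; do
  [ -r "$d/current_link_width" ] || continue   # skip devices without a link
  printf '%s: %s x%s (max %s x%s)\n' \
    "$(basename "$d")" \
    "$(cat "$d/current_link_speed")" "$(cat "$d/current_link_width")" \
    "$(cat "$d/max_link_speed")"     "$(cat "$d/max_link_width")"
done
```

A line where the current width is lower than the max width is the downgrade you're looking for.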


On my x399 system

# dmesg | grep link
[    0.339663] pci 0000:07:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x2 link at 0000:02:04.0 (capable of 32.000 Gb/s with 5 GT/s x8 link)
[    0.357206] pci 0000:43:00.0: 7.876 Gb/s available PCIe bandwidth, limited by 8 GT/s x1 link at 0000:40:01.3 (capable of 126.016 Gb/s with 8 GT/s x16 link)
[    0.359361] pci 0000:44:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x16 link at 0000:40:03.1 (capable of 126.016 Gb/s with 8 GT/s x16 link)
[    6.006833] [drm] PCIE gen 3 link speeds already enabled
[   12.358022] ixgbe 0000:08:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
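The bandwidth figures in that output follow from the line encoding overhead: PCIe 3.0's 8 GT/s uses 128b/130b, so each lane carries about 7.877 Gb/s of usable bandwidth, while PCIe 1.x/2.0 use 8b/10b. A quick sanity check of the kernel's numbers:

```shell
# Check the kernel's numbers: per-lane usable bandwidth times lane count.
# PCIe 3.0: 8 GT/s, 128b/130b encoding; PCIe 1.x/2.0: 8b/10b encoding.
awk 'BEGIN {
  gen3_lane = 8 * 128 / 130   # ~7.877 Gb/s per lane
  printf "x16 gen3: %.3f Gb/s\n", gen3_lane * 16  # kernel: 126.016 (truncates per-lane first)
  gen2_lane = 5 * 8 / 10      # 4 Gb/s per lane
  printf "x2 gen2:  %.3f Gb/s\n", gen2_lane * 2   # kernel: 8.000
}'
```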

One of the devices the kernel complained about: this is a GPU I have plugged into an x8 slot using an x1 mining riser :fearful:

# lspci -vvvs 43:00.0 
43:00.0 VGA compatible controller: NVIDIA Corporation GP107GL [Quadro P400] (rev a1) (prog-if 00 [VGA controller])
                LnkSta: Speed 2.5GT/s (downgraded), Width x1 (downgraded)

The other GPU downgrades its speed when it's "idle", but it shows that the full x16 link is in use:

# lspci -vvvs 44:00.0
44:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660] (rev a1) (prog-if 00 [VGA controller])
                LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)

Lastly, my SAS HBA, which I have plugged into the PCH expansion slot. I thought that slot was PCIe 2.0 x4, but it looks like the card only connected at x2 lanes :confounded: I believe the card is PCIe 2.0 x8, or at least it has an x8 electrical connection.

# lspci -vvs 07:00.0
07:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
                LnkSta: Speed 5GT/s (ok), Width x2 (downgraded)

Well… shit. It looks like I can have either x8x8 or x4x4x4x4, not x8x4x4.
That’s really annoying.

Exotic bifurcations are really only something found on enterprise-level gear; we gotta thank AMD for including even the basic options on a desktop platform.

Yeah, it looks like lanes can only be split into equally sized groups, at least on consumer hardware. Might be a technical limitation.

Now I have to rethink this. Either I go with the boot drive in the chipset slot and continue the RAID 0 idea, or I keep everything else like it is and reduce my scratch drive to a single drive.
I'm leaning towards option two.


Yup, that’s what I’m gonna do.

ASRock X470 Taichi Ultimate (3900X, 64GB RAM)
970EVO 500GB in primary onboard M.2
Vega 56 in primary PCIe slot
1x PM981 1TB in M.2 to PCIe card in second full slot
Asus PCIe USB 3 card in bottom slot (chipset lanes)

A bit boring now but at least free of surprises. I might add one or two SATA SSDs to compensate.

Thanks man!

I was looking for something like this. All x8 cards I knew of were equipped with a PLX switch, which made them unnecessarily expensive and hot (tiny rattling 40mm fan).

Seeing that Amazon EU has 3 in stock, so when my TRX40 build is finally up and running (maybe, perhaps, sometime :wink: ) I'll know what to look for.