Intel Optane Help

I have an array of 4 Intel Optane NVMe (256 GB NAND / 16 GB Optane) M.2 drives installed in an ASUS M.2 expansion card that I intend to use as a metadata special device for 4 IronWolf Pro HDDs in ZFS. I see all 4 of the drives in lsblk and have them configured in the zpool… but the drives only show up as 13.4 GB devices. When it comes to the metadata special device, is it only using the smaller Optane segment? Did I do something wrong? Do I have to use Windows to see the NAND portion of the drive? I couldn’t seem to find a post that already addressed a similar question, so I apologize if this is a redundant post.
System: ASRock Rack X470D4U with a Ryzen 7 5800X
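
For reference, this is roughly what I’m doing; the pool and device names below are illustrative placeholders, not my exact setup:

```
# Sizes in bytes as the kernel sees them -- this is where the ~13.4 GB shows up.
lsblk -b -o NAME,SIZE,MODEL

# A ~13.4 GB figure is consistent with the ~16 GB Optane half of an H10
# (minus overprovisioning), rather than the 256 GB NAND half.

# Attaching the mirrored metadata special device (names are placeholders):
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
zpool status tank
```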

Looking around, it seems that these drives may only be supported on Windows and/or on certain Intel motherboards, for various conflicting reasons. It’s hard to figure out which is true, but I see plenty of people with similar problems.
Looks like you might be SOL. I can’t find any report of it working correctly in Linux.

tbqh, I’m surprised; I thought these would just be two block devices on one bus, kind of like how your GPU also shows up as an audio device (DAC).

Thanks for the response. Wendell did a video about using Optane specifically with ZFS, so I’m sure there’s a way… I just don’t know enough, lol. Or… I bought the wrong type of Optane.

Yeah, Optane works fine. It’s the H10 hybrid Optane/NAND flash drives that are the problem.
If you get a 905P, 900P, P1600X, P4800X, or P5800X, those are fine. The D4800X you want to avoid because of its dual-port x2/x2 configuration.


I am actually testing these Optanes in non-Intel systems. The equipment got delivered and is sitting in a box. I will have results in a week.

But no matter the results, these hybrid 3D XPoint + NAND SSDs are not cost-effective on systems that don’t support them natively. For your use case, just get the vanilla Optanes, which expose themselves as regular single SSDs.

Intel Optane NVMe (256 GB NAND / 16 GB Optane) M.2 drives

These “H10/H20 series” hybrid drives are an early experiment that kind of flopped, precisely because they require special-snowflake Intel hardware to fully utilize, because vendor lock-in makes CEOs hot and bothered. Not to be confused with the M10 drives (like I did in an earlier edit), which are solely (early and low-performing) Optane.

These drives are basically two NVMe drives on one stick, which requires the motherboard to support PCIe lane bifurcation down to two lanes (x2/x2). For the vast, overwhelming majority of boards, if they support bifurcation at all, it’ll look like an option in the UEFI for splitting an x16 or x8 slot into a mix of x8 or x4 links. x2 or x1 splits, as mentioned, are basically unheard of outside of enterprise PLX NVMe backplanes.

Essentially, only special motherboards can see both the NAND and Optane sides of the drive. All other boards will only ever be able to see one side of the drive, and which part they do see isn’t predictable. Also note that the speed of each side of these H10 drives is limited to x2 PCIe 3.0 bandwidth, not that the NAND or the Optane of this generation is very fast anyway. There is a reason these are so cheap on eBay.
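
If you want to confirm which half your board actually enumerated, something like this should tell you:

```
# List every NVMe controller the PCIe bus enumerated; with an H10 on a
# non-supporting board, you'll typically see only one controller per stick.
lspci -nn | grep -i 'non-volatile'

# nvme-cli shows model and capacity per namespace (needs the nvme-cli package).
sudo nvme list
```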

You are not the first to get confused by this almost deliberately confusing product.


I’ll be very interested in hearing your test results. I had these sitting in a box and finally got around to tinkering with them… only to be met with frustration and disappointment. But I’m sure I’ll figure out something to do with them at some point… lol

My board only bifurcates down to x4/x4/x4/x4. I don’t think I’ve ever had one that splits further. My Windows machine is all NVMe already, so I don’t expect it’ll do any good there. I appreciate your response and the info. Thanks.

Calling it quits… I know what does not work, and I also have some idea of what could be tried next. tl;dr: one of the new Threadripper motherboards with a BIOS that supports bifurcation down to two lanes, or a different cable than the one I tested. Either way, an Intel Optane H10/H20 is a white elephant.


I experimented with the 32 GB + 512 GB (Optane/NAND) modules some time ago (HBRPEKNX0202A, IIRC) on an X299 motherboard. When plugged into the CPU-attached M.2 slot, the Optane (32 GB) would show up, I think as x2 Gen 3; when plugged into the PCH slot, the “regular” NAND (512 GB) would show up as x2.
The board was supposed to support the Optanes, but I couldn’t find any UEFI setting to expose them as two x2 devices (it was a Rampage VI Apex, BTW).
I plan on getting a Rome EPYC CPU + motherboard soon, and when I do, I’ll check the support there.

On a side note, I believe that whether the device pops up as 32 GB (Optane) or 512 GB (NAND) depends on the lane mapping order, i.e. the lanes can be mapped 0→3 or in reverse order 3→0. My guess is that on that motherboard the CPU M.2 slots use one mapping while the PCH uses the other, but I didn’t have time to test it properly.
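
One way to sanity-check the negotiated width (the PCI address is a placeholder for whichever half shows up on your system):

```
# Negotiated link width/speed of the visible half, straight from sysfs;
# a width of 2 here would match the x2 behavior described above.
cat /sys/bus/pci/devices/0000:01:00.0/current_link_width
cat /sys/bus/pci/devices/0000:01:00.0/current_link_speed
```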


If you can find someone with some PCIe experience to design a board we can order from JLCPCB that breaks the x2/x2 out into x4/x4 slots (just changing the physical lane placement), I might be willing to fund that if it’s not a huge timesink.

Not that many H10/H20s in circulation anymore, though. Notebooks can do this x2/x2 trick reliably up through, but not including, Alder Lake.


At one point I wanted to design an x4 board with a PCIe Gen 3 switch, but an x8 card for a host with x4/x4 bifurcation sounds much easier.

Does anyone remember if all differential pairs in PCIe need to have the same length, or only pairwise? (i.e. if only len(TX0p) == len(TX0n), len(RX0p) == len(RX0n), len(TX1p) == len(TX1n), … or len(TX0p) == len(TX0n) == len(RX0p) == len(RX0n) == len(TX1p) == len(TX1n) == …)

RE: PCIe switch: does anyone know a remotely good and affordable chip for this task? I tried searching Mouser some time ago and again today, with very mediocre results:
https://eu.mouser.com/c/semiconductors/interface-ics/pci-interface-ic/?q=gen3&type=Switch%20-%20PCIe&sort=pricing – the cheapest Gen 3 parts I could find are ~$70 – I think that’s close to what I paid for my H10s :smiley:

This PEX8712, for instance, appears to support exactly the configuration we’d need: an x4 upstream port and two x2 downstream ports. The price is a killer, though, and of course there’s no publicly available documentation besides a “Product brief”; yuck.

On a side note, AMD chipsets like the B550 could serve as excellent PCIe switches, as they are themselves PCIe devices (and can even be daisy-chained). Interestingly, I could only find Intel chipsets in the wild (e.g. here) together with pretty extensive documentation, but those use DMI rather than PCIe.

Have any 13th Gen systems been found to work with the H10/H20? I know Intel officially does not support it after 12th Gen, but there are some 13th Gen mobile systems still being sold and I’m considering one for a portable gaming setup.

That sucks that it didn’t work out in the end. I applaud your efforts! Good luck and Happy New Year!

