Optane H10 (32GB Cache + 500GB SSD) On Older Platforms (Haswell)?

Ages ago, I picked up an H10 for cheap off eBay, since it’s a cool piece of gear to go in my cheapo Haswell server. But the older platform doesn’t support it properly: it’s a 50/50 chance whether it gets detected as the 32GB fast cache drive or as the 500GB storage drive, with no way to get both at once.

Did some reading, and apparently it needs platform support to be detected as two x2/x2 PCIe drives? The most I can find for Haswell is some motherboards supporting x4/x4/x4/x4 bifurcation on the x16 slots.

Is there an adapter I can chuck into a PCIe x4 slot and put the Optane drive in? Or some other way to get this running?

At this point, I’d just be happy to have the 32GB drive always show up instead of only showing up randomly.
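
(For what it’s worth, on a Linux box I just check what actually enumerated after each boot with something along these lines; the nvme command assumes the nvme-cli package is installed, and device names will vary:)

$ lsblk -d -o NAME,SIZE,MODEL                # which block devices came up, and at what size
$ sudo nvme list                             # NVMe controllers and namespaces (nvme-cli)
$ sudo lspci -nn | grep -i 'non-volatile'    # how many NVMe endpoints are visible on the PCIe bus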

1 Like

Yes, that’s why only later platforms support it, and they only do this trick when an Optane drive like that is connected to their M.2 slots.

You could get an x16 to x8/x8 splitter and set the slot to work as x4/x4/x8, so you can use one half as a dual-PCIe connection for that Optane drive and keep the other half free for other components. Though I doubt it’s gonna work, because it’s not just a matter of splitting PCIe lanes but of platform-level support. But it’s something you could try, I guess.
You’d be wasting 4 PCIe lanes doing it this way, but it’s the only way because the point is giving the drive two separate channels to talk to the system, not just giving it 4 lanes.
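
If you do try the splitter route, lspci should at least tell you what actually came up: whether the Optane shows as one endpoint or two, and what link width each one negotiated. The bus address below is just a placeholder, grab yours from the first command:

$ sudo lspci -nn | grep -i 'non-volatile'                 # note the bus address(es) of the Optane endpoint(s)
$ sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap:|LnkSta:'   # 03:00.0 is an example address; LnkSta shows the negotiated width (x2 vs x4)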

2 Likes

Not only that, you need a dedicated adapter just for the H10/H20 that will take 2 lanes from the first x4 and 2 from the second x4, something like this:

(Yes, I kinda-sorta did that but never took the time to finish it off and send it to manufacturing)

In my experience, at least on my boards, I can reliably get the 512G part to show up in the PCH slots, while the 32G part usually shows up in the CPU slots (be it dedicated M.2, risers, or 4x x4 bifurcation cards), though it occasionally doesn’t appear at all, usually after fiddling in the BIOS.
BTW I have an X299 board and could never find an option (in BIOS) to get both, but I’ve read that some people got both working, so it might even be vendor-dependent.
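
(In case anyone wants to reproduce the PCH-vs-CPU pattern: on Linux, the sysfs path of the nvme device shows which root port it hangs off, so you can tell chipset lanes from CPU lanes without opening the case. Device names here are examples.)

$ readlink -f /sys/class/nvme/nvme0/device    # full PCI path of the controller; match the root port against your board’s block diagram
$ lspci -tv                                   # same thing as a topology tree; PCH-attached devices sit behind the chipset’s bridges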

I have one last glimmer of hope in PCIe cards with dedicated PCIe switches like ASM2824, e.g. this one: https://aliexpress.com/item/1005006034838043.html, but I couldn’t find anyone who would confirm if it works with the H10s.

3 Likes

It’s something I’ll give a shot once I get paid. Found a couple of interesting cheap adapters on AliExpress that look worth trying. One’s an NVMe & SATA PCIe x4 adapter, the other is a PCIe x16 to x4/x4/x4/x4 splitter.

1 Like

Yeah, there’s practically nothing online about people toying around with these Optane SSDs. I wonder if there’s something that can be done with this? Get a couple of NVMe SSDs, and put another adapter in there just for the Optane: https://www.aliexpress.com/item/1005004839004306.html?

I think I’ll get these just to try, because what the hell, this stuff is interesting and it’s fun to screw around with.

1 Like

Practically nothing on the high seas. But this is L1T, and someone’s probably tried something crazy. :slightly_smiling_face:

I’m tempted to get an Adaptec HBA Ultra 1200-32i to see if it works with the cable I still have for the super special Optanes. I was never able to get the Broadcom P411W-32P to bifurcate its lanes down to x2x2x2x2, but the Adaptec HBA has a software utility to dictate how many lanes per port.

1 Like

Anything that says it “splits” or “bifurcates” WILL NOT WORK, as they do so on x4 boundaries and you need x2. That x4/x4/x8 card is not what you want; maybe the one they show slotted into it would work.

Anything that is NVMe & SATA is garbage unless you have a use case for SATA M.2 (unlikely).

2 Likes

Oh boy the price tag on those. Yeah that’s a pass, I might as well just buy big honkin SSDs at that point.

Okay, so one thing I found out on my crappy Machinist X99 K9 motherboard is that the Optane will show up as the 32GB drive if I have the SATA M.2 slot filled with a SATA SSD. Which is very wack.

So my thinking was, maybe it’d be the same with that adapter? Chuck in a SATA SSD and it’d show the Optane as the 32GB drive. But if the SATA slot isn’t filled, would that adapter show both?

With the money I’d be spending on adapters just to try it out, I’d be better off just buying those itty-bitty Optanes that don’t come paired with the larger, slower drive.

On a side note, how do those M10 drives (only 16/32/64GB) stack up for random reads? Are they still leading the pack, or are there comparable or better options for the same price these days?

Anecdotally:

nvme0n1

Linear write:

admin@zfserver:~ $ sudo fio --rw=write --bs=8M --threads=4 --iodepth=32 --ioengine=libaio --size=16G --name=/dev/nvme0n1
/dev/nvme0n1: (g=0): rw=write, bs=(R) 8192KiB-8192KiB, (W) 8192KiB-8192KiB, (T) 8192KiB-8192KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]
/dev/nvme0n1: (groupid=0, jobs=1): err= 0: pid=691: Sun Feb 11 19:13:23 2024
  write: IOPS=129, BW=1036MiB/s (1087MB/s)(16.0GiB/15810msec); 0 zone resets
    slat (usec): min=4867, max=18921, avg=7715.24, stdev=568.19
    clat (usec): min=4, max=277780, avg=237638.02, stdev=18609.91
     lat (msec): min=7, max=285, avg=245.35, stdev=18.68
    clat percentiles (msec):
     |  1.00th=[  155],  5.00th=[  230], 10.00th=[  232], 20.00th=[  234],
     | 30.00th=[  236], 40.00th=[  239], 50.00th=[  241], 60.00th=[  241],
     | 70.00th=[  243], 80.00th=[  243], 90.00th=[  247], 95.00th=[  253],
     | 99.00th=[  264], 99.50th=[  268], 99.90th=[  275], 99.95th=[  279],
     | 99.99th=[  279]
   bw (  KiB/s): min=606208, max=1114112, per=98.56%, avg=1045933.42, stdev=85980.55, samples=31
   iops        : min=   74, max=  136, avg=127.68, stdev=10.50, samples=31
  lat (usec)   : 10=0.05%
  lat (msec)   : 10=0.05%, 20=0.05%, 50=0.20%, 100=0.29%, 250=93.16%
  lat (msec)   : 500=6.20%
  cpu          : usr=3.72%, sys=96.03%, ctx=34, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1036MiB/s (1087MB/s), 1036MiB/s-1036MiB/s (1087MB/s-1087MB/s), io=16.0GiB (17.2GB), run=15810-15810msec

Disk stats (read/write):
  nvme0n1: ios=50/69318, sectors=2128/17745408, merge=0/2149881, ticks=3/234125, in_queue=234127, util=61.49%

Linear read:

admin@zfserver:~ $ sudo fio --rw=read --bs=8M --threads=4 --iodepth=32 --ioengine=libaio --size=16G --name=/dev/nvme0n1
/dev/nvme0n1: (g=0): rw=read, bs=(R) 8192KiB-8192KiB, (W) 8192KiB-8192KiB, (T) 8192KiB-8192KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][r=1113MiB/s][r=139 IOPS][eta 00m:00s]
/dev/nvme0n1: (groupid=0, jobs=1): err= 0: pid=712: Sun Feb 11 19:14:49 2024
  read: IOPS=178, BW=1425MiB/s (1494MB/s)(16.0GiB/11500msec)
    slat (usec): min=5343, max=12083, avg=5603.73, stdev=310.70
    clat (usec): min=10, max=208627, avg=172293.72, stdev=13589.15
     lat (msec): min=5, max=220, avg=177.90, stdev=13.67
    clat percentiles (msec):
     |  1.00th=[  112],  5.00th=[  169], 10.00th=[  169], 20.00th=[  169],
     | 30.00th=[  169], 40.00th=[  171], 50.00th=[  171], 60.00th=[  176],
     | 70.00th=[  176], 80.00th=[  180], 90.00th=[  182], 95.00th=[  182],
     | 99.00th=[  184], 99.50th=[  192], 99.90th=[  203], 99.95th=[  205],
     | 99.99th=[  209]
   bw (  MiB/s): min=  832, max= 1472, per=98.52%, avg=1403.64, stdev=134.06, samples=22
   iops        : min=  104, max=  184, avg=175.45, stdev=16.76, samples=22
  lat (usec)   : 20=0.05%
  lat (msec)   : 10=0.05%, 20=0.10%, 50=0.24%, 100=0.44%, 250=99.12%
  cpu          : usr=0.35%, sys=44.68%, ctx=64995, majf=0, minf=697
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1425MiB/s (1494MB/s), 1425MiB/s-1425MiB/s (1494MB/s-1494MB/s), io=16.0GiB (17.2GB), run=11500-11500msec

Disk stats (read/write):
  nvme0n1: ios=129882/0, sectors=33249792/0, merge=0/0, ticks=29359/0, in_queue=29359, util=99.27%

Random RW:

admin@zfserver:~ $ sudo fio --rw=randrw --bs=4k --threads=4 --iodepth=32 --ioengine=libaio --size=4G --name=/dev/nvme0n1
/dev/nvme0n1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][r=26.0MiB/s,w=26.7MiB/s][r=6668,w=6825 IOPS][eta 00m:00s]
/dev/nvme0n1: (groupid=0, jobs=1): err= 0: pid=724: Sun Feb 11 19:16:05 2024
  read: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(2049MiB/38347msec)
    slat (usec): min=26, max=10583, avg=67.68, stdev=66.21
    clat (usec): min=9, max=24230, avg=1135.01, stdev=822.46
     lat (usec): min=70, max=24748, avg=1202.69, stdev=868.55
    clat percentiles (usec):
     |  1.00th=[  652],  5.00th=[  775], 10.00th=[  832], 20.00th=[  898],
     | 30.00th=[  955], 40.00th=[ 1004], 50.00th=[ 1057], 60.00th=[ 1090],
     | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1287], 95.00th=[ 1369],
     | 99.00th=[ 6128], 99.50th=[ 8094], 99.90th=[10290], 99.95th=[11207],
     | 99.99th=[19792]
   bw (  KiB/s): min= 7112, max=60600, per=100.00%, avg=54761.16, stdev=13566.50, samples=76
   iops        : min= 1778, max=15150, avg=13690.29, stdev=3391.63, samples=76
  write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(2047MiB/38347msec); 0 zone resets
    slat (usec): min=2, max=364, avg= 2.95, stdev= 1.82
    clat (usec): min=2, max=24610, avg=1133.97, stdev=825.28
     lat (usec): min=5, max=24618, avg=1136.92, stdev=826.03
    clat percentiles (usec):
     |  1.00th=[  652],  5.00th=[  775], 10.00th=[  832], 20.00th=[  898],
     | 30.00th=[  955], 40.00th=[ 1004], 50.00th=[ 1057], 60.00th=[ 1090],
     | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1270], 95.00th=[ 1369],
     | 99.00th=[ 6063], 99.50th=[ 8029], 99.90th=[10421], 99.95th=[11338],
     | 99.99th=[20579]
   bw (  KiB/s): min= 7464, max=62328, per=100.00%, avg=54675.37, stdev=13606.62, samples=76
   iops        : min= 1866, max=15582, avg=13668.84, stdev=3401.65, samples=76
  lat (usec)   : 4=0.01%, 10=0.01%, 100=0.01%, 250=0.01%, 500=0.04%
  lat (usec)   : 750=3.34%, 1000=34.15%
  lat (msec)   : 2=61.17%, 4=0.06%, 10=1.11%, 20=0.13%, 50=0.01%
  cpu          : usr=4.59%, sys=10.59%, ctx=524632, majf=0, minf=15
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=524625,523951,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=2049MiB (2149MB), run=38347-38347msec
  WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=2047MiB (2146MB), run=38347-38347msec

Disk stats (read/write):
  nvme0n1: ios=521716/333295, sectors=4175456/3683352, merge=0/127124, ticks=31626/65952, in_queue=97578, util=99.91%

nvme6n1

Linear write:

admin@zfserver:~ $ sudo fio --rw=write --bs=8M --threads=4 --iodepth=32 --ioengine=libaio --size=16G --name=/dev/nvme6n1
/dev/nvme6n1: (g=0): rw=write, bs=(R) 8192KiB-8192KiB, (W) 8192KiB-8192KiB, (T) 8192KiB-8192KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]
/dev/nvme6n1: (groupid=0, jobs=1): err= 0: pid=703: Sun Feb 11 19:14:20 2024
  write: IOPS=101, BW=814MiB/s (853MB/s)(16.0GiB/20132msec); 0 zone resets
    slat (usec): min=4242, max=36997, avg=9821.84, stdev=7678.12
    clat (usec): min=14, max=725628, avg=303519.53, stdev=213237.19
     lat (msec): min=20, max=748, avg=313.34, stdev=219.93
    clat percentiles (msec):
     |  1.00th=[  138],  5.00th=[  138], 10.00th=[  138], 20.00th=[  140],
     | 30.00th=[  140], 40.00th=[  144], 50.00th=[  144], 60.00th=[  155],
     | 70.00th=[  489], 80.00th=[  550], 90.00th=[  642], 95.00th=[  676],
     | 99.00th=[  718], 99.50th=[  718], 99.90th=[  718], 99.95th=[  726],
     | 99.99th=[  726]
   bw (  KiB/s): min=344064, max=1851392, per=98.84%, avg=823705.60, stdev=601734.86, samples=40
   iops        : min=   42, max=  226, avg=100.55, stdev=73.45, samples=40
  lat (usec)   : 20=0.05%
  lat (msec)   : 50=0.10%, 100=0.15%, 250=61.62%, 500=9.38%, 750=28.71%
  cpu          : usr=3.81%, sys=69.36%, ctx=581, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=814MiB/s (853MB/s), 814MiB/s-814MiB/s (853MB/s-853MB/s), io=16.0GiB (17.2GB), run=20132-20132msec

Disk stats (read/write):
  nvme6n1: ios=51/46365, sectors=2088/11869696, merge=0/1438059, ticks=2/389100, in_queue=389102, util=81.68%

Linear read:

admin@zfserver:~ $ sudo fio --rw=read --bs=8M --threads=4 --iodepth=32 --ioengine=libaio --size=16G --name=/dev/nvme6n1
/dev/nvme6n1: (g=0): rw=read, bs=(R) 8192KiB-8192KiB, (W) 8192KiB-8192KiB, (T) 8192KiB-8192KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][r=376MiB/s][r=47 IOPS][eta 00m:00s]
/dev/nvme6n1: (groupid=0, jobs=1): err= 0: pid=718: Sun Feb 11 19:15:06 2024
  read: IOPS=170, BW=1367MiB/s (1433MB/s)(16.0GiB/11985msec)
    slat (usec): min=5727, max=12077, avg=5840.94, stdev=217.18
    clat (usec): min=9, max=214117, avg=179608.28, stdev=13248.53
     lat (msec): min=5, max=226, avg=185.45, stdev=13.28
    clat percentiles (msec):
     |  1.00th=[  117],  5.00th=[  182], 10.00th=[  182], 20.00th=[  182],
     | 30.00th=[  182], 40.00th=[  182], 50.00th=[  182], 60.00th=[  182],
     | 70.00th=[  182], 80.00th=[  182], 90.00th=[  182], 95.00th=[  182],
     | 99.00th=[  190], 99.50th=[  199], 99.90th=[  209], 99.95th=[  211],
     | 99.99th=[  215]
   bw (  MiB/s): min=  768, max= 1376, per=98.52%, avg=1346.78, stdev=126.32, samples=23
   iops        : min=   96, max=  172, avg=168.35, stdev=15.79, samples=23
  lat (usec)   : 10=0.05%
  lat (msec)   : 10=0.05%, 20=0.10%, 50=0.24%, 100=0.44%, 250=99.12%
  cpu          : usr=0.31%, sys=43.85%, ctx=65243, majf=0, minf=653
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1367MiB/s (1433MB/s), 1367MiB/s-1367MiB/s (1433MB/s-1433MB/s), io=16.0GiB (17.2GB), run=11985-11985msec

Disk stats (read/write):
  nvme6n1: ios=130069/0, sectors=33297664/0, merge=0/0, ticks=30922/0, in_queue=30922, util=99.30%

Random RW:

admin@zfserver:~ $ sudo fio --rw=randrw --bs=4k --threads=4 --iodepth=32 --ioengine=libaio --size=4G --name=/dev/nvme6n1
/dev/nvme6n1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]
/dev/nvme6n1: (groupid=0, jobs=1): err= 0: pid=738: Sun Feb 11 19:16:26 2024
  read: IOPS=51.9k, BW=203MiB/s (213MB/s)(2049MiB/10104msec)
    slat (usec): min=9, max=388, avg=13.97, stdev= 3.74
    clat (usec): min=8, max=1177, avg=299.73, stdev=65.96
     lat (usec): min=22, max=1209, avg=313.70, stdev=68.04
    clat percentiles (usec):
     |  1.00th=[  215],  5.00th=[  235], 10.00th=[  247], 20.00th=[  262],
     | 30.00th=[  273], 40.00th=[  281], 50.00th=[  293], 60.00th=[  302],
     | 70.00th=[  314], 80.00th=[  326], 90.00th=[  347], 95.00th=[  367],
     | 99.00th=[  603], 99.50th=[  783], 99.90th=[  947], 99.95th=[  988],
     | 99.99th=[ 1057]
   bw (  KiB/s): min=197544, max=219536, per=100.00%, avg=207912.40, stdev=6377.03, samples=20
   iops        : min=49386, max=54884, avg=51978.10, stdev=1594.26, samples=20
  write: IOPS=51.9k, BW=203MiB/s (212MB/s)(2047MiB/10104msec); 0 zone resets
    slat (nsec): min=1946, max=303519, avg=2876.51, stdev=2329.96
    clat (nsec): min=1919, max=1174.4k, avg=299303.80, stdev=65547.96
     lat (usec): min=4, max=1181, avg=302.18, stdev=66.06
    clat percentiles (usec):
     |  1.00th=[  215],  5.00th=[  235], 10.00th=[  247], 20.00th=[  262],
     | 30.00th=[  273], 40.00th=[  281], 50.00th=[  293], 60.00th=[  302],
     | 70.00th=[  314], 80.00th=[  326], 90.00th=[  347], 95.00th=[  367],
     | 99.00th=[  603], 99.50th=[  775], 99.90th=[  947], 99.95th=[  988],
     | 99.99th=[ 1057]
   bw (  KiB/s): min=196656, max=218976, per=100.00%, avg=207580.00, stdev=6339.28, samples=20
   iops        : min=49164, max=54744, avg=51895.10, stdev=1584.95, samples=20
  lat (usec)   : 2=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=12.00%
  lat (usec)   : 500=86.54%, 750=0.88%, 1000=0.54%
  lat (msec)   : 2=0.04%
  cpu          : usr=15.25%, sys=40.65%, ctx=524588, majf=0, minf=14
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=524625,523951,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=2049MiB (2149MB), run=10104-10104msec
  WRITE: bw=203MiB/s (212MB/s), 203MiB/s-203MiB/s (212MB/s-212MB/s), io=2047MiB (2146MB), run=10104-10104msec

Disk stats (read/write):
  nvme6n1: ios=514338/0, sectors=4116384/0, merge=0/0, ticks=3706/0, in_queue=3706, util=99.26%

Can you guess which is which?

admin@zfserver:~ $ lsblk -o+MODEL | grep INTEL
nvme0n1     259:0    0 476.9G  0 disk              INTEL HBRPEKNX0202AL
nvme1n1     259:1    0 476.9G  0 disk              INTEL HBRPEKNX0202AL
nvme6n1     259:3    0  27.3G  0 disk              INTEL HBRPEKNX0202ALO

My point is, unless you need random R/W performance, there’s barely any reason to try to get it working :wink:

3 Likes

Well, I did add a disclaimer in my linked post:

That goes for any one of the HBAs, cables, and enclosures you would need to connect the M.2 device to the rest of the system. Plus, you would have essentially spent an excess of money to extract an additional 32 GB or 1 TB from the device, then neutralized its primary use by putting it behind an HBA which adds a few microseconds of latency to all I/O.

@Marandil’s solution would be functionally superior and potentially cheaper:

But it looks costly in terms of the PCIe lanes made unavailable by this setup, which is the same reason I found the SLS5-8X-39X2U2-2X2-0.5M cable an unpalatable option: I’d be consuming 8 PCIe 4.0 lanes to get a 2x2 PCIe 3.0 device connected.

Let me dash that hope. I have a QNAP U2MP dual M.2 to U.2 enclosure, which has an ASM2812 PCIe 3.0 switch inside. No combination of settings I tried allowed both block devices in the Intel Optane H20 to be accessed simultaneously.

I also have a ZikeDrive Z666 USB4 enclosure, which contains an ASM2464PD controller. The controller doesn’t support more than one device, unlike the ASM2464PDX, which isn’t found in any product on the market yet (as of 2/11/2024). To my non-surprise, the enclosure exposes neither block device.

3 Likes

It’s more of a hobby thing. It’s dirt cheap and fun to experiment with. I was curious about Optane performance in general, since a lot of the reviews are from years ago. From a bit of reading, it seems the 4k random reads of Optane are around 400MB/s, while the other big honkers these days get around 100MB/s. So they have a niche use case that’s pointless because they’re too small lmao.

I did some benchmarking a while back:

The PCIe 4.0 lanes are via the chipset, while the PCIe 5.0 lanes are CPU-connected. Anything else unspecified would also be CPU-connected.

Something more recent I also posted on L1T:

:sadface: I don’t think I have ever hit anywhere near 400 MB/s with queue depth 1 random 4 KiB reads. My fio results hover around 300~340 MB/s.
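
(For reference, the kind of job I mean is roughly the one below; the device path is an example, and --direct=1 matters if you don’t want the page cache flattering the numbers:)

$ sudo fio --name=qd1-randread --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=1 --numjobs=1 --ioengine=libaio --direct=1 --time_based --runtime=30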

1 Like

The tests I saw were done by AnandTech, but then there are all sorts of things at play. I’ll be interested to see how my suspect AliExpress M10s perform. I’ll be chuffed if at least one of them isn’t dead.

Really? Does it not need “at least” x2? It’s not gonna work if you provide x4 to each of the halves? That I didn’t know.

It’s not that it won’t work if you provide x4; it’s that you can’t physically provide x4 to each half, because the M.2 connector only has 4 lanes.

You can provide x4 and one of the halves should work. You can split an x8 signal into x4/x4 (if your motherboard supports that) and provide 2 lanes from each half, but to do that you need a dedicated PCB like the one I posted - you can’t do it with regular bifurcating boards.

Did you create those graphs yourself, or are they the output of some benchmark software? They look really nice.

What’s your workflow?

I created these myself, using such a stupid method that you’d facepalm:

  1. Benchmark all the drives in AIDA64.
  2. Screenshot the plotted graph and paste in Microsoft Paint.
  3. Erase the right side of the graph and add colored text labels. (The lines have always been uniform over the span of the tests, so nothing meaningful is lost anyway.)

1 Like

I ended up with two Optane H10 drives and tried putting them both in an HP Omen laptop, which itself was the source of one of the drives. The result: three NVMe devices, not the expected four.

If Intel really wanted that stuff to be compatible with more machines, they would’ve had to either put a PCIe switch on the M.2 card, or make a controller that exposed the two as different NVMe namespaces.
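
(Just to illustrate the namespace idea from the host side: with nvme-cli, a controller built that way would simply report more than one namespace under a single /dev/nvmeX, showing up as /dev/nvme0n1, /dev/nvme0n2, and so on. The device path below is an example.)

$ sudo nvme id-ctrl /dev/nvme0 | grep -w nn    # nn = number of namespaces the controller reports
$ sudo nvme list-ns /dev/nvme0                 # lists the namespace IDs behind that controller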

1 Like

I found something similar on that crappy X99 motherboard I bought. The H10 reliably showed up as the 32GB Optane x2 part if the M.2 SATA port was also filled by another SSD.

Did you have something like that going on? If you only had the two H10s installed, would all four devices show up?

I wonder if my 8750H CPU laptop would show the H10 properly…