The laptop has two M.2 slots, and each one had an H10. One was 256+32, the other was 512+32. There’s no M.2 SATA involved.
When I had both drives in it, I only ended up with three NVMe devices. I don’t remember which of these four combinations it was:
[flash1 + optane1] + [flash2]
[flash1 + optane1] + [optane2]
[flash1] + [flash2 + optane2]
[optane1] + [flash2 + optane2]
I wonder whether it’s an artificial firmware limit, or an actual PCIe root port count limit?
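If anyone wants to reproduce this, a quick way on Linux to see what actually enumerated is to read /sys/class/nvme. A minimal Python sketch (the `model` and `address` attributes are what I’d expect from a recent kernel, so verify on yours):

```python
#!/usr/bin/env python3
"""List which NVMe controllers the kernel actually sees.
Two H10s should give four (two flash + two Optane); I got three."""
import os

SYS = "/sys/class/nvme"  # standard Linux sysfs location

for ctrl in sorted(os.listdir(SYS)):
    model = open(f"{SYS}/{ctrl}/model").read().strip()
    # 'address' is the controller's PCI address, which tells you
    # which slot / root port each device hangs off of
    addr = open(f"{SYS}/{ctrl}/address").read().strip()
    print(f"{ctrl}: {model} @ {addr}")
```

The PCI addresses should at least show which root port the missing device would have been on, which would help tell a firmware limit apart from a port-count limit.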
What would be awesome is a splitter for a PCIe x16 slot that would take four H10s and give us access to 4x Optane and 4x SSDs, if that is at all possible. But it’s basically a hobbyist niche inside a niche.
The H10s haven’t aged well, but you might be able to affordably roll your own with the SATA SSD of your choice and a cheap Optane M10 cache-only drive.
I just bought a batch of 10 of them for $37 on eBay to use as appliance system boot disks. Granted, they were the 16GB models, not the 32GB ones, but reviews from when they launched suggest little performance difference between the 16, 32, and 64GB Optane M10s when used for caching.
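If you do go the roll-your-own route, lvmcache is one way to glue the two together on Linux. A rough sketch, with placeholder device paths you’d need to adjust for your hardware (and note this wipes both drives):

```python
#!/usr/bin/env python3
"""Rough lvmcache sketch: a SATA SSD backed by a small Optane M10 cache.
Device paths are placeholders and this WIPES both drives."""
import subprocess

SSD = "/dev/sda"         # assumption: the SATA SSD you want to accelerate
OPTANE = "/dev/nvme0n1"  # assumption: the 16GB M10
VG = "vg0"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Put both devices into one volume group.
run("pvcreate", SSD, OPTANE)
run("vgcreate", VG, SSD, OPTANE)

# The main LV lives on the SATA SSD; the cache pool lives on the Optane.
run("lvcreate", "-n", "data", "-l", "100%PVS", VG, SSD)
run("lvcreate", "--type", "cache-pool", "-l", "90%PVS", "-n", "cpool", VG, OPTANE)

# Attach the cache pool to the data LV.
run("lvconvert", "--type", "cache", "--cachepool", f"{VG}/cpool",
    "--cachemode", "writethrough", "-y", f"{VG}/data")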
PCIe switches have become prohibitively expensive ever since Broadcom tried to corner the market by buying PLX Technology a decade ago.
Except for very high-priced enterprise gear and the mass-produced PCIe switches integrated into CPU chipsets, the PCIe switch market and its applications are pretty much dead at this point.
Besides, PCIe switches introduce latency, which negates Optane’s biggest advantage: the drives are very low latency, which is why they do random data so well.
That, and the PCIe switches tend to use more power than would be ideal for an M.2 card.
There is nothing about bifurcation that limits it to 4x chunks. This is just the most common implementation.
For instance, the Odroid H4 series (an Intel x86 SBC from South Korean company Hardkernel) bifurcates its single M.2 x4 port.
You have a choice: one x4 port, two x2 ports, or four x1 ports (though this requires the appropriate riser card to make it work).
Unlike most other boards, however, there is no option in the BIOS to select which to use; you have to flash the BIOS file that matches the option you want.
I’m using one as my work “lunchtime browsing machine”. It is an Intel N97 quad-core machine with a 118GB x2 Optane 800p drive in the first slot and an x1 Intel BE200 Wi-Fi 7 WLAN card in the second slot.
Well, my 10-pack of 16GB Optane M10 drives just arrived.
The sequential performance is as expected per Intel’s specs, and it is a little pedestrian for 2024, but DAMN, look at that tasty low-queue-depth 4K random performance:
Holy crap not bad for a $3.70 drive, no?
That’s only like 3x faster than a Samsung 990 Pro.
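If you want to sanity-check numbers like that on your own drives, here’s a rough QD1 4K random-read sketch in Python. The device path is a placeholder for whatever node your M10 gets, and it needs root plus Linux for O_DIRECT:

```python
#!/usr/bin/env python3
"""Rough QD1 4K random-read latency check (Linux only, run as root)."""
import mmap, os, random, time

DEV = "/dev/nvme0n1"   # placeholder: whatever device node your M10 gets
BLOCK = 4096
READS = 20000

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
size = os.lseek(fd, 0, os.SEEK_END)           # block devices report size here
buf = mmap.mmap(-1, BLOCK)                    # anonymous mmap is page-aligned,
                                              # which O_DIRECT requires
blocks = size // BLOCK
start = time.perf_counter()
for _ in range(READS):
    # one outstanding 4K read at a random offset = queue depth 1
    os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{READS / elapsed:,.0f} IOPS at QD1, "
      f"{elapsed / READS * 1e6:.1f} us mean latency")
```

fio is the proper tool for this, of course, but a sketch like this is enough to see the QD1 latency gap between Optane and NAND.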
One might ask, what are you going to do with a 16GB drive?
Well, they are great for little appliance boot drives, like Kodi boxes, TrueNAS Core, pfSense / OPNsense, etc. etc.
I have a few M10 2242 flash drives I made, which I use for live distros. It’s a shame the P1600X is so expensive now. I got a few of those for $70 from Newegg a while back.
Sorry, I missed this post when you originally posted it.
That is amazing. I may have to try it at some point.
The ideal adapter for these would somehow present only two lanes per drive and let us fit like eight of them in an x16 slot, since these low-end Optanes only have x2 lanes.
My experience is that with RAID you can usually get sequential improvements, but random performance goes down (or, if done very well, stays the same). It usually does not scale much, if at all.