PCIe lane width effect on NVMe IOPS and latency - Running a PCIe 4.0 NVMe drive at PCIe 4.0 x1

I find myself needing to fit more NVMe drives into a system than it was designed for. The system has PCIe 4.0 x1 slots that could take NVMe drives via an adapter, and with a drive rated for PCIe 4.0 there would theoretically be around 2GB/s of bandwidth available.
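
For reference, the rough math behind that 2GB/s figure, as a back-of-the-envelope sketch (nominal link rates only; real-world throughput lands a bit lower once protocol overhead is included):

```python
# Back-of-the-envelope usable PCIe link bandwidth (nominal figures; real
# throughput is a bit lower once packet/protocol overhead is included).
def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int, encoding: float) -> float:
    """Approximate payload bandwidth of a PCIe link in GB/s."""
    return gt_per_s * lanes * encoding / 8  # 8 bits per byte

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding
print(pcie_bandwidth_gb_s(16, lanes=1, encoding=128 / 130))  # ~1.97 GB/s at x1
print(pcie_bandwidth_gb_s(16, lanes=4, encoding=128 / 130))  # ~7.88 GB/s at x4
```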

The 2GB/s of throughput is fine for my application, but I was wondering about the effect on overall latency and IOPS. The workload is a 3-way mirror for a ZFS metadata pool using consumer-grade PCIe 4.0 NVMe drives.
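
If you want hard numbers before committing, a queue-depth-1 4K random-read comparison of the same drive in an x4 slot versus the x1 adapter would answer it directly. A minimal sketch of that test (assumes fio is installed; the device paths are placeholders, adjust for your system):

```python
# Run the same QD1 4K random-read test against a drive in an x4 slot and in
# the x1 adapter, then compare the reported IOPS and latency.
import subprocess

def qd1_randread(dev: str) -> None:
    """30-second queue-depth-1 4K random read test with fio."""
    subprocess.run(
        [
            "fio",
            "--name=qd1-randread",
            f"--filename={dev}",
            "--rw=randread",
            "--bs=4k",
            "--iodepth=1",
            "--direct=1",
            "--runtime=30",
            "--time_based",
            "--readonly",  # read-only safety; still test on a non-production drive
        ],
        check=True,
    )

for dev in ("/dev/nvme0n1", "/dev/nvme1n1"):  # hypothetical device names
    qd1_randread(dev)
```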

Can anyone share their thoughts and experience on this?

Even if there is a difference you can measure in a benchmark, I doubt it has any real impact on how your special vdev performs. I’m judging that from my experience with both SATA SSDs and NVMe SSDs as a special vdev, and SATA has just 6Gbit of bandwidth instead of 16Gbit. And we’re talking PCIe here, not SATA.

The main benefit of using NVMe like that is the low latency of NAND flash and the corresponding 4K random read performance, not the throughput you get from four PCIe lanes.
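
To put rough numbers on that (my own assumed figures, not benchmarks of any particular drive):

```python
# Rough, assumed numbers (not measurements): at queue depth 1, throughput is
# bounded by per-I/O latency, not by link width.
io_size = 4 * 1024    # 4 KiB read, typical of metadata-heavy access
latency_s = 80e-6     # ~80 us, a ballpark figure for consumer TLC NAND

qd1_iops = 1 / latency_s
qd1_bytes_per_s = qd1_iops * io_size
print(f"QD1: ~{qd1_iops:,.0f} IOPS, ~{qd1_bytes_per_s / 1e6:.0f} MB/s")
# ~12,500 IOPS and ~51 MB/s -- a small fraction of even an x1 link's ~2 GB/s,
# so lane count only starts to matter at high queue depths or large block sizes.
```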

The card doesn’t seem expensive, and I actually like it because there are a lot of boards around with spare x1 slots.

I’d give it a shot, and I’m certain it makes for a great special vdev, boot drive, or other use case where throughput doesn’t matter that much. ~1.5GB/s real-world is still 3x SATA speed and very fast.

I have a heap of these on the parts rack; they work as advertised.

The biggest issue with them in consumer motherboards is that they generally share bandwidth with another PCIe slot, slowing that slot down. Apart from that, they’re fine as an interface adapter.