New toy: Intel Optane P5801X E1.S Gen 4

It’s also limited to under 15 mm of thickness. The P5801X is 15 mm thick.

I’m now wondering if some other setting is hiding the bifurcation options, or if the motherboard truly doesn’t have the setting.

The P5801Xs on the market are actually 25 mm thick; I don’t know why Intel’s ARK page claims they are 15 mm. Perhaps the theoretical retail P5801Xs that never materialized would have been 15 mm.

I stand corrected. Getting mine to fit will be fun when I get them…

I’ve hit more hurdles: apparently you can’t use VROC on NVMe devices connected to the PCIe slots, so I had to use mdraid to stripe the two Optanes into a RAID 0, which I think is leaving a lot of performance on the table. Hopefully I can find some MCIO-to-E1.S cables to connect the drives so I can get VROC working.
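
For reference, a rough sketch of the mdadm commands to stripe two drives like this (device names are examples, not necessarily what my system uses):

```bash
# Stripe the two P5801Xs into a single RAID 0 device (example device names)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Verify the array came up
cat /proc/mdstat

# Optionally persist it so it assembles at boot (path is the Debian/Ubuntu default)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```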

Here’s what 2 P5801Xs in mdraid 0 look like:

:thinking: For Optanes, 300 MBps Q1T1 4 KiB random read is kind of meh, which I’m assuming is caused by the RAID?

Yeah, I’m pretty sure the software RAID is the cause of the mediocre performance; I’d expect it to match @nathanr’s numbers without the software RAID.
What’s interesting is how poorly the software RAID handles mixed random low-queue-depth workloads compared to 100% read or 100% write.
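
For anyone who wants to poke at that themselves, a minimal fio sketch of a low-queue-depth mixed random job (the 70/30 split and the /dev/md0 target are assumptions for illustration, not the exact job behind the screenshots):

```bash
# 4 KiB mixed random read/write at QD1 against the md array (adjust --filename and --rwmixread)
sudo fio --name=randrw_4k_q1 --filename=/dev/md0 --rw=randrw --rwmixread=70 \
  --bs=4k --iodepth=1 --ioengine=libaio --direct=1 --runtime=30 --time_based
```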

Which SKU did you get, by the way? There are two SKUs floating around (with the same model code):

  1. SSDPFR1Q400GBF1
  2. SSDPFR1Q400GBEF, which is labeled as an engineering sample on the packaging

    This SKU is also more expensive for reasons I don’t know.

Two people who did benchmark it managed 619 MBps and 494 MBps Q1T1 4 KiB random reads in CrystalDiskMark respectively, which stands out to me because I’ve never seen CrystalDiskMark numbers that high before:

If Google Translate is correct, the random performance of this second generation of enterprise Optane is CPU-bound, as:

In fact, if the CPU frequency is increased further, the 4K random read and write speed of the Optane P5801X can go even higher. On the SSD forum, one person raised their CPU clock to above 6 GHz and got a terrifying result of more than 600 MB/s on a Windows 7 system.

Perhaps that explains away some of the performance variation with Optanes. I get consistent 340-ish MBps benchmarks across both Optane generations with an AMD Ryzen 7800X that runs in ECO mode.
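
Before blaming the drive it may be worth checking what the benchmark cores are actually clocking at; a quick sketch (assumes the cpupower tool is installed and the scaling governor is switchable):

```bash
# Watch per-core clocks while the benchmark runs
watch -n1 "grep 'cpu MHz' /proc/cpuinfo"

# Temporarily switch the scaling governor to performance for the test
sudo cpupower frequency-set -g performance
```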

I ended up getting SSDPFR1Q400GBF1

That seems way higher than normal.

These are my scores running on just one disk, no mdraid:

Perhaps mdraid isn’t the problem and Linux just handles the I/O differently from Windows, which gives different scores. I’ll install Windows next to see what I get.
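
One way to check how much of the gap is the Linux I/O path rather than the drive would be to rerun the Q1T1 job with io_uring and polled completions; a rough sketch (assumes a recent fio and that the NVMe driver has poll queues enabled):

```bash
# 4 KiB QD1 random read with io_uring + polling, which trims per-IO latency on Optane
sudo fio --name=randread_4k_q1_poll --filename=/dev/nvme0n1 --rw=randread --bs=4k \
  --iodepth=1 --ioengine=io_uring --hipri --direct=1 --runtime=30 --time_based
```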

You might be on to something, at least for the random tests; my system tops out at 3.5 GHz, which is fairly low by modern standards, so it could be contributing to the depressed scores.

Sorry for the slow reply; a few things to cover:

a) the dual adapter: I have it too and cannot get it to work; I got limited support back

b) RAID performance is worse than just running the drives as-is, and there’s no point in mirroring with this level of durability

c) CrystalDiskMark with the P5801X as the main OS drive on my Threadripper build


d) empty drive on NTFS (P5801X solo in slot 1)

f) for comparison, 4x T700s via Storage Spaces


g) rand 4k ext4

==> randread_4k_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=548MiB/s (574MB/s), 548MiB/s-548MiB/s (574MB/s-574MB/s), io=16.0GiB (17.2GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=4184429/84, sectors=33475432/1656, merge=0/121, ticks=21041/5, in_queue=21045, util=64.95%

==> randread_4k_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=1772MiB/s (1858MB/s), 1772MiB/s-1772MiB/s (1858MB/s-1858MB/s), io=51.9GiB (55.8GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13534430/131, sectors=108275440/2896, merge=0/192, ticks=69411/4, in_queue=69416, util=93.32%

==> randread_4k_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=1775MiB/s (1861MB/s), 1775MiB/s-1775MiB/s (1861MB/s-1861MB/s), io=52.0GiB (55.8GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13530488/70, sectors=108243904/1336, merge=0/95, ticks=69096/5, in_queue=69101, util=93.17%

==> randread_4k_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=1768MiB/s (1854MB/s), 1768MiB/s-1768MiB/s (1854MB/s-1854MB/s), io=51.8GiB (55.6GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13497984/83, sectors=107983872/1544, merge=0/108, ticks=68824/5, in_queue=68829, util=92.36%

h) full Ubuntu ext4 results:

==> randread_1m_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=4928MiB/s (5167MB/s), 4928MiB/s-4928MiB/s (5167MB/s-5167MB/s), io=144GiB (155GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=1179945/110, sectors=302065920/2056, merge=0/145, ticks=113352/16, in_queue=113367, util=67.78%

==> randread_1m_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=6802MiB/s (7133MB/s), 6802MiB/s-6802MiB/s (7133MB/s-7133MB/s), io=199GiB (214GB), run=30003-30003msec

Disk stats (read/write):
  nvme0n1: ios=1627916/50, sectors=416746496/816, merge=0/50, ticks=3668713/113, in_queue=3668826, util=77.39%

==> randread_1m_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=6804MiB/s (7134MB/s), 6804MiB/s-6804MiB/s (7134MB/s-7134MB/s), io=199GiB (214GB), run=30005-30005msec

Disk stats (read/write):
  nvme0n1: ios=1628165/77, sectors=416810240/1784, merge=0/113, ticks=7494488/284, in_queue=7494772, util=82.27%

==> randread_1m_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=6803MiB/s (7133MB/s), 6803MiB/s-6803MiB/s (7133MB/s-7133MB/s), io=199GiB (214GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=1628451/71, sectors=416883456/1448, merge=0/101, ticks=1756955/82, in_queue=1757037, util=75.22%

==> randread_4k_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=548MiB/s (574MB/s), 548MiB/s-548MiB/s (574MB/s-574MB/s), io=16.0GiB (17.2GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=4184429/84, sectors=33475432/1656, merge=0/121, ticks=21041/5, in_queue=21045, util=64.95%

==> randread_4k_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=1775MiB/s (1861MB/s), 1775MiB/s-1775MiB/s (1861MB/s-1861MB/s), io=52.0GiB (55.8GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13530488/70, sectors=108243904/1336, merge=0/95, ticks=69096/5, in_queue=69101, util=93.17%

==> randread_4k_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=1768MiB/s (1854MB/s), 1768MiB/s-1768MiB/s (1854MB/s-1854MB/s), io=51.8GiB (55.6GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13497984/83, sectors=107983872/1544, merge=0/108, ticks=68824/5, in_queue=68829, util=92.36%

==> randread_4k_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=1772MiB/s (1858MB/s), 1772MiB/s-1772MiB/s (1858MB/s-1858MB/s), io=51.9GiB (55.8GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13534430/131, sectors=108275440/2896, merge=0/192, ticks=69411/4, in_queue=69416, util=93.32%

==> randread_64k_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=1816MiB/s (1904MB/s), 1816MiB/s-1816MiB/s (1904MB/s-1904MB/s), io=53.2GiB (57.1GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=868912/67, sectors=111220736/1376, merge=0/103, ticks=23943/5, in_queue=23948, util=77.86%

==> randread_64k_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=6797MiB/s (7127MB/s), 6797MiB/s-6797MiB/s (7127MB/s-7127MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3252061/101, sectors=416263808/1760, merge=0/106, ticks=467913/20, in_queue=467934, util=61.58%

==> randread_64k_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=6799MiB/s (7129MB/s), 6799MiB/s-6799MiB/s (7129MB/s-7129MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3252987/54, sectors=416382336/896, merge=0/56, ticks=946176/15, in_queue=946192, util=61.30%

==> randread_64k_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=6796MiB/s (7126MB/s), 6796MiB/s-6796MiB/s (7126MB/s-7126MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3245347/88, sectors=415404416/1680, merge=0/120, ticks=228535/9, in_queue=228544, util=61.61%

==> randwrite_1m_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=3017MiB/s (3163MB/s), 3017MiB/s-3017MiB/s (3163MB/s-3163MB/s), io=88.4GiB (94.9GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/721561, sectors=0/184691720, merge=0/261, ticks=0/96290, in_queue=96290, util=57.34%

==> randwrite_1m_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4465MiB/s (4682MB/s), 4465MiB/s-4465MiB/s (4682MB/s-4682MB/s), io=131GiB (140GB), run=30004-30004msec

Disk stats (read/write):
  nvme0n1: ios=0/1067666, sectors=0/273291424, merge=0/257, ticks=0/3565661, in_queue=3565661, util=81.74%

==> randwrite_1m_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4461MiB/s (4678MB/s), 4461MiB/s-4461MiB/s (4678MB/s-4678MB/s), io=131GiB (140GB), run=30007-30007msec

Disk stats (read/write):
  nvme0n1: ios=0/1066390, sectors=0/272974880, merge=0/230, ticks=0/7372968, in_queue=7372967, util=81.98%

==> randwrite_1m_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4456MiB/s (4672MB/s), 4456MiB/s-4456MiB/s (4672MB/s-4672MB/s), io=131GiB (140GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=0/1065434, sectors=0/272726528, merge=0/243, ticks=0/1665713, in_queue=1665713, util=81.76%

==> randwrite_4k_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=400MiB/s (420MB/s), 400MiB/s-400MiB/s (420MB/s-420MB/s), io=11.7GiB (12.6GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=0/3064515, sectors=0/24677024, merge=0/20111, ticks=0/16621, in_queue=16620, util=49.92%

==> randwrite_4k_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=901MiB/s (945MB/s), 901MiB/s-901MiB/s (945MB/s-945MB/s), io=26.4GiB (28.3GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=0/6891505, sectors=0/55269368, merge=0/17164, ticks=0/37868, in_queue=37868, util=64.76%

==> randwrite_4k_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=879MiB/s (922MB/s), 879MiB/s-879MiB/s (922MB/s-922MB/s), io=25.8GiB (27.7GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/6726164, sectors=0/53984680, merge=0/19456, ticks=0/38709, in_queue=38708, util=67.19%

==> randwrite_4k_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=859MiB/s (901MB/s), 859MiB/s-859MiB/s (901MB/s-901MB/s), io=25.2GiB (27.0GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=0/6569011, sectors=0/52716800, merge=0/20587, ticks=0/35964, in_queue=35963, util=65.86%

==> randwrite_64k_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=1581MiB/s (1657MB/s), 1581MiB/s-1581MiB/s (1657MB/s-1657MB/s), io=46.3GiB (49.7GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/756195, sectors=0/96768320, merge=0/1058, ticks=0/19203, in_queue=19203, util=62.74%

==> randwrite_64k_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4396MiB/s (4610MB/s), 4396MiB/s-4396MiB/s (4610MB/s-4610MB/s), io=129GiB (138GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2103006, sectors=0/269163696, merge=0/729, ticks=0/359685, in_queue=359685, util=55.65%

==> randwrite_64k_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4374MiB/s (4586MB/s), 4374MiB/s-4374MiB/s (4586MB/s-4586MB/s), io=128GiB (138GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2092090, sectors=0/267768336, merge=0/840, ticks=0/740749, in_queue=740749, util=56.48%

==> randwrite_64k_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4324MiB/s (4534MB/s), 4324MiB/s-4324MiB/s (4534MB/s-4534MB/s), io=127GiB (136GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2068572, sectors=0/264757832, merge=0/755, ticks=0/152806, in_queue=152806, util=54.94%

==> read_1m_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=4930MiB/s (5169MB/s), 4930MiB/s-4930MiB/s (5169MB/s-5169MB/s), io=144GiB (155GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=1176002/40, sectors=301056512/928, merge=0/65, ticks=113193/2, in_queue=113196, util=67.82%

==> read_1m_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=6804MiB/s (7135MB/s), 6804MiB/s-6804MiB/s (7135MB/s-7135MB/s), io=199GiB (214GB), run=30003-30003msec

Disk stats (read/write):
  nvme0n1: ios=1629407/81, sectors=417128192/1600, merge=0/117, ticks=3669850/184, in_queue=3670033, util=77.22%

==> read_1m_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=6804MiB/s (7134MB/s), 6804MiB/s-6804MiB/s (7134MB/s-7134MB/s), io=199GiB (214GB), run=30005-30005msec

Disk stats (read/write):
  nvme0n1: ios=1628621/70, sectors=416926976/1416, merge=0/105, ticks=7496462/246, in_queue=7496708, util=82.50%

==> read_1m_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=6803MiB/s (7134MB/s), 6803MiB/s-6803MiB/s (7134MB/s-7134MB/s), io=199GiB (214GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=1629744/75, sectors=417214464/1760, merge=0/143, ticks=1758241/84, in_queue=1758325, util=75.19%

==> read_4k_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=550MiB/s (577MB/s), 550MiB/s-550MiB/s (577MB/s-577MB/s), io=16.1GiB (17.3GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=4199281/66, sectors=33594248/1256, merge=0/53, ticks=21093/1, in_queue=21094, util=63.31%

==> read_4k_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=1795MiB/s (1882MB/s), 1795MiB/s-1795MiB/s (1882MB/s-1882MB/s), io=52.6GiB (56.5GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13700376/53, sectors=109603008/888, merge=0/56, ticks=71048/1, in_queue=71048, util=94.69%

==> read_4k_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=1794MiB/s (1881MB/s), 1794MiB/s-1794MiB/s (1881MB/s-1881MB/s), io=52.6GiB (56.4GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13699157/66, sectors=109593256/1360, merge=0/102, ticks=70753/4, in_queue=70757, util=94.78%

==> read_4k_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=1803MiB/s (1891MB/s), 1803MiB/s-1803MiB/s (1891MB/s-1891MB/s), io=52.8GiB (56.7GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=13759279/61, sectors=110074232/1256, merge=0/94, ticks=71026/3, in_queue=71028, util=95.58%

==> read_64k_q1.txt <==
Run status group 0 (all jobs):
   READ: bw=1835MiB/s (1924MB/s), 1835MiB/s-1835MiB/s (1924MB/s-1924MB/s), io=53.8GiB (57.7GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=878104/119, sectors=112397312/6176, merge=0/171, ticks=23782/32, in_queue=23815, util=70.70%

==> read_64k_q16.txt <==
Run status group 0 (all jobs):
   READ: bw=6799MiB/s (7129MB/s), 6799MiB/s-6799MiB/s (7129MB/s-7129MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3253918/81, sectors=416501504/2304, merge=0/145, ticks=468316/11, in_queue=468326, util=61.42%

==> read_64k_q32.txt <==
Run status group 0 (all jobs):
   READ: bw=6797MiB/s (7128MB/s), 6797MiB/s-6797MiB/s (7128MB/s-7128MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3252706/86, sectors=416346496/1752, merge=0/131, ticks=946337/21, in_queue=946357, util=62.44%

==> read_64k_q8.txt <==
Run status group 0 (all jobs):
   READ: bw=6798MiB/s (7128MB/s), 6798MiB/s-6798MiB/s (7128MB/s-7128MB/s), io=199GiB (214GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=3252911/114, sectors=416372608/2744, merge=0/155, ticks=229177/11, in_queue=229187, util=61.68%

==> write_1m_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=3040MiB/s (3188MB/s), 3040MiB/s-3040MiB/s (3188MB/s-3188MB/s), io=89.1GiB (95.6GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/727277, sectors=0/186152760, merge=0/239, ticks=0/96011, in_queue=96012, util=56.59%

==> write_1m_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4455MiB/s (4672MB/s), 4455MiB/s-4455MiB/s (4672MB/s-4672MB/s), io=131GiB (140GB), run=30004-30004msec

Disk stats (read/write):
  nvme0n1: ios=0/1065318, sectors=0/272682232, merge=0/264, ticks=0/3563805, in_queue=3563806, util=83.37%

==> write_1m_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4451MiB/s (4667MB/s), 4451MiB/s-4451MiB/s (4667MB/s-4667MB/s), io=130GiB (140GB), run=30008-30008msec

Disk stats (read/write):
  nvme0n1: ios=0/1064130, sectors=0/272362160, merge=0/319, ticks=0/7365046, in_queue=7365046, util=82.94%

==> write_1m_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4460MiB/s (4676MB/s), 4460MiB/s-4460MiB/s (4676MB/s-4676MB/s), io=131GiB (140GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=0/1066326, sectors=0/272961008, merge=0/203, ticks=0/1667042, in_queue=1667041, util=81.18%

==> write_4k_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=438MiB/s (459MB/s), 438MiB/s-438MiB/s (459MB/s-459MB/s), io=12.8GiB (13.8GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=0/3353068, sectors=0/26826344, merge=0/223, ticks=0/17950, in_queue=17950, util=54.48%

==> write_4k_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=980MiB/s (1028MB/s), 980MiB/s-980MiB/s (1028MB/s-1028MB/s), io=28.7GiB (30.8GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/7499788, sectors=0/60000912, merge=0/269, ticks=0/42031, in_queue=42031, util=66.47%

==> write_4k_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=992MiB/s (1041MB/s), 992MiB/s-992MiB/s (1041MB/s-1041MB/s), io=29.1GiB (31.2GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=305/7593228, sectors=4744/60762520, merge=0/1020, ticks=2/42442, in_queue=42445, util=68.05%

==> write_4k_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=967MiB/s (1014MB/s), 967MiB/s-967MiB/s (1014MB/s-1014MB/s), io=28.3GiB (30.4GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/7400549, sectors=0/59207760, merge=0/258, ticks=0/41496, in_queue=41496, util=67.76%

==> write_64k_q1.txt <==
Run status group 0 (all jobs):
  WRITE: bw=1568MiB/s (1644MB/s), 1568MiB/s-1568MiB/s (1644MB/s-1644MB/s), io=45.9GiB (49.3GB), run=30000-30000msec

Disk stats (read/write):
  nvme0n1: ios=0/749942, sectors=0/95984176, merge=0/208, ticks=0/19072, in_queue=19072, util=62.09%

==> write_64k_q16.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4422MiB/s (4637MB/s), 4422MiB/s-4422MiB/s (4637MB/s-4637MB/s), io=130GiB (139GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2115134, sectors=0/270729096, merge=0/205, ticks=0/398092, in_queue=398092, util=56.12%

==> write_64k_q32.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4427MiB/s (4642MB/s), 4427MiB/s-4427MiB/s (4642MB/s-4642MB/s), io=130GiB (139GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2117288, sectors=0/271001688, merge=0/236, ticks=0/853295, in_queue=853295, util=56.62%

==> write_64k_q8.txt <==
Run status group 0 (all jobs):
  WRITE: bw=4393MiB/s (4606MB/s), 4393MiB/s-4393MiB/s (4606MB/s-4606MB/s), io=129GiB (138GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=0/2101246, sectors=0/268950528, merge=0/213, ticks=0/177299, in_queue=177299, util=54.26%

fio.zip (50.7 KB)

Full bench results on ext4 are attached, along with the script to run them.

On ext4 I average around 550 in the 16 GB rand 4K Q1T1 benches above.
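
The exact jobs are in the script inside fio.zip; a roughly equivalent invocation for the 16 GB rand 4K Q1T1 case would look something like this (the mount path and output name are placeholders, not necessarily what the script uses):

```bash
# ~16 GB file, 4 KiB QD1 random reads, 30 s time-based, direct I/O on the ext4 mount
fio --name=randread_4k_q1 --directory=/mnt/optane --size=16G --rw=randread --bs=4k \
  --iodepth=1 --ioengine=libaio --direct=1 --runtime=30 --time_based \
  --output=randread_4k_q1.txt
```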

my build

I got VROC RAID 0 working via the MCIO ports, but the results were mildly disappointing; they’re slower than mdraid’s results on Linux:


I also installed Windows and ran the benchmark, but the results are really bad, like NAND bad. I’m pretty sure the issue is that Windows can’t schedule the threads properly, so the CPU clock speeds never get high enough to produce good random numbers; this further corroborates what @LiKenun was saying about the CPU being the bottleneck for random performance.

Just for sanity, which PCIe slot do you have them/it in? On most boards you are going to have to put it in the first or main slot where you’d normally put a GPU. The closer you get to the edge of the board (further away from the CPU), the slower they are, and significantly so, unless it is a premium board with redrivers on every slot. When I was first testing mine on a standard ASUS Prime board, moving it even one slot dropped the speed from 380 to 300 on random.

Edit: just saw which board you have; I doubt it’s that, but it’s probably still worth checking the different slot speeds.
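
A quick way to sanity-check which link the drive actually negotiated in a given slot (compare LnkSta against LnkCap; needs pciutils):

```bash
# List NVMe controllers with their advertised and negotiated PCIe speed/width
sudo lspci -vv | grep -E "Non-Volatile memory controller|LnkCap:|LnkSta:"
```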

Hello people, I’m new here, and I see that some of you have the P5801X and used it on a Z690 or Z790 system. How does RST RAID perform with those drives over the CPU PCIe lanes?

Welcome!

This doesn’t answer your question directly; it’s more an explanation of why there isn’t much will to configure a system like that:

Z690 and Z790 don’t have very good CPU PCIe bifurcation options (most motherboards don’t even let you bifurcate the CPU lanes), so the only way to run very many of these P5801Xs is with a PCIe switch, which is very expensive. In most cases it’d be cheaper to move up to the W790 platform, which can bifurcate CPU PCIe lanes more freely, than to pair a Z690/Z790 with a PCIe switch.

Thank you for your insight.

I have a board with one M.2 slot wired directly to the CPU and another that sacrifices GPU lanes in order to provide a second one. Since I “only” have a 4070 Ti Super, the ~1% loss from dropping to x8 Gen 4 might not be the biggest problem.

Just wanted to bridge the time before the Threadripper funds are built up :slight_smile:

I went from Z790 to Threadripper, to an ASUS TRX50.

On Z790, stick the Optane in the GPU slot; it’s 20%+ faster on IOPS in slot one.

On TRX50 it’s fast in every slot.

Any idea how these hold up in JBOD? I know there is a hit with RAID 0, but I can’t find any info on whether latency takes a hit with JBOD.

Would one of these on an adapter card work as a boot drive on the X670E (Hero) platform?

I have been wanting to try Optane for the longest time, and this seems to be the best option price-wise where I live.

Any insight would be much appreciated.

Assuming you mean the ROG X670E Hero? If so, it should work, but it will take your GPU down to x8. The x16 slots only run at x16 if one is used, or x8/x8 if both are used.