How fast is a P5800X Optane?

Thanks for pointing that out, I’m not super familiar with how this works. The manual shows:

AMD Ryzen™ 7000 Series Desktop Processors*
2 x PCIe 5.0 x16 slots (support x16 or x8/x8 modes)

Seems you are right, using the 2nd CPU slot makes them both run at x8, bummer. I found some anecdotes about losing ~10 FPS running a 4090 at x8, still getting 100+ FPS at 4K. That sounds OK for my needs. Edit: here’s a thorough review showing the difference is relatively minimal.

The next part is: can the 2nd slot running at x8 bifurcate to x4/x4? The manual links to this page. It says it’s specifically about a “Hyper M.2” AIC, so it’s not clear if it applies to other cards. Maybe? Anyway, it shows:

ProArt X670E Creator WIFI
PCIEX16_1 X4+X4+X4+X4 or X4+X4
PCIEX16_2 X4+X4
PCIEX16_3 X2

Does this indicate slot 2 supports bifurcation to x4/x4? Does it seem likely the dual U.2 card I found will work in slot 2 with a 4090 in slot 1? It seems silly to only give specifications for the Hyper M.2 AIC. I would hope that the same applies to any AIC.

BIOS manual says:

Use [PCIE RAID Mode] when installing the Hyper M.2 X16 series card or other M.2 adapter cards. Installing other devices may result in a boot-up failure.

Seems like there is hope for the dual U.2 card, though no guarantee.

Yes, should all work fine, just put the PCIEX16_2 slot into PCIe RAID mode and it will bifurcate PEG lanes to x8/x4/x4.

Don’t worry too much about running the GPU at Gen4 x8. It should be fine unless you are constantly churning VRAM and need to maximize throughput to shave a few minutes off a long render or what have you. The nice thing is that you’ll be ready to run a future gen GPU at Gen5 x8.
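
If you want to sanity-check it once everything is installed, here’s a rough sketch from Linux (the bus addresses are placeholders, substitute your own from lspci):

# GPU in PCIEX16_1: LnkSta should report Width x8 at 16GT/s (Gen4)
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# Each U.2 drive behind the bifurcated PCIEX16_2 slot shows up as its own NVMe controller, ideally at Width x4
lspci -nn | grep -i 'Non-Volatile memory'
sudo lspci -vv -s 02:00.0 | grep LnkSta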


The “Hyper M.2 AIC” is just their own M.2 bifurcation card: you can use that card in other systems with bifurcation, or you can use other bifurcation cards in the ASUS system. There are no proprietary “security features” on these that prevent you from using them interchangeably. I myself have both a 4-slot M.2 ASUS card and an Ableconn U.2 adapter that I’ve used in ASRock and Tyan systems. When the ASUS adapter came out, it was the best value M.2 bifurcation card available, though there is finally a lot more competition now.

The single caveat is that some bifurcation cards aren’t going to be able to keep a clean enough signal for PCIe 4.0. In practice, if the card lists 4.0 compatibility then it’s a pretty safe bet it’ll just work.

Cards are much more reliable at sending a viable signal to drives than cables are.

Does this indicate slot 2 supports bifurcation to x4/x4?
Yeah, that seems correct. Were I buying it for that purpose, I’d see that as confirmation it’ll do just that.


Thanks guys! That card and two P5800X 800GB are now on the way! I’m hyped! None of the P5800X reviews show RAID 0 results, so that’ll be interesting. There weren’t any 1.6TB in stock anywhere, but 2x800GB will be enough for me (and cut the price nearly in half).

Hey @wendell, could you please share the fio commands that you used to obtain your 4k QD1 results?

Right now, I am getting better 4k QD1 performance from a pci3 m.2 → u.2 connection than a pci4 connection (using an Ableconn card) when running this command:

fio --loops=5 --size=1000m --filename=/dev/nvme0n1p1 --ioengine=pvsync2 --hipri --direct=1 --name=4kQD1read --bs=4k --iodepth=1 --rw=randread --name=4kQD1write --bs=4k --iodepth=1 --rw=randwrite

It is about 7% faster when using pci3, on an x670e motherboard. I would be interested to try this with your fio command to see if this trend holds.
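
A quick way to see which link a given connection actually negotiated, in case it is useful (assuming the drive enumerates as nvme0):

# Negotiated PCIe speed/width for the drive, straight from sysfs
cat /sys/class/nvme/nvme0/device/current_link_speed   # e.g. "16.0 GT/s PCIe" for pcie4
cat /sys/class/nvme/nvme0/device/current_link_width   # e.g. "4"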

I think it’s posted here somewhere, will dig it out later. PCIe 4 into the CPU should be best unless there are intermittent PCIe errors. Turn on AER in the BIOS.

Would also be worth setting the Ableconn card slot to pcie3 to see if it’s just a pcie3 vs. pcie4 thing. If the Ableconn at pcie3 matches the pcie3 adapter, then the pcie4 connection is erroring.

Heya, I did try looking through this thread, but couldn’t find any such commands myself.

But some interesting results came out of what you suggested I try… For each of the below 3 scenarios, I ran the exact fio command I shared in my last post 5 times in a row, with a 1 second sleep between each run (this is why there are 5 results per test case):

First, Ableconn, pcie4 (with AER enabled – no errors reported in dmesg):

  read: IOPS=117k, BW=458MiB/s (480MB/s)(5000MiB/10928msec)
  write: IOPS=87.8k, BW=343MiB/s (360MB/s)(5000MiB/14580msec); 0 zone resets
  
  read: IOPS=120k, BW=467MiB/s (490MB/s)(5000MiB/10708msec)
  write: IOPS=90.3k, BW=353MiB/s (370MB/s)(5000MiB/14175msec); 0 zone resets
  
  read: IOPS=121k, BW=475MiB/s (498MB/s)(5000MiB/10536msec)
  write: IOPS=92.9k, BW=363MiB/s (381MB/s)(5000MiB/13771msec); 0 zone resets
  
  read: IOPS=120k, BW=469MiB/s (491MB/s)(5000MiB/10668msec)
  write: IOPS=91.3k, BW=357MiB/s (374MB/s)(5000MiB/14025msec); 0 zone resets
  
  read: IOPS=118k, BW=461MiB/s (483MB/s)(5000MiB/10849msec)
  write: IOPS=88.1k, BW=344MiB/s (361MB/s)(5000MiB/14530msec); 0 zone resets

Second, Ableconn, pcie3 (with AER enabled – no errors reported in dmesg):

  read: IOPS=109k, BW=425MiB/s (446MB/s)(5000MiB/11755msec)
  write: IOPS=88.8k, BW=347MiB/s (364MB/s)(5000MiB/14415msec); 0 zone resets
  
  read: IOPS=110k, BW=430MiB/s (450MB/s)(5000MiB/11639msec)
  write: IOPS=93.5k, BW=365MiB/s (383MB/s)(5000MiB/13685msec); 0 zone resets
  
  read: IOPS=105k, BW=412MiB/s (432MB/s)(5000MiB/12147msec)
  write: IOPS=83.9k, BW=328MiB/s (344MB/s)(5000MiB/15259msec); 0 zone resets
  
  read: IOPS=112k, BW=438MiB/s (459MB/s)(5000MiB/11412msec)
  write: IOPS=94.5k, BW=369MiB/s (387MB/s)(5000MiB/13551msec); 0 zone resets
  
  read: IOPS=108k, BW=423MiB/s (443MB/s)(5000MiB/11824msec)
  write: IOPS=84.6k, BW=330MiB/s (347MB/s)(5000MiB/15129msec); 0 zone resets

Third, and most interestingly, pci3, with the m.2 → u.2 adapter cable that comes with Optane 905p drives:
(EDIT: this third case is actually running at pci4, not pci3 – see my next post)

  read: IOPS=122k, BW=475MiB/s (498MB/s)(5000MiB/10519msec)
  write: IOPS=95.8k, BW=374MiB/s (392MB/s)(5000MiB/13363msec); 0 zone resets
  
  read: IOPS=124k, BW=484MiB/s (508MB/s)(5000MiB/10322msec)
  write: IOPS=97.6k, BW=381MiB/s (400MB/s)(5000MiB/13114msec); 0 zone resets
  
  read: IOPS=120k, BW=468MiB/s (491MB/s)(5000MiB/10675msec)
  write: IOPS=90.9k, BW=355MiB/s (372MB/s)(5000MiB/14085msec); 0 zone resets

  read: IOPS=126k, BW=494MiB/s (518MB/s)(5000MiB/10123msec)
  write: IOPS=99.3k, BW=388MiB/s (407MB/s)(5000MiB/12886msec); 0 zone resets

  read: IOPS=126k, BW=494MiB/s (518MB/s)(5000MiB/10121msec)
  write: IOPS=99.0k, BW=387MiB/s (406MB/s)(5000MiB/12923msec); 0 zone resets

So, you are correct that pci4 is faster than pci3 for the same connection method… but for some reason, the m.2 → u.2 adapter results in measurably more IOPS (it is not quite 7% in this test, though).

P.S. Adding one additional test result for you, as well… this is a single run of the above tests, still using the m.2 → u.2 connector, but this time, on a /dev/mapper/p5800x encrypted target:

  read: IOPS=64.6k, BW=252MiB/s (265MB/s)(5000MiB/19810msec)
  write: IOPS=46.2k, BW=180MiB/s (189MB/s)(5000MiB/27713msec); 0 zone resets

Feel free to let me know if you have ideas at all on how I might be able to speed that one up :wink: I assume that there isn’t much that can be done for the encrypted case. I would be happy to try anything at all though.
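
One candidate I have seen suggested (untested here, so just an assumption) is disabling the dm-crypt read/write workqueues, which newer cryptsetup versions support and which is supposed to mainly help low-queue-depth latency on very fast drives:

# Requires cryptsetup >= 2.3 and a recent kernel; --persistent needs LUKS2
# "p5800x" is the mapping name from above
sudo cryptsetup refresh p5800x --perf-no_read_workqueue --perf-no_write_workqueue --persistent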


Be aware that just because the BIOS shows a PCIe Advanced Error Reporting option and lets you enable it doesn’t automatically mean it’s actually working. What specific motherboard model are you using?

While I don’t have a P5800X, I do have a few other fast PCIe Gen4 U.2/U.3 SSDs, and I’ve tested them with the M.2-to-SFF-8639 adapter that came with an Intel Optane 905P. That adapter works fine at PCIe Gen3, but when used with a PCIe Gen4 SSD it introduces PCIe Bus Errors.


Hi aBav, thanks for the feedback. I am on an ASUS ProArt X670E.

Very importantly, based on what you wrote, I realized an error on my part (I will edit my post, above, after this). The M.2-to-SFF-8639 adapter that I was using for the third test case in my last post was not limiting the connection to pcie3, as I thought. It was actually still running at pcie4.

I confirmed this by forcing that link to pcie3 in the motherboard BIOS while using the 8639, which produced a large difference (about 15%) in 4k QD1 performance. It still doesn’t explain why the Ableconn at pcie4 performed worse than the 8639 at pcie4, though.

As for AER errors… I am seeing none at all in dmesg, and I just ran 1 TB worth of read/write data through the drive using badblocks, and it was all read back correctly. This is a basic test, but it is still something. Doing reads with “dd” also does not produce any AER messages. I do see this when running “dmesg | grep -i aer”, though, so I think it is enabled:

[    0.473305] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability LTR]
[    0.982068] pcieport 0000:00:02.1: AER: enabled with IRQ 28
[    0.982542] pcieport 0000:00:02.2: AER: enabled with IRQ 29
[    1.336761] aer 0000:00:02.2:pcie002: hash matches

Interestingly, when using the 8639 and pcie4, “dd” has a max read/write speed of 3.0GB/s and 4.065GB/s.
When using the Ableconn at pcie4, “dd” has a max read/write speed of 5.5GB/s and 5.8GB/s. I don’t care about sequential speeds at all, but this is interesting to observe – the 4k QD1 performance is still slightly higher on the 8639 over pcie4.
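
For anyone wanting to reproduce this, dd invocations of this general form will do it (block size and count are just examples, and the write test overwrites data on the partition):

sudo dd if=/dev/nvme0n1p1 of=/dev/null bs=1M count=8192 iflag=direct status=progress
sudo dd if=/dev/zero of=/dev/nvme0n1p1 bs=1M count=8192 oflag=direct status=progress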

Any thoughts on how I might confirm if AER errors are occurring? Badblocks seems like the most efficient way to see if some data is not being correctly read/written, but maybe there is a better way.
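
In the meantime, two low-effort things to watch (assuming the drive is nvme0 and that the kernel exposes AER counters for this device):

# Watch dmesg live for AER / bus error messages while the drive is under load
sudo dmesg -w | grep -iE 'aer|pcie bus error|badtlp|baddllp'
# Per-device AER error counters, if present
grep . /sys/class/nvme/nvme0/device/aer_dev_* 2>/dev/null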


I also just noticed another oddity – the usual fio test that I run does the read and write test at the same time:

fio --loops=5 --size=1000m --filename=/dev/nvme0n1p1 --ioengine=pvsync2 --hipri --direct=1 --name=4kQD1read --bs=4k --iodepth=1 --rw=randread --name=4kQD1write --bs=4k --iodepth=1 --rw=randwrite

Sample output:

  read: IOPS=125k, BW=488MiB/s (511MB/s)(5000MiB/10253msec)
  write: IOPS=97.4k, BW=381MiB/s (399MB/s)(5000MiB/13136msec); 0 zone resets

But, if I run only the read test on its own, with no write test being done at the same time (on either connector), it apparently drastically reduces my performance:
fio --loops=5 --size=1000m --filename=/dev/nvme0n1p1 --ioengine=pvsync2 --hipri --direct=1 --name=4kQD1read --bs=4k --iodepth=1 --rw=randread

Sample output:

  read: IOPS=75.1k, BW=293MiB/s (308MB/s)(5000MiB/17046msec)

I hope Wendell will be able to dig up the fio commands/files that he used in the past, as I feel like I must be doing something wrong here…
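
Worth noting: fio starts all jobs given on one command line concurrently, so in the combined run the read numbers are measured while the write job is also hitting the drive, which may be part of why the two runs are hard to compare. A variant that serializes the two phases with a stonewall, in case anyone wants to check whether that accounts for the gap:

fio --loops=5 --size=1000m --filename=/dev/nvme0n1p1 --ioengine=pvsync2 --hipri --direct=1 --name=4kQD1read --bs=4k --iodepth=1 --rw=randread --name=4kQD1write --stonewall --bs=4k --iodepth=1 --rw=randwrite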

Just to be sure: This motherboard explicitly shows a PCIe Advanced Error Reporting (AER, not ARI) option in the BIOS?

I’m asking since I’ve been using a few ASUS AM4 motherboards (Pro WS X570-ACE, ProArt B550-CREATOR, ProArt X570-CREATOR WIFI) and these don’t offer the PCIe AER option.

Contrary to that many ASRock AM4 motherboards do.

But on any AM4 system I’ve seen so far where PCIe AER is working, it only works on PCIe lanes that come directly from the CPU; errors on chipset PCIe lanes stay hidden.

It does show AER specifically. It was sort of hard to find, in “Advanced → AMD CBS → NBIO Configuration → Advanced Error Reporting (AER)”.

If I disable the option, then the struck-through lines no longer appear in dmesg:

[ 0.473305] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability LTR]
[ 0.982068] pcieport 0000:00:02.1: AER: enabled with IRQ 28
[ 0.982542] pcieport 0000:00:02.2: AER: enabled with IRQ 29

In my case, I am using an M.2 slot whose 4 PCIe lanes come directly from the CPU. I was careful to avoid chipset-based lanes.
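
For anyone wanting to double-check which side a slot hangs off, the sysfs path of the device shows the bridge chain; a CPU-attached drive sits directly behind a root port, while a chipset-attached one has extra switch ports in between (exact addresses vary by board):

# Bridge chain for the drive; a longer chain (extra switch ports) usually means chipset lanes
readlink -f /sys/class/nvme/nvme0/device
# Tree view of the whole PCIe topology
lspci -tv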

AER reports as working on my Pro WS X570-ACE with a Ryzen 5900X:

[me@home~]$ dmesg | grep -i aer
[    0.863356] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
[    1.112645] pcieport 0000:00:01.1: AER: enabled with IRQ 28
[    1.112794] pcieport 0000:00:01.2: AER: enabled with IRQ 29
[    1.112916] pcieport 0000:00:03.1: AER: enabled with IRQ 30

However, it does not with a Ryzen 5700G.

What’s up bud? I know it’s been a while since you posted this, but I found this searching for my problem. So, I’ve bought a PM1733 and an M.2-to-U.2 adapter and put it in the first M.2 slot on my ASUS Z690 mobo. I think I’m running at Gen4 speeds, but under any kind of load the drive errors out and remounts read-only. Any idea how to get the 1733 to run via my adapter without erroring out like this?
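
For anyone else hitting this, the relevant clues usually show up in dmesg around the time the filesystem flips to read-only; a rough sketch of what to grep for (the patterns are just guesses at the likely messages):

sudo dmesg -T | grep -iE 'nvme|pcie bus error|aer|remount'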


I just solved a similar problem myself after about 2 months. It turns out that the best combination thus far has been an M.2 adapter with a redriver and a very good cable that will definitely carry a PCIe 4.0 signal.

I used an OCuLink-to-OCuLink cable, but Micro SATA Cables also sells ones that have the other end connect directly to the drive.

I’m also eyeing a setup using an M.2-to-MCIO adapter and an MCIO-to-U.2 cable instead just to be futureproof. I noticed that the PCIe 5.0 solutions are offered using MCIO to the exclusion of older connectors.


Ordered that, and all the permutations. We’ll see.

I also got the Gen-Z variants, which are what the AMD Genoa systems use, so I am hoping that will really nail everything in terms of compatibility.


Incidentally, do you know if there are any U.2 cables that are less than 25cm? My Google-fu is coming up short.

The other end can be SlimSAS, OCuLink, MCIO, or Gen-Z 1C. Doesn’t make a difference to me.

Supermicro almost had it with their 45cm OCuLink (Right Angle) to U.2 PCIE with Power Cable (CBL-SAST-0955). The name is misleading because the picture shows that the power-carrying wires are 45cm. But the PCIe-carrying part of the cable looks to be short enough (less than 25cm) for my needs. The right-angle OCuLink end, however, faces the wrong way, and every single M.2-to-OCuLink adapter requires a straight connector with the beveled edge facing down towards the PCB.


Did you have any luck with that? I’m considering upgrading the 900p in my X570 rig to a P5800X, but have been struggling to find an optimal solution for connectivity, as my 4090 prevents me from using the 2nd PCI-E slot.

That’s awesome. Sad to see Optane killed off, but the free market is a fickle mistress. Micron nuked the Lehi, Utah fab for 3D XPoint a while back in February. Texas Instruments has since bought it and started developing it.

Where or what do you think will be Optane’s successor?

Did the video on the Gen-Z variant. Very stable.

Mixed results with OCuLink and everything else.

Hello,

I tested the latency of each M.2 and PCIe port with an Intel Optane P4801X SSD.
Latency for all M.2 and PCIe ports managed by the chipset is 12.1 µs.
Latency for all M.2 and PCIe ports managed by the CPU is 10.9 µs.

It is therefore necessary to avoid cables, and if possible avoid using the chipset ports, to gain speed. Today, manufacturers are making graphics cards with 8 PCIe lanes instead of 16, which is interesting for CPUs that only provide 20 PCIe lanes. See the new NVIDIA RTX A400 and NVIDIA RTX A1000 cards.

Note: the only reliable way to test the latency of each port on your motherboard is to use an Intel Optane SSD, because its performance stays consistent, unlike a NAND SSD’s.
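
For anyone who wants to reproduce this kind of per-port comparison, here is a minimal sketch of a QD1 random-read latency run with fio (the parameters are my own choice, not necessarily the exact ones behind the numbers above); the average and percentile completion latencies in the output are the figures to compare between ports:

fio --name=qd1lat --filename=/dev/nvme0n1 --direct=1 --ioengine=pvsync2 --hipri --rw=randread --bs=4k --iodepth=1 --runtime=30 --time_based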