How fast is a P5800X Optane?

Thanks for the details!

Have you by chance had any experience with basic Gen4 PCIe NVMe switches without the SATA/SAS part?

I’m thinking of the Broadcom P411W-32P, x16 to “32” lanes; the only thing I don’t like is the uncertainty regarding the cables.

Broadcom’s older HBA 9400 line, which also supports NVMe, used non-standard U.2 cables, and the cables Broadcom lists for the P411W-32P are all 1 m in length, which sucks if you want to build a compact DIY NVMe drive shelf with Icy Dock backplanes (a new revision uses OCuLink for Gen4 instead of U.2).

(I really dislike proprietary cables :frowning: )

An 800 GB drive with 100 PB of endurance doesn’t really fit a typical use case; yes, it might run for ages, but a person wouldn’t keep such a drive running in a computer for more than 10 years before upgrading to a larger capacity, even if only as a cache drive.

Where it would work is automation, like car logging drives, where hitting the max write cycles on a TLC drive bricks the whole car…

1 Like

Hey all, I found a solution that has been working so far, using a cable and adapter. I’m a new user here so it’s not currently letting me officially link…

The one cable I found on Amazon that is listed as supporting PCIe 4.0 speeds:

LINKUP - Internal 16G U.2 Cable (85Ω 85ohm PCIe Gen 4 Mini SAS HD to U.2/SFF-8643 to SFF-8639 Cable) with SATA Power

The M.2 adapter I used is basically the only black-colored one that is 110 in size… again, sorry, it’s not letting me link.


2 Likes

@CoreFX

Do you have a system that supports PCIe AER at the UEFI level, and is the P5800X connected via CPU PCIe lanes?

Getting PCIe Gen4 SSDs to “work” with M.2-to-U.2 adapters (SFF-8639 for a direct SSD connection or SFF-8643 cables for a backplane) isn’t “hard”; the pain comes when you look at the error logs.

On Ryzen systems only the CPU PCIe lanes properly support AER, and a standard CrystalDiskMark benchmark run with a Samsung PM1733 that tops out at 7,400 MB/s sequential read causes roughly 50 PCIe errors with passive adapters and cables without PCIe redrivers (the best scenario I’ve seen so far). :frowning:

In Windows these errors are logged as WHEA ID 17 entries.
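
For anyone chasing the same thing on Linux instead of Windows: assuming AER is enabled in firmware and the kernel has PCIe AER support, corrected link errors from a benchmark run should show up in the kernel log, and lspci can confirm the drive exposes the AER capability (the device address below is just a placeholder):

sudo dmesg | grep -i aer
sudo lspci -vvv -s 01:00.0 | grep -A3 'Advanced Error Reporting'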

2 Likes

I would; I would love nothing better than to have a life-long permanent storage device.

If we get to a point where everyone needs _ TiB for normal usage, I would still like to have all my high-density data (including plain text things like personal notes, diary (if you keep such things), contacts, etc.) on such a drive, almost like an annex of my brain.

Granted, if one treats a storage drive like this, it needs to be well encrypted, so one would need to remember that encryption key very well, especially if it is cycled every few decades. Over multiple decades, cycling keys would likely be a necessity, either to limit potential leakage by the devices used to access the drive in that time, or as part of re-encrypting with safer algorithms.

Though if the drive really is so long-lived, even if the password is forgotten one could keep it around for many years without worry, in case the password is later remembered.

One could accomplish something similar with an encrypted partition/disk-image or ZFS dataset that one carries over from system to system in the same way, but I think having a physical drive would reduce the need to micromanage this kind of thing.
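
For what it’s worth, the portable encrypted image idea is straightforward to sketch with LUKS; the file name, size, and cipher below are purely illustrative, not a recommendation:

truncate -s 64G archive.img
sudo cryptsetup luksFormat --type luks2 archive.img    # set the long-lived passphrase here
sudo cryptsetup open archive.img archive && sudo mkfs.ext4 /dev/mapper/archive
# years later: rotate the passphrase and/or re-encrypt with a newer cipher
sudo cryptsetup luksChangeKey archive.img
sudo cryptsetup reencrypt --cipher aes-xts-plain64 archive.img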

Even outside of daily use, a nigh-permanent drive would let you keep a long-term, no-maintenance backup with family, a friend, or in a safe-deposit box (if you have the funds for such a thing).

As I see it, assuming I am not missing any fatal caveats, Optane might be the first credible replacement for good ink and paper for long-term storage; if I had the money, I would want to test conformal-coating the internals to see if that could further stretch longevity.

1 Like

The problem is that Optane is only rated for 3 months of data retention, just like enterprise SSDs. Your data might be safe for longer, but if you don’t connect your P5800X to power here and there, it’s gonna be bad news.

3 Likes

@wendell

Can you point me at a seller for that PCIe Gen4 x8 bifurcation carrier AIC for two U.2 SSDs that has been working fine for you?

IMO the real flex here is:

image

2 Likes

I’m planning a P5800X with an Ableconn PEXU2-132 adapter on an X670E motherboard. Should I put it in a CPU or chipset PCIe slot? This is what I have:

CPU: 2 x PCIe 5.0 x16 slots (support x16 or x8/x8 modes)
Chipset: 1 x PCIe 4.0 x16 slot (supports x2 mode)

One of the CPU slots is for an RTX 4090; the rest are open.

Looking at the block diagram of the Gigabyte X670 Aorus Master, it’s certainly interesting.

It looks like there’s a lot going on with the x4 chipset link, so any task that’s needy enough to max out a P5800X would probably notice bandwidth contention issues. In practice, most people are hard pressed to stress their NVMe drives.

On the other hand, I bet that if you tested the 4090 in both x16 and x8 modes, you’d likely see only 1-3 frames of difference at worst. So you could likely safely go x8/x8 and give the P5800X the other slot, if that’s worth it to you.

1 Like

Sorry, I should have been more specific about the motherboard: it’s the Asus “ProArt X670E Creator WIFI”. It has two x16 slots for the CPU, so I think I can put the 4090 on one and the P5800X on the other.

I had another thought: RAID 0 is great, but with SSDs it has the downsides of 1) increased risk and 2) worse random read/write. The P5800X mitigates both of those, yes? Is it possible to run two P5800X in RAID 0?

In that case, I wonder if I could get both P5800X connected to the CPU? As mentioned, on the CPU I’ve got only one free x16 slot, but I’ve also got two free M.2 (PCIe 5.0, 4x). I see there are M.2 to U.2 adapters, which would mean running a cable to the P5800X. Above Wendell says, “a cabled adapter will not work”. Is that really a no go? Isn’t the drive designed to be plugged into a U.2 with a cable?

I found this x8 PCIe 4.0 dual U.2 AIC that looks perfect for running two P5800X. That’d let me put them both on CPU lanes, and still have two M.2 slots on the CPU open. I bought it, so we’ll see!

Veidit and aBav.Normie-Pleb were looking for this in a now closed thread (I wonder if linking them pings?).
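
On the RAID 0 question: two Optane drives stripe the same way any other pair of NVMe devices would. A minimal Linux sketch with mdadm (device names are just examples) would be:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
sudo mkfs.xfs /dev/md0
cat /proc/mdstat    # sanity-check the stripe

Keep in mind striping mainly helps sequential and high-queue-depth throughput; 4k QD1 latency is still bounded by a single drive.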

All Ryzen 7000 motherboards should be very similar. They absolutely do not have 32 lanes to supply two x16 slots.

What happens is: if only the first slot is occupied, it gets the full x16 lanes. If both slots are occupied, the board should automatically bifurcate and give x8 lanes to each slot.

I would look very closely at the manual to see whether that second slot can further bifurcate from x8 to x4/x4; otherwise the card you mentioned in the other post won’t allow both drives to be seen. The slot must support bifurcation down to x4, or you need to get an (expensive) card with a PLX chip that does PCIe switching.

Edit
It may actually be supported. It looks like Asus insists on renaming bifurcation to “RAID mode”.

Page 57
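
Once that mode is set and both drives are on the card, a quick way to confirm they both enumerated (assuming a Linux environment with nvme-cli installed) is:

sudo nvme list
lspci -nn | grep -i 'non-volatile memory'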

1 Like

It’s 2022 and Asus still don’t put topology diagrams in their motherboard manuals :neutral_face:

4 Likes

Thanks for pointing that out, I’m not super familiar with how this works. The manual shows:

AMD Ryzen™ 7000 Series Desktop Processors*
2 x PCIe 5.0 x16 slots (support x16 or x8/x8 modes)

Seems you are right: using the 2nd CPU slot makes them both run at x8, bummer. I found some anecdotes about losing ~10 FPS running a 4090 at x8, while still getting 100+ FPS at 4K. That sounds OK for my needs. Edit: here’s a thorough review showing the difference is relatively minimal.

The next question is: can the 2nd slot, running at x8, bifurcate to x4/x4? The manual links to this page. It says it’s specifically about a “Hyper M.2” AIC, so it’s not clear if it applies to other cards. Maybe? Anyway, it shows:

ProArt X670E Creator WIFI
PCIEX16_1: X4+X4+X4+X4 or X4+X4
PCIEX16_2: X4+X4
PCIEX16_3: X2

Does this indicate slot 2 supports bifurcation to x4/x4? Does it seem likely the dual U.2 card I found will work in slot 2 with a 4090 in slot 1? It seems silly to only give specifications for the Hyper M.2 AIC; I would hope that the same applies to any AIC.

BIOS manual says:

Use [PCIE RAID Mode] when installing the Hyper M.2 X16 series card or other M.2 adapter cards. Installing other devices may result in a boot-up failure.

Seems like there is hope for the dual U.2 card, though no guarantee.

Yes, it should all work fine; just put the PCIEX16_2 slot into PCIe RAID mode and it will bifurcate the PEG lanes to x8/x4/x4.

Don’t worry too much about running the GPU at Gen4 x8. It should be fine unless you are constantly churning VRAM and need to maximize throughput to shave a few minutes off a long render or what have you. The nice thing is that you’ll be ready to run a future gen GPU at Gen5 x8.
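
If you want to double-check what the GPU actually negotiated after dropping to x8, nvidia-smi can query it directly (treat this as a sketch; the link may report a lower generation while the card is idle due to power management):

nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv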

1 Like

The “Hyper M.2 AIC” is just Asus’s own M.2 bifurcation card; you can use that card in other systems with bifurcation, or use other bifurcation cards in the Asus system. There are no proprietary “security features” on these that prevent you from using them interchangeably. I myself have both a 4-slot Asus M.2 card and an Ableconn U.2 adapter that I’ve used in ASRock and Tyan systems. When the Asus adapter came out, it was the best-value M.2 bifurcation card available, though there is finally a lot more competition now.

The single caveat is that some bifurcation cards aren’t going to be able to keep a clean enough signal for PCIe 4.0. In practice, if the card lists 4.0 compatibility then it’s a pretty safe bet it’ll just work.

Cards are much more reliable at sending a viable signal to drives than cables are.
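
One way to tell whether a given adapter or cable is holding Gen4 or silently downtraining is to compare the drive’s LnkCap against LnkSta in lspci (the device address is a placeholder; LnkCap is what the drive supports, LnkSta is what was actually negotiated):

sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'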

Does this indicate slot 2 supports bifurcation to x4/x4?
Yeah, that seems correct. Were I buying it for that purpose, I’d see that as confirmation it’ll do just that.

1 Like

Thanks guys! That card and two P5800X 800GB are now on the way! I’m hyped! None of the P5800X reviews show RAID 0 results, so that’ll be interesting. There weren’t any 1.6TB in stock anywhere, but 2x800GB will be enough for me (and cut the price nearly in half).

Hey @wendell, could you please share the fio commands that you used to obtain your 4k QD1 results?

Right now, I am getting better 4k QD1 performance from a PCIe 3.0 M.2 → U.2 connection than from a PCIe 4.0 connection (using an Ableconn card) when running this command:

fio --loops=5 --size=1000m --filename=/dev/nvme0n1p1 --ioengine=pvsync2 --hipri --direct=1 --name=4kQD1read --bs=4k --iodepth=1 --rw=randread --name=4kQD1write --bs=4k --iodepth=1 --rw=randwrite

It is about 7% faster when using PCIe 3.0, on an X670E motherboard. I would be interested to try this with your fio command to see if this trend holds.

I think it’s posted here somewhere, will dig it out later. PCIe 4 into the CPU should be best unless there are intermittent PCIe errors. Turn on AER in the BIOS.

It would also be worth setting the Ableconn card’s slot to PCIe 3 to see if it’s just a PCIe 3 vs. PCIe 4 thing. If the Ableconn at PCIe 3 performs the same as the PCIe 3 adapter, then the PCIe 4 connection is erroring.