Copy-and-paste shows the same results as the YouTube video.
That operation is single-threaded, so I wouldn't be surprised if it's well represented by CrystalDiskMark's Seq1M Q1T1 test.
That, or the card he's using is the limiter, because the P5801X in this video seems to get higher transfer speeds: https://youtu.be/UwgtQ1xdMuk?si=jye8kmC_qSfNDWzv . Threw me for a loop, ngl.
My OCuLink adapters barely fit, but the primary problem is actually the motherboard itself. The Micro SATA Cable adapters’ female connectors are so close to the end of the M.2 PCB that the cables have to be bent at a tight radius and pressed up against the PCIe bracket pieces/edge of chassis. It doesn’t help that the motherboard comes with some stupid armor I’ll never use because I don’t actually put M.2 SSDs into M.2 slots! The adapters are definitely not for any motherboard that comes with fancy armor and whatnot.
Did some testing this evening.
First I decided to do a "before" benchmark, using the passive (I think?) StarTech PCIe adapter I've been using until now. The slot uses CPU lanes, on a Threadripper 3960X:
I decided to do a few tests in a row, to minimize the risk of a spurious result:
These are all the standard "NVMe" tests in the settings menu.
Then I realized, wow, these tests are really CPU intensive. The Seq1M Q8T1 loads up one core of my Threadripper to just above 50%. The Seq128K Q32T1 loads it up to something like 85%.
The RND4K results are where it really gets crazy. The NVMe test uses 16 threads by default, and on my Threadripper all 16 of those are pinned. The Q1T1 test - of course - uses only one thread, and that one is also obviously pinned.
My bright idea was to alter the tests that seem to be CPU-limited, so that when I do my "after" test, a CPU limitation on this machine isn't masking any potential issues.
So I upped the RND4K Q32T16 test to Q32T24 (one thread per physical core; going above that and into SMT appeared to perform worse).
I also upped the Q1T1, this time to 48 threads, as it didn't seem to mind the logical SMT cores and performed better with them:
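To give a sense of what those custom profiles are asking the CPU to do, here's a rough Python sketch of the same idea: a pile of worker threads each hammering a test file with 4K random reads and reporting aggregate IOPS. The file path, size, thread count, and duration are placeholders, it uses plain buffered reads rather than whatever CrystalDiskMark does internally, and Python's GIL adds its own ceiling, so the absolute numbers won't match; it just illustrates why per-thread CPU becomes the limit on small-block tests.

```python
# Rough sketch of a multi-threaded 4K random read test (stdlib only).
# Placeholder path/size/thread count; buffered I/O and the GIL mean the numbers
# won't match CrystalDiskMark, but it shows where the per-thread CPU cost comes from.
import os
import random
import threading
import time

TEST_FILE = r"D:\cdm_testfile.bin"   # hypothetical file on the drive under test
FILE_SIZE = 1 * 1024**3              # 1 GiB test file
BLOCK = 4096                         # 4K blocks
THREADS = 24                         # e.g. one thread per physical core, as in the Q32T24 profile
DURATION = 10                        # seconds per run

def prepare():
    # Create the test file once, filled with incompressible data.
    if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
        with open(TEST_FILE, "wb") as f:
            for _ in range(FILE_SIZE // (1024 * 1024)):
                f.write(os.urandom(1024 * 1024))

def worker(counts, stop):
    # Each worker does queue-depth-1 random 4K reads until told to stop.
    blocks = FILE_SIZE // BLOCK
    done = 0
    with open(TEST_FILE, "rb", buffering=0) as f:
        while not stop.is_set():
            f.seek(random.randrange(blocks) * BLOCK)
            f.read(BLOCK)
            done += 1
    counts.append(done)

def main():
    prepare()
    counts, stop = [], threading.Event()
    threads = [threading.Thread(target=worker, args=(counts, stop)) for _ in range(THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    time.sleep(DURATION)
    stop.set()
    for t in threads:
        t.join()
    iops = sum(counts) / (time.perf_counter() - start)
    print(f"{THREADS} threads: ~{iops:,.0f} IOPS (~{iops * BLOCK / 1e6:,.0f} MB/s)")

if __name__ == "__main__":
    main()
```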
So far so good. To illustrate how these things can really load up the CPU, I took this screenshot:
I ran a few more experimental benchmarks after taking this screenshot, and I noticed something odd. The drive seemed to be getting a little bit slower with each run. I did notice that Windows Update was trying to install something, and while the OS is on a completely different drive, I thought maybe it was consuming CPU and slowing things down.
I didn’t think any more of it.
I shut down the system (though Windows insisted on installing the update first), and then went to install the redriver and SlimSAS cable.
To be apples to apples and not sabotage performance, I made sure to install the redriver in an M.2 slot with CPU lanes.
First I did three of the NVMe tests:
My thoughts: Hmm. It's a little slower than the PCIe card, but only very slightly. Maybe the cable length is at play here (mine is the silly, needlessly long one). More tests.
Wait a minute, it’s still getting a little slower.
And slower…
Uh, oh. Do we have a problem?
On to the custom tests.
Alright, so it keeps getting slower with every test. Something is very odd here, and I don’t think it is the redriver. I think it might be the drive itself.
Optane drives are renowned for being able to just keep on writing without sustained writes degrading performance, and these are read results anyway…
I wonder if the repeated 4K random tests are doing something to internally fragment the drive.
I've tried TRIMming it, which did nothing (I'm not even sure 3D XPoint needs trimming like NAND flash does).
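For anyone who wants to repeat that retrim step, something along these lines should do it; the drive letter is a placeholder, and it just shells out to PowerShell's Optimize-Volume with -ReTrim (run elevated):

```python
# Re-issue TRIM for the whole volume by shelling out to PowerShell's Optimize-Volume.
# The drive letter is a placeholder; run from an elevated prompt.
import subprocess

DRIVE_LETTER = "D"  # hypothetical letter of the volume on the Optane drive

subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"Optimize-Volume -DriveLetter {DRIVE_LETTER} -ReTrim -Verbose"],
    check=True,
)
```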
I opened up the Intel MAS Gui application, and used it to do a full diagnostic on the drive, which came up with a clean bill of health. It has 97% life remaining, and should be good to go. The drive remained cool throughout all of the tests, so I doubt there was any throttling going on.
I then tried doing a secure erase (figuring this might reset the drive somehow) but it completed way too fast for me to think it actually did anything. It was also ineffective.
So anyway, this redriver and cable seem to work, but something really fishy seems to be going on with my drive.
Has anyone else experienced this with their P5800x drives? Slower performance over time with repeated tests?
Does it do some sort of garbage collection offline to optimize things, and I just hit it with too many tests too fast, or is something more problematic going on?
I’m tempted to try to dd all zeroes to it and give it another try and see if that brings it back to where it was before.
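If I do go that route, it would be roughly the following (destructive, obviously; the device path is a placeholder and has to point at the right drive, which must not have any mounted volumes, and it needs admin/root privileges):

```python
# DESTRUCTIVE sketch: overwrite the entire device with zeroes (the dd equivalent).
# The device path is a placeholder -- triple-check it before running. On Windows the
# target would be something like r"\\.\PhysicalDriveN" with its volumes taken offline.
import sys

DEVICE = "/dev/nvme1n1"        # hypothetical device node for the drive being wiped
CHUNK = 4 * 1024 * 1024        # 4 MiB, sector-aligned writes

zeroes = bytes(CHUNK)
written = 0
with open(DEVICE, "r+b", buffering=0) as dev:
    try:
        while True:
            n = dev.write(zeroes)
            if not n:
                break
            written += n
    except OSError:
        pass  # hitting the end of the device typically surfaces as ENOSPC/EIO
print(f"wrote ~{written / 1e9:.1f} GB of zeroes", file=sys.stderr)
```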
Appreciate any thoughts!
Ah, so the only oddball thing Optane drives do is when they have been off for a long time (or think they have been off for a long time): the drive will media-recondition itself, which affects performance somewhat.
Optane is engineered around the assumption that it will very, very slowly lose information over time. So if it's been off a long time, the drive will, in the background, read and rewrite every sector. This background task takes no more than a couple of days if the drive is powered on and idle.
Ahh, thank you, that could explain it
I had been using this 800 GB p5800x as my main drive in my workstation, but a few weeks ago, I replaced it in the workstation with my 400GB p5800x, and set it aside in preparation for building my new game machine.
…until just now when I tested the redriver and cable.
So it has been off for a few weeks.
Any idea how much off-time it takes to trigger this mode?
This seems more likely to be the result of garbage collection cleaning up write-disturb effects from all the hammering of the Optane drive; those little cells get heated up to ~1000 degrees to make a write, and that thermal energy can disturb a cell's neighbors, making them harder to read, or even to rewrite if they get annealed into an "island".
So we do get the performance back, just over time, and then it's smooth sailing?
Can you explain a bit more for this layman?
That was my original thought, but everyone keeps saying that unlike NAND, 3D Xpoint can essentially do high IOPS writes all day every day without ever slowing down.
Though to be fair, even my “slow” speeds are higher than Intel’s specs of 7200 MB/s, so maybe a small slowdown during garbage collection like this is built into that spec?
But what about the write speeds? Aren't those falling behind the Intel spec of 6100 MB/s? Also, I did find this Reddit thread discussing a similar problem:
https://www.reddit.com/r/buildapc/comments/13mzak1/2x_optane_p5800x_raid0_slower_read_over_time/
I saw that one too.
It is unclear to me if that was some odd issue related to motherboard software RAID, or if it maybe was the same issue I am having in disguise, and the performance came back by letting the drives work through their remapping/garbage collection process.
As for me, I removed the redriver, and stuck the 800GB p5800x back in the PCIe adapter card, and have just left it alone to idle. I’m going to test it again in a few days and see how it performs then.
Ah, that's good, keep us posted. But could you also do some "real world" file transfer speed tests in Windows, so we can keep an eye on those as well and see if any improvements happen?
I’ll have to think about that. Honestly, I don’t know what I’d do a real world transfer to in Windows. Anything I am writing to is going to be slower for small files.
I’d also need to think about what to copy. Large files will just be sequential, and that is pretty uninteresting for these drives. There are faster sequential drives out there these days. It’s the small files and database stuff that these drives excel at.
Under Linux, I’d just send stuff to /dev/null, but I don’t know how to accomplish anything similar in Windows.
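The closest thing I can think of is a little script that reads everything and throws the data away, which is more or less what piping to /dev/null does. A sketch along those lines, where the source directory is a placeholder, and with the caveat that the OS file cache will inflate the numbers unless the data set is much bigger than RAM (or the cache is cold):

```python
# Rough "copy to /dev/null" equivalent for Windows or Linux: read every file under SRC,
# discard the data, and report the aggregate read rate. SRC is a placeholder; beware of
# the OS file cache making re-runs look unrealistically fast.
import os
import time

SRC = r"D:\testdata"    # hypothetical directory full of files on the Optane drive
CHUNK = 1024 * 1024     # 1 MiB reads

total = 0
start = time.perf_counter()
for root, _dirs, files in os.walk(SRC):
    for name in files:
        with open(os.path.join(root, name), "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
elapsed = time.perf_counter() - start
print(f"read {total / 1e9:.1f} GB in {elapsed:.1f} s -> {total / 1e6 / elapsed:,.0f} MB/s")
```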
Maybe if I set up a RAMDisk, and copy to it, but I haven’t used RAMDisk software in forever. I don’t know if what I have even works with Windows 11 anymore.
Back in 2012 I bought a copy of DataRAM Ramdisk, which supposedly came with lifetime updates.
Then in 2015 when I needed an update for a new version of Windows, they had conveniently removed that statement from their webpage, and wanted me to buy it again. I argued with their customer service. A lot. And eventually they gave me a new license in 2015 for the latest version.
This version is probably no longer any good either. Not looking forward to having to argue with them again…
Maybe there are some open source options these days?
I have close to 400GB of large and smaller files (videos, textures, 3D assets, project files) that would be an ideal test-case scenario. Send me your drive, I'll test and see, lol.
Again, what would you copy it to/from that is faster than the Optane drive (so as not to bottleneck the test)? Having the files is not the limitation.
I think you’d need to create a very large ramdisk to copy to for the read test.
For the write test, you’d have to somehow force Windows to do only sync-writes. (which I don’t know how to do)
There are probably ways to do all of this stuff, but it's always more complicated with Windows than with Linux.
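The crude approximation I can think of for the write side is flushing after every write from a script: os.fsync forces each chunk out to the drive on both Linux and Windows, which isn't the same as true unbuffered/write-through I/O but gets in the same neighborhood. A sketch, with placeholder path and sizes:

```python
# Sketch of an approximate "sync write" test: write a large file, forcing every chunk
# to stable storage with os.fsync. Path and sizes are placeholders; this is not identical
# to opening the file unbuffered/write-through, just a rough stand-in.
import os
import time

DST = r"D:\syncwrite_test.bin"   # hypothetical target file on the Optane drive
CHUNK = 1024 * 1024              # 1 MiB per write
TOTAL = 8 * 1024**3              # 8 GiB total

buf = os.urandom(CHUNK)
written = 0
start = time.perf_counter()
with open(DST, "wb", buffering=0) as f:
    while written < TOTAL:
        f.write(buf)
        os.fsync(f.fileno())     # push the chunk out of the OS cache to the drive
        written += CHUNK
elapsed = time.perf_counter() - start
os.remove(DST)
print(f"{written / 1e9:.1f} GB in {elapsed:.1f} s -> {written / 1e6 / elapsed:,.0f} MB/s")
```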
Certainly.
Basically, when one cell (bit) is written, all the cells adjacent to it (which may or may not be sequential logical block addresses) change their state a little, making them slower to read/write when it comes time for them to be accessed.
The annealing "island" I spoke of is some kind of metastable crystal structure in the phase-change material (I'm actually not sure how prevalent this effect is in Intel's implementation of PCM; ideally this material phase wouldn't exist at all) that would contribute to the slower read/write speed.
Hi, I've tried almost everything except the expensive M.2 redriver adapters, and I'm interested in PCIe 5.0 more than PCIe 4.0. On the PCIe 4.0 side of things, I find that the onboard SlimSAS 4i with this cable gives me full PCIe 4.0 speeds.
This adapter produces about 30 Windows Event Viewer error entries per second:
That was with 2 Optane and 2 Solidigm drives. Performance was a bit shoddy, with the odd non-BSOD Windows crash.
Then I have tried this:
Which does not work in any way, shape, or form with any drive.
I have tried this:
Which somewhat works, but has issues, and cabling is a nightmare.
I have also tried this:
I am now down to wondering if I should try this:
(sorry for the swedish text :-)).
I do have MCIO 1m internal cables for this from the same vendor (so should work):
Anyway, it’s a damn hassle
From my research, this Amazon.com one works perfectly for Gen 4 speeds, and for Gen 5 we need this https://www.amazon.com/gp/product/B0CSKQL61R?smid=A1UCLUF7KW7AYG&psc=1 with https://www.amazon.com/gp/product/B0CPVRT5M4?smid=A1UCLUF7KW7AYG&psc=1 . The first adapter has been tested and confirmed (even checked for WHEA errors) by multiple sources. The second combo, being Gen 5, has been tried by few people, but there's a pretty good chance it will work flawlessly.
Oh, and by the way, I don't know if anyone has addressed it earlier, but some drives won't work with these adapters because of the power disable pin feature on some SSDs. To deal with that, we need to mask off some pins that carry current (using normal insulation tape is fine; we need to mask off the 3 pins that carry power). Because of this issue, the SSD won't even power on / won't be visible in the system. It's explained better in this video: https://www.youtube.com/watch?v=IeUWUVH1D20&t=127s . One SSD that has this problem is the Micron 7450.