Critiquing: Really shitty AMD X570 (also B550) SATA SSD RAID1/10 performance - Sequential Write Speed Merely A Fraction Of What It Could Be :(

It’s not either/or, but both:

The X570 one is my personal daily-driver system; B550, due to its lack of PCIe Gen4 support and its lower number of chipset PCIe lanes, couldn’t handle my intended configuration. The B550 one is going into the living room for HTPC purposes and light VM experimentation on the side.

Note: During the X570 testing no additional NVMe drives were connected (two Samsung PM1733 PCIe 4.0 x4 drives in a U.2 backplane will be the final configuration) and the Thunderbolt 3 AIC was idling with nothing attached to it, so as not to use up the available bandwidth between the CPU and the X570 chipset.
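
For context, a rough uplink budget (a back-of-the-envelope sketch; the PCIe figures are the standard raw rates, and the ~550 MB/s per SATA SSD is an assumed typical value, not a measurement from this thread): X570’s chipset uplink is PCIe 4.0 x4 and B550’s is PCIe 3.0 x4, and even four saturated SATA SSDs fit comfortably into either, so the uplink itself shouldn’t be the bottleneck here:

```python
# Chipset uplink budget vs. SATA SSD demand (rough sketch, assumed values).
PCIE3_X4_GBPS = 3.94   # PCIe 3.0 x4 raw: 8 GT/s * 4 lanes * 128/130 / 8 bits
PCIE4_X4_GBPS = 7.88   # PCIe 4.0 x4 raw: 16 GT/s * 4 lanes * 128/130 / 8 bits

SATA_SSD_MBPS = 550    # assumed typical SATA SSD sequential throughput
drives = 4             # e.g. a 4-drive RAID10

demand_gbps = drives * SATA_SSD_MBPS / 1000
print(f"SATA demand:          ~{demand_gbps:.1f} GB/s")   # ~2.2 GB/s
print(f"X570 uplink headroom: ~{PCIE4_X4_GBPS - demand_gbps:.1f} GB/s")
print(f"B550 uplink headroom: ~{PCIE3_X4_GBPS - demand_gbps:.1f} GB/s")
```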

So, I’m still not sure about your use case here. You want a workstation with a mirrored RAID to protect against drive failures.

Why not simply buy two NVMe drives and RAID1 your OS drive instead of RAID10ing SATA drives? Then you can make your SATA drives part of a slower JBOD or RAID6 setup. A 2x NVMe RAID1 configuration will perform much better and be more reliable than the 4x SATA RAID10 configuration no matter what.

It seems to me like you’re fixating on solving the wrong problem here, but that could just be me being stupid. Regardless, I do agree X570 should be able to do what you ask of it.

  • The NVMes are going to store the data that is actually being handled/transformed, which is where the NVMe speed advantage really comes into play. These won’t be in a RAID configuration since I don’t have the free funds to “just buy larger ones until the size limitation no longer matters” (the PM1733s are already the 7.68 TB models).

  • The SATA RAID1 or RAID10 is for the OS and installed programs, on drives that will always stay in the system. Installing them on the same drives or logical volumes as the swappable NVMes would be, in my opinion, a stupid move if swappable storage is planned for from the beginning.

  • I honestly didn’t consider SATA to be an issue in 2021 (or 2019, considering X570’s introduction).

  • I still cannot find the error in my logic in this thread’s issue cheat-sheet summary a few posts above.

  • If the points there get taken apart by an objective argument, I’m all for that, since I’d learn something new. Otherwise it feels a bit like complaining about an Apple product in an Apple-centric forum, where emotions run a little hot when their gods are criticized :upside_down_face:

Yay this seems to be a workaround:

Created a RAID1 array with standard settings on the B550 system with the latest AMD RAID management software, let it fully initialize, shut down the system, moved the SSDs over to the X570 motherboard and installed Windows on it:

Still not as fast as on B550, but the sequential write results are at least within expectations/usable for a normal computer, without the RAID array being maxed out at 100 % activity by 2.5 GbE transfers.


To be honest, I’m surprised we’re still seeing systems shipping with a large number of SATA ports on-board in 2021. It’s probably feature tick-box dick-measuring more than anything else. They’re cheap to add… I don’t know of anyone using, say, all 6 SATA ports in 2021 (which seems to be a commonly included number), because 99% of cases simply don’t have the drive bays, for a start.

Personally I’d rather have the lanes dedicated to a slot. The last version of the SATA interface standard is over 10 years old at this point; it’s legacy technology.

Sorry that doesn’t really solve your issue, but it’s how I feel about SATA (as an aside).

B550 might work because the RAID controller/chipset is slightly newer and more aware of SSD RAID, and chooses appropriate stripe/block/etc. sizes to avoid the read-modify-write cycle.

But at this point in the SATA interface’s life cycle, I doubt many vendors are putting much R&D into it, especially in the consumer space. SATA, for most people, is for connecting a bunch of slow disks, and it will do that job.

Most people chasing RAID SSD performance/reliability will be using a PCIe card for it. It’s unfortunate, but you’re an extreme edge case here. I understand what you’re trying to do and why (I’ve considered it myself, but ended up just putting my SSDs in a JBOD/individual-disk config) - but I very much doubt any vendors will be working on this sort of configuration.

For my purposes, individual SSDs were “fast enough” and my data is mostly disposable (test lab VMs)…


I can at least rest easy that I’d be stepping on bugs no matter whether I go with SATA or PCIe storage:

@wendell
Also a bit annoying: AMD seem to have broken the background service for their RAID management Windows software with the May 25th, 2021 release, meaning you have to manually launch the application to be informed about the array’s current state while the system is running.


I’m having a hard time fully understanding this:

If an existing RAID array, with its parameters chosen and locked in during the creation process (AMD’s recommended and default setting for SATA HDDs and SSDs is a 64 kB stripe size), can be swapped between the X570 and B550 platforms, and these platforms perform quite differently with the very same SSD units and cache settings, does the point about avoiding read-modify-write cycles still matter?

I’m not trying to egg you on; I just want to understand as much about this as possible, from the perspective of a simple GUI user, so as not to repeat similar performance unpleasantries in the future by unknowingly choosing sub-optimal parameters for storage arrays.


Another thing remains that I cannot quite understand:

  • The Samsung 860 PRO 2 TB RAID1 array shows that even on X570, 1,000 MB/s sequential read is possible.

BUT

  • The Micron 5300 PRO 3.84 TB RAID1 array on B550 can also do 1,000 MB/s sequential read, but when taking it to X570 it gets slowed down to 900 MB/s sequential read. I’ve never seen faster results in any of my numerous benchmark runs on X570, and I don’t like 10 % of the speed just being pissed away, especially since SATA drives are only half-duplex.

Can someone shed some light on how a SATA controller talks to a SATA drive in a way that might explain this behavior?
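
For reference, here’s a rough ceiling calculation (a sketch: the SATA III line rate and 8b/10b encoding are standard, but the protocol-overhead percentage is my own assumption). Two mirrors read in parallel should top out a bit above the observed 1,000 MB/s, so the 900 MB/s X570 result sits measurably below the RAID1 read ceiling rather than being noise:

```python
# SATA III link budget for a two-way RAID1 read (rough sketch).
LINE_RATE_GBPS = 6.0       # SATA III line rate in Gbit/s
ENCODING = 8 / 10          # 8b/10b encoding: 20 % of the line rate is overhead
PROTOCOL_OVERHEAD = 0.08   # assumed ~8 % for FIS/command overhead (estimate)

per_drive_mbps = LINE_RATE_GBPS * ENCODING * (1 - PROTOCOL_OVERHEAD) / 8 * 1000
print(f"Per-drive ceiling:             ~{per_drive_mbps:.0f} MB/s")      # ~550 MB/s
print(f"RAID1 read ceiling (2 drives): ~{2 * per_drive_mbps:.0f} MB/s")  # ~1100 MB/s
```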


I just noticed this setting on the B550 ROG Strix, maybe this could unlock TRIM?

Hmmm, it won’t let me RAID1 all 6 drives, but it will let me RAID0 or RAID10 them.


So the numbers we have so far aren’t suuuper bad, though not as high as I’d expect, since the RAID1s should ideally be faster:

[benchmark screenshot]

This is a 6-drive stripe of 3 mirrors:

This benchmark ran while the array was still initializing, which chews up about 100 MB/s of write capacity:
[benchmark screenshot]

We disabled the read-ahead cache in the GUI and disabled Windows write-buffer cache flushing in Device Manager (so far).

We will do more testing (including Linux testing, to see whether the hardware is capable of doing better or whether it’s the RAID solution at fault).


@GigaBusterEXE

I had thought the same, but for me it made no difference; TRIM still wasn’t working. Even when “SSD” was selected, up to the previous RAID driver release the array would be recognized by Windows as a mechanical HDD. Updating the driver fixed that, but there’s still no progress regarding TRIM.

You can easily check this in Windows: open the defragmentation utility; mechanical HDDs actually get defragmented, while SSDs get trimmed.

With the latest RAID drivers, an AMD SSD RAID array gets correctly listed there as an SSD, but if you try to start the process it will very quickly switch to “Optimization not available” (I don’t know the exact wording in English right now). If you run this with normal SATA SSDs in AHCI mode you can actually see a note that SSD TRIMming is being done.

If you don’t want to rely on a Windows utility, there is also a little command-line program called “TRIMCheck” to manually check whether TRIM is actually working.
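
Another quick sanity check, if it helps (a sketch; `fsutil behavior query DisableDeleteNotify` is a stock Windows command, but it only reports whether Windows itself issues TRIM, not whether the RAID driver passes it through to the drives - which is exactly the gap TRIMCheck covers):

```python
# Query whether Windows is issuing TRIM at all (OS side only).
# DisableDeleteNotify = 0 means TRIM/unmap is enabled for the file system.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# Even with TRIM enabled here, a RAID driver can still silently drop the
# commands before they reach the SSDs, so a drive-level check is needed too.
```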

Also, the HDD/SSD/BOTH option doesn’t even appear in the Windows RAID management software when creating an array (only filter buttons to quickly show specific types of drives), so I imagine this is some legacy option that has been carried over the years and has no function at present.

@wendell

Not “suuuper bad”…?!

A RAID10 array of three RAID1 pairs should give you around 5-6x single-drive sequential read and 2-3x sequential write performance - what SSDs did you use, the Intel 320 series (dramatization)? :upside_down_face:
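
To put rough numbers on that scaling (a sketch assuming ~530 MB/s per SATA SSD, a typical figure rather than anything from the screenshots, and ignoring controller overhead):

```python
# Ideal ballpark for a 6-drive RAID10 (stripe of 3 mirrors), sketch only.
SINGLE_DRIVE_MBPS = 530   # assumed typical SATA SSD sequential throughput

drives, mirror_pairs = 6, 3
# Sequential reads can be serviced by all drives; each write lands on every
# mirror pair once, so only the pairs count for write scaling.
read_ceiling = drives * SINGLE_DRIVE_MBPS         # ~3180 MB/s ideal
write_ceiling = mirror_pairs * SINGLE_DRIVE_MBPS  # ~1590 MB/s ideal
print(f"Ideal sequential read:  ~{read_ceiling} MB/s")
print(f"Ideal sequential write: ~{write_ceiling} MB/s")
```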

I have zero experience with SATA QLC SSDs (I just saw that in the screenshot you posted), but wouldn’t these be the worst case for testing this right away, since they reeeally suffer from the initialization process writing to the entire drive?

Thank you both very much for actually looking at these issues!


Don’t forget the write sync was running at the same time, so the benchmark was doing mixed reads and writes.

I’m going to try Linux too, which I’m sure will give us a crystal-clear picture of what’s possible. Once this infernal sync is complete…


Kinda interesting, although those numbers don’t really look that bad to me either.
Not sure if it would be interesting to test this on Intel Z590 as well?

@wendell

I think I just found something:

I wanted to check the performance on B550 with the write cache completely disabled and switched through the settings; as expected, this halved sequential write speeds (RAID1 with 2 x Micron 5300 PRO 3.84 TB) from 500 MB/s to 250 MB/s.

The problem now is that re-enabling the write cache no longer restores the original performance :frowning:

Maybe write-cache driver bugs are causing the issues I first experienced on X570; those slower sequential write results would match the disk write cache not being used, even though the GUI reports it as enabled…


rebooted since re-enabling?

I can pretty much confirm that for RAID1 there is no performance benefit with the AMD drivers.

A RAID0 of 3 drives gives more or less identical performance to a RAID1 of 3 drives, and to a stripe of 3 mirrors.

That seems… odd…


Yes, of course - I may be a GUI normie, but I’ve been using Windows since 3.11 :wink:


Yay: Uninstalling AMD’s RAID management software, rebooting, manually deleting the C:\Program Files (x86)\AMD\RAID Software folder, reinstalling the RAID management software, and rebooting once again restores the write performance :face_vomiting:

I don’t think AMD’s software engineering A-Team is the one behind the RAID software…
…which is great for something handling your data :stuck_out_tongue:


Don’t AMD’s software team just reskin other products?


That’s a yes for “StoreMI” from Enmotus & the FuzeDrive guys (another bag of buggy hurt I wanted to check on*, but I should make a dedicated thread for that since it has 100 % nothing to do with this).

*I used FuzeDrive about a year ago; a motherboard UEFI update killed their driver, and it took a while until that was figured out. I have yet to check whether that old FuzeDrive volume can still be accessed or whether it was destroyed (I was extremely pissed at the time, since the messages from their support carried the subtext that I could never have used their product in the way I described, meaning I must have cracked it or done something else to it).

But I don’t think the classical chipset RAID software is coming from an external team. If so, then AMD’s QC people who review the incoming work should be fired (and I don’t even mean that sarcastically).


The B550 thing was pure speculation on my part, based on your claim that it works better on B550.

But the 64k stripe size right there highlights why your 4k write figures are so bad.

If you’re writing 4k at a time, a whole 64k block needs to be read, modified and written out for only 4k of it to be changed.

i.e., every time you do a 4k write, you’re not just writing 4k. You’re reading 64k, modifying it, and writing 64k back out - because the RAID won’t write out 4k directly; it works in 64k blocks.

So that’s 128k (64k read, 64k write) of IO at the drive level for a simple 4k write.
