Critiquing: Really shitty AMD X570 (also B550) SATA SSD RAID1/10 performance - Sequential Write Speed Merely A Fraction Of What It Could Be :(

To be clear, when I mentioned firmware, I wasn’t referring to updated firmware for the SSDs themselves.

Rather, that proper SSD array vendors write (or contract out to the SSD vendor) their own custom firmware for the drives to work with their arrays. I know they (EMC and others) have been doing this for decades with spinning drives, and I’m sure SSDs need even more tweaking…


Yeah, this can be a real issue when using second-hand SAN drives in your own projects.

Sometimes it works: I’ve got some old Nimble drives in my Unraid that came out of a WEEE bin.

I’ve had customers fall down on the reverse too, buying cheap (but identical) disks on eBay for their EqualLogic or whatever and finding they don’t work because they lack the right firmware sauce.


Will test two units of Samsung 860 PRO 2 TB SSDs next; to my knowledge they are the fastest consumer-grade SATA SSDs with proper MLC NAND.


Looking forward to the results.


Yeah, I’d bet what’s left of my soul on X570 having issues with SATA :frowning:

Tested two Samsung 860 PRO 2 TB SSDs with X570 SATA in AHCI mode, and the sequential write results are suspiciously close to those of the Micron 5300 PRO 3.84 TB, even though these SSDs should perform somewhat differently.

Reference the results below with those in the thread above:

Ironically the best sequential results can be achieved by putting the SSD in a cheap-ass USB 10 Gb/s enclosure that hangs off of the Thunderbolt 3 AIC that itself is connected to the X570’s chipset PCIe lanes:


Yeah, as expected the 860 PROs don’t give a f*** about being 99 % full: performance is basically the same as when empty, within margin of error. The only way to slow them down is to insulate them thermally so the SSD controller’s temperature rises. That hasn’t happened under normal circumstances though, even with the two 860 PROs placed where there is no case-fan airflow.

Will erase them and then initialize an X570 SATA RAID1 to compare them to the Micron 5300 PRO 3.84 TB SSDs; unfortunately I don’t have four units at hand to also check RAID10 performance.


Was the following also fixed?

And would this also apply to NVMEs using the AMD raid?

  • No, at least for SATA RAID arrays TRIM is still not active;

  • Currently I don’t have free NVMe SSDs for experimenting, but of course, depending on the SSD controller, NAND-based NVMe SSDs are also f’d if TRIM is not working properly;

(one of the reasons I switched over to enterprise-grade SSDs for both SATA and NVMe application since I don’t like unpredictable performance)

  • My next test is to take the RAID arrays that were created on an X570 motherboard and test them connected to a B550 motherboard (ASUS ProArt B550-CREATOR);

  • B550’s individual port’s SATA performance in AHCI mode is better than X570’s so I’m curious how the very same RAID array performs when using it with a different platform (if it is at all possible, but my gut says it should be compatible);


I just built an X399 homelab for testing junk (quad GT 740s in different auto-SLI configurations, Teslas) and I got my hands on two identical enterprise NVMe SSDs.

This might be something I test, but I’ll have to figure out a bunch of the basics first

First order of business is to see if the controller overheats with the board’s heatsinks.


@wendell

Whenever the new “X570S” motherboards get reviewed, I’d like to see some SATA comparisons against original X570 motherboards’ performance.


Here are some RAID1 benchmarks done on an ASUS ProArt B550-CREATOR (UEFI 2401), with a much slower APU (4750G) instead of a 5950X.

  • As expected I could take the SSDs that house the fully initialized RAID1 arrays from the X570 motherboard and plug them into the B550 one and it just works normally, not even a notification;

  • Here, too, the speeds are pretty much as initially hoped for the respective SSDs, meaning I consider it confirmed that X570 is limiting SATA performance;

  • I just noticed I forgot to take screenshots of the RAID1 benchmarks on the X570 system; I’ll make them later. The B550 system is currently hammering the SSDs so I can check its stability, since I want to use it as a small Windows HTPC with VMware Workstation desktop virtualization home-lab stuff on the side, intended to run 24/7;


New AMD chipset drivers have been released today, will check them out!


Not much change.

Here’s finally a RAID1 screenshot of B550 and X570 for direct comparison. As mentioned, the benchmarks are of the very same RAID array/SSDs moved between the X570 and the B550 motherboards; you can nicely see the dip in sequential write speed of about 10 %.

Since the Samsung 860 PROs are the fastest SATA SSDs I got I guess it makes the most sense to look for bottlenecks with them.

But these results are “pretty good” for X570 SATA RAID1; still no idea why the Micron 5300 PROs tank so much on X570 but work as expected on B550. :confused:

Currently initializing a RAID1 with two Microns on the B550 motherboard with the latest version of AMD’s RAID management software (previously had been initialized with the now outdated version), then I’ll check its performance there and move it over to the X570 motherboard.

As per my posts above - it looks to me like you’re seeing read-modify-write cycles when trying to write, which is tanking performance.

This can happen with spinning disks if you make incorrect choices regarding sector/stripe/filesystem block sizes, but the correct numbers are much more well known in traditional hard drive land.

SSDs are still new, the firmware does lots of stuff in the background, and it’s very much an edge case.

There’s a reason enterprise flash arrays use their own custom os/firmware/etc.

Also this.

You’re very much in “edge case” land and the knowledge isn’t commonly available for what you’re trying to do.

I do have enough experience with enterprise storage however to tell you that it isn’t as simple as you are expecting it to be, and the symptoms you describe (poor write, read ok) are consistent with the read-modify-write cycles I describe above.

Your hardware is probably working just fine and not causing the issue; it’s simply being instructed to do a lot more work, because the multiple layers of abstraction don’t line up and the incorrect sizing of the various block sizes involved causes read-modify-write.

The larger (1M) writes are barely affected because they are mostly “full page/sector/stripe” writes that don’t require a read-modify-write cycle for every block (a full block is completely overwritten, so the old state doesn’t need to be read first to be preserved). This is why you’re seeing pretty close to theoretical max performance on those.

4k? It’s a crapshoot. Your latest write figures show pretty much exactly the sort of penalty I’d expect if read-modify-write were involved on the 4k writes, somewhere within one or more of the layers of abstraction between your OS filesystem and the cells on the SSD.
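The read-modify-write penalty can be sketched with a toy model (purely illustrative, not AMD’s actual RAID logic; the 64 KiB stripe size and the function name are my own assumptions):

```python
# Hypothetical model: if a write doesn't cover a full stripe, the controller
# must first READ the untouched part of the stripe, merge the new data in,
# then WRITE the whole stripe back.

def rmw_cost(write_size: int, stripe_size: int) -> tuple[int, int]:
    """Return (bytes_read, bytes_written) at the device level for one write."""
    if write_size % stripe_size == 0:
        # Full-stripe write: old data is fully replaced, no read needed.
        return 0, write_size
    # Partial-stripe write: round up to whole stripes and read them first.
    stripes = -(-write_size // stripe_size)  # ceiling division
    touched = stripes * stripe_size
    return touched, touched

STRIPE = 64 * 1024  # assumed 64 KiB stripe, a common RAID default

# A 1 MiB sequential write covers 16 full stripes: no read penalty.
print(rmw_cost(1024 * 1024, STRIPE))  # (0, 1048576)

# A 4 KiB write still drags a whole 64 KiB stripe through the controller:
# 16x the data read AND written for 4 KiB of payload.
print(rmw_cost(4 * 1024, STRIPE))     # (65536, 65536)
```

Under this model the 1M results stay near theoretical max while 4k results collapse, which matches the symptom described above.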

Different performance between different SSDs could be down to different SSD controller optimisations for different NAND, longevity strategy, different cache amounts, etc…

One more thing. The OS will hopefully do write caching (in RAM) if you tell it to, which masks this somewhat. This is ALSO why caching hardware RAID controllers exist, and why ZFS uses 20 (30? memory hazy) second transaction groups in RAM (to buffer lots of smaller writes into bigger chunks that don’t tank performance so badly). I’m not sure whether CrystalDiskMark deliberately invalidates/disables this OS-level write cache to measure “raw” disk performance.

So, maybe confirm whether your actual use case suffers this badly, or whether it’s just DiskMark… it’s unlikely you’re constantly doing 4k writes, unless you’re running some very specific niche application…
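The buffering idea can be illustrated with a small sketch (hypothetical, not ZFS or Windows cache code): coalescing many small writes in RAM turns thousands of device writes into a handful of large flushes that are friendly to full-stripe writing.

```python
# Toy write-coalescing model: application writes accumulate in a RAM buffer
# and are only flushed to the device once the buffer reaches a size limit,
# similar in spirit to ZFS transaction groups or a caching RAID controller.

def flush_count(write_sizes, buffer_limit):
    """Count device flushes when writes are accumulated up to buffer_limit bytes."""
    flushes, buffered = 0, 0
    for size in write_sizes:
        buffered += size
        if buffered >= buffer_limit:
            flushes += 1
            buffered = 0
    if buffered:
        flushes += 1  # final partial flush
    return flushes

writes = [4096] * 1000                   # 1000 small 4 KiB application writes

print(flush_count(writes, 1))            # unbuffered: 1000 device writes
print(flush_count(writes, 1024 * 1024))  # 1 MiB buffer: only 4 large flushes
```

Each of those 4 large flushes is a mostly full-stripe write, which is exactly the case shown above to avoid the read-modify-write penalty.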

I do not have the competence to argue against the points you’ve made due to a lack of experience with “serious real hardware RAID SSD solutions”.

However, I do not think these apply here; I believe it’s an AMD driver or chipset/firmware issue.

Reasons, all already mentioned, but here’s a cheat sheet so you don’t have to read everything again:

  1. My investigative journey began when I noticed that an X570 system that had its OS installed on an AMD SATA RAID1 with two Micron 5300 PROs 3.84 TB was brought to its knees due to 100 % disk activity when copying to it with a little over 200 MB/s over a 2.5 GbE NIC.

  2. I then did the first benchmark: SEQUENTIAL performance for a RAID1 of two modern SATA SSDs should be around 1000 MB/s read and 500 MB/s write; I only got 900/300 MB/s.

(I didn’t take screenshots then since I still thought there must have been an obvious misconfiguration on my end)

  3. I then switched to RAID10 with four of the mentioned Micron SSDs, where I only got around 1600 MB/s read and 250-900 MB/s write instead of 2000 and 1000 MB/s. (screenshots in post #1)

  4. The AMD RAID management software has two groups of cache settings: one for the logical array that is exposed to Windows, and one for the individual drives. Since all SSDs have full power-loss protection, I intended to leave the drive write cache enabled.

  5. Cycling through the various array cache settings doesn’t change much at all, which seems fishy.

  6. I can take an existing RAID array from the X570 system and just plug it into the B550 motherboard, and like magic the performance is suddenly as expected, even though the B550 system is much slower both single- and multi-core-wise (4750G vs. 5950X). Both systems use the same type and speed of memory (DDR4-3200 ECC; 64 GiB on B550, 128 GiB on X570).

Point 6 is why I am under the impression that the points you raised do not apply in this case and something at AMD’s end is to blame.
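For reference, the “expected” sequential figures quoted above follow from simple mirror/stripe arithmetic; a rough sketch assuming ~500 MB/s per SATA SSD (the function name is mine, and real arrays rarely hit these ideals exactly):

```python
# Naive RAID throughput expectations: reads can be striped across mirror
# members, while writes must land on every member of a mirror pair.

def raid_expectations(per_drive_mb_s: int, drives: int, level: str):
    """Return (expected_read, expected_write) in MB/s for a RAID level."""
    if level == "RAID1":   # two-drive mirror
        return 2 * per_drive_mb_s, per_drive_mb_s
    if level == "RAID10":  # striped mirrors, e.g. four drives = two pairs
        return drives * per_drive_mb_s, (drives // 2) * per_drive_mb_s
    raise ValueError(f"unsupported level: {level}")

# With ~500 MB/s per modern SATA SSD:
print(raid_expectations(500, 2, "RAID1"))   # (1000, 500) read/write MB/s
print(raid_expectations(500, 4, "RAID10"))  # (2000, 1000)
```

These are the 1000/500 and 2000/1000 MB/s baselines the measured 900/300 and 1600/250-900 MB/s results fall short of.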

Also:

I’m going to test the same setup, though with crappier SSDs, on Rocket Lake as well, with Intel RST.


Thanks! Please mention that TRIM doesn’t work on AMD’s side but does for the same RAID types on Intel’s, to apply a little pressure on AMD to improve.

:vulcan_salute:


Just a silly, silly question here.

If B550 solves your problem but X570 does not, why not simply go with the B550 option? And does the B550 board have the same manufacturer and BIOS as the X570 one? I don’t think your conclusions have enough evidence yet, but whatever, it’s not really me you need to convince in either case.


It’s not either/or, it’s both.

The X570 system is my personal daily driver; B550, with its lack of Gen4 support and smaller number of chipset PCIe lanes, couldn’t handle my intended configuration. The B550 system is going into the living room for HTPC purposes and light VM experimentation on the side.

Note: During the X570 testing no additional NVMe drives were connected (two Samsung PM1733 PCIe 4.0 x4 drives in a U.2 backplane would be the final configuration) and the Thunderbolt 3 AIC was idling without anything attached, to avoid using up the available bandwidth between the CPU and the X570 chipset.

So, still not sure about your use case here. You want a workstation, with a mirrored raid to protect against drive failures.

Why not simply buy two NVMe drives to RAID1 your OS drive instead of RAID10 SATA drives? Then you can just make your SATA drives part of a slower JBOD or RAID6 setup. The 2x RAID1 NVMe configuration will perform much better and be more reliable than the 4x RAID10 configuration no matter what.

It seems to me like you’re fixating on solving the wrong problem here, but could just be me being stupid. Regardless, I do agree X570 should be able to do what you ask of it.