RAID5 low write speed and SSD cache

This is my first experience with RAID. I just want to create reliable storage on my home server/gaming PC for photos, movie streaming, and probably CCTV storage (I want to try Blue Iris).
Most of the time I'm using my MacBook, so I think this PC should be fine for everything, as I won't be gaming at the same time as any other tasks (except CCTV recording).

I've got 3 Seagate IronWolf 4TB HDDs and created a RAID 5 array (in the BIOS).
I use an i5-12500 with an ASRock Z690 Steel Legend and Windows 11.
Read speed is around 360 MB/s, but write speed varies between 10 and 60 MB/s.
So I got the idea of adding an SSD cache to speed it up, but found out that Intel RST no longer supports SSD caching, having dropped it as obsolete technology. Also, I can't use older RST versions, as they don't support my chipset.
Is this write speed normal for RAID 5? Any options to increase it to at least a stable 80 MB/s?
Or should I just buy 1 more HDD and do RAID 10 instead?
Very much hope to hear some advice, thank you in advance!

Write speed is the Achilles' heel of parity RAID like RAID 5.

You should be able to get one disk's worth of write performance, which should be ~100-150 MB/s sequential in your case.

I prefer striped mirrors (RAID 10) over parity RAID. With 4 disks you get double the write speed and 4 disks' worth of read speed, so ~200-300 MB/s sequential writes. And even with random reads and writes you get double the IOPS as well, though HDDs are always bad at those.

thank you, that speed is much better
How do I figure out why it's so slow in my case? I'm trying to copy files from an SSD (either NVMe or SATA) to the RAID 5 array.

The smaller the files, the slower it gets. Big files (like a gig or so) should run at max sequential write speed. Check the drive specs for what to expect in your case; it should be 100 MB/s or so.

And BIOS RAID, file transfer software, whatever, may decrease this further; that's software overhead. CrystalDiskMark will tell you what your drives can do.

15 GB files are copying at 25-30 MB/s, which doesn't look normal at all.
CDM shows 200 MB/s for Q8T1 and 32 MB/s for Q1T1.
Normal write speed for a single IronWolf is 150-200 MB/s.

Please don’t. First, hardware RAID is dead. Wendell says so!

Second: Windows 11 is not a good platform for RAID anyway. It'll work, but the OS has problems of its own; telemetry (calling home with your unique data) is just one of them.

I'd suggest getting an older (read as: ancient) PC case, preferably with a working PSU that adheres to the ATX standards (200W suffices, really). Next, purchase an AliExpress Erying mATX board, add sufficient RAM (32, 64, or even 128GB), a 1TB NVMe drive (partially for cache), and the 3 RAID drives you have, with TrueNAS or Proxmox as the OS. You'll end up with a (fairly) cheap NAS that won't lose your data when (not if!) the mainboard goes AWOL.

This guy has something to say about these Erying mATX boards:

Just don't be tempted by those "deals" offered on AliExpress: the mainboard on its own is quite OK price-wise, but the bundles with RAM and/or an NVMe/SATA drive are essentially rip-offs, as these are much cheaper to get separately.

HTH!

The RAID 5 write penalty is 4, so theoretically you could get (3 drives × 175 MB/s per drive) / 4 = 131.25 MB/s of write throughput… but BIOS software RAID is a less-than-ideal implementation and may fall short of this figure.

Out of curiosity, what stripe size did you choose when making the array? Increasing the stripe size may help reduce the CPU cycles the software RAID burns and increase the transfer rate.

Another thing to consider is whether Windows Defender is scanning all the files you are transferring and slowing everything down. You could check Task Manager to see if "Antimalware Service Executable" is being a resource hog during transfers.
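
If Defender does turn out to be the culprit, here's a minimal sketch of checking and excluding the array with the built-in Defender cmdlets. Run it from an elevated PowerShell; D:\ is just a placeholder for wherever your array mounts:

  • Get-MpComputerStatus | Select-Object RealTimeProtectionEnabled
    (confirms whether real-time protection is currently on)

  • Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
    (lists any existing scan exclusions)

  • Add-MpPreference -ExclusionPath "D:\"
    (excludes the array volume from real-time scanning; keep in mind this trades safety for speed)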

I remember the last time I was doing RAID 5 there was something about stripe size and number of drives. I think maybe along the lines of stripe size * columns = allocation unit size? You need to match things up so that they are properly divisible by your drive number, and write speed instantly goes 3-4x higher than if you don't line everything up properly for the drives. I'll see if I have notes saved on my Plex server when I get home.

But instead of BIOS RAID you should be doing this in Storage Spaces, which you can set up through PowerShell to get to the options you need.

So what's his solution? I've tried to watch this video a few times, but it's too advanced for me. I didn't find an answer there.

I also have one more Dell PC with an i7 4700; I was thinking of using it for the server.
But can't I just use Proxmox on my main i5-12500 and run Win11 and TrueNAS there, so I can have it all in one box?
Also, it seems like it will be tricky to run a Jellyfin client on TrueNAS.

damn, this is sick, I wish I'd seen it earlier

Task Manager shows 0% CPU and disk usage for "Antimalware Service Executable" during transfers

The default, which was 128 KB.

So I will have to back up the data and create the RAID again with a different stripe size, correct?

Somehow, when I did my research about RAID and Storage Spaces, I found a video where BIOS RAID was faster than Storage Spaces. But that might be wrong. Also, when I tried to create a parity space in Storage Spaces, I got 10 TB of space instead of the 8 TB it's supposed to be for RAID 5; how is that even possible?
So you recommend switching to Storage Spaces and parity?

These are my notes from making arrays for my Plex server:

  • Interleave must be matched to the format allocation unit size. So if there are 5 disks in parity-1, that means 4 drives are getting data written to them. 256 KB interleave * 4 = 1024 KB, so format the volume at a 1M allocation unit. Or 512 KB * 4 = 2048 KB, so format at 2M. These get very good write performance (for RAID 5, anyway); see the sketch after these notes.
  • If you have 6 disks in parity-1, you will never have good write performance, since you cannot match data stripes to written units in a direct ratio. Always use 3 drives or 5 drives in a parity-1 array (so 2 or 4 drives written to), or 10 drives in parity-2 (8 drives written to). Anything else has terrible write speed.
  • If it errors out and says something about resiliency or columns when trying to make the disk, this is normally a -Size issue. You need to take the parity disk into account.
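
To make that rule concrete for the 3-drive array in this thread (2 data columns): 256 KB interleave * 2 = 512 KB, so you would format at a 512 KB allocation unit. A minimal sketch of the format step, assuming the virtual disk has already been initialized and given the (placeholder) drive letter D:

  • Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 524288
    (524288 bytes = 512 KB; match this to interleave * data columns for your drive count)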

And these are the PowerShell commands I used to make my last array:

  • $PhysicalDisks = (Get-PhysicalDisk -CanPool $True)

  • New-StoragePool -FriendlyName "Movie1array" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks -ResiliencySettingNameDefault Parity -ProvisioningTypeDefault Fixed -LogicalSectorSizeDefault 4096 -WriteCacheSizeDefault 1GB
    (note that you may want to set -LogicalSectorSizeDefault to 512 and test the performance difference if you have 512-byte-sector drives instead of 4Kn drives)

  • New-VirtualDisk -StoragePoolFriendlyName Movie1array -FriendlyName Movie1_Array -ResiliencySettingName Parity -FaultDomainAwareness PhysicalDisk -PhysicalDiskRedundancy 1 -NumberOfColumns 5 -Interleave 256KB -WriteCacheSize 5GB -ProvisioningType fixed -Size 42.5TB
    (you would want to use 3 columns for 3 drives instead of 5, and the -Size will also change based on the drives being used)
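
For the three 4TB IronWolfs in this thread, a hedged adaptation might look like the line below. The disk name is a placeholder, and I'm using -UseMaximumSize to sidestep the -Size math entirely:

  • New-VirtualDisk -StoragePoolFriendlyName "Movie1array" -FriendlyName "HomeArray" -ResiliencySettingName Parity -FaultDomainAwareness PhysicalDisk -PhysicalDiskRedundancy 1 -NumberOfColumns 3 -Interleave 256KB -WriteCacheSize 5GB -ProvisioningType Fixed -UseMaximumSize
    (3 columns means 2 data columns, so 256 KB * 2 = 512 KB is the allocation unit to format with)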

Alternatively, you can use commands like this instead and manually change the friendly name of each drive you want to be in your array to the same thing and then group them together:

  • Get-PhysicalDisk

  • Set-PhysicalDisk -FriendlyName "existing drive name" -NewFriendlyName "Array1HardDrive"

  • $PhysicalDisks = (Get-PhysicalDisk -FriendlyName "Array1HardDrive")

I would say to use Storage Spaces over BIOS RAID because you can plug the drives into any Windows PC and they will work back in the array, no fiddling needed. This is good for upgrading PCs and such down the line; BIOS RAID won't do that. Speed-wise, maybe BIOS is better at some RAID types? I don't know. But I do know that, with the right configuration, Storage Spaces can extract the full performance of a RAID 5 array.
I would also format this with NTFS; don't try out the newer ReFS file system. It does work well when it works, but it isn't ready yet, and Microsoft keeps breaking it from time to time (usually once a year or so). They quickly release a patch when they break it for Server OSes, but client OS users are SOL on getting their data back when Microsoft nukes it like that.

ok, thank you!
I will copy my data and try Storage Spaces again.
I was really confused when it showed me 10 TB available for 3 HDDs of 4 TB each in parity; that's not possible, right?

No, that shouldn't be possible. Did you use the wizard to make the pool, or PowerShell commands? I would suspect the pool simply wasn't created like you intended.
Or perhaps you were reading it one way and Storage Spaces was trying to say something else. For example, if you make a parity pool of three 4TB drives, it might say it is an 11TB pool or whatever, but the actual virtual disk wouldn't be that size; it is simply telling you the pool's total size, even though some of that is used for parity space.

I was using the wizard back then. I only found out about the PowerShell commands for pools recently.

After you make the pool, run CrystalDiskMark on it and see what performance you get. You can play around with interleave and allocation unit size and see what works best.

Once you know the array is functioning properly on the host machine, you can transfer data to it over the network. It should get the same performance. If it's a lot slower over the network, then something about the SMB transfer is bottlenecking. You can always try TeraCopy or FastCopy and see if they improve network transfer performance as well.
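
If you do see a network gap, a quick diagnostic sketch using the built-in SMB cmdlets (run on the client machine; this only inspects the session, it changes nothing):

  • Get-SmbConnection
    (shows the negotiated SMB dialect per share; you want 3.x)

  • Get-SmbMultichannelConnection
    (shows whether SMB Multichannel is spreading the transfer across NICs/RSS queues)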

ok, thank you!
I'm making the pool on my main computer, so there won't be any losses caused by the network.
It's not such a bad idea to use the main computer for storage, right?

No issue having an array on your main PC. I actually use 3 Storage Spaces arrays on mine: two as data backups, and one for gaming, pooling a pair of 2TB NVMe drives together.

Sorry, one more detail:
Should I turn off the VMD controller in the BIOS for Storage Spaces, or keep it on?

Not sure, I don't recall ever messing with a setting like that.