Intel RST RAID 0 and REALLY slow response times >1500ms. Specs here

I think this is in the correct category.

Running:
Windows 8.1 Pro w/ Media Center
Intel RST 15.9.0.1015 (AFAIK latest with support for Win 8)

Hardware:
ASUS Z97 Deluxe WiFi/NFC
i7 4790k at stock, MCE on.
Noctua NH-U9S w/ 2x NF-A9
Sapphire RX 580 Nitro+
Storage: see image.
EVGA 80+ White 800W
Custom SFF case (~ 20 Litres), 2x NF-A8 intake, PSU exhaust.

Description:
The Task Manager screenshot above was taken not long after boot, just after I’d restored a Chrome session with a bunch of tabs. Regardless, latencies over 1 second are common. Between POST and the Windows boot menu where I choose between 8.1 and 10, it takes ~2 minutes of the spinning circles. This is considerably slower than when I was running just a single 250 GB drive; see the bottom paragraph.

I should add that when there’s little IO activity I have seen latencies as low as 0.1 ms - it’s only under significant amounts of IO that the response times blow up, which is exactly the kind of workload RAID 0 is supposed to handle roughly twice as well as a single drive.

My thoughts and potential diagnoses, in order of what I think is most likely to fix the issue:
(1) The stripe size is sub-optimal (though I followed what appeared to be Intel’s recommended configuration) - a quick way to check the current stripe size is sketched just below this list
(2) I’m running in an SFF case with not-brilliant airflow, so the PCH could be overheating and throttling
(3) A reinstall would fix it, i.e. the problem stems from the way I migrated my install onto the RAID (see the migration process below)
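
For (1): the Intel RST application reports the stripe (“strip”) size under the volume’s properties, and from a live Linux session something like the following should show the same thing. This is just a sketch - the md device name is whatever node the RST volume happens to come up as on my system:

cat /proc/mdstat                  # the RST (IMSM) volume typically appears as something like /dev/md126
sudo mdadm --detail /dev/md126    # "Chunk Size" here is the RAID 0 stripe size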

Migration process:
I created the RAID from the two SanDisks inside Windows 8.1 while still running on my single Samsung 850 Evo 250 GB (I’d done a bunch of upgrades across multiple machines and ended up with 2x 120 GB drives I figured I’d RAID 0 in its place). I then booted a live Ubuntu, installed mdadm, and used gparted to clone all partitions across, though I had to move the start of my Linux home partition forward because I was migrating from a 250 GB to a 240 GB volume.
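
Roughly, the live-Ubuntu part looked like this (from memory, so treat it as a sketch - device names are examples and will differ):

sudo apt-get install mdadm        # needed so the live session can assemble the RST (IMSM) volume
sudo mdadm --assemble --scan      # brings up the RST container and the RAID 0 volume
cat /proc/mdstat                  # confirm the volume (e.g. /dev/md126) is running
sudo gparted /dev/sda /dev/md126  # copy/paste the partitions from the 850 Evo onto the RAID volume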

The above text is duplicated in the imgur post.

Forgot to explicitly ask a question: what could be causing such slow response times, and what can be done to fix them?

If they’re non-NAS drives, be aware that they may be getting dropped from the array by Intel RST periodically.

I’ve no experience with this on SSDs, but I have had that specific issue with a pair of WD Blacks from a few years ago. Thought the drives were fucked, but no - it was down to using non-RAID-friendly (read: deliberately crippled anti-RAID firmware) drives with Intel RST.

Are all power management features on the drives turned off so they are not getting powered down?
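
A quick way to sanity-check the Windows side is to dump the current power plan’s disk settings (these are the standard powercfg aliases; the AHCI link power management option is hidden by default, so you may only see the idle timeout):

powercfg /q SCHEME_CURRENT SUB_DISK

“Turn off hard disk after” should be 0 (never); you can also change it in the Power Options advanced settings.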

edit:
Also: your disks are quite full. You’ll be hammering the absolute shit out of that last bit of NAND and the drives may/WILL be attempting to wear level (i.e., shift blocks around like crazy in the background in firmware) while you’re doing this. This alone could be impacting your response time.

The activity you’re sending the drives won’t be the only workload they’re trying to do… the controller will be going nuts. With plenty of free space, the wear would be spread more evenly across the drive without the mad shuffling that will need to happen when you’ve only got 3GB free on each drive.

Also, the OS-level SSD drivers may not be able to tell the drives what to do with regard to TRIM, etc., because the drives are presented to the OS through Intel RST. That will only exacerbate the low-free-space problem - without TRIM able to do its thing, the drives have to do a lot more work to allocate free blocks.
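
It’s easy enough to check what Windows thinks, at least - from an admin command prompt (0 means delete notifications / TRIM are enabled, 1 means disabled); whether RST actually passes the commands down to the member drives is another question:

fsutil behavior query DisableDeleteNotify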

TLDR: I would not recommend RAIDing SSDs together with Intel RST. You also need to run your drives with more headroom below max capacity to give them some working space. But that’s just me…

All power management features are off, so it’s not going to be that.

They’re just 2x 120 GB SanDisk Ultra II SSDs, so it’s possible it’s a firmware thing, but like you said, my partitions are quite full.

Regarding my disks being quite full - I guess that’ll also be doing terrible things to fragmentation, causing more random and fewer sequential reads/writes. Noted.

Intel RST does support TRIM, and according to Windows Drive Optimisation it does successfully optimise/TRIM, so I don’t imagine it’s that. Then again, I don’t know how TRIM works with respect to multiple partitions, i.e. whether it operates at the partition or the volume level.
My partitions are:
300 MiB boot partition
100 MiB EFI partition
82.23 GiB Windows 8.1 incl. 12 GB pagefile (8% free, NTFS compression on) C:
62.50 GiB Windows 10 incl. 8 GB pagefile (43% free, NTFS compression on) G:
31.51 GiB Linux swap
40 GiB Linux BTRFS
6.81 GiB Linux EXT4 home

For some reason, it didn’t occur to me to consolidate the Win 8.1 and Win 10 pagefiles - I had a 12 GB pagefile on (and for) the Win 8.1 partition, and an 8 GB one on (and for) the Win 10 partition. As I have so much free space on the Win 10 partition, I’ve now set things up so that both 8.1 and 10 use a single 20 GB pagefile there, i.e. 8.1 now has 24% free (19 GiB) and 10 has 22% free (14 GiB). I’ll see if I still get these high latencies and report back.
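
For reference, this is roughly the command-line way to pin a fixed-size pagefile to a specific drive; I actually did it through System Properties, the drive letter and sizes are just my values, and the wmic syntax is from memory, so treat it as a sketch:

wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="G:\\pagefile.sys" set InitialSize=20480,MaximumSize=20480

(20480 MB being the fixed 20 GB size; a reboot is needed for it to take effect.)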

Side note - from experimentation, putting part of your pagefile on an HDD and part on an SSD just slows the entire system down; unlike in Linux, I’m not aware of any way in Windows to prioritise certain locations over others for the swap/pagefile, so that’s never going to work well. If there were a way to share the pagefile.sys file and the swap partition, that would nearly halve the amount of reserved-but-unused storage… I know Linux can use a file as well as a partition for swap, so maybe I could format the current swap partition to NTFS, put a 32 GB pagefile.sys in there, and tell Linux that that’s its swap. But this is irrelevant for now, as I intend on doing hardware passthrough in the near future.
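
(For reference, the normal swap-file setup on a native Linux filesystem looks something like this - sizes and paths are examples, and a swap file on btrfs needs extra care, so this is just a sketch of the general idea rather than the NTFS-sharing scheme above:)

sudo fallocate -l 32G /swapfile    # on ext4; btrfs needs nocow / a newer kernel for swap files
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it persistent across boots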

I would love to upgrade to something like a single 500 GB MX500 for ~£60, but alas, I don’t have the money to spare just now.