Microsoft Tiered Storage Spaces (SSD + HDD) performance worse than HDD

Inspired by the recent StoreMI video our gracious leader posted, I wanted to try something similar.

I have two 120GB SSDs and two 6TB HDDs.

I don’t get it: performance of my tiered storage space on Server 2016 is worse than the HDDs’ in RAID 1. Much worse, in fact. Sequential writes in RAID 1 are 230 MB/s, whereas with the tiered storage space I get erratic speeds from zero to 34 MB/s, then a jump to 100 and back down to zero. Weirdly, when I run a CrystalDiskMark test it looks pretty good. But as soon as I begin a large file copy (robocopying my files back over from an older single disk), it sucks and speeds drop below the software HDD mirror.

Any help or suggestions would be appreciated. At the end of the day, it’s a NAS and the HDDs are fast enough for the network, but it really annoys me that I can’t seem to get it to work. Plus I want to use those SSDs.

Here are my PS commands after I get the drives installed:

New-StorageTier -StoragePoolFriendlyName "Mirror Pond" -FriendlyName SSD_Tier -MediaType SSD -ResiliencySettingName Mirror -NumberOfColumns 1 -FaultDomainAwareness PhysicalDisk

New-StorageTier -StoragePoolFriendlyName "Mirror Pond" -FriendlyName HDD_Tier -MediaType HDD -ResiliencySettingName Mirror -NumberOfColumns 1 -FaultDomainAwareness PhysicalDisk

$ssd_tier = Get-StorageTier -FriendlyName SSD_Tier

$hdd_tier = Get-StorageTier -FriendlyName HDD_Tier

New-VirtualDisk -StoragePoolFriendlyName "Mirror Pond" -FriendlyName Vault -StorageTiers @($ssd_tier, $hdd_tier) -StorageTierSizes @(22GB, 5588GB) -ResiliencySettingName Mirror -WriteCacheSize 95GB

Set-StoragePool -FriendlyName "Mirror Pond" -IsPowerProtected $True

Then I format the virtual disk to ReFS and restart.
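For reference, the formatting step is roughly this (the drive letter assignment and label are just examples, and I’m piping through the standard disk cmdlets; treat it as a sketch):

```powershell
# Rough sketch: initialize the tiered virtual disk and format it ReFS
Get-VirtualDisk -FriendlyName Vault | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "Vault"
```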



I think this is one of the better blogs on storage space tiering (it’s been a while since I played with it):

Basically it sounds like you might be saturating the write-back cache and the SSD tier, and then it’s writing directly to the HDD whilst trying to empty the cache. TBH I’ve always thought that storage space tiering is really about optimising read rather than write workloads.

The above blog links to several other articles. They are a little dated now, as the blog was written for Server 2012 R2 and there have been changes/optimisations in Server 2016 and Windows 10, but it should still hold up.

Good luck, and report back if you manage to understand the problem and resolve it, or at least explain it :slight_smile:

I had heard that too…
My real-world test is transferring a 47GB mkv file from another system running a single drive, both systems with dual NICs. I set my write cache to 95GB, so the entire mkv file should transfer into the write cache and saturate the link, or at minimum stress out that single drive.

Thanks for the suggestion though, anything else you can brainstorm with me?

I am out of ideas.

Wait a second…

“With Storage Spaces Direct, the Storage Spaces write-back cache should not be modified from its default behavior. For example, parameters such as -WriteCacheSize on the New-Volume cmdlet should not be used.”

Did I break it?

Maybe new commands should be:
Set-StoragePool -FriendlyName "Mirror Pond" -IsPowerProtected $True

New-StorageTier -StoragePoolFriendlyName "Mirror Pond" -FriendlyName SSD_Tier -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "Mirror Pond" -FriendlyName HDD_Tier -MediaType HDD -ResiliencySettingName Mirror

New-Volume -FriendlyName "Vault" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "Mirror Pond" -StorageTierFriendlyNames SSD_Tier, HDD_Tier -StorageTierSizes 118GB, 5588GB
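After creating it, I figure I can sanity-check what actually got built with something like this (just a readback sketch using the standard storage cmdlets):

```powershell
# Check the tier sizes that were actually allocated
Get-StorageTier | Select-Object FriendlyName, MediaType, @{n='SizeGB';e={$_.Size / 1GB}}

# Check the resiliency and write cache on the new virtual disk
Get-VirtualDisk -FriendlyName Vault |
    Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize
```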

Can’t wait to restart that transfer again. -sarcasm

But you are not using Storage Spaces Direct, are you?

Storage Spaces Direct is shared-nothing clustering.

If you are not using that, it’s probably best to find docs on regular storage spaces, as I know Storage Spaces Direct does have some limitations.

Also, the write cache is there to optimise random write I/O, so you should probably only set it to a few GB, not 95. Its default in Server 2012 R2 is only 1GB. Then set the SSD tier to be something much larger.

  • I think it might be that your performance sucks because you made the SSD tier too small and have a massive write-back cache that is barely used?

I assume that means you have battery-backed disk controllers? Otherwise that’s a potential risk to your data.

BTW, this was in the article I linked.

I really think you need to reconfigure it with a much larger SSD tier and leave the write cache at the default value, or something under 10GB.

EDIT: and this talks about random I/O to the write cache; your file transfer will be sequential, so it might not use the write cache at all!
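For illustration, something like this is what I mean (the tier sizes are just a guess for your 2×120GB SSDs, and I haven’t tested this exact layout; the key part is omitting -WriteCacheSize so it stays at the default):

```powershell
# Sketch only: much bigger SSD tier, write cache left at its default
# (omit -WriteCacheSize entirely rather than setting 95GB)
New-VirtualDisk -StoragePoolFriendlyName "Mirror Pond" -FriendlyName Vault `
    -StorageTiers @($ssd_tier, $hdd_tier) `
    -StorageTierSizes @(100GB, 5500GB) `
    -ResiliencySettingName Mirror
```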

Hope you figure this out; I want to know without having to repro the problem :smiley:

Just blew away my virtual disk and re-created it with mostly default settings as discussed…

Started my large mkv transfer aaaaand, it filled up RAM, then sucked.
Even if it were passing the SSD cache straight to the HDD tier, it should still transfer faster than bouncing between 30MB/s and 100MB/s and then back to zero for no observable reason.

So weird. Keeps me up at night.

I did have a mirror-accelerated parity setup before with an adjusted registry switch; I just removed it and restarted. We’ll see how it works, just woke up.
Rotation aggressiveness:
Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Policies -Name DataDestageSsdFillRatioThreshold -Value 75
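To check that registry tweak later, or to remove it and fall back to the default behaviour, the standard item-property cmdlets should do it (sketch, using the same path and value name as above):

```powershell
# Read back the current destage threshold
Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Policies -Name DataDestageSsdFillRatioThreshold

# Remove it to revert to default behaviour
Remove-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Policies -Name DataDestageSsdFillRatioThreshold
```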

It sucked too, which is why I dropped it and went for bigger drives.

Still sucks.
I can’t anymore. I have to stop.
I initially thought it would have been awesome as an accelerated-parity NAS, if it worked.

Maybe we could get a follow-up video comparing a few storage-tiering solutions between Linux, Microsoft, and whatever third-party solution they prefer.

There is something wrong with one of my new SSDs.


Ah, so you’ve diagnosed that as the problem, and it’s not Storage Spaces? I’ve got to say, I did find FreeNAS and ZFS a better/simpler solution for the last home NAS I set up.

I would rather use ZFS on FreeNAS, but I have to have Hyper-V. So I’m trying to be frugal and wrap everything into one box.

Thanks for looking at it with me. Stay tuned, I’ll replace the SSDs when I get home in a few hours.

I set up a similar system recently after watching Wendell’s video, but only using a single 500GB SSD and a single 4TB hard drive. I kept the write-back cache at default, which I believe is 10GB. I am getting very good results using the drive, but it did take a few days to settle. Initially it did feel slow, and sequential performance of large files was only around 80-90 MB/s, but now I am getting over 300MB/s on writes.

Even though I feel a little silly for not realizing it sooner: after swapping them out with name-brand SSDs, the speeds jumped!

Even after the cache is full, the enterprise-class HDDs kick in and maintain the larger transfer.
Finally! I’ve been losing sleep over this for months. Let’s see if it holds up.

Here is my command amendment:
New-VirtualDisk -StoragePoolFriendlyName "Mirror Pond" -FriendlyName Vault -StorageTiers @($ssd_tier, $hdd_tier) -StorageTierSizes @(131GB, 5588GB) -ResiliencySettingName Mirror -WriteCacheSize 100GB
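One more note for anyone following along: as I understand it, tier optimization (moving hot data onto the SSD tier) runs as a scheduled task called "Storage Tiers Optimization", but you can also kick it off manually with defrag’s tier-optimization switch (V: is an example drive letter):

```powershell
# Manually trigger storage tier optimization on the tiered volume
defrag V: /G
```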

Thanks guys!


MS Storage Spaces requires redundancy, so single SSDs aren’t an option, unfortunately.

Ask me how I found that out.

Not sure what I have done then, as I have had it running as a simple tiered space with a single SSD and a single hard drive. I have seen quite a few guides that also state that you need at least 2 SSDs, so perhaps they changed something in later builds of Windows. I got the PowerShell commands from this guide.

It will perhaps mean more to you than me, but my setup does seem to work fine and I am getting very good performance from it.

"If you have just 1 SSD and 1 HDD run this command

Get-StoragePool Pool | Set-ResiliencySetting -Name Simple -NumberOfColumnsDefault 1"
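Roughly, the whole single-drive setup ends up looking like this (the pool/tier names and sizes here are mine, not from the guide, so treat it as a sketch):

```powershell
# Single SSD + single HDD tiered space, Simple resiliency (no redundancy)
New-StoragePool -FriendlyName Pool -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $True)

# Allow single-drive (one-column) simple spaces
Get-StoragePool Pool | Set-ResiliencySetting -Name Simple -NumberOfColumnsDefault 1

New-StorageTier -StoragePoolFriendlyName Pool -FriendlyName SSD_Tier -MediaType SSD
New-StorageTier -StoragePoolFriendlyName Pool -FriendlyName HDD_Tier -MediaType HDD

New-Volume -StoragePoolFriendlyName Pool -FriendlyName Tiered -FileSystem NTFS `
    -StorageTierFriendlyNames SSD_Tier, HDD_Tier -StorageTierSizes 400GB, 3500GB
```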

No Shit!

I’m using Server 2016, and I was testing it at work on 2012 R2; neither would do single drives. Pretty cool that Win10 will, though. I’ll have to remember that. Thanks for the tip!

No bother. I am pretty sure initially you could only do this with 2 SSDs and 2 HDDs, as all the older guides refer to this regarding the Server 2012 storage spaces. They must have changed it in later builds to allow it.

Just created a simple storage space to accelerate my gaming drive on Windows 10! I used the one working drive from the pair I had for my server.

Thanks again for the awesome tip!!!

New-VirtualDisk -StoragePoolFriendlyName "storage pool" -FriendlyName Accelerated -StorageTiers @($ssd_tier, $hdd_tier) -StorageTierSizes @(100GB, 2792GB) -ResiliencySettingName simple -WriteCacheSize 18GB
