Storage Spaces - Stay or move to a new solution?

I’m at a crossroads with my Windows Storage Spaces parity volume. I’ve been using this solution, mostly as a media vault, since 2016 with few issues aside from slow writes. A few years ago I upgraded to Server 2019 and new hardware, and read up on how to properly set up a parity storage space in PowerShell. That seemed to resolve the write issue for a while, but for some reason it is back.


Current Server Hardware Configuration

Intel NUC 11 NUC11PAHi5
1TB internal NVMe SSD (Server 2019 OS → 2025 soon)
64GB 3200MHz RAM
OWC ThunderBay 8 DAS over Thunderbolt
4x 6TB WD Red Plus
4x 14TB Seagate Exos X16

To note, I am in the middle of upgrading my 8 HDDs from 6TB WD Red Plus to 14TB Seagate Exos X16. So far 4 have been replaced.

I have halted the HDD upgrade while I re-evaluate my parity Storage Space, so that if need be I can copy my 37TB of data over to the unused drives and rebuild the array. I wanted to double-check my SS configuration, so I went back to storagespaceswarstories to verify the settings on the current volume storing the 37TB of data.

Years ago in PowerShell I configured 5 columns across the 8 HDDs with a 16KB interleave, then formatted the volume with ReFS at a 64K AUS. There is an oddity I noticed when I checked these settings (see the Get-VirtualDisk output below).
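For reference, that layout would have come from commands roughly along these lines (a sketch only; the pool name, thin-provisioned size, and drive-letter handling are placeholders rather than my exact history):

# Sketch only – pool name and size are placeholders, not my exact history
New-VirtualDisk -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "Parity_Int16KB_5Col_THIN" `
    -ResiliencySettingName Parity -NumberOfColumns 5 -Interleave 16KB `
    -ProvisioningType Thin -Size 58TB

# Partition it and format with a 64K ReFS allocation unit size
Get-VirtualDisk -FriendlyName "Parity_Int16KB_5Col_THIN" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536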

PS C:\Users\administrator.COMPSMITH> Get-VirtualDisk -friendlyname "Parity_Int16KB_5Col_THIN" | fl

ObjectId : {1}\\COMPSMITHSERVER\root/Microsoft/Windows/Storage/Providers_v2\SPACES_VirtualDisk.ObjectId="{187446ee-3c29-11e8-8364-806e6f6e6963}:VD:{43d963e7-19a0-49d4-acf4-40be8cc8fe7d}{1558397e-f97f-4b6c-ae35-d43546e731ee}"
PassThroughClass :
PassThroughIds :
PassThroughNamespace :
PassThroughServer :
UniqueId : 7E3958157FF96C4BAE35D43546E731EE
Access : Read/Write
AllocatedSize : 44159779995648
AllocationUnitSize : 268435456
ColumnIsolation : PhysicalDisk
DetachedReason : None
FaultDomainAwareness : PhysicalDisk
FootprintOnPool : 55201872478208
FriendlyName : Parity_Int16KB_5Col_THIN
HealthStatus : Healthy
Interleave : 16384
IsDeduplicationEnabled : False
IsEnclosureAware : False
IsManualAttach : False
IsSnapshot : False
IsTiered : False
LogicalSectorSize : 512
MediaType : Unspecified
Name :
NameFormat :
NumberOfAvailableCopies :
NumberOfColumns : 5
NumberOfDataCopies : 1
NumberOfGroups : 1
OperationalStatus : OK
OtherOperationalStatusDescription :
OtherUsageDescription :
ParityLayout : Rotated Parity
PhysicalDiskRedundancy : 1
PhysicalSectorSize : 4096
ProvisioningType : Thin
ReadCacheSize : 0
RequestNoSinglePointOfFailure : False
ResiliencySettingName : Parity
Size : 63771674411008
UniqueIdFormat : Vendor Specific
UniqueIdFormatDescription :
Usage : Data
WriteCacheSize : 33554432
PSComputerName :

This shows an AllocationUnitSize of 268435456. But diskpart shows 64K:

DISKPART> filesystems
Current File System
Type : ReFS
Allocation Unit Size : 64K

I am unsure why these two values are different, so if someone can explain that, and whether this volume layout is sound, it would be appreciated. My hope is that if I stick with SS and finish the HDD and OS upgrades, performance will return to normal.

I’m trying to determine why this write slowdown is occurring. Could it be that the AUS is not lining up? Could it be the two different drive types? There are no SMART errors on any of them. Could it be an issue with Server 2019 SS that an OS upgrade would fix? I also saw a comment posted here that a freshly formatted ReFS volume will write at full speed, but as soon as one file is deleted write performance tanks, so I have no clue what is going on.

Preferably I would like to avoid copying everything off and destroying the volume, and simply continue upgrading the HDDs, but if I have to, I have been looking at alternatives.

Potential alternative solutions are limited because I want to keep Windows Server, as it hosts other roles. I have been reading up on zfs-windows, which looks promising but is still in beta. I have also looked into passing the PCI device for the OWC ThunderBay 8 DAS through to a Hyper-V VM and installing TrueNAS. I’m not really interested in StableBit DrivePool with SnapRAID or other solutions unless I find something convincing that puts them over the top of my potential alternatives.
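If I went the passthrough route, my understanding is it would be Discrete Device Assignment along these lines (a rough sketch only; the VM name, device filter, and location path are placeholders, and I have not verified the NUC’s Thunderbolt controller actually supports DDA):

# Find the location path of the DAS controller (placeholder filter; pick the right device)
$dev  = Get-PnpDevice -PresentOnly | Where-Object FriendlyName -like "*Thunderbolt*" | Select-Object -First 1
$path = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Dismount it from the host and hand it to the VM (VM name is a placeholder)
Dismount-VMHostAssignableDevice -LocationPath $path -Force
Add-VMAssignableDevice -LocationPath $path -VMName "TrueNAS"

# MMIO space usually has to be reserved on the VM for passthrough devices
Set-VM -VMName "TrueNAS" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB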

That being said, if I destroy the volume and SS after copying the data off, I will only be able to build the new array on 4 HDDs, and would then need to expand it onto the last 4 HDDs after the data is copied back. From my research, ZFS now has the ability to extend a RAIDZ vdev one disk at a time. This is available in the latest TrueNAS SCALE and, I assume, in the OpenZFS build used by zfs-windows.
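If I understand the feature correctly, growing a 4-disk raidz1 one drive at a time would look roughly like this (pool and device names are placeholders; TrueNAS and zfs-windows name disks differently, and I have not confirmed zfs-windows exposes this yet):

# Initial 4-disk raidz1 after the data is copied off (placeholder names)
zpool create tank raidz1 disk0 disk1 disk2 disk3

# Once the data is copied back, attach the remaining drives one at a time
# (RAIDZ expansion, OpenZFS 2.3+); let each expansion finish before the next
zpool attach tank raidz1-0 disk4
zpool status tank
zpool attach tank raidz1-0 disk5
# ...repeat for the remaining drives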

Any help with this will be greatly appreciated, as I am at a standstill while I determine my path forward. Thank you.

The main advantage of a Storage Spaces pool is that you can pool disks of any type or size, with hot/cold data tiering and redundancy as per-Storage-Space options.

OpenZFS on Windows 2.3.1rc6 from today is still beta, but it is a huge step toward release state. It is already worth evaluating (ideally with a backup kept on SS).

I am aware of the advantages and disadvantages of both. I have been using Storage Spaces parity for years.

The issue is that large-write performance has tanked, which brings the whole array to a crawl until the write completes.

Since I still have 4x 14TB drives not swapped in yet, I’m most likely going to copy the data off and rebuild. If I go with SS I would try to start with 5 drives for a 5-column layout, then expand to 8 drives. I’m just not sure what is wrong with the current layout. Maybe the 16KB interleave is too small, but with a 64KB ReFS AUS I thought that was right. If anyone has insight on how I should configure SS, I may stick with it.
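For the expand-to-8 step, my understanding is it would just be adding the freed-up drives back into the pool once the data is copied over (pool name is a placeholder):

# Placeholder pool name – add the remaining drives back after the data copy
Add-PhysicalDisk -StoragePoolFriendlyName "Pool01" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# My understanding: existing slabs are not restriped automatically; this rebalances them
Optimize-StoragePool -FriendlyName "Pool01"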

ZFS on Windows has my interest, and would be preferable if stable, since if I ever did want to move the OS to, say, TrueNAS, the ZFS pool should be importable. I thought that being at a release-candidate stage it should be stable. Is this not the case? Is there still the potential for data loss? Is performance bad?

IMO you’re a prime candidate for HexOS. Admittedly I’m not a fan, but I do see the value for folks who simply need a system that just works.

As stated, I need to stay on Windows Server for now. HexOS is essentially a reskinned, simplified fork of TrueNAS sold for a license fee. Hard pass.

Aren’t Exos X16 drives some of the least reliable Seagate drives of the past 10 years? Not to mention right now it seems like half the “new” Exos 16-20TB drives are actually Chia-farm castoffs already through half their life, with firmware flashed over them to zero everything out.

You should change your virtual disk to have 4K logical sectors, since that is the physical sector layout of your drives.

One is the actual pool allocation unit (slab size) and the other is the allocation unit size you formatted the partition with. Unfortunately 256MB is the smallest slab size SS can use.

I’m confused by this. I suppose this is somehow 256MB? It doesn’t read like that though; it reads more like 268MB.

I don’t remember all the ins and outs of setting up Storage Spaces parity and write alignment, so hopefully someone else can chime in here. But you want to align the number of columns and the interleave with the allocation unit size. I forget whether you want 4 columns or 5 when doing a 5-drive parity; I think 5? Storage Spaces then automatically writes data blocks to 4 of them and a parity block to the 5th. So basically, to get the best write performance in parity: data columns (4) * interleave should divide evenly into the pool’s allocation unit size, and you then format the volume with an AUS equal to data columns * interleave. In your case that would be a 64K interleave with the 256MB pool slab (268,435,456 bytes, the smallest size Storage Spaces goes to now?) and a 256K ReFS AUS, or a 256K interleave and a 1M ReFS AUS. Hopefully someone who remembers Storage Spaces better than I do can chime in on the correctness of this.

One way you can test the formatting and write speed is to configure it all in PowerShell, test writes with CrystalDiskMark, then delete it all, remake it with different settings, and test again. I’d do this by testing 4 columns first and then 5 columns and seeing which comes out faster, since I don’t remember anymore how columns and parity go together in Storage Spaces: whether parity is added on top of the column setting, or whether the parity block is subtracted from it :man_shrugging:
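Something along these lines would let you A/B the column counts before committing any data (a sketch only; the pool name, test size, and friendly names are placeholders, and ReFS only offers 4K or 64K clusters, so the AUS stays at 64K while only the 5-column case gives a matching 64K full stripe of 4 data columns x 16KB):

foreach ($cols in 4, 5) {
    # Create a small thin-provisioned test space with the column count under test
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityTest$cols" `
        -ResiliencySettingName Parity -NumberOfColumns $cols -Interleave 16KB `
        -ProvisioningType Thin -Size 1TB | Out-Null

    # Partition and format it with a 64K ReFS AUS
    $vol = Get-VirtualDisk -FriendlyName "ParityTest$cols" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536

    # Run CrystalDiskMark against the new drive letter, then tear it down
    Read-Host "Benchmark $($vol.DriveLetter): now, then press Enter to remove the test space"
    Remove-VirtualDisk -FriendlyName "ParityTest$cols" -Confirm:$false
}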


I got 8 of them from a server that was decommissioned. They were only in the server for 1 year.

The drives are presenting a 512-byte LogicalSectorSize. From what I’m reading I would need to convert them to 4K, but I don’t believe this is the cause of the poor parity SS writes.

SlotNumber FriendlyName          Manufacturer Model                PhysicalSectorSize LogicalSectorSize
---------- ------------          ------------ -----                ------------------ -----------------
           ST14000NM005G-2KG133               ST14000NM005G-2KG133               4096               512
           ST14000NM005G-2KG133               ST14000NM005G-2KG133               4096               512
           ST14000NM005G-2KG133               ST14000NM005G-2KG133               4096               512
           ST14000NM005G-2KG133               ST14000NM005G-2KG133               4096               512
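If I do end up rebuilding, my understanding is the 4K logical sector size has to be chosen when the pool is created, something like this (pool name and disk filter are placeholders, not something I’ve run yet):

# Placeholder disk filter – only the drives destined for the new pool
$disks = Get-PhysicalDisk -CanPool $true | Where-Object Model -like "ST14000NM005G*"

# LogicalSectorSizeDefault is a pool-level setting fixed at creation time
New-StoragePool -FriendlyName "Pool4Kn" -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks -LogicalSectorSizeDefault 4096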

I used the formula from here for a PhysicalDiskRedundancy of 1 …
$ntfs_aus = ($number_of_columns - 1) * $interleave (although I’m formatting with ReFS at a 64K AUS)
64K = (5 - 1) * 16K
16 * 4 = 64
This should line up, which is why I’m perplexed.
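A quick way to sanity-check this straight from the live objects (the drive letter below is a placeholder for my ReFS volume):

# Full stripe of the parity space: data columns x interleave
$vd = Get-VirtualDisk -FriendlyName "Parity_Int16KB_5Col_THIN"
($vd.NumberOfColumns - $vd.PhysicalDiskRedundancy) * $vd.Interleave / 1KB    # result in KB

# ReFS cluster size of the volume on top of it
fsutil fsinfo refsinfo D: | findstr /i "Cluster"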

I understand you are stuck on Windows Server for now; but given your storage requirements, I do believe you would be much better off splitting the system in two: a dedicated NAS (which can be built on a $500 AM4 setup, just make sure the motherboard supports ECC) and then your actual server.

I do believe that this setup will significantly reduce your administrative burden. Up to you though.
