Windows Storage Space Problems

TLDR: I am creating a storage space of 5x 10TB drives to function as a DAS in single-redundancy parity. The storage space will mostly store large video files, and I want to set the allocation unit size larger than 4k to speed up writes. Problem is, I'm getting abysmal write speeds on drives that can each sustain ~250MB/s writes. Write performance quickly tanks below 100MB/s and stalls completely when transferring either a folder of mixed files or just a few large files.

Basic process is as follows:

1: Create a 5-drive pool using the GUI

2: Use PowerShell to create the storage space with an interleave of 128k across 5 columns, per Interleave = AUS/(Columns-1)

3: Format the volume in Disk Management as NTFS with a 512k AUS (4x the 128k interleave); a rough PowerShell sketch of steps 2 and 3 follows below
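
For clarity, here's roughly what I mean by steps 2 and 3 in PowerShell. The pool and space names are just placeholders, and the parameters mirror the values above rather than the exact commands I ran:

# Step 2: parity space, 5 columns, 128k interleave ("Pool1" / "StorageSpace1" are placeholder names)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "StorageSpace1" -ResiliencySettingName Parity -NumberOfColumns 5 -Interleave 131072 -ProvisioningType Fixed -UseMaximumSize

# Step 3: initialize, partition, and format as NTFS with a 512k cluster size
Get-VirtualDisk -FriendlyName "StorageSpace1" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -AllocationUnitSize 524288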

Get-VirtualDisk … | fl

Shows "AllocationUnitSize" as 1073741824 (which I believe is 1GB???). Why is this? I definitely didn't set that.
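
For reference, this is the sort of query I'm running to see that value (the friendly name is a placeholder):

# Interleave and NumberOfColumns match what I set; AllocationUnitSize is the value I don't understand
Get-VirtualDisk -FriendlyName "StorageSpace1" | Format-List FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, AllocationUnitSize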

fsutil fsinfo ntfsinfo

shows "Bytes Per Cluster" as 524288 (is this the 512k AUS I set?)

however, "Bytes Per Sector" shows 4096
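
For reference, the exact check, with D: standing in for whatever letter the volume got:

# "Bytes Per Cluster" is the NTFS allocation unit size; "Bytes Per Sector" comes from the underlying disks
fsutil fsinfo ntfsinfo D: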

I also tried erasing the volume in Disk Management and formatting in PowerShell using the following:

Get-VirtualDisk -FriendlyName … | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -AllocationUnitSize 524288

Assume I'm a total idiot who doesn't know what he's doing; this is my first storage pool and I'm trying to figure it out. Should "Bytes Per Sector" be reading 4096? Should "AllocationUnitSize" be reading 1GB???

I'm hoping for somewhere in the ballpark of 1GB/s writes on this storage pool; in theory the 4 data columns (4 x ~250MB/s) should be capable of it, if the parity calculations don't take me out to lunch. Any help is appreciated.

What does your fsutil fsinfo ntfsinfo output look like?
It should look something like this (minus the Bytes Per Cluster) if you're running a 512e HDD:
[screenshot: example fsutil fsinfo ntfsinfo output from a 512e HDD]

Also, what processor is this running on?
I've got a 4-column parity storage space, and once the RAM cache fills up, write speeds fall to ~75MB/s simply because the CPU is the bottleneck calculating parity.

Thanks for the reply. I've tried various things since the original post. I was originally following a guide on wasteofaserver (can't share the link, but it can be googled easily) that said the interleave should be 1/2 of the AUS.

However, upon reading the comments, I learned that the formula should be Interleave = AUS/(Columns-1), so for a 512k AUS and 5 columns the interleave should be 128k, not 256k. Who knew?
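
Sanity-checking the arithmetic on that formula in PowerShell:

# Interleave = AUS / (Columns - 1); one column's worth of each stripe goes to parity
$aus = 512KB          # 524288 bytes
$columns = 5
$aus / ($columns - 1) # 131072 bytes = 128k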


Anyway, attached are the fsutil and fl screenshots showing a 131072 interleave and 524288 bytes per cluster, which is the latest way I formatted it.

The PC is running a Ryzen 9 4900HS; I also attached the enclosure to my desktop with a 13600K and it was no better. On the Ryzen, usage never gets above 20% overall and no core spikes above 30%, so the CPU doesn't seem like a bottleneck. RAM also never gets above 50%.

Something I'm noticing while watching write-heavy file transfers: write throughput on the storage space will drop to zero, read activity from the source drive will drop to zero, a few minutes will pass, and then activity will resume. Looking at the Storage Spaces GUI, sometimes a drive will show a warning (with no explanation). All 5 drives are in a Yottamaster 5-bay enclosure.

Maybe a drive is getting disconnected for some reason? A Super User thread indicates a power limit on the enclosure could be the problem.

But the enclosure I'm using has a 150W PSU, and the combined power draw of the drives is around 35W, nowhere close.

A reviewer of my enclosure notes the following:
“Good material, Nice enclosure if you intend to use each HDD individually but I don’t recommend it in SW raid, especially in Linux. maybe that’s the case with me only but each restart will give you a new dev id for my HDDs and a random drive will fail until the next restart”

Could this be causing an issue with Storage Spaces? The warning and file transfer problems appear specifically in the middle of a file transfer, not on a restart. I haven't rigorously tested restarts yet, but it doesn't seem too happy on restarts either.

I’d agree that your processor definitely isn’t the bottleneck; my 75MB/s write scenario is with a Turion II Neo N54L.

When I run Get-VirtualDisk -FriendlyName "StorageSpace1" | fl I'm also getting 1GB reported for AllocationUnitSize, but I know for a fact that the disks were set up with 64k clusters, so either that output isn't correct or it's referring to something else.
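
If you want to line the two numbers up, something like this shows both (the friendly name and drive letter are placeholders). My guess is that the virtual disk's AllocationUnitSize is the pool's allocation/slab granularity rather than the NTFS cluster size, but I haven't confirmed that:

# What the virtual disk reports
Get-VirtualDisk -FriendlyName "StorageSpace1" | Select-Object FriendlyName, Interleave, NumberOfColumns, AllocationUnitSize

# What NTFS was actually formatted with
fsutil fsinfo ntfsinfo D: | Select-String "Bytes Per Cluster"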

The transfers dropping to zero intermittently doesn't sound like a Storage Spaces problem in itself.
I suppose a good way to test whether this is a Storage Spaces problem or an enclosure problem would be to write to the drives just as "normal" drives and see if the disconnects still happen.
Every external USB enclosure I've ever had has had problems when I run high throughput through it for long periods of time; I'm pretty sure it's the USB controller overheating.

I think you're right, the enclosure might just be shit. I disbanded the storage space and the pool and transferred to all 5 drives simultaneously as JBOD. All 5 drives disconnected from the computer after transferring ~15GB of a 37GB transfer. CrystalDiskInfo doesn't show ridiculous temps on the drives (~52C), but the USB controller might be getting overwhelmed like you said.

I pulled a couple of drives out of the 5-bay enclosure and tried them in an older 2-bay toaster I had lying around. I can complete the full 37GB transfer to one or both drives simultaneously in JBOD.

Curiously, I do see some spikes well above the drives' rated speeds during the transfer, jumping briefly from a steady 240MB/s up to 2GB/s, which neither the drives, the enclosure, the cable, nor the USB port are capable of. Write caching is disabled on the drives, so that's some weirdness I haven't seen before.

Any recommendations on a 5-bay, 10Gbps-capable DAS enclosure that isn't shit?

This is why you don't use /dev/sdb, but /dev/disk/by-(id, uuid, path, etc.). sda, sdb can change on a reboot (first-come, first-served UNIX policy)… using unique identifiers prevents this and makes replacing disks and troubleshooting much easier.

Never use sda, nvme0, etc. for your storage arrays.

I’d like to pretend I know what that means, buuuuut assuming I’m a total idiot that’s basically klingon to me.

I've seen write caching policies act weird/inconsistent when storage is connected through USB before. You could tell for certain whether the unrealistically fast speeds are a result of write caching by monitoring RAM usage during file transfers.
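
Something as simple as polling available memory while the transfer runs would show it; a big dip that lines up with the "too fast" bursts would point at RAM caching (this just uses the standard Memory performance counter):

# Poll available memory once a second during the transfer (Ctrl+C to stop)
Get-Counter -Counter '\Memory\Available MBytes' -SampleInterval 1 -Continuous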

As to what 5-bay USB enclosure isn't trash… I'm not sure I can recommend one; every one I've ever used has had problems when used heavily.

I wasn't noticing any RAM spikes on either JBOD or Storage Spaces. But faster-than-expected transfer speeds are a good problem to have, I guess. Not that the overall transfer completes any faster; Explorer just reports spikes in speed.

There has to be something… Currently comparing the Sabrent (DS-SC5B) and the TerraMaster D6-320. I've used the Sabrent 1- and 2-drive enclosures before and they've worked well for years, although they are different animals.

I know Wendell recently mentioned one he said was acceptable, but I can't find where he said that.

Actually, now that I think about it, Areca makes USB/Thunderbolt enclosures that are pretty good, but they are super expensive, like >1000 USD.

Knowing Wendell I’d expect some 24 bay monster lol.

Oof yeah that’s a bit out of the price range.

Going back to the original Storage Spaces talk: I'm pretty sure, unless they've really worked on it in the past few years, Storage Spaces has a really horrible RAID 5/6 implementation. Mirroring (RAID 10-style) is pretty much the only real way to deploy Storage Spaces.

It's even more than that, too: Storage Spaces was (maybe not anymore) very picky about the storage controllers being used, and there were really only a few approved by Microsoft back in the day. It got to the point that when you told Dell you wanted to do Storage Spaces, they basically had just one option for you to buy. It worked, but you didn't have options.

When I ran Storage Spaces I did RAID 10 with an SSD caching tier and had pretty good performance. This was back in the Server 2012 R2 and 2016 days.

Any suggestions for other software-based RAID solutions?

I use Unraid; it's a pretty fantastic product. You can do parity-based RAID on it.

If you want to stick to Windows, I used Drive Bender with SnapRAID to do a RAID 5-like deployment a while ago. Edit: looks like they finally did end-of-life Drive Bender. It seems like you can still download it, but you won't be able to license it.

The cool thing about Unraid and Drive Bender is that they both do a file-based "RAID" solution (it's kind of not RAID, but kind of is). This allows you to pull any drive, mount it on your desktop, and read all the files. The benefit is that if you had a RAID 5-like deployment and lost two drives, you can still recover the files on the remaining drives.

In Microsoft-land, ReFS can work, if you have the right OS license.

For my slightly janky NAS setup, I have been running BTRFS with 2 drives local and 4 drives in a USB enclosure.

My understanding of Unraid/ZFS is that you'd need to run it on a separate machine. I don't have a spare NAS PC to run Unraid as the OS, hence the need for an enclosure.

With ReFS, isn't that just a filesystem you'd use instead of NTFS on a storage space? And doesn't it require Win 10/11 Enterprise?

Isn't BTRFS a Linux filesystem?

Ehhh. Performance for Storage Spaces has been notoriously bad for years. ReFS never became the default filesystem for a good reason. ZFS or even BTRFS would be better choices, but parity RAID in BTRFS is laughable. If you look at BTRFS, literally don't use its volume manager.

Microsoft's Storage Spaces Direct kinda tries to fix some of that: Deploy Storage Spaces Direct on Windows Server | Microsoft Learn

But at the end of the day Ceph is better at that role.

If you’re living in Windows land, you’re better off with an old LSI HW RAID card and not using storage spaces.

Doesn't a RAID controller card require the drives to be mounted internally? I don't have the necessary 3.5" bays inside my case.

ReFS is readable by all Win10 and, since recently, Win11 versions. Creating ReFS pools requires M$ Server, Win10 Enterprise, or Education (I think).

Licensing ideas by Microsoft are a special kind of joke.

With BTRFS, you need to do your research into what configurations will not blow up in your face before putting all your eggs into one basket (which you shouldn’t do anyway).