Hardware RAID is Dead - Need Windows 11: What other options?

Hi,

I need to use Windows 11 as my daily driver. I have 8 × 20TB WD drives that I wish to pool together into a single volume.

If hardware RAID is dead (I agree), what are my options on a Threadripper Pro system? On other systems I used Intel software RAID and never really had issues with it (although never with more than four drives). I don’t need max performance, just the single large partition and some minimal level of redundancy.

1 Like

Storage Spaces, AMD RAID (BIOS setup), or a spanned/RAID-5 volume in Disk Management.

A spanned volume is simple: search for “Computer Management” and open it, select Disk Management, then right-click a drive and choose the spanned volume option.

BIOS RAID is also somewhat easy. Download the RAID driver from your motherboard vendor’s website, open the BIOS and switch to RAID mode, build an array in the BIOS options, boot Windows and install the driver, then initialize the disk.

Storage Spaces takes more effort to set up properly for best performance. Here are the commands for a basic striped pool across all the drives, with no parity protection:

Open PowerShell as admin.

Get-PhysicalDisk

Set-PhysicalDisk -FriendlyName "*drive's disk name goes here (found from the previous command)*" -NewFriendlyName "PoolHardDrive"

(Run the rename once per drive you want in the pool, so the lookup in the next step picks them all up.)

$PhysicalDisks = (Get-PhysicalDisk -FriendlyName "PoolHardDrive")

New-StoragePool -FriendlyName "StoragePool" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks -ResiliencySettingNameDefault simple -ProvisioningTypeDefault Fixed -LogicalSectorSizeDefault 4096 -WriteCacheSizeDefault 2GB

Get-StoragePool "StoragePool" | New-VirtualDisk -FriendlyName "Storage_Pool" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns 8 -ProvisioningType Fixed -Interleave 256KB -WriteCacheSize 2GB

Then, in Disk Management, initialize the disk you made.

If you want to use Storage Spaces with a parity setup, let me know and I’ll give you the modified commands.

edit:

Just saw that very last part, actually, so the parity commands to survive one drive failing would be:

Get-PhysicalDisk

Set-PhysicalDisk -FriendlyName "*drive's disk name goes here (found from the previous command)*" -NewFriendlyName "PoolHardDrive"

(As before, run the rename once per drive you want in the pool.)

$PhysicalDisks = (Get-PhysicalDisk -FriendlyName "PoolHardDrive")

New-StoragePool -FriendlyName "StoragePool" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks -ResiliencySettingNameDefault parity -ProvisioningTypeDefault Fixed -LogicalSectorSizeDefault 4096 -WriteCacheSizeDefault 2GB

Get-StoragePool "StoragePool" | New-VirtualDisk -FriendlyName "Storage_Pool" -ResiliencySettingName Parity -UseMaximumSize -PhysicalDiskRedundancy 1 -NumberOfColumns 8 -ProvisioningType Fixed -Interleave 256KB -WriteCacheSize 2GB

Use “-PhysicalDiskRedundancy 2” to be able to lose two drives without the array failing or losing data.

Unfortunately your number of drives doesn’t work great for parity, as the interleave multiplied by your number of data drives doesn’t line up with an allocation unit size, so write speed will be very slow (probably 20-50MB/s). It is unfortunately the price you have to pay for parity if you want to use that number of drives (8 total). 10 drives would work best for a 2-parity setup (8 data + 2 parity), or 5 drives for a 1-parity setup (4 data + 1 parity); those divide properly. Anything else incurs a giant write speed penalty. You can mirror instead and get your write speed back, but that would mean you only have the storage space of 4 of the drives to use in a mirror.
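To illustrate the divisibility point above, here is a quick sketch in Python. It assumes the width that has to match an allocation unit size is (drives − parity) × interleave, using the 256KB interleave from the commands earlier; the helper names are mine, not anything from Microsoft.

```python
# Rough check of which drive counts give a parity stripe whose full data
# width lines up with an NTFS allocation unit size (a power of two).
# Assumption: usable stripe width = (drives - parity) * interleave.

INTERLEAVE_KB = 256  # interleave used in the commands above

def stripe_width_kb(total_drives, parity_drives, interleave_kb=INTERLEAVE_KB):
    """Full data-stripe width in KB for one rotation of the parity stripe."""
    return (total_drives - parity_drives) * interleave_kb

def aligns(total_drives, parity_drives):
    """True when the stripe width is a power of two, so it can match an AUS."""
    width = stripe_width_kb(total_drives, parity_drives)
    return width & (width - 1) == 0

print(aligns(8, 1))   # 7 * 256KB = 1792KB -> False (misaligned)
print(aligns(5, 1))   # 4 * 256KB = 1024KB -> True
print(aligns(10, 2))  # 8 * 256KB = 2048KB -> True
```

Under that assumption the 8-drive single-parity case lands on 1792KB, which no power-of-two allocation unit divides into, matching the slow-write warning above.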

2 Likes

Thank you so much for your response! I never knew about Storage Spaces. I will read about them.

Out of curiosity for the commands you listed, did you choose the values on purpose (Interleave, columns, etc…)?

Yes, columns 8 because you have 8 drives, so it will stripe (RAID0) the data across them for best performance. I suppose you could do “columns 1” to keep all the data in single chunks (not striped), but if a drive fails I’m not sure whether any of the data would still be accessible, even data that sat entirely on a drive that didn’t fail (because the array would still likely go down). I haven’t really looked into whether the pool would survive properly in that situation, as I always use parity or mirroring to keep data after a drive failure. So I always stripe as much as I can for performance reasons.

Interleave 256K because you generally store larger files on big drive storage arrays, so a tiny interleave to save some space on very small files doesn’t usually make sense.
When making your drive partition after you’ve made the virtual disk, be sure to use a 2048KB allocation unit, since 256KB * 8 drives = 2MB. I suppose you could do a 128K interleave with a 1MB allocation unit size, or 64K with 512KB, if you wanted. It likely won’t make much performance difference, but it could add up to some decent space savings if you plan to store a ton of very small files.
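As a quick sanity check on the arithmetic above (nothing here beyond the numbers already quoted in this post):

```python
# Verify the interleave/allocation-unit pairings quoted above for an
# 8-column simple (striped) space: AUS = columns * interleave.
COLUMNS = 8

pairings_kb = [(256, 2048), (128, 1024), (64, 512)]  # (interleave, AUS)
for interleave_kb, aus_kb in pairings_kb:
    assert COLUMNS * interleave_kb == aus_kb
    print(f"{interleave_kb}KB interleave -> {aus_kb}KB allocation unit")
```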

1 Like

They’re the easiest option for Windows software RAID. The dynamic disks @EniGmA1987 mentioned (through Disk Management) are deprecated by Microsoft and, in the RAID0/1 benching I’ve done, come in at more like 90% of Storage Spaces’ performance. The use case for AMD RAID is pretty much just if you need a bootable volume (see Wendell’s videos on BIOS RAID).

The eight-column simple space @EniGmA1987 gave you commands for creates an eight-disk RAID0. That’s a lot of drives for zero failure tolerance, so consider parity spaces, such as in the edit @EniGmA1987 made while I was typing this. Or perhaps two RAID10s, if you don’t have a hard requirement for a single volume and 50% available capacity is acceptable.

Getting the interleave right for good parity space performance takes getting the math right. Microsoft also tends to default columns to at least one less than the max to allow automatic retirement of a failed disk, so consider what you want for recovery procedures and failure tolerance when planning the space. Once the virtual disk’s created, its number of columns cannot be changed, so it’s wise to do performance testing up front and fix any issues before loading up the space. (Microsoft’s docs recommend matching space configurations to their workload profile, though that doesn’t really work all that well for a space getting general use.)

You can also use PowerShell or the Storage Spaces UX to format the disk. The Storage Spaces UX can also be convenient for adding drives to a pool, but it probably won’t use good settings when creating anything besides a RAID1 mirror space. So it’s usually best to stay with New-StoragePool and New-VirtualDisk.

It’s the number of columns which needs to line up with the AUS to avoid the write penalty, not the number of drives. I also think this looks like a five-column situation, though there might be a way to align seven columns and get full array utilization given single-disk retirement. If not, five columns underutilizes the array, but it should only be a drop to 5/7 = 71% of potential throughput rather than a two-order-of-magnitude misalignment penalty.
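One way to see why five columns falls out of the math (a sketch; it assumes a single-parity space writes (columns − 1) × interleave of data per stripe, using the 256KB interleave from earlier in the thread):

```python
# For a single-parity space at 256KB interleave, list which column counts
# give a data-stripe width that is a power of two (i.e. can match an NTFS
# allocation unit size). Assumes data columns = columns - 1.
INTERLEAVE_KB = 256

for columns in range(3, 9):
    width_kb = (columns - 1) * INTERLEAVE_KB
    power_of_two = width_kb & (width_kb - 1) == 0
    print(f"{columns} columns -> {width_kb}KB stripe, aligned: {power_of_two}")
```

Under that assumption only three and five columns align at this interleave; seven columns gives a 1536KB stripe, and since 6 has a factor of 3, no power-of-two interleave can fix it.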

Disclaimer here is I don’t have enough drives to have tested storage spaces this far.

As others stated: Storage Spaces & ReFS, which is included with Windows Server 2022 Standard or Datacenter (if you plan on more than 2 Hyper-V VMs).

Are you planning any kind of redundancy?

3 Likes

Just to clarify, Storage Spaces is available in Windows Server 2012, Windows 8, and newer. ReFS is also a Server 2012 introduction but, IIRC, was semi-supported in 8.1 and didn’t really arrive client side until 10.

The testing I’ve mentioned doing is on Windows 10 22H2. I’ve looked at ReFS in addition to NTFS, but there don’t appear to be many functional advantages to ReFS and there are minor perf disadvantages. IMO ReFS is of interest mainly for integrity streams, but those’ve been wanting some major fixes that, if Microsoft’s gotten to them, are expected to be Server 2022 only.

ReFS is Microsoft’s attempted counter to ZFS, complete with integrity checking.

There’s definitely a performance disadvantage… until there’s corruption.
Then it’s an infinitely faster file system, since ReFS still has the data intact while NTFS has lost the data and has no mechanism to retrieve it.

You can mount and access ReFS volumes from Windows 11 Pro, but you cannot create them natively, as that requires a Server 2022 installation.

We have discontinued support for Windows 10 as October 2025 is fast approaching.

2 Likes

The context here is primarily parity spaces, however, which should be capable of repairing NTFS or ReFS without integrity streams. Not sure if there’s been enough third-party testing to ascertain which approach is most reliable, though, and Microsoft seems cryptic on the topic (three-way and maybe two-way mirrors should also permit repair). I agree ReFS is cool in that it can (in principle) provide repair to simple spaces or regular partitions but, for client OS use, it looks like any fixes made to Server 2022 would flow to Windows 12 at most. I haven’t been able to find anything from Microsoft on whether that would actually happen and, if so, on what timeframe.

The best I’ve personally been able to work out is that Windows client deployments probably have the best odds if kept on NTFS and combined with non-Microsoft mechanisms for integrity checking.

Yeah, it’s my understanding Microsoft at least partially (maybe mostly?) crippled ReFS out of 11 and has brought creation back only for dev drives. This lack of clear and sustained commitment to supporting the filesystem, presumably at Microsoft VP level, is IMO another reason to be careful about adopting ReFS.

Guess we might have an idea of who you work for. :grinning: The product manager communications on this I’m aware of are from u/wbsmolen_, IIRC, and apply more broadly than just 10.

1 Like

For the VERY specific use case of providing multiple disks as local storage on a Windows desktop machine, hardware RAID is NOT dead. But it is only available via specific Adaptec hardware.

This is NICHE, but still a good solution.

1 Like

Can confirm.

I’ve been in multiple programs where software RAID solutions are explicitly banned by name because of all the performance and reliability problems they’ve caused in the past.

1 Like

If there’s market share data available I haven’t been able to find it. But, anecdotally, I’d guess the ordering from most to least commonly used is probably closer to ASMedia, Marvell, Broadcom, and then Adaptec. Not sure if there are other companies making RAID controllers (the other obvious candidate, JMicron, doesn’t have anything in their current lineup).

software RAID (driver or firmware based)

hardware RAID (part of the chip’s microcode)

Even then, Adaptec also makes HBAs. They are good, but way more expensive than the used stuff recommended in home server guides.

Don’t get ZFS, Btrfs, and mdadm confused here. Those are OS-level software RAID. Totally different.

1 Like

Incorrect for all three. Good luck.

1 Like

It is actually easy to see and prove my point; however, at my elevated age I find arguing on the internet a non-soothing hobby.

Peace.

Honestly, I’d just use spare hardware to build an external storage machine, link up to it via network, and use something like ZFS. (or TrueNAS)

1 Like

I use hardware RAID on my TR Pro system: 2 × 14TB drives in RAID1 for my downloads, ISOs, and initial backups. There is nothing on the RAID array that I can’t afford to lose or that would stop a full recovery.

ReFS is ideal for enterprise environments where data integrity, high availability, and large-scale storage are priorities, particularly in backup, archiving, and virtualization scenarios. However, it is not suitable for general-purpose computing, boot volumes, or applications requiring features unique to NTFS, like dedupe, file compression, or support for legacy software.

We’ve had some wacky issues specifically on Windows Server 2022 and ReFS that vanished by going back to NTFS.

2 Likes

Does anyone have recent experience of Open ZFS on Windows?

I heard there used to be problems with driver conflicts which could cause data corruption and blue screens. But there seems to have been much progress in recent years and it’s now at ‘beta’ status.

I’ve had 4× 10TB HDDs sitting around for some time (5 years :see_no_evil:) with the intention of putting them in an old computer to use as a basic NAS/file server. Data integrity is my priority; a solution which can detect minor data corruption (so double parity or equivalent) is what I would really like.

I considered Storage Spaces, but I don’t really want to be locked into a proprietary solution. (And I read somewhere that Storage Spaces needs a minimum of 5 drives for double parity.)

Ideally I would like to keep the computer running Windows so it can be used for general purpose things (although I have to admit I haven’t really used it for anything else for ages). TrueNAS Scale is an obvious solution, then run Windows in a VM on that if I need to. But…

The project has been delayed for so long due to long-term illness/disability (which affects my brain & muscles). When I planned to start the project diving into the world of TrueNAS & Linux shouldn’t have been too hard. Not a complete Linux noob and still able to learn. But… then things declined more rapidly than expected. So it never happened.

1 Like

ReFS only auto-fixes data corruption on mirror arrays right now anyway (because it needs a second copy to check against), so 4 drives in a striped mirror (RAID10) would be best. OpenZFS on Windows is extremely beta and not at all recommended for anything where data is important. They have finally gotten to a release candidate, which is nice, but some of the listed fixes are for things I would really consider beta-level problems. With important data I wouldn’t even consider using it until it has had a stable release out for at least half a year or more.

Another benefit of staying on Windows is you can use Backblaze’s desktop backup for $9/mo for all your storage. That would be all 20TB for $9. You can’t come close to that anywhere else, and backing up 20TB from TrueNAS through Backblaze would cost $120/mo.
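For comparison, the arithmetic behind that figure (a sketch; the ~$6/TB/month B2 object-storage rate is my assumption, not stated in the post):

```python
# Rough monthly cost comparison from the post above.
usable_tb = 20
flat_desktop_usd = 9   # flat-rate desktop backup plan, as quoted
b2_usd_per_tb = 6      # assumed B2 object-storage rate per TB/month

b2_monthly_usd = usable_tb * b2_usd_per_tb
print(b2_monthly_usd)  # -> 120
```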