Current: Threadripper Pro 3995WX system, 512 GB RAM. HighPoint 7540 M.2 RAID card with 8x 2TB drives in a RAID 0; 14.5 TB usable (gets differential backups); ~30/30 GB/s R/W. 2x dual-port 25 Gbps ConnectX-5 cards. Boot is from an M.2 on the motherboard. 5.5 GB/s real-world SMB throughput on Windows Server, with RDMA.
Requirements: I need a lot more storage; I can fit 16x 3.5” drives at 20-28 TB each. Later I’ll likely add a 60-bay DAS to the system. I want to be able to edit videos off it, though I’m not sure what my minimum speed requirements are for that. Video files will be up to 8K75 ProRes 422 HQ or 6K60 ProRes 4444 XQ. 4K60 without lag is a minimum requirement. I may want to run a few VMs, but they could reside elsewhere.
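Back-of-the-envelope, scaling Apple’s published ProRes targets (~220 Mbps for 422 HQ and ~500 Mbps for 4444 XQ at 1080p30) by pixel count and frame rate gives roughly:

```
8K75 422 HQ  : 220 Mbps × 16 (pixels vs 1080p) × 2.5 (fps) ≈  8.8 Gbps ≈ 1.1  GB/s
6K60 4444 XQ : 500 Mbps × ~10 (pixels)         × 2.0 (fps) ≈ 10.0 Gbps ≈ 1.25 GB/s
4K60 422 HQ  : 220 Mbps ×  4 (pixels)          × 2.0 (fps) ≈  1.8 Gbps ≈ 0.22 GB/s
```

So a single stream is on the order of 1 GB/s sustained, before multicam or scrubbing.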
Note: I have no ZFS experience of any kind; I don’t really know the terms, offerings, or possibilities.
Idea: I’m thinking no parity level higher than RAID-Z1, with a cold spare nearby. Something ZFS-based, possibly Proxmox or something similar, with one big-ass pool and my current NVMe array as a cache drive. Is that even possible? Would it be recognized? Drivers? I assume the cache drive in ZFS just holds a copy of the data, so a “volatile” RAID 0 NVMe cache wouldn’t be a problem if it “died”. Is this possible? Reasonable? I’d also like as much performance from the spinning disks as possible, so I guess lots of striped RAID-Z1 groups, kind of like a RAID 50? What kind of performance might I see on cache misses? How big does the boot drive need to be? I’m currently using a 256 GB M.2; probably good? I will need an HBA card; suggestions?
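From skimming the zpool docs, I think the layout I’m describing would look roughly like this sketch (pool and device names are placeholders, not a recommendation):

```
# One pool striped across four 4-disk RAID-Z1 vdevs ("RAID 50"-style),
# with the NVMe RAID 0 attached as an L2ARC read cache device.
zpool create tank \
  raidz1 sda sdb sdc sdd \
  raidz1 sde sdf sdg sdh \
  raidz1 sdi sdj sdk sdl \
  raidz1 sdm sdn sdo sdp \
  cache nvme0n1

# The L2ARC only holds copies of pool data, so losing the cache device
# costs performance, never data.
zpool status tank
```

(In real use you’d point at stable /dev/disk/by-id names rather than sdX.)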
HDDs are slow af, especially for anything other than sequential IO.
Are you a solo video editor? If so, it probably makes more sense to edit off of NVMe (via a server) and move files off onto bulk storage once you’re done.
If you want SMB Direct/RDMA (3-10 GByte/s), the only stable option is a Windows Server OS with Windows 11 clients. There is ksmbd on Linux that promises SMB Direct, but I have not seen any success reports.
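If you go that route, you can at least verify RDMA is actually in use from PowerShell; these cmdlets ship with Windows Server and Windows 11:

```
# Is RDMA enabled on the NICs (e.g. the ConnectX-5 ports)?
Get-NetAdapterRdma

# Does SMB see the interfaces as RDMA capable?
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# During an active transfer, the RDMA-capable columns should show True.
Get-SmbMultichannelConnection
```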
Look at Windows Server 2025 Essentials as a base (affordable; single CPU, 10 cores, 25 users). Windows supports software RAID with Storage Spaces and NTFS/ReFS (you can pool disks of any size or type), where redundancy is defined per Storage Space rather than per disk. You can virtualize other services via Hyper-V (a type 1 hypervisor) or the free VMware Workstation.
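A rough PowerShell sketch (pool and volume names and sizes are placeholders, not a recommendation): a pool over all poolable disks, a parity space on it, and optionally an SSD+HDD tiered space, which is more or less the “SSD cache in front of a disk RAID” idea:

```
# Pool every disk that is available for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# A parity Storage Space formatted with ReFS.
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Media" `
    -FileSystem ReFS -ResiliencySettingName Parity -Size 100TB

# Tiered alternative: SSD mirror tier in front of an HDD parity tier.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" `
    -MediaType SSD -ResiliencySettingName Mirror
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" `
    -MediaType HDD -ResiliencySettingName Parity
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Tiered" `
    -FileSystem NTFS -StorageTiers $ssd, $hdd -StorageTierSizes 2TB, 100TB
```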
OpenZFS 2.2 on Windows has reached release candidate 11, with RAID-Z expansion and Fast Dedup included; very promising. Check the issue tracker for its current state.
For storage management (Storage Spaces and ZFS) you can try my napp-it cs web-gui (copy and run, free for noncommercial use).
I am the only editor, but we have lots of concurrent projects moving at different paces. They’re too big for the SSD, and it’s also a RAID 0, so that’s very risky. Thus something that manages moving things on and off the fast storage automatically, more like a cache drive, would be ideal. I will use the storage for other things, but this is the real bottleneck. I know that with big arrays you can get 2 GB/s R/W.
Is there any way to set up a disk RAID and then an SSD cache for it on Windows Server?
I honestly don’t fully understand the benefits of ZFS, but they seem to be there. It seems like it needs less parity? I understand there are ultimately faster configs? If I set up something like TrueNAS and then ran a Windows VM on it, I could attach the main array to a Windows Server VM and do SMB that way, right?
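Something like this hypothetical sketch on a Proxmox host is what I’m picturing (VM ID, pool name, and size are placeholders; TrueNAS would do the equivalent through its UI): carve a zvol out of the pool, attach it to the Windows Server guest, then share it via SMB from inside the VM:

```
# Create a 50 TB zvol on the pool.
zfs create -V 50T tank/winserv-disk

# Attach it to VM 101 (the Windows Server guest) as a SCSI disk.
qm set 101 --scsi1 /dev/zvol/tank/winserv-disk
```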