I’m currently frustrated by my desire to use six non-OS SATA SSDs in a Windows host system as something RAID6-like, without a dedicated hardware RAID controller.
Just to see what happens, I tried VMware Workstation and its feature to pass entire physical disks through to guests raw, but while the performance with a Windows guest isn’t that bad, using it with TrueNAS (Core) is just abysmal.
Is there a reason why there isn’t something like a virtual device driver that makes it possible to use OpenZFS under Windows? I was imagining something like VeraCrypt’s whole-drive mode (just to help picture how I’d use it).
Windows isn’t suited to the way ZFS works. In short, running Linux/Unix-style file systems in a VM on a Windows host kind of defeats the purpose of using these file systems, and their advantages over NTFS, in the first place.
The best solution, though not a free one, is to migrate the SSDs to a separate machine running TrueNAS, Unraid, or even a plain vanilla Linux distro (Debian, Ubuntu Server, etc.) to gain the benefits of ZFS.
On a kind of design philosophy level, I think it comes down to the wants/needs of users of the respective OSes.
Windows, for the most part, is a user-facing OS, so its file system implementations are made to be as hands-off as possible. You can use Storage Spaces to create a software-based RAID6-esque solution, which won’t perform wildly differently from ZFS in use (resilience potentially being another story), but Windows deals with the details.
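For illustration, a dual-parity (RAID6-like) Storage Space is just a few PowerShell cmdlets. A minimal sketch, with the pool and disk names as placeholders; note that classic Storage Spaces wants at least seven physical disks for dual parity if I remember right, so six SSDs would only qualify for single parity:

```
# Sketch only; "SsdPool"/"Raid6ish" are placeholder names.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SsdPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
# Dual parity = two-disk redundancy, the RAID6-like part.
New-VirtualDisk -StoragePoolFriendlyName "SsdPool" -FriendlyName "Raid6ish" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -UseMaximumSize
# Bring it online as a regular NTFS volume.
Get-VirtualDisk -FriendlyName "Raid6ish" | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```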
OpenZFS is popular in enthusiast and enterprise environments because the exact opposite is true for it. Its configurability and resilience are its main selling points, but it’s not necessarily as user-friendly because of this.
Microsoft has no interest in other formats, not even in expanding ReFS to replace NTFS, and seems pretty keen on dissuading, and absolutely unwilling to help, third parties.
The ZFS on Windows project is slowly ticking along, but it’s a niche of a niche.
Anyone who wants reliable storage uses a system that supports it: either a hardware controller at small scale, or a separate box (SAN/NAS/file server) at larger scale.
Marelooke beat me to it. ZFS on Windows is actually being worked on. It’ll be a long while before it’s usable, since it’s basically a single-person project. Additionally, the effort is first and foremost about Windows being able to access ZFS pools; trying to boot from it may or may not be viable. But if it pans out it’ll be neat to have, even if I personally keep Windows confined to a VM on top of ZFS these days.
While others have definitely answered the question well, it’s not the Unix file structure that’s as much of a problem as the memory model and the other functions of the system. After all, BTRFS is doing really darn well at making progress toward coming to Windows with a software-layer driver.
It’s pointed out above, so linking would be redundant.
I think the thing with ZFS is that it really would require more than a driver; it would require switching how the OS allocates memory. While Windows can certainly already do this in a primitive manner, it’s not as easy as doing it in the Linux kernel on the fly.
Actually, the reason Windows supports so few filesystems mostly has to do with the fact that they never bothered to implement a proper block device driver subsystem (BDDS), the way Linux has.
This means all block device support needs to be implemented as a userspace driver, which drastically reduces performance to the point of kernel-boosted NTFS and exFAT being superior to most other block device filesystems on Windows.
In comparison, Linux BDDS is a freakin’ Shinkansen to Windows’ steam engine train. That said, it is far from perfect. The reason ZFS isn’t default on Linux is partly due to licensing, partly due to a lot of ZFS features already being implemented in the BDDS in a conflicting way.
Bottom line: You are more likely to see performant ZFS support in ReactOS than you are to see it in Windows, since the only people who can modify the kernel to make it work, well, don’t care about it.
In my research I did come across a release of ZFS on Windows, but it was not a final or stable product, so I didn’t want to trust my data to it. In the end I created an Ubuntu VM using Hyper-V and passed four 2 TB disks through to the VM. I created a RAIDZ1 pool from the 4 disks and then exported it as an iSCSI target on an internal private network between the VM and the Windows host. I mapped the iSCSI target on the host and, with integration services installed on the VM, they share 10 Gbit connectivity between each other.
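For anyone wanting to replicate that, the Linux-side steps are roughly the following; a sketch assuming targetcli-fb on Ubuntu, with the pool name, disk IDs, and IQNs as placeholders:

```
# Build the RAIDZ1 pool from the four passed-through disks (placeholder IDs).
sudo zpool create tank raidz1 \
    /dev/disk/by-id/scsi-disk1 /dev/disk/by-id/scsi-disk2 \
    /dev/disk/by-id/scsi-disk3 /dev/disk/by-id/scsi-disk4
# Carve out a zvol to serve as the LUN behind the iSCSI target.
sudo zfs create -V 5T tank/winlun
# Export it over iSCSI with targetcli (package: targetcli-fb).
sudo targetcli backstores/block create name=winlun dev=/dev/zvol/tank/winlun
sudo targetcli iscsi/ create iqn.2024-01.local.zfsvm:winlun
sudo targetcli iscsi/iqn.2024-01.local.zfsvm:winlun/tpg1/luns create /backstores/block/winlun
# Allow the Windows host's initiator IQN (placeholder hostname).
sudo targetcli iscsi/iqn.2024-01.local.zfsvm:winlun/tpg1/acls create iqn.1991-05.com.microsoft:winhost
```

On the Windows side, the built-in iSCSI Initiator then sees the zvol as a plain disk you can format with NTFS.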
I think a much more practical solution is the one mentioned earlier, mounting over iSCSI with a fast network link.
I’d one up this by saying that with the direction Microsoft has been taking Windows in the past few years, and where it’s headed, it doesn’t seem like it’ll be worth the time and effort to dev ZFS to work on such an unmitigated disaster of an OS.
Broadcom HBA 9400-8i8e gets the other 8 CPU PCIe lanes (8 internal SATA SSDs, 2 external SAS3 expanders for up to 2 x 24 SAS/SATA drives)
Intel XL710 QDA2 gets its 8 PCIe Gen3 lanes from the X570 chipset
2 x Intel Optane 905P 480 GB, 1 x via CPU M.2, 1 x via X570 U.2
CPU: 5900X
Motherboard: ASUS Pro WS X570-ACE
RAM: 128 GiB ECC DDR4-2666
The configuration seems to work fine, but stability testing is a pain in the ass due to shitty SFF-8643-to-SFF-8643 cables. I’ve given up on PCIe Gen4 here since the Delock 90504 is PCIe Gen3-only (though I initially wanted a configuration that could handle PCIe Gen4 without having to replace the two quite pricey Icy Dock MB699VP-B V1 with MB699VP-B V2).
The only PCIe Gen4 link in the system is x4 between the CPU and the X570 chipset.
But since this server should be limited performance-wise by the Intel XL710’s 2 x 40 GbE anyway, I think I’m now pretty happy with the mentioned configuration and will stop tinkering with this as-of-yet experimental system.
But that ZFS server has nothing to do with the Windows system mentioned in the first post here, the one I wanted to use ZFS RAIDZ2 on directly.
Yeah, I was wondering if, instead of using ZFS in filesystem mode on Windows (not supported) or mounting ZFS onto Windows through Samba, you could maybe use it well enough with zvols over iSCSI; you’d still get thin provisioning, snapshotting, deduplication, RAIDZ2… but with native Windows NTFS on top.
You could even run Linux/FreeBSD in a VM; virtual networking should be faster than physical.
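The zvol side of that could look something like the following sketch; the pool name, size, and tunables are assumptions (pairing a 64K volblocksize with a 64K NTFS cluster size is a common suggestion, not gospel):

```
# Sparse (-s) zvol = thin provisioning; names/sizes are placeholders.
zfs create -s -V 10T -o volblocksize=64K -o compression=lz4 tank/ntfs-lun
# Snapshots work on a zvol like on any dataset...
zfs snapshot tank/ntfs-lun@before-windows-update
# ...and the whole LUN can be rolled back if Windows makes a mess:
zfs rollback tank/ntfs-lun@before-windows-update
```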
Am open to suggestions on the software side once the box’s hardware configuration is finalized (cable management with the dozen or so SFF cables and fans is a biiiaatch; I’m using a SilverStone GD07 as the case, populated with Noctua fans)
Currently my plan is to run TrueNAS Core on the ZFS server as a bare-metal setup. The internal SATA SSDs and external mechanical SATA HDDs are meant to be used as RAIDZ2 or RAIDZ3, and the two Optanes are for speeding things up, especially with the mechanical HDDs.
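If the Optanes end up as dedicated log/cache devices (my assumption of how they’d speed things up), the plain-OpenZFS equivalent of that layout would be something like this, with all device IDs as placeholders:

```
# RAIDZ2 vdev over the mechanical HDDs (placeholder device IDs, six shown).
zpool create tank raidz2 /dev/disk/by-id/ata-hdd{1..6}
# A mirrored SLOG on the two Optanes speeds up sync writes...
zpool add tank log mirror /dev/disk/by-id/nvme-optane1 /dev/disk/by-id/nvme-optane2
# ...or use one of them as an L2ARC read cache instead:
zpool add tank cache /dev/disk/by-id/nvme-optane2
```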