Why no OpenZFS on Windows?

Hi,

(Happy New Year!)

I’m currently struggling with my desire to use six non-OS SATA SSDs on a Windows host system as something RAID6-like, without a dedicated hardware RAID controller.

I know about my motherboard’s chipset software RAID options, but AMD only offers RAID 0, 1, and 10, and the driver performance is somewhat sub-optimal.

Just to see what happens, I’ve tried VMware Workstation and its feature to pass entire physical disks through to guests. While the performance with a Windows guest isn’t that bad, using it with TrueNAS (Core) is just abysmal :frowning:

Is there a reason why there isn’t something like a virtual device driver that makes it possible to use OpenZFS under Windows? I was imagining something like VeraCrypt’s handling of entire drives (just to help picture how I’d use it).

Regards,
aBavarian Normie-Pleb

Win-OS isn’t suitable for the way ZFS works. In short, running Linux/Unix-style file systems in a VM on a Win-OS host kind of defeats the purpose of using these file systems, and their advantages over NTFS, in the first place.

The best solution, though not free, is to migrate the SSDs to a separate machine running TrueNAS, Unraid, or even a plain vanilla Linux distro (Debian, Ubuntu Server, etc.) to gain the benefits of ZFS.

1 Like

Happy new year!

On a kind of design philosophy level, I think it comes down to the wants/needs of users of the respective OSes.

Windows, for the most part, is a user-facing OS, so the file system implementations are made to be as hands-off as possible. You can use Storage Spaces to create a software-based RAID6-esque solution, which won’t perform wildly differently from ZFS in use (resilience potentially being another story), but Windows deals with the details.
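For anyone curious, a parity space can be set up in a few PowerShell commands. A rough sketch (run elevated; the pool and disk names are made up for illustration):

```shell
# List disks that are eligible for pooling:
Get-PhysicalDisk -CanPool $true

# Create a pool from all poolable disks:
New-StoragePool -FriendlyName "SSDPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Dual parity (the RAID6-like layout, -PhysicalDiskRedundancy 2) needs
# at least 7 disks; with only 6 you are limited to single parity:
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "Data" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -UseMaximumSize
```

The resulting virtual disk then shows up like any other disk to be initialized and formatted.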

OpenZFS is popular in enthusiast and enterprise environments because the exact opposite is true for it. Its configurability and resilience are its main selling points, but it’s not necessarily as user-friendly because of this.

1 Like

I can fundamentally understand your points of view.

However, I disagree that it would be “unclean” to use it with Windows:

  • Windows systems can also use ECC memory
  • The memory drive cache can be disabled (and you can also get drives with powerloss protection)
  • Would come in handy to check the datasets/drives of a dedicated ZFS server in case of a hardware failure there
  • Microsoft’s SMB3 is still a bit faster than Samba

But well, sadly Devember is over… :upside_down_face:

Microsoft has no interest in other formats, not even in expanding ReFS to replace NTFS, and seems absolutely unwilling to help 3rd parties.

The ZFS on Windows project is slowly ticking along, but it’s a niche of a niche.
Anyone who wants reliable storage uses a system that supports it: either a hardware controller at small scale, or a separate box (SAN/NAS/file server) at larger scale.

1 Like

They are working on that, aren’t they? Not sure how well it’ll work, or how good of an idea it is, in general, but there you go…

3 Likes

I am happy to be corrected, but last I looked it was in testing/alpha, with the pools being made up of files, without direct disk access.

Doesn’t look production ready (if it will ever be…), but assuming their checklist is somewhat accurate they do seem to be making good progress.

Marelooke beat me to it. ZFS on Windows is actually being worked on. It’ll be a long while before it’s usable since it’s basically a single-person project. Additionally, the effort is first and foremost about Windows being able to access ZFS pools; booting from it may or may not be viable. But if it pans out it’ll be neat to have, even if I personally keep Windows confined to a VM on top of ZFS these days.

Also note, the current project is here: GitHub - openzfsonwindows/openzfs: OpenZFS on Linux and FreeBSD

The zfsin repo was an early test repo.

I believe there’s a Mac port in the works as well.

2 Likes

I think it is a touch further along than the windows one. But both the Mac and Win ports are spearheaded by a single person, so progress is slow.

3 Likes

GitHub - maharmstone/btrfs: WinBtrfs - an open-source btrfs driver for Windows

Btrfs seems to be coming along better for Windows, by virtue of it being in ReactOS (an open-source Windows clone).

While others have definitely answered the question well, it’s not the Unix file structure that’s as much of a problem as the memory models and the other functions of the system. After all, Btrfs is doing really darn well at making progress toward coming to Windows with a software-layer driver.

It’s linked above, so linking it again would be redundant.

I think the thing with ZFS is that it really would require more than a driver. It would require changing how the OS allocates memory. While Windows can certainly already do this in a primitive manner, it’s not as easy as doing it in the Linux kernel on the fly.

Back to practical considerations… how’s windows iSCSI performance? Can you use a pair of 40G nics and mount a zvol?

Actually, the reason Windows supports so few filesystems mostly has to do with the fact that they never bothered to implement a proper block device driver subsystem (BDDS), the way Linux has.

This means all block device support needs to be implemented as a userspace driver, which drastically reduces performance, to the point that the kernel-backed NTFS and exFAT are superior to most other block device filesystems on Windows.

In comparison, Linux’s BDDS is a freakin’ Shinkansen next to Windows’ steam engine. That said, it is far from perfect. The reason ZFS isn’t the default on Linux is partly licensing, and partly that a lot of ZFS features are already implemented in the BDDS in a conflicting way.

Bottom line: You are more likely to see performant ZFS support in ReactOS than you are to see it in Windows, since the only people who can modify the kernel to make it work, well, don’t care about it. :slight_smile:

2 Likes

In my research I did come across a release of ZFS on Windows, but it was not a final or stable product, so I didn’t want to trust my data to it. In the end I created an Ubuntu VM using Hyper-V and passed four 2 TB disks through to the VM. I created a RAIDZ1 pool from the 4 disks and then exported it as an iSCSI share on an internal private network between the VM and the Windows host. I mapped the iSCSI share to the host and, with guest services installed on the VM, they share 10 Gbit connectivity with each other.
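For reference, the Linux-guest side of a setup like that boils down to a handful of commands. A sketch with example device names and an example IQN (targetcli syntax as in targetcli-fb):

```shell
# Create the RAIDZ1 pool from the four passed-through disks:
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a zvol to export as an iSCSI LUN (-s = sparse/thin provisioned):
zfs create -s -V 5T tank/winlun

# Export it over iSCSI with targetcli:
targetcli /backstores/block create winlun /dev/zvol/tank/winlun
targetcli /iscsi create iqn.2022-01.local.tank:winlun
targetcli /iscsi/iqn.2022-01.local.tank:winlun/tpg1/luns \
    create /backstores/block/winlun
# (An ACL for the Windows initiator, or demo mode, is still needed;
# omitted here for brevity.)
```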

1 Like

I think a much more practical solution is the one mentioned earlier, mounting over iSCSI with a fast network link.

I’d one-up this by saying that, with the direction Microsoft has been taking Windows in the past few years, and where it’s headed, it doesn’t seem worth the time and effort to develop ZFS for such an unmitigated disaster of an OS.

…if you can still even call Windows an OS

1 Like

Is that question directed at me?

Currently I’m still building my dedicated 2 x 40 GbE ZFS Server:

  • 8 x U.2 NVMe hot-swappable in two backplanes (PCIe Gen3) (drives connected via Delock 90504 PCIe Gen3 x16 AIC with Broadcom PEX8749 PCIe switch; the PCIe switch gets 8 PCIe lanes from the CPU)
  • Broadcom HBA 9400-8i8e gets the other 8 CPU PCIe lanes (8 internal SATA SSDs, 2 external SAS3 expanders for up to 2 x 24 SAS/SATA drives)
  • Intel XL710 QDA2 gets its 8 PCIe Gen3 lanes from the X570 chipset
  • 2 x Intel Optane 905P 480 GB, 1 x via CPU M.2, 1 x via X570 U.2
  • CPU: 5900X
  • Motherboard ASUS Pro WS X570-ACE
  • 128 GiB ECC DDR4-2666

The configuration seems to work fine, but stability testing is a pain in the ass due to shitty SFF-8643-to-SFF-8643 cables. I’ve given up on PCIe Gen4 here since the Delock 90504 is PCIe Gen3-only (though I initially wanted a configuration that could handle PCIe Gen4 without having to replace the two quite pricey Icy Dock MB699VP-B V1 units with the MB699VP-B V2 :angry: )

The only PCIe Gen4 link in the system is x4 between the CPU and the X570 chipset.

But since this server should be limited performance-wise by the Intel XL710 with 2 x 40 GbE anyway, I think I’m now pretty happy with the mentioned configuration and will stop tinkering with this as-of-yet experimental system.

But that ZFS server has nothing to do with the Windows system I wanted to use ZFS RAIDZ2 on directly, mentioned in the first post here.

Yeah, I was wondering if, instead of using ZFS in filesystem mode on Windows (not supported), or mounting ZFS onto Windows through Samba, you could maybe use it well enough with zvols over iSCSI; you’d still get thin provisioning, snapshotting, deduping, RAIDZ2… but with native Windows NTFS.

You could even run Linux/FreeBSD in a VM, virtual networking should be faster than physical.

Am open to suggestions on the software side once the box’s hardware configuration is finalized (cable management with the dozen or so SFF cables and fans is a biiiaatch; am using a SilverStone GD07 as a case, populated with Noctua fans)

Currently my plan is to use TrueNAS Core on the ZFS Server as a bare metal setup. The internal SATA SSDs and external mechanical SATA HDDs are meant to be used as RAIDZ2 or RAIDZ3, the two Optanes are for speeding up stuff, especially with the mechanical HDDs.
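TrueNAS drives all of this from its UI, but the equivalent zpool commands for a layout along those lines might look like this (device names are hypothetical):

```shell
# RAIDZ2 over eight mechanical/SATA disks:
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Mirrored SLOG on the two Optanes to accelerate synchronous
# writes (useful for iSCSI/NFS workloads):
zpool add tank log mirror nvme0n1 nvme1n1

# Alternatively, one Optane as an L2ARC read cache
# (no redundancy needed for cache devices):
# zpool add tank cache nvme0n1
```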