Illumos... anybody using it?

Since I made the switch from Windows to Linux and managed a successful GPU passthrough in KVM a few months ago, I am becoming curious about what else is available.

I am also running ZFS and read that Illumos has KVM ported to their OS, which I find pretty exciting. It doesn’t support PCI passthrough though…

Since ZFS is a frequent roadblock for kernel updates on Arch, I thought about putting it into a VM and attaching my SATA controller to it over VT-d. Would Illumos be a good fit for that? Or is there maybe a reason to prefer FreeBSD instead?
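For context, handing a SATA controller to a KVM guest over VT-d on the Linux side looks roughly like this. The PCI address and vendor:device IDs below are placeholders, not from my actual box; check yours with `lspci -nn`:

```shell
# Placeholder BDF/IDs -- find your controller with: lspci -nn | grep -i sata
# Unbind the controller from the host's AHCI driver
echo 0000:00:17.0 > /sys/bus/pci/devices/0000:00:17.0/driver/unbind
# Hand it to vfio-pci (vendor and device ID of the controller)
echo 8086 a102 > /sys/bus/pci/drivers/vfio-pci/new_id

# Then attach it to the guest, e.g.:
qemu-system-x86_64 -enable-kvm -machine q35 -m 8G \
  -device vfio-pci,host=00:17.0 \
  ...
```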

Please give me some input.


Bhyve and Xen (and possibly QEMU/KVM) are supported on FreeBSD; just use that if ZFS support is a must. Illumos primarily serves as the upstream for OpenZFS and the basis of several commercial hypervisors and cloud platforms; it isn’t useful for home applications.

You want ZFS on the metal. Are you proposing running ZFS in a VM?


I am looking into Xen as well right now. It looks a little more complicated than KVM, but I like how it isolates the domUs. PCI passthrough also seems more mature, but I don’t know… The only information comparing the two is 5+ years old.
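For anyone curious what that looks like: on a Linux dom0, a domU with a passed-through device is declared in the xl config file. Everything below (names, PCI addresses, zvol path) is made up for illustration; devices have to be made assignable first, e.g. with `xl pci-assignable-add`:

```
# hypothetical /etc/xen/win10.cfg
builder = "hvm"
name    = "win10"
memory  = 8192
vcpus   = 4
# BDFs of the devices to pass through (placeholders)
pci     = [ "01:00.0", "01:00.1" ]
disk    = [ "phy:/dev/zvol/tank/win10,xvda,w" ]
```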

I’ve seen various blogs and reddit threads about that. Unless it is done properly with VT-d it is destined to be a disaster, but it certainly is not impossible. Someone made it work and served block devices to an ESXi cluster.

Thought so. However, I am basically treating my PC more like a VM host. I like the idea of making it a two-headed setup with both Arch and Windows while having something more robust on the bare metal. GPU passthrough is a must, however, and so is ZFS.

FreeBSD and Qemu is probably your best bet, then:

Note that if you go with Xen, you need a Xenserver dom0 kernel:

Or do some fiddling to get it working native:

at which point ESXi may make more sense: just pass through the FS to a FreeBSD instance and set up shares for the Windows/Linux VMs

VGA passthru is coming to Bhyve soon, which will be the absolute best solution for your use case:

Let us know how you get on with QEMU in FreeBSD. If you run into a lot of trouble, it may just make more sense to bite the bullet and custom-compile your Linux kernels for the hypervisor. You lose boot environments, jails, and other security goodies, but at least it works.

Illumos’ tenant system isn’t very useful raw, so if you do want to use it it makes more sense to get one of the commercial products based on it.

I don’t recommend FreeBSD with kqemu. It is an old feature that is not very well supported.

There is nothing wrong with running FreeBSD or an illumos distro in a VM and passing through a SATA controller. Either OS will be fine if you are just doing a VM dedicated to storage. I have used both at home.

If you decide to look into illumos, I suggest either OmniOS or OpenIndiana for your needs.

Currently on FreeBSD, Xen does not support PCI passthrough, kqemu does not use hardware virtualization and has not been supported by qemu for several years, and bhyve doesn’t support GPU passthrough. There is no KVM on FreeBSD.

On illumos, their KVM implementation doesn’t support PCI passthrough.

If you want to do GPU passthrough, right now you’re stuck with Linux.

The biggest problem with running your storage server in a VM is that you have to wait for the VM to boot for the storage to become available. There is a timing/dependency ordering issue that can be difficult to work around if any other VMs or services rely on the storage provided by the ZFS VM.
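One common workaround for that ordering problem is to have the host poll for the storage VM’s export before starting anything that depends on it. A minimal sketch; the function name, paths, and timeout are my own invention, not anything from this thread:

```shell
# Block until a path (e.g. the ZFS VM's NFS mount) appears, or give up
# after a timeout. Returns 0 on success, 1 on timeout.
wait_for_path() {
  path=$1
  timeout=${2:-120}
  waited=0
  until [ -e "$path" ]; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $path" >&2
      return 1
    fi
  done
  return 0
}

# e.g. in the host's startup script (hypothetical names):
# wait_for_path /mnt/tank && start_other_vms
```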

I was under the impression that Ubuntu has ZFS support out of the box these days. Is that not an option?

Listen to freqlabs. I’d have said the same, but I’m always game to see how inadvisable experiments go.

Yeah, I think this was the reason I didn’t try FreeBSD right away when I made the switch. Bhyve isn’t ready yet, and Xen also seems to have quite a few shortcomings, as their wiki states:

Missing MSI-X PV interrupt support (FreeBSD).
PCI devices with virtual functions will not work correctly (FreeBSD).
Missing migrate/save/restore support for guests (Xen).
Missing PCI Passthrough support (Xen/FreeBSD).
UEFI boot currently unavailable, legacy boot only

Seems it leads back to my original plan to virtualize the OS I am running ZFS on. I will give it a go.

As for the hypervisor… I am familiar with KVM already, but Xen looks like a good opportunity to learn something valuable for business life. Would I benefit from using Xen over KVM in terms of performance/security?
ESXi might be a choice too, but I remember reading a whitepaper stating that it’s actually behind the other hypervisors when it comes to PCI passthrough. So I would focus on either Xen or KVM.

May I ask what your storage setup looks like? Are you making use of a SLOG or L2ARC? How much RAM have you assigned to it?
Only FreeBSD has TRIM support in ZFS right now, right (the website might be out of date though)?

And thank you @tkoham for providing all this information!


That will be a problem indeed. I was thinking about this yesterday.
The problem with my storage is that I have an NVMe SSD in my setup, which will have to host the VMM and Dom0 unless I run them from a USB drive. What I have: 1x NVMe SSD, 2x SATA3 HDDs. The latter will be mirrored.
I would be curious how you solved that.

SLOG is not generally useful. L2ARC is significantly less beneficial than RAM. In my configuration, they provide no appreciable benefit.
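(If anyone wants to experiment with them anyway, both are just vdevs added to an existing pool; the pool and device names below are placeholders:)

```shell
# Placeholders -- substitute your own pool and device names.
# Mirrored SLOG: only accelerates synchronous writes (NFS, databases)
zpool add tank log mirror /dev/ada2 /dev/ada3
# L2ARC: only helps once the RAM-backed ARC is already exhausted
zpool add tank cache /dev/ada4
```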

I do not run my storage in a VM anymore, since I have built a home server machine for running FreeBSD and ZFS on the bare metal. The box doubles as storage and a virtualization host. I do not require GPU passthrough, so bhyve works perfectly well in the few situations where I want to virtualize something (bhyve does have UEFI boot these days, by the way). Mostly I am using jails though.
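For completeness, booting a UEFI guest under bhyve looks roughly like this; the zvol path, tap device, and VM name are placeholders, and the firmware file comes from the bhyve UEFI firmware package:

```shell
# Sketch only -- names and paths are placeholders
bhyve -c 2 -m 4G -A -H \
  -s 0,hostbridge \
  -s 3,virtio-blk,/dev/zvol/tank/vm0 \
  -s 10,virtio-net,tap0 \
  -s 31,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  vm0
```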

I chose FreeBSD because software availability is generally more current and flexible compared to any other OS.

When I did run a storage server in a VM, I passed through a SATA controller as you intend to do (eventually an HBA). I gave the VM probably 8 GB of RAM, and for just a few TB of storage that was enough. Of course, the more RAM you can give ZFS, the faster it will be. RAM is faster than any other storage medium, and ZFS knows how to use it well.
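(If you do cap the VM’s RAM like that, it can also help to cap the ARC so the rest of the guest keeps some memory. A FreeBSD example; the value is just illustrative:)

```
# /boot/loader.conf -- limit the ZFS ARC to 6 GiB (example value, in bytes)
vfs.zfs.arc_max="6442450944"
```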