Unraid - NVMe for VM disks or better passed through

Hi,

currently taking Unraid for a test drive and I like it so far.
But I have a few questions and I’m looking for some insight.

My setup so far is very simple: 2x 3TB HDDs (XFS) and 2x 1TB SATA SSDs for cache (Btrfs).
A single Samsung 970 NVMe is passed through to a Windows VM; performance is as expected.

So my idea is to format the NVMe and put several VM disks on this drive, so that I can use its speed for more than one VM.

But after watching the Level1Techs video about the “Enmotus MiDrive” I’m not so sure I would profit from the NVMe read/write speeds, since the VM disks are going to be big files compared to all the small bits Windows and the apps install directly.
Or does the filesystem/VM manager actually know that only parts of the file inside a VM disk are being accessed, so the VM still benefits from the underlying hardware?
And if so, what kind of filesystem is preferable on an NVMe (Samsung’s own F2FS, Btrfs or something else)?
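
For reference, what I have in mind would end up looking something like this in a VM’s XML, just a plain vdisk file sitting on the NVMe mount (the path is a placeholder, not my actual setup):

```xml
<!-- Hypothetical sketch: a raw vdisk file on the NVMe, attached as a
     virtio disk. The file path is a placeholder. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/mnt/disks/nvme/domains/win10/vdisk1.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```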

Any input is appreciated :slight_smile: thanks and take care

I’m pretty sure it will benefit from the NVMe tech. As far as filesystems go, I’m not sure.

All my VMs for VFIO are native installs rather than qcow2 or image files. I’ve tried images in the past and they’re crap in comparison. Not sure if it’s possible with Unraid, but if it is, definitely go with native installs if you can. It would be worth getting another drive just for this purpose. I’d bet that a native install on a SATA SSD > a qcow/VM image on an NVMe drive.
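
For what it’s worth, if Unraid lets you edit the VM’s XML, what I mean by “native” would look roughly like this: hand the guest the whole drive as a block device instead of an image file (the by-id path below is just a placeholder):

```xml
<!-- Rough sketch: whole SSD given to the guest as a block device rather
     than an image file. The by-id path is a placeholder. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/ata-YOUR_SSD_SERIAL_HERE'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```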

After some testing, I can say the performance with vdisks is really great: I had 4 VMs running off the NVMe and didn’t really notice any lag.
But that doesn’t hold for games. I don’t know exactly how Unraid handles the array, but my guess is it uses the remaining RAM as a cache, and as soon as the RAM was full, the game crashed (not Windows itself, though).
The other VMs didn’t seem to have that problem, but I was only running different benchmark tools on them, or browsing on one of them.
So for now I’ll keep my NVMe passed through to the gaming VM and maybe get a second one for some other VMs and Docker containers.

NVMe really suffers when being used for VM disks on several fronts.

You have to make sure you configure the guest so that it knows it’s on an NV drive, otherwise you won’t be using TRIM/discard, etc. QEMU does pass TRIM commands down to the device if it can. At this time only the SCSI virtual disk controller supports this, which means you have to configure LUNs, etc… and you increase latency due to the complexity of the SCSI stack.
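
A minimal sketch of that SCSI setup in the domain XML, assuming a raw vdisk file (the path is a placeholder); the point is the virtio-scsi controller plus discard='unmap' so the guest’s TRIM actually reaches the drive:

```xml
<!-- Sketch only: virtio-scsi controller plus a vdisk with discard='unmap'
     so guest TRIM/discard is passed down to the underlying NVMe. -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <source file='/mnt/disks/nvme/domains/win10/vdisk1.img'/>
  <target dev='sda' bus='scsi'/>
</disk>
```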

I run 2x 1TB NVMe drives alongside an array of 6 SAS spinners, 4 SATA spinners, and 3 SATA SSDs. Personally I use the 2 NVMe drives as passthrough devices using a hacked-up OVMF BIOS with RaidXpert2 support. Results are excellent.
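
For comparison, the passthrough route is just a hostdev entry handing the guest the NVMe controller itself, so the guest’s own driver handles TRIM and queueing (the PCI address below is a placeholder for whatever lspci reports for your drive):

```xml
<!-- Sketch: NVMe passed through as a PCI device. The address is a
     placeholder for your drive's actual bus/slot/function. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```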

Previously I tried StoreMI, which turned out to be a very bad choice: horrid stalls, lag, and it’s near impossible to recover data from if there is a fault. Full block device access can perform OK, but I find that under gaming workloads it can also suffer from stalls, causing microstutters. An mdadm RAID0 across full block devices performs even worse on NVMe.

IMO, for a Windows system the only viable solution for a flawless experience is full NVMe passthrough.

A bit off topic but interesting: NVMe drives are fast… and by that I don’t mean storage speed, I mean I/O. And by fast, I mean really fast, so much so that they are bottlenecked by the performance of hardware interrupts (see spdk.io), and polling can achieve higher throughput plus lower and more consistent latency. I have played with this and the numbers are outstanding, but you have to dedicate a full CPU core to the task (including its HT sibling). If I had a 32-core TR this is what I would be doing.
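
Not SPDK itself, but the closest thing in a stock libvirt/QEMU setup is a dedicated, pinned iothread with the disk attached to it; QEMU’s iothread also does adaptive polling (its poll-max-ns property), and pinning keeps that work off the vCPUs. A rough sketch with placeholder core numbers:

```xml
<!-- Rough sketch: one iothread pinned to a core and its HT sibling
     (2 and 14 are placeholders), with the vdisk attached to it. -->
<iothreads>1</iothreads>
<cputune>
  <iothreadpin iothread='1' cpuset='2,14'/>
</cputune>
<devices>
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
    <source file='/mnt/disks/nvme/domains/win10/vdisk1.img'/>
    <target dev='vdb' bus='virtio'/>
  </disk>
</devices>
```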

Hi, thanks for the information. It certainly looks interesting, but at the moment I think it would be overkill.

StoreMI is the software from Enmotus, right? Maybe if they work on the hardware side of things (firmware), they could improve the software as well. But since the system works for now with the passed-through NVMe and is stable, I could just install a second NVMe and put the other VMs on it, either as vdisks or native like @pantato suggested.

But I’m pretty eager to see how things progress. With the multi-core overkill in the consumer space and VFIO support increasing, it’s gonna be great to see what’s to come.