Help needed: Passing through an M.2 SATA SSD

Hi there folks,
until today I used a simple RAW image lying on a hard disk as storage for my Windows gaming VM. As you can imagine, this was quite slow; in some games I would outright describe it as torture.

Since it was Christmas, I treated myself to a 1 TB M.2 SATA SSD that I am going to put into my PC tomorrow. The SSD is going to be used exclusively by the Windows VM.

I was unable to find a good explanation of how to best set up the SSD in combination with the VM. Do I just put a RAW image on the SSD, or do I pass the SSD itself through, and if so, how (via virsh or via virt-manager)? Do I need virtio (whatever that is), and do I specify iothreads?

If someone could help me out with step-by-step instructions or simply a link to an up-to-date tutorial, I would be really grateful. All other suggestions in this regard are welcome.

P.S. A fresh install of Windows for the VM is no problem, I have no need to transfer anything from the old image.

Thanks in advance!

Is your raw image using VirtIO? If not, that's your problem. You can also use VirtIO to pass the entire raw block device to the guest. There's not much difference between using the block device or just a file.

Otherwise, you will need to pass your entire SATA controller to the guest. Depending on your motherboard that could be some or all of your ports.

As far as up-to-date tutorials go, nothing much has changed in a few years for VFIO. The Arch Linux wiki seems to be some of the best documentation I've run across.

No, currently on my HDD I am not using VirtIO.

I am not planning on passing the controller to the guest. Just as I typed this I was able to make sense of the whole setup.

So to put it into words for everyone:

  1. When creating a VM with virt-manager, in the Storage settings you should create a RAW image and set 'max capacity' equal to 'allocation'. The reason you should not set allocation to something smaller than max capacity is explained in these slides.

  2. As 'Bus type' choose 'VirtIO', since it is more performant than the SATA setting.

  3. If you want to use a disk or image with 'Bus type' set to 'VirtIO', you need to install the drivers you can find here. If you want to install Windows on such a drive, create an additional virtual CD-ROM drive before you start the installation and mount the ISO file from the site I just linked into that newly created CD-ROM drive. During installation the setup won't be able to handle the VirtIO drive until you install the driver from that ISO (a rough sketch of the resulting XML follows below).
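
Putting it together, the relevant disk section of the domain XML should end up looking roughly like this. It is only a sketch; the image path and the virtio-win ISO path are placeholders for wherever you keep those files:

  <!-- RAW, fully allocated image attached via the VirtIO bus -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source file='/var/lib/libvirt/images/win10.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>
  <!-- Additional CD-ROM carrying the virtio-win driver ISO for the installer -->
  <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/virtio-win.iso'/>
    <target dev='sda' bus='sata'/>
    <readonly/>
  </disk>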

Since I already have an NVMe M.2 SSD in my system for the host, I created a 100 GB RAW image on it and mounted it first with the SATA and then again with the VirtIO setting. Each time I opened the 'Advanced Settings' and set 'cache mode' to 'none' and 'IO mode' to 'native'. The results are the following:

SATA mode:
SSD_SATA

VirtIO mode:
SSD_VIRTIO

Currently there should be a loss in performance since the computer needs to handle two layers of filesystems: first the NTFS filesystem in Windows and then the ext4 filesystem on the host when the changes are written to the RAW image. I might run an additional benchmark later, when my new SSD arrives, comparing mounting the partition directly against using a RAW image file. However, the speed already seems good enough for my purposes, so I might just use a RAW file in the end for simplicity.

What I don't understand is when I need to set an option like

iothreadpin iothread='1' cpuset='0,6'

in virsh. Do I need this for VirtIO? Only when I use the 'IO mode' setting 'threads', or also with 'native', or in neither case?

DISCLAIMER: You can ignore my post if you want; I am leaving it in case it is useful rather than deleting it. I just noticed you have an M.2 SATA drive, and I have never used one, so I can't comment on passing through the entire controller. I did believe that all M.2 devices worked like mine does, but I could be wrong, and the extra controller board on the M.2 NVMe is a wildcard I wasn't expecting.

Quick dump of how I am doing a fully dedicated NVMe drive passthrough:

lspci -nnk

3e:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 [144d:a804]
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 [144d:a801]
Kernel driver in use: nvme

vfio_nvme.sh - the VFIO kernel modules are loaded, but I chose to bind the device at runtime; you can just do it at boot time as per the guide, with kernel parameters in GRUB.

#!/bin/bash

PCI_NVM="0000:3e:00.0"
DEV_NVM="144d a804"

# Bind the NVMe drive to vfio-pci

echo "$DEV_NVM" > /sys/bus/pci/drivers/vfio-pci/new_id
echo "$PCI_NVM" > /sys/bus/pci/devices/$PCI_NVM/driver/unbind
echo "$PCI_NVM" > /sys/bus/pci/drivers/vfio-pci/bind
echo "$DEV_NVM" > /sys/bus/pci/drivers/vfio-pci/remove_id

unvfio_nvme.sh - undo it when done. It doesn't really matter functionally in our dedicated-drive scenarios, but I am thorough to a fault.

#!/bin/bash

PCI_NVM="0000:3e:00.0"
DEV_NVM="144d a804"

# Unbind the NVMe drive from vfio-pci by removing it and rescanning the PCI bus

echo 1 > /sys/bus/pci/devices/"$PCI_NVM"/remove
echo 1 > /sys/bus/pci/rescan

The QEMU command-line argument is simple at this point:

-device vfio-pci,host=3e:00.0

The above simplified my VirtIO block device handling, brought performance up to bare-metal levels, let the native Samsung utilities work on the drive, and fixed a problem where TRIM wasn't working on the virtio drive (which may or may not have been resolved by the rotation_rate=1)... and more.
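
If you drive this through libvirt instead of a plain QEMU command line, the equivalent should be a PCI hostdev entry along these lines (untested sketch, using the 3e:00.0 address from the lspci output above). With managed='yes' libvirt takes care of the vfio-pci binding itself, so the two scripts above become optional:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <!-- PCI address taken from lspci: 3e:00.0 -->
      <address domain='0x0000' bus='0x3e' slot='0x00' function='0x0'/>
    </source>
  </hostdev>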

A no-guarantee dump below from my archive of one of the less pretty and maybe not best ways to do it. I have a dozen archive files and I can't figure out generationally which is which. I sense that fewer than 4 args per drive were needed and that going through the virtio-scsi-pci drivers wasn't needed? Maybe I used a non-alias for the name. In case it helps, it shows all the nice arguments to research or gives a starting point:

-object iothread,id=iothread0
-object iothread,id=iothread1
-object iothread,id=iothread2

-device virtio-scsi-pci,id=scsihw0,iothread=iothread0
-device virtio-scsi-pci,id=scsihw1,iothread=iothread1
-device virtio-scsi-pci,id=scsihw2,iothread=iothread2

-drive if=none,id=drive0,file=/dev/sdd3,format=raw,aio=native,discard=off,detect-zeroes=off,cache.writeback=on,cache.direct=on,cache.no-flush=off,index=0
-device scsi-hd,drive=drive0,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,id=scsi0,bootindex=0

-drive if=none,id=drive1,file=/dev/disk/by-partuuid/b3414ed6-fbc0-4b49-82be-a0302c8dccd8,format=raw,aio=native,discard=on,detect-zeroes=unmap,cache.writeback=on,cache.direct=on,cache.no-flush=off,index=1
-device scsi-hd,drive=drive1,bus=scsihw1.0,channel=0,scsi-id=0,lun=0,id=scsi1,bootindex=1,rotation_rate=1

-drive if=none,id=drive2,file=/dev/mapper/lvm_raid0_hdd-winten_gamedrive,format=raw,aio=native,discard=off,detect-zeroes=off,cache.writeback=on,cache.direct=on,cache.no-flush=off,index=2
-device scsi-hd,drive=drive2,bus=scsihw2.0,channel=0,scsi-id=0,lun=0,id=scsi2,bootindex=2

The key points are: one iothread per drive, mind your caching configuration for performance, and detect-zeroes/discard/rotation_rate/others affect TRIM visibility? This one could be all out of date. Dumping it in case.

Let me know if I can assist. Direct passthrough is likely best for you.


Thanks for your late/early reply. Indeed, I needed to buy a SATA M.2 SSD in 2019 because of the limitations of the AM4 platform. If I put a PCIe SSD in the second M.2 slot, the bottommost PCIe slot gets deactivated, but that is where my hardware RAID controller resides. With a SATA M.2, however, I only lose a SATA port, which I don't need because of the RAID controller.

I will benchmark the difference between passing through a RAW file lying on an ext4 host partition and passing through a partition of the SSD as described in the Arch wiki, once my SSD arrives in the post.

Whatever it ends up being, it will be a VirtIO device because of the performance difference I already benchmarked above. Would this be everything I need to add to the config?:

 <vcpu placement='static'>12</vcpu>
 <iothreads>1</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='12'/>
    <vcpupin vcpu='6' cpuset='5'/>
    <vcpupin vcpu='7' cpuset='13'/>
    <vcpupin vcpu='8' cpuset='6'/>
    <vcpupin vcpu='9' cpuset='14'/>
    <vcpupin vcpu='10' cpuset='7'/>
    <vcpupin vcpu='11' cpuset='15'/>
    <emulatorpin cpuset='1,9'/>
    <iothreadpin iothread='1' cpuset='1,9'/>
  </cputune>
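
And I assume the disk itself then has to reference the iothread in its <driver> element, otherwise the iothread would sit unused? Something like this, with a placeholder device path for the new SSD:

    <disk type='block' device='disk'>
      <!-- placeholder path: the stable /dev/disk/by-id/ name of the SSD -->
      <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL'/>
      <target dev='vda' bus='virtio'/>
      <driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>
    </disk>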

My SSD arrived and I ran some benchmarks:

RAW image on EXT4 filesystem with SATA interface:
2_IMAGE_ON_UNENCRYPTED_EXT4_SATA

RAW image on EXT4 filesystem with VirtIO interface:
2_IMAGE_ON_UNENCRYPTED_EXT4_VIRTIO

RAW image on encrypted EXT4 filesystem with VirtIO interface:
2_IMAGE_ON_ENCRYPTED_EXT4_VIRTIO

PASSTHROUGH of the whole SSD with VirtIO interface:
2_PASSTHROUGH_VIRTIO

My takeaway from this is that as long as the host system has enough spare resources to handle the IO, VirtIO is fast enough. For peak performance, pass the whole SSD through by id (its /dev/disk/by-id path); only ever use the standard SATA interface for testing or if you are unable to install the VirtIO driver.