Virtualize Physical Machine with Virt-manager

I am currently multi-booted with Manjaro and Windows, and I would like to be able to virtualize my physical Windows drives. I am attempting this with Virt-Manager by simply adding the block devices from /dev as SATA devices. However, when booting I am given this output:

Error starting domain: internal error: qemu unexpectedly closed the monitor: 2021-09-11T10:45:03.508556Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/nvme1n1","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/nvme1n1' to be a regular file

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/", line 65, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/", line 101, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/", line 57, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/", line 1329, in startup
  File "/usr/lib/python3.9/site-packages/", line 1353, in create
    raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2021-09-11T10:45:03.508556Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/nvme1n1","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/nvme1n1' to be a regular file

This is the XML file for the virtual machine:

<domain type="kvm">
  <title>Original Windows Host</title>
  <description>Personal Windows host mounted from separate drives, an m.2, ssd, and hdd. With GPU and other peripheral passthrough.</description>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="">
      <libosinfo:os id=""/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">32768000</memory>
  <currentMemory unit="KiB">32768000</currentMemory>
  <vcpu placement="static">6</vcpu>
  <os>
    <type arch="x86_64" machine="pc-q35-6.0">hvm</type>
    <loader readonly="yes" type="pflash">/usr/share/edk2-ovmf/x64/OVMF_CODE.fd</loader>
    <boot dev="hd"/>
  </os>
  <features>
    <hyperv>
      <relaxed state="on"/>
      <vapic state="on"/>
      <spinlocks state="on" retries="8191"/>
      <vendor_id state="on" value="Your Mom"/>
    </hyperv>
    <kvm>
      <hidden state="on"/>
    </kvm>
    <vmport state="off"/>
  </features>
  <cpu mode="host-model" check="partial">
    <topology sockets="1" dies="1" cores="3" threads="2"/>
  </cpu>
  <clock offset="localtime">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
    <timer name="hypervclock" present="yes"/>
  </clock>
  <pm>
    <suspend-to-mem enabled="no"/>
    <suspend-to-disk enabled="no"/>
  </pm>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="raw"/>
      <source file="/dev/nvme1n1"/>
      <target dev="sda" bus="sata"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <controller type="usb" index="0" model="qemu-xhci" ports="15">
      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
    </controller>
    <controller type="sata" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
    </controller>
    <controller type="pci" index="0" model="pcie-root"/>
    <controller type="pci" index="1" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="1" port="0x10"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="2" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="2" port="0x11"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
    </controller>
    <controller type="pci" index="3" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="3" port="0x12"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
    </controller>
    <controller type="pci" index="4" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="4" port="0x13"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
    </controller>
    <controller type="pci" index="5" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="5" port="0x8"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="6" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="6" port="0x9"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x1"/>
    </controller>
    <controller type="pci" index="7" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="7" port="0xa"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2"/>
    </controller>
    <controller type="pci" index="8" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="8" port="0xb"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x3"/>
    </controller>
    <controller type="pci" index="9" model="pcie-to-pci-bridge">
      <model name="pcie-pci-bridge"/>
      <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
    </controller>
    <controller type="pci" index="10" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="10" port="0xc"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x4"/>
    </controller>
    <controller type="pci" index="11" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="11" port="0xd"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x5"/>
    </controller>
    <interface type="network">
      <mac address="52:54:00:39:23:ca"/>
      <source network="default"/>
      <model type="e1000e"/>
      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
    </interface>
    <input type="mouse" bus="ps2"/>
    <input type="keyboard" bus="ps2"/>
    <audio id="1" type="none"/>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x01" slot="0x00" function="0x1"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x01" slot="0x00" function="0x2"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x01" slot="0x00" function="0x3"/>
      </source>
      <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
    </hostdev>
    <hostdev mode="subsystem" type="usb" managed="yes">
      <source>
        <vendor id="0x194f"/>
        <product id="0x0303"/>
      </source>
      <address type="usb" bus="0" port="5"/>
    </hostdev>
    <hostdev mode="subsystem" type="usb" managed="yes">
      <source>
        <vendor id="0x0b05"/>
        <product id="0x1825"/>
      </source>
      <address type="usb" bus="0" port="6"/>
    </hostdev>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="1"/>
    </redirdev>
    <redirdev bus="usb" type="spicevmc">
      <address type="usb" bus="0" port="2"/>
    </redirdev>
    <memballoon model="virtio">
      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
    </memballoon>
  </devices>
</domain>

If I am remembering correctly this worked previously. Any help would be appreciated.
Thank you.

Hi! I recommend adding the XML configuration for the virtual machine to assist with the troubleshooting. It seems like you may have made an error when adding the disk to the VM; the configuration should give enough info to understand what went wrong.

I have added it to the original post. Thank you for your help.

I think it might be due to this:

    <disk type="file" device="disk">

I’ve passed raw disks to a VM like this:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/ata-WDC_WD120EDAZ-11F3RA0_5PJRHDVB" index="6"/>
  <target dev="vdd" bus="virtio"/>
  <alias name="virtio-disk3"/>
  <address type="pci" domain="0x0000" bus="0x0b" slot="0x00" function="0x0"/>
</disk>

Refer to the disk as a block device and that should hopefully fix it.
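For reference, the error message itself can be understood outside of libvirt: QEMU's 'file' backend only accepts regular files (disk images), while a whole disk under /dev is a block device. A minimal sketch for checking what a given path actually is (the /dev/nvme1n1 path is just the example from this thread):

```shell
# Distinguish a regular file (disk image) from a block device (raw disk).
# QEMU's 'file' driver accepts only the former; block devices need
# <disk type="block"> with <source dev="..."/> in the libvirt XML.
disk_kind() {
  if [ -b "$1" ]; then
    echo "block device"
  elif [ -f "$1" ]; then
    echo "regular file"
  else
    echo "missing or other"
  fi
}

disk_kind /dev/nvme1n1   # path from the thread; a raw disk reports "block device"
```

Running this before editing the XML saves a failed domain start: if it prints "block device", the `file` driver was never going to accept it.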

I used your disk config and the VM POSTed, but it did not boot: it gave me a blue screen mentioning an inaccessible boot device, then proceeded to restart and repeat. Upon booting Windows on the bare machine, it went into automatic repair.
I am unable to provide a screenshot as the VM is attached to a different output, and I have not set up Looking Glass.
Is there something else I am doing wrong?

Sounds like the issue I got whenever I cloned a Windows 10 installation to another SSD; for some reason it didn’t like it. Perhaps Windows 10 is confused in the VM because it expected to boot off of an NVMe SSD, but found a virtualized SATA SSD instead?

Which device type did you use in the VM? SATA, virtio or something else?

If all else fails, you can try to instead pass the NVMe drive to the VM by passing it through as a PCIe device. At least virt-manager provides a GUI to do that. I have never done this, but it might help you move in the correct direction. It’s similar to how you’d pass a GPU to a VM, but hopefully you don’t have to do any of the steps requiring you to isolate the NVMe SSD.
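Before attempting PCIe passthrough, it is worth confirming from sysfs that the NVMe controller is in its own IOMMU group. A sketch of the usual listing (the base-path parameter only exists to make the logic easy to test; on a real system the default /sys/kernel/iommu_groups is what you want):

```shell
# Print each PCI device together with the IOMMU group it belongs to.
# A device can only be passed through cleanly if everything else in its
# group is also passed through (or is a PCIe root port/bridge).
list_iommu_groups() {
  base=${1:-/sys/kernel/iommu_groups}
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue          # no groups -> glob stays literal
    grp=${dev#"$base"/}
    grp=${grp%%/*}
    echo "IOMMU group $grp: ${dev##*/}"
  done
}

list_iommu_groups   # empty output if the IOMMU is disabled in firmware/kernel
```

If the NVMe controller shares a group with anything else, the passthrough steps get considerably more involved.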

I passed the NVMe SSD through as a virtio type, essentially copying and pasting your exact config, since nothing else worked. I knew that it wasn’t a virtio device, but using SATA wouldn’t work either.
I think passing the NVMe SSD through as a PCIe device is the best option, and thankfully both of my NVMe controllers have their own IOMMU groups.

I am just unsure how to identify the storage devices attached to their respective controllers. I might just trial-and-error it by throwing one of them in; if things break, then I used the wrong one?
Any suggestions would be nice.

These are those controllers

IOMMU Group 25: 75:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU Group 13: 02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]

If you know which physical slot it is in, then try running find /sys/bus/pci_express/device/ -type l. It should spit out the iommu_group:physical_slot pairs. I am not near a GNU/Linux machine to test this out.

The output should look something like this:

where XX is the IOMMU group,
Y is the member ID of the group, and
ZZ is the physical slot in the PC.
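Another angle that might help tie a controller to its disk: the resolved sysfs path of a block device contains the PCI address of the controller it hangs off. A sketch that walks up such a path from the leaf (the nvme0n1 name in the usage comment is an example, not from this machine):

```shell
# Given a resolved sysfs device path, print the nearest enclosing PCI
# address (domain:bus:device.function), walking up from the leaf so we
# report the device's own controller rather than a root port above it.
pci_addr_of() {
  p=$1
  while [ -n "$p" ] && [ "$p" != "/" ]; do
    case ${p##*/} in
      *:[0-9a-f][0-9a-f]:[0-9a-f][0-9a-f].[0-9a-f])
        echo "${p##*/}"
        return 0
        ;;
    esac
    p=${p%/*}
  done
  return 1
}

# Real-world use (needs an actual NVMe disk present):
#   pci_addr_of "$(readlink -f /sys/class/block/nvme0n1)"
```

The printed address is what virt-manager shows when you pick a PCI host device, so it can be matched directly against the lspci output above.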

I believe this is the extent of my current knowledge of my own computer and Linux. I tried to follow your instructions as much as possible and didn’t really recognize anything.

How do I identify the physical PCIe slot on my PC and associate it with this output?

This was the output of the command you gave.


However, I thought to try looking in /sys/bus/pci/devices/ and found IOMMU and PCI identifiers that I could make out, but it did not have the physical slot identifier that you mentioned.

This is the output for that:


Is there something I am not seeing? Any help is much appreciated.
Thank you.

I believe lstopo can help here. I only have one NVMe SSD, but it links the PCIe device up with the block device just fine.

Here’s my example:


This should indicate the slot, but Holy Subjunctives, Batman, I have never seen it go above ten on a normal PC. lstopo may give more information, but since you have identical devices, you may need to actually open up your case and look at the serial number on the drive that you want to pass through in order to properly identify the right device.

To get the serial and its physical location you should be able to use lshw -C disk or lshw -C volume.
If your distro does not have lshw, then install it. Otherwise you may have to do it the old-fashioned way of pulling a drive until you find the right one, then checking your IOMMU groups again to see which one disappeared.

If these are NVMe drives, then with nvme-cli you should be able to pull all the info for the device. I don’t have NVMe drives as I am still on poorDozer.
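On the NVMe side, the serial number is also exposed in sysfs, so it can be matched against the sticker on the drive without any extra packages. A sketch (the base-directory parameter is only there to make the logic testable; on a real system the default /sys/class/nvme is the right place):

```shell
# List each NVMe controller together with the serial number it reports.
# Compare against the label on the physical drive to pick the right one.
nvme_serials() {
  base=${1:-/sys/class/nvme}
  for ctrl in "$base"/nvme*; do
    [ -f "$ctrl/serial" ] || continue
    echo "${ctrl##*/}: $(cat "$ctrl/serial")"
  done
}

nvme_serials   # empty output on a machine without NVMe drives
```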

I don’t know why I didn’t think of this in the first place. I used the lshw GUI along with the commands and tools you guys provided, and I was able to find the product name along with the bus info, which really is all I need: I know which boot device I’m after, and Virt-Manager identifies each controller or PCI device by its bus info. With the lshw GUI I got this:
And conveniently, since all of the other drives I have attached to Windows are on one SATA controller in its own IOMMU group, I can just use the same method as for the NVMe drive if I need to. Correct?
These are those drives:

IOMMU Group 5: 00:17.0 SATA controller [0106]: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [8086:a282]

I am going to try this out and see what happens: the NVMe first, and if all goes well I will try to add the rest of the drives as block devices. If any errors occur, I am going to try to pass through their SATA controller.
I will return upon any developments, and will keep an eye on this topic if you guys have anything to say.
Thank you for your help.

Yeah, that should work. If you pass the controller, then anything that belongs under its group will be passed through with it. You would still have to specify the boot device in your configuration file, though.
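For the boot-device part, libvirt lets you mark a passed-through device bootable with a <boot order="N"/> element on the <hostdev> itself (note that per-device boot order cannot be combined with a <boot dev=".."/> line under <os>, so that one has to be removed). A sketch of the fragment, with an example PCI address rather than one from this machine:

```shell
# Emit an example hostdev fragment with a per-device boot order; the idea
# (not these example addresses) goes into the domain via `virsh edit`.
fragment=$(cat <<'EOF'
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
  </source>
  <boot order="1"/>
</hostdev>
EOF
)
printf '%s\n' "$fragment"
```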

I have done that, and am currently booting off of the NVMe controller.
Update: It works booting off of the NVMe device.
I also understand that anything under the controller will be passed through; it appears, while looking through lshw and lstopo, that these devices are under their own controller, separate from other devices. Does this sound right?
I understand what you’re trying to tell me; I think the rest of the drives’ controller is a controller of its own? Looking for the same controller in other places, I can’t seem to find it. I’m guessing this just means I can use it without having to worry about losing any major functionality?
Update 2: I included the other SATA controller and everything appears to be working correctly with no issues. Thanks to the both of you for your help.