I am currently multi-booting Manjaro and Windows, and I would like to virtualize my physical Windows drives. I am attempting this with virt-manager by simply adding the block devices from /dev as SATA devices. However, when booting I get this output:
Error starting domain: internal error: qemu unexpectedly closed the monitor: 2021-09-11T10:45:03.508556Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/nvme1n1","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/nvme1n1' to be a regular file
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 101, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 1329, in startup
self._backend.create()
File "/usr/lib/python3.9/site-packages/libvirt.py", line 1353, in create
raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor: 2021-09-11T10:45:03.508556Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/nvme1n1","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: 'file' driver requires '/dev/nvme1n1' to be a regular file
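For what it's worth, that QEMU error usually means the disk was added with a file-backed source: the 'file' driver only accepts regular files, while /dev/nvme1n1 is a block device. If my reading is right, the disk needs to be declared as type 'block' in the domain XML. A minimal sketch (the device path is taken from the error above; driver options like cache='none' are one common choice, not something from this thread):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- whole-disk block device, not a regular file -->
  <source dev='/dev/nvme1n1'/>
  <target dev='sda' bus='sata'/>
</disk>
```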
Hi! I recommend adding the VM's XML configuration to help with troubleshooting. It seems like you may have made an error when adding the disk to the VM; the configuration should give enough info to understand what went wrong.
I used your disk config and the VM POSTed, but it did not boot: it gave me a blue screen mentioning an inaccessible boot device, then restarted and repeated. Upon booting Windows on bare metal, it went into automatic repair.
I am unable to give a screenshot as the VM is attached to a different output, and I have not set up Looking Glass.
Is there something else I am doing wrong?
Sounds like the issue I got whenever I cloned a Windows 10 installation to another SSD; for some reason Windows didn't like it. Perhaps Windows 10 is confused in the VM because it expected to boot off an NVMe SSD, but found a virtualized SATA SSD instead?
Which device type did you use in the VM: SATA, VirtIO, or something else?
If all else fails, you can try passing the NVMe drive to the VM as a PCIe device instead; at least virt-manager provides a GUI for that. I have never done this myself, but it might move you in the right direction. It's similar to how you'd pass a GPU to a VM, but hopefully without the steps that require isolating the NVMe SSD.
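For reference, PCI passthrough of the NVMe controller ends up as a hostdev entry in the domain XML. A sketch, assuming the controller at 02:00.0 listed later in the thread:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- PCI address of the NVMe controller (bus 02, slot 00, function 0) -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With managed='yes', libvirt detaches the device from its host driver and binds it to vfio-pci at VM start, then gives it back afterwards.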
I passed the NVMe SSD through as a VirtIO device, essentially copying your exact config, since nothing else worked. I knew it wasn't a VirtIO device, but using SATA wouldn't work either.
I think passing the NVMe SSD through as a PCIe device is the best option, and thankfully both of my NVMe controllers are in their own IOMMU groups.
I am just unsure how to identify which storage devices are attached to which controller. Should I just trial-and-error it by passing one through, and if things break I used the wrong one?
Any suggestions would be nice.
These are those controllers
IOMMU Group 25: 75:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU Group 13: 02:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
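In case it's useful, one way to tie each /dev/nvme* namespace to its controller's PCI address is to resolve the block device's sysfs symlink, which runs through the controller's address. A sketch, not a polished tool; the helper name is mine, and the sysfs root is parameterized only so it can be exercised outside /sys:

```shell
# map_nvme: print "<namespace> -> <PCI address>" for each NVMe block
# device found under the given sysfs root (default /sys).
map_nvme() {
    root="${1:-/sys}"
    for ns in "$root"/block/nvme*; do
        [ -e "$ns" ] || continue
        # The resolved sysfs path passes through the controller's PCI
        # address, e.g. .../0000:02:00.0/nvme/nvme0/nvme0n1
        pci=$(readlink -f "$ns" \
              | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9a-f]' \
              | tail -n 1)
        echo "$(basename "$ns") -> ${pci:-unknown}"
    done
}

map_nvme   # on a live system, one line per namespace
```

The printed address should match one of the two controllers in the IOMMU group listing above, so no trial and error is needed.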
If you know which physical slot it is in, then try find /sys/bus/pci_express/devices/ -type l. It should spit out the iommu_group:physical_slot. I am not near a GNU/Linux machine to test this.
The output should look something like this:
/sys/bus/pci_express/devices/0000:00:XX:Y:pcieZZ
where XX is the IOMMU group, Y is the member ID within the group, and ZZ is the physical slot in the PC.
I believe this is the extent of my current knowledge of my own computer and Linux. I tried to follow your instructions as closely as possible but didn't really recognize anything.
How do I identify the physical PCIe slot on my PC and associate it with this output?
However, I did think to look in /sys/bus/pci/devices/, and found IOMMU and PCI identifiers that I could make out, but not the physical slot identifier you mentioned.
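In case it helps anyone following along, the IOMMU grouping itself can also be read straight out of /sys/kernel/iommu_groups, without worrying about physical slots. A sketch (helper name is mine; sysfs root parameterized only for testability):

```shell
# list_iommu: print "group <N>: <PCI address>" for every device,
# reading from the given sysfs root (default /sys).
list_iommu() {
    root="${1:-/sys}"
    # Each group is a directory /sys/kernel/iommu_groups/<N>/devices/
    # containing one symlink per PCI device in that group.
    for dev in "$root"/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        grp=$(basename "$(dirname "$(dirname "$dev")")")
        echo "group $grp: $(basename "$dev")"
    done
}
```

Piping the output through sort -n gives a listing in the same shape as the "IOMMU Group 13: 02:00.0 ..." lines quoted earlier in the thread.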
This should indicate the slot, but Holy Subjunctives Batman, I have never seen it go above ten on a normal PC. lstopo may give more information, but since you have two identical devices, you may need to actually open up your case and look at the serial number on the drive you want to pass through in order to identify the right one.
To get the serial number and its physical location you should be able to use lshw -C disk or lshw -C volume.
If your distro does not have lshw, then install it; otherwise you may have to do it the old-fashioned way: pull a drive, check your IOMMU groups again, and see which one disappeared.
If these are NVMe drives, then with nvme-cli you should be able to pull all the info for the device. I don't have NVMe drives as I am still on poorDozer.
I don’t know why I didn’t think of this in the first place. I used the lshw GUI along with the commands and tools you guys provided, and I was able to find the product name along with the bus info, which is really all I need: I know which boot device I’m after, and virt-manager identifies each controller or PCI device by its bus info. With the lshw GUI I got this:
And conveniently, since all of the other drives I have attached to Windows are on one SATA controller in its own IOMMU group, I can use the same method for them as for the NVMe drive if I need to. Correct?
These are those drives:
IOMMU Group 5: 00:17.0 SATA controller [0106]: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [8086:a282]
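If it helps to double-check which disks actually hang off that SATA controller before passing it through, the ata/host/target hierarchy under the controller's sysfs node can be globbed. Same caveats as before: the helper name is mine and the sysfs root is parameterized only for testing:

```shell
# sata_disks: list the block devices attached to a given PCI SATA
# controller, e.g.: sata_disks /sys 0000:00:17.0
sata_disks() {
    root="${1:-/sys}"
    addr="$2"
    # Each attached disk appears as .../ataN/hostN/targetN:0:0/N:0:0:0/block/sdX
    for blk in "$root"/bus/pci/devices/"$addr"/ata*/host*/target*/*/block/*; do
        [ -e "$blk" ] && basename "$blk"
    done
}
```

If every Windows data drive shows up under 00:17.0 and nothing else does, passing the whole controller through should be safe.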
I am going to try this out and see what happens. I'll try the NVMe drive first; if all goes well I will add the rest of the drives as block devices, and if any errors occur I will try passing through their SATA controller instead.
I will return with any developments, and will keep an eye on this topic in case you guys have anything to add.
Thank you for your help.
Yeah, that should work. If you pass the controller, then anything under its group will be passed through with it. You would still have to specify the boot device in your configuration file, though.
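On specifying the boot device: with libvirt, as I understand it, you can put a per-device boot element on the passed-through hostdev. A sketch, with the address assumed from the 75:00.0 NVMe controller listed earlier in the thread:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x75' slot='0x00' function='0x0'/>
  </source>
  <!-- try this passed-through NVMe controller first at boot -->
  <boot order='1'/>
</hostdev>
```

Note that per-device boot elements conflict with a global <boot dev='...'/> in the <os> section, so that would need to be removed.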
I have done that, and am currently booting off of the NVMe controller.
Update: Booting off of the NVMe device works.
I also understand that anything under a passed-through controller gets passed with it; looking through lshw and lstopo, these devices appear to be under their own controller, separate from other devices. Does that sound right?
I understand what you’re trying to tell me; I think the rest of the drives’ controller is a controller of its own? Looking for the same controller elsewhere, I can’t seem to find it. I’m guessing this just means I can pass it through without having to worry about losing any major functionality?
Update 2: I included the other SATA controller, and everything appears to be working correctly with no issues. Thanks to you both for your help.