Hard Drive passthrough to VM

Hello, I have set up a Windows VM on my machine for gaming but I have one main issue: my 2TB drive “passes” but does not appear in Windows.

It's an HGST mechanical drive being passed to a Windows 10 VM created with virt-manager/libvirt on qemu; other hard drives (a 256GB SSD and a 500GB mechanical) have passed through just fine.

I don’t even know where to start to try to troubleshoot this. Drive works fine on the host and previously under Windows 10 when it was still my main OS.

Somewhat related: looking-glass fails to build with

spice/spice.c:133:3: error: ‘strncpy’ specified bound 32 equals destination size [-Werror=stringop-truncation]
   strncpy(spice.password, password, sizeof(spice.password));

You’re going to have to give us some more info.

What virtual bus are you using for the disk?

Have you installed the virtio drivers?

Have you checked to see if ovmf sees the disk?
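For reference, whole-disk passthrough in the libvirt domain XML usually looks something like this (the device path and target letter are examples — use your drive's actual /dev/disk/by-id/ link, which survives reboots):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- pass the whole drive, not a single partition -->
  <source dev='/dev/disk/by-id/ata-HGST_EXAMPLE_SERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```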

Tried SATA, SCSI, and VirtIO.

After switching to SCSI or VirtIO, an unknown PCI device shows up: VEN_1af&DEV_1009, which is a virtio filesystem device according to the internet, though the driver CD from Fedora's site doesn't have a driver for it. My 2TB drive now appears in Disk Management, but split into 6 different partitions (none of them mountable), which is definitely not right; it's only one partition. Where is this driver?
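To double-check the host side (since the drive works fine there), something like this shows the partition table as the host sees it — /dev/sdX is a placeholder for the HGST drive's node:

```shell
# Show every block device with its partition-table type and filesystems;
# the 2TB drive should show one partition here, not six.
lsblk -o NAME,SIZE,TYPE,PTTYPE,FSTYPE
# For the full partition table of the drive itself (needs root, real device node):
#   sudo fdisk -l /dev/sdX
```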

I'm using SeaBIOS because I can't get virt-manager to see that OVMF is installed on my system. I had this same problem when I used OVMF last year, until a video card failure forced me to revert to Windows until now.

You should not use SeaBIOS.

Build OVMF from source (use the ovmf-git PKGBUILD for build instructions if you need them), copy the resulting *.fd files to /opt/ovmf/, then edit your qemu.conf to point to them and restart libvirtd.

The relevant part of mine looks like this:

# Location of master nvram file
#
# When a domain is configured to use UEFI instead of standard
# BIOS it may use a separate storage for UEFI variables. If
# that's the case libvirt creates the variable store per domain
# using this master file as image. Each UEFI firmware can,
# however, have different variables store. Therefore the nvram is
# a list of strings when a single item is in form of:
#   ${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
# Later, when libvirt creates per domain variable store, this list is
# searched for the master image. The UEFI firmware can be called
# differently for different guest architectures. For instance, it's OVMF
# for x86_64 and i686, but it's AAVMF for aarch64. The libvirt default
# follows this scheme.
nvram = [
   "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
   "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd",
   "/usr/share/OVMF/ovmf_code_x64.bin:/usr/share/OVMF/ovmf_vars_x64.bin",
   "/usr/share/OVMF/custom/OVMF.fd:/usr/share/OVMF/OVMF_VARS.fd"
]

If you can get OVMF to work, you might stop having issues with your disks.


I was using the ovmf-git PKGBUILD and had only the "/usr/share/OVMF/ovmf_code_x64.bin:/usr/share/OVMF/ovmf_vars_x64.bin" entry in my nvram list, as those are the only files the build places. virt-manager still can't see it despite my reloading the services.
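To rule out a typo, I checked that the files referenced in my nvram line actually exist — a quick sketch, with the paths assumed from the ovmf-git build above:

```shell
# Verify the firmware paths referenced in /etc/libvirt/qemu.conf exist;
# adjust the paths if your build installs somewhere else.
for f in /usr/share/OVMF/ovmf_code_x64.bin /usr/share/OVMF/ovmf_vars_x64.bin; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
# If both exist but virt-manager still can't see the firmware, restart libvirtd:
#   sudo systemctl restart libvirtd
```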

There's still the issue of the missing driver, too, which I think remains a problem.

Which Windows build are you on?

1803 seems to be having major problems with virtualization.

No, my image is an older release.