I would assume it's possible; you would just have to use the Linux VM as the server and use the Windows client like normal. It's possible to break things as it is VM to VM, but in theory it should work.
You mean the host application in the Windows guest?
Not sure how it would work with the ivshmem-server running in the Linux guest; is the socket shared then?
@wendell you would know better than me on this, but my 2 cents are below. Please correct me if wrong.
I think so, but I have not dived deep into the code to check. My best guess in this case: get it working native Linux to Windows, then copy the setup over to the Linux VM, restart the Windows machine and test. You will have to kill the processes on your Linux machine before you restart the Windows box, and make sure to start them on the VM that you want to use.
But if I have read this thread correctly, then given my understanding of how this works and the creativity of the solution, it should be just fine. Just beware of possible keyboard and mouse issues if you don't have dedicated hardware for each VM and a dedicated USB controller.
I have a KVM switch and dedicated USB controller.
From my testing the ivshmem-server tries to create the socket itself and will error if it already exists
Minimally, I think you just pass it through to the "other" VM. The client may need to be modified slightly? I would have to check how the client connects to be sure.
I would love it if the QubesOS people incorporated this; it would kick ass. We'd have the same problem as the unraid folks have though, with gaming: things like native Linux gaming won't work because the anti-cheats won't run in a VM.
Depending on what you mean by "pass it through": if it's shared as a file, the ivshmem-server wouldn't work; if it's exposed as a PCI device, like how the Windows guest sees it, the client would need to be rewritten.
You'd create an ivshmem device in the Linux guest in QEMU the same way as for the Windows guest; then it should basically be fine.
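A sketch of what that could look like on the Linux guest's QEMU command line, using the plain (non-server) ivshmem device backed by a shared memory file. The path and size here are assumptions for illustration, not from the guide:

```shell
# Hypothetical example: back the ivshmem device with a shared file
# and expose it to the guest as a PCI device (ivshmem-plain).
-object memory-backend-file,id=ivshmem0,share=on,mem-path=/dev/shm/looking-glass,size=32M \
-device ivshmem-plain,memdev=ivshmem0 \
```

With share=on both VMs pointed at the same mem-path would see the same memory, which is the idea being discussed here; whether the client then works unmodified is a separate question, as noted below.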
Everyone, there is an update incoming that completely changes things, please be aware:
Alpha 10 Release:
This release changes how the guest VM needs to be configured, please refer to the quickstart guide for how to configure libvirt as ivshmem-server is no longer used
That’s what I am using also, ensure your BIOS is current, I have not had this issue.
The second declaration invalidates the first; simply iommu=1 would suffice here.
Because distros like Debian don't have a QEMU version recent enough to support this feature.
It can be done, but currently the code doesn't allow for it. The client would need to be able to interact with the Linux ivshmem virtual device rather than directly with the shared memory file.
The client at current has no way to interact with the linux ivshmem virtual device, it won’t work.
Updated to the latest version, a10, from a9. I changed the ivshmem arguments in my config and created the file with the right permissions. Now the VM refuses to boot and I receive the following error:
```
ERROR: PCI region size must be pow2 type=0xc, size=0x1200000
```

This is my QEMU config:
```shell
taskset -c 1-7 qemu-system-x86_64 \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1 \
  -net nic,model=virtio \
```
The error means exactly what it states: 18M (0x1200000 bytes) is not a power of two. Use 16M or 32M.
Argggh, yeah, I followed the instructions and it said:
For example, for a resolution of 1920x1080
1920 x 1080 x 4 x 2 = 16,588,800 bytes
16,588,800 / 1024 / 1024 = 15.82 MB + 2 = 17.82
You must round this value up to the nearest integer, which in the above example would be 18MB
Then put that in my VM
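The guide's calculation can be sketched as a few lines of shell arithmetic; the 1920x1080 resolution is just the example from the instructions above, and the +2 MiB overhead is taken straight from them:

```shell
#!/bin/sh
# Sketch of the size formula quoted from the guide:
# width * height * 4 bytes per pixel * 2 frames, converted to MiB,
# rounded up, plus 2 MiB of overhead.
W=1920
H=1080
BYTES=$(( W * H * 4 * 2 ))                             # raw framebuffer bytes
MIB=$(( (BYTES + 1024 * 1024 - 1) / (1024 * 1024) + 2 ))  # ceil to MiB, add 2
echo "${MIB}MB"
```

For 1920x1080 this prints 18MB, matching the guide's example; as the posts below note, QEMU additionally requires rounding that up to a power of two.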
Thanks! I did the same as Allyriadil and was already wondering where I'd put this value.
It's possible that ivshmem-server already rounded this up; the instructions may have to be modified to accommodate the new method.
For simplicity's sake, we should probably just allocate 64M for resolutions north of 4K and leave it at that.
A question I wondered about: does the size matter in this case as long as it's big enough, or can it be "too big"? Could we not just set some utopian value like 256 MB and never bother with it again?
I don't think it would; it's just a giant waste of address space, really. I don't know if mmap marks that space as part of the process' working set, and we don't want all the neckbeards whinging about too much RAM usage, as they seem to get very disgruntled and aggressive when something uses memory.
Edit: It doesn't appear that it does.
I have updated the guide. There is really no reason why the shared memory itself should be limited to powers of two; QEMU enforces it for memory maps, though, where the ivshmem-server didn't.
If you like blocking off access to 1/4 of a GB of RAM, sure, go ahead. You could make it as large as you want; it would just be a huge waste of RAM.
Not aggressive, just cautious. If you adopt the attitude of not caring about RAM usage/wastage you end up with programs like Windows and languages like Java.
When I calculate 2560*1440*4*2/1024/1024+2 I get 30.125, so I used 32M to have a power of two. But when using this resolution the host says it's not big enough (I don't know the exact message). Using 64M works, but shouldn't 32 be enough?
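For what it's worth, the guide's formula plus the new power-of-two rounding can be checked in a few lines of shell. This is just a sketch of the arithmetic, not an explanation of why the host rejects 32M; per the formula alone, 32M should indeed suffice for 2560x1440:

```shell
#!/bin/sh
# Apply the guide's size formula for 2560x1440, then round the result
# up to the next power of two as QEMU now requires.
W=2560
H=1440
MIB=$(( (W * H * 4 * 2 + 1024 * 1024 - 1) / (1024 * 1024) + 2 ))  # ceil to MiB, +2 overhead
POW2=1
while [ "$POW2" -lt "$MIB" ]; do
  POW2=$(( POW2 * 2 ))
done
echo "need ${MIB}MB -> use ${POW2}MB"
```

This prints "need 31MB -> use 32MB", so if 64M is required in practice, the new version may need more headroom than the old formula accounts for and the guide's numbers may need revisiting.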
Also, when I tried to build the latest version of the client, I needed to install libconfig-devel on Fedora, which is not mentioned in the guide. Maybe add that?