Exciting stuff! Can’t wait to see more of this.
OH MY LORD
i need this right now!
seriously though how did you do it? never seen such a seamless vm. i will pay any amount of money to get this as soon as i get my new ryzen setup that is
All in good time. Just whetting your appetite.
I have spent the last month on this project (starting with the NPT fix), from initial conception to what you see here. Obviously this can run full-screen, fully integrated with the host.
Can confirm it’s legit
This looks fantastic, and i3wm best wm.
I’ve just started playing with pci passthrough, and this work represents the dream.
Oh man this is my dream (and that it’ll be simple enough for a pleb like me)!
Any luck or plans on AMD GPUs?
Call me a sceptic, but what is it you’re developing exactly?
The 5ms latency sounds like you’re doing some form of screen recording.
Edit: read on, there’s no 5 ms of latency; shuffling graphics around this way should get you mere microseconds of latency at less than or around 1% of a CPU core at 4K 60Hz. There might be some overhead somewhere, but theoretically this is all coming together as a very efficient mechanism to send graphics from one GPU to another.
The guest OS is rendering its graphics directly into a frame buffer in memory that is read and displayed directly by the host graphics card. There is no compression or anything like that. Essentially, it is a memory to memory copy.
As such it does not suffer from the side effects of lossy compression (color shift, artifacts), and the latency is only as long as it takes to do memory copies, which modern systems handle almost entirely without involving the CPU proper.
This will permit Windows guests on Linux hosts to display GPU-accelerated video and graphics on the host with basically no delay, which will be the best user experience. Your Windows guest with a passed-through graphics card no longer needs its own dedicated monitor.
This will be pretty huge, in my opinion.
This is great! I take it that the passed through card is setting the render target to a memory region somewhere, and then the window simply pulls from that shared region to display? I am imagining a series of steps like below.
- Guest renders to own GPU like normal
- GPU copies memory to main memory
- Window presents memory location as what it wants rendered to X
- Normal X shenanigans
How close is that?
Neat! Will there also be a solution for Mac/Linux guests?
Guys, I can’t afford to donate to the cause, but I’m willing to help with testing when I have time. I’ve got a pretty proficient background in Linux (senior sysadmin at a software and hosting house for gambling sites). Already have a passthrough setup running on a Fedora host with a Win10 guest.
Let me know if you’re interested in some extra hands?
I Name This
After the Titan goddess that gave the ancient Greeks sight, glittering and glory!
Would like to second this. Perhaps put up a thread calling for testers when you want.
Optimus is a whole other ball game that involves hardware MUXes and DMA framebuffer magic, which prevents GPU passthrough entirely since you cannot uncouple one GPU from the other in an effective manner.
It can work in some very niche cases, however, where the hardware is set up correctly:
The technical TLDR:
Most laptops are of the middle type. dGPU passthrough needs a laptop of the far right type.
Historically, no, but there may soon be an option in some scenarios:
Your first order of business will be to pass it through. If it does pass through then this… could… work
Be sure to check out the diagram for all the different ways Optimus may be connected… that may be relevant.
I’m really excited to see this progress. Awesome work!