A little teaser of what is to come :)

All in good time :slight_smile:. Just whetting your appetite.

I have spent the last month on this project (starting with the NPT fix) from initial conception to what you see here. Obviously this can run full screen, fully integrated with the host.

I am currently working with @wendell to get this to a point where it’s ready for release with @celmor and @SgtAwesomesauce providing testing and feedback.

20 Likes

Can confirm it’s legit :smiley:

39 Likes

Wow, thanks @gnif and @wendell, you guys sure made me drool over this.
Looking forward to future updates :smiley:

5 Likes

This looks fantastic, and i3wm best wm.

I’ve just started playing with pci passthrough, and this work represents the dream.

2 Likes

Oh man this is my dream (and that it’ll be simple enough for a pleb like me)!

Any luck or plans for AMD GPUs?

4 Likes

Call me a sceptic, but what is it you’re developing exactly?

The 5ms latency sounds like you’re doing some form of screen recording.

Edit: read on, there’s no 5 ms of latency. Shuffling graphics around this way should get you mere microseconds of latency at less than or around 1% of a CPU core at 4K 60 Hz. There might be some overhead somewhere, but theoretically this is all coming together as a very efficient mechanism for sending graphics from one GPU to another.
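As a rough back-of-envelope (my numbers, not the devs’): a 4K frame at 32 bits per pixel is 3840 × 2160 × 4 ≈ 32 MiB, so 60 frames per second works out to roughly 1.9 GiB/s of copying, which is a small fraction of the memory bandwidth of a modern system.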

1 Like

The guest OS is rendering its graphics directly into a frame buffer in memory that is read and displayed directly by the host graphics card. There is no compression or anything like that. Essentially, it is a memory to memory copy.

As such it does not suffer from the side effects of lossy compression (color shift, artifacts), and the latency is only as long as it takes to do the memory copies, which on modern CPUs are handled almost entirely without needing the CPU proper.

This will permit Windows guests on Linux hosts to display GPU-accelerated video and graphics on the host with basically no delay, which makes for the best user experience. Your Windows guest with a passed-through graphics card no longer needs its own dedicated monitor.
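To give a feel for how simple that copy is on the host side, here is a minimal sketch in C. It is purely illustrative and not the project’s actual code: it assumes the guest’s frames land in a shared memory region the host can mmap, and the file path, header layout and field names are all invented.

```c
/* Minimal host-side sketch of the idea above, NOT the project's actual code.
 * Assumes the guest's frames land in a shared memory region the host can
 * mmap; the path, header layout and field names are all made up.
 * A real implementation would also need proper synchronisation (volatile
 * accesses / memory barriers) rather than this naive polling loop.
 * Build: cc -O2 frame_copy_sketch.c -o frame_copy_sketch */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct frame_header {        /* hypothetical layout */
    uint32_t frame_counter;  /* bumped by the guest after each new frame */
    uint32_t width, height;  /* frame dimensions in pixels               */
    uint32_t pitch;          /* bytes per row                            */
    /* raw pixel data follows immediately after this header */
};

int main(void)
{
    int fd = open("/dev/shm/example-framebuffer", O_RDONLY);  /* made-up path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *shm = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    struct frame_header *hdr = shm;
    uint8_t *pixels = (uint8_t *)(hdr + 1);
    uint8_t *local  = NULL;
    uint32_t last   = 0;

    for (;;) {
        if (hdr->frame_counter == last) { usleep(100); continue; }
        last = hdr->frame_counter;

        /* The whole "transfer" is just this memory-to-memory copy. */
        size_t bytes = (size_t)hdr->pitch * hdr->height;
        local = realloc(local, bytes);
        memcpy(local, pixels, bytes);

        /* From here `local` would be handed to the display path
         * (texture upload, XPutImage, etc.). */
        printf("frame %u: %ux%u\n", (unsigned)last,
               (unsigned)hdr->width, (unsigned)hdr->height);
    }
}
```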

23 Likes

This will be pretty huge, in my opinion.

8 Likes

This is great! I take it that the passed-through card is setting the render target to a memory region somewhere, and then the window simply pulls from that shared region to display? I am imagining a series of steps like below.

  1. Guest renders to own GPU like normal
  2. GPU copies memory to main memory
  3. Window presents memory location as what it wants rendered to X
  4. Normal X shenanigans

How close is that?
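In code terms, the host half of what I’m picturing for steps 3 and 4 would be something like the rough Xlib sketch below. All of it is just my guess for illustration, not the project’s actual design: the shared-memory path, the fixed 1920x1080 size and the assumption of raw 32-bit pixels are made up.

```c
/* Rough guess at steps 3-4: map the shared pixels and blit them into an
 * X window. Everything here is invented for illustration (fixed 1920x1080,
 * raw 32-bit pixels with no header, made-up shm path).
 * Build: cc x_present_sketch.c -lX11 -o x_present_sketch */
#include <X11/Xlib.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define W 1920
#define H 1080

int main(void)
{
    /* Steps 1-2 happen in the guest; by now the frame is in shared memory. */
    int fd = open("/dev/shm/example-framebuffer", O_RDONLY);  /* made-up path */
    if (fd < 0) { perror("open"); return 1; }
    char *pixels = mmap(NULL, (size_t)W * H * 4, PROT_READ, MAP_SHARED, fd, 0);
    if (pixels == MAP_FAILED) { perror("mmap"); return 1; }

    /* Steps 3-4: present that memory to X like any other client would. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }
    int scr    = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, W, H, 0,
                                     BlackPixel(dpy, scr), BlackPixel(dpy, scr));
    XMapWindow(dpy, win);
    GC gc = XCreateGC(dpy, win, 0, NULL);

    /* Wrap the shared pixels in an XImage; the data itself is not copied here. */
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr), 24, ZPixmap, 0,
                               pixels, W, H, 32, 0);

    for (;;) {                 /* naive ~60 Hz refresh loop */
        XPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H);
        XFlush(dpy);
        usleep(16000);
    }
}
```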

Neat! Will there also be a solution for Mac/Linux guests?

Guys, I can’t afford to donate to the cause, but I’m willing to help with testing when I have time. I’ve got a pretty proficient background in Linux (senior sysadmin at a software and hosting house for gambling sites). Already have a passthrough setup running on a Fedora host with a Win10 guest.

Let me know if you’re interested in some extra hands?

2 Likes

I Name This

Project Theia

After the Titan goddess who gave the ancient Greeks sight, glittering and glory! :smiley_cat:

13 Likes

Would like to second this. Perhaps put up a thread calling for testers whenever you want.

@wendell @gnif Will this work with nvidia optimus laptops?

1 Like

Unfortunately NO.

Optimus is a whole other ballgame that involves hardware MUXes and DMA framebuffer magic, which prevents GPU passthrough entirely since you cannot uncouple one GPU from the other in an effective manner.

It can however work in some very niche cases where the hardware is set up correctly:

The technical TLDR:

Most laptops are of the middle type. dGPU passthrough needs a laptop of the far right type.

[diagram: the different ways an Optimus laptop’s GPUs can be wired to the display outputs]

Historically, no, but there may soon be an option in some scenarios:

Your first order of business will be to pass it through. If it does pass through then this… could… work

Be sure to check out the diagram for all the different ways Optimus may be connected… that may be relevant

3 Likes

I’m really excited to see this progress. Awesome work!

Would it be possible to tell Windows to dynamically change its resolution based on the size of the window?

Something like this: https://youtu.be/AxqvE4CWrK8?t=3m4s

I’ve seen the video with Wendell and I’ve got some questions, if you don’t mind…

  • Looks like you’re using Spice (or maybe you hacked Spice to use it). Will this work across a LAN?

  • Since you’re copying frames across a memory region, does this open up the option to run a few VMs on a single GPU? (I’m pretty sure the answer is no, but one can always hope…)

  • Can other operations work on the same GPU that runs the guest VM? Like using its GPU to compress/decompress those frames so it can work reliably over a network?

Thanks and keep up the good work :wink:

Spice is being used simply for keyboard and mouse input; I wrote my own lightweight C client from scratch for this, as the Spice libraries are not really usable in their existing state in external projects. Using Spice is optional if you would rather pass the VM a physical keyboard and mouse directly.

No, each VM needs dedicated access to the GPU; this is a physical hardware limitation. It would be nice though.

Yes, compression in the guest on the GPU hardware is completely possible, but it introduces latency and quality loss, which is what this project is attempting to avoid. At a later date we can look at adding this, as I know the current options for doing it (Steam In-Home Streaming) are all game targeted and not general desktop.

3 Likes