Hi: Looking to build a new PC for Looking Glass!

Hello. I'm interested in building a new PC with the intent of using Looking Glass.
This is my first attempt at PCI passthrough, so I might be mistaken on many points; please correct me at any time.
The parts I'm looking at right now are a Ryzen 5800X CPU, an RTX 3060 Ti, and an ASRock B550M Pro4 motherboard. I am considering using a GT 710 for the host.
I have some concerns I wish to address before I make any purchases:
First, I've heard there can be problems with motherboard IOMMU grouping. From what I understand, the only device I wish to pass through is the RTX 3060 Ti; therefore, I need its PCIe slot to be in its own IOMMU group. How do I know whether the motherboard supports this?
Second, does PCIe lane allocation affect my use case? Would it be wise to look for x8/x8?
Third, I have seen on the website that <16ms timings are achievable at a 60Hz refresh rate. I wish to use a 120Hz (or possibly higher in the future) monitor. Should I purchase better hardware to achieve <8ms maximum timings, or is that not feasible?
Fourth, I wish for the transition to or from the VM to be seamless: I don't want 10 seconds of lag when switching away from the VM. Is this realistic? This is a very important feature and the reason I am interested in Looking Glass in the first place.
Lastly, when I am not using Looking Glass, would it be possible to use the guest video card if I am using a dummy plug and not attaching my display to the card?
Thanks in advance. This project is really cool.

See if other people have posted IOMMU groups for the model of motherboard you are looking at.

Looks like it is not great for the board you are looking at.

1 Like

OK, let me try to address these questions for you; some of them are not LG related but are general VFIO matters.

There can be; it's mostly the luck of the draw. Worst case, you have to apply the ACS override patch to the kernel, which basically bypasses this security feature.
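
If you already have a board on hand, you can also dump the grouping yourself. Here's a minimal Python sketch (assuming the IOMMU is enabled in the BIOS and on the kernel command line, e.g. amd_iommu=on / intel_iommu=on, and that lspci is installed) that walks /sys/kernel/iommu_groups:

```python
#!/usr/bin/env python3
# Minimal sketch: list every IOMMU group and the devices in it, so you can check
# whether the GPU (and its HDMI audio function) ends up in a group of its own.
import os
import subprocess
import sys

GROUPS = "/sys/kernel/iommu_groups"

if not os.path.isdir(GROUPS):
    sys.exit("No IOMMU groups found - is the IOMMU enabled in the BIOS and on the kernel cmdline?")

for group in sorted(os.listdir(GROUPS), key=int):
    print(f"IOMMU group {group}:")
    for dev in sorted(os.listdir(os.path.join(GROUPS, group, "devices"))):
        # 'lspci -nns <bdf>' prints the device description plus [vendor:device] IDs
        desc = subprocess.run(["lspci", "-nns", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev}")
```

What you want to see is the 3060 Ti's VGA and audio functions in a group that contains nothing else you need on the host; an upstream PCIe root port sharing the group is generally fine.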

Yes. LG is moving frames between GPUs; x4 is fast enough for most things, but the faster the better.
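
To put some rough numbers on it (my own back-of-the-envelope figures, not measurements): a raw frame stream is width × height × 4 bytes × FPS, and a PCIe 3.0 lane is good for roughly 0.985 GB/s after encoding overhead (about double that for 4.0). A quick sketch comparing the two:

```python
# Rough sketch, my own approximations: how much of a PCIe link the raw frame
# traffic would consume, ignoring protocol overhead and other traffic on the link.
GB = 1000**3
LANE_BPS = {"PCIe 3.0": 0.985 * GB, "PCIe 4.0": 1.969 * GB}  # per lane, approx.

def stream_bytes_per_sec(width, height, fps, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * fps

for label, (w, h, fps) in {
    "1080p @  60Hz": (1920, 1080, 60),
    "1440p @ 144Hz": (2560, 1440, 144),
    "2160p @ 120Hz": (3840, 2160, 120),
}.items():
    need = stream_bytes_per_sec(w, h, fps)
    print(f"{label}: {need / GB:.2f} GB/s of frame data")
    for gen, lane in LANE_BPS.items():
        for lanes in (4, 8):
            print(f"    {gen} x{lanes}: {need / (lane * lanes):6.1%} of the link")
```

At 1080p an x4 link barely notices the traffic, but a gen3 x4 link is already saturated by raw frame data alone at 4K @ 120Hz, which is why the faster the better once you start chasing high resolutions and refresh rates.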

That's only because I only have 60Hz monitors; people frequently use LG at 120Hz and faster. 4K @ 120Hz or higher is on the edge of what current hardware is capable of, YMMV. As for 60Hz, we are seeing negative latency on the latest master build of LG with some tuning; see my latest post here on that.

LG is seamless, with zero transition/lag time; it's just like any other application on your desktop.

Not sure what you’re asking here.

2 Likes

Sorry, I thought LG was an implementation of VFIO alongside other features.

That's unfortunate, but working around the problem with a kernel patch is promising.

From what I understand, I cannot use my passthrough GPU in Linux after setup. I would have my monitor connected to GPU0 (not passed through) and a dummy plug on the (passed-through) GPU1. Is that the case, or can I use the (would-be passed-through) GPU1 to draw/decode/encode when LG is not running?

How do you expect to see what's rendered on your guest's GPU if you're not using LG and not using a monitor? Why would you even consider not using LG? I can tell you that I'm using my Windows VM daily, with a dummy plug and LG, and haven't used anything else to view the guest for months.

Hmm, you might want to consider a 5700G instead of a 5800X; that way you will not require a second GPU. No idea about the state of Looking Glass on APUs at this time, though!

Incidentally, I dream of a time when I can build an SFF Looking Glass rig with an APU and a GPU side by side… might be a pipe dream though :grin:

APUs share system RAM; we highly discourage their use for LG due to the memory bandwidth requirements.

4 Likes

Ah, thanks for the heads-up, I didn't think about that. :confused: I wonder if quad-channel memory (e.g. 2x2 dual channel) could do something about that; then mATX would be an option. But yeah, it will always be a trade-off.

Quad channel wouldn't help much; the issue is the memory controller throughput. LG has the following copy path:

guest GPU → guest RAM → copy to IVSHMEM → copy to local GPU

The frame data is copied around in system RAM twice and then uploaded to the GPU. If your GPU is an iGPU/APU, it's copied again into the GPU's portion of system RAM.

A basic 1080p @ 60Hz SDR stream is 475MB/s; multiply that by three for an APU-based system and that's 1425MB/s. This is doable, but as soon as you start to desire higher FPS or 4K, this number climbs very quickly.

Math:

bytes per second = width * height * 4 * fps
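
Or, as a runnable sketch (the 475MB/s figure above is expressed in MiB/s, which is what this reproduces):

```python
# Sketch of the math above: raw bandwidth of an uncompressed 32bpp frame stream.
def bytes_per_second(width, height, fps, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * fps

MiB = 1024 ** 2
base = bytes_per_second(1920, 1080, 60)                # 1080p @ 60Hz SDR
print(f"single copy : {base / MiB:7.1f} MiB/s")        # ~474.6
print(f"APU, x3     : {base * 3 / MiB:7.1f} MiB/s")    # ~1423.8
print(f"4K @ 120Hz  : {bytes_per_second(3840, 2160, 120) / MiB:7.1f} MiB/s per copy")  # ~3796.9
```
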
2 Likes

Ah, I see. And I assume no direct GPU-to-GPU method exists?

I'm thinking: instead of x16 to one PCIe slot, do x8/x8, where the second x8 is used to copy directly to/from GPU #2. It should be feasible in theory, but the use case for this is probably so niche you'd get laughed out of any AMD/Nvidia engineering team for suggesting it. Great for compute though… possibly. Hmm… :thinking:

Not to mention you’d need a Vulkan extension to support it, and driver support to play ball, too…

(BTW, I’ve coded Graphics Memories in VHDL back in Uni, so know just enough to be dangerous here :grin:)

P2P PCIe transfers are a thing, but they're unavailable to us at this time.

We are working on ways to improve this, such as using dma-buf on the client side so the GPU can DMA directly from IVSHMEM, avoiding a CPU copy.

Right now we are investigating the use of GEM objects for the same purpose, as NVIDIA's newly claimed dma-buf support is fake dma-buf and doesn't work as per the actual spec (it can't transfer memory between dma-buf FDs that don't originate from itself).

3 Likes

Ah, right. And I assume doing a Torvalds here ("f*ck you, xyz!" where xyz is any company not following the standard), i.e. using P2P PCIe transfers on hardware that supports them properly with a graceful fallback to the 4-way copy for hardware that doesn't, isn't possible either. Shame :frowning:

I hope the situation resolves itself in a couple of years; it has always seemed to me that iGPUs/APUs are the perfect fit for LG: no need for two GPUs, no worrying about whether the IOMMU groups are acceptable, and so on. But I understand why we can't always have nice things.

1 Like

It seems more technical than that; I have had communications with NVIDIA engineers on this topic:

The use case sounds very cool. Great application of buffer sharing.

dma-buf is an opportunistic mechanism. There are any number of reasons importing a dma-buf can fail on various drivers, not just ours. IOMMU mapping limitations, dma addressing range limitations of various devices, memory offset/alignment requirements, etc. Providing a means to programmatically both discover and resolve these various incompatibilities at runtime has been the focus of my research for quite some time, as you can see from various talks I've given publicly over the years.

That said, our driver simply doesn't currently support importing dma-buf FDs created by other drivers, which is just another of these potential limitations. For example, you might be less surprised if the radeon driver failed to import a dma-buf exported by our driver that represented a region of local memory on our card (i.e., video memory). The radeon driver/GPU simply has no way to address this memory, so it clearly won't work. There is nothing incorrect about the radeon driver reporting support for the general dma-buf import mechanism from EGL in such cases. The limitation in our driver is similar, in that our driver has no way to map memory it hasn't allocated into our GPU's address space at the moment. As I mentioned, this is a SW limitation, not a hardware one, and it will very likely be resolved in a future driver release, but I do not consider the current behavior incorrect.

I’d say it’s not for a lack of trying

3 Likes

Ah, I see. Sorry for taking up your time; I figured APUs would have been solved by now. Anyway, from what you've told me you're right: APUs are not yet ready for prime time, so to connect back to the original question, the 5800X is indeed the way to go over the 5700G.

I foresee a future where Looking Glass works everywhere, and if you have an APU you could ship a Windows+game bundle, remove the Windows desktop, and keep only the virtual machine's bare-metal libraries: voila, no more DLL hell or direct Windows dependency. Pipe dreaming, perhaps, and some way to go, but… keep up the great work :slight_smile:

Oh, and sorry for hijacking the thread with my ignorance, and thanks again for your patience!

1 Like

@wertigon's "hi-jack" was still on topic to a large extent; yours is not. Please create a new thread.

1 Like