SMPTE ST 2110 support?

This is likely a HUGE ask, but if there were a way to grab SHMEM and convert it into SMPTE ST 2110 with optional JPEG XS encoding, it would help immensely for the dual PC streaming crowd, as you’d be sending uncompressed video over IP via SFP, or JPEG XS video via 10GbE.

A virtual network adapter using ST 2110 could also be a solution for not requiring IVSHMEM, at the cost of some efficiency; and if it were routed as multicast, it could be pulled by different PCs on the LAN.

This started as thinking about how to run a local Chromium instance, dump its GPU buffers to SHMEM, then write something to turn that into ST 2110 that an ST 2110 video switcher could grab. Then I realized Looking Glass might make this process much easier and avoid trying to run too much on the host, since the GPU workload is dedicated to a VM.

Quality vs. using NDI as a KVM: far better, because the standard allows both uncompressed video and visually lossless video.

Now there’s an idea for Linus. If one day vMix supports ST 2110, you could have BOTH 7 Gamers 1 PC and a live production switcher all in one, with 2 motherboards in a single case. (Anti-cheat permitting.)

Looking Glass running in the guest copies frames to a piece of memory shared with the host, then notifies the host when frame data is ready.
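To make the handoff concrete, here’s a minimal sketch of that pattern: a single seqlock-style frame slot in shared memory. This is illustrative only, not LG’s actual KVMFR protocol, and every name in it is hypothetical.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical frame slot layout; LG's real protocol is more involved
 * (version fields, multiple buffers, cursor data, etc.). */
struct shared_frame
{
  _Atomic uint32_t sequence;  /* odd while the guest is writing */
  uint32_t width, height, pitch;
  uint8_t  pixels[];          /* frame data follows the header */
};

/* Guest side: publish one captured frame into the shared slot. */
static void guest_publish(struct shared_frame *sf,
                          const uint8_t *src, size_t len)
{
  atomic_fetch_add_explicit(&sf->sequence, 1, memory_order_acq_rel); /* odd: busy */
  memcpy(sf->pixels, src, len);
  atomic_fetch_add_explicit(&sf->sequence, 1, memory_order_release); /* even: ready */
  /* ...then signal the host (event/interrupt) so it need not poll. */
}

/* Host side: retry until a consistent snapshot has been copied out. */
static void host_consume(struct shared_frame *sf, uint8_t *dst, size_t len)
{
  uint32_t seq;
  do
  {
    /* wait out any write in progress */
    while ((seq = atomic_load_explicit(&sf->sequence, memory_order_acquire)) & 1)
      ;
    memcpy(dst, sf->pixels, len);
  } while (atomic_load_explicit(&sf->sequence, memory_order_acquire) != seq);
}
```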

There’s an OBS plugin; can OBS do SMPTE 2110?

If LG is already in the mix, this “crowd” are doing it wrong; there is no need for a “streaming PC”, it’s just a waste of money, resources and power.

Use the LG OBS plugin and encode/stream right there on the host system. If the machine is not powerful enough, invest in one that is (or hardware-accelerate the encode using NVENC on the host, etc.) instead of running an entire 2nd PC.

As for encoding, network transport, etc., there is nothing stopping you from using the OBS plugin source as an example and writing something; however, this falls completely outside the scope of the LG project.

SMPTE 2110 over a virtual network adapter would be something to explore to find potential bottlenecks in the current virtual network adapter drivers. It would be a bonus if that traffic were also routable over a multi-gig LAN and could then be taken into something like an ST 2110 hardware switcher.

Using networking instead of IVSHMEM is at least worth exploring. There used to be no standard defining uncompressed video over IP networks, but now there is.

None of the software out there right now supports ST 2110, only hardware switchers (some of which are just Linux servers with purpose-built FPGA I/O boards). vMix is who I’d bet on implementing it first, but OBS would require tons of legwork.

A HUGE amount of time and effort has gone into optimizing virtual network hardware, but even so, when this kind of performance is needed, either a full NIC gets passed through to the VM, or an SR-IOV capable NIC is split into virtual functions. There is no need to “explore” this.

So a separate unofficial client that reads IVSHMEM, does the YUV conversion, and outputs ST 2110 or JPEG XS would make more sense. No host application changes.
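The color conversion step, at least, is simple; a minimal sketch of the per-pixel math, assuming BT.709 limited-range 8-bit output (real ST 2110-20 flows typically carry 10-bit 4:2:2, and this is my own illustration, not code from any ST 2110 implementation):

```c
#include <stdint.h>

/* RGB -> Y'CbCr, BT.709, limited range ("studio swing"), 8-bit.
 * Chroma subsampling to 4:2:2 would happen in a later pass. */
static void rgb_to_ycbcr709(uint8_t r, uint8_t g, uint8_t b,
                            uint8_t *y, uint8_t *cb, uint8_t *cr)
{
  float rf = r / 255.0f, gf = g / 255.0f, bf = b / 255.0f;
  float yf = 0.2126f * rf + 0.7152f * gf + 0.0722f * bf;    /* BT.709 luma */

  *y  = (uint8_t)(16.0f  + 219.0f * yf + 0.5f);                /* [16,235] */
  *cb = (uint8_t)(128.0f + 224.0f * (bf - yf) / 1.8556f + 0.5f); /* [16,240] */
  *cr = (uint8_t)(128.0f + 224.0f * (rf - yf) / 1.5748f + 0.5f);
}
```

The hard parts are elsewhere: packet pacing (ST 2110-21), PTP timing, and sustaining the memory bandwidth for several uncompressed streams.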

It would involve a lot of I/O and several SHMEM files to have 7 instances converted over 100GbE fiber, so that’s where the JPEG XS component of ST 2110 comes in. And since it’s IP, remoting in with VNC/SPICE for quick troubleshooting means less downtime than walking to an end user terminal. (Thinking 7 Gamers 1 PC here.)

If you have the NIC to support it, ST 2110 to a separate OBS instance on another high-spec PC makes more sense than NDI, since you’re dodging a lot of compression and the goal is to create sources you could switch between in switching software.

I’m trying to create “GDQ in a box”: 6U total, with 2U for a hardware switcher and a 4U server for Looking Glass driven VMs running Chromium instances, plus other VMs for games.

Firstly, “IVSHMEM” is just shared memory; you’re basically saying “transform RAM”.

Good luck with that, this is no easy feat. In theory it’s all possible, but I would not use Looking Glass at all as a base for such a thing; the goals and implementation of LG are incompatible with doing this efficiently.

We value extremely low latency and capture of every single generated frame. If you are going to run at a fixed FPS for record/stream only, LG is not the platform to base this on.

Also this doesn’t make sense at all… If it’s 7 gamers on one PC, just do it all directly on the same PC. No networking, no compression into any format; just take the raw, bit-perfect streams directly and produce it there.

We have 128-core CPUs available these days and GPUs capable of doing H.264/H.265/etc. in hardware. You could even make it 7 gamers + a broadcast VM on one PC; nothing stops you giving one VM access to the IVSHMEM “files” for every VM on the same box.
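For example, a hypothetical QEMU fragment for the broadcast VM, mapping the shared memory files that two gaming VMs’ LG hosts already write into (paths and sizes are illustrative):

```
-object memory-backend-file,id=lg0,share=on,mem-path=/dev/shm/looking-glass0,size=64M
-device ivshmem-plain,memdev=lg0
-object memory-backend-file,id=lg1,share=on,mem-path=/dev/shm/looking-glass1,size=64M
-device ivshmem-plain,memdev=lg1
```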

In fact one of our Discord members (Corrgan) does this: two gamers on one PC with a third VM for capture/production, encode, record & stream.

The thought was to create a backbone: first make ST 2110 itself compatible with the goals of the project, then cater to the standards for recording and streaming. I believe the standard doesn’t define a fixed frame rate or resolution; it’s the receiver that imposes fixed parameters.

It’s trying to follow on from Wendell’s GRID/VDI idea: base it on a standard that allows transport of uncompressed video (ST 2110 is also ultra low latency) and specifically set it up so it can be received via multicast by hardware switchers.

A broadcast backbone is how I thought about it (integrate with video walls and other facilities), offloading some of the load like Chromium instances and dedicating a switcher to things for redundancy.

Nothing is faster or lower latency than local RAM, and there is no difference to the hardware between RAM and shared RAM.

Exactly! Which is not how LG is designed to operate.

Then you’d be better off writing your own capture software that runs in each VM and does this directly. Not only would this be designed specifically to fit the purpose, being far more efficient, it would also cut memory bandwidth across the entire system, providing a far better result than could be had by hacking on LG to do this.

It would also simplify your setup as there is no need to share the frames across the VM boundary, but rather just push them on the LAN.

In order to keep latency low, LG needs to capture every single frame as fast as possible; the way we deal with this frame rate disparity in OBS is by dropping all but the most recent frame when one is needed.
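The pattern is roughly this (an illustrative C sketch, not the actual plugin code; `struct frame` and `frame_release` are placeholders):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct frame;                    /* placeholder for a captured frame */
void frame_release(struct frame *f);

/* A single "latest frame" slot shared between two threads. */
struct latest_frame
{
  pthread_mutex_t lock;
  bool            fresh;         /* slot holds an unconsumed frame */
  struct frame   *slot;
};

/* Capture side: runs at the game's rate (e.g. 200 FPS). */
void on_capture(struct latest_frame *lf, struct frame *f)
{
  pthread_mutex_lock(&lf->lock);
  if (lf->fresh)
    frame_release(lf->slot);     /* the stale frame is simply dropped */
  lf->slot  = f;
  lf->fresh = true;
  pthread_mutex_unlock(&lf->lock);
}

/* Consumer side: runs at the output rate (e.g. 60 FPS). */
struct frame *take_latest(struct latest_frame *lf)
{
  pthread_mutex_lock(&lf->lock);
  struct frame *f = lf->fresh ? lf->slot : NULL;
  lf->fresh = false;
  pthread_mutex_unlock(&lf->lock);
  return f;                      /* NULL if nothing new arrived */
}
```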

If the final goal were low-latency recording only, never to be used as an ultra-low-latency interactive virtual monitor in the LG client, the LG host application would only need to capture at 60 FPS, cutting the guest VM load. As it stands, if your game is running at 200 FPS, LG will be capturing at 200 FPS even if you only need 60 FPS.

This is why other solutions like Parsec have far lower CPU usage, due to the fixed capture rate, while LG is far better for latency and image quality at the expense of higher CPU usage.

Okay, if anyone could do that it would be the people at GDQ. There would be two approaches: a capture program that pushes it over IP from a host machine using NVFBC, or an FPGA that turns HDMI or DP into ST 2110. The program would involve less hardware but beefier NICs; the FPGA needs more hardware, but you can separate traffic between a 1GbE network for the PC and a 100GbE network for ST 2110.

All theory at this point without a way to apply it yet.

So the key for Looking Glass is to literally act like unlimited VRR; all current display output VRR standards would still be culling frames, so it would be as if FRAPS captured every frame. Any other interface/protocol would mean a compromise when swapping buffers.

Not a hard thing to do by any means.

Why? Just take the captured frames and push them over the LAN in that format? At worst you might need a hw accelerator to encode, but doubtful.

Not sure why you think you need some custom FPGA for this.

This would be pretty easy to implement in software.
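For a sense of scale, the per-packet framing really is small; here is a sketch of the RFC 4175 payload header that ST 2110-20 builds on, for a packet carrying a single sample row descriptor (SRD). It omits the 12-byte RTP header, SRD continuation, and the ST 2110-21 traffic shaping that makes real senders non-trivial:

```c
#include <stddef.h>
#include <stdint.h>

/* Write the RFC 4175 payload header for one SRD; pgroup pixel data
 * follows immediately after. */
static size_t write_payload_header(uint8_t *p, uint16_t ext_seq,
                                   uint16_t length, uint16_t line,
                                   uint16_t offset, int field)
{
  p[0] = ext_seq >> 8;           /* extended RTP sequence number */
  p[1] = ext_seq & 0xff;
  p[2] = length >> 8;            /* SRD length in bytes */
  p[3] = length & 0xff;
  p[4] = ((field & 1) << 7)      /* F: field identification bit */
       | ((line >> 8) & 0x7f);   /* scan line number, high 7 bits */
  p[5] = line & 0xff;
  p[6] = (offset >> 8) & 0x7f;   /* C=0 (last SRD) + pixel offset */
  p[7] = offset & 0xff;
  return 8;
}
```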

Analog inputs. I’m thinking RetroTINK. And if there were a direct scaled-image-to-ST 2110 source, it could mean all picture traffic is over IP.

I’m thinking about what GDQ needs, and they need analog for retro consoles.

Then we are completely off topic for LG and this thread should be closed.

Looks like this was a thought that’s completely incompatible with this project. I apologize. The standard itself is what killed the idea.