Is it possible on Linux to reserve / allocate a chunk of the VRAM that lives inside RAM? (I am using an iGPU, not a dedicated GPU, so the iGPU uses part of system RAM as its VRAM.)
Imagine that I have set in the BIOS that 128 MB of my RAM can be used as VRAM for my iGPU, but I would like to keep, for example, 32 MB of that 128 MB just for myself, so in the end the iGPU would use at most 128 - 32 = 96 MB of VRAM. I would also like to get a reference (pointer) to my reserved block of VRAM so I can use it.
I would like to write some data into that reserved chunk of VRAM, and I want the iGPU to have direct access to that data.
No. How the video device uses the RAM it's assigned is proprietary and hard-wired in silicon; the Linux kernel has no control over this assignment, nor over how the physical hardware uses the RAM it is assigned.
Do I understand correctly that you want a memory region accessible by both the CPU and the GPU? OpenCL can help you out with that: clCreateBuffer accepts flags that let you share memory between host and device.
Yes. I still believe this can be done; I just want to try it and learn how it works.
I also still believe this could be used in Looking Glass, so I want to build a prototype.
For example, I set in the BIOS that 128 MB of RAM should be VRAM (for the iGPU). Then, on the Linux host, my app allocates 32 MB of this VRAM and passes a pointer / reference to that memory block into the guest VM. The VM saves a texture into it, and on the host the iGPU just displays it, because it has direct access to that block, so there is no need to copy the texture between RAM and VRAM (if the VRAM is in RAM, of course).
Copying a block of memory is a slow operation.
Maybe you could have a look at OpenGL APIs like glBufferData. The VM will still need to copy frames from its VRAM into shared RAM, and you'll still need the doorbells / synchronization that come with Looking Glass.
In theory it is possible, but it would require some very hacky code in QEMU and would be extremely driver-specific. The memory copy is not the slow part of LG; the actual capture is.
You would have to use glBufferData to get a pointer and pass that into QEMU as the destination for the IVSHMEM device. You would have to implement the LG client INTO QEMU for this to work, and even then you would have to find some way to implement client->host sync now that 100% of the shared buffer is dedicated to texture use.