I found this article:
Am i right that there is hope now for SR-IOV 30xx, 40xx support?
I was hoping to start a conversation, but no one seems interested
What really can be done on a modded card besides making it work with a system that’s not supported? Is there something I am missing?
Yes, you are missing something, and I will try to explain what it is. The article suggests that it might be possible to replace or modify the BIOS of any current Nvidia graphics card to add features it doesn't usually have, or to make the card identify itself as a different model.
I am interested, as we have a surplus of used RTX 4000 series cards and a few used Quadros cycling through.
Would be nice to use GPU partitioning on RTX series cards just for the memes.
It may depend on whether Nvidia decides to patch this ability with a driver update. Even then, if you could add SR-IOV support to the 3000 or 4000 series, you would need a custom BIOS from Nvidia, which they would charge a lot of money for; otherwise, why would anyone purchase their enterprise-level graphics cards?
The additional VRAM is needed when you have 40 clients loading YouTube on Chrome simultaneously.
The ATX and Radeon PRO (yes, it is all capitalized for some reason) series are all we use in the enterprise space, just for the SR-IOV support. Using an entry-level 4000 or even 3000 series card for hypervisor rendering/RDP acceleration would really slash build prices on entry-level servers.
BMC graphics are both a vulnerability and a pain when RDP'ing to the host.
I am aware you can run servers without the Desktop Experience, but several Server 2022 features require the Desktop Experience to be installed for configuration/use.
I take it, then, that crossing generations wouldn't do very much, for example flashing a 3000-series BIOS onto a 4000-series card, and that it would only work within the current generation, like 3060 to 3070, if there was a difference in settings. Correct?
Thanks, @TryTwiceMedia, for adding to the conversation.
Thanks for making my point a little clearer than I did. As I was saying, I think Nvidia will remove the ability the article is talking about; otherwise it would hurt sales of other products Nvidia provides. So the only real hope of SR-IOV support on Nvidia graphics cards is getting a card that supports Nvidia vGPU and paying what was, the last time I looked, $250 per VM per year.
Spoiler alert:
Older Quadros support SR-IOV && no recurring license fee
But they are older cards with the accompanying problems of older hardware.
When Nvidia added SR-IOV support, they used the same kernel as the production studio drivers, so legacy cards magically had the ability to partition natively for free.
I have benchmarks somewhere showing around a 5% performance drop with an old Quadro being partitioned and passed to a VM accessed over VNC.
Now we are talking about a bottom-of-the-barrel $200 Quadro that starts flexing when streaming 4K YouTube inside the VM, but it is happening without licensing.
@Necrosaro, I don't quite understand your question, but here is what I think you are asking. For consumer Nvidia cards from Maxwell through the 2000 series, it might be possible to add SR-IOV support (meaning Nvidia vGPU support), but there is a catch: you have to pay what I call the Nvidia tax, which was $250 per VM per year the last time I checked; it might be more expensive now. I have heard of other people not paying the Nvidia tax and still getting SR-IOV support (Nvidia vGPU support), but I have no experience with that and have not tried it myself. If I am wrong, please correct me.
Oh maybe I am thinking of the wrong things
There is a vGPU hack out there that enables the vGPU server for free: "vgpuunlock".
I tried to flash my RTX A4500 to an RTX A5000, but it could not flash my BIOS; sadly, it works only with consumer GPUs.
Wait, could this mean SR-IOV/vGPU splitting between VMs can now be done on a 980?! I can put my BIOS switch to good use!
I don't know what the forum policy is on that, but just Google "Nvidia VGPU unlock".
There are some videos on YouTube showing that it works up to the RTX 2xxx series.
Some articles claim you can share an Nvidia graphics card between VMs; the catch might be that you have to share the card's resources evenly between the VMs.
I have a 980Ti being split in a devops server right now.
Server 2022
SR-IOV & IOMMU enabled
Load the GeForce driver from Nvidia (just checked, that’s what’s on this server)
Get-VMHostPartitionableGpu
can be split 32 ways
It just works
I found that legacy cards default to 32
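If anyone wants to try reproducing that on their own Server 2022 / Hyper-V box, here is a minimal PowerShell sketch of the GPU partitioning steps above. The VM name "DevOpsVM" and the partition count of 32 are just example values, and the MMIO sizes are common suggestions rather than anything taken from this thread:

# List GPUs the host can partition, plus the partition counts they accept
Get-VMHostPartitionableGpu | Format-List Name, ValidPartitionCounts, PartitionCount

# Split the first partitionable GPU 32 ways (pick a value from ValidPartitionCounts)
$gpu = Get-VMHostPartitionableGpu | Select-Object -First 1
Set-VMHostPartitionableGpu -Name $gpu.Name -PartitionCount 32

# Give a powered-off VM one partition and some MMIO headroom (example sizes)
Add-VMGpuPartitionAdapter -VMName "DevOpsVM"
Set-VM -Name "DevOpsVM" -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB

# Confirm the VM got a GPU partition adapter
Get-VMGpuPartitionAdapter -VMName "DevOpsVM"

Inside the guest you still need the matching GeForce/Quadro driver files installed before the partition shows up as a usable display adapter.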