Socket To Me! $50k Epyc Server for VDI! | Level One Techs

The setup you're talking about is doable right now, with Arch Linux or Ubuntu as a hypervisor and GPU passthrough to a Windows VM. Nvidia GPUs have support via GeForce Experience for streaming to another PC over the network, and Steam In-Home Streaming is also a good choice. For streaming client software, I use Moonlight Game Streaming or the Steam streaming service. For setting up the hypervisor and the VM, there are very good guides for both Arch Linux and Ubuntu. Personally I use Unraid as a hypervisor; it has an easy-to-use GUI for creating VMs and assigning CPU cores, RAM, and a GPU. It also helps to avoid error Code 43 when installing GPU drivers for the passed-through GPU on a Windows VM.
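As a quick sanity check before booting the VM, a minimal Python sketch like the one below (the PCI addresses are made-up placeholders; take the real ones from lspci -nn on your host) can confirm the GPU and its audio function are bound to vfio-pci rather than to the host driver:

```python
#!/usr/bin/env python3
"""Check that the passthrough GPU (and its HDMI audio function) are bound
to vfio-pci before the VM is started. Addresses below are placeholders."""
import os

# Hypothetical example addresses for the GPU and its audio function.
PASSTHROUGH_DEVICES = ["0000:0a:00.0", "0000:0a:00.1"]

for addr in PASSTHROUGH_DEVICES:
    link = f"/sys/bus/pci/devices/{addr}/driver"
    if not os.path.exists(link):
        print(f"{addr}: no driver bound")
        continue
    driver = os.path.basename(os.readlink(link))
    status = "OK" if driver == "vfio-pci" else "WRONG DRIVER"
    print(f"{addr}: bound to {driver} [{status}]")
```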

Will keep an eye out for the results of SR-IOV testing, it'd be great if it works!

I'm already running my daily driver as a VM with GPU passthrough; the missing piece is doing it over the network, so I could put the computer in a 4U server case somewhere else in the house and then play games from a minimal, clean, and silent desktop.

A project for the future when I have a bit more than a weekend of free time and the space to do it :stuck_out_tongue:

This is the driver that series of cards points to. I am not sure how I came to that conclusion, but the Mx series cards have official support for VMware ESXi and a few others, while the newer V340 is currently listed only for KVM. If I remember correctly, this driver is what was referenced for both cards, and I had specifically chosen this card as a donor for drivers in my future attempts at treating the Radeon VII as an Instinct-series card.


Since those GPUs have excellent double precision performance, I want to see Star Citizen performance in a VDI. Make that happen @wendell.

I want to build a server for a virtualization lab and pass through 2 Radeons under Windows Server 2016/2019 (DDA) or Unraid. For this purpose I'm thinking of buying the cheapest Epyc processor (8 cores), and I need to pick a motherboard.

Questions:

1. Which motherboard exactly was chosen for the "Socket To Me! $50k Epyc Server for VDI!" project, and why?

2. What does the IOMMU grouping look like for this motherboard? What about passing through GPUs, USB, SATA drives, etc.? (A quick sketch for dumping the groups follows below.)

3. Does the SP3/Epyc platform have any advantages for virtualization purposes (SR-IOV, IOMMU) over Threadripper, aside from the maximum number of PCIe lanes and CPU cores?

4. Are there any advantages of the Intel LGA3647/Xeon E5 platforms over AMD when it comes to virtualization/passthrough/IOMMU/SR-IOV, etc.?

Thank you very much
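On question 2, whatever board it ends up being, the grouping is easy to inspect directly on the host. A minimal Python sketch along these lines (assuming a Linux host with the IOMMU enabled on the kernel command line and lspci installed) dumps each group and the devices in it:

```python
#!/usr/bin/env python3
"""Dump IOMMU groups and the PCI devices in each, roughly equivalent to
the usual ls/lspci one-liners. Requires amd_iommu=on or intel_iommu=on."""
import os
import subprocess

GROUPS_DIR = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS_DIR), key=int):
    print(f"IOMMU group {group}:")
    devices_dir = os.path.join(GROUPS_DIR, group, "devices")
    for dev in sorted(os.listdir(devices_dir)):
        # Ask lspci for a human-readable description of each device.
        desc = subprocess.run(["lspci", "-nns", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev}")
```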

Bumping because I want to see this project revived.

Instead of H.264, I would like to see an NVFBC -> NewTek NDI KVM solution, to see if that has better latency while freeing up some GPU resources so that NVENC doesn't have to be used. It might require more powerful thin clients, but my feeling on H.264 baseline is that its latency still isn't low enough to match something like the Wii U gamepad. I believe NDI KVM could actually help by not having to resort to H.264 at all, and an NDI server means you can grab an NDI stream and immediately have it as a source for a supported switcher like a TriCaster, or an OBS instance with multiple NDI sources. It's like your Twitch streaming setup, but better, and all users use the same machine.

The only obstacle is finding coders for the NVFBC-to-NDI KVM executable, and NewTek would have to sponsor the project so those coders can get access to the low-level NDI API.

Linus' fiber solution may be good as a dedicated-hardware low-latency option, but NDI KVM only needs a thin client and fast Gigabit, even for 4K 60 fps.
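As a rough check on the "fast Gigabit" claim, a back-of-the-envelope calculation using NewTek's commonly quoted ballpark of roughly 250 Mbps for full-bandwidth NDI at UHD 60p (an assumption, not a measurement; the real rate varies with content):

```python
# Does full-bandwidth NDI at 4K 60 fps fit into Gigabit Ethernet?
# The ~250 Mbps figure is a ballpark for UHD 60p full NDI, not a measurement.
NDI_4K60_MBPS = 250          # assumed full-bandwidth NDI rate, UHD 60p
GIGABIT_USABLE_MBPS = 940    # ~1 Gbps minus Ethernet/IP overhead

streams = GIGABIT_USABLE_MBPS // NDI_4K60_MBPS
headroom = GIGABIT_USABLE_MBPS - NDI_4K60_MBPS

print(f"One 4K60 NDI stream uses ~{NDI_4K60_MBPS} Mbps, "
      f"leaving ~{headroom} Mbps of a Gigabit link free "
      f"(~{streams} such streams per link).")
```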

If you are able to find an H.264 encoding preset that is as fast latency-wise as a Wii U gamepad, I'd love to know.

I, like, really would love to move to VDI thin clients at work, because 99% of our users… don't need the hardware they get. Not even close. What I'd love to do is put together a central server for each remote location and run Ubuntu LTS or CentOS with KVM or VMware guests that basically go into sleep mode when not in use. I'd say that no more than half the systems in a given location are being used at any given time, except corporate.

Even most of corporate could run on thin clients, aside from a few edge cases. That way, even with local per-machine storage, we could snapshot daily backups for corporate and just keep a clone template for the other endpoints. OS breaks? Delete it and restore from the backup.
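The snapshot/restore part could look roughly like this with libvirt's Python bindings (just a sketch; the connection URI, VM names, and snapshot names are made up for illustration):

```python
#!/usr/bin/env python3
"""Rough sketch of the snapshot-daily / restore-on-breakage idea using
libvirt's Python bindings. Domain and snapshot names are placeholders."""
import datetime
import libvirt

conn = libvirt.open("qemu:///system")

# Daily snapshot for a corporate VM.
dom = conn.lookupByName("corp-desktop-01")     # hypothetical VM name
snap_name = "daily-" + datetime.date.today().isoformat()
snap_xml = f"""
<domainsnapshot>
  <name>{snap_name}</name>
  <description>Automated daily backup</description>
</domainsnapshot>
"""
dom.snapshotCreateXML(snap_xml, 0)

# "OS breaks? Delete it and restore": revert to the clean template snapshot.
broken = conn.lookupByName("endpoint-17")      # hypothetical VM name
good = broken.snapshotLookupByName("clean-template", 0)
broken.revertToSnapshot(good, 0)

conn.close()
```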

My dream setup is to just have a central rack somewhere and be able to connect to a VDI instance from a thin client anywhere else. It sucks that just about every major tech company capable of making that happen is trying to stifle it behind expensive licenses.