4 users, 1 PC: suggestions?

So I'm thinking of building a single box serving 4-5 users with very light workloads (it's a school, so mostly office stuff), and I was thinking of using PCIe passthrough to achieve this,
i.e. each client gets a dedicated cheap GPU + USB card.
To do this I'd have to use those 4-controller USB cards,
and even with those I don't think I'm going to find a motherboard with that many PCIe slots.
Recently I came across these mining extender cards:

I was wondering if I could use one of these in each PCIe slot with a GPU + USB card and pass that through to the VM.

Has anyone tried this kind of extender/multiplexer with PCIe passthrough?

Do you think it's going to work?

I'm planning to use ESXi, BTW.

Note that I know there are other solutions as well, but most of the ones I know of are expensive, and they need a piece of hardware like a Raspberry Pi for each client.

Note 2: those multi-controller USB cards are not available where I live, and it's going to cost a lot of time and money to import one.


Just throwing it out there: have you considered Windows Remote Desktop Services or the Linux Terminal Server Project? That would be far less complicated, easier to administer, and potentially easier to expand.


The whole point of doing this is to not use something like a thin client:
save space and potentially save cost.
In this setup each client gets a video + audio out and a USB card.

Maybe look up the “7 Gamers, 1 CPU” and similar videos from Linus Tech Tips. They're rocking an overkill setup, but especially in the editor-focused version they go pretty in-depth on PCIe lanes, USB cards, etc.

I fear you probably won't be saving much money compared to thin clients, though. Motherboards and CPUs that support 8 or so expansion cards and have proper IOMMU group splitting to pass them through won't exactly be cheap.


Here is another solution that recently came up in my local LUG, if Linux is an option for you. He calls it Cloud in a Box. It essentially creates a container per user, with access through Apache Guacamole, so anything that can run HTML5 can use the desktop. Here is the recording of his talk.

I know it's not what you had in mind, but it's possibly a more cost-effective way to accomplish the end goal.

Do you need Windows for the clients?
For Linux multiseat you can use DRM leases and a single GPU.
HW OpenGL acceleration needs a patched X server.
I've only tested this with a Radeon GPU + the modesetting DDX.
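To see how far one GPU stretches for multiseat, it helps to know how many outputs it actually exposes. A minimal sketch for a Linux host (the card0 index is just an assumption for the example):

```python
# Sketch only: list the display connectors one GPU exposes, since each seat
# needs its own output. Assumes a Linux host; "card0" is just an example index.
import glob
import os

for status_file in sorted(glob.glob("/sys/class/drm/card0-*/status")):
    connector = os.path.basename(os.path.dirname(status_file))  # e.g. card0-HDMI-A-1
    with open(status_file) as f:
        state = f.read().strip()  # "connected" or "disconnected"
    print(f"{connector}: {state}")
```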

For Windows guests with KVM + QEMU, an emulated QXL GPU + SPICE combined with the same multiseat setup could work.
Hard to say what graphics performance would look like; probably nothing great.
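A minimal sketch of what that QXL + SPICE launch could look like with plain QEMU, assuming qemu-system-x86_64 is installed; win10.qcow2 and the SPICE port are placeholders, one port per guest/seat:

```python
# Minimal sketch of the QXL + SPICE route with plain QEMU, not a tested recipe.
# "win10.qcow2" and the port number are placeholders; one SPICE port per guest.
import subprocess

args = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "4G",
    "-smp", "2",
    "-drive", "file=win10.qcow2,format=qcow2",    # placeholder Windows image
    "-vga", "qxl",                                # emulated QXL GPU
    "-spice", "port=5930,disable-ticketing=on",   # SPICE display, no auth (lab only)
]
subprocess.run(args, check=True)
```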

But it would be a maintenance burden, as a recent kernel, a patched X server, and a complex configuration would be needed.

Thin clients are not as bad as many people think they are! They are rugged and can withstand a substantial amount of abuse,
and they are simple to configure.
Because many of them have a PCIe slot, you could probably install an SSD and run a different OS on them.
Some of them let you plug in SATA laptop drives (I did this with an IGEL thin client and ran GhostBSD on it; it was very fast with an 8 GB memory stick).
I have them in my garage, wood shop, and PC repair center (all networked to a server in the repair center).
The first thing I did was replace the memory sticks with the maximum amount of RAM they could handle.

I think the cost and complexity of a project like that isn't worth it for a school. If you consider the added cost in dollars per hour spent configuring and maintaining a system like that, I think you'd be better off buying 4-5 compact machines that you can attach behind the monitors.

If you want to go with a “thin client” approach, you could build one machine and use Raspberry Pis as RDP clients to connect to the main machine. If you'll be using Ethernet, those little SBCs are perfect for the job.


I think we are missing the point here: the box is supposed to be an all-in-one box with a NAS, an NVR, plus some Windows clients.
I don't want to run thin clients.
Let's first solve the PCIe extender card question.

Has anyone used one with a bare-metal hypervisor?

My plan is to use those cards and pass through two cards via each PCIe port.

So if I'm getting this right, you want an all-in-one box, and all the users who are going to use it will be right around it, correct?

And to your question about the cards: those cards usually use a PLX chip. As long as the slot you are plugging into gets its lanes from the CPU and not the motherboard's chipset, you should be able to pass through the PLX chip and everything attached to it. You would need something like a dual Xeon v1/v2 system or a Threadripper system to get that built relatively cheaply.

Caveat: I've not used those PLX boards myself, just going by what I know about passthrough so far.
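One way to sanity-check what would get passed through together: on a Linux/KVM host you can walk the IOMMU groups in sysfs (ESXi shows its own passthrough view instead). A quick sketch:

```python
# Quick check of what lands in each IOMMU group on a Linux/KVM host
# (ESXi has its own passthrough view, so this is only for the KVM route).
# Devices sharing a group get passed through together.
import glob
import os

groups = sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                key=lambda p: int(os.path.basename(p)))
for group in groups:
    devices = sorted(os.listdir(os.path.join(group, "devices")))
    print(f"IOMMU group {os.path.basename(group)}: {', '.join(devices)}")
```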

If it's a PLX chip, I think it would work fine, yes.
BUT aren't those supposed to be expensive? These cards are pretty cheap.
I have some confusion with this PCIe passthrough: I have two Intel I340-T4 (4-port gigabit) NICs in my homelab, and ESXi lets me pass through each port as a separate NIC to a VM. That sort of goes against this IOMMU grouping thing, which is getting me confused.
I was thinking maybe it would somehow be possible to individually pass through the cards on those mining multiplexer cards.

OSS (One Stop Systems) has a lot of PCIe expanders (and switches and the like). They are very costly, though.
https://www.onestopsystems.com/pcie-expansion

Those quad-NIC cards are intended to be used like that: each port shows up as its own PCI function, so the OS can individually address the NICs out of the box and pass them through. I'm pretty sure there is an abstraction layer above that which doesn't hurt on Ethernet because, compared to a GPU, the bandwidth is rather low.

Because of the performance requirements in your case, you can't abstract the GPU part of the system. That's why you need to attach the actual hardware to the VM. If you had one of those dual-GPU cards, chances are you could somehow pass each GPU to a different VM.

Also, virtual network interfaces have been a thing since day one of virtualization. Accelerated 3D graphics in VMs is comparatively new.
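To illustrate the quad-NIC point: each port is a separate PCI function of the same device, which is why the hypervisor can hand them out individually. A small sketch that lists multi-function devices on a Linux host (standard sysfs paths assumed):

```python
# Sketch: show multi-function PCI devices on a Linux host. A quad-port NIC
# like the I340-T4 shows up as four functions (.0-.3) at one bus:slot, which
# is why each port can be handed to a different VM.
import os
from collections import defaultdict

functions = defaultdict(list)
for addr in os.listdir("/sys/bus/pci/devices"):   # e.g. "0000:03:00.1"
    slot, func = addr.rsplit(".", 1)
    functions[slot].append(func)

for slot, funcs in sorted(functions.items()):
    if len(funcs) > 1:                            # multi-function devices only
        print(f"{slot} -> functions {', '.join(sorted(funcs))}")
```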