Hardware recommendations for a VDI server serving a midsize office (25-30 clients)

hey,
So I managed to convince a friend of mine to use VDI for a small office he is about to set up. The budget is limited, so to me that’s the best route.

I’m planning to use Fedora Server as the KVM host and TrueNAS Core as the storage backend.

Clients will PXE-boot a lightweight Linux distro and connect to their respective VMs via SPICE or VNC.
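As a concrete sketch of that client side, the PXE image could run something like the script below at login. The host address and the client-number-to-port mapping here are assumptions for illustration, not a SPICE convention:

```shell
#!/bin/sh
# Hypothetical thin-client session script. Each PXE-booted client derives
# its own SPICE port from a client ID, so one identical image can serve
# every seat. VDI_HOST and the port scheme are made up for this sketch.
VDI_HOST="${VDI_HOST:-10.0.0.10}"     # the KVM host (assumed address)
CLIENT_ID="${CLIENT_ID:-1}"           # e.g. parsed from the client hostname
SPICE_PORT=$(( 5900 + CLIENT_ID ))    # client 1 -> 5901, client 2 -> 5902...
URI="spice://${VDI_HOST}:${SPICE_PORT}"
echo "connecting to ${URI}"
# On a real client this line would take over the session:
# exec remote-viewer --full-screen "$URI"
```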

For hardware acceleration I’m planning on NVIDIA 20-series cards with vGPU unlock.

Clients will mostly run Windows with Office, but 6-7 of them should be able to run SolidWorks (not that heavy a usage, though).

I want to use AMD, but I’m not sure exactly how many cores I would need.

Will a single 5995WX be enough (assuming 4 threads per office user), or should I cough up for a dual-socket EPYC server?
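For what it’s worth, the 4-threads-per-seat assumption can be sanity-checked with quick arithmetic; the 128-thread figure below corresponds to a 64-core part (flagship Threadripper PRO, or a single 64-core EPYC):

```shell
#!/bin/sh
# Back-of-envelope check of 4 threads per seat against a 64-core/128-thread
# CPU. The seat count is the top end of the 25-30 range from the post.
SEATS=30
THREADS_PER_SEAT=4
HOST_THREADS=128
NEEDED=$(( SEATS * THREADS_PER_SEAT ))
echo "vCPUs needed: $NEEDED of $HOST_THREADS"
# 120 of 128 threads committed leaves almost nothing for the host OS,
# storage, and SPICE encoding - workable only because desktops idle a lot.
```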

Or maybe get two 5995WX systems and run them in a cluster?

Should I run TrueNAS Core as a VM under those servers, or get a separate system for that? (By separate system I have something like a Ryzen 5900X or 7900X in mind. How much CPU would I need for TrueNAS to serve these VMs, and maybe some other Samba/NFS datastores, in a fast and reliable fashion?)

I apologize for the amount of questions. I have done this sort of setup before, just not at this scale, so I’m not sure what resources would be needed.

I’m not very familiar with SPICE, but can’t it only send out a Linux environment to clients (no SolidWorks)?
VNC would give a very poor SolidWorks experience no matter how powerful the computer behind it was.

Do you expect the 25-30 client load all at once?

Fedora is a curious choice for a small office. The life cycle of a Fedora release is ~13 months after which security and functional updates stop, so in essence the choice of Fedora dictates at least an annual upgrade cycle.

I have seen lots of people choose to run TrueNAS in a VM only to use a different technology as the virtualization host (in this case Fedora). I don’t really get the appeal, considering that TrueNAS has virtualization built in.
Why would you run TrueNAS virtualized as opposed to running it as the virtualization host?


KVM vs. bhyve. Of course, running TrueNAS SCALE instead of Core would address that.


Are you doing this for free, or are you planning on getting paid / a service contract?


Are you using the term ‘friend’ quite loosely here? Or maybe you both have some masochistic tendencies?

Not to be the bad guy here, but this looks like it might be a bigger project than you want to get yourself and your friend into.

Just in case you are serious:

Use an EPYC server, but NOT a DUAL-CPU EPYC server.

For your clients I would not add the extra work of PXE boot. Just local-boot them from a small SSD and keep a golden image on a deployment server like FOG (or even a bootable recovery USB).
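The golden-image workflow that FOG automates boils down to block-copying a master disk image. A minimal sketch of the capture/restore cycle, using plain files as stand-ins for the real SSDs:

```shell
#!/bin/sh
# Golden-image capture and restore, demonstrated on plain files instead of
# real block devices (all filenames here are stand-ins).
set -e
dd if=/dev/zero of=master.img bs=1M count=4 2>/dev/null    # pretend master SSD
gzip -kf master.img                                         # capture golden image
gzip -dc master.img.gz | dd of=client.img bs=1M 2>/dev/null # restore to a client
cmp master.img client.img && echo "client matches golden image"
```

With real hardware the `of=` target would be the client’s disk (e.g. `/dev/sda`), which is exactly what FOG or a bootable recovery USB does for you.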

You should use a real PRO-grade GPU for this deployment: Radeon Pro or NVIDIA A-series maybe, like an RTX A6000.

Fedora with its KVM GUI is a fine host, though Proxmox might be more common. TrueNAS Core is good for performance and storage. I would build a small separate box for TrueNAS just because you will be doing a lot of customizing to get your VDI and everything else working; I would not add a TrueNAS VM to that build time.
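If the TrueNAS box does end up separate, wiring it to the KVM host can be as simple as an NFS export mounted where libvirt keeps its images. A sketch of the fstab entry; the IP and dataset path are placeholders, not anything TrueNAS mandates:

```
# /etc/fstab on the KVM host - 10.0.0.20 and /mnt/tank/vmstore are placeholders
10.0.0.20:/mnt/tank/vmstore  /var/lib/libvirt/images/truenas  nfs  defaults,_netdev  0  0
```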


I did such an installation in 2013 for a call center with 10 clients.
I used two Ubuntu 12.04 LTS servers with the Linux Terminal Server Project (LTSP), running as VMs on ESXi, and isc-dhcp-server was clustered for HA.

You should definitely use two servers, so you can carry out maintenance on one server in peace, then reboot all clients, and then update the second server.

The setup was such that the call center agents used Linux as the primary system, but Windows apps could also be used via a Windows Terminal Server.
The accounting department wanted to use Windows directly, which was no problem: you can also connect the thin clients directly to a specific Windows Terminal Server without going through Linux first, or the user can use hotkeys to select another server.

The hardware requirements were lower than for a Windows Terminal Server with the same number of clients, but that depends a lot on what exactly the clients are doing.
I used Xubuntu for the clients because of Xfce, and Remmina was used for the RDP sessions in Linux.
I’ve updated the solution over the years, up to Ubuntu 20.04.

According to https://spice-space.org/, SPICE works for both Windows and Linux guests and hosts.

If not, I can use NoMachine or Sunshine + Moonlight.

  • Yes, the expectation is for all the clients to load and function at once.

I take Fedora as the closest analogue to Red Hat Enterprise Linux, plus updating the OS once a year is not a bad thing IMO; there will be issues, but I can deal with those.

TrueNAS Core is FreeBSD-based, and virtualization features are not that full-fledged on bhyve, especially for Windows guests needing hardware acceleration.

As for TrueNAS SCALE: yes, it’s Linux-based, but even if it were a performance match for the BSD version, I would not run my whole VM infrastructure under TrueNAS. It simply is not built for that.

It’s free of course.


If I’m using an EPYC server, how many total cores should I get to serve my needs?

I have done PXE booting an OS image in the past (on a Raspberry Pi, though), so I don’t see that as much of an issue.

All clients will load a lightweight Linux distro simply to log into their Windows VMs, so not much will be happening client-side.

Using a “PRO GRADE” GPU doesn’t solve much, the reason being that I’m not planning to get a GRID license - FOR NOW.

My issue with using a separate box for TrueNAS is that it really needs to be a server of some sort so I can use NVMe storage (I would need lots of PCIe lanes).
It’s a hard decision indeed.

Rocky Linux would be the RHEL analogue you are looking for.


I’ll be that guy…

If this is the case, why are you recommending to your friend to set this up?

You’re potentially messing with this guy’s livelihood; unless you’re capable of supporting this, I’d not be making suggestions to your friend to set it up.

IT projects are full of risk at the best of times, and plunging headlong into a project you’re not well versed in for a friend just has disaster written all over it. If not just for him, for you as well - you’re going to be the single source of support for this at all hours, for all problems… Good luck with any sort of SolidWorks support query if you have one.

Ruined friendship, business continuity problems… tread carefully.

Additionally: you don’t want to do VDI without a farm of multiple machines, because when this single machine has a hardware or software failure, all of your users are fucked.

25-30 users x hourly rate is a lot of out-of-pocket money for your friend when this happens. One 8-hour day x 30 users at only $20/hr is almost 5 grand in wages for zero productivity alone (and guys running SolidWorks aren’t only on $20/hr!!).

Would suggest at least 3 smaller servers (rather than one massive one) to handle the workload, so you can tolerate one failure (or outage for maintenance) and drop only 33% of your capacity.


I can understand that, through economies of scale, VDI saves enterprises cost in deployment, support, and components.

But for small to medium-sized businesses, does it work out to saving money in the end?

I’ll also dogpile on here.

Here are the red flags I am seeing:

Deploying a reliable VDI cluster costs money $$$$ as well as manpower (server maintenance and software upkeep). Yes, you can do this on a relatively small budget, but not in any capacity I would call reliable enough for business use. SFF and micro PCs are so cheap these days that unless you are already running a decently sized server rack/datacenter, I don’t think the licensing and hardware costs will really come out in your favor.

If you want to truly take on a project like this, then you should write down every step of the process and be very rigorous in your planning and execution. Whatever man-hours you think it will take, pad it by an extra 30% or so for troubleshooting. Same with the budget.

Proxmox or XCP-ng will be your friend here. While yes, you can use Fedora or Rocky, running a VDI server is more in the scope of a proper hypervisor OS.

Hacking consumer stuff to run a business on is a bit risky IMO. Yes, you can do it, and chances are you can get everything up and running no problem, but if this system is supposed to be your money maker, then why not do it right? With a proper enterprise card you would get software support and proper licensing that would make everything much more plug-and-play.

^This, don’t use business partners/friends as experiments for your personal curiosity or projects. Take a step back and be brutally honest with yourself about your ability to actually execute on this project.

If you do move forward then please feel free to update us here and document the process here.


My reasoning for Fedora Server instead of, say, XCP-ng, Xen, ESXi, etc.:

  1. I wanted the server to be KVM-based (so no Hyper-V or ESXi).
  2. I have used both XCP-ng and Xen in the past, and it was nothing but trouble.
  3. I have used Proxmox, and it is my fallback for now if things don’t go smoothly with Fedora, which I doubt.
  4. Fedora is the closest analogue to RHEL in a more official manner than Rocky, IMO.
  5. In the land of KVM there are no “proper hypervisor OS”es, are there?

Besides that, my friend and I have always had the idea of setting up a VDI infrastructure, so it’s more of a collaboration.

On top of that, we are planning to get some older, cheap HP Z2 Mini PCs (8th-gen i5) as our client systems (got a really good deal on a bunch of them), so if worst comes to worst we can run Windows natively on them.

Again, what I’m not sure about right now is how much hardware my NAS and systems combined would need. Right now I’m thinking 4 threads for each office PC and 8-12 threads for each SolidWorks one.

As far as that goes…
For general office users you can normally get away with much less than that. Your biggest bottlenecks in a VDI environment (or any virtualised platform, for that matter) are normally storage IO throughput and then memory, not so much CPU, because outside of occasional spikes, end-user desktops running business productivity apps are roughly 99% idle in CPU terms. I’d say a baseline of 2 threads per user in terms of hardware would be sufficient; heavy users maybe 4 threads (again, for basic office apps). Every VM cluster I’ve managed for the past 15 years has run out of IO first.
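Putting numbers on that baseline (seat counts from the thread; 7 SolidWorks seats at the OP’s 8-thread figure, the remaining 23 at 2 threads):

```shell
#!/bin/sh
# Rough vCPU budget: 23 office seats at 2 threads, 7 CAD seats at 8 threads.
# The split (23 + 7 = 30 seats) comes from the thread; nothing else assumed.
OFFICE=23;  OFFICE_T=2
CAD=7;      CAD_T=8
TOTAL=$(( OFFICE*OFFICE_T + CAD*CAD_T ))
echo "total vCPU threads: $TOTAL"
# ~102 threads fits a single 128-thread box on paper, but expect storage IO
# to be the real ceiling long before CPU.
```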

For SolidWorks, all bets are off. I would not run that via VDI at all. I personally wouldn’t even go off-brand for hardware; I’d get the HCL from SolidWorks (or contact your vendor for a recommended platform) and buy the stuff they recommend. Which is likely a Quadro. Which won’t have crippled double-precision floating point performance, etc.

The risk of compatibility or performance issues for the people who are likely the breadwinners in the business is too much. The hourly rate of a professional SolidWorks user, and the billable rate they can book out at, is not worth fucking around with non-certified home-built solutions. You only need one or two of them to lose a day of productivity and you’ve done the cost of a proper machine in lost billables. Easily. Never mind missed project deadlines, etc.

But seriously, what problem are you trying to solve here? VDI can work if you have a server farm built for it, as you can centralise data and control, but for 25-30 users in the same office I’d suggest that you’re simply building in a single point of failure (unless you build a farm with redundancy, which is likely cost prohibitive for 25-30 users) and adding complexity/compatibility issues.

Even if it works today, all you need is some Solidworks or windows update to break things and you’re boned; SolidWorks support won’t give a toss about your non-officially-supported platform.


So far we have carefully avoided exploring the license costs of such a VDI setup. Has the OP taken those into consideration, or are we simply going privateer?

That’s a whole other issue.

But even if the licenses are legit (I’m assuming they are), SolidWorks, or any other engineering software, will likely not certify (or support) their product for use on some janky home-brew VDI setup.

You’d be lucky to get vendor support (e.g., “I have some display bug/crash issue in this particular model when I rotate it in this manner”) even on a gold-partner closed-source VDI platform (Citrix, VMware Horizon or whatever that’s called these days, etc.), never mind rolling your own.