Possible VM Server - somewhat of a noob

My father and I run a process instrumentation and industrial automation business in Wisconsin, and quite frankly I’m getting tired of managing all of the employee computers, on both the software and hardware side. It really eats up my time when I should be out in the field working for the customers that keep our doors open. Every hour I’m not in the field is $150 lost, and unfortunately I’m the only technician we have right now who’s capable of certain things.

So, I was thinking of putting in one VM server with an HDMI drop and a USB drop with a hub at each desk - there are 5 in total. This way I can consolidate all of the management onto this one server and eliminate our NAS while I’m at it.

I was leaning towards independent GPUs for each station as well, so I could avoid the VMware subscription cost and eliminate the need for thin clients.

I guess my questions are as follows -

  1. Do you guys see this thinking as logical and correct? Especially the independent GPUs for each station. Can I do this with Windows Server, or do I need extra software? Linux isn’t an option for us, btw.

  2. Which platform should I lean towards? Both Intel and AMD offer core counts high enough for me, but I’ve heard there are some issues with AMD and virtualization? I’m not really sure here and could definitely benefit from outside input. If AMD is a valid option, is there any benefit in going Epyc over Threadripper?

Thank you all for your input in advance!

Time to contract with an MSP, or look into Desktop as a Service (DaaS) from Dell, HP or Lenovo.

Some sort of home-grown “five workers, one CPU” build modeled after the “five gamers, one CPU” setups is going to be nothing but pain.

IMO, you are going to spend more time setting it up and getting it to work than you would save by not having separate machines.

For the VMware remote access to a VM with a GPU, that also requires specific models of Nvidia card that support vGPU, although you might be able to get away with one, probably two cards. Then the Nvidia vGPU software is pricey: depending on what you are doing, it is $450 per year per user. Then you have to set up and manage the licensing for the servers. And you have to buy, set up, and maintain the thin clients. And pay for VMware itself.
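To put a rough number on the licensing alone (a back-of-the-envelope sketch: the $450/user/year vGPU figure is from above, the $150/hour is the billable rate from the first post, and VMware, thin clients, and hardware are deliberately left out):

```python
# Back-of-the-envelope yearly cost of the Nvidia vGPU licensing alone.
# Only the $450/user/year figure and the $150/hour rate come from this thread;
# VMware itself, thin clients, and hardware are not included.
USERS = 5
VGPU_LICENSE_PER_USER = 450    # USD per year, figure quoted above
BILLABLE_RATE = 150            # USD per hour, from the original post

yearly_licensing = USERS * VGPU_LICENSE_PER_USER
breakeven_hours = yearly_licensing / BILLABLE_RATE

print(f"vGPU licensing: ${yearly_licensing}/year, "
      f"roughly {breakeven_hours:.0f} billable hours just to cover the licenses")
```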

The Windows Server option, RemoteFX, is deprecated, and it would also require thin clients.

So your only real option is to do normal PCIe passthrough, with one card per user.
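For what it’s worth, on a Windows Server host that kind of passthrough is done with Hyper-V’s Discrete Device Assignment (DDA), which is driven from PowerShell. Below is a rough sketch of the documented flow, wrapped in Python only so it could be repeated per station; the location path and VM name are placeholders you’d swap for values from your own host, and whether a given consumer GPU cooperates with DDA is its own adventure.

```python
import subprocess

# Sketch of Hyper-V Discrete Device Assignment (DDA): one GPU handed to one VM.
# LOCATION_PATH and VM_NAME are placeholders; look up the real location path in
# Device Manager (or via Get-PnpDeviceProperty) and disable the device on the
# host before dismounting it. Run this elevated on the Hyper-V host.
LOCATION_PATH = "PCIROOT(0)#PCI(0300)#PCI(0000)"   # placeholder example
VM_NAME = "Station-1"                              # placeholder example

def ps(command: str) -> None:
    """Run one PowerShell command on the host and fail loudly if it errors."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# A VM with an assigned device must power off rather than save state.
ps(f"Set-VM -Name '{VM_NAME}' -AutomaticStopAction TurnOff")
# Give the guest enough MMIO space for a modern GPU (values are typical examples).
ps(f"Set-VM -Name '{VM_NAME}' -GuestControlledCacheTypes $true "
   f"-LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB")
# Detach the GPU from the host, then assign it to the VM.
ps(f"Dismount-VMHostAssignableDevice -Force -LocationPath '{LOCATION_PATH}'")
ps(f"Add-VMAssignableDevice -LocationPath '{LOCATION_PATH}' -VMName '{VM_NAME}'")
```

Undoing it is the same dance in reverse (Remove-VMAssignableDevice, then Mount-VMHostAssignableDevice), and you still need a USB controller or network KVM story for the keyboard and mouse at each desk.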

You are not going to save that much time on the software side. You will still have the host OS to manage, and then five Windows VMs to manage, so the software workload is going to be about the same.

So I would suggest either first looking into making the software side easier to manage or hiring out some or all of the labor like @gordonthree suggests.


If you don’t mind us asking… What kind of problems are you specifically running into with the existing machines. Maybe there are some things that could be done to limit common issues instead of having to reinvent the wheel.


Thanks to you all for the replies and guidance. I will look into an MSP, although it’s always been in my bones not to outsource what you can do yourself. The time savings might very well be worth it, though.

I’ve been having issues lately with some hardware failures (mostly drives), as well as some Windows update issues that affected two of our PCs and the laptop one of our workers uses for remote work.

Now that I’ve thought about it a bit more, though, some of our issues have been COVID-related in that I’m now having to manage our employees’ laptops for their remote work, and my initial VM plan doesn’t help much with that other than streamlining some basic software licensing. I should really start by nixing the VPN I have set up for remote file access and maybe switch to remote desktop or something, because that would eliminate some of the software licensing headaches as well.
