Hey folks. I'm looking at building new computers for our engineering/machining departments. We're growing, have hired on, and have outdated machines. I'm in a position where virtualization (I THINK) makes some sense. Since I really need 3 or 4 machines anyway, why not virtualize? None of these computers are really used heavily at the same time, so I think the new 3900X can do the heavy lifting with no issue. My hangup, however, is PCIe lanes. I just don't think I have enough to do more than 2 workstations: with 24 lanes, 4 taken by the chipset and 4 for an NVMe drive, that leaves 16 for two x8 cards (unless someone releases PCIe 4.0 GPUs that can run in x4).
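Just to sanity-check the lane math from the post as a quick sketch (numbers are the ones quoted above for a 3900X; your board's actual slot wiring may carve them up differently):

```python
# Lane-budget sketch for a Ryzen 3900X, using the figures from the post.
# The constants here are assumptions pulled from the thread, not a spec sheet.
TOTAL_LANES = 24      # usable CPU lanes on the 3900X
chipset_link = 4      # x4 link reserved for the chipset
nvme = 4              # one x4 NVMe drive

free = TOTAL_LANES - chipset_link - nvme   # lanes left for GPUs
gpus_at_x8 = free // 8                     # how many x8 GPUs fit

print(f"{free} lanes free -> {gpus_at_x8} GPUs at x8")  # 16 lanes free -> 2 GPUs at x8
```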
Do you guys think this is worth pursuing? I could do 2 workstations, but 3 or 4 looks much more attractive. Or is there a better way? These are CAD/CAM workstations.
You’d need something more like Threadripper/EPYC for that project. Other problems include how you’d handle passing through USB ports or peripherals; alternatively, go with some form of remoting into those VMs via thin clients.
For what you need it’d probably be more cost effective to go with 4 individual systems
I was thinking of using a PCIe expansion card. Unfortunately, PCIe 4.0-to-PCIe 3.0 doesn’t exist yet, but the card could still be used for hot plugging (similar to LTT’s build).
After looking at it, I agree the 3900X doesn’t have enough PCIe lanes, but I also have a 6850K with 40 lanes and 6 PCIe slots on the board. In that case, I have enough lanes/slots to do AT LEAST two workstations without an expansion card. Does unRAID still require a GPU of its own?
For most hypervisors you shouldn’t need a GPU outside of the installation process and setting up the initial networking to access the webgui. Once you can do that, you can pass it through just fine.