I work for Bosch and we need to virtualize 4 existing machines, each of which has 5 fully occupied USB ports, an Intel Xeon E5-1650 processor, and 32 GB of RAM. Each of these machines also has an Nvidia Quadro K4400. They are massive tower-style cases and take up so much space that we are unable to add more automation benches for vehicle simulation. We need to build a system that can host between 4 and 12 virtual machines, each with between 4 and 6 cores and between 12 and 32 GB of RAM. Each virtual system needs to have 5 USB ports passed through, most likely via a PCI-E expansion card, and each system will need a minimum of 2 TB of storage space. The plan is to have around 16 TB of physical storage on board in RAID 5 on spinning rust, plus around 256 GB in RAID 1 for the OS and host software.
Please help me find the best solution to this problem, or give me input on how I can order or build a machine for my department.
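To sanity-check the numbers before anyone picks parts, here is a rough capacity sketch in Python. The per-VM figures are just the ranges from the post above; the 5 x 4 TB RAID 5 layout is my own assumption for how the "around 16 TB" might be built.

```python
# Rough capacity check for the host build described above.
# Assumptions (mine, not confirmed): the low end of every range for the small
# scenario, the high end for the large one, and a 5 x 4 TB RAID 5 bulk array.

vm_counts = (4, 12)            # planned range of guests
cores_per_vm = (4, 6)
ram_per_vm_gb = (12, 32)
storage_per_vm_tb = 2          # stated minimum per guest
usb_ports_per_vm = 5

raid5_disks = 5                # hypothetical: 5 x 4 TB spinning disks
raid5_disk_tb = 4

for vms, cores, ram in zip(vm_counts, cores_per_vm, ram_per_vm_gb):
    print(f"{vms} VMs -> {vms * cores} vCPUs, {vms * ram} GB RAM, "
          f"{vms * storage_per_vm_tb} TB storage, {vms * usb_ports_per_vm} USB ports")

usable_tb = (raid5_disks - 1) * raid5_disk_tb   # RAID 5 loses one disk to parity
print(f"RAID 5 usable: ~{usable_tb} TB "
      f"(12 guests x 2 TB = {12 * storage_per_vm_tb} TB would not fit)")
```

The worst case (12 guests at the top of each range) lands around 72 vCPUs, 384 GB of RAM, and 24 TB of guest storage, so the high end of those ranges is where the single-box plan gets tight.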
Each machine will need 4 USB ports to connect to the physical automation equipment, as well as 16 GB of RAM, because the automation software sadly is not forgiving.
The workbenches will end up having the Quadros removed and returned to resource allocation; we don't need them for what we are doing. Most of what we do is heavy CPU tasking along with large data storage. The host machine will cover the GPU, and each virtual machine will need to use a PCI-Express to USB adapter to connect the equipment.
There is very little display output; what little there is, if it gets too demanding, will go through a cheap AMD or Nvidia card in the host machine. The virtual machines will only be given 12 MB of video RAM each.
Well then, outside of server stuff, maybe this for some parts ideas. I threw in the workstation GPU just because, but it does have 4 DisplayPort outputs and would be low power.
The CPU supports 40 PCIe lanes and 768 GB of RAM, though the RAM speed isn't much; you could always go higher, that's just the minimum I think. It's 8 cores / 16 threads.
The 128 GB RAM kits start at like $650, so screw that.
I have to admit I am very impressed, bravo! My manager has informed me that the system needs to be built to accommodate up to 12 virtual guests, each with 4 CPU cores, 16 GB of RAM, and 4 USB ports, which is about 40 PCIe lanes and 48 guest cores; assuming we leave 4 cores for the host, that is about 52 cores... do you think you can help find a mobo that will support 8 PCI-Express connectors and dual CPUs?
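Just to restate the math on that spec (my own arithmetic from the numbers above; the note about lanes per USB card is a guess on my part):

```python
# Back-of-envelope totals for the revised spec: 12 guests, 4 cores / 16 GB / 4 USB each.
guests = 12
cores_per_guest = 4
ram_per_guest_gb = 16
host_reserved_cores = 4

guest_cores = guests * cores_per_guest            # 48
total_cores = guest_cores + host_reserved_cores   # 52
total_ram_gb = guests * ram_per_guest_gb          # 192, before host overhead

# PCIe lane needs depend on the USB cards chosen: x1 cards would only need
# ~12 lanes for 12 guests, x4 cards closer to 48.
print(f"{guest_cores} guest cores + {host_reserved_cores} for the host = {total_cores} cores")
print(f"{total_ram_gb} GB of guest RAM, plus whatever the host needs")
```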
In all reality, you're going to want more than one system for a few reasons:
Redundancy: what happens if a component dies?
Load balancing: most hypervisors allow for this, and it'll not only increase speed but also let you avoid maxing out your boxes by balancing the VMs between the two.
Scaling: you want to be able to scale in the future.
Just build like 2 of the machines I posted, but with higher-end CPUs or something for the extra cores; that gives you more than enough PCIe. Not sure on the RAM though; you may still need a server board if you want more.
On Newegg at least, I can only find one board that supports 512 GB of RAM that's LGA 2011-v3.
C612 is the chipset for 2011-v3 server boards, I guess.
Of course, if you don't care about power consumption and could live with less CPU performance, the 990FX chipset on AM3+ supports a ton of PCIe lanes, but then you're limited to 8-core CPUs.
Also, is something like just a bunch of ITX i3-based machines out of the question?
Something like this, I guess, with a tiny case like the Elite 110.
So... something that is equivalent to 4-12 Intel Xeon E5-1650 (v3?) machines. If the CPU utilization is going to be maxed out most of the time, that is going to be difficult to do, if not impossible. 4 is doable.
Off the top of my head I would suggest something like...
You really might need more than one virtual machine host. If whatever you're doing is really that CPU intensive and was fully utilizing a Xeon E5-1650 (v3?) CPU, that's pretty hard to match. Also, for industrial use you definitely don't want to go for an X99 motherboard; it's gotta be Intel C612 for ECC memory support, for reliability's sake.
The CPUs in the current boxes aren't fully maxed out; in fact they see only about 30% maximum utilization and around 10 GB of RAM in use at a constant load with the software. We do not have any other rooms available in our location except this one, so we cannot use Thunderbolt to bring the cables into another room.

The issue with the current setup is that when we find a solution to one problem in the automation software, we have to copy the roughly 25 GB fix from one computer to the other over FireWire. That's right, FireWire; it's the only way around the internal security protocols that Bosch put up, since USB mass storage devices are disabled by default and we cannot enable them. The stations also take up a lot of working space, basically the space 3 gaming rigs would take up, because half of it is automation equipment that will be rack-mounted into a shelf, with all of the cables ported over to our processing box where all of the magic will happen. Since my department wants to cram 12 of these setups into a room that is no bigger than 8 ft x 6 ft, we need to optimize the space. That is why we are building this solution: we are currently maxing out the internal 1 TB storage drive and the 256 GB SSDs with software on every machine, and we need a better solution that allows for mass storage and extremely compact computing so we can fit more octopus boxes.
Well... if it's about 30% utilization, the dual 10-core Xeon E5 build above should allow you to run about 8 clients. If you're only using that much RAM you could go with about half the memory, two 64 GB kits for a total of 128 GB... but hey, always good to have room to grow. Just remember the motherboard only supports Registered ECC DIMMs.
If you wanted to scale comfortably above 8 clients, you would probably need a second machine; then you could run 16 clients with your workload.
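For what it's worth, here's the rough oversubscription math behind that 8-client estimate. The dual 10-core CPUs are an assumption based on the build mentioned above, and the ~30% figure is just the utilization you reported:

```python
# Very rough CPU oversubscription estimate for one dual-socket host.
# Assumption: 2 x 10-core Xeon E5, guests averaging ~30% CPU utilization.
physical_cores = 2 * 10
clients = 8
vcpus_per_client = 4
avg_utilization = 0.30

demanded_cores = clients * vcpus_per_client * avg_utilization  # ~9.6 cores on average
print(f"~{demanded_cores:.1f} cores of average demand vs {physical_cores} physical cores")
# Plenty of headroom for bursts; 16 clients split across two hosts works out the same way.
```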
If you needed to move large amounts of data between the two hosts, maybe your IT security guys would let you put a 10GbE network card (maybe even SFP+ fiber, which would worry them less) in each machine and just hook them directly to each other ad hoc, off the corporate network.
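To put numbers on how much that would help with those 25 GB fix copies (link speeds here are nominal, real-world throughput will be lower, and FireWire 800 is my assumption for what's in the current boxes):

```python
# Rough transfer-time comparison for a 25 GB file.
# Assumption: FireWire 800 (~800 Mbit/s) today vs a direct 10 GbE link.
file_gb = 25
file_bits = file_gb * 8 * 1e9   # decimal gigabytes, for a rough estimate

for name, mbit in (("FireWire 800", 800), ("10 GbE", 10_000)):
    seconds = file_bits / (mbit * 1e6)
    print(f"{name}: ~{seconds / 60:.1f} minutes at line rate")
```

That's roughly 4 minutes versus about 20 seconds per copy at line rate, before protocol overhead.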
Also, for your storage problems, that ASRock motherboard has 10 SATA3 ports and plenty of PCIe 3.0 x16 slots if you need to expand by installing an additional drive controller.