Building Corporate VM Host Machine

Hello,

I work for Bosch, and we need to virtualize 4 existing machines, each of which has 5 fully occupied USB ports, an Intel Xeon E5-1650 processor, and 32 GB of RAM. Each machine also has an Nvidia Quadro K4400. These machines are massive towers and take up so much space that we are unable to add more automation benches for vehicle simulation.

We need to build a system that can host between 4 and 12 virtual machines, each with 4 to 6 cores and 12 to 32 GB of RAM. Each virtual machine needs 5 USB ports passed through, most likely via a PCI-E expansion card, and each needs a minimum of 2 TB of storage space. The plan is for around 16 TB of physical storage on board in RAID 5 on spinning rust, plus around 256 GB in RAID 1 for the OS and host software.
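To sanity-check those totals, here is a quick back-of-the-envelope sketch; all the numbers are just the ranges from the post above, and the RAID 5 example assumes five equal-size 4 TB drives (the drive count is an assumption, only the ~16 TB figure is from the post):

```python
# Back-of-the-envelope sizing for the VM host, using the ranges above.

def totals(vms, cores, ram_gb, storage_tb=2, usb=5):
    """Aggregate guest requirements for a given VM count and per-VM spec."""
    return {
        "cores": vms * cores,
        "ram_gb": vms * ram_gb,
        "storage_tb": vms * storage_tb,
        "usb_ports": vms * usb,
    }

low = totals(vms=4, cores=4, ram_gb=12)    # minimum build-out
high = totals(vms=12, cores=6, ram_gb=32)  # maximum build-out

print(low)   # {'cores': 16, 'ram_gb': 48, 'storage_tb': 8, 'usb_ports': 20}
print(high)  # {'cores': 72, 'ram_gb': 384, 'storage_tb': 24, 'usb_ports': 60}

# RAID 5 usable capacity: one drive's worth of space goes to parity.
def raid5_usable_tb(drives, drive_tb):
    return (drives - 1) * drive_tb

# e.g. five 4 TB drives give the ~16 TB of spinning rust mentioned above
print(raid5_usable_tb(5, 4))  # 16
```

Worth noting: at the full 12 guests x 2 TB each, guest storage alone is 24 TB, which already exceeds the planned 16 TB array.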

Please help me find the best solution to this problem, or give me input on how I can order a machine and build it for my department.

So, do the machines need much CPU performance?

And then you need, what, like 8 PCI-e slots? GPU/USB for each machine?

Each machine will need 4 USB ports to connect to the physical automation equipment, as well as 16 GB of RAM, because the automation software sadly is not forgiving.

Well, you could go for something like this; it has 8 PCI-e slots.

Or ya know, just like make an array of cheap machines in ITX cases, the HAF stacker cases might work for ya

What about relocating the machines next door, then using e.g. Thunderbolt to bring the USB and graphics back to your workbenches?


The workbenches will end up having the Quadros removed and returned to resource allocation; we don't need them for what we are doing. Most of what we do is heavy CPU tasking along with large data storage. The host machine will cover the GPU, and each virtual machine will need to use a PCI-Express-to-USB adapter to connect the equipment.

So wait then you don't need GPUs to be passed through?

Correct, just a stupid amount of USB.

And then does it need a lot of display outputs?

Very little display output. What little there is, if it gets too demanding, will go through a cheap AMD or Nvidia card for the host machine; the virtual machines will only be given 12 MB of video RAM each.

Well then, outside of server stuff, maybe this for some parts ideas. Threw in the workstation GPU just 'cuz, but it does have 4 DisplayPort outputs and would be low power.

Motherboard has a ton of PCI-e slots and supports 128 GB of RAM
http://www.newegg.com/Product/Product.aspx?Item=N82E16813188163&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-PCPartPicker,%20LLC-_-na-_-na-_-na&cm_sp=&AID=10446076&PID=3938566&SID=

The CPU supports 40 PCI-e lanes and 768 GB of RAM. Its clock speed isn't much, but you could always go higher; that's just the minimum, I think. It's 8 cores / 16 threads.

The 128 GB RAM kits start at like $650, so screw that.

PCPartPicker part list: http://pcpartpicker.com/p/p2wPyc
Price breakdown by merchant: http://pcpartpicker.com/p/p2wPyc/by_merchant/

CPU: Intel Xeon E5-2630 V3 2.4GHz 8-Core Processor ($629.99 @ SuperBiiz)
CPU Cooler: Thermalright HR-22 Fanless CPU Cooler ($98.94 @ Newegg)
Motherboard: EVGA Classified EATX LGA2011-3 Motherboard ($275.99 @ Newegg)
Memory: G.Skill Aegis 64GB (4 x 16GB) DDR4-2400 Memory ($249.99 @ Newegg)
Memory: G.Skill Aegis 64GB (4 x 16GB) DDR4-2400 Memory ($249.99 @ Newegg)
Video Card: AMD FirePro W4100 2GB Video Card ($154.99 @ B&H)
Power Supply: SeaSonic 760W 80+ Platinum Certified Fully-Modular ATX Power Supply ($148.99 @ SuperBiiz)
Total: $1808.88
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-18 16:43 EDT-0400

I have to admit I am very impressed. Bravo! My manager has informed me that the system needs to be built to accommodate up to 12 virtual guests, each with 4 CPU cores, 16 GB of RAM, and 4 USB ports. That's about 40 PCI-e lanes and 48 guest cores; assuming we leave 4 cores for the host, that is about 52 cores total. Do you think you can help find a mobo that supports 8 PCI-Express connectors and dual CPUs?
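For what it's worth, the core and USB math sketches out like this (pure arithmetic on the numbers above; the 7-port-per-card figure assumes a card like the StarTech ones, which is an assumption at this point):

```python
import math

# vCPU and USB math for the 12-guest target described above.
guests = 12
cores_per_guest = 4
host_reserve = 4            # cores kept back for the host/hypervisor
usb_per_guest = 4

guest_cores = guests * cores_per_guest          # 48
total_cores = guest_cores + host_reserve        # 52

usb_ports = guests * usb_per_guest              # 48 ports total
# Assuming 7-port PCI-e USB cards, each card covers under 2 guests:
cards_needed = math.ceil(usb_ports / 7)         # 7 cards
print(total_cores, usb_ports, cards_needed)     # 52 48 7
```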

The server from the linus video probably, I think it had 10

Or wait for Broadwell E which will have more cores

In all reality, you're going to want more than one system for a few reasons:

  1. Redundancy, what happens if a component dies?
  2. Load balancing, most hypervisors allow for this, and it'll not only increase speed but let you avoid maxing out your boxes by balancing the VMs between the two.
  3. Scaling, you want to be able to scale in the future.

What would your proposed machine(s) be? I am open to as many ideas as possible before my meeting with the treasury on Wednesday.

Ya...it would also be cheaper.

Just make like 2 of the machines I posted, but with higher-end CPUs or something for the extra cores. That gives you more than enough PCI-e; not sure on the RAM though, you may still need a server board if you want more.

On Newegg, at least, I can only find one board that supports 512 GB of RAM on 2011-v3.

C612 is the chipset for 2011 v3 server boards I guess

http://www.newegg.com/Product/Product.aspx?Item=N82E16813132259

Of course if you don't care about power consumption and could live with less CPU performance the 990FX chipset on AM3+ supports a ton of PCI-e lanes, but then you're limited to like 8 core CPUs.


Also, is just a bunch of ITX i3-based machines out of the question?

Something like this I guess with like a tiny case like the Elite 110

PCPartPicker part list: http://pcpartpicker.com/p/tgMpxr
Price breakdown by merchant: http://pcpartpicker.com/p/tgMpxr/by_merchant/

CPU: Intel Core i3-6100 3.7GHz Dual-Core Processor ($111.99 @ SuperBiiz)
Motherboard: MSI B150I GAMING PRO AC Mini ITX LGA1151 Motherboard ($109.69 @ SuperBiiz)
Memory: G.Skill Aegis 32GB (2 x 16GB) DDR4-2400 Memory ($114.99 @ Newegg)
Total: $336.67
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-18 17:17 EDT-0400

So... something that is equivalent to 4-12 Intel Xeon 1650 (v3?) machines. If the CPU utilization is going to be maxed out most of the time, that is going to be difficult to do, if not impossible. 4 is doable.

Off the top of my head I would suggest something like...

2x Intel Xeon E5-2687W v3
1x ASRock EP2C612 WS SSI EEB Server Motherboard Dual Socket LGA 2011 R3 Intel C612
2x Crucial 128GB (4 x 32GB) 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory
6 - 7x StarTech 7-port PCI Express USB 3.0 card (might need a hub if you need more ports)

You really might need more than one virtual machine host. If whatever you're doing is really that CPU intensive and was fully utilizing a Xeon 1650 (v3?) CPU, that's pretty hard to match. Also, for industrial use you definitely don't want an X99 motherboard; it has to be Intel C612 for ECC memory support, for reliability's sake.

USB hubs?

The CPUs in the current boxes aren't fully maxed out; in fact, they see only about 30% maximum utilization, with around 10 GB of RAM used at a constant load by the software. We do not have any other rooms available at our location except this one, so we cannot use Thunderbolt to bring the cables in from another room.

The issue with the current setup is that when we find a solution to one issue in the automation software, we have to copy the roughly 25 GB fix from one computer to the other over FireWire. That's right, FireWire; it's the only way around the internal security protocols that Bosch put up, since USB mass storage devices are disabled by default and we cannot enable them.

The set stations also take up a lot of working space, basically the space 3 gaming rigs would take up, because half of it is automation equipment that will be rack-mounted into a shelf, with all of the cables ported over to our processing box where all of the magic will happen. Since my department wants to cram 12 of these setups into a room no bigger than 8 ft x 6 ft, we need to optimize the space. That is why we are building this solution: we are currently maxing out the internal 1 TB storage drive and the 256 GB SSDs with software on every machine, and need a better solution to allow for mass storage and extremely compact computing to allow for more octopus boxes.

Well... if it's about 30% utilization, the dual 10-core Xeon E5 build above should let you run about 8 clients. If you're only using that much RAM, you could go with about half the memory, two 64 GB kits for a total of 128 GB... but hey, it's always good to have room to grow. Just remember the motherboard only supports registered ECC DIMMs.
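The 8-client figure falls straight out of the memory math, if nothing else. A quick sketch, using the 16 GB-per-guest number from earlier in the thread (the 30% utilization figure is from your own measurement; everything else here is just arithmetic):

```python
# How many 16 GB guests fit in the proposed memory configurations?
ram_per_guest_gb = 16

for total_ram_gb in (128, 256):
    guests = total_ram_gb // ram_per_guest_gb
    print(f"{total_ram_gb} GB total -> up to {guests} guests")
# 128 GB total -> up to 8 guests
# 256 GB total -> up to 16 guests

# CPU side: 8 guests x 4 vCPUs oversubscribes 20 physical cores,
# but at ~30% guest utilization the effective demand is modest.
physical_cores = 2 * 10        # dual 10-core Xeon E5
vcpus = 8 * 4                  # 32 vCPUs across 8 guests
effective_demand = vcpus * 0.30
print(effective_demand < physical_cores)  # True
```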

If you wanted to scale up to above 8 clients comfortably, you would probably need a second machine, then you could run 16 clients with your workload.

If you need to move large amounts of data between the two hosts, maybe your IT security guys would let you put a 10GbE network card (maybe SFP fiber even, which would worry them less) in each machine and just hook them directly to each other ad hoc, not on the network.
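To put rough numbers on why that beats FireWire for the 25 GB fix mentioned earlier (theoretical line rates only; real-world throughput will be lower, and I'm assuming FireWire 800 here since the post doesn't say which version, halve the rate for FireWire 400):

```python
# Rough best-case transfer time for the 25 GB patch over each link.
payload_gbit = 25 * 8   # 25 GB = 200 gigabits

links_gbit_s = {
    "FireWire 800": 0.8,
    "10GbE": 10.0,
}

for name, rate in links_gbit_s.items():
    seconds = payload_gbit / rate
    print(f"{name}: ~{seconds:.0f} s")
# FireWire 800: ~250 s
# 10GbE: ~20 s
```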

Also, for your storage problems, that ASRock motherboard has 10 SATA3 ports and plenty of PCIE3 x16 slots if you need to expand by installing an additional drive controller.
