4 Virtual Machine System, FX 8 core base? 4 SSDs?

For the OP from here:
https://linustechtips.com/main/topic/610476-10-oss-on-1-pc/#comment-7895322

10 machines from 1, while neat, seems like it would be a massive pain, so I came up with this instead.

Is there a better way to do it? 4 machines per physical box seems a lot more reasonable and easier to do with something like this, where the GPU is low power and has 4 Mini DisplayPort outputs.

Is it best just to run a host OS off a USB stick and do 4 virtual machines from there? Or just have 1 main machine and 3 virtual PCs?

"The slightly less pain in the ass way to do it would probably be AM3+
based, just downclock the CPU to like 2ghz for lower power consumption

and a machine like this would give you 4 systems in one; the workstation card is low power and has 4 Mini DisplayPort outs.

Just give every machine 2 cores, 8 GB of RAM, and an SSD, probably running a live GNU/Linux of some kind from a USB stick so you don't need a host OS on a drive, or just install an OS onto one of the drives and have that be one machine. Dunno if the board has enough USB ports. (There's a rough sketch of setting up guests like this after the quote.)

Also, I don't know whether there's going to be a bandwidth issue with 4 PCs sharing one set of hardware.

PCPartPicker part list: http://pcpartpicker.com/list/YGzpgL

Price breakdown by merchant: http://pcpartpicker.com/list/YGzpgL/by_merchant/

CPU: AMD FX-8300 3.3GHz 8-Core Processor ($118.88 @ OutletPC)

Motherboard: ASRock 970A-G/3.1 ATX AM3+/AM3 Motherboard ($79.99 @ SuperBiiz)

Memory: Mushkin Stealth 32GB (4 x 8GB) DDR3-1600 Memory ($109.89 @ OutletPC)

Storage: A-Data Premier Pro SP600 128GB 2.5" Solid State Drive ($39.99 @ Amazon)

Storage: A-Data Premier Pro SP600 128GB 2.5" Solid State Drive ($39.99 @ Amazon)

Storage: A-Data Premier Pro SP600 128GB 2.5" Solid State Drive ($39.99 @ Amazon)

Storage: A-Data Premier Pro SP600 128GB 2.5" Solid State Drive ($39.99 @ Amazon)

Video Card: AMD FirePro W4100 2GB Video Card ($169.99 @ B&H)

Case: Fractal Design Define S ATX Mid Tower Case ($69.99 @ Newegg)

Power Supply: EVGA SuperNOVA P2 650W 80+ Platinum Certified Fully-Modular ATX Power Supply ($98.68 @ Amazon)

Total: $807.38

Prices include shipping, taxes, and discounts when available

Generated by PCPartPicker 2016-06-13 00:51 EDT-0400

Something I'd like to play with, should I ever have the spare cash, is buying one of these USB docks and passing it through to a virtual machine:

http://plugable.com/products/ud-160-a

I think you'd need a PCIe USB adapter to pass it through, and I'm not sure how it would work with a virtual machine.
"

Oof, well, if you want 2008-era performance, yay, but I would do an 8-core Intel chip for VMs. I've seen a VM system with an FX chip and played with it; it wasn't that pretty. It also could have been set up wrong, but I'm just saying.

Yeah... I would use a 2670 or possibly a v3 12-core, since cost/performance-wise it's the best bang for your buck. It also might be worth getting multiple cheap older GPUs instead of one newer one. It sounds like the guy isn't gaming, so you'd only need something like a $10-20 GT 2xx or 3xx series card with around 1 GB of VRAM to run video and monitors up to 4K, then call it a day. A lot of the boards for the Xeon 26xx series have enough slots for the GPUs; throw one or two case fans blowing on them for cooling. Save more money for the RAM, which you can't really skimp on.

I'm using the workstation GPU for its low power and its 4 Mini DisplayPort outputs; the super cheap stuff mostly only supports 3 displays. 950s can do 4, but they're around the same price.

@FaunCB
Most of the stuff I've read about virtualization is possibly just old, but it was like the one place AMD was really good, since their cores are physically there (rather than SMT threads) and probably some other stuff, and I think each machine only has to do basic Facebook and web browsing.

and it's just the cheapest way I know of to get 8 threads on new parts

I used an A6-5400K APU for a while and it wasn't too bad; I could even do some PS2 emulation with an overclock.

Yeah, but you could even do an A10 and have better per-core IPC, with microcode updates as they come out.

**With ASync

but then each machine only gets like 1 thread, and a really slow thread at that

Again, async. You can pass stuff over to an onboard iGPU, and if you really wanted to, do a multi-GPU system with two 380s or something and pass stuff over that way: use one 380 as a 380, use the other for CPU offload, and use the iGPU to display.

While difficult, I'm sure something like that can be done.

I just wouldn't use an FX chip, is all. Compared to something that would act like a bigger system than it is under virtualization, an FX chip wouldn't do as well as, say, an X99 box or maybe a 12-core dual-Xeon system... A buffed-up Mac Pro could definitely handle that sort of thing, and for what, 350 bucks? 12 cores, up to 64 GB of DDR3 RAM, and stuff it with SSDs.

Eh, fuck it, do that instead.

I don't mean one old chip, I mean 1-2 per VM. Even if you bought 10 you would still save money, and since you're going to need a fancy-ish mobo to run a Xeon plus all that storage, you might as well get one that can support 6-10 shitty GPUs.

What's the intent here, just browsing and general Windows usage, no gaming etc.?

You could easily do this with one machine running common desktop parts. An i7 or FX-83xx would be best, but for 4 VMs even an i5 or FX-63xx or less could do it.

A few points to remember:

1) Your hypervisor needs to be able to assign/pass through a graphics output and a USB port to dedicate to each VM. The free version of ESXi or Proxmox should be able to do this. (Hyper-V has some clever stuff, but pass-through isn't in the main product yet.) There's a rough sketch of what this looks like after this list.

2) Unless your VMs are going to run at 80-100% CPU all the time, there is no problem with over-allocating resources: each VM could have 2-4 vCPUs even if the host only has 8 logical processors (e.g. 4 VMs x 4 vCPUs = 16 vCPUs time-sliced onto 8 threads). This is kind of the point of virtualisation.

3) If the main storage has decent performance there is no point giving each VM its own SSD. You're better off creating a RAID 10 array from 4 decent-sized drives and using that to host the virtual hard disks for each VM (see the sketch after this list).

4) If you are intending to do hardware pass-through for graphics, you will need a GPU for each VM. I don't believe it is possible under Proxmox or ESXi to assign one GPU's display ports to different VMs while the GPU itself is shared amongst them. If you used a hypervisor like Hyper-V on Windows 10 Pro you could open a VM window on each connected monitor, but then you would have a problem with multiple keyboards and mice; I don't think there is a way to assign them 1:1 to each VM under Hyper-V.
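For point 1, here's roughly what handing a whole PCI GPU to one guest looks like on a KVM/libvirt host (Proxmox and ESXi expose the same idea through their own configs and UIs, so treat this as a sketch of the concept rather than their exact syntax). The guest name and PCI address are made up; `lspci` gives the real address.

```python
import libvirt

# Hypothetical PCI address for one GPU; find the real one with `lspci`.
GPU_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")    # local KVM/QEMU host
dom = conn.lookupByName("office-vm1")    # hypothetical guest name
# Add the GPU to the guest's persistent definition; with managed='yes' libvirt
# detaches it from the host driver and binds it to vfio-pci at guest start.
dom.attachDeviceFlags(GPU_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```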
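And for point 3, a minimal sketch of building the RAID 10 array and putting one virtual disk per guest on it, assuming Linux software RAID (mdadm) and qcow2 images. Drive names, mount point, and sizes are placeholders, and `mdadm --create` will wipe the member drives.

```python
import subprocess

# Hypothetical member drives; double-check with `lsblk`, this destroys their contents.
DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build a 4-drive RAID 10 array, put a filesystem on it, and mount it.
run(["mdadm", "--create", "/dev/md0", "--level=10", "--raid-devices=4", *DRIVES])
run(["mkfs.ext4", "/dev/md0"])
run(["mkdir", "-p", "/var/lib/vm-images"])
run(["mount", "/dev/md0", "/var/lib/vm-images"])

# One sparse qcow2 virtual disk per guest, all living on the array.
for i in range(1, 5):
    run(["qemu-img", "create", "-f", "qcow2",
         f"/var/lib/vm-images/office-vm{i}.qcow2", "100G"])
```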

In the server world it is now common to run a hypervisor (ESXi or Hyper-V) on bare metal, booting from just an SD card. All the disks are then available for the guests. Both hypervisors also support over-allocation of memory and CPU resources for each VM, and the hypervisor can dynamically adjust what each VM gets based on load.

And where would you put those 4 GPUs with 970 boards?
At least that was the whole point of the original Linus project.

Reading the OP's linked thread, I can now see what this is about. When we have requirements like this (regular office workloads, not gaming) in a work environment, we would be talking about a Virtual Desktop Infrastructure (VDI).

In this scenario, mapping GPUs and USB ports is not necessary. We build the VMs on a cluster (2-3 servers) and provide the staff with dumb terminals (thin clients). Each virtual workstation is then centrally managed and runs on highly available servers. The clients have thin terminals or Remote Desktop apps at the end of a network connection.

Both Hyper-V and ESXi can be used to host this kind of solution (probably Proxmox too), and the VMs can be allocated video RAM against a GPU and are able to run OpenGL or DirectX in support of CAD or design software too, if needed.

This is the kind of thing: http://www8.hp.com/us/en/workstations/virtual-workstation.html

I doubt a $5,000 budget would get very far though...


Good point. Didn't some 990FX boards support 4 GPUs? This is all a moot point now anyway; it looks like the original requirement was for work-based machines rather than some theoretical system...

Well, I think the original idea is geared more towards office use, but then I still don't really see the benefit of doing this on an AMD FX platform. Of course the "8 cores" could do it, but I think the performance won't be that great.

It'd be good enough for that kind of work; it's just the most cost-effective solution I could think of for new parts, anyway.

Updated Idea

*I just had the best idea ever: the money saved with the FX 8-core lets you buy solar panels and a battery bank to power the PC, making it even cheaper than a more powerful PC in the super long run.

I wouldn't recommend an AMD FX-8300 for this; the 2670 (Xeon E5-2670) is probably your best bet. Same price, supports ECC (not needed though); comparison here. Keep in mind that you can get a 2670 for ~$75. The 2670 also has 20 MB of L3 cache, where the 8300 only has 8 MB. In the most recent episode of The Tek, Wendell mentioned that more cache is helpful. It's hard to determine exactly how much it helps, but in certain memory-intensive situations it helps a lot.

Also, someone mentioned getting old, shitty GPUs. Keep in mind that the old Nvidia cards (200 and 300 series) don't support PCI passthrough with KVM. Who knows, it may work, but I can't say for certain that it will. You're better off getting 400 series or newer.
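Whatever cards you end up with, it's worth checking the IOMMU situation before committing to passthrough. Here's a small sketch that just lists the IOMMU groups the kernel exposes (it assumes the IOMMU is enabled in the BIOS and with intel_iommu=on or amd_iommu=on on the kernel command line); each GPU you want to pass through should sit in its own group, or share it only with its own audio function.

```python
import glob
import os

# List every PCI device the kernel has placed in an IOMMU group.
paths = sorted(glob.glob("/sys/kernel/iommu_groups/*/devices/*"))
if not paths:
    print("No IOMMU groups found - enable the IOMMU before attempting passthrough.")
for path in paths:
    group = path.split("/")[4]          # .../iommu_groups/<group>/devices/<addr>
    device = os.path.basename(path)
    print(f"group {group}: {device}")
```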
