Virtualization Server Build

Good day Level1Techs,

So I have been wanting to tackle this project for a while. I am planning on building a virtualization server that will act as a NAS and also run 2 or 3 Windows VMs. I wanted to go for a Ryzen build, but the configuration ended up being way out of budget for one with a sufficient core count.

After doing some research I found that you can get some decently capable hardware from AliExpress, namely 2x Xeon E5-2678v3 in combination with an “X99 Dual Socket” motherboard (ZX-DU99D4).

Now I have some worries about the component selection so that is why I am here. The main worries for the build are as follows:

The motherboard (ZX-DU99D4) - I was planning on getting the Machinist X99 Dual motherboard, but the shipping costs way too much, so I have to select another seller who looks legitimate but has zero sales for the motherboard (“MiReDo’s Store Store” on AliExpress). Would ordering from this seller be worth the risk? Would you recommend another motherboard?

The CPUs (Xeon E5-2678v3) - This system will run Proxmox with a few virtual machines running permanently, alongside whatever other temporary projects I come up with. The Windows VMs won't be used for any demanding work: two of them will mostly handle general office work like email, with the third dedicated to acting as the server for accounting software. I was planning on using a FreeNAS VM to manage user access to the various directories, with Proxmox managing the backups for the VMs. Would these CPUs be sufficient to handle the required tasks along with some side hobby projects?

Here is the complete parts list:
[screenshot of the parts list: Screenshot_20200831_204411]

I’m planning on expanding the storage in the future, but limited HDD space should be okay for now. I was planning on reserving one of the drives for parity.

Let me know what you think and any advice will be appreciated 🙂

Passmark for the E5-2678v3 is 15K multi-thread / 1750 single-thread, with a 240 W combined TDP for the pair.
Passmark for the 3900X is 33K multi-thread / 2729 single-thread, with a 105 W TDP, and it supports ECC.

I was planning on getting a Ryzen 3900X, but the CPU alone costs as much as two 2678v3 CPUs plus a motherboard and shipping. The two Xeons also have more cores to allocate to VMs. Would the 3900X be able to handle continuously running 4+ VMs? My main concern with the 3900X was that I would not have enough cores to allocate to all the various VMs.
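For what it's worth, plugging the Passmark figures quoted in the thread into a quick back-of-the-envelope script (using the poster's numbers as-is, and ignoring NUMA overhead on the dual-socket board) shows the single 3900X slightly ahead in aggregate throughput and well ahead per core:

```python
# Back-of-the-envelope comparison using the Passmark numbers quoted above.
# Figures are the ones posted in the thread, not independently verified.

xeon_multi, xeon_single = 15_000, 1750    # E5-2678v3 (per CPU)
ryzen_multi, ryzen_single = 33_000, 2729  # Ryzen 3900X

dual_xeon_multi = 2 * xeon_multi  # two sockets, ignoring NUMA overhead

print(f"Dual Xeon multi-thread: {dual_xeon_multi}")
print(f"3900X multi-thread:     {ryzen_multi}")
print(f"Single-thread ratio (3900X / Xeon): {ryzen_single / xeon_single:.2f}x")
```

By these numbers the 3900X wins on both axes; the Xeons' only advantage is raw core count per dollar.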

I was recently in a similar position: I wanted a Threadripper build but it was just too much money. In the end I went with two Xeon E5-2697v3’s in a refurbished Asus Z10PE-D16WS motherboard.

I cannot vouch for the motherboard - I looked at similar but went with something I recognised from a trusted local seller on eBay. In terms of horsepower, I think your CPUs will have little problem handling your load.

Addendum - just in case you consider changing to a board like this - it is an SSI-EEB form factor board which may affect your choice of case.

Those Threadripper CPUs are the dream, one day… I’ll take a look to see if I can find any reputable brand name motherboards, they’re quite difficult to find in South Africa. Most of the components aren’t ideal, but I have to work with what I can source locally.

I’m not familiar with that form factor, the case in the parts list can fit up to an EATX, but if the board gets changed then I’ll definitely make sure that it is compatible with the case.

It’s unfortunate that these motherboards aren’t covered much in the English forums; they seem really interesting, especially for tinkerers, since you can do things like BIOS mods to unlock turbo boost on the CPUs.

I am quite new to the virtualization topic, but the way I understand CPU selection is that you would like to have specific cores dedicated to each running VM and that you can’t have multiple running VMs sharing the same cores. So that is why I selected those CPUs as they have a large core count. This is opposed to the 3900X where I would only have 12 cores to allocate to the various VMs. Is my understanding regarding this concept correct? Because then that would cement my decision to select the 2678v3 for my build.

You can run as many VMs as you want on as few cores as you like. The question is what are you doing with them and how important is the performance of those VMs when loaded simultaneously?

I run like 6 VMs on a 3600x but I’m not asking a ton of my VMs at any given time. I only really load one up.

That’s not the case.

Thank you for the clarification. I would really like the Windows VMs to be very responsive (hence why they will also be running off the NVMe drives), so then I think the safest bet would be to get the dual 2678v3 system and dedicate some cores to those VMs (about 6 threads each). I could then also dedicate some cores to the FreeNAS VM (I’m unsure as to how demanding this is but 8 threads should be enough). With cores to spare, I can run some non-performance-critical hobby VMs and not impact the VMs I actually care about.
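As a sanity check on that allocation plan (assuming the E5-2678v3's published 12 cores / 24 threads per socket; the per-VM thread counts are the ones proposed above):

```python
# Hypothetical thread budget from the allocation described above.
# E5-2678v3: 12 cores / 24 threads per socket, two sockets.
total_threads = 2 * 24

windows_vms = 3 * 6   # three Windows VMs at ~6 threads each
freenas = 8           # generous budget for the FreeNAS VM

spare = total_threads - windows_vms - freenas
print(f"Threads allocated: {windows_vms + freenas} / {total_threads}, spare: {spare}")
```

Even with dedicated allocation, nearly half the threads would sit idle for hobby VMs.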

I respectfully disagree, but it’s your money so I’ll let you figure that out.

I’m here to learn, what would your recommendation be for the workload I set out?

I do have a Ryzen 1700 system that can be repurposed for the job if you think that it will be sufficient.

I don’t know what your workload is.

Frankly, more cores are great, but if the software you’re running isn’t built to leverage them then you’re just pissing into the wind giving each machine more cores when the software wants a fast core.

I wouldn’t spend my own money on old hardware, especially hardware of questionable quality like those X99 boards you suggested.

Again, it’s your money and your choice to make. I’m just some idiot on the internet.

What I would like to do is have 2-3 Windows VMs that will mostly be used for office work, things like email. I would also want all those VMs to be backed up so that they can be restored if something goes wrong on the system. These systems need access to a redundant NAS for all the documentation they’ll be handling; it’s not a lot of data, but it is important to keep safe.

Since the users won’t be using any performance intensive software, the core speed won’t impact them much. If I’m not mistaken, the core count will help with data compression/decompression and such for the VM backups as well as for the NAS.

I would love to find a solution that doesn’t involve sketchy Chinese motherboards. The challenge is that consumer CPUs I trust to have enough performance for the task (e.g. the Ryzen 3900X) cost so much in my country that the CPU alone would take up a third of the budget.

I think I’ll be conservative and repurpose the Ryzen 1700 system as a trial run for the time being. The results I obtain from that trial run will allow me to assess the actual amount of resources the task will require. Do you think that’ll be a better approach? It’ll definitely be cheaper if it works out.

I appreciate the criticism as it is easy to get drawn into a confirmation bias.

You can definitely *overprovision* a fast CPU, meaning you can put multiple VMs on one core. You then need fewer physical CPUs than virtual ones. This is especially useful when you won’t have load on many VMs at once.

The only problem here is that, at least as far as I know, you cannot set CPU limits in QEMU the way you can in Docker, for example. In Docker you can say that a certain container never uses more than 50% of the capacity of the cores assigned to it. This means that when you have more virtual CPUs than physical ones and a high load in all VMs, some will get throttled and may even become laggy with regard to input and other time-critical things.

The reason Adubs is advocating the 3900X is that, while you won’t be able to assign a physical CPU to every virtual CPU, in total your VMs would be able to get more work done.
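To make the overprovisioning idea concrete, here is a toy sketch with an entirely hypothetical VM list: total vCPUs can exceed physical threads, because idle VMs consume almost no CPU time:

```python
# Toy illustration of overprovisioning: the sum of vCPUs across VMs can
# exceed the physical thread count, since idle VMs cost (almost) nothing.
# The VM list below is hypothetical, loosely based on this thread's plan.
physical_threads = 24  # e.g. a 3900X: 12 cores / 24 threads

vms = {"office-1": 6, "office-2": 6, "accounting": 6, "freenas": 4, "hobby": 8}
total_vcpus = sum(vms.values())

ratio = total_vcpus / physical_threads
print(f"{total_vcpus} vCPUs on {physical_threads} threads -> {ratio:.2f}:1 oversubscription")
```

A modest ratio like this is usually fine as long as the VMs rarely peak at the same time.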


I’m not even sure he needs a 3900x

It’s not clear to me what kind of load the machine will actually be under. It could be fine with a 3600X for all I know. I know it’s been a great little CPU for me.


I don’t understand enough about the workload to say one way or the other though.

When I started the concept for this project I didn’t know that cores could be overprovisioned, which is what sent me down the rabbit hole of allocating each VM its own physical cores. With the new knowledge the community has provided, I think that a locally obtainable (and new) CPU and motherboard will preemptively prevent a lot of headaches if I decide to go that route.

I’ll have to do some investigating to see whether the Ryzen 1700 will suffice, and if not, I’ll consider something like a 3800X/T or maybe even 3900X just so that I have some certainty that the CPU will be able to handle the workload. I’d rather overestimate the requirements than underestimate them.

You don’t have to dedicate cores to a VM - as an only user or in a family setting, sharing them between VMs is a great option.

You don’t need many cores for backups either; most bulky data like video is already compressed, and if you do incremental, snapshot-based backups you don’t need to worry about compression. (Also, HDDs are <$20/TB and SSDs are less than $80/TB, so you likely won’t save even 1 TB with compression, and you can run backups in the background overnight.)

If you’re transferring snapshots to a remote machine, you’ll care about encryption and checksumming speeds - modern CPUs have a significant edge there.

Also, a new motherboard has lower power usage (rule of thumb: $1 per watt-year; plus you can use a smaller, cheaper power supply and cooling is easier), and you get a lot more future-proofing through modern connectivity options (nice fat PCIe 4.0 lanes for speedy networking, storage, or future USB4 expansion cards, a USB-C port or two, 10 Gbps USB 3 ports for external drives, and so on).

B550 or X470/X570 and a 3600 … add 64 GB of RAM and you’re done. A year or two down the line you can upgrade the CPU for cheap.
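Applying the "$1 per watt-year" rule of thumb from above to the TDPs quoted earlier in the thread gives a rough sense of the running-cost gap (TDP isn't average draw, so treat this as an upper-bound sketch):

```python
# Rough running-cost comparison using the "$1 per watt-year" rule of thumb
# from the post above and the TDPs quoted earlier in the thread.
dual_xeon_tdp = 240   # two E5-2678v3, combined TDP in watts
ryzen_tdp = 105       # Ryzen 3900X, watts

years = 3
savings = (dual_xeon_tdp - ryzen_tdp) * 1 * years  # $1 per watt-year
print(f"Approx. power-cost difference over {years} years: ${savings}")
```

Over a few years that gap alone can close much of the price difference between old server parts and a new consumer platform.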

Very interesting insight, risk.

Yes it is for my family, they need to travel a lot and can’t always be at the office to access their computers. So with this computer they can access their office VM/Computer from anywhere so long as they have an internet connection, files will also be a lot more secure with the NAS VMs managing redundant storage. I’m doing Chemical Engineering and might need to run things like fluid simulations, that’s why I need a bit of extra overhead. The office VM performance mustn’t be impacted if I’m running an intensive task.

Storage isn’t as cheap here in South Africa, ~$80/TB of HDD storage. Luckily they don’t have significant storage needs else it would’ve severely impacted my budget for the project.

I was thinking of adding a better graphics card down the line if I need it for my simulations, as well as a 10 Gbps network card so I can use the machine as a pfSense router. But those are all projects for a future date.

For reference, how intensive are these simulations, how often do you run them, and for how long? What kind of software are you using, and which OS?
From the little that I remember, aren’t a lot of these simulations just variants of n-body particle simulations that use the CPU with vector operations to burn through RAM bandwidth as quickly as possible, updating everything to advance the simulation from one delta-t to the next?

Currently the only software I use for my simulations are Solidworks and Aspen Plus, unfortunately I haven’t found a Linux alternative for these applications so they only operate under Windows. That is also why virtualization is so appealing to me as I can run Manjaro as my daily OS while only using Windows in a VM when I need it.

I usually also have to code my own fluid simulations in Python, but those haven’t been that demanding since I usually only model small sections of systems at a time. That might change in the future as the size and complexity of the processes increase.
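As a deliberately tiny example of the time-stepping pattern described above, here is a minimal 1D heat-diffusion solver with an explicit finite-difference update; all parameters are made up for illustration, and a real model would use NumPy for the vectorized sweeps:

```python
# Minimal explicit finite-difference solver for 1D heat diffusion:
#   u[i] += alpha * (u[i-1] - 2*u[i] + u[i+1])
# Each time step sweeps the whole array, which is exactly the
# memory-bandwidth-bound update pattern mentioned above.
# All values here are illustrative, not from any real model.

def diffuse(u, alpha=0.25, steps=100):
    """Advance the temperature profile `u` by `steps` time steps."""
    u = list(u)
    for _ in range(steps):
        prev = u[:]  # snapshot so updates use values from one delta-t ago
        for i in range(1, len(u) - 1):
            u[i] = prev[i] + alpha * (prev[i-1] - 2*prev[i] + prev[i+1])
    return u

# A hot spike in the middle of a cold rod spreads out over time.
rod = [0.0] * 21
rod[10] = 100.0
result = diffuse(rod, steps=50)
print(f"Peak after diffusion: {max(result):.2f}")
```

Workloads like this care mostly about per-core speed and memory bandwidth, which is worth keeping in mind when weighing old high-core-count Xeons against newer CPUs.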

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.