ok so here is my situation, i built a file/media server a few years back. it's running an old AMD X4 CPU that needs to be upgraded. so i'm thinking about what i want it to be able to do, but first i need to get a couple of things straight.
1st question about VMs
what is the deal with CPU core allocation? let's say i have 16 threads and i want all of them available for converting video files, but i also want some virtual desktops to use. this is confusing to me. it seems i can create 16 different VMs and set them each to use all 16 threads, but does that mean i can only use 1 VM at a time?
VMs are whole operating systems running inside your host OS.
Whether you allocate 1 core or 16 cores, that is what the VM will have access to.
You can look at the strain on the host or a VM with the OS's own tools, e.g. top on Linux. A Windows file server is largely single-threaded, so it will be happy with 2-4 cores. But it all depends on the loads you put on it.
You can allocate multiple cores to a VM, even if you do not have sufficient cores available in your machine.
e.g., I can allocate 2 cores each to 3 VMs, run them all simultaneously, and only have 2 cores in my laptop.
Performance will suffer, of course (you can't just conjure more processing capacity out of thin air), but it will work, and from within the guest OS it looks like it has that many virtual processor cores.
For your example, you could build a VM with 16 virtual cores and let it crunch away on video or whatever, and the host OS kernel will still schedule time for the host OS and any other virtual machines you are running.
One of the benefits of Virtualization is over-provisioning of resources (CPU, RAM, storage).
While it makes no sense to give a single VM more cores or RAM than the host machine has (e.g. 24 cores when the host has only 16 - if such a VM actually needs that power, performance will suffer tremendously), it is fine to "promise" each VM as much as the host machine has, knowing that not every VM running on the host needs it at the same time (e.g. CPU) - or is even running at the same time.
CPU power
Virtualization of CPU is more about sharing CPU time than about cores per se. The core/thread configuration is more of an upper limit - or in some cases a workaround for software issues, or a licensing limitation of the guest OS or its software.
CPU time is easy to redistribute dynamically: if all running VMs demand the full capacity of the CPU, they will share the CPU power between them.
Virtualization is literally built upon over-provisioning of CPU power. If your Plex server needs some CPU, it will receive its share of CPU time. The only drawback is that the other VMs will receive a little less of that time (they will finish their jobs later, proportionally to how much less CPU time they get). But usually your Plex server is not busy 24/7, so it will sit in the background and wait, without taking much CPU at all.
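To make the sharing concrete, here is a toy Python sketch of how over-provisioned vCPUs get divided up when everyone is busy at once. The VM names and numbers are made up, and real schedulers also weigh priorities/shares, not just vCPU counts - this is just the proportional idea:

```python
# Rough sketch: dividing CPU time when vCPUs are over-provisioned.
# Hypothetical host with 16 hardware threads and 24 vCPUs "promised".

HOST_THREADS = 16
vms = {"plex": 8, "desktop1": 8, "desktop2": 8}

def effective_threads(demand):
    """Worst case: every listed VM is 100% busy at the same time.
    Each VM's share of real CPU time is proportional to its vCPU count."""
    total = sum(demand.values())
    if total <= HOST_THREADS:
        return dict(demand)  # no contention: everyone gets what it asked for
    return {name: HOST_THREADS * n / total for name, n in demand.items()}

# All three VMs busy: each 8-vCPU VM gets ~5.3 threads' worth of time.
print(effective_threads(vms))

# Plex idle (the usual case): the desktops get their full 8 threads each.
print(effective_threads({"desktop1": 8, "desktop2": 8}))
```

So the 24 promised vCPUs are never a problem by themselves - the VMs just slow down proportionally whenever their combined demand exceeds the 16 real threads.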
RAM
Here over-provisioning has some limitations. If you have a host with 16GB of RAM, you can easily create 4 VMs on it that are each "promised" 16GB of RAM (minus some space for the hypervisor). You can even start all 4 of those VMs at the same time, no problem: hypervisors only hand out as much RAM as is really needed. However, it is not easy to take it back. So the problem starts when those 4 machines, after booting, start running programs that really need those 16GB of RAM. At some point the hypervisor will need to pause or suspend the other 3 VMs in order to keep the one that is actually using its 16GB of RAM running.
It is not easy to redistribute RAM while it is still held by a VM, even if the VM no longer really needs it. There are some possibilities (hypervisor swapping, the hypervisor actively talking with the guest system via a balloon driver). But in simple scenarios it means that if, inside a VM, you see only 2GB of 16GB being used because the app that needed the additional 14GB stopped running, those 14GB cannot easily be taken back from that VM.
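The one-way nature of RAM over-commit can be sketched in a few lines of Python. This is a toy model, not how any real hypervisor is implemented - it hands out "GB" on first touch and, like the simple scenario above, never reclaims them:

```python
# Toy model of RAM over-commit: 4 VMs each "promised" 16 GB on a 16 GB host.
# The hypervisor only backs memory when a guest actually touches it, and
# (in this simplified model) cannot take it back afterwards.

HOST_RAM_GB = 16

class ToyHypervisor:
    def __init__(self, host_ram):
        self.free = host_ram
        self.backed = {}  # VM name -> GB of physical RAM actually handed out

    def guest_touches(self, vm, gb):
        """A guest writes to `gb` more of its promised RAM."""
        if gb > self.free:
            raise MemoryError(f"host exhausted: {vm} wants {gb} GB, only {self.free} free")
        self.free -= gb
        self.backed[vm] = self.backed.get(vm, 0) + gb

hv = ToyHypervisor(HOST_RAM_GB)

# All four VMs boot and touch ~2 GB each: fine, only 8 GB physically used
# even though 64 GB in total was "promised".
for vm in ["vm1", "vm2", "vm3", "vm4"]:
    hv.guest_touches(vm, 2)
print("free after boot:", hv.free)

# Now vm1's app tries to use the rest of its promised 16 GB...
try:
    hv.guest_touches("vm1", 14)
except MemoryError as e:
    print(e)  # ...and the host runs out: something must be swapped or suspended
```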
Storage
Technically it has the same limitations as RAM. However, if you need 16GB of RAM then usually - but not always - you really need it right away, unlike storage space. Storage you need to fill, and that takes time; you do not need the "promised" space immediately.
Most, but not all, virtualization software implements both thin and thick storage provisioning.
Thick provisioning - the hypervisor actually reserves space for the whole disk up front, so the "promise" becomes reality right from the beginning.
Useful for cases where dynamically growing the disk's backing file is not beneficial at all, as such growth is usually accompanied by fragmentation.
Also useful when it is known that the disk will literally fill right away.
Or when storage performance is a consideration.
Any work area that is cleared and filled with temporary files.
Thin provisioning - the hypervisor only "promises" the space for the disk, and as with RAM, actual space usage grows with need. Useful e.g. for a general-purpose OS partition.
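You can see the same trick at the filesystem level with a sparse file, which is essentially what a raw thin-provisioned disk image is underneath. A short Python demo (assumes a filesystem with sparse-file support, e.g. ext4/xfs; image formats like qcow2 or VDI do something similar at the format level):

```python
# Thin provisioning in miniature: a sparse file "promises" a large disk but
# only consumes real blocks as data is written into it.
import os
import tempfile

DISK_SIZE = 1 * 1024**3  # a "1 GB" virtual disk

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "thin.img")
    with open(path, "wb") as f:
        f.truncate(DISK_SIZE)      # "promise" the full size up front
        f.seek(4096)
        f.write(b"guest data")     # the guest writes a little data
    st = os.stat(path)
    apparent, used = st.st_size, st.st_blocks * 512

print("apparent size:", apparent)  # the full 1 GB promise
print("actually used:", used)      # only the blocks that were written
```

`ls -lh` would show the 1 GB promise, while `du -h` would show the few KB actually allocated - the gap is the thin provisioning.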
Essentially, when you set a number of CPUs for a VM you're saying how many threads that VM can run on the host machine. It doesn't take cores away from the host, and you can have more VMs than CPU cores. Basically it's a setting for limiting the VM to a certain number of threads so that you don't slow everything down too much, but you can set it to whatever you like and experiment with what works best.