Need advice on external storage for VM server

Hi folks,

I’ve got an oddball situation and need some advice.
TL;DR: I need to convert an existing server into a VM host, but the server has no storage suitable to hold the actual virtual machines. It currently has only one NVMe drive, and the internal storage can’t really be expanded, so I’m forced to look at external options. The software stack is TBD, probably something like Oracle VM or Proxmox; no Microsoft, no VMware on the hypervisor. The virtual machines will include one Windows DC, one Windows file server, three Linux FlexLM licensing servers with almost no overhead, and a Linux Subversion repository.

I haven’t really needed to connect dedicated external storage to anything (other than backups on a NAS) in over a decade. Is it viable these days to use a NAS device (a box of RAID 5 SATA drives) for virtual machines in a situation like this? Is there some other interconnect that would be more suitable? My server has an available PCIe slot, so I could use an add-on card if needed (I’m old enough to remember needing to add SCSI cards to link up with external storage bays, etc.). The box also has two spare/unused SFP network ports.

I’ve been assuming that the main bottleneck will be the SATA RAID 5 array and not whatever interface links it to the server, but am I totally off base? It’s been incredibly difficult to find discussions about this sort of use case that weren’t trying to sell me something.

It will depend on your selected hypervisor; I can only speak for VMware: you can use NFS mounts as your VM storage, and I’m sure the other big non-Microsoft ones support this too.
If your hypervisor doesn’t support that, then iSCSI would be a suitable replacement. iSCSI support is built into many NICs, so no add-on card necessary.
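If you want to sanity-check connectivity before touching any hypervisor config, here’s a minimal Python sketch that just probes the well-known NFS and iSCSI ports on the NAS. The address is a placeholder (swap in your own), and an open port only proves something is listening, not that the export or target is actually configured:

```python
import socket

NAS_HOST = "192.168.1.50"  # placeholder -- replace with your NAS address
PORTS = {2049: "NFS", 3260: "iSCSI"}  # well-known ports for each protocol

for port, proto in PORTS.items():
    try:
        # Attempt a plain TCP connect with a short timeout
        with socket.create_connection((NAS_HOST, port), timeout=3):
            print(f"{proto} (tcp/{port}): listening")
    except OSError as exc:
        print(f"{proto} (tcp/{port}): unreachable ({exc})")
```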
And if you want to forgo the NAS route, DAS is the next category to look at. There are SAS HBAs with external connectors, so you can plug a storage shelf directly into your server.
And if you like it a little more complicated, a SAN solution supported by your hypervisor would be the last option to look at. But I think that’s too much for your stated requirements.


I meant more performance-wise.

Is storing my VMs on a DAS/NAS going to cripple performance?
Or is this a common thing and I’m worrying about nothing?

DAS would behave like local storage; there’s no performance hit you would notice in your VMs.
NAS, if using 10GbE and up, would also have a low impact.

Whether you use HDDs or SSDs will make a much bigger difference in noticeable performance. Caching on the NAS side could help, but it’s not required.
Booting from HDDs is really slow, and you’ll notice the difference when starting apps inside the VMs if you’re used to SSDs. As long as that’s fine for your use case, it will still work great.
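If you want numbers instead of guesswork, fio is the proper tool, but even a rough sequential-write test will show you the gap between the local NVMe and an HDD array over the network. A quick Python sketch (both paths are placeholders for a local disk and a hypothetical NAS mount; random data is used so filesystem compression can’t skew the result):

```python
import os
import time

def write_test(path, size_mb=1024, block_kb=1024):
    """Write size_mb of random data in block_kb chunks, return MB/s."""
    block = os.urandom(block_kb * 1024)  # random so compression can't cheat
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to actually hit the device
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mb / elapsed

# Placeholder paths: local scratch space vs. a hypothetical NAS mount
for target in ("/var/tmp/throughput.bin", "/mnt/nas/throughput.bin"):
    try:
        print(f"{target}: {write_test(target):.0f} MB/s sequential write")
    except OSError as exc:
        print(f"{target}: skipped ({exc})")
```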

I strongly advise against this. I fuckin hate Microsoft (in particular their shit tech support) with a burning fiery passion, but until Proxmox (and really, the underlying QEMU) has full TPM virtualization support that can be split amongst VMs in a way Microsoft Server can handle, Windows Server is the best hypervisor available for virtualizing Windows VMs.

If this is a production environment, a ton of Windows security features and roles cannot be installed or activated without valid TPM 2.0 passthrough.

Aside from that, dedicated storage and compute nodes are still how most high-availability deployments are built, so there is no issue running a storage node with 40 or 100 gig networking links.

Even a 25 gig SFP28 DAC connection can give you real-world 2 GB/s performance inside the VM, assuming the storage node has the hardware to saturate the link.
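The arithmetic behind that, if anyone wants to extrapolate to other link speeds: divide line rate by 8 to get GB/s, then subtract protocol overhead. The ~65% efficiency factor below is just what the 2 GB/s figure implies, not a measurement:

```python
EFFICIENCY = 0.65  # assumed end-to-end efficiency implied by ~2 GB/s on 25 GbE

for gbits in (10, 25, 40, 100):
    raw = gbits / 8  # theoretical GB/s at line rate
    print(f"{gbits:>3} GbE: {raw:.3f} GB/s raw, ~{raw * EFFICIENCY:.1f} GB/s real-world")
```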