Storage server for MS Hyper-V VMs with NVMe drives in a non-RAID configuration?

Hello everyone,

After watching some of the "Hardware RAID is Dead" videos on YouTube, I'm a little stuck in my thoughts. It seems this topic is causing trouble for a lot of people, myself included. I have brought it up with other members of my team, and I may have rubbed some of them the wrong way over what we should do with a set of new servers.

Some background:

We have a Synology RS1619xs+ with an RX1217rp expansion unit where we store our VMs and client backups. The Synology has 4x 12 TB mechanical drives in the head unit and 16x 12 TB mechanical drives in the expansion. We have a Hyper-V cluster with 5 nodes and a UrBackup server that stores server and client images from the clients we support. Do to slow performance we are moving away from this system and onto four new Dell servers.

To replace the Synology for our VM storage, we have a PowerEdge R760 with 24x 2 TB NVMe drives. Internally, the conversation has turned to how this server should be built. If hardware RAID will slow down the NVMe drives, and we have Server 2022 installed as the OS, what is the way forward to get the best performance from the NVMe drives? I have also seen many topics stating that MS software RAID is not the best way to go, so what is? As I am not the boss, we have to keep the system on a Microsoft OS.

The RAID controller in the server for the NVMe drives is a PERC H755 Front.

Thank you in advance for your feedback.


*Due

Sorry, couldn’t resist.

Storage Spaces and ReFS, provided you will only use this hardware with Server 2022 and newer Windows OSes.

Any repurposing will require reformatting the drives, as practically nothing outside of Windows can read ReFS. I do not recommend using ZFS plugins for Windows in enterprise environments.

Storage Spaces with ReFS requires the PERC card to be in JBOD mode. I have run disaster recoveries on ReFS, from replacing the host OS drive to rebuilding arrays after simulated drive failures. It just works.
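
If it helps, this is roughly what the setup looks like in PowerShell once the PERC presents the drives as non-RAID. A minimal sketch only; the pool name, volume name, resiliency, size, and drive letter are all placeholders to adjust for your environment:

```powershell
# List the NVMe drives that are eligible for pooling. If the PERC is not in
# JBOD/non-RAID mode, CanPool comes back $false and nothing shows up here.
Get-PhysicalDisk -CanPool $true |
    Format-Table FriendlyName, BusType, MediaType, Size

# Create a storage pool from every poolable disk ("VMPool" is a placeholder).
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Carve a mirrored ReFS volume out of the pool for the VMs. A two-way mirror
# halves usable capacity; the size and resiliency here are examples only.
New-Volume -StoragePoolFriendlyName "VMPool" `
    -FriendlyName "VMs" `
    -FileSystem ReFS `
    -ResiliencySettingName Mirror `
    -Size 20TB `
    -DriveLetter V
```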

You cannot run the host OS on a ReFS array, so run it on a single NVMe drive and back it up to the Storage Spaces array, or to another drive, using the Windows Server Backup role.
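
If you go the Server Backup route, the schedule can be set from wbadmin. A minimal sketch, assuming V: is the array volume being backed up to:

```powershell
# Schedule a daily 21:00 bare-metal-recovery backup of the OS to the V: volume.
# Requires the Windows Server Backup feature; V: is a placeholder target.
wbadmin enable backup -addtarget:V: -allCritical -schedule:21:00 -quiet
```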


Thanks for the input TryTwiceMedia. The host OS is already installed on a BOSS-N1 in RAID 1 across two 512 GB NVMe drives. I'm not really asking about the file system so much as where to start before the file system type is defined. The direction I was going with this line of inquiry is more about RAID versus no RAID, and what the options are at that point for creating storage pools on my system. Am I to assume you are pointing to a RAID configuration with the PERC card and ReFS? What I really need to know is which solutions are not going to slow down the NVMe drives, because the bottleneck of the hardware RAID card cuts the speed at which the NVMe drives can read and write by 75%.

I'm looking for more information based on the video from Feb 14, "So if Hardware RAID is dead… then what?"
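
For reference, the before/after comparison I have in mind would be something like a diskspd run (Microsoft's open-source disk benchmark) against the array with the PERC in and out of the path; the workload parameters below are just an example:

```powershell
# 60 seconds of 8 KiB random I/O, 70/30 read/write, 8 threads with 32
# outstanding I/Os each, against a 10 GiB test file. -Sh disables OS and
# hardware caching so the drives are measured rather than RAM; -L adds latency.
.\diskspd.exe -c10G -d60 -r -w30 -t8 -o32 -b8K -Sh -L V:\disktest.dat
```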


Your PERC card will always be a bottleneck in this config.

Configuring the PERC for JBOD will make it better, but never really good.
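
Either way, once the PERC is flipped to JBOD it's worth confirming Windows actually sees individual, poolable disks; a quick sanity check:

```powershell
# After the JBOD/non-RAID conversion, the NVMe drives should appear as separate
# physical disks with CanPool = True (behind the PERC, BusType usually reads
# "RAID" rather than "NVMe").
Get-PhysicalDisk | Sort-Object DeviceId |
    Format-Table DeviceId, FriendlyName, BusType, CanPool, Size
```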

Going from the backplane straight to the motherboard and bypassing the PERC card is the best case, but with 24x NVMe drives it will require dual CPUs to get enough PCIe lanes.

Cost: a few proprietary Dell cables (~$100 each, last time I checked).

That gives you full, nasty PCIe speeds, but be sure to eject each drive from Windows before removing it via hot swap, as I have seen Windows handle the sudden loss of an NVMe drive with a kernel panic.
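
If the drives end up in a Storage Spaces pool, the safe-removal steps look roughly like this; the disk and pool names are placeholders:

```powershell
# Retire the disk and let Storage Spaces rebuild onto the remaining drives
# before physically pulling it. "PhysicalDisk7" and "VMPool" are placeholders.
$disk = Get-PhysicalDisk -FriendlyName "PhysicalDisk7"
Set-PhysicalDisk -FriendlyName "PhysicalDisk7" -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk      # rebuild onto the remaining disks
Remove-PhysicalDisk -StoragePoolFriendlyName "VMPool" -PhysicalDisks $disk
```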


Just double-checked the Dell-recommended configuration after posting and thinking about 24x NVMe drives:

This is your objective

Page 192
