Proxmox PCIe Gen5 to M.2 storage performance on Windows

Hi,

I recently watched the Level1 video on “Taking a look at ICY DOCK’s Gen 5 M.2 Expander Featuring Crucial T705 Memory,” and at the end, Wendell mentioned that if someone has a specific workload to test on the SSD array, they should reach out. I have a specific workload that I would be interested in testing.

For an SMB, I am currently planning a Proxmox server deployment on high-end consumer hardware for Windows virtualization. The main reasons for choosing consumer hardware are cost and higher clock speeds for the CPUs. Server CPUs have been found to perform poorly for Windows virtualization due to their lower clock speeds. Since these will be test VMs, the hardware failure risk isn’t a concern, so I need maximum performance for Windows virtualization at a relatively low cost.

From previous testing with other hardware, both consumer and server hardware, I have found that Windows virtualization on any hypervisor is quite suboptimal. Even with a Proxmox server equipped with a lot of RAM and fast storage, Windows performance remains poor (even with proper hypervisor agents and drivers).

I was considering building a system around an ATX board with an AMD Ryzen 7900X (or similar) and ECC RAM. For fast storage, I was looking into a solution like the one Wendell showed in the video: a PCIe x16 to 4x M.2 Gen 5 adapter (like the ASUS Hyper Gen5) to create a RAID 10 configuration of 4 drives in Proxmox. Since I don’t need a graphics card, this would leave more PCIe lanes for storage.

However, I’m wondering how the performance would hold up with, say, 4-10 Windows VMs running in parallel. These VMs would need to be fast for development tasks like using Visual Studio, but they won’t be handling graphics-intensive workloads (no gaming, just general work when remoting into them via RDP).

In summary, what would be extremely helpful for me to see tested on the system in the video would be:

- Proxmox installed on a separate SSD drive for booting.
- A RAID 10 configuration of the 4 drives, mounted as a storage pool in Proxmox.
- Running 2, 4, 6, 8, or even 10 Windows 11 VMs simultaneously on that system to evaluate performance.
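For reference, the storage side of that setup could be provisioned roughly like this on a Proxmox host (just a sketch, not tested on this hardware; the /dev/nvme… device names and the pool names are assumptions):

```shell
# Assumed device names for the four Gen 5 drives on the adapter card.
# Create a Linux md RAID 10 array across them.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Put LVM-thin on top so Proxmox can thin-provision VM disks.
pvcreate /dev/md0
vgcreate vmdata /dev/md0
lvcreate -l 100%FREE --thinpool vmstore vmdata

# Register the pool with Proxmox as VM storage.
pvesm add lvmthin vmstore --vgname vmdata --thinpool vmstore --content images,rootdir
```

(ZFS striped mirrors would be the other obvious option here; md + LVM-thin is just the layout I had in mind.)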

Seeing this would save me a lot of money on hardware for testing.

If anyone else has suggestions for optimizing Windows virtualization performance, I’d really appreciate any advice.

Thanks a lot!
@wendell

I would strongly recommend you consider EPYC 4004 on AM5.

The EPYC 4564P is actually faster per core and overall, and it gets full virtualization support from AMD: nested virtualization triggers a host reset that AMD has officially stated will not be patched on Ryzen 7000 or 9000, but it is fixed with a microcode update on EPYC 4004.

Combine that with a board from ASRock Rack, Gigabyte, or Supermicro and you’ll have a production-ready server for not a lot of cash.


…and that leaves out the minor fact that the 4564P is at least twice (close to three times) as expensive?
There are a few e-tailers in the EU listing it with a ~50% discount, which looks very suspicious.

Where did AMD state that it wouldn’t be fixed? And regarding the bug, this was the statement, so not all hardware was affected to begin with:

```diff
+	 * These Zen4 SoCs advertise support for virtualized VMLOAD/VMSAVE
+	 * in some BIOS versions but they can lead to random host reboots.
+	 */
```

Reference: x86/CPU/AMD: Clear virtualized VMLOAD/VMSAVE on Zen4 client (kernel/git/tip/tip.git)
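As a quick check on a given host: the flag that patch clears shows up in /proc/cpuinfo as `v_vmsave_vmload`, so once an updated kernel (or fixed microcode/BIOS) has masked it, it should no longer be listed (a sketch, Linux only):

```shell
# Report whether the running kernel still exposes virtualized VMLOAD/VMSAVE.
# "not advertised" means either the kernel/microcode masked it or the CPU
# never supported it in the first place.
if grep -qw v_vmsave_vmload /proc/cpuinfo; then
    echo "v_vmsave_vmload: advertised"
else
    echo "v_vmsave_vmload: not advertised"
fi
```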

So the officially acknowledged bug is nested virtualization, but enabling device integrity inside a Windows 11 or Server Datacenter VM will trigger a host reset as well.

We just had to fix a 7900X with that exact issue.
Disabled device integrity in the VM and it’s been up for a week straight.

Super strange.

Some guys grabbed the 4564P for $279 when NewEgg ran a special.

I saw it for $449 a couple weeks ago when I ordered my 4464P.


Thanks for the tip about the EPYC 4004 and the 4564P. Do you think virtualization will be unstable on Ryzen 7000 and 9000 CPUs (like a 7900X)? The issue is that the 4564P costs around 800-900€ in Europe, where I can get a 7900X or similar for almost half the price.

Does anyone else have a take on the storage part of this project (the M.2 SSDs in a PCIe adapter, set up as a Linux md RAID 10)?

It works fine, and again, as stated above, it doesn’t affect all hardware, just a few specific BIOS releases.
I run a few VMs in bhyve on FreeBSD and it works great without any issues at all, and it has been doing so for a few months now. That being said, it’s a 7900, not a 7900X, in my case :wink:

To be fair, I find the comparison quite odd, as both are based on the Zen 4 arch, so performance is more or less identical, or at least AMD hasn’t said otherwise.
