Ampere Arm Max M128-30 -- Setting up Windows 11 for ARM VMs

Background

This is the System76 Thelio Astra, configured with 2 TB of local storage and 512 GB of RAM. It is a beast – easily the fastest Arm desktop you can get in 2025, and a much better Arm developer experience than other Arm-based solutions I’ve tried, including the Qualcomm Arm dev kit that was eventually cancelled/refunded/recalled.

It ships with Ubuntu 24.04 LTS, which is where a lot of Ampere’s testing and development happens first.

The Arm cores are Neoverse N1 – 128 of them at 3.0 GHz. Since this CPU is designed for server workloads, the cores run at 3 GHz all the time, even under full load.

Pre-setup

Apt install all the deps
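The post doesn’t name the exact packages, but on Ubuntu 24.04 a reasonable set looks like this (package names are my assumption, not from the video; adjust to taste):

```shell
# Hypothetical dependency list for KVM + virt-manager on Ubuntu 24.04 arm64.
sudo apt update
sudo apt install -y qemu-system-arm qemu-utils \
  libvirt-daemon-system libvirt-clients virt-manager \
  qemu-efi-aarch64 swtpm    # aarch64 UEFI firmware + software TPM for Win11

# Let your user manage VMs without sudo (log out and back in afterwards)
sudo usermod -aG libvirt "$USER"
```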

Windows 11 for Arm

https://www.microsoft.com/en-us/software-download/windows11arm64

VirtIO driver ISO (important!)

https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/
I used virtio-win-0.1.271-1 for the video.

Virt Manager

Before doing anything else, enable XML editing. The GUI has bugs and wrong assumptions that are worse on the Arm platform, and we need to edit the XML to work around them. Do this first!
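The checkbox lives under Edit → Preferences in virt-manager. If you prefer the CLI, you can skip the GUI entirely and edit the domain XML with virsh (the VM name `win11` here is my placeholder):

```shell
# Print the current domain XML to stdout
virsh dumpxml win11

# Open the domain XML in $EDITOR; libvirt validates it on save
virsh edit win11
```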

image

Virt-manager’s GUI has a lot of quirks that have persisted for years, and they’re worse on the Arm platform.

Next, we’ll set up the OS from the ISO.

New VM

Idiocy Alert #1

This is a bug I reported years ago, but the ticket was closed. When you pick more CPUs here, virt-manager creates more CPU sockets, not cores – as if you had a 16-socket system – which is a terrible idea for most modern OSes.

For now, leave one CPU; we’ll fix it in another GUI later. Also select a reasonable amount of RAM, such as 16384 or 32768 MB.

Be sure to check “Customize configuration before install,” or it’ll fail anyway with

image

…turns out the Hyper-V fixups aren’t available on the Arm platform. Seems obvious when one says it out loud…

Idiocy Alert #2

Hit the XML tab and delete the Hyper-V features section entirely.
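For reference, the offending block is the Hyper-V enlightenments under `<features>` – something like the sketch below (exact contents vary with your virt-manager version):

```xml
<features>
  <acpi/>
  <!-- Delete this whole <hyperv> block on aarch64: the enlightenments
       it enables aren't implemented for Arm guests, so boot fails. -->
  <hyperv mode="custom">
    <relaxed state="on"/>
    <vpindex state="on"/>
    <synic state="on"/>
  </hyperv>
</features>
```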

This is the cpu configuration I went with:
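In domain-XML terms, a sane topology is one socket with many cores – something like this (the vCPU count of 16 is my example, not necessarily what was used in the video):

```xml
<vcpu placement="static">16</vcpu>
<cpu mode="host-passthrough" check="none">
  <!-- One socket, 16 cores, 1 thread each: what you actually want,
       instead of virt-manager's default of one socket per vCPU -->
  <topology sockets="1" dies="1" cores="16" threads="1"/>
</cpu>
```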

Additional Setup

For whatever reason it was also necessary to add more console hardware: a Spice channel, a Spice display, and a ramfb video device.
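In the domain XML, those devices come out roughly like this (a sketch – virt-manager generates equivalent entries when you add them through “Add Hardware”):

```xml
<devices>
  <!-- Spice display -->
  <graphics type="spice" autoport="yes">
    <listen type="address"/>
  </graphics>
  <!-- ramfb: a simple framebuffer that works on aarch64
       before any guest drivers are installed -->
  <video>
    <model type="ramfb"/>
  </video>
  <!-- Spice agent channel for clipboard/resize support -->
  <channel type="spicevmc">
    <target type="virtio" name="com.redhat.spice.0"/>
  </channel>
</devices>
```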

I had some trouble with keyboard/mouse input on the console after that, so I just passed through a separate USB mouse and keyboard to the VM.

It was also necessary to copy the arm64 virtio storage and SCSI drivers to a USB memory stick and pass the stick through to the VM, because the Windows 11 Arm installer doesn’t seem to ship any drivers for VirtIO, emulated SATA, or SCSI…

Set up USB with VirtIO drivers

Simply copy the arm64 drivers to a USB stick, then use Add Hardware to map that USB stick to the VM.
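You can do the same passthrough from the CLI with virsh. The VM name `win11` and the vendor/product IDs below are placeholders – find yours with `lsusb`:

```shell
# Find the stick's vendor:product ID, e.g. "0951:1666 Kingston DataTraveler"
lsusb

# Hypothetical hostdev snippet -- substitute your own IDs
cat > usbstick.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0951'/>
    <product id='0x1666'/>
  </source>
</hostdev>
EOF

# --config makes the attachment part of the persistent definition
virsh attach-device win11 usbstick.xml --config
```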


Here I have mounted the iso and copied the needed folders. Wait for the transfer to finish.

Finally, we can begin the installation.

Windows, how do I loathe thee, let me count the ways

This screen is not “can’t find disk” – it is “can’t find CD-ROM.”

Use the Browse button, go to the USB stick with the arm64 driver copy, and start loading drivers.

Windows Has Bugs Too

Okay, so the driver was added so the installer can find the USB “CD-ROM” – but that doesn’t survive a reboot. You have to load it here as well, or it doesn’t survive installation.

Otherwise, you’ll get INACCESSIBLE_BOOT_DEVICE.

Use load driver on this screen to load the storage driver and other virtio drivers so they’re available on first boot.

FIN

This is pretty cool

Arm VFIO Bonus Round

With Linux working this well on an Ampere workstation with 128 cores, and with the IOMMU working, VFIO passthrough of PCIe peripherals opens the door to easier hardware and driver development. In the video I passed through an Intel AX210 – a device that currently doesn’t have ARM64 drivers – which makes it easier to do device driver development (especially on Windows).

Part of the reason for this is that Windows doesn’t have many low-level facilities for resetting PCIe devices or hardware peripherals without a reboot, whereas rebooting a VM with an assigned PCIe device can usually reset the device or root bridge without issues. The server pedigree of these Ampere CPUs seems to make this functionality quite robust – something I can’t always say of competing platforms.
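The host-side mechanics of the passthrough are the standard Linux VFIO recipe. The PCI address `0000:01:00.0` is a placeholder (use `lspci` to find your device), and the `8086:2725` ID is what I believe the AX210 reports – verify against your own hardware:

```shell
# Identify the device and its vendor:device ID
lspci -nn | grep -i ax210      # e.g. 01:00.0 Network controller [8086:2725]

# Load vfio-pci and hand it the device by vendor:device ID
sudo modprobe vfio-pci
echo "8086 2725" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

# Or unbind an already-claimed device and bind it by PCI address
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```

Once the device is owned by vfio-pci, virt-manager’s “Add Hardware → PCI Host Device” can assign it to the VM.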

To that end I can also pass through an Nvidia RTX 4060 Ti to a virtualized Ubuntu instance and successfully run the ARM64-native Nvidia drivers in the VM.

If you are a hardware or driver developer looking to work on the Arm platform, it is hard to imagine a better workflow than tackling these sorts of problems in a self-contained virtual machine with real hardware attached to it.


Following… I hadn’t even thought about VMs for Windows device driver development, but this server seems to be an ideal companion for that task – especially since it’s nearly silent compared to many of the pre-assembled, ready-to-go server-oriented Ampere builds you can buy.


I keep throwing around the idea of getting one of these ASRock boards and using some second-hand RAM and a CPU, plus a 4070 Ti that I have lying around. If there were an ARM version of Proxmox you could throw on here, this would replace my NAS in a heartbeat! And if there is ever a version of Steam that can run on this bare metal, I might even switch over to this for my desktop!


Hmm, if Android emulation works well, this could be a great machine for the test setups I work with.

It would be great if this could run a multitude of phone emulators at the same time.
We are running into so many issues using Android emulators on our Windows test hosts, including the PCs just dying (no bluescreen or anything, just a black screen with 50 W power draw).

We have physical phones too but they are an even bigger maintenance nightmare, so a good stable virtualization setup would be amazing.


Got a step-by-step for doing a setup to test/investigate?


It’s used for Android dev/test by many automakers and game developers, and as the Arm virtual hardware host by various entities doing pen testing of virtualized phones.

Will get back to you 🙂

Hey Wendell,
Apologies for the late reply, but it’s been a bit difficult to get the time required to make a proper post.
Would it be possible to get back to this in 3–6 months? The current temp solution we have (hard-rebooting the PCs we use) works well enough for this problem not to be a priority right now – assuming we can pick this up when we’ve wrapped up some other things, that is.
If that’s not possible, I’ll bring up that this is an opportunity with an expiry date and I guess we’ll get some limited time.

Hey all,

I am hoping you geniuses in the server world can help me out with the mess I have gotten myself into. Just an FYI, I have zero familiarity with servers, but I managed to purchase an Ampere Altra with 160 cores, 128 GB of RAM, and two 1 TB NVMe SSDs (Samsung and Intel). I installed the ESXi Fling thinking I should be able to run all of my lab VMs on it, including the ones that do not support the ARM64 architecture. I have seen workarounds: ditch the ESXi Fling and replace it with Ubuntu ARM64 plus QEMU to run the x64 and x86 workloads, or install a Windows 11 ARM64 VM with Hyper-V to run them. For context, the workload is for lab purposes and will run only 8–10 VMs; the most resource-hungry VM would be EVE-NG, which runs the network and security devices. Please help me understand and make the right choice in moving forward with this. I appreciate all your help.