Ryzen Homelab w/ Virtualization (self-hosted AWS)

Hi guys! I’m looking to do a new homelab/home server build.

My AWS credits recently expired and I am looking to create a sort-of self-hosted AWS EC2 service.

I am mainly looking to spin up linux VMs for hosting my websites/webapps/files. I would also like to be able to train data science models on a GPU.

I was planning to install KVM (managed with Kimchi or Mist.io?) on a single 500 GB M.2 NVMe drive.
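Before putting KVM on it, it's worth confirming the CPU and BIOS actually expose hardware virtualization (SVM on Ryzen, called "AMD-V" in some firmware menus) — a quick sketch in plain POSIX shell:

```shell
#!/bin/sh
# Count hardware-virtualization CPU flags (svm = AMD-V, vmx = Intel VT-x).
# grep -c exits non-zero when the count is 0, so swallow that with || true.
flags=$(grep -cE 'svm|vmx' /proc/cpuinfo || true)
if [ "${flags:-0}" -gt 0 ]; then
    echo "hardware virtualization available"
else
    echo "enable SVM in the BIOS first"
fi
```

If the flag count is zero on a Ryzen board, SVM is almost always just disabled in the UEFI setup rather than missing from the CPU.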

I’m still trying to find compatible hardware parts that virtualize easily. GPUs seem to be tricky, though. If I buy a graphics card or two, would I be able to use a GPU in my VMs with IOMMU passthrough or SR-IOV? And apparently NVIDIA blocks virtualization on consumer cards?
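For passthrough, the thing to check first is how the board splits PCI devices into IOMMU groups — everything in a group has to be handed to the same VM together. A small sketch that walks the standard sysfs layout (the function takes the base directory as a parameter purely so it's easy to try against a fake tree; on a real host you'd call it with no argument):

```shell
#!/bin/sh
# Print each PCI device together with the IOMMU group it belongs to.
# Layout assumed: <base>/<group-number>/devices/<pci-address>
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue       # skip the unexpanded glob
        group="${dev%/devices/*}"       # strip /devices/<addr>
        group="${group##*/}"            # keep just the group number
        echo "group $group: ${dev##*/}"
    done
}
list_iommu_groups
```

If the GPU shows up in a group of its own (plus its HDMI audio function), passthrough is usually straightforward; if it shares a group with the chipset or other slots, all of those devices move together.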

Tentative parts:
Ryzen 1700 or 1700X
Asrock Taichi X370 Motherboard
500GB M.2 NVMe Drive (for hosting the VMs and maybe hosting personal files)
16 GB or 32 GB of Taichi-compatible RAM (probably OC’d to 3200, since Ryzen supposedly depends heavily on RAM clock speed)

I think in Wendell’s Taichi mobo review, he said multiple GPUs share the same IOMMU group. Does this mean I could still use the GPU, but only for one VM?
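If the card does land in a usable group, the usual next step is reserving it for vfio-pci at boot so the host driver never grabs it. A sketch of the GRUB side — note the two `10de:…` vendor:device IDs below are placeholders, not your card's real IDs; pull the actual ones from `lspci -nn` (the GPU and its HDMI audio function):

```shell
# /etc/default/grub -- enable the IOMMU and reserve the GPU for vfio-pci.
# The two IDs are hypothetical; substitute your card's real vendor:device pairs.
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"
```

Then regenerate the GRUB config (e.g. `update-grub` on Debian/Ubuntu) and reboot before defining the VM.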

Also, as a side note, I was thinking I could have a Plex media server VM and an ownCloud (file hosting) VM. Haven’t looked much into how that works yet, though.

Disclaimer: I’m new to the homelab/virtualization scene


Thought about OpenStack?


Ooo interesting. I am looking into it right now and I’m intrigued.

I’m still wondering if my hardware will support creating vGPUs - especially if I choose an NVIDIA card. I also wonder if everything will run fine on an M.2 NVMe drive instead of a standard SATA drive.

This looks like a cool project though. Might be exactly what I need.

As long as you use KVM and spoof the Hyper-V vendor ID for Windows guests, NVIDIA GPUs can be passed through just fine.
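For reference, the spoof lives in the guest's libvirt domain XML, roughly like this (the `value` is an arbitrary string up to 12 characters; the `<kvm><hidden>` element additionally hides the KVM signature from the guest):

```xml
<features>
  <hyperv>
    <vendor_id state="on" value="whatever1234"/>
  </hyperv>
  <kvm>
    <hidden state="on"/>
  </kvm>
</features>
```

Edit it with `virsh edit <domain>` and restart the guest for it to take effect.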

SR-IOV requires specific enterprise GPUs.


I’d consider using a container of your choice for those instead of a whole VM. Much less overhead that way.
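With Docker Compose, something like this gets both up — a minimal sketch, assuming the public `plexinc/pms-docker` and `nextcloud` images (Nextcloud being the actively maintained ownCloud fork); the host paths are placeholders:

```yaml
# docker-compose.yml -- minimal sketch, not tuned
services:
  plex:
    image: plexinc/pms-docker
    network_mode: host           # Plex client discovery works best on the host network
    volumes:
      - /srv/media:/data         # placeholder media path
      - /srv/plex-config:/config
  files:
    image: nextcloud
    ports:
      - "8080:80"
    volumes:
      - /srv/nextcloud:/var/www/html
```

Both containers can live alongside your VMs on the same KVM host, which keeps the heavyweight VM slots free for things that actually need a full kernel.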
