Hey y’all, I’m in the process of spending my end-of-year bonus on finally building out a server. Being more of a software person than a hardware person, I’ve been doing research on what hardware I’m looking for. Letting my requirements drive my solution, my uses for this server come down to two main things:
Have a VM where I pass through a GPU (my current RX 580) so that I can use it as my main personal Windows workstation. I would not be running this headless; I’d be running HDMI cords from my GPU to a monitor.
A K8s cluster so that I can run various services.
Plex
TrueNas
WikiJS
PiHole
etc. (other things that don’t take up too much system oomph).
For the hardware, I’m mainly stuck between two choices at the moment:
Ryzen 7 7900 or Intel i7-13700
From what I’ve seen online, both are within spitting distance of each other in performance. The Ryzen does seem to be more efficient, but the Intel includes Quick Sync Video, which would be great for Plex transcoding.
Which is where my main question comes to a head:
I don’t seem to understand the relationship between PCI devices and the resulting devices that would be found in /dev/....
Maybe another way of saying it: if I have a dedicated GPU in my system along with an Intel iGPU, can I pass the dedicated GPU into a VM while also sharing access to Quick Sync Video (which I believe happens through sharing /dev/dri/...), and still leave the integrated GPU for the host OS? Or do I even need to worry about the host OS having a GPU?
Hopefully these are silly questions with easy answers
Personally I would go with the 7900 - you can always use the GPU or buy an extra GPU for transcoding purposes.
As for the dev tree, that is a whole different can of worms. The dev tree is divided by the type of equipment, not the interface. So, for instance, block devices (hard drives and other things that may carry filesystems) show up as /dev/sda, /dev/nvme0n1, and so on, mice and keyboards under /dev/input, and graphics cards under /dev/dri.
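To make that concrete, here’s a tiny sketch (the `classify_dev` helper name is mine) showing how a /dev entry is categorized by type: disks are block devices, while the GPU nodes under /dev/dri are character devices, same as /dev/null.

```shell
# classify_dev is a hypothetical helper: /dev entries encode the device
# *type* (block vs. character), not the bus the hardware sits on.
classify_dev() {
    if [ -b "$1" ]; then echo "block"      # e.g. /dev/sda, /dev/nvme0n1
    elif [ -c "$1" ]; then echo "char"     # e.g. /dev/null, /dev/dri/card0
    else echo "other"
    fi
}
classify_dev /dev/null   # prints "char"
```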
Once you realise this I think it will be a lot easier to understand this filesystem. What you want for virtualisation is good IOMMU groups however, and that requires a solid motherboard. See this for further information:
AM5 will get long support, until at least 2025. I would choose the 7900(X) over the 13700(K).
For Plex, it depends on how many concurrent streams you need to transcode. If you only need 2 or even 3 streams concurrently, you can probably go without GPU encoding.
Thanks for the white paper; it was really good at describing the bare-metal → hypervisor → VM layers for a physical device. It makes much more sense now what I’m trying to “carve out” from bare metal to dedicate to certain VMs (or at least the guides on mucking around in GRUB and setting flags make sense now, rather than being arcane commands I’d just be typing in).
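For reference, the GRUB side of those guides usually boils down to something like this. This is a sketch, not a drop-in config: `1002:67df`/`1002:aaf0` are the usual vendor:device IDs for an RX 580 and its HDMI audio function, but check your own with `lspci -nn`.

```shell
# /etc/default/grub — enable the IOMMU and reserve the GPU for vfio-pci.
# intel_iommu=on is needed on Intel hosts; AMD hosts generally have the
# IOMMU enabled by default when it's on in the BIOS.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=1002:67df,1002:aaf0"

# Then apply and reboot:
#   update-grub && reboot
# and confirm vfio-pci claimed the card afterwards with:
#   lspci -nnk | grep -A3 VGA
```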
My minimum use case is for me and my partner. Primarily using chromecasts and then potentially streaming from our phones if either of us travel. Worst case it’d be both of us streaming remotely at the same time during travel.
However, I’d love to open that up to our families, where it could get up to 9 streams. At that point my upload speed (~80 Mb/s) starts to become the potential bottleneck for 1080p h264 streams. (I’d rather not dedicate 100% of my network upload to just streaming.)
I’m trying to keep the wattage down so that I’m not running a miniature space heater and I can keep my fledgling homelab to just one 20 A circuit in my house. There’s a spot in my basement I could potentially move it to that has a dedicated circuit with nothing plugged into it, and I’m keeping an eye on the climate this season to see if I’m willing to throw my hardware down there. If that works out, I’d run a dedicated Ethernet line between there and my office. (Plus, adding another circuit from my breaker panel is much easier at that location.)
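Back-of-the-envelope, assuming roughly 8 Mb/s per 1080p h264 stream (a common Plex remote-quality setting; your bitrates may differ):

```shell
# Rough upload budget: 9 concurrent 1080p H.264 streams at ~8 Mb/s each.
streams=9
per_stream_mbps=8
total=$((streams * per_stream_mbps))
echo "${total} Mb/s of an ~80 Mb/s uplink"   # 72 Mb/s: nearly the whole pipe
```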
Adding in a budget GPU for streams is doable, but something I want to avoid initially. That said, for 9 streams there seem to be great budget, low-power options (nVidia Hardware Transcoding Calculator for Plex Estimates). Plus maybe even dedicated AV1 support in future hardware, if that actually takes off and replaces h264/h265 (instead of having to trudge through a batch re-encode of all my media … but that’s a future-me problem).
Adding another dGPU for Plex is doable, but it takes a PCIe slot, which is a scarce resource on both AM5 and Z790 platforms nowadays. The market leans toward more M.2 connectivity rather than PCIe slots, which I think is a mistake.
If you can live without AV1 support, you can do it with Intel Quick Sync instead of another dGPU. Put Plex in a container and share the Intel iGPU with the container. I’m assuming you’d run a Linux system as your host.
Besides the GPU, you’ll probably want a dedicated 10 GbE NIC with SR-IOV support. That’s why you run out of PCIe slots very quickly.
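The container side of that is usually just a device mapping. A sketch with Docker and the official `plexinc/pms-docker` image; the `/srv/...` host paths are placeholders, adjust to taste:

```shell
# Share the iGPU's /dev/dri nodes with a Plex container (sketch; assumes
# the host kernel already exposes /dev/dri, and note Plex requires a
# Plex Pass to turn on hardware transcoding in Settings → Transcoder).
docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -v /srv/plex/config:/config \
  -v /srv/media:/data \
  -p 32400:32400 \
  plexinc/pms-docker
```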
This is pretty much the same project I’m working on. From this weekend’s tinkering, I can say that IOMMU groupings and PCIe passthrough could both be big headaches depending on your choice of hardware. So far for myself with an Asus ROG Crosshair X670E Extreme and Ryzen 7950X:
The SATA, networking, and Thunderbolt controllers are in the same IOMMU group and I cannot pass just the Thunderbolt controller to the guest VMs while leaving storage and networking to the host.
I am unable to pass the iGPU through to the virtual machine despite it being in its own IOMMU group, and I suspect this is the same reset issue that’s common to all of AMD’s iGPUs.
I get error code 43 in the Windows guest when passing the dGPU (NVIDIA RTX 6000 Ada) to it. The dGPU doesn’t do any better in an Ubuntu guest with either the proprietary or the open-source drivers.
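For anyone following along, the grouping situation is easy to inspect by walking sysfs. A small sketch (the `list_iommu_groups` name is mine; it takes an alternate root directory so it can be exercised off-box):

```shell
# Print "IOMMU group N: <pci address>" for every device by walking
# /sys/kernel/iommu_groups (empty output means the IOMMU is off).
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for dev in "$root"/*/devices/*; do
        [ -e "$dev" ] || continue
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "IOMMU group $group: $(basename "$dev")"
    done
}
list_iommu_groups
```

Devices that land in the same group generally have to be passed to a guest together, which is exactly the SATA/networking/Thunderbolt problem I hit.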
And if I can’t, I can’t. Oh darn, I’ll have to build another server if I want the ability to stream to more people. But as I’m forming up the list of hardware I’ll be purchasing in the near future, I’m just trying to set myself up to be flexible where possible, while making sure I hit my minimum use cases.
Yes, Proxmox (… so Debian), which has an easy method for opting into kernel 6.2. That’ll be nice since I’m using relatively new hardware.
Well good luck as you debug your issues. I’m hoping that my RX 580 that I’m planning on passing through won’t be too much of an issue with the guest VM.
Not all is lost even if a split is necessary: while the AM5 platform should make a great headless server, both AM4 and Intel LGA 1700 can offer good deals if you need to split… Here is a decent server build example with a Ryzen 5600 and 32 GB of RAM, for instance. Note that the 1 TB NVMe is ridiculously large for the job, but it feels silly to pay $10 or $15 less for half the size:
My first post on the forums was going to be about the build I’m just starting, but here I am instead, heh. I’ve always been interested in this approach, as a Twitch streamer: a specific Windows install, tailored and maintained just for OBS and Videah Gaems. Then my ex-sysadmin brain kicks in with the old “why are you introducing additional points of failure” argument and I skitter away.
If you can spare the PCIe slot, the lowest-end Intel Arc card is still superb for AV1. A mate uses one in their Plex PC and it’s been crushing it. I’m hoping to shift to AV1, using my gaming PC as the encoder because 4080. Just need to get all my clients to line up.
I actually have a current AM4 system (still rocking my 1600X) and was giving some hard thought into getting a 5950x. However once built out it didn’t save enough money to just go ahead and upgrade to the new AM5 platform.
I’m sort of hearing the siren song of the 7900/7900X when it comes to their productivity benchmarks. Though having those 8 extra threads on a 5950X for shenanigans is also quite appealing.
That’s also something I’ve looked into (Intel QSV’s Wikipedia page, that “Arc Alchemist” column). It’s an option that looks neat for the future.
Regardless, I’m hoping to make a new thread detailing my adventures and setup as I get things going (and hopefully it’s more a post of success than one needing help and support).
Yes, I am doing exactly this with an Intel E-2288G.
I pass the integrated GPU to my linux server (and with some clever reordering of loading/unloading of kernel modules, I’m able to get console output to go through the proxmox VNC display adapter while still using the iGPU for transcoding). This makes it easier to administer remotely if I have any issues with networking.
And then for a while I was passing my 3090 to the windows VM as a raw PCIe device. I’ve since moved it back to linux since windows has been a really poor platform for some of the ML experiments I’ve been doing lately, but I’m planning to add a 2080 I have laying around to the system and use that for windows once I free up a PCIe slot.
So yes, +1 to Intel in this situation. While spending lots of time on the console-redirection hack, I ran across many posts like yours, full of frustration with this use case on AMD CPUs/iGPUs.
Attached an image of what the first set of IOMMU mappings look like in proxmox. The first (empty name) spot is the iGPU.