Question about Ryzen 1700 PCIe passthrough on AGESA 1006

Hi there, I need some help with testing on the Ryzen 1700 (or better), to see whether the following scenario is possible at this time:

Ryzen 1700, 16 or 24 GB of 3200 RAM, various hard drives, and three NICs plus the board's built-in NIC and my RX 480. I want to put Ubuntu 16.04 Server on it as the host (I use Ubuntu Server simply because I am lazy, have been using it since 2010, and haven't found any other distro compelling enough to make me switch). So: 16.04 with KVM for five VMs. One will run pfSense and handle the firewall and routing. Three will run 16.04 Server: one for a NAS and internal VPN, one for Nextcloud, and the last for OnlyOffice (Nextcloud and OnlyOffice don't really play nice on the same machine; I know it can be done, but it's just easier to separate them).

And now for the 5th VM: I want to run Windows 8.1 for games and my general computer use. So I want to pass the RX 480 in the PCIe 3.0 x16 slot and one of the four NICs through to it, along with using an Opteron or other CPU model in KVM so I can get around Microsoft's unbelievably stupid ban on updates for Win 7 and 8.1 on new CPUs. I absolutely detest Win 10 and refuse to use it. However, like a lot of us, I have programs that just don't seem to work on Linux; I know it's getting better, but it's still just not there, and Wine has never worked that well for me.
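For reference, presenting the guest with a different CPU model is just a setting in the libvirt domain XML, if that's how the VM ends up being managed. A minimal sketch (the model name is an example; pick one your QEMU build actually supports, e.g. from `virsh cpu-models x86_64`):

```xml
<!-- Domain XML fragment: show the guest an older CPU model instead
     of the host's Ryzen. "Opteron_G5" is just an example model. -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Opteron_G5</model>
</cpu>
```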

Before I commit to this type of machine, as opposed to keeping my current server (an Athlon 860K, 8 GB DDR3-1600, and various hard drives for the NAS) and my Win 8.1 rig (i5-3570K, 12 GB DDR3-1600, RX 480, and a 256 GB SSD) and just not upgrading at this point, I need to know whether the RX 480 in the PCIe 3.0 x16 slot and one NIC can be passed to the Win 8.1 VM using the IOMMU on either a B350 or X370 motherboard. I have some HD 7850s and a 6450 lying around for the host, which I would put in the PCIe 2.0 x4 slot (which would probably end up running at x1) just for booting the host. I shell into all the 16.04 systems and pfSense; on the host I add XFCE and run x2go from my Windows machine, so I don't really need a video card except to POST at boot, to set up the host to begin with, or if something goes horribly wrong and I have to manually fix something on the host or a VM. The other VMs I just shell into with WinSCP and PuTTY, so they are all just basic servers.

I currently have this setup running on the equipment mentioned above. pfSense runs great, and the host plus all the other VMs use about 20% of the Athlon 860K and about 5 GB of RAM. So the only thing I need to add into the mix is the Win 8.1 VM, with enough horsepower to exceed the i5 rig I currently use.


@wendell Maybe you could help me out with this since there are no other takers?

Well, not sure if @wendell has already had time to dive a bit deeper into Ryzen and GPU passthrough.
It seems like the IOMMU groupings on most AM4 boards are kinda messy.
But the upcoming AGESA patch might fix some of that.
I still think you are probably going to need an X370 board to be successful with this.

Hopefully we'll see a future video on this subject, because I think a lot of people have the same questions.

I have tested this on an ASRock Fatal1ty X370 Gaming K4, and with it GPU passthrough in all its forms is fine. No problems whatsoever.

HOWEVER

PCIe x1 slots come off the chipset and are all still in one group. This sucks.
If you have a motherboard with a sensible BIOS, you can use the Linux ACS override patch to work around that.
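For anyone wanting to check their own board, the groupings can be listed straight from sysfs. A small sketch (assumes `lspci` from pciutils is installed; it simply prints nothing on a machine without IOMMU groups exposed):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices in it. If the GPU and its
# HDMI audio function share a group with unrelated devices, you'll need
# the ACS override patch, booted with something like:
#   pcie_acs_override=downstream,multifunction
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue   # no IOMMU groups exposed: nothing to print
    g=${d%/devices/*}         # strip "/devices/<pci-address>"
    printf 'Group %s: %s\n' "${g##*/}" "$(lspci -nns "${d##*/}")"
done
```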

Also of note: some boards have a glitch whereby populating the last PCIe x1/x4 slot, the one close to the second PCIe x16 slot, causes that x16 slot (really x8 with dual GPUs) to run at a slow x4 or even x2 with some cards in passthrough mode.

If you get such a situation, move your PCI-e NIC card to another slot and try again.

That said, you don't need to pass through a NIC, or even audio for that matter. It is perfectly possible to run a virtual bridge for your network with the emulated e1000 NIC, and to use a PulseAudio virtual output for the VM's audio.
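In libvirt terms the bridged NIC is just a few lines of domain XML; a sketch, assuming a host bridge named `br0` already exists:

```xml
<!-- Guest NIC attached to an existing host bridge "br0",
     emulated as an Intel e1000: no passthrough needed. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='e1000'/>
</interface>
```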

EDIT: Also, Ubuntu currently sucks for passthrough, as it requires lots of nonsensical workarounds to make VFIO work.
Stick to CentOS/Fedora/Arch or whatever else that isn't Debian/Ubuntu.
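For what it's worth, on those distros the core of the setup is usually just a modprobe config; a sketch (the PCI IDs below are examples only, substitute whatever `lspci -nn` reports for your own card and its audio function):

```
# /etc/modprobe.d/vfio.conf
# Claim the GPU and its HDMI audio function for vfio-pci before the
# amdgpu driver can grab them. IDs shown are examples; use your own.
options vfio-pci ids=1002:67df,1002:aaf0
softdep amdgpu pre: vfio-pci
```

Then rebuild the initramfs and reboot, and the card should show as bound to `vfio-pci` in `lspci -k`.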