Anyone tested SR-IOV on AM4 boards?

So: I have two boards, an ASRock Taichi X370 and some ASUS X570 thing, and if memory serves correctly, both have SR-IOV enable options.

What I’m wondering is whether there are any pitfalls to buying SR-IOV capable hardware for these boards, or if I should just go grab something like an Intel X550-T2 and a cheap MikroTik 10G switch, make like 10 VFs on the NIC, and get sweet, sweet vswitch performance at over 1000 Mbps for each VM.

Because I don’t know if anyone else has noticed, but you could have a metric butt-ton of VMs doing whatever the hell you want and still get 20 Gbps of throughput on your server with two goddamn cables. Sounds about as cool as it gets.
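For what it’s worth, once an SR-IOV capable card is sitting in a Linux box, you can sanity-check how many VFs it will actually expose straight from sysfs. A minimal sketch (the interface name is a placeholder; swap in whatever `ip link` shows for the X550 port):

```python
#!/usr/bin/env python3
# Minimal sketch: ask the kernel how many VFs an SR-IOV capable NIC exposes.
# Assumes a Linux host; "enp1s0f0" is a hypothetical interface name.
from pathlib import Path

iface = "enp1s0f0"  # placeholder -- substitute your real X550 port
dev = Path(f"/sys/class/net/{iface}/device")

total = (dev / "sriov_totalvfs").read_text().strip()
current = (dev / "sriov_numvfs").read_text().strip()
print(f"{iface}: {current} VFs enabled out of a possible {total}")
```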

The plan
Get this and this,
along with something like this.

Slot the one-port card into the PC and the two-port card into the Proxmox server, set up IOMMU on the server and set the number of VFs (rough sketch of that step below), and set up the switch (I’m still looking for switches or routers that run more FOSS stuff, but from what I can find you pretty much just build a special little box running Linux/FreeBSD and put a ton of NICs in it). Badda bing, badda bop: everything gets its own virtual interface with decent, or even really really good, performance, and I can finally start getting some big-boy transfer speeds over my network.

And I may be able to get rid of this rat’s nest of cables without spending money on some giant 24-port switch or trying to cram more NICs into my server.
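For the record, here’s a rough sketch of what the “set the number of VFs” step could look like on the Proxmox host. It assumes IOMMU is already enabled (BIOS option plus the usual kernel command-line flags) and that the X550 port shows up as `enp1s0f0`; the interface name and VF count are placeholders, not gospel, and it needs root to write to sysfs.

```python
#!/usr/bin/env python3
# Rough sketch (run as root): carve VFs out of an SR-IOV capable NIC via sysfs.
# Assumptions: Linux/Proxmox host, IOMMU already enabled, and a placeholder
# parent interface name "enp1s0f0" -- substitute your real X550 port.
from pathlib import Path

iface = "enp1s0f0"   # hypothetical physical function (PF) interface
num_vfs = 10         # how many virtual functions to create

dev = Path(f"/sys/class/net/{iface}/device")
max_vfs = int((dev / "sriov_totalvfs").read_text())
if num_vfs > max_vfs:
    raise SystemExit(f"{iface} only supports {max_vfs} VFs")

# The kernel requires resetting the count to 0 before setting a new value.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(num_vfs))
print(f"Created {num_vfs} VFs on {iface}")
```

After that, each VF shows up as its own PCI device you can hand to a VM with passthrough. The setting doesn’t persist across reboots on its own, so it usually gets recreated from a boot-time script or similar.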

Idk if it’s changed, but the first two x16 slots share the same IOMMU group.
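For anyone who wants to verify that on their own board, the grouping is visible in sysfs once IOMMU is on. A quick sketch (nothing assumed beyond a Linux host with IOMMU enabled):

```python
#!/usr/bin/env python3
# Quick sketch: print every IOMMU group and the PCI devices in it, so you can
# see whether the two x16 slots (or the NIC) end up isolated or lumped together.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        print(f"group {group.name}: {dev.name}")
```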

That is what people over here like to call a “good look.” Fortunately I don’t have any GPU virtualization plans for the server.

The PC though…

I can’t speak for AM4 stuff as I’ve not tested SR-IOV there, but I run a couple of X399D8A-2T boards at work, and SR-IOV doesn’t work with either the built-in Intel X550 NIC or an X520 in the other board.

It’s not a problem, as I get 7 Gbps of throughput without it, which is plenty.

It is strange, though, as I’ve got GPU passthrough running with no problem on the same boards.

It could be a software thing; I’m using Hyper-V 2019.

Anyway, my point is not to worry about whether SR-IOV is working, as 10 GbE will still be hugely faster than 1 GbE, teamed or otherwise.

Quick note: I noticed that the Intel NICs you’re looking at are copper Ethernet (RJ-45), while the switch you posted is SFP+.

Yeah, I’d just be grabbing some of these

I don’t remember exactly what they’re called, but I call ’em GBICs even though I’m pretty sure that’s not the right term (SFP+ transceiver modules, I think). I remember reading a ServeTheHome article about like ten of them.

The professional in me says that shouldn’t be a problem, but the fedora-wearing FOSS boy in me wants to say it’s all Microsoft’s fault.

Why not one of these? Just curious. I love mine. https://www.ebay.com/itm/Netgear-XS708E-ProSafe-Plus-10GBase-T-8-Port-10Gbps-XS708E-100NES/184375468399?epid=170098413

Only eight ports, and it’s huge as hell. I move often enough that bulk is a factor for me.

I also hate Netgear, for no particularly good reason either.

Fair enough. It is large, but it’s also rackmountable (which is why I went with it).

To each his own. Personally, I wouldn’t want to buy a $40-$50 adapter for every port I’m using, which is why I went with it. But if you think it might end up a mixed RJ-45/SFP+ environment, then the MikroTik is probably the way to go. Let us know if you get SR-IOV working on it; I’d be interested to know. Cheers.

Yeah, I meant to say that the somewhat older Xeon boxes work just fine with SR-IOV on X520 cards, with Hyper-V set up in much the same way.


That’s definitely fair. Is it just a dumb switch, or does it have VLAN support and such?

It is a managed switch, but you have to go through their software to configure it, and that software is Windows-only.
I use mine as a dumb switch, though it’s kind of like a 10 GbE VLAN in that I use it solely to run my 10 GbE network at home (on a different subnet from my 1 GbE network).
