Enable SR-IOV on Intel 10GbE adapter?

Does anyone have tips on enabling SR-IOV virtual functions on an Intel X520 10-gigabit network adapter?

OS is Fedora 31, kernel 5.3.13-301.acspatch.fc31.x86_64
Card is an Intel X520-DA1

Debugging info:

# echo 7 > /sys/class/net/enp8s0/device/sriov_numvfs
-bash: echo: write error: Input/output error
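
For reference, the usual sysfs flow is to check how many VFs the PF supports, then reset the count to zero before requesting a new one, since the kernel refuses to change a nonzero count in place. A minimal sketch, assuming the PF is enp8s0:

cat /sys/class/net/enp8s0/device/sriov_totalvfs     # maximum VFs the PF supports (64 here)
echo 0 > /sys/class/net/enp8s0/device/sriov_numvfs  # reset first; a nonzero count can't be changed directly
echo 7 > /sys/class/net/enp8s0/device/sriov_numvfs  # then request the desired count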

# dmesg
[794215.424272] ixgbe 0000:08:00.0 enp8s0: SR-IOV enabled with 7 VFs
[794215.425165] ixgbe 0000:08:00.0: removed PHC on enp8s0
[794215.514577] ixgbe 0000:08:00.0: Multiqueue Enabled: Rx Queue count = 4, Tx Queue count = 4 XDP Queue count = 0
[794215.553253] ixgbe 0000:08:00.0: registered PHC device on enp8s0
[794215.730059] br0: port 1(enp8s0) entered disabled state
[794215.764002] pci 0000:08:10.0: [8086:10ed] type 7f class 0xffffff
[794215.782748] pci 0000:08:10.0: unknown header type 7f, ignoring device
[794215.788066] ixgbe 0000:08:00.0 enp8s0: detected SFP+: 5
[794216.044593] ixgbe 0000:08:00.0 enp8s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[794216.052254] br0: port 1(enp8s0) entered blocking state
[794216.052881] br0: port 1(enp8s0) entered forwarding state
[794216.827781] ixgbe 0000:08:00.0: Failed to enable PCI sriov: -5

# lspci -s 08:00.0 -vv
08:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-1
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 32
        NUMA node: 0
        Region 0: Memory at 47e7ff00000 (64-bit, prefetchable) [size=512K]
        Region 2: I/O ports at 4000 [size=32]
        Region 4: Memory at 47e7ff80000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at bad00000 [disabled] [size=512K]
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
                IOVSta: Migration-
                Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
                VF offset: 128, stride: 2, Device ID: 10ed
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 00000000bae80000 (64-bit, non-prefetchable)
                Region 3: Memory at 00000000bad80000 (64-bit, non-prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe
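
For what it's worth, the VF addresses follow from the SR-IOV capability above: VF n gets routing ID = PF RID + VF offset + n * stride. The PF at 08:00.0 has RID 0x0800; adding the offset of 128 (0x80) gives 0x0880, which decodes to bus 08, device 0x10, function 0, exactly the 0000:08:10.0 in the dmesg output. With a stride of 2, subsequent VFs land at 08:10.2, 08:10.4, and so on. The "unknown header type 7f, ignoring device" line means config space reads of that VF came back as all ones, i.e. nothing is responding at that address.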

# cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.13-301.acspatch.fc31.x86_64 root=UUID=f475580b-35fe-4bd2-a027-98fd9162f747 ro resume=UUID=74b0f396-4d81-49a4-bc90-959789232f6a pcie_acs_override=downstream,multifunction iommu=1 amd_iommu=on rd.driver.pre=vfio-pci systemd.unified_cgroup_hierarchy=0

I checked the BIOS on the network card itself; no option to enable/disable SR-IOV, just stuff about boot protocols.

Checked the system BIOS: ACS was turned off, so I turned that on. IOMMU is confirmed set to “ENABLE” and not “AUTO”. Didn’t see any option for SR-IOV in the system BIOS.

Any ideas?

I’ve never had the privilege to play with this before. I’ll nudge the people I know who have touched this sort of stuff.


No SR-IOV option in the BIOS? It may not be supported on your board. What board?

Gigabyte X399 Designare EX, F12i BIOS

Ah, you need to go back to F10. The newer AGESA fixes some Nvidia PCIe ASPM issues, but you can disable ASPM and be fine.
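
If ASPM gives you trouble on F10, it can also be disabled from the kernel side with the pcie_aspm=off boot parameter. A sketch for Fedora using grubby (adjust if you manage the command line some other way):

grubby --update-kernel=ALL --args="pcie_aspm=off"   # append to every installed kernel's cmdline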

FYI, every single ASRock AM4 mobo I have seen has an SR-IOV BIOS option that is disabled by default. So you probably need to find and turn it on, or bug sloppy Gigabyte until it is supported on your mobo.

Strangely, the feature started working after updates and a reboot last night… I now have seven additional 10Gb network adapters on the system to play with.
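
Note for posterity: sriov_numvfs does not persist across reboots. One way to recreate the VFs at boot is a udev rule keyed on the PF; a sketch, where the 0x10fb device ID for the 82599 PF is my assumption (verify with lspci -nn):

# /etc/udev/rules.d/70-sriov.rules (sketch; check the vendor/device IDs on your card)
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x8086", ATTR{device}=="0x10fb", ATTR{sriov_numvfs}="7"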

Of course, NetworkManager grabs on to each of them and makes sure they get IP addresses from the network :frowning:
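
A sketch of one way to stop that, marking anything bound to the VF driver as unmanaged (see man NetworkManager.conf for the device specs):

# /etc/NetworkManager/conf.d/99-unmanaged-vfs.conf
[keyfile]
unmanaged-devices=driver:ixgbevf

# then reload: systemctl reload NetworkManager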

Assigning the VFs to KVM guests using PCI passthrough works great, no fiddling required. The Intel PROSet drivers loaded right up.
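
For anyone replicating this, the libvirt side is just a hostdev entry pointing at the VF's PCI address; a sketch using the 08:10.0 VF from the dmesg above:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x08' slot='0x10' function='0x0'/>
  </source>
</hostdev>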



I discovered what was wrong: the ACS override patch seems to kill the virtual functions.

# dmesg |grep 08:10.0
[    5.070523] pci 0000:08:10.0: [8086:10ed] type 00 class 0x020000
[    5.076393] pci 0000:08:10.0: Adding to iommu group 60
[    9.848950] ixgbevf 0000:08:10.0: enabling device (0000 -> 0002)
[    9.853971] ixgbevf 0000:08:10.0: PF still in reset state.  Is the PF interface up?
[    9.854395] ixgbevf 0000:08:10.0: Assigning random MAC address
[    9.855384] ixgbevf 0000:08:10.0: be:73:77:78:57:87
[    9.855720] ixgbevf 0000:08:10.0: MAC: 1
[    9.856056] ixgbevf 0000:08:10.0: Intel(R) 82599 Virtual Function
[  110.565171] ixgbevf 0000:08:10.0: NIC Link is Up 10 Gbps
[  110.618667] ixgbevf 0000:08:10.0: NIC Link is Down
[  111.751543] ixgbevf 0000:08:10.0: NIC Link is Up 10 Gbps
[  113.962696] pci 0000:08:10.0: Removing from iommu group 60

After that removal, there’s no IOMMU group 60, and lspci shows no device 0000:08:10.0.

If I boot without pcie_acs_override=downstream,multifunction, I get my virtual network cards.
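
On Fedora the offending parameter can be stripped from every installed kernel with grubby; a sketch, assuming GRUB manages the command line:

grubby --update-kernel=ALL --remove-args="pcie_acs_override=downstream,multifunction"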

grrrr!
