Intel Arc Pro B50, SR-IOV, and me

Anyone on the C622 chipset running B50s or anything with ReBAR? On my Supermicro X11SPi-TF I have Above 4G Decoding enabled, but there is no visible option for ReBAR. Haven't got my B50 in yet; just prepping the system.


You were exactly right. Once I removed that, I'm in business. I've got sriov_numvfs set to 4 now and it works. My only remaining issue is that after I add it as a resource mapping in the GUI, I can't get it to persist after a reboot. How should this be handled? I'm seeing different methods:

  1. Optionally use sysfsutils to set the number of VFs on boot. Install sysfsutils, then do echo "devices/pci0000:00/0000:00:02.0/sriov_numvfs = 7" >> /etc/sysfs.conf (append with >> so you don't clobber any existing entries).

OR

  • The second, more generic, approach is using sysfs. If the device and driver support this, you can change the number of VFs on the fly. For example, to set up 4 VFs on device 0000:01:00.0, execute:

echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

To make this change persistent, you can use the 'sysfsutils' Debian package. After installation, configure it via /etc/sysfs.conf or a FILE.conf in /etc/sysfs.d/.
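As a sketch of that drop-in approach (the PCI address below is the example device from above; substitute your card's actual address):

```shell
# /etc/sysfs.d/sriov.conf -- read by sysfsutils at boot.
# Paths are relative to /sys; 0000:01:00.0 is the example device
# from above -- substitute your card's actual PCI address.
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4
```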

OR

I had the same problem. I ended up creating a udev rule which persists across reboots.

  1. nano /etc/udev/rules.d/70-sriov.rules
  2. SUBSYSTEM=="pci", DRIVER=="xe", KERNEL=="0000:83:00.0", ATTR{sriov_numvfs}="4" [change "0000:83:00.0" to the PCI address of your card, set sriov_numvfs to what you want, save and exit]
  3. udevadm control --reload
  4. udevadm trigger (to test it without rebooting). I manually set sriov_numvfs back to 0, then ran the trigger command and rechecked the file to make sure it updated to 4.
  5. reboot to fully test
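Putting those steps together, the rule file might look like this (the PCI address and the xe driver are the examples from the steps above; adjust both for your card):

```shell
# /etc/udev/rules.d/70-sriov.rules
# Match the physical GPU by subsystem, driver, and PCI address,
# then write the desired VF count to its sriov_numvfs attribute.
# 0000:83:00.0 and "xe" are examples -- substitute your own.
SUBSYSTEM=="pci", DRIVER=="xe", KERNEL=="0000:83:00.0", ATTR{sriov_numvfs}="4"
```

Then `udevadm control --reload && udevadm trigger` applies it without a reboot, as described in the steps.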

Home stretch!!!


You could just add the echo command to the crontab as an "@reboot" entry. I did this alongside all the other entries I have for setting ASPM and the CPU governor. But this also works.

Another person on the thread shared…

apt install sysfsutils
echo "devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:01.0/0000:03:00.0/sriov_numvfs = 7" >> /etc/sysfs.conf

This looks to be clean, shouldn't break the system, and would be easy to reset if I ever need to?

How would you do that with crontab?

The thing that will break it is any change to the PCIe topology. Add an M.2 SSD, say, and now the entry is wrong. This is the other half of why I like to keep it in the crontab. On one system, my entry looks like this:

@reboot (sleep 15 && echo 4 > /sys/devices/pci0000:00/0000:00:1c.4/0000:04:00.0/0000:05:01.0/0000:06:00.0/sriov_numvfs)

This way, I can easily edit it if the topology changes, disable it if I like, or update the number of VFs…
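For anyone wondering how to add that non-interactively, a sketch (the sysfs path is the one from the entry above; the `sleep 15` follows that entry, presumably to give the driver time to finish probing):

```shell
# Append the @reboot entry to root's crontab without opening an editor.
# The sysfs path is the example from above -- substitute your device's.
(crontab -l 2>/dev/null; \
 echo '@reboot (sleep 15 && echo 4 > /sys/devices/pci0000:00/0000:00:1c.4/0000:04:00.0/0000:05:01.0/0000:06:00.0/sriov_numvfs)') | crontab -

# Verify the entry was added:
crontab -l | grep sriov_numvfs
```

Or just run `crontab -e` and paste the line in by hand, which makes later edits equally easy.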


Is there any chance that there might be a way to passthrough or assign certain display outputs to virtual functions or VMs?
At this point, my only hesitation stopping me from buying one of these cards is the inability to directly attach a physical display to the VMs.

Buying a separate display adapter and copying the image over PCIe would work… but it'd be so convenient to have the VM GPU and the VM display adapter together in a single PCIe device.

Would such functionality be possible via a VBIOS/firmware update? Or is there something in the hardware itself that makes this impossible?


I don't know of any device anywhere that works this way. :thinking: If anything, I think it's strange that people assume this should exist. SR-IOV for GPUs is used almost exclusively in the context of VDI, where you have servers racked up in a datacenter being accessed remotely. The odds of anyone ever needing to plug a physical monitor into a VF are exactly zero.


I get it. That's why most SR-IOV GPUs simply don't include display outputs. I was hoping the Arc Pro B50 would be the first of its kind, already being unique in physically having display outputs alongside SR-IOV support.

I bet you could pass a DisplayLink adapter or similar through over USB, to give the VM a framebuffer for the GPU to draw into. One of those USB cards with 4 separate controllers, plus 4 hubs and 4 DisplayLink adapters, to go with the B50, that would be interesting.


That is most likely what I'll do. It would just save a lot of hardware and labor if I could have the VM display adapter and the VM GPU in a single card.