Two GPUs in the same IOMMU group?

Hi, I am new to GPU passthrough. I'm able to create a VM with GPU passthrough using one card (GTX 960), but since I have an additional card (GTX 1060) I hope to use it with my host. The problem I have now is that both GPUs are in the same IOMMU group, and I hoped to split them by patching the 4.17 kernel with the ACS override patch from https://queuecumber.gitlab.io/linux-acs-override/ . However, they still appear in the same group after patching.

Specs:
Intel Core i5-7600K
Asus Prime Z270-A
24GB DDR4 RAM
Intel HD Graphics 630 (host for now)
GTX 960 (guest)
GTX 1060 (guest)
Ubuntu Desktop 18.04.1 LTS

After patching, I updated /etc/default/grub and ran update-grub with:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on pcie_acs_override=downstream,multifunction"

[IOMMU 0] 00:00.0 Host bridge [0600]: Intel Corporation Intel Kaby Lake Host Bridge [8086:591f] (rev 05)
[IOMMU 10] 00:1f.0 ISA bridge [0601]: Intel Corporation 200 Series PCH LPC Controller (Z270) [8086:a2c5]
[IOMMU 10] 00:1f.2 Memory controller [0580]: Intel Corporation 200 Series PCH PMC [8086:a2a1]
[IOMMU 10] 00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
[IOMMU 10] 00:1f.4 SMBus [0c05]: Intel Corporation 200 Series PCH SMBus Controller [8086:a2a3]
[IOMMU 11] 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]
[IOMMU 12] 05:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:2142]
[IOMMU 13] 06:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd Device [144d:a808]
[IOMMU 1] 00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 05)
[IOMMU 1] 00:01.1 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x8) [8086:1905] (rev 05)
[IOMMU 1] 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
[IOMMU 1] 01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fba] (rev a1)
[IOMMU 1] 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 3GB] [10de:1b84] (rev a1)
[IOMMU 1] 02:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
[IOMMU 2] 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04)
[IOMMU 3] 00:14.0 USB controller [0c03]: Intel Corporation 200 Series PCH USB 3.0 xHCI Controller [8086:a2af]
[IOMMU 4] 00:16.0 Communication controller [0780]: Intel Corporation 200 Series PCH CSME HECI #1 [8086:a2ba]
[IOMMU 5] 00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822]
[IOMMU 6] 00:1b.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #17 [8086:a2e7] (rev f0)
[IOMMU 7] 00:1c.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #1 [8086:a290] (rev f0)
[IOMMU 8] 00:1c.4 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #5 [8086:a294] (rev f0)
[IOMMU 9] 00:1d.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #9 [8086:a298] (rev f0)

From the output above you can see that both the GTX 960 and the GTX 1060 are under the same group. I had isolated both cards with pci-stub before patching the kernel. Will this be an issue?
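For reference, the pci-stub binding boils down to handing the device IDs from lspci -nn to the driver, e.g. via the kernel command line along these lines (exact setup may differ):

pci-stub.ids=10de:1401,10de:0fba,10de:1b84,10de:10f0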

01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
Subsystem: ZOTAC International (MCO) Ltd. GM206 [GeForce GTX 960]
Kernel driver in use: pci-stub
Kernel modules: nvidiafb, nouveau
01:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)
Subsystem: ZOTAC International (MCO) Ltd. Device 1376
Kernel driver in use: pci-stub
Kernel modules: snd_hda_intel
02:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1060 3GB] (rev a1)
Subsystem: NVIDIA Corporation GP104 [GeForce GTX 1060 3GB]
Kernel driver in use: pci-stub
Kernel modules: nvidiafb, nouveau
02:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
Subsystem: NVIDIA Corporation GP104 High Definition Audio Controller
Kernel driver in use: pci-stub
Kernel modules: snd_hda_intel

Have you tried passing it through to the VM yet?

If so, did the VM boot or did it return an error saying there were problems with the iommu group?

Hi Kriss,

Sorry for the late reply. I have not tried that yet after patching, because they are still falling into the same IOMMU group. I will give it a try tonight and update here. Thank you.

Hi.

I actually got this to work yesterday on my own system (I had an extra GPU not being used for anything). I installed the 4.18.3 kernel with the ACS override patch, and used only downstream since I don't need the system to split the GPU audio function off from the GPU.
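In other words, just pcie_acs_override=downstream on the kernel command line, instead of downstream,multifunction.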

Make sure you are booting the patched kernel. If you already had a kernel with the same version number, or a newer one, GRUB is most likely auto-booting into that one instead. Press Esc on the GRUB screen, choose Advanced options, and select the desired kernel.
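You can confirm which kernel is actually running, and that the override flag made it onto the command line, with something like:

uname -r
cat /proc/cmdline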

Keep in mind that the ACS patch has some risks and is considered a last resort to get things working. What it essentially does is override the ACS isolation reporting, tricking the kernel into thinking the devices are isolated from each other when in reality they might not be. This has some inherent security implications, but probably nothing you need to worry about as a home user. It might also cause instability for the very same reason, but I have not experienced any yet.

PS: If the ACS patch is working, your PCI devices will appear in different IOMMU groups when you check them.
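For example, you can list the groups with a small loop over /sys/kernel/iommu_groups, roughly like this (adapt as needed):

#!/bin/bash
# Print each IOMMU group and the devices it contains
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done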