SR-IOV with PM1733 SSD and XXV710-DA1 network card

Would hardware-supported virtualization via SR-IOV be a good fit for a high-performance "multi-seat" PC? Would this hardware and the underlying software be the optimal way to do a "2 gamers, 1 CPU" build? PCI bifurcation, SR-IOV, Arch Linux, Looking Glass, KVM, QEMU, and VFIO are all of interest.

Sounds good if you have enough PCIe lanes. What CPU are you working with?

The platform is Threadripper (64 PCIe lanes). I'll need a Threadripper motherboard that supports SR-IOV well enough that I can create virtual SSD containers from the PM1733's SR-IOV functions, and it will also need to support the SR-IOV functions of the network card. The IOMMU groups would have to be separated properly, and PCI bifurcation will be needed.
PCIe slot one will be split x8/x8 (host GPU / VM1 GPU). PCIe slot two will be x8 (VM2 GPU). PCIe slot three will be split x8/x8 (network card / PM1733 SSD). PCIe slot four will be split x4/x4 (USB card / USB card). The onboard NVMe x4 slot is for the host SSD.
Is this level of PCI bifurcation available? Does Threadripper have issues with ROM BAR sizes, Above 4G Decoding, or ARI support on the motherboard that could interfere with the SR-IOV features of the PM1733 SSD or the SR-IOV network card?
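Once a board is in hand, which devices actually advertise SR-IOV can at least be confirmed from Linux by walking sysfs. A minimal sketch under no board-specific assumptions; the firmware options above (Above 4G Decoding, ARI, ROM BAR sizing) are separate BIOS settings this can't verify:

```python
#!/usr/bin/env python3
"""List every PCI device that advertises SR-IOV and how many VFs it supports.
This only reports what the kernel sees after boot; BIOS options such as
Above 4G Decoding and ARI still have to be enabled in firmware."""
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    totalvfs = dev / "sriov_totalvfs"
    if totalvfs.exists():
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name} [{vendor}:{device}] supports up to "
              f"{totalvfs.read_text().strip()} virtual functions")
```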

I have a Gigabyte X399 Designare and it offers x8/x8 and x4/x4/x4/x4 on the two x16 slots.

The IOMMU groups are pretty good; there are a few onboard devices that I can't pass through, but that's not a big problem.
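If you want to see the grouping for yourself on any board, here's a minimal sketch that just dumps the groups out of sysfs (nothing board-specific assumed):

```python
#!/usr/bin/env python3
"""Print each IOMMU group and the PCI devices it contains.
A device can only be passed through cleanly if everything else sharing its
group is a PCIe bridge or something you can afford to hand over too."""
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        pci_class = (dev / "class").read_text().strip()
        print(f"  {dev.name}  (class {pci_class})")
```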

As far as SR-IOV for your SSD goes, that's going to be driver dependent. You might have to use a precompiled binary from the manufacturer, and a specific supported OS.

Here is some info gathered from a product datasheet.

Minimum supported operating systems
NOTE: Supported on all operating systems using the included native drivers.
• Microsoft
  ○ Windows Server 2019
  ○ Windows Server 2019 (including Hyper-V)
  ○ Windows Server 2016
  ○ Windows Server 2016 (including Hyper-V)
  ○ Windows Server 2012 R2 64-Bit
• Linux
  ○ Red Hat Enterprise Linux 8.0 and above
  ○ Red Hat Enterprise Linux 7.6 and above
  ○ Red Hat Enterprise Linux 7.5 (Kernel 3.10.0)
  ○ SUSE Linux Enterprise Server 15 and above
  ○ SUSE Linux Enterprise Server 12 SP3 and above
  ○ SUSE Linux Enterprise Server 12 (Kernel 3.12.61)
  ○ CentOS 7.5 (Kernel 3.10.0)
• Ubuntu
  ○ Ubuntu 18.04 and above
• VMware
  ○ ESXi 7.0 and above
  ○ ESXi 6.7 and above
  ○ ESXi 6.5 and above
Samsung SSD Toolkit provides simplified management and diagnostics. It also allows for the partitioning of a PCIe function into many virtual interfaces for the purpose of sharing the resources of a PCIe device in a virtual environment.
The Samsung SSD Toolkit is proprietary software designed to help users with easy-to-use SSD management and diagnostic features for server and data center usage. The CLI (command line interface) tool currently supports Samsung data center and enterprise NVMe SSDs and supports Linux®.
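Separate from the vendor CLI, if the drive's Linux driver exposes SR-IOV the generic way, the virtual functions are usually switched on through sysfs. A minimal sketch under that assumption; the PCI address and VF count are placeholders, and dividing the drive's capacity between the VFs may still require the vendor toolkit:

```python
#!/usr/bin/env python3
"""Enable SR-IOV virtual functions on a PCI device through sysfs (run as root).
The address below is a placeholder for the PM1733 controller; carving up the
drive's capacity between the VFs may still need the vendor toolkit."""
from pathlib import Path
import sys

PCI_ADDR = "0000:41:00.0"   # hypothetical address, check with lspci
NUM_VFS = 2                 # one virtual function per gaming VM

dev = Path("/sys/bus/pci/devices") / PCI_ADDR
total = int((dev / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    sys.exit(f"{PCI_ADDR} only supports {total} virtual functions")

# The VF count can only be changed from zero, so reset it first.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"enabled {NUM_VFS} VFs on {PCI_ADDR}")
```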

Well, that's not so bad… Ubuntu or CentOS 8 would be my choice for your host system then, so you can run the closed-source SSD partitioning tool.

If you haven’t bought into a platform yet, consider…
TRX40 Threadripper is crazy expensive compared to old Threadripper, and building a new system with parts from 2017 is kinda iffy. If you're planning more work than gaming, look into building an EPYC system instead. It's roughly the same cost as TRX40, and you get even more PCIe lanes and likely better IOMMU grouping.

EPYC: better support for SR-IOV, including the PM1733 and its virtual namespaces, and SR-IOV network cards. EPYC has 128 PCIe lanes that can handle all these devices, and the IOMMU grouping is likely better.

Thanks gordonthree, I will be better served using an EPYC system.


EPYC is the wrong choice for a gaming-oriented build. You want higher frequencies, and Threadripper is the obvious choice.

There isn't that much point to it… there's absolutely no reason to get a pricey SSD with SR-IOV support when you're aiming for just two game VMs. Get two normal SSDs and pass through one entire drive to each VM.
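Passing a whole drive through is the usual VFIO routine: detach it from the host's nvme driver, rebind it to vfio-pci, and hand the PCI address to QEMU. A rough sketch with a made-up address and stripped-down QEMU arguments:

```python
#!/usr/bin/env python3
"""Rebind one NVMe drive from the host's nvme driver to vfio-pci and start a VM
with it passed through. The PCI address and VM sizing are placeholders; the
vfio-pci module must already be loaded. Run as root."""
from pathlib import Path
import subprocess

SSD_ADDR = "0000:42:00.0"                      # hypothetical address of the guest's SSD
dev = Path("/sys/bus/pci/devices") / SSD_ADDR

# Detach the drive from whatever host driver currently owns it.
if (dev / "driver").exists():
    (dev / "driver" / "unbind").write_text(SSD_ADDR)

# driver_override makes vfio-pci claim this one device on the next probe.
(dev / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(SSD_ADDR)

# Stripped-down QEMU invocation with the drive handed to the guest.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm", "-machine", "q35", "-cpu", "host",
    "-m", "16G", "-smp", "8",
    "-device", f"vfio-pci,host={SSD_ADDR}",
], check=True)
```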

Networking SR-IOV is also a bit pointless. All the normal stuff is plenty fast without adding SR-IOV. That said, Intel SR-IOV works just fine and is well supported.

I've got a first-gen Threadripper. I use one M.2 slot for the host, and the other two for two smaller SSDs that I pass through directly to two different high-performance VMs. I also use SR-IOV with an Intel X540.
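On the NIC side, the VFs come from the same sysfs knob, and each one typically gets a fixed MAC before being handed to its VM. Roughly like this; the interface name and MACs are made up:

```python
#!/usr/bin/env python3
"""Create two VFs on an SR-IOV capable NIC and pin a MAC to each so the guests
keep stable addresses. Interface name and MACs are placeholders. Run as root."""
from pathlib import Path
import subprocess

IFACE = "enp5s0"                                      # assumed name of the physical NIC
MACS = ["52:54:00:aa:00:01", "52:54:00:aa:00:02"]     # locally administered MACs

# Same sysfs mechanism as any other SR-IOV device.
numvfs = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
numvfs.write_text("0")
numvfs.write_text(str(len(MACS)))

# Assign each virtual function its MAC; the VF itself then gets bound to
# vfio-pci and passed through like any other PCI device.
for vf, mac in enumerate(MACS):
    subprocess.run(["ip", "link", "set", IFACE, "vf", str(vf), "mac", mac], check=True)
```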

What motherboard have you been using for your Threadripper, and do you have any idea how many PCIe slot IOMMU groups you get that you could pass through separately?

I did some searching and found that an AMD EPYC Rome CPU at a 3.3 GHz max boost can run modern games in the 120 FPS range and above.

I agree, Rome and the TRX40-series Threadrippers seem pretty well matched. I suppose the downside of the EPYC platform for gaming is that the motherboards don't offer much in the way of overclocking like a TRX40 board would.

Threadripper Pro is coming in March 2021 and it has 128 PCIe lanes. It doesn't overclock, but its stock clocks are above EPYC's.

It's also more expensive than EPYC, and more niche? I wonder if AMD will actually have product for sale, or is it going to be a lottery like the low-end 5950X and 5900X?

Tom’s Hardware managed to find them in the database of one of the retailers.

Threadripper Pro 3000 series chips live on the sWRX8 platform, which is distinguished by support for eight-channel DDR4 memory (conventional UDIMMs as well as RDIMMs and LRDIMMs) and 128 PCIe 4.0 lanes, and they are built upon the full-featured I/O chiplet from the EPYC (Rome generation) processors.

The Threadripper Pro 3000 series includes three processors: the 64-core/128-thread 3995WX (2.7/4.2 GHz for $6086), the 32-core/64-thread 3975WX (3.5/4.2 GHz for $3043), and the 16-core/32-thread 3955WX (3.9/4.3 GHz for $1253).

No longer an exclusive to Lenovo, I guess that’s good.


Threadripper Pro is out now. Has anyone tried VFIO with this platform? Has anyone tried a KVM/QEMU setup?

To learn something on my end:

I get the purpose of SR-IOV for GPUs and Ethernet adapters that support it.

What is the purpose of SR-IOV when using an NVMe SSD like the PM1733 or HBAs?

I remember that when testing some HBA with ESXi a few years ago, it showed the HBA as supporting SR-IOV…?

Can you create multiple partitions on such an SSD and pass each partition through to separate VMs with close-to-native performance?

Yes, as far as I understand.
NVMe namespaces, or something along those lines, would be another supporting feature playing into it.
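A hedged sketch of how the namespace side might look with nvme-cli, assuming the drive supports namespace management; the device path, sizes, and controller ID are placeholders:

```python
#!/usr/bin/env python3
"""Split an NVMe drive into two namespaces with nvme-cli (run as root).
Device path, sizes, and controller ID are placeholders, and the drive has to
support namespace management for any of this to work."""
import subprocess

DEV = "/dev/nvme0"       # the controller's character device, not a namespace
BLOCKS = 100_000_000     # namespace size/capacity in LBAs (placeholder value)
CTRL_ID = "0"            # controller to attach the new namespaces to

# Create two equally sized namespaces.
for _ in range(2):
    subprocess.run(
        ["nvme", "create-ns", DEV, f"--nsze={BLOCKS}", f"--ncap={BLOCKS}", "--flbas=0"],
        check=True)

# A namespace is invisible until it is attached to a controller.
# Assumes the two new namespaces received IDs 1 and 2.
for nsid in ("1", "2"):
    subprocess.run(
        ["nvme", "attach-ns", DEV, f"--namespace-id={nsid}", f"--controllers={CTRL_ID}"],
        check=True)
```

As I understand it, with SR-IOV in the mix each namespace would then be attached to a virtual function's controller and that VF passed through to a VM; without SR-IOV, the namespaces simply show up as separate block devices on the host.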