Thank you.
I appreciate this.
I think that my second problem is going to be that my Mellanox MSB-7890 is an externally managed IB switch.
So, I don’t know if there is a way to “encapsulate” the Ethernet frames into IB frames, send them over the IB network, and have the receiving target unpack the Ethernet frame out of said IB frame and do what it needs to do from there.
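For what it’s worth, the usual workaround on a plain IB fabric isn’t raw Ethernet encapsulation but IPoIB (IP over InfiniBand), which both MLNX_OFED and the inbox kernel drivers provide via the ib_ipoib module. It carries IP traffic (not Ethernet frames) over the IB switch, which is often enough for NFS, SSH, Proxmox cluster traffic, etc. A minimal sketch, assuming the IPoIB interface comes up as ib0 and using an example subnet:

```shell
# Load the IPoIB driver if it isn't autoloaded already
modprobe ib_ipoib

# Assign an IP address to the IPoIB interface and bring it up
# (ib0 and 10.0.0.1/24 are assumptions -- substitute your own)
ip addr add 10.0.0.1/24 dev ib0
ip link set ib0 up
```

Repeat with a different address on each node, and normal IP tools work across the externally managed switch as long as a subnet manager (e.g. opensm on one host) is running somewhere on the fabric.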
I REALLY don’t want to have to buy a 100 GbE switch if I can avoid it.
Funds aren’t really available for that right now, so if I can work around it, that would be nice.
Thank you.
Edit:
I am currently in the process of deploying and testing out SR-IOV with my Infiniband NIC.
I am not 100% sure that this will work just yet, but I am in the process of testing it.
Two of my compute nodes are AMD Ryzen 9 5950X systems on Asus X570 motherboards; neither has a GPU installed anymore since the Mellanox ConnectX-4 card now takes up the primary PCIe slot, so I can only remote into them over SSH and/or administer them via the Proxmox web GUI.
Having said that, I don’t know if the Asus X570 motherboard has an explicit option to enable SR-IOV like my Supermicro dual Intel Xeon motherboard (X10DRi-T4+) does, but so far, with IOMMU enabled, it APPEARS to be working (at least as far as I can tell from lspci | grep Mell and ip link).
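For anyone following along, here’s roughly how I’m creating and checking the virtual functions from the host, as a sketch: the interface name (ibp1s0 here) and VF count are assumptions, so substitute your own.

```shell
# Create 4 virtual functions on the ConnectX-4 via the standard
# kernel SR-IOV sysfs interface (interface name is an assumption)
echo 4 > /sys/class/net/ibp1s0/device/sriov_numvfs

# The VFs should now show up as extra "Virtual Function" PCI entries
lspci | grep Mell

# ...and as additional IB/IPoIB interfaces
ip link show
```

If the echo fails with "Operation not supported", SR-IOV is likely still disabled in the BIOS or in the card’s firmware.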
It’s almost 4 AM here and I need to get to bed, so I’ll test out deploying containers and/or VMs tomorrow to see whether I’m successful in passing the virtual functions through (or not).
We shall see.
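If tomorrow’s testing goes the way I hope, the VM side on Proxmox should just be PCI passthrough of a VF. A sketch, where the VF address (0000:0a:00.1) and VM ID (101) are both hypothetical placeholders -- get the real address from lspci:

```shell
# Find the PCI address of a virtual function
lspci | grep -i 'virtual function'

# Pass that VF through to a Proxmox VM (VM ID and address are examples)
qm set 101 -hostpci0 0000:0a:00.1
```

Containers are a different story: LXC containers share the host kernel, so instead of PCI passthrough you’d move a VF’s network interface into the container’s network namespace.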
(Thinking about my “IB switch problem” was what led me to remember reading about SR-IOV in the Mellanox MLNX_OFED driver guide for Infiniband. Previously, I didn’t pay much attention to it because I didn’t use it or see the need for it, but now I understand better what it was talking about, so I am going to test it out.)
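One step from that MLNX_OFED guide worth noting: SR-IOV also has to be enabled in the ConnectX-4 firmware itself, via mlxconfig from the Mellanox tools. A sketch, where the /dev/mst device path and VF count are examples that will differ per system:

```shell
# Start the Mellanox software tools service so the device node exists
mst start

# Check the current firmware SR-IOV settings (device path is an example)
mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep -E 'SRIOV_EN|NUM_OF_VFS'

# Enable SR-IOV and allow up to 8 VFs in firmware
mlxconfig -d /dev/mst/mt4115_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8
# A reboot (or firmware reset) is needed for the change to take effect
```

Without this, the sysfs sriov_numvfs knob on the host won’t accept a nonzero value no matter what the BIOS says.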
Thank you.