Having trouble finding the right networking gear to finish my 20Gb/s home network. Need some assistance!

Hey all! I recently bought 5x Mellanox MHRH2A-XSR cards, which are capable of delivering 20Gb/s over InfiniBand and/or 10Gb/s over Ethernet. I of course want to go the 20Gb/s InfiniBand route, for double the network speed and reduced CPU load; however, I’m having trouble finding some equipment that I need…

Firstly, I need the full-height PCIe bracket for the MHRH2A. The MHRH2A-XSR comes with the half-height bracket, while the MHRH2A-XTR comes with the full-height bracket; aside from that, I’m pretty sure they’re identical. I have been able to find plenty of half-height brackets, but I can’t find any full-height brackets for my cards.

Secondly, I can’t find any SFP+ InfiniBand transceivers (or none that I know of…). I found these SFP+ transceivers, which I think will work with my Mellanox cards; however, I don’t think they will carry InfiniBand, only the 10Gb/s Ethernet signal, which is not optimal. I found these cables, which have the SFP+ connector and are supposedly InfiniBand; however, it doesn’t seem like I’m able to choose the length of the fiber cable, which I’m really hoping I can do with my InfiniBand setup.

TL;DR: Need help finding a full-height PCIe bracket for my Mellanox MHRH2A cards, AND need help finding (or learning about) the correct InfiniBand connectors/transceivers.

Thank you :slight_smile:

Edit: Forgot to say, I also can’t find the Windows drivers for the Mellanox MHRH2A cards. I can only find the firmware updater + firmware file (I have already updated the firmware, BTW), so a link to the driver page would also be very nice. Thanks again!

Not 100% sure about this, but I think transceivers are transceivers; it doesn’t matter what you’re using them for. That’s why they’re separate from the interface card. Could be wrong about that, especially when it comes to InfiniBand.

Also, just looking at what you posted in the stuff-you-acquired thread, those don’t look like SFP slots; they look like XFP.

So does that mean I just have to find a 20Gb/s transceiver and tell the card to run InfiniBand on both ports instead of Ethernet?

Whenever I google the model number of the cards (MHRH2A-XSR), they always mention QSFP/SFP+, and never XFP, so I think they are QSFP/SFP+ (not sure if there is a difference between the two. I lack hands-on experience with this type of enterprise gear).

I don’t know anything about InfiniBand, but I would say so.

Yeah, looks like QSFP; SFP won’t work. You’ll need to look for QSFP+ transceivers.

I’m guessing they’re different connector designs? Will the same type of fiber cables work between the two?

Yeah, they’re a different form factor. You’ll have to make sure you get fibre with the right terminations to match the transceivers, as well as making sure you get multi-mode fibre if you use multi-mode transceivers, etc. Or you can use DAC cables.

I’d suggest using DAC, as the cable you need is about $40 a meter plus two transceivers, whereas a DAC cable is about $40.

So if I bought a couple of these guys, would I be able to use normal fiber cable with them? How would I tell my card to do InfiniBand instead of 10GbE? .-.

Edit: I currently have some of this fiber cable

The transceivers should work unless they’re vendor-locked. But they use a different cable: QSFP is quad SFP, basically four 10Gb SFP modules in one, so they use cables with multiple strands (MPO, I think they’re called). But this is outside anything I have experience with.

Are you sure you want to use InfiniBand instead of 10Gb Ethernet? You know InfiniBand doesn’t support TCP/IP natively, so you will have to hack something together to get it to function like a standard network.


Seems like a huge headache. As Dexter mentioned, you lose TCP/IP, which is what basically everything runs on. You can get used 40Gbps QSFP equipment for cheap if you really need to push past 10Gbps.


For starters, I would’ve gone with a ConnectX-2 VPI QDR card if you wanted to mess with IB. For what you have, you will need to get some DACs and go point-to-point without a switch. You will need to set up opensm on one of the machines to handle the IB layer, and then you can run IPoIB. I have never personally worked with a DDR card that ancient and don’t know what the loss will be, but for comparison, a 40Gb IB link will give you 32Gb for IPoIB. You might be able to get a QSFP-to-SFP adapter that would let you use SFP+ DACs and transceivers, but it would depend on whether that card is true VPI and will let you switch the port over to Ethernet/IP instead of IB. You should still be able to get an IB connection and run opensm to get IPoIB.
If you wanted an actual network instead of direct connections, you can find a used Mellanox/Voltaire IB switch for a few hundred. For cables, search for QSFP to QSFP and any should work, unless your card is flashed with non-Mellanox firmware, in which case it could be vendor-locked. You should be able to flash it from a vendor FW to Mellanox if that is the case, though. For drivers, here is a link. If you are running 2.9.1000 FW or above, go with WinOF 4.80.
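
In case it helps to see it concretely, the point-to-point bring-up is roughly this (just a sketch; opensm.exe ships with WinOF, and I’m assuming the diagnostic tools like ibstat end up on your PATH after install, so names may vary by version):

```
:: On ONE of the two machines, start the subnet manager (only one SM is needed per fabric)
opensm

:: On either machine, check the port - its state should go to "Active" once opensm is running
ibstat
```

Once the port shows Active, the IPoIB adapter in Windows behaves like a regular network interface.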

IB is fun to play with, but there are also headaches. I have managed IB in production environments and have always ended up switching to a pure IP network. Was there something specific you wanted to accomplish with the cards?

Edit: If you want a 10GbE setup, here are some suggestions: cable, 2 cards, dual-port SFP+ switch, or 4-port SFP+ switch.


When I was looking into InfiniBand, I did come across people talking about IPoIB and had no idea what it was. Upon looking into it a bit more, IPoIB seems like InfiniBand’s replacement for TCP/IP.

I already bought the cards and am just trying to get as much performance out of them as I can.

What’s the difference between this and my ConnectX-2 VPI/IB?

Yup, that’s the firmware I flashed!

Ended up installing 5.10

I guess I never really did mention this… So my plan was to hook up my pfSense router, main desktop, and main server together (using both QSFP connections on the card in my server, and single connections on my desktop and pfSense box), so I could have high data transfer rates and be future-proofed for many years, until SSDs are cheap enough to replace HDDs in terms of price/GB.

Is there a reason why I can’t do TCP/IP over InfiniBand? I’m willing to learn more about InfiniBand and do IPoIB, but if none of you think it’s worth it, I won’t bother.

Also, another question: Are QSFP+ and SFP+ both the same form factor (as in, will a QSFP+ and an SFP+ transceiver both fit into my MHRH2A’s slot)? Thanks :slight_smile:

No, QSFP is a much bigger module.

I believe QSFP can be broken out into multiple SFP+. Perhaps you could do that and run LACP?

Does anyone know if that’s possible? The adapter obviously exists, but I don’t know how IB vs TCP/IP factors in there…

So I’m only limited to using QSFP+ transceivers/copper cables, correct? Those modules are capable of handling 40Gb/s, however, my card only supports 20Gb/s per port, correct?

Sorry for asking so many questions. There appears to be so little information on this card online. I can’t find a data sheet or anything for my exact model.

You can’t use a breakout cable on the NIC side; you can on the switch side, if the switch supports it and you have the license for it.


IPoIB isn’t hard to set up. You just have to install and run opensm on one of the computers. Also, set it up as a service so it starts with Windows. OpenSM manages the IB side of things so that a link can be established.
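
For reference, making it start with Windows is just normal service management (a sketch; I’m assuming the WinOF installer registered the service under the name opensm, so check the actual name first):

```
:: See whether an OpenSM service is already registered
sc query opensm

:: Set it to start automatically with Windows, then start it now
sc config opensm start= auto
sc start opensm
```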

Yours is DDR and not QDR, so 20Gb instead of 40Gb.

You might have issues with that version. If you do, downgrade so it recognizes the card and works properly.

If you want to put cards in each, make sure they are compatible with the OS of your server and pfSense. If you can get it working with pfSense, you could have your computer and server connected to it, and then pfSense would route the traffic to the server. You can probably get IPoIB and drivers working on pfSense, but I doubt there are any resources to help you do it. I would just have a direct connection to the server and nothing to pfSense.

Also, unless you have a massive spinning array of rust or several SSDs, you won’t come near maxing out the link. Plus, you would need storage on the source end that is just as fast to perform the read. Even if you did, I would assume you are not running server-grade hardware, so you will use a ton of CPU handling the throughput. It is possible to get RDMA working over an SMB share with a ConnectX-2 card, but from what I remember the drivers only work with Windows Server 2012 R2 or 2016, and it requires some forceful firmware flashing that could brick the card. If you get it all running, I would suggest making a ramdisk on each side and doing iperf between them.
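
For the iperf part, it’s just the standard client/server run between the two boxes (a sketch; the 10.0.0.x address is a placeholder for whatever you assign to the IPoIB adapters):

```
:: On the server: listen for test traffic
iperf -s

:: On the desktop: push 4 parallel streams for 30 seconds at the server's IPoIB address
iperf -c 10.0.0.1 -P 4 -t 30
```

iperf itself only exercises the network; the ramdisks are for file-copy tests afterwards, so the disks aren’t the bottleneck.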

Here is a picture that shows a QSFP card, a QSFP-to-SFP adapter, and various SFP cables. I would go with a QSFP copper DAC, because then you don’t have to worry about compatibility with the optics. I have come across optical cables that would only run at one speed, so it’s better not to roll the dice on it.


Alright, cool. So once OpenSM is set up, will I still be able to simply set up LAN file sharing between my server, desktop, and pfSense box over the 20-gig connection, or will it take a lot more steps to get that working?

Oh alright, well darn. Well, now I know for next time!

Oh yeah, I totally know this. I don’t even have SSDs in my server. The reason for wanting more than 1 gigabit is that I want to future-proof, and I got my Mellanox cards for (I think) a good deal: $100 CAD for 5. The theoretical max transfer speed is 2.5GB/s, which I know I won’t achieve for years.

I just read a bit into what RDMA is, and I personally cannot imagine what I’d do with it currently; however, as my knowledge grows, it may be something nice to have available.

I was sort of trying to avoid going with Direct Attached Copper cables because of their limited length (I think it’s 3 m max?). I may need to run cable many more meters. I’m also guessing that QSFP+ to SFP28 doesn’t exist, unlike the QSFP+ to SFP+ adapter.

BTW dude, thank you sososo much for answering so many of those questions. You really clarified so much for me, since there was very little information online regarding the questions I was asking (or, I just couldn’t find it lol).

Once opensm is set up, the link will show as connected in the adapter properties. You will need to configure the adapter properties and set a static IP on each end, and then you should be able to transfer.
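
If you’d rather do it from a prompt than the adapter properties GUI, it’s something like this (a sketch; the interface name and the 10.0.0.x addresses are placeholders, check the real name with `netsh interface show interface`):

```
:: Server end
netsh interface ipv4 set address name="IPoIB Adapter" static 10.0.0.1 255.255.255.0

:: Desktop end
netsh interface ipv4 set address name="IPoIB Adapter" static 10.0.0.2 255.255.255.0
```

With both ends on the same subnet and opensm running, normal Windows file sharing over that link should just work.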

The cards you have are not future-proofing anything. They are ancient and will only get older. You should view your setup as something to test with only. If you wanted something to test with that would still be useful, I would still suggest the equipment in my previous post, where you run 10Gb with SFP+. SFP+ is super easy, and you can get cheap SFP+ fiber transceivers that can transmit for kilometers.
Even if you have a lot of data and higher-speed storage in a few years, you still wouldn’t be maxing out the link. Unless you are moving large files around a lot, you won’t ever use the bandwidth. I regularly move hundreds of TB around networks, and that is when you see the benefit of higher throughput, or when a compute node is connecting to several different arrays at once.

Here is a link to some QSFP-to-SFP adapters. Fiber QSFP cables and optics for the QSFP-to-SFP adapters are out there, but I’d be cautious about compatibility. I have never worked with a DDR card and do not know if there will be issues. I have used a ConnectX-2 QDR card with the QSFP-to-SFP adapters and an SFP+ MMF converter without issue.

No problem. IB info can be hard to come by if you don’t know the right places to look. IB is only used in the enterprise, so there probably aren’t a lot of consumer-friendly guides out there. At least you didn’t try to set up Fibre Channel :slight_smile:


So, in your opinion, do you think I should try to sell these cards at a profit, and just buy SFP+ cards, or something else?

I don’t think it’ll be hard to sell them at a profit, considering I paid around $20 each and they sell for more than $40 on eBay.