InfiniBand to Ethernet? Any options?

I’m rebuilding my home lab and have a few 40Gb Mellanox InfiniBand cards lying around. Could I build an OPNsense or pfSense router with both an InfiniBand card and an Ethernet card, so that both types of hosts can talk on the same LAN?

I’ve seen the uber-expensive 1U devices that do this, but I don’t want to spend $600 on that when I could just get a faster Ethernet switch.

I was just thinking last night of selling my 40/56Gb InfiniBand switch with an Ethernet gateway built in. Bought it for $1k a few years ago and don’t use it anymore. I’d sell it to you for $200, which mostly just covers the shipping on such a large, heavy item if you are somewhere in North America. That’s less than the $600 you were saying :stuck_out_tongue:

But yes, you can run OPNsense with an InfiniBand card in it. You will have to run the InfiniBand subnet manager on a separate PC though; I don’t believe there is a plugin for one on OPNsense, but I could be wrong. You cannot do this with pfSense: Netgate deliberately strips all InfiniBand drivers out of pfSense and will never put them in. I even offered to pay them a large sum for the work, posted in the proper area for that type of contract feature request and everything, and Netgate promptly deleted my thread and banned me from the pfSense forums. I found out later that Netgate apparently has some sort of business-related grudge against InfiniBand, though I have no idea why.
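
If it helps, here is roughly how I’d sanity-check the fabric from the Linux box that hosts the subnet manager (opensm in my case). It’s only a sketch: the adapter name mlx4_0, the port number, and the sysfs paths are assumptions for a ConnectX-era card, so adjust them to whatever `ls /sys/class/infiniband` shows on your machine.

```python
#!/usr/bin/env python3
# Quick check, run on the Linux box that hosts the subnet manager (opensm):
# if the port never goes ACTIVE, there is usually no SM running on the fabric.
# The HCA name "mlx4_0" and port "1" are assumptions -- adjust to your hardware.
from pathlib import Path

HCA = "mlx4_0"   # typical for 40Gb ConnectX-2/3 cards, but check yours
PORT = "1"

base = Path(f"/sys/class/infiniband/{HCA}/ports/{PORT}")

state  = (base / "state").read_text().strip()    # e.g. "4: ACTIVE"
rate   = (base / "rate").read_text().strip()     # e.g. "40 Gb/sec (4X QDR)"
sm_lid = (base / "sm_lid").read_text().strip()   # 0x0 usually means no SM found

print(f"port state : {state}")
print(f"link rate  : {rate}")
print(f"SM LID     : {sm_lid}")

if "ACTIVE" not in state:
    print("Port not ACTIVE -- most likely no subnet manager is running on the fabric.")
```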

I do find corporate grudges super interesting, and now I’m curious to discover the reason (will edit this post if I ever find it). I do like InfiniBand given how cheap it was on the used market, and if you can make it work it is amazing (high bandwidth, low latency, very well-offloaded NICs). I also probably have an excess of InfiniBand cards and should give them away once I finalize my rack.

I’m curious what you use now instead of InfiniBand. Also, if the OP doesn’t take it and you are willing to offer it to me, $200 for an InfiniBand switch does sound tempting, even though I am probably fine sticking with point-to-point connections.

The switch and InfiniBand were always overkill for my situation. I got it cause everything was cheap. I stopped using it when I could no longer spare a PCIe slot in my storage server for the NIC: the server has built-in 10GbE, and I had maxed out the 24-drive internal HBA I started with and needed the slot to expand into disk shelves. Hard drives are too slow to ever really take advantage of the InfiniBand network, so 10GbE is already more than enough for them.

The switch is a Mellanox/Voltaire Grid Director 4036E. It seems you can actually find them for $220 on eBay today. The E signifies the built-in Ethernet gateway that lets the SAN network talk to Ethernet devices as well. It also runs the InfiniBand Subnet Manager locally on the switch, which is nice; no need for a PC to run it. Port-to-port latency is supposed to be 100ns, but it seems to measure closer to 300-400ns most of the time. Still fast for its age. Here is the manual if you want to take a look and see if it is something you think you can get configured for your use:

I feel tempted to repeat your mistake, but I should probably hold off.

Currently I am also bottlenecked and don’t really get much use beyond 3-4 Gbit, but I do think that will change in the very near future with all my new gear. I am also looking into https://telescope.timd.io/, which is software to run Looking Glass over RDMA networks; that sounds fun and maybe even actually useful. Besides that, I will have ~235 SAS drives (hence my username) hooked up, which, even though the majority are HDDs, should still be a lot of raw bandwidth. It also seems like it would probably be handy for MPI workloads (and apparently you can even cluster with llama.cpp over MPI).
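
On the MPI angle, something like the below is roughly what I’d run to check that the fabric actually carries MPI traffic at InfiniBand speeds before throwing llama.cpp at it. It’s only a sketch: it assumes mpi4py, numpy, and an MPI stack built with InfiniBand/UCX support, and the hostnames in the mpirun line are placeholders.

```python
#!/usr/bin/env python3
# Minimal two-rank MPI ping-pong bandwidth check (a sketch, not a benchmark).
# Assumes mpi4py + numpy and an MPI build with InfiniBand support.
# Run with exactly two ranks, e.g.:
#   mpirun -np 2 --host nodeA,nodeB python3 mpi_pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

SIZE = 64 * 1024 * 1024          # 64 MiB payload per message
REPS = 20
buf = np.zeros(SIZE, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(REPS):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # Each iteration moves the payload in both directions.
    gbit = (2 * SIZE * REPS * 8) / elapsed / 1e9
    print(f"ping-pong throughput: {gbit:.1f} Gbit/s")
```

If the number comes out anywhere near link rate, the RDMA path is being used; if it looks more like IPoIB/TCP speeds, the MPI build is probably falling back to plain sockets.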

Thanks for the info. I remember seeing them around that price a few months ago as well, but at the time I had even less justification for the purchase.

Yeah, I’ll pass, since at the moment I need to keep things silent and enterprise gear won’t really work for that.