Mellanox InfiniBand cards performing at sub-gigabit speeds

Hi all,

So I’ve recently purchased a Mellanox MHQH29B-XTR 4X QDR ConnectX-2 card and a Mellanox MHQH29-XTC ConnectX card, along with a QSFP cable to connect the two.

I’ve installed these cards in two servers, one running Proxmox 5.2 and the other running Ubuntu 14.04 LTS. After installing the necessary modules and setting static addresses on each card, they are both working, and iperf tests show roughly 17 Gbps, which is great.
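For reference, the basic bring-up on each box looked roughly like this (ib0 and the 10.0.0.x addresses are just examples; one side also needs a subnet manager like opensm running for a direct-connect IB link):

modprobe ib_ipoib                 # load the IPoIB module
ip addr add 10.0.0.1/24 dev ib0   # static address, example subnet
ip link set ib0 up
iperf -s                          # on one side, then from the other:
iperf -c 10.0.0.1 -P 4            # parallel streams, ~17 Gbps here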

But when I use SCP to do a file transfer between the two machines, the actual transfer rate is somewhere around 40 MB/s, well under gigabit speeds.

I’ve made sure both cards are set to IPoIB mode and have the maximum MTU size set. All iperf tests show they should be transferring much, much faster. I’m at a loss as to what to try next.
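For anyone checking the same things, this is roughly what I used (ib0 is an example interface name; as far as I know, datagram mode caps the IPoIB MTU around 4092, while connected mode allows up to 65520):

cat /sys/class/net/ib0/mode                # prints "datagram" or "connected"
echo connected > /sys/class/net/ib0/mode   # connected mode is needed for the big MTU
ip link set ib0 mtu 65520                  # IPoIB connected-mode maximum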

CC: @oO.o


Sorry, I haven’t messed with the QSFP cards, just the SFP+.

What firmware version is installed on the cards?

One thing you can try is to use the cards in Eth mode. It’s limited to 10GbE, but that’s still faster than what you’re currently getting with SCP.
You can change the mode either by editing
/etc/rdma/mlx4.conf
or with
echo eth > /sys/bus/pci/devices/<pci device>/mlx4_port1
You’ll probably need root for that.
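If you go the config file route, mlx4.conf takes one line per device: the PCI address followed by the port type(s). Something along these lines (the PCI address here is just an example; check yours with lspci):

lspci | grep -i mellanox    # find the card's PCI address
# then in /etc/rdma/mlx4.conf: <pci device> <port1 type> [port2 type]
0000:04:00.0 eth eth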

IB software support is kind of sketchy in my noobish experience.

It looks to me like a problem between SCP and IB itself.

Here is a post on the Mellanox forums about that problem, without a solution, so you aren’t alone.
https://community.mellanox.com/thread/1838?db=5
Another one from STH. No good solution to it either, as far as I understand, but it has some more technical details and such.

Looks like SCP is just not the thing to use over IPoIB.

My short experience with IB on CX-2 cards is with iSER (iSCSI + RDMA), with relatively good performance.
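A quick way to rule IPoIB itself out is to push data through netcat, which does no encryption at all. If this runs near iperf speeds, the bottleneck is SSH’s crypto, not the link. (Address and port are examples; older netcat variants want -l -p instead of -l.)

nc -l 5001 > /dev/null                                # on the receiver
dd if=/dev/zero bs=1M count=4096 | nc 10.0.0.1 5001   # on the sender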

Can you check if you’re using AES crypto and if you have AES-NI enabled?
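Something like this should show it (cipher names assume a reasonably recent OpenSSH, and bigfile plus the address are just examples):

grep -m1 -o aes /proc/cpuinfo    # prints "aes" if the CPU has AES-NI
scp -c aes128-gcm@openssh.com bigfile user@10.0.0.1:/tmp/           # fast with AES-NI
scp -c chacha20-poly1305@openssh.com bigfile user@10.0.0.1:/tmp/    # fast without it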

edit: Ugh, I see this was a while ago… at least let us know what it turned out to be.
