
InfiniBand Cabling and Switching



I have a few questions regarding InfiniBand cabling.

First, does it have to be one complete, uninterrupted cable (for both copper and fibre), or can I, say, have a fibre cable going into an LC keystone jack through trunking and then out the other side in the same way?

I already have fibre LC cabling, as a few years back I wanted to use 4Gb FC and do IP over FC, but I ran into issues that made it unusable (I can't remember exactly what they were, but Windows driver issues come to mind).

Second, if I get a QDR fibre switch, does the protocol used over the connection matter? I.e. can I only run it as 40Gb InfiniBand and not 10GbE, even though 10GbE is supported by the adapter, or does it not matter at all?

Thanks in advance for any help.



My experience is limited to what I have done so far, and all of that came after LTT's video on InfiniBand with CX2 QSFP QDR cards, and now with OEM HP MCX345 FCBT (I think) FDR / 40GbE VPI cards.

First of all, I would advise against InfiniBand, since software support is very limited.
For common software to use the link you need IP emulation, and that is done via IPoIB ("IP over InfiniBand"), which is CPU-bound.
From what I have seen posted on some forums, you can get up to about 27 Gbit over IPoIB with heavy tweaking, which is better than the 10GbE VPI mode that comes with the CX2 VPI cards, but CX3 40GbE is just easier and less CPU-bound.
And since it's Ethernet, all software will use it.
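To put some numbers on why the advertised "40 Gbit" never shows up in practice: QDR InfiniBand uses 8b/10b line encoding, so a quarter of the raw signalling rate is overhead, while FDR and 40GbE use the much lighter 64b/66b. A quick sketch (these are the standard encoding rates, nothing measured):

```python
def effective_gbps(raw_gbps, data_bits, total_bits):
    """Usable data rate after line-encoding overhead."""
    return raw_gbps * data_bits / total_bits

# QDR IB: 4 lanes x 10 Gbit/s raw, 8b/10b encoding -> 32.0 Gbit/s usable
qdr_ib = effective_gbps(40, 8, 10)

# FDR IB: 4 lanes x 14.0625 Gbit/s raw, 64b/66b -> about 54.5 Gbit/s usable
fdr_ib = effective_gbps(4 * 14.0625, 64, 66)

# 40GbE: 4 lanes x 10.3125 Gbit/s raw, 64b/66b -> 40.0 Gbit/s usable
eth_40g = effective_gbps(4 * 10.3125, 64, 66)
```

So a QDR link tops out at 32 Gbit/s of data before any protocol overhead, which is why roughly 27 Gbit over IPoIB is already a decent result.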

Also, SMB Direct is only available on CX3 and above, in case you want that.

Now let me try to answer your specific questions.

"Does it have to be one single run of cable?":
No, but direct attach copper (DAC) cables have transceivers on both ends, and you shouldn't even think about cutting and splicing those!

I think fibre is OK to be spliced or trunked, though I have never done it.
Logic says it's fine, but there may be some gotchas with fibre polarity and the loss over the run.
Better wait until someone else green-lights this.
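The loss side of it can at least be roughed out with a quick budget check. A hedged sketch; the per-kilometre and per-connector figures below are typical multimode textbook values (assumptions), not from any particular transceiver datasheet, so check your optics' specs for the real power budget:

```python
def link_loss_db(km, connectors, splices,
                 fiber_db_per_km=3.0,  # typical OM3/OM4 multimode at 850 nm (assumed)
                 connector_db=0.5,     # typical mated LC pair (assumed)
                 splice_db=0.1):       # typical fusion splice (assumed)
    """Rough total attenuation for a fibre run with patches and splices."""
    return km * fiber_db_per_km + connectors * connector_db + splices * splice_db

# e.g. a 30 m run that passes through two keystone couplers
# (one mated pair at each keystone plus one at each end):
loss = link_loss_db(0.03, connectors=4, splices=0)  # about 2.1 dB
```

Short-range 40G optics often only budget a few dB, so a pair of keystone couplers can eat most of it; that is the gotcha to watch.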

"Does a QDR switch support 10GbE?":
Well, it obviously depends on the hardware,
but if the switch is advertised as QDR and/or 10GbE it should be fine. Mellanox calls this "VPI", for Virtual Protocol Interconnect, if I'm not mistaken.
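If the card is a VPI part, the port protocol is usually selectable in firmware. As a hedged sketch for ConnectX-3 and newer under Linux, using Mellanox's MFT tools (the device path is an example from my setup; the values are 1 = IB, 2 = ETH, 3 = VPI auto):

```shell
mst start                                   # load the MST kernel modules
mlxconfig -d /dev/mst/mt4099_pciconf0 query | grep LINK_TYPE
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
# reboot (or reload the driver) for the change to take effect
```

ConnectX-2 cards predate mlxconfig support, so there the port type is usually set through the mlx4_core module options instead.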

There are EMC-branded Mellanox 6XXX FDR switches, for example, that come with OEM software that only supports InfiniBand.
STH has a long thread about those and how to convert them, where possible.
So without software trickery, no 10GbE or 40GbE on those.

In terms of transceivers, those seem to be very accepting: I ran CX2 point-to-point 10GbE on the cheapest QDR DAC cables possible without problems, and the same cables are now doing 35 Gbit point-to-point on my CX3s in 40GbE mode.

But transceivers come with a bit of firmware on board, so it probably depends again.
If advertised as QDR they should be OK, though, as seen in my 40GbE example.
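Whatever mode you end up in, it is worth measuring what the link actually delivers rather than trusting the negotiated speed. A minimal sketch with iperf3 (the address is an example; parallel streams help saturate 40GbE from a single client):

```shell
# on the NAS:
iperf3 -s

# on the desktop:
iperf3 -c 10.0.0.1 -P 4 -t 30   # four parallel TCP streams for 30 seconds
```

That 35 Gbit figure above is the kind of number this test reports.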


Thanks for your reply; it has given me a lot to think about.

Originally I was thinking of starting with a link just between my NAS and my main PC, but doing it in a way that lets me add PCs to that separate network later, so I could add an ESXi host and maybe some other stuff in the future.

I could do that with 10GbE over Cat 6a (I already have Cat 6a cabling in my house), but the switches are way out of my price range, so 10GbE SFP+ or InfiniBand running 10/40GbE looked like the only solution.

I realised, however, that I will have to upgrade my NAS to do this anyway (current specs in the new thread), as it currently doesn't have a spare PCIe slot. So rather than upgrade that PC to be just the NAS, I think I'm going to upgrade it to be both NAS and ESXi host, with the NAS as a VM, or use the VM features in FreeNAS (I haven't used those before, so I need to play with them on a test setup).

As this will get off-topic again and I don't want to break the rules, I'll move it to a new thread and link both previous threads in it.

Thanks for all the help; I think I'm getting there with my setup.