100Gbps fiber over Thunderbolt 5 (80Gbps) with SPDK side quest

Hi.

Searching gave me a thread from 2020, so if there’s a newer post I would be glad to be pointed in the right direction.

The following questions stem from my research into adding fiber to my Mac Studio M4 Max via its Thunderbolt 5 port.

Marketing suggests Thunderbolt 5 supports 80Gbps. Some infographics suggest this is achieved as 2x 40Gbps bidirectional. I’m using marketing bitrates here. How can Thunderbolt 5 provide 80Gbps and be PCIe compliant if it only carries PCIe 4.0 x4 lanes? In other words, where does the extra 16Gbps come from? And how (if at all) can it be utilized by a PCIe 4.0 card in a Thunderbolt 5 enclosure?
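If I run the numbers myself (assuming the PCIe tunnel really is plain PCIe 4.0 x4 with 128b/130b encoding, which is my reading of the spec sheets, not something confirmed in this thread), the “extra” bandwidth looks like headroom for the other tunnels (DisplayPort/USB) rather than extra PCIe:

```python
# Back-of-the-envelope PCIe 4.0 x4 throughput, to see where the "extra"
# Thunderbolt 5 bandwidth could sit. PCIe 4.0 runs 16 GT/s per lane with
# 128b/130b line encoding.

def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    """Usable line rate in Gbit/s after 128b/130b encoding overhead."""
    return gt_per_s * lanes * 128 / 130

tb5_total = 80.0                  # Thunderbolt 5 marketing figure, per direction
pcie4x4 = pcie_gbps(16.0, 4)      # ~63 Gbit/s usable

print(f"PCIe 4.0 x4 payload rate: {pcie4x4:.1f} Gbit/s")
print(f"Left over for other tunnels (DP/USB): {tb5_total - pcie4x4:.1f} Gbit/s")
```

So under that assumption the ~17Gb gap isn’t assignable to a PCIe card at all; it’s just budget the link can spend on non-PCIe traffic.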

I’m trying to figure out which fiber speed will actually be available to me: 10Gb, 25Gb, 40Gb, 50Gb or 100Gb (as 80Gbps or 64Gbps). It makes the most sense to optimize for whatever the Mac Studio can handle, since this will be a direct 1-to-1 fiber connection to my NAS.

If the 80Gbps is Thunderbolt-to-Thunderbolt only, and the max I can expect to non-Thunderbolt devices is PCIe 4.0 x4 (64Gb), then how/why does ATTO sell their ThunderLink TLNS-5102 100Gb QSFP28? The only “speed test” I’ve found is this, which just confirms it can stream 8K video at 4800MB/s. Far from even 64Gbps.

Anyone know if they’re even able to get 64Gbps out of that thing? Or somehow a 100Gb link with an 80Gb limit?

Also, which card is that thing using? It’s dual 100Gbps QSFP28 with an LC connector (ConnectX cards with QSFP28 use MPO-12 as far as I can tell?). It would be nice to know which network card to pair it with on the other end.

If the max I can get is 64Gbps, would it make more sense to buy something like the Sonnet Echo SE I T5, which provides a PCIe 4.0 x8 slot, and install a dual 25Gbps PCIe 4.0 x8 card to get 50Gbps with link aggregation? I know it will be downgraded to x4, but that should still be enough to provide 50Gbps on one port?

Or what are my best options?
Both on the Thunderbolt 5 side and at the other end (a regular PCIe card in an AMD EPYC server).

Side quest.
macOS doesn’t support RDMA or RoCE. But I found a strange post on a nondescript web page suggesting it might be possible to use SPDK via QEMU? More than speed, it would be epic to get better latency, as most of my workload is many small files.
Comments? There’s no way to contact the author, and I haven’t seen anyone else do this.

Thanks

Why not? A dock could offer you 64Gbit for PCIe and another 10Gbit for LAN and still stay under the 80Gbit total.

Since you will only be able to run a NIC off of PCIe, I would guess 64Gbit.

IMHO your biggest hurdles will be macOS’s pretty poor SMB implementation and finding a NIC that officially supports macOS.

I’m sorry if my point didn’t come across.
What I meant was: how can it (the ATTO ThunderLink) provide 100Gb over PCIe 4.0 x4, which is clearly only 64Gb, unless it can somehow take other parts of the Thunderbolt 5 bandwidth and assign them to extend the PCIe 4.0 x4 link? That’s what the marketing for the 100Gb ATTO ThunderLink is alluding to, hence my confusion about how that’s possible. 80Gb combined across multiple different ports is another matter.

Yes, a 64Gb limit is what I think too; at least that seems logical given the data I have at the moment. So the 100Gb ATTO marketing seems very misleading, and I would end up paying for bandwidth I can’t utilize unless it can actually pull 64Gb out of it.

I’m aware of some SMB trouble.

macOS has support for ConnectX 4/5/6 cards via this .dext as far as I can tell (built into macOS, at least in both macOS 15 and 26).
/System/Library/DriverExtensions/com.apple.DriverKit-AppleEthernetMLX5.dext/
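For anyone who wants to check their own machine, here’s a minimal sketch that just tests whether that bundle path exists. The path is what I see on macOS 15/26; treat it as an assumption on other releases.

```python
# Check for the built-in ConnectX DriverKit extension (AppleEthernetMLX5).
# Path as observed on macOS 15/26 -- may differ on other releases.
import os

MLX5_DEXT = ("/System/Library/DriverExtensions/"
             "com.apple.DriverKit-AppleEthernetMLX5.dext")

def has_mlx5_dext(path: str = MLX5_DEXT) -> bool:
    """True if the AppleEthernetMLX5 driver extension bundle is present."""
    return os.path.isdir(path)

if __name__ == "__main__":
    print("AppleEthernetMLX5.dext present:", has_mlx5_dext())
```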

And those ATTO ThunderLink boxes are made for Mac (I’m guessing they’re ConnectX-based as well), so getting a card to work shouldn’t be a problem.

So the 100Gb over 80Gb Thunderbolt is 64Gb in reality, which means at most a 50Gb connection. Unless there’s a way to get those extra Gb (from 50 to 64) out of a 2x40Gb or a 100Gb NIC.

I think I’ll end up with a dual 25Gb card and link aggregation, unless someone can confirm a way to take advantage of the full 64Gb of PCIe speed.
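For my own notes, a rough comparison of the ceilings for each option. Assumptions (mine, not confirmed anywhere in this thread): the PCIe tunnel tops out around 63Gb usable, and plain LACP hashes each flow onto a single member link, so one TCP stream would still cap at one port’s rate.

```python
# Rough throughput ceilings per option, assuming the Thunderbolt PCIe
# tunnel is PCIe 4.0 x4 (~63 Gbit/s usable) and that LACP pins any
# single flow to one member link.

PCIE4_X4 = 16.0 * 4 * 128 / 130   # ~63 Gbit/s usable

options = {
    "100G NIC, single port":        min(100, PCIE4_X4),
    "2 x 40G, aggregated":          min(2 * 40, PCIE4_X4),
    "2 x 25G, aggregated":          min(2 * 25, PCIE4_X4),
    "2 x 25G, single flow (LACP)":  min(25, PCIE4_X4),
}

for name, gbps in options.items():
    print(f"{name:30s} ~{gbps:.0f} Gbit/s")
```

If the single-flow LACP limit bites, an alternative might be leaving the two ports unbonded and relying on SMB multichannel (which macOS supports) to spread one session across both links, but I haven’t tested that myself.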


Just a quick update here, no reason to get too excited - yet.

Apple has an open-source project called MLX:
“MLX is an array framework designed for efficient and flexible machine learning research on Apple silicon.”
https://opensource.apple.com/projects/mlx/

They just had a pull request (21 November) for a Thunderbolt RDMA communications backend. This is for Thunderbolt 5 only (for now).

So at least the macOS 26.2 kernel has some kind of RDMA support at this point. That’s a step towards RoCE support, which would be the next logical step after RDMA.

RDMA over Thunderbolt 5 is one thing, but they also need to enable RDMA and RoCE support in the MLX5 drivers for the ConnectX 4/5/6 cards. (Confusing: MLX is the Apple software, MLX5 the ConnectX driver.)

Here’s hoping this is in the works at Apple and will be available soon. My guess is that the drivers already have the bits needed for enabling RoCE, so it’s all about enabling it in the kernel. And now the most important piece is in place: ibverbs for RDMA support.

When googling this topic, some say it’s InfiniBand over Thunderbolt, but it really isn’t. It’s InfiniBand verbs, and those verbs are the API used for RDMA, which RoCE in turn transports over Ethernet with hardware support in the network adapters.

RDMA and RoCE open the door to SMB Direct, but further work is needed to enable NVMe-oF.

Good timing, just hoping progress isn’t too slow going forward.
