Does Linux do RDMA? Or does it require tweaking and a modified kernel?
Works out of the box. E.g. if you want to use RDMA with iSCSI (iSER), you just need to set the enable_iser flag on the target's portal. It's comparably simple with NFS.
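If you're on the LIO target, it's roughly this with targetcli; a minimal sketch, and the IQN, export path, and hostnames are placeholders for your own setup:

    # iSER: enable RDMA on an existing iSCSI portal (LIO target, targetcli).
    targetcli /iscsi/iqn.2003-01.org.example:disk1/tpg1/portals/0.0.0.0:3260 \
        enable_iser boolean=true

    # NFS over RDMA, server side: add an RDMA listener (20049 is the usual port).
    echo "rdma 20049" > /proc/fs/nfsd/portlist

    # NFS over RDMA, client side: mount with proto=rdma.
    mount -t nfs -o proto=rdma,port=20049 server.example.com:/export /mnt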
Works perfectly fine here with InfiniBand. I get 24 Gbit/s over 40 Gbit/s (QDR) InfiniBand.
I wonder what the use case is?
Ultra-low-latency networking. Usually used with hardware that makes 40 Gb Ethernet look slow, and where the usual copying of packets from user space to kernel, then kernel to user space, is an unacceptably high proportion of the total time to move data from one machine to another.
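If you want to put numbers on that and have the perftest tools installed, something like this; hostnames are placeholders:

    # Round-trip latency over RDMA (perftest package).
    ib_send_lat                       # on the server, waits for a peer
    ib_send_lat server.example.com    # on the client

    # Same idea for bandwidth, using one-sided RDMA writes
    # (run ib_write_bw with no arguments on the server first):
    ib_write_bw server.example.com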
This, and also lower CPU usage. That matters on slower machines (a NAS, say) or at higher throughput (100G+).
Yeah, but I wonder what the use case is.
100 Gbps is somewhat doable with the regular Linux kernel / IP stack, assuming you have multiple TCP connections and multiple threads/cores, especially if you raise the MTU above 1500 (jumbo frames).
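A quick way to check that on your own hardware, assuming iperf3; the interface name and hostname are placeholders:

    # Jumbo frames: raise the MTU (both ends and the switch must support it).
    ip link set dev eth0 mtu 9000

    # A single TCP stream rarely fills a 100G pipe; -P 8 runs eight parallel
    # connections against an iperf3 server started with: iperf3 -s
    iperf3 -c server.example.com -P 8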
There's also DPDK and similar frameworks that let you ask the NIC to deliver packets directly into user space, bypassing the kernel.
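The usual DPDK bring-up looks roughly like this; the hugepage count, PCI address, and core list are placeholders:

    # Reserve hugepages for DPDK's packet buffers.
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # Detach the NIC from its kernel driver and hand it to user space via vfio-pci.
    dpdk-devbind.py --bind=vfio-pci 0000:03:00.0

    # testpmd is DPDK's bundled forwarding test app; -l picks the cores,
    # -i drops into its interactive shell.
    dpdk-testpmd -l 0-3 -- -i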
But I wonder what the use case is, e.g. why not just use multiple machines instead.
Usually interprocess communication in supercomputing clusters, although high-end storage systems are using it too. Those using RDMA intensively usually have plenty of machines already; the point is to keep the cores free for 'real' work.
Things like DPDK are really useful where the work is the packets themselves - firewalls, routers, etc. RDMA is useful where the work is something else.