How to force SMB to use one NIC or another?

Let's see if I can explain this the right way.

I have a working 10GbE network with its switch and so on.
But my server has some fast SSD storage and 10GbE is actually a bottleneck, so I got some 100GbE NICs.

The 10GbE network provides SMB from the server and also Internet access.

Now I want to connect my computer directly to the server (TrueNAS) using 100GbE Mellanox cards, so I can use that extra speed on my side while working.

My problem is that if I connect everything, my client (Windows) uses the 10GbE link to reach the server instead of the 100GbE NIC.
Can I force it somehow? I can't disconnect the 10GbE because it still provides Internet access. Any ideas would be appreciated :)

I'm not sure what determines which NIC Windows will use.

By the way, one is in the 192.168.10.X range (10GbE) and the other is in the 192.168.9.X range (100GbE), in case that has something to do with it.

thank you!


Bind to a single interface or IP address in smb.conf:

bind interfaces only = yes
interfaces = lo eth1 192.168.10.100

OP is using Windows.

For the Internet NIC, fill in your router subnet (192.168.10.X/24).
For the NAS NIC, fill in another private network without a gateway (192.168.9.X/24).

Give the NAS an IP in the 192.168.9.X/24 range, not the 192.168.10.X/24 range.
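
Roughly, on the Windows client that could look like this in PowerShell (the interface aliases and the .2 / .1 addresses are just placeholders; swap in whatever your adapters and router actually use):

# 10GbE NIC: normal LAN address with the router as default gateway (Internet + everything else)
New-NetIPAddress -InterfaceAlias "Ethernet 10G" -IPAddress 192.168.10.2 -PrefixLength 24 -DefaultGateway 192.168.10.1

# 100GbE NIC: point-to-point link to the NAS, same subnet as its 100GbE port, no gateway on purpose
New-NetIPAddress -InterfaceAlias "Ethernet 100G" -IPAddress 192.168.9.2 -PrefixLength 24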


What if you tried accessing TrueNAS over whatever IP address it has configured on its 100Gbps NIC (the 192.168.9.x…)?

Does that work at 100Gbps speeds?
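
For a quick check from the Windows side, something like this should do it (192.168.9.10 and the share name "tank" are made up; use your real TrueNAS 100GbE IP and share):

# Which interface would Windows route this destination over?
Find-NetRoute -RemoteIPAddress 192.168.9.10

# Map the share by the 100GbE address instead of the 10GbE one
New-SmbMapping -LocalPath "Z:" -RemotePath "\\192.168.9.10\tank"

# Confirm which server address the SMB session is actually using
Get-SmbConnection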

You can also try configuring Samba on TrueNAS with
server multi channel support = yes

If everything's working correctly, then whichever IP you use for the initial access, the server should advertise its additional IPs and some traffic may flow over the other NIC.
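
On the client side you can verify (and, if needed, pin) this from PowerShell. A rough sketch, assuming the share is mapped as above and "Ethernet 100G" is the fast adapter's alias (both placeholders):

# Multichannel is normally on by default on the Windows client; double-check it
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# After copying something, see which interfaces are actually carrying SMB traffic
Get-SmbMultichannelConnection

# Optionally restrict SMB to the 100GbE adapter for this server only
New-SmbMultichannelConstraint -ServerName 192.168.9.10 -InterfaceAlias "Ethernet 100G"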


Oh thank you guys!!

Actually, connecting to 192.168.9.X used the 100GbE NIC! So I just needed to map the share using that IP address. How easy and fast! Cool.

Server multi channel support made it slower, though… I think it's because, even though I could enable it, there is no switch in between to trunk the connections? Not sure…

Anyway, I'm now getting between 2500 and 3500 MB/s of transfer speed (a single-core bottleneck, and the actual SSD array limit is around there too). Fast enough :)
