Please help with setting the same IP on a NAS for multiple directly connected clients, Bridge / LAGG Bullshittery

Situation

My setup is rather simple and consists of a NAS running FreeNAS, my workstation called WS, and a server for testing stuff called TS.
They are each directly attached to the NAS via a dedicated 40GbE link that is separate from everything else.
So WS to port 0 and TS to port 1.

Adding a switch is sadly not an option. If it were, this would be easy and I wouldn’t need to ask and write here.

What i want

I want to have the NAS reachable on the same IP from both WS and TS.
Remembering that it’s …2.1 on my WS and …3.1 on TS is too much for me. :crazy_face:

Since the clients are on different ports, with different MACs and different client IPs that will hopefully never ever collide, I think this should be possible! Is it? Why shouldn’t it be?

On the WS: ping …2.1 to the NAS should work,
on the TS: ping …2.1 to the NAS should also work,
and from the NAS, …2.10 to the WS and …2.20 to the TS should also work.

Complicating matters, FreeNAS isn’t very accommodating in these edge cases, and the performance over the 40GbE links is another concern.

What i have tried so far

Bridging

Bridging was the initial thing that came up while searching around and seemed promising. FreeNAS needed some tunable “bodging” to permanently configure the interface, since there is absolutely no UI capability for that.
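For reference, the tunable “bodging” on older FreeNAS amounts to reproducing FreeBSD rc.conf lines as tunables; a minimal sketch, assuming the 40GbE ports are mlxen0/mlxen1 (placeholder names, substitute your own) and the shared address is 192.168.2.1:

```shell
# FreeNAS tunables of type "rc.conf" (equivalent to /etc/rc.conf lines).
# mlxen0 / mlxen1 are placeholder interface names.
cloned_interfaces="bridge0"                                   # create bridge0 at boot
ifconfig_bridge0="inet 192.168.2.1/24 addm mlxen0 addm mlxen1 up"  # add both ports, assign the shared IP
```

Since these are applied outside the normal network config, the UI has no idea the bridge exists, which is part of why it is fragile across reconfigurations.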

That solution did its job until I noticed the horrible performance I was getting over it.
For no apparent reason, it capped me at about 11.4 Gbit/s, with possibly far more latency than I would expect, which might explain the horrible experience I had network-booting Windows about a year ago.
Another downside is that I don’t need the clients to communicate with each other, which they can in this setup.

Reasons?

  • Since the NAS is powered by a Xeon E5-2628L v4, with 1.7 GHz boost and 1.5 GHz base, CPU could be an issue, but I’m talking iperf3 benchmarks here, where the client uses multiple connections / threads to mitigate the single-thread performance issue at hand.
    All the threads didn’t matter: about 11.4 Gbit/s, where it did 32 Gbit/s easily on 4 threads before adding the bridge.

  • I noticed that static IP configs would get stuck at a similar limit once they were reconfigured at runtime, hinting at some software reset or configuration issue.
    If that is the cause, I don’t know how to mitigate it, since I only managed to fix it with a restart: the new configuration is set as the default for the NIC, which then comes up with it first. That seems impossible with the bodged bridge, due to the way it is bodged.

LAGGs

It seemed reasonable to mess around with those, since FreeNAS has at least some UI support for them.
Besides the NICs to add to the group, one can choose a mode.

  • LACP is out of the question: it is way too complicated, and in theory it should detect a “split LAG” and prevent it from accidentally working.
    Tried it, and no, it doesn’t really work.
  • Failover works great, as long as only one client is active.
    There is a setting to listen on all ports, but that is still beside the point:
    once both are active, one is simply ignored.
  • Broadcast simply sends everything out on all the NICs and listens on all of them. This works fine, with the one exception that TS can receive what the NAS is sending to the WS.
    Not really an issue, but there should be a better solution!
  • Round-robin is stupid and won’t work at all.
  • Loadbalance is the most promising.
    It uses a rather under-documented “FlowID” and hash functions / tables to decide which packet to send out on which port.
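To illustrate the principle behind loadbalance mode (a toy sketch, not FreeBSD’s actual hash): the selected header fields are hashed, and the result modulo the port count picks the egress port. With l2 hashing only MACs feed the hash, so each client’s traffic sticks to one port; mixing in l3/l4 adds IPs and port numbers to the input, which can land a flow on the “wrong” direct link:

```shell
#!/bin/sh
# Toy model of lagg loadbalance port selection (illustration only):
#   port = hash(selected header fields) % number_of_ports
pick_port() {
    # $1 = header fields as a string, $2 = number of lagg ports
    h=$(printf '%s' "$1" | cksum | cut -d ' ' -f1)   # cheap stand-in hash
    echo $((h % $2))
}
pick_port "aa:bb:cc:00:00:10" 2                       # l2 only: MAC decides, stable
pick_port "aa:bb:cc:00:00:10 192.168.2.10 50000" 2    # l2,l3,l4: more inputs, may pick the other port
```

The same MAC always maps to the same port, which is why only the pure l2 setting has a chance of keeping each client’s traffic on its own link in this split setup.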

LAGG Loadbalancing

It is working for me
once in a while,
PROVING that it is possible somehow.

Why once in a while?
Trial and error landed me on the hash setting l2, which should mean that only the Layer 2 info in the packet is used to determine the port it gets sent out on.

l3 / l4, or any combination of those, always changes it so that none, or only one, of the clients can ping the NAS.

The default the interface comes up with is l2,l3,l4, which I can change easily.
The only issue is that after a reboot it won’t work anymore, even with the l2 setting.
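One way to force the setting back after every boot (a sketch, assuming the LAGG is lagg0; the script name and hook are whatever FreeNAS’s post-init command slot runs) would be:

```shell
# Post-init command (FreeNAS: Tasks -> Init/Shutdown Scripts, type "Command").
# Assumes the LAGG interface is lagg0; lagghash accepts l2, l3, l4 and combinations.
ifconfig lagg0 lagghash l2
# FreeBSD can also let the NIC-supplied RSS flow ID override the configured
# hash; disabling that keeps the port choice purely on lagghash.
sysctl net.link.lagg.default_use_flowid=0
```

Whether the flow-ID sysctl is what flips the behavior after a reboot is an open question here, but it is one of the few knobs that influence the port decision besides lagghash itself.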

It seems that in those moments, the port decision is the wrong one: each packet is placed on the link where it shouldn’t be, and hence never reaches its target.

Once in a while, while a link is initiating, it switches over and sorts the packets the right way, and it works.
No packets for the TS on the link for the WS that I could capture with Wireshark.
Performance is at least close to the expected 32 Gbit/s that I can get with my crappy QDR cables.
This is a sequential run, so WS left, TS right.

In parallel it’s completely fine too.

Final

Since this is not working consistently out of the box, and I’m a bit lost as to where to look for settings and tweaks and hacks and whatever else next, I’d really appreciate anything from you guys.
Even a little flaming that I should just get a switch and that all of this is not how it was intended.

Other solutions would also make my day. I have searched the web for a while and have no clue what to search for next. The amount of unrelated “HOW to X” results is infuriating, and I have the feeling I’m the only one who wants this. Or I’m too fucking stupid again. Wouldn’t be the first time I wasted my time for multiple years.

Thanks anyway :smiley:
Hope the read was enjoyable.


Do you actually need it to work on the IP level?

I would just edit my hosts file on TS and WS to point the same name to different addresses.

If you need it at the IP level, you can use a virtual network interface (tun/tap) to bridge on the client side.

A rough config follows:

On the NAS
eth1 x.x.2.1
eth2 x.x.2.2
route x.x.2.10 eth1
route x.x.2.20 eth2

WS
eth1 x.x.2.10
route x.x.2.1 eth1

TS
eth1 x.x.2.20
route x.x.2.2 eth1
tun0 mode ipip local x.x.2.1 remote x.x.2.2
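Translated into actual commands, that rough sketch might look like the following (assumptions: Linux clients with iproute2, FreeBSD on the NAS, placeholder interface names, and 192.168 standing in for the elided x.x prefix):

```shell
# On the NAS (FreeBSD): one address per port, one host route per client.
ifconfig mlxen0 inet 192.168.2.1/32            # mlxen0/mlxen1 are placeholders
ifconfig mlxen1 inet 192.168.2.2/32
route add -host 192.168.2.10 -iface mlxen0
route add -host 192.168.2.20 -iface mlxen1

# On the first client (Linux), address .10, sees the NAS directly as .2.1:
ip addr add 192.168.2.10/32 dev eth1
ip route add 192.168.2.1/32 dev eth1

# On the second client (Linux), address .20: reach .2.2 directly, then
# tunnel traffic for the "shared" address .2.1 over it with ipip.
ip addr add 192.168.2.20/32 dev eth1
ip route add 192.168.2.2/32 dev eth1
ip tunnel add tun0 mode ipip local 192.168.2.20 remote 192.168.2.2
ip link set tun0 up
ip route add 192.168.2.1/32 dev tun0
# Note: the NAS would also need a matching tunnel endpoint (e.g. a gif
# interface on FreeBSD) to decapsulate the ipip traffic and reply over it.
```

This keeps the NAS reachable as .2.1 from both sides without any bridge or LAGG, at the cost of per-client routing config.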


My first thought was to use broadcast lagg, but I don’t actually see that as an option in the FreeNAS GUI. In any case, it is unclear to me why you want the IP to be the same on the two interfaces. It is really contrary to best practices to do something like that. If you were working with a stock Linux or BSD system, you might be able to hack something together, but FreeNAS really locks down the network. It doesn’t even let you have more than one DHCP interface.

I tend to agree with @WorBlux: if you want to have some sort of mirrored config on your hosts so that they can both use FreeNAS, I would implement that in DNS via a hosts file, but combined with search domains.

To illustrate, say my NAS’s fqdn is freenas.me.tld. But like you, I have 2 point-to-point connections between my NAS and 2 machines for fast storage. So on my WS machine, I add 192.168.99.123 freenas.ws.me.tld to the hosts file, and on the TS machine, I add 192.168.199.56 freenas.ts.me.tld to that hosts file. Then I add ws.me.tld and ts.me.tld as the search domains on the respective machines.

In both cases, I should be able to use just freenas to connect to the nas, because the rest of the fqdn will be filled in dynamically. This allows you to use the same scripts or config files between both WS and TS. For instance, nfs://freenas would successfully connect on either despite the IP addresses being different.
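Concretely, the two files involved on the WS side might be set up like this (addresses and domain names taken from the example above; note that resolv.conf may get overwritten by a DHCP client, in which case the search domain belongs in that client’s config instead):

```shell
# On WS: map the NAS's point-to-point IP to a WS-specific fqdn,
# and let the search domain expand the bare hostname "freenas".
echo '192.168.99.123 freenas.ws.me.tld' >> /etc/hosts
echo 'search ws.me.tld' >> /etc/resolv.conf
# After this, "ping freenas" on WS resolves to 192.168.99.123,
# while TS resolves the same bare name to its own address.
```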

I do this all the time and have found it to be a great solution. For instance, on a client subnet, I connect to nas for file sharing which maps to nas.client.domain.tld, but if I’m on the admin vlan, I can connect to the admin web interface with https://nas:9090 (or whatever) and that maps to nas.mgmt.domain.tld which is the management interface.

Anyway, that’s my 2c. In general, I’ve found that trying to trick FreeNAS into doing something it wasn’t intended to do will yield poor results.


A short answer to you both, since I have to go in a moment.
@WorBlux, the tun/tap on the client is an interesting idea.
I will have to try that out.
Thank you very much.

@oO.o

  • Your first idea of using a broadcast lagg is viable with FreeNAS; it is an option, and I have tested it so far to actually work, with the obvious downside that sending to one client also clogs up the link for the other.
  • Yes, FreeNAS is a bitch sometimes, hence why I moved away from the bridge.
  • I will have to look at the fqdn and local DNS settings for that; I have ignored them so far, which probably wasn’t a good idea.

Thanks for all of that.
Guess I have to do some experimenting later.


The main reason why I want such a setup is “simplicity” on the client side.
Since both clients change rather regularly, in OS and hardware, I don’t want to have to keep a local config in check each time before I can use the machine.
Especially since network booting is still a faraway goal for the TS, and possibly for more clients later.

Expanding on the situation: I have a VM on the NAS running dnsmasq as a DHCP server, for the “ease” of not needing static IPs and for having the option to network-boot at all.
It could also act as a DNS server, though I don’t want all my DNS requests to go there first and then out to the world. Then again, thanks to my crappy internet, it probably wouldn’t matter.
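For reference, a minimal dnsmasq config for that DHCP-plus-netboot role, with the DNS part switched off entirely, might look like this (all addresses, MACs, interface names, and paths are placeholders):

```shell
# /etc/dnsmasq.conf -- DHCP + PXE only, no DNS
port=0                                     # 0 disables the DNS server entirely
interface=eth0                             # placeholder: the VM's NIC on the storage net
dhcp-range=192.168.2.100,192.168.2.200,12h
dhcp-host=aa:bb:cc:00:00:10,192.168.2.10   # pin WS to its expected address
dhcp-host=aa:bb:cc:00:00:20,192.168.2.20   # pin TS likewise
enable-tftp                                # serve boot files over TFTP
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0                       # boot file handed to PXE clients
```

The dhcp-host pins keep the clients on the addresses the NAS-side routes expect, even though the clients themselves stay on plain DHCP.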

So in short: simplicity on the clients at all costs, even at the price of a vastly complicated and hacked NAS side.


Another interesting observation: in my current setup, in loadbalance mode with the l2 hash, it keeps working.

Both clients were cycled multiple times over the last few days, sometimes running exclusively, and each time one came up or went down it just worked™.

So whatever is needed to get this working is some BS on the NAS that probably isn’t going to survive restarts yet.

Updated to FreeNAS 11.3 and tried out the now-supported bridge configuration.

The result is underwhelming: about 9 Gbit/s on 2 threads and 15 Gbit/s on 4 threads in iperf3, which is better than I remember, but still sucks compared to the expected 32 Gbit/s that was reachable through the LAGG config.