Site-to-site VPN using Linux and DHCP passthrough

We have an existing device that currently resides in a LAN. For various reasons the device now has to move to another location, but it should still (logically) reside in the same LAN, so the solution should be mostly transparent. The hardware is an x64 home server that resides in the target LAN and a Rock Pi E that will connect to the server and has a dedicated Ethernet port to relay all traffic through the VPN.

The current idea was to use an OpenVPN server with a tap interface, but the problem with the bridge is that it makes the Ethernet port on the server unusable, because it seems to send all traffic through the bridge to the tap and into OpenVPN. The server doesn’t have a second NIC, so that’s kinda bad. We tried using a macvtap to piggyback on the NIC, but it can’t be attached to a bridge, and OpenVPN doesn’t want to use it as a tap interface either.
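For context, the tap-bridge setup described above, using the interface names that appear later in the thread (`br-vpn`, `tap0`, `enp0s31f6`), would roughly be:

```shell
# Rough sketch of the attempted setup; interface names taken from later
# in this thread, run as root.
ip link add name br-vpn type bridge      # create the bridge
ip link set br-vpn up

openvpn --mktun --dev tap0               # create a persistent tap device
ip link set tap0 up

ip link set tap0 master br-vpn           # enslave the tap to the bridge
ip link set enp0s31f6 master br-vpn      # enslave the physical NIC too
```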

How the solution is built doesn’t really matter that much to me; it should only be encrypted, automated, and preferably possible to set up with a single NIC on the server (I also don’t like external services; the server is accessible from the WAN, so a direct connection would be pretty good).

Something sounds off here. The bridge shouldn’t prevent eth0 from getting traffic. Does it only happen while OpenVPN is running?

It happened as soon as the interface was added to the bridge (we had already added tap0 to that same bridge and OpenVPN was started), which killed the SSH connection (which ran over that interface) and prevented further debugging. From that point on the server was unresponsive until we rebooted it (which cleared the bridge).

So this is not supposed to happen? I mean, it’s possible to plug the server into a screen and have a look at it; anything specific to look for while debugging that problem?

Edit: we just added it to a new bridge with nothing else connected to it, and it immediately killed off any communication with the outside. It still has its IPs and all, but we can’t even ping the local network from the server; it’s just all gone.

2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-vpn state UP group default qlen 1000
    link/ether [censored] brd ff:ff:ff:ff:ff:ff
    inet [censored]/24 brd [censored].255 scope global enp0s31f6
       valid_lft forever preferred_lft forever
    inet [censored]/24 brd [censored].255 scope global secondary enp0s31f6
       valid_lft forever preferred_lft forever
    inet6 fd[censored]/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6879sec preferred_lft 3279sec
    inet6 2001:[censored]/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6879sec preferred_lft 3279sec
    inet6 fd[censored]/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fd[censored]/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80:[censored]/64 scope link 
       valid_lft forever preferred_lft forever

Just for testing, could you spin up a VM with its own IP, then run OpenVPN inside that VM to tunnel out to the Rock Pi?

Then you can play around without losing the connection to the host.

I get censoring the IPv6, but IPv4 should be okay if it’s something like 10.10.0.x/24 for your lab, except for the single IP that is going to be routed through the tunnel.

Then, on your router, add a specific route for traffic to that IP over the dedicated link?
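On a Linux-based router, such a host route might look like this (all addresses and interface names here are hypothetical):

```shell
# Hypothetical example: send traffic for one device (10.10.0.50) over the
# dedicated tunnel link instead of the normal LAN path.
# 10.10.0.2 is the assumed tunnel-side gateway, eth1 the dedicated port.
ip route add 10.10.0.50/32 via 10.10.0.2 dev eth1
```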

So it turns out that in the VM it’s all working fine… no issues with the accessibility of the interface… Is there any way this could be a hardware or configuration issue? The server’s NIC is an Intel I219-V.

Network info of the VM:

3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-vpn state UP group default qlen 1000
    link/ether 08:00:27:99:b2:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.197/24 brd 192.168.178.255 scope global dynamic noprefixroute enp0s8
       valid_lft 862631sec preferred_lft 862631sec
    inet6 2001:[censored]/128 scope global dynamic noprefixroute 
       valid_lft 5834sec preferred_lft 2234sec
    inet6 fdea::b80e:5fd0:82a6:7aa2/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 fdea::587:defe:1e26:3364/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 2001:[censored]/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 2001:[censored]/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 fdea::f70a:7860:ed26:e58f/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 2001:[censored]/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 fdea::65e0:499e:3b0:7f6c/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 2001:[censored]/64 scope global temporary dynamic 
       valid_lft 6740sec preferred_lft 3140sec
    inet6 fe80::a566:8ade:9c27:b51d/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Well, we’ve now just “fixed” it by using a second NIC. Not ideal, but it works at least.

Were you still expecting to use the physical interface for IP after enslaving it to the bridge?

Generally, you would configure the IP on the bridge after adding the port. You’d no longer configure the physical interface in any way, other than bringing it up and adding it to the bridge.
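A minimal sketch of that order of operations, using the `br-vpn`/`enp0s31f6` names from earlier in the thread (the specific addresses are placeholders):

```shell
# Move the LAN IP from the physical port to the bridge.
# enp0s31f6 becomes a plain bridge port; br-vpn owns the address.
ip addr flush dev enp0s31f6               # drop the IP from the port
ip link set enp0s31f6 master br-vpn       # add the port to the bridge
ip link set br-vpn up
ip addr add 192.168.178.10/24 dev br-vpn  # placeholder LAN address
ip route add default via 192.168.178.1    # placeholder gateway
```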

This has nothing to do with OpenVPN or other tap stuff.

Well, we actually had it working in the VM, just not on the server. And giving the bridge an IP also didn’t work. What did kinda work was adding another veth to the bridge, but it was kind of unreliable. (It also works on my desktop when bridging a VM into the network.)
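For reference, the veth workaround mentioned above can be sketched like this (the veth names and the address are hypothetical):

```shell
# Hypothetical sketch of the veth workaround: the host keeps its IP on one
# end of a veth pair while the other end sits on the bridge.
ip link add veth-host type veth peer name veth-br
ip link set veth-br master br-vpn             # bridge-side end
ip link set veth-br up
ip link set veth-host up
ip addr add 192.168.178.11/24 dev veth-host   # placeholder address
```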

Well, the end result of the project is a separate network interface, and it has worked great.