Need some routing help

Hi all,

I need a solution for routing a domain name to an external ip on an offsite vm: traffic comes in on the offsite vm, gets routed (via either iptables or ufw) into an openvpn server installed on that vm, then on to a local pfsense openvpn client, which forwards/NATs it to a nextcloud vm, and of course the replies need to go back out the same way.

Thanks for reading.

The Long Story

I have a pfsense router put together in a kvm setup with an internal openvpn/samba vm and a nextcloud vm: 3 vms on a host with 3 physical adapters. One adapter is dedicated to the pfsense wan, one is a local bridge for host, vm and network comms, and the 3rd adapter is used for macvtap internal connections using bridge mode instead of vepa.

pfsense lan is on the main host bridge [br0] which provides all network connections for the host, other vm's and all the computers and devices in the house so a total of 36 dhcp leases give or take. all the computers, host and vms have no issues talking to each other via the lan and all have direct inet access via the pfsense wan.

The openvpn/samba NAS vm has 2 vnets: one connected to br0 and the other on a separate subnet vlan via macvtap for the internal vpn. Macvtap accepts internal vpn client connections, then dumps the traffic to pfsense and out through a liquidvpn pfsense client connection. The br0 connection can also reach the internal vpn and handles the samba nas. There may be an easier way of doing things, but the one sure thing is that when i am connected to my internal vpn the traffic either goes out via the liquidvpn connection or it doesn't go anywhere. So i can choose where i want the traffic to go, out via wan or liquidvpn, as can all other users/devices in the house.

The last vm is a nextcloud install with apache and some other stuff, and this is where the routing becomes a problem i have not been able to solve. I have an offsite/bare metal server that i want to use as an access point from the world back into nextcloud for file sharing and other cloud-type services. I do not want to use my wan ip or my liquidvpn ips to drop outside traffic into nextcloud, so i have set up a vm on the offsite server with 2 ips and connected it to pfsense via openvpn: the offsite vm is the openvpn server and pfsense is the openvpn client. The nextcloud vm has 2 vnets, one on the common br0 and the other a macvtap subnet vlan set up in pfsense. I have internet from the nextcloud vm out to the world, but i have not been able to get tcp 80/443 traffic from the outside world back into the nextcloud vm. On the offsite vm, ufw has a postrouting nat for the vpn ip range to one of the vnets [ens3], i have allowed forwarding from both ips and adapters to tun0, i have set up postrouting nat for both ips to tun0, and nearly every other combination i can think of, but i still can not get traffic back to pfsense via the openvpn connection.

Okay, I'll try to remember how to do this. I'm going to assume you already have openvpn working on your VPS and you can send internet traffic through it from your local network.

So on the VPS make sure you have ufw installed, and delete any rules or configuration changes you've made. Make sure you disable it by running

ufw disable
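If you'd rather wipe everything in one step, ufw also has a reset command that disables the firewall and restores the packaged default rule files (it keeps backups of your old ones alongside them in /etc/ufw):

```shell
# Disables ufw and resets its rule files to installation defaults;
# previous rule files are backed up with a timestamp suffix
ufw reset
```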

Then set the default policies

ufw default deny incoming
ufw default allow outgoing

Then allow SSH or anything else you need to access and configure the VPS

ufw allow ssh

Then enable UFW

ufw enable
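To confirm the default policies took effect, you can check the verbose status:

```shell
# Should report: Default: deny (incoming), allow (outgoing), ...
# along with your ssh allow rule
ufw status verbose
```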

In your openvpn server config make sure you have configured static routes for your local network. In the server.conf file you should have something like this:

client-config-dir ccd
route 10.1.1.0 255.255.255.0
route 10.1.2.0 255.255.255.0

Add a route line for each of your local networks behind the pfsense router. Then (if you don't already have it) make a file in the /etc/openvpn/ccd directory named after the client (the user name or certificate common name the pfsense client uses to connect to the server) and add these lines to it:

iroute 10.1.1.0 255.255.255.0
iroute 10.1.2.0 255.255.255.0

again using your networks and subnet masks.
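Putting that together, creating the ccd entry might look like this (assuming the pfsense client's certificate common name is "pfsense" — substitute whatever name your client actually connects with):

```shell
# The ccd directory path is relative to the openvpn config dir
mkdir -p /etc/openvpn/ccd

# The file name must match the client's certificate CN exactly
cat > /etc/openvpn/ccd/pfsense <<'EOF'
iroute 10.1.1.0 255.255.255.0
iroute 10.1.2.0 255.255.255.0
EOF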

Edit /etc/default/ufw and change this line to ACCEPT

DEFAULT_FORWARD_POLICY="ACCEPT"
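One step that's easy to miss: the kernel itself also has to have IP forwarding enabled, or none of the forward rules will do anything. ufw reads its own sysctl file, so the usual place to set it is /etc/ufw/sysctl.conf:

```shell
# In /etc/ufw/sysctl.conf, uncomment or add (note the slash syntax):
#   net/ipv4/ip_forward=1
# To enable it immediately without rebooting:
sysctl -w net.ipv4.ip_forward=1
```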

Now edit /etc/ufw/before.rules and add these lines at the top, below the commented-out header:

#START OPENVPN RULES
#NAT table rules
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

#Port forwards
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.1.1.1
-A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.1.1.1

#Allow traffic from openvpn clients to eth0
-A POSTROUTING -s 10.1.1.0/24 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.1.2.0/24 -o eth0 -j MASQUERADE
-A POSTROUTING -s 10.1.3.0/24 -o eth0 -j MASQUERADE

COMMIT
#END OPENVPN RULES

In the port forward section, eth0 is the internet-facing network interface of your VPS, --dport is the destination port you want to forward, and --to-destination is the IP of the server; use the actual local IP of the nextcloud server here, not the IP of the pfsense openvpn client. In the next section you need a rule for each of the local networks that should have internet access through the VPN, including the subnet used by openvpn.
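Once ufw is reloaded, you can confirm those nat rules actually made it into the kernel by listing the live nat table:

```shell
# Lists the nat table with packet/byte counters; you should see your
# DNAT and MASQUERADE rules, and the counters should increment when
# you hit the forwarded ports from outside
iptables -t nat -L -n -v
```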

Now you need to add allow rules to UFW to let this traffic through the firewall.

ufw allow from any to 10.1.1.1 port 80 proto tcp
ufw allow from any to 10.1.1.1 port 443 proto tcp

Again use the IP of your nextcloud server.

On pfsense make a rule on your openvpn client interface. If you don't have an interface for the openvpn client then you will need to make one.

The rules you need are essentially:

Action: pass
Protocol: TCP
Source: any
Destination: Single host or alias: 10.1.1.1 (your nextcloud server IP)
Destination port: 80

Save that and make another for port 443.

If you don't already have one, create an alias of your local networks: go to Firewall > Aliases and create an alias containing all of your local networks, including your VPN subnet. Call it local.

Now go back to the firewall rules and on the interface which your nextcloud server is connected to create a rule above the default allow any to any rule, or above any other allow rules.

Action: Pass
Protocol: any
Source: Single host or alias: 10.1.1.1 (Your nextcloud server IP)
Destination: Single host or alias: local (check the box that says invert match)
Destination port: any

Before saving the rule go to advanced options and select your VPS VPN as the gateway and save.

Make sure you restart the openvpn server (if you made any changes to the configuration file) and ufw on the VPS, and hopefully after that it should work.
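For the record, on most systemd-based distros the restarts look something like this (the openvpn unit name depends on your distro's packaging and your config file name, so check what yours is called):

```shell
# Unit name is an assumption; it matches /etc/openvpn/server.conf on
# older packaging, newer packages may use openvpn-server@server
systemctl restart openvpn@server

# Re-reads before.rules and the ufw rules without dropping connections
ufw reload
```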


Thank you, it took a while but i now have packets getting to my vps network and registering a state, which is huge progress. Now i am getting closed:syn_sent for the state in pfsense. From what i understand that is probably a firewall issue, either with the nextcloud vm or within pfsense. I am able to open the page on the nextcloud vm from a local machine using the nextcloud vm's ip address, so i will mess with the firewall a bit unless you have any thoughts to offer.

I really do appreciate your instructions; like i said, they got the requests inside to pfsense, so again, huge progress from my perspective.

Are you seeing anything in the firewall log?


ya, there is nothing in the logs of either pfsense or the nextcloud vm. i opened ports 80 and 443 and am still getting the closed:syn_sent state in pfsense.
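When it stalls in syn_sent like that, watching both ends of the tunnel with tcpdump usually shows where the SYN (or the returning SYN/ACK) disappears. A sketch, assuming the tunnel interface is tun0 and the nextcloud box is 10.1.1.1 (substitute your actual names and addresses):

```shell
# On the VPS: do the forwarded SYNs actually go into the tunnel?
tcpdump -ni tun0 'tcp port 80 or tcp port 443'

# On the nextcloud vm: does the SYN arrive, and does a reply leave?
tcpdump -ni any 'host 10.1.1.1 and (tcp port 80 or tcp port 443)'
```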

Can you post a screenshot of your outbound nat page?


there is one other strange thing, i can ping anything from the nextcloud vm except the ip at the end of the vpn tunnel.

Those look wrong to me, but I'm not sure what's in that nat alias. When I get a minute I'll try to explain how those should be set up.


so the nat, as you can see, is just the groups of nets that usually get put into the outgoing nat individually for each local net. i have 3 external nets (2 vpns and wan) and 3 internal nets. i added the outgoing nats for the internal lans so traffic would move between them, along with the rules to allow traffic in on each interface such as the lan. see the next pic, which is the lan rule setup; the others are pretty similar.

You only need outbound nat rules for your gateway interfaces. I would just copy the two default ones for each of your gateway interfaces. The alias will probably work for the source, but it will probably not work properly on the VPN gateway, as it would match a source address within the same interface. Just to be sure I would specify each local subnet: a copy of those two default rules for each gateway interface, and a copy of all of those for each local subnet.

Traffic between interfaces does not require nat so you don't need any outbound nat rules for local traffic, only for internet traffic and only if that gateway is using nat.

As for not being able to ping the vps from the nextcloud server, make sure there is a rule that would allow that using the default gateway. This is why having the local alias is useful: traffic between local networks needs to use the default gateway, so you need a second rule if you want to send internet traffic over a specific gateway.


thanks again for helping me out with this. I added a rule to net88 and icmp works just like you said it would.


Let me know how you go getting nextcloud to work properly


So i finally got it to connect. i went back through everything and restarted stuff; I have no idea why it decided to start working, but it did. The instructions you gave were spot on, so i must have done something wrong, because like the 10th time redoing things it finally worked. go figure. Thanks for all your help.

edit for anyone else along the way:

pfsense does not like allowing redirects from inside the local networks to an external fqdn that forwards back and terminates at a server inside the local network or a subnet. the closed:syn_sent issue was pfsense preventing my local address/machine doing the inside-to-outside-back-to-inside loop; i think it's called asymmetric routing (pfsense also calls this scenario NAT reflection, or hairpin nat). the solution i finally found for the local network is a simple nat rule, on whichever interface handles your exiting traffic, with an alias set up for all the fqdns that are being hosted inside the local nets/subnets. so the information above is awesome for any external traffic coming through, but you have to set up a separate handler for internal traffic.