Wireguard -- Doable with IPv6?

Wendell, what would be your recommendation for setting this up in a purely IPv6 environment? I've not managed to figure out the iptables for it. Right now the only solution is to have a WireGuard peer for every device instead of a managed route in OPNsense.

How do you set up IPv6 routing rules for this properly? It's been difficult for me, and it's why I didn't cover it in my own thread, Infrastructure Series -- Wireguard Site to Site Tunnel. I know it might have something to do with NPTv6.

Obviously the progression from public v6 to private v6 isn't going to be address translation but rather prefix translation. At least that's our logical jump, but I'm lost on the topic.

We would need this type of thing, but done on interface up/down: https://www.linuxtopia.org/online_books/network_administration_guides/Linux+IPv6-HOWTO/Linux_IPv6_HowTo_x1133.html
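As a rough sketch of what I mean (made-up prefix, and it assumes the same prefix is also listed in that peer's AllowedIPs), the wg-quick hooks would be something like:

# Sketch only: add/remove the remote prefix when the tunnel comes up/down
PostUp   = ip -6 route add 2001:db8:dead:beef::/64 dev %i
PostDown = ip -6 route del 2001:db8:dead:beef::/64 dev %i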

This is difficult to think through right now, but I'd like to figure it out eventually. Routing forward isn't too difficult, but routing back feels impossible. My devices received the echo request but couldn't route the reply back to the host. Neither of us routes internet traffic through that tunnel, and we are both facing the same issue due to selective routing.

The issue @Novasty and I hit was routing. We couldn't get proper addressing in. Maybe you have some nuggets of knowledge on this?

The place I start is the basic concept of the route:

Public v6 (SLAAC on the Linode) → private Linode WG peer → private home lab peer → IPv6 addresses of home lab equipment

1 Like

It's kind of a pain. You have to have some kind of address translation or port forwarding, I think?

I only ever got it to work by using internal IPv6 and using socat to forward ports from the public IPv6 on the Linode to the internal IPv6.
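Something along these lines (illustrative address and port, not my exact setup):

# Accept on the Linode's public IPv6 and relay to the internal address over the tunnel
socat TCP6-LISTEN:443,fork,reuseaddr 'TCP6:[fd00:dead:beef::5]:443'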

Public != Static

But, if you want to: ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE should just work, same as in IPv4.

I haven’t tested whether neighbor discovery works through Wireguard… unlike DHCP/ARP there shouldn’t be anything fundamentally preventing it from going over a wireguard link, since ND works over IP.

See, that's just it. There is an issue: I've tried the above command, and what happens is that the Linode and my peers at home push the connection between their public ends, outside the tunnel. Unlike in v4, where both endpoints are private and the route can't escape the tunnel like that.

Does that make my issue clearer? I.e., if I drop the v4 endpoints in WireGuard, the tunnel stops working completely; everything routes externally.

What ends up happening is something comes into the Linode and sees it needs to be routed through (for all intents and purposes these are made-up addresses). It goes to the WG peer dead:beef:dead:beef::3182 but does not know it needs to get to the other side, beef:dead:beef:dead::123, and fails right there instead of seeing that peer as a gateway to the rest of the routable IPv6 addresses in the home lab subnet. Unroutable. Unless of course you force the route, but that's prone to breakages. Lots of breakages.
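For reference, "forcing the route" on the Linode looks roughly like this, using the same made-up addresses. WireGuard will only forward it if the lab prefix is also in that peer's AllowedIPs, which is where it keeps falling apart:

# Point the whole home lab prefix at the WG peer
ip -6 route add beef:dead:beef:dead::/64 dev wg0

# The peer entry has to carry that prefix too, or cryptokey routing drops it
[Peer]
PublicKey  = <home lab peer key>
AllowedIPs = dead:beef:dead:beef::3182/128, beef:dead:beef:dead::/64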

There are two things that come to mind: OPNsense has (finally) fixed their Router Advertisement daemon, and then there is prefix translation. At the very least that should make a route more obvious to the internal devices.

However, this is an area of networking completely new to me, so I'm unsure if it will work. I'm brainstorming this and my questions aren't quite framed perfectly. There has to be a way, and I would love to make it as turnkey as possible (mainly to help others following the same path).

It looks like I had a stroke of luck.

I can now ping back and forth on OPNsense. It's about properly allowing the right IP ranges, and OPNsense handles that routing fine. Now I have to figure out whether the Linode end is doing the routing properly.
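To check the Linode side I plan on leaning on plain old ip and wg: which route does it pick for the tunnel address, and do the lab ranges show up under the peer's allowed IPs?

ip -6 route get fd31:ea5a:3182::5
wg show wg0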

~> ping fd31:ea5a:3182::5
PING fd31:ea5a:3182::5(fd31:ea5a:3182::5) 56 data bytes
64 bytes from fd31:ea5a:3182::5: icmp_seq=1 ttl=64 time=58.6 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=2 ttl=64 time=64.1 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=3 ttl=64 time=59.1 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=4 ttl=64 time=57.0 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=5 ttl=64 time=57.5 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=6 ttl=64 time=56.8 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=7 ttl=64 time=56.8 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=8 ttl=64 time=60.3 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=9 ttl=64 time=59.4 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=10 ttl=64 time=64.0 ms
64 bytes from fd31:ea5a:3182::5: icmp_seq=11 ttl=64 time=56.8 ms
^C
--- fd31:ea5a:3182::5 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10013ms
rtt min/avg/max/mdev = 56.791/59.114/64.054/2.571 ms

Baby steps

I'll see whether internal resources can be accessed now.

EDIT: Crap, I can't ping back. Ugh, back to square one.

So folks, despite trying to hack around it, I'm back to square one.

If it makes me feel any better, I ended up exactly where @wendell left off: forcing routes to route correctly.

Now there is one option I have yet to try: using NPTv6 and Managed mode in my RAs to advertise the routes to take.

I'm already in stateful DHCPv6 mode because of this.

Comcast actually gives me 16 networks to play with at home, which is an absurd number of addresses. Thanks for the 295,147,905,179,352,825,856 addresses.

What Novasty and I both tried, much to our own networks' complete and total breakage:

Adding the following allowed IPs to dance around accidentally looping back through the LAN address of the gateway peer:


# 2601:680:ca80:7731::/71, 2601:680:ca80:7731:200::/73, 2601:680:ca80:7731:280::/74, 2601:680:ca80:7731:2c0::/75, 2601:680:ca80:7731:2e0::/82, 2601:680:ca80:7731:2e0:4000::/83, 2601:680:ca80:7731:2e0:6000::/86, 2601:680:ca80:7731:2e0:6400::/87, 2601:680:ca80:7731:2e0:6600::/88, 2601:680:ca80:7731:2e0:6700::/89, 2601:680:ca80:7731:2e0:6780::/90, 2601:680:ca80:7731:2e0:67c0::/91, 2601:680:ca80:7731:2e0:67e0::/92, 2601:680:ca80:7731:2e0:67f0::/93, 2601:680:ca80:7731:2e0:67f8::/94, 2601:680:ca80:7731:2e0:67fc::/95, 2601:680:ca80:7731:2e0:67fe::/96, 2601:680:ca80:7731:2e0:67ff::/97, 2601:680:ca80:7731:2e0:67ff:8000:0/98, 2601:680:ca80:7731:2e0:67ff:c000:0/99, 2601:680:ca80:7731:2e0:67ff:e000:0/100, 2601:680:ca80:7731:2e0:67ff:f000:0/101, 2601:680:ca80:7731:2e0:67ff:f800:0/102, 2601:680:ca80:7731:2e0:67ff:fc00:0/103, 2601:680:ca80:7731:2e0:67ff:fe00:0/107, 2601:680:ca80:7731:2e0:67ff:fe20:0/112, 2601:680:ca80:7731:2e0:67ff:fe21:0/115, 2601:680:ca80:7731:2e0:67ff:fe21:2000/118, 2601:680:ca80:7731:2e0:67ff:fe21:2400/121, 2601:680:ca80:7731:2e0:67ff:fe21:2480/122, 2601:680:ca80:7731:2e0:67ff:fe21:24c0/123, 2601:680:ca80:7731:2e0:67ff:fe21:24e0/124, 2601:680:ca80:7731:2e0:67ff:fe21:24f0/125, 2601:680:ca80:7731:2e0:67ff:fe21:24f9/128, 2601:680:ca80:7731:2e0:67ff:fe21:24fa/127, 2601:680:ca80:7731:2e0:67ff:fe21:24fc/126, 2601:680:ca80:7731:2e0:67ff:fe21:2500/120, 2601:680:ca80:7731:2e0:67ff:fe21:2600/119, 2601:680:ca80:7731:2e0:67ff:fe21:2800/117, 2601:680:ca80:7731:2e0:67ff:fe21:3000/116, 2601:680:ca80:7731:2e0:67ff:fe21:4000/114, 2601:680:ca80:7731:2e0:67ff:fe21:8000/113, 2601:680:ca80:7731:2e0:67ff:fe22:0/111, 2601:680:ca80:7731:2e0:67ff:fe24:0/110, 2601:680:ca80:7731:2e0:67ff:fe28:0/109, 2601:680:ca80:7731:2e0:67ff:fe30:0/108, 2601:680:ca80:7731:2e0:67ff:fe40:0/106, 2601:680:ca80:7731:2e0:67ff:fe80:0/105, 2601:680:ca80:7731:2e0:67ff:ff00:0/104, 2601:680:ca80:7731:2e0:6800::/85, 2601:680:ca80:7731:2e0:7000::/84, 2601:680:ca80:7731:2e0:8000::/81, 2601:680:ca80:7731:2e1::/80, 2601:680:ca80:7731:2e2::/79, 2601:680:ca80:7731:2e4::/78, 2601:680:ca80:7731:2e8::/77, 2601:680:ca80:7731:2f0::/76, 2601:680:ca80:7731:300::/72, 2601:680:ca80:7731:400::/70, 2601:680:ca80:7731:800::/69, 2601:680:ca80:7731:1000::/68, 2601:680:ca80:7731:2000::/67, 2601:680:ca80:7731:4000::/66, 2601:680:ca80:7731:8000::/65


IP settings were (to begin with):

AllowedIPs   = 10.31.84.5/32, fd31:ea5a:3182::5/128, 10.31.82.0/24
Endpoint = [2601:680:ca80:7731:2e0:67ff:fe21:24f8]:51820

Status: Routable but routing gets locked in strange loops. Reverted settings due to failures

What's left over?

Potentially using NPT to handle the shortcomings of routing, similar to how NAT does in IPv4.
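For the Linux side, my reading is that RFC 6296-style prefix translation is the SNPT/DNPT pair in ip6tables (mangle table). Something like this, completely untested and with 2001:db8:1:2::/64 as a stand-in global prefix; on OPNsense it would be the NPTv6 GUI instead:

# Outbound: rewrite the internal ULA prefix to the global prefix
ip6tables -t mangle -A POSTROUTING -s fd31:ea5a:3182::/64 -o eth0 -j SNPT --src-pfx fd31:ea5a:3182::/64 --dst-pfx 2001:db8:1:2::/64

# Inbound: map the global prefix back to the ULA prefix
ip6tables -t mangle -A PREROUTING -d 2001:db8:1:2::/64 -i eth0 -j DNPT --src-pfx 2001:db8:1:2::/64 --dst-pfx fd31:ea5a:3182::/64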

Please, if anyone has ideas, chime in. I'm really open to them at this point.

I'm actually interested, as I'd like to see how it would be to run a pure IPv6 network and maybe make a guide.

I don't have a way to test it though; I can only do it locally between hosts in the same subnet, and maybe at most between hosts on different VLANs. I don't think I even have IPv6 enabled in my current home lab.

Have you tried:
server config:

[Interface]
Address = 2001:8b0:2c1:xxx::1/64
ListenPort = 51820
PrivateKey =

#nexus
[Peer]
PublicKey = 
AllowedIPs = 2001:8b0:2c1:xxx::2/128

and client config

[Interface]
Address = 2001:8b0:2c1:xxx::2/128
PrivateKey = 


[Peer]
PublicKey = 
AllowedIPs = ::0/0
Endpoint = wg.utangard.net:51820

I have seen people online doing

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i %i -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i %i -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

Obviously, I don't know what option you would be using in OPNsense, because there's no iptables there (duh).

Also, if you’re doing routing on the Linux peer side of things, remember to edit /etc/sysctl.conf

net.ipv6.conf.all.forwarding = 1

If you don’t want to reboot your server

echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

Or you may want to replace /all/ with /wg0/ or the name of your interface.

I just now found that you can debug wireguard by:

echo 'module wireguard +p' | sudo tee /sys/kernel/debug/dynamic_debug/control

And you can disable it by

echo 'module wireguard -p' | sudo tee /sys/kernel/debug/dynamic_debug/control
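The messages land in the kernel log, so either of these should show them:

sudo dmesg -wT | grep -i wireguard
sudo journalctl -kf | grep -i wireguard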

I also found this interesting article, which uses link-local IPs in the tunnel configs, creates a DHCPv6 interface on top of the wg interface, and hands the client on the other end an IPv6 address. Really interesting:
http://www.makikiweb.com/ipv6/wireguard_on_openwrt.html

In the article above, the hosts on the far end of the tunnel have no GUA and there is no return route back to the router (server).


What’s your plan? Do you only want them to communicate on certain addresses (split tunnel) or do you want to push all data through the tunnel like a typical VPN?

1 Like


Yes. Failed. Assume all the standard internet tips haven't worked, because Novasty and I have tried them. We are stuck at the point I suspect Wendell was at: dealing with the fact that there is both a public and a private route to the proxy, and it expects the response on the wrong route.


The issue isn't so much the configuration as it is routing. On IPv4 we had the saving grace of NAT solving a lot of those problems, but you get no such crutch on IPv6, except NPT, and I have no idea how that works.

Split tunnel

Internet traffic will exit normally

The WG tunnel's sole purpose is to ferry data securely from the edge Linode to the home lab server and back out.
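Concretely, that means the AllowedIPs on the home side only ever carry the internal ranges, never ::/0. A rough sketch with my prefixes (key and endpoint omitted):

[Peer]
# Linode edge, as seen from the home lab side
PublicKey  = <linode public key>
AllowedIPs = 10.31.84.0/24, fd31:ea5a:3182::/64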

Make sense?

The only thing I can see doing besides NPT is putting all devices on the tunnel in a sort of SIPRNet-style mesh layout, handling all routing internally in the tunnel, and using the public network only for internet traversal. But I really don't want to do that; that's a lot of peers and endpoints and setup.

1 Like

Once I read that article with the missing GUA and the missing route back to the server, I guessed you were encountering a routing issue.

I'll have to cook something up myself and see if I can test it somewhere. Maybe even add OSPF and see if we can have both endpoints know the networks behind each other and finally route traffic through the tunnel.

But I'm going to sleep now; hopefully I can launch a test environment somewhere, maybe between my phone on mobile data and another device on WiFi.

Awesome. Dump the results here if you could. Let's make this a community solution!

The priority is getting it to work in the least hacky way possible. It needs to realistically be maintainable long term.

1 Like

FYI, heed what I've already done above when you start setting something up.

Try not to play Ring around the IPv6 subnets with a pocket full of CIDRs


FYI for my setup:

Full /60 issued on the WAN, delegated however I decide. Behind CG-NAT on the IPv4 WAN
Dedicated static addresses on Linode (IPv4 Public + IPv6 SLAAC)


Relevant Subnets:

Home Lab:
10.31.82.0/24
2601:680:ca80:7731::0/64 (Prefix ID: 0x01)

Wireguard:
10.31.84.0/24
fd31:ea5a:3182::0/64 (Private ULA, Prefix ID: 0x02)

Docker Home:
(Same as Home Lab host) (Prefix ID: 0x01)

Docker Edge:
10.31.85.0/24
fd31:ea5a:3183::0/64 (Private ULA, Prefix ID: 0x03)

Government Devices Isolated Subnet:
10.31.83.0/24
2601:680:ca80:7732::0/64 (Prefix ID: 0x02)
IDS/IPS Enabled
Firewall and VLAN isolated
Scheduled access controls coming soon ™

Name Server IPs

Recursive Hardened Public Resolver:
192.53.120.164/32
2600:3c04::f03c:92ff:fec6:2030/128
Ports: 53, 853, 443 (v1 v2 QUIC)
Hardened DNSSEC Enabled

Authoritative BIND9 DNS Server 1:
23.239.20.9/32
2600:3c01::f03c:92ff:fece:5fc0/128
Hardened DNSSEC Enabled

Authoritative BIND9 DNS Server 2:
173.255.255.89/32
2600:3c01::f03c:92ff:fe9e:3ef0/128
Hardened DNSSEC Enabled

DHCP4&6 @ Home: Stateful, authoritative, Managed Router Advertisement Daemon Mode.
DHCP4&6 @ Edge: Stateless, authoritative, Assisted Router Advertisement Daemon Mode.

This data is public with minimal effort, so I figured you ought to know it to decipher what I've been talking about above, especially the DHCP and DNS configuration, @Biky.

Man, I was so tired last night, I did not even realize that I can’t flipping run OSPF on a mobile phone, reeeeeee. FML, I need a home lab, for God’s sake!

Let me think this through, before I do any rash decisions. I’ll put my thoughts in here if you don’t mind, sorry for going wildly off-topic.

Let me try enabling IPv6 on my home lab in Europe.

OK, three hours later and I couldn't figure it out, likely because I'm behind a NAT and can't get more IPv6 prefixes.

One more hour later, I locked myself out of my own network. I should probably have allowed SSH from the internet (I don't have password authentication, only an SSH key). Oh wait, for three hours I tried to get into my ISP's router, but I cannot find the password for it (and back when I was in Europe, I couldn't reset it to factory defaults by holding the reset button), so no more port forwarding.

So out goes the Europe base for a while, until I can get a human in there. It's weird: I didn't even modify IPv4, but for some reason changing the IPv6 settings borked the config or something. I don't even remember exactly what I changed. I hate netplan, but dang it, I love the fact that if you don't hit "OK" or a key after a while, it resets back to the previous config; pfSense doesn't do that.

>Let me think this through, before I do any rash decisions
So much for a fugging plan.

Thankfully I have a backup VPN at another location, but I don’t have access to my home lab.

I urgently need to order a switch, some USB WiFi NICs, some SBCs, and some Cat5e cables. I'm hoping not to go over $500. I need to plan this out for real this time, not like above.

2 Likes

Wendell, am I off base? I think I figured out why this isn't working, after a deeper inspection. IPv6 without NAT requires Neighbor Discovery like anything else would. This isn't working because my server's gateway assumes the ENTIRE /64 is on-link and in the neighborhood, even with the private address space in the middle. From what I understand, that means it assumes every address is directly reachable on the same Ethernet subnet/segment, so the gateway sends a Neighbor Solicitation for the endpoint's MAC address but never receives a reply, because the peer is on neither the same link nor the same network.

Now, I have been reading around, and it seems a few people have gotten this reliably working with IPv6 by running an NDP proxy daemon on each side. It listens for Neighbor Solicitation packets and sends the correct Neighbor Advertisements back through the tunnel. I have no idea how this might be accomplished. I see the option on both ends to do so, but I'm quite afraid to touch it, haha.
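From what I can tell, the two ways of doing it are the kernel's per-address proxy_ndp knob or the ndppd daemon for a whole prefix. Untested on my end; eth0 stands in for whatever the upstream interface is, and the prefix is just my lab one from earlier:

# Kernel built-in: proxy individual addresses on the upstream interface
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2601:680:ca80:7731::100 dev eth0

# Or ndppd (/etc/ndppd.conf), answering solicitations for the whole prefix
proxy eth0 {
    rule 2601:680:ca80:7731::/64 {
        static
    }
}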

I think the IPv4 equivalent of proxy NDP is proxy ARP?

Worth a try?

1 Like

See, what I take from this is that when I do this with IPv4, my firewall (and of course the Linode with firewall-cmd) understands that I want to NAT-masquerade from one internal WireGuard end to the other WireGuard end. When traffic reaches that point, my firewall understands: oh, it came from there, it wants to go to my internal subnet, and routes it.

My presumption from all of this is that I need IPv6 to do the same thing, i.e. I need the Linode end to masquerade the prefix, understand that it needs to go through its unique local v6 address in the tunnel to the unique local v6 address on the other side, and then I have to make my firewall understand: oh hey, it came in on that unique local v6 address, it wants to go to this prefix in my network.
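Translated into raw Linux commands on the Linode end, I assume that would look something like this (untested; eth0/wg0 are placeholders, and firewall-cmd would wrap it differently):

# Anything headed for the lab prefix goes into the tunnel...
ip -6 route add 2601:680:ca80:7731::/64 dev wg0

# ...and gets masqueraded to the Linode's tunnel address, so the reply comes back the same way
ip6tables -t nat -A POSTROUTING -d 2601:680:ca80:7731::/64 -o wg0 -j MASQUERADE
ip6tables -A FORWARD -i eth0 -o wg0 -j ACCEPT
ip6tables -A FORWARD -i wg0 -o eth0 -j ACCEPT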

Is that the crux of it?

Because here's the real kicker: my OPNsense firewall can already ping the other end via its unique local address inside the tunnel. It's ONLY the other devices on my LAN prefix that cannot.

1 Like