Setting up bridged connections for clients on a container running openvpn

Ok - I'm throwing in the towel for tonight - If that port won't open despite all the open the damn port commands I threw at it - it ain't opening tonight. Tomorrow might be a brighter day for my vpn.

If you're seeing activity in your log when you try to connect to the server, then the port has to be open.


That's from the client - I'm thinking this one is a loser - I'll have to give it another shot later. I really am surprised that it didn't go up.....

And you're not seeing anything on the server log?


Literally, there has been nothing in there for the last two hours. The service appears to be up: grep -w 1194 /etc/services returns openvpn bound to that port twice. Yet nothing in the logs.
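(Worth noting for anyone following along: /etc/services only maps service names to port numbers; the grep doesn't actually prove the daemon is listening — something like ss -lunp | grep 1194 would show a live listener. A quick sketch of what that grep is matching, simulated on a sample of the file:)

```shell
# /etc/services maps names to ports; it does not show running daemons.
# grep -w matches "1194" as a whole word, so both protocol lines hit.
cat <<'EOF' | grep -w 1194
openvpn         1194/tcp
openvpn         1194/udp
EOF
# prints both lines
```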

I think there's a problem that would be easier fixed by rolling back my snapshot and trying again or at worst rolling back to fresh OS install and trying again.

This is really bizarre - I've never had this much trouble setting up openvpn.

Got this bit of info:

ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)

Issue occurs when trying to start openvpn service on the server.

My container does not have this directory - looking into why not.
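A quick sanity check from inside the container (just a sketch) — on a working system /dev/net/tun exists as a character device, major 10, minor 200:

```shell
# Check whether the TUN device node exists and is a character device.
if [ -c /dev/net/tun ]; then
    echo "tun device present"
else
    echo "tun device missing"
fi
```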

lsmod | grep tun returns nothing; modprobe tun and then rerunning it still returns nothing. This is kernel-level stuff, and since it's a container I'm wondering if this has to do with the Proxmox kernel it's running on. I've never had this issue before. Lastly, it could be a CentOS 7 minimal issue.

Considering scrapping this as a container for a full VM - possibly Debian 8

I've seen references that you can do a mkdir /dev/net/ followed by a modprobe tun, and the devices should show up. But I like the idea of either restoring the snapshot, or moving to a full re-OS (or moving to a full VM in this case). Especially when shit with /dev gets weird.


Quick update - so I think my config files are tight and correct. It appears the reason I can't connect to my VPN is because (drum roll...) the service won't start. The status shows that starting failed. The reason is because it can't find the /dev/net/tun blah blah blah. Ok cool, so I checked to see if the "tun" module was loaded on the kernel and it's not. I cannot get it to load either. (from the container). So - please see the following:

I think the issue is that the tun module isn't loaded on the host (the hypervisor/Proxmox box). On the Proxmox box, when I lsmod | grep tun, nothing is returned. I think this means it's not loaded in the kernel at the hypervisor level. My guess is that this trickles down to the container.

When I try modprobe tun in the container (the OpenVPN server) and then lsmod | grep tun, nothing is returned either.

So, I'm wondering the following (and I've posted this in the small Linux problems thread)... what's best practice here? Is it a good idea to start enabling modules at the hypervisor level for one-off containers, or is that dangerous/wasteful of resources? Would it be better to launch a full VM in that case, with its own kernel?

Overall, I think I can load the module on the hypervisor's kernel and then troubleshoot the container from there (because they share the same kernel, if my chops are up on containers). I'm not totally sure, since I'm new to containers, but that's my hunch.

Any thoughts?

Oh yeah, that's right. Containers. Ugh. I'd say yes, go ahead and try and install the module on Proxmox and see how that affects your container. Maybe even restart your container?
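For what it's worth, the usual recipe I've seen for this (an assumption on my part — the container ID and config path below are hypothetical, and the exact keys depend on the LXC version Proxmox ships) is to load the module on the host and then pass the device through in the container's config:

```
# On the Proxmox host:
#   modprobe tun               # load the module now
#   echo tun >> /etc/modules   # Debian convention: load it at every boot

# Then in the container's config (e.g. /etc/pve/lxc/100.conf, where
# "100" is a hypothetical container ID), allow and bind the device:
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```

Restarting the container afterwards should make /dev/net/tun show up inside it.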

I'm going to give it a shot after lunch, but I was curious what your thoughts are on best practice here. Is it good practice to load a kernel module for a single container (I doubt any others will use tun)? Would spinning up a complete VM be a better idea? Does this expose all the other containers to trouble? Lastly, is it going to eat up a lot of the kernel resources shared by the other containers?

I'm not sure on best practices for containers. Honestly, I used them back in the day, and I always found that due to being tied into the host kernel, they were always far more trouble than they were worth.

If I were to hazard a guess, I would say that in general, loading a module on the host system to satisfy the requirements of a guest goes against best practices. This particular case could be an exception. Here's an excerpt from the tuntap documentation on kernel.org:

https://www.kernel.org/doc/Documentation/networking/tuntap.txt

"2. Configuration
Create device node:
mkdir /dev/net (if it doesn't exist already)
mknod /dev/net/tun c 10 200

Set permissions:
e.g. chmod 0666 /dev/net/tun
There's no harm in allowing the device to be accessible by non-root users,
since CAP_NET_ADMIN is required for creating network devices or for
connecting to network devices which aren't owned by the user in question. ..."

So it seems like they're pretty chill about this module in general. Maybe that and the fact that this is your own environment warrants an exception?

Personally I spin up VMs like they're candy. Are they heavier than containers? Absolutely. But you probably won't notice unless you have very limited resources on your host system.

This is totally in a lab. I guess I ask because one day I may be deploying containers.

It's likely that if the host were for a cluster of OpenVPN containers it wouldn't bother me as much, because multiple systems would be using it. I'm also guessing it's not best practice to have a host running a ton of separate "independent" services anyway.

Anyways - thanks for digging up that info from the site. It was insightful. I'll update as soon as I load the module.

So - getting tun up and configured on Debian is quite the undertaking. It's not just a 'modprobe tun' kind of thing.

So I'm scping the minimal CentOS .iso from my laptop to the server and just going to do a full VM install. I'm done jacking around with this container.

Debian disappointed me in this one for sure.

Thanks though for all the insight! I'm excited to get it routing from its subnet to the lab network and this whole thing set up once and for all :slight_smile:


Had it up.