Advice on authentication- and authorization-based access to home services

I have a bare-metal OPNsense setup acting as my firewall, with many self-hosted services running on separate devices and Caddy as the reverse proxy. Nextcloud, Vaultwarden, Gitea, Matrix, etc. are used by myself and also by my friends and family. Apart from these, there are other services like an internal wiki, SearX, and a few Guacamole instances. For now, each service is kept isolated and each member is manually granted access.

Hardware setup:
ISP → Modem (Bridge) → Opnsense → L2 Switch |–> Proxmox1, Proxmox2

As I see it now, managing access for more than 6 members plus myself (one admin role with full access, internal only, for management; two user roles with restrictions: one for internal use and one for normal access via VPN while travelling) is getting tedious. I am thinking of setting up something like LDAP (FreeIPA, FreeRADIUS) plus SSO, so that each member can be authenticated and then authorized for specific services only.

I have been reading up on such a setup and I am confused. How do all these pieces come into play, and does it even make sense to build a setup like this?

Below are my initial questions, and I hope this forum can clear my doubts:

  1. If I set up FreeIPA on a VM on one of my Proxmox hosts, sitting behind the OPNsense firewall, how is authentication carried out? If my understanding is correct, I can set up FreeIPA and create a server entry in the OPNsense settings with the FreeIPA IP and port, so OPNsense can query FreeIPA when a user attempts to connect and grant access after authentication?

  2. How does authorization of each user to specific services work? Will OPNsense be able to keep track of which user is trying to access which service and restrict access accordingly?

  3. If I decide to use MFA/SSO like Authentik or Authelia, wouldn't this mean letting some users go beyond my firewall to talk to Authentik/Authelia, kind of like punching a hole through the firewall?

Please correct me if I have misunderstood the concepts, and suggest suitable solutions I could implement. Thanks.

If everything's already going through Caddy for the certs, and you only need accounts for web apps, use authp: https://authp.github.io/

FreeIPA, Samba AD, etc. primarily target the use case of sharing resources that exist on a bunch of networked computers, which then enforce resource ownership individually, e.g. when you want to ensure a user's UID is the same on every machine.

For web stuff, you basically need 3 things:

  1. a single sign-on login portal

  2. a single place where you can say “these people fall into these roles”, which would be groups in traditional computing, except potentially lighter weight

  3. access policies: “site x, or URL path y, is accessible by user a, or role b”


Theoretically, if you have a single Caddy and only 6 people, then because everything is HTTPS, you could have one snippet with a person's password per user, a few combined snippets for roles, and then reference these snippets in sites… but to a user's browser these would all be unconnected, and they'd be asked to log in to each one separately. That is where SSO comes in, and while you're setting up SSO, why not 2FA too.
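A minimal sketch of that snippet idea (hostnames, upstream addresses and the hash are placeholders; the directive is spelled basicauth on Caddy versions before 2.8):

```caddyfile
# Hash generated with: caddy hash-password --plaintext 'their-password'
(auth_alice) {
	basic_auth {
		alice $2a$14$REPLACE_WITH_REAL_BCRYPT_HASH
	}
}

nextcloud.example.com {
	import auth_alice
	reverse_proxy 10.0.0.10:8080
}

gitea.example.com {
	import auth_alice
	reverse_proxy 10.0.0.11:3000
}
```

A "role" would just be another snippet listing several users' hashes, imported by every site that role may reach. The browser still prompts per site, which is exactly the SSO gap described above.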


Thanks for your reply and for pointing me to authp. I see the developer of authp posted a video a couple of days ago related to authorisation. Will look into it.

Should all the web services be considered as a bunch of networked computers, with access restrictions enforced based on ownership and role?

For example, person A can have access only to Nextcloud and Gitea, but no access to any of the Guacamole application/OS instances. On the other hand, person B can have root access to one of the Guacamole instances, but not to anything else.

In this case, SSO ensures the user logs in once per session but can access only those services for which they have permission?

At a quick look, I see authp has RBAC, which can be used to define roles and enforce access based on them?
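For reference, authp/caddy-security typically wires roles up roughly like this (a sketch based on its docs; hostnames, upstreams and the policy name are made-up placeholders, and directive syntax can differ between plugin versions):

```caddyfile
{
	security {
		authorization policy gitea_policy {
			set auth url https://auth.example.com
			# only these roles may pass
			allow roles authp/admin authp/user
		}
	}
}

gitea.example.com {
	authorize with gitea_policy
	reverse_proxy 10.0.0.30:3000
}
```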

I think you're on the right track with using a centralized authentication solution like LDAP, RADIUS, FreeIPA, etc. I've used LDAP before for UNIX services like Samba, Unix shell accounts, SSH, etc.

But maybe you could consider two other options as well: SSH and a reverse proxy.
Each has its advantages and disadvantages. Here is a list of things to consider with these solutions:

  • Use an SSH proxy server and SSH port forwarding
    • Externally, only the SSH server is reachable (very secure)
    • Authentication is just shell accounts (easy and familiar)
    • Your clients need to use an SSH client as well for access (usually not a problem if it's your device)
    • Should work with all TCP services (port forwarding)
  • Use an authenticating reverse proxy (like nginx + basic auth)
    • Very easy to set up
    • Accounts stored in a single file, managed with htpasswd
    • Only works for web services
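A minimal sketch of the htpasswd approach (file name and user are made up; htpasswd from apache2-utils is the usual tool, openssl is used here with a fixed salt only so the output is reproducible):

```shell
# Create an htpasswd-style credentials file for user "alice".
printf 'alice:%s\n' "$(openssl passwd -apr1 -salt xyzsalt1 secretpw)" > htpasswd.demo

# nginx would then reference the file in a server block:
#   auth_basic           "Restricted";
#   auth_basic_user_file /etc/nginx/.htpasswd;

cat htpasswd.demo
```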

These points apply to both solutions, because they're kind of similar:

  • They probably don't require any additional software (you probably already have an external HTTP server and an external SSH server).
  • They assume that your services are only accessible via your authentication mechanism.
  • They assume a certain level of trust from the client and shouldn't be used to host services publicly, but are probably perfectly fine for home use, and have the advantage of a very low external attack surface (only the SSH server or web server is externally reachable; this would even protect you against pre-authentication vulnerabilities in your services and web apps).
  • They don't have fine-grained role control (basic roles could be implemented, but you'd probably be better off with another solution).
  • They don't assume any support from the services you want to run (e.g. something like LDAP or authp would need to be supported by all your services).

I've also combined these approaches, with an authenticating reverse proxy on my VPS that proxies through an SSH connection to my home NAS. This allows me to access my home NAS securely, with authentication and HTTPS via my regular domain, and an SSH tunnel as the backend to my home NAS, without requiring any open ports on my home network and without any kind of DDNS! My NAS just has a systemd service that connects to my VPS and forwards a port, which is accessed externally via an nginx reverse proxy with basic authentication.
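That kind of persistent reverse tunnel is often run as a systemd unit roughly like this (a sketch; hostnames, ports and the user are placeholders, not my actual config):

```ini
# /etc/systemd/system/reverse-tunnel.service (hypothetical)
[Unit]
Description=Reverse SSH tunnel from NAS to VPS
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: expose NAS port 443 on the VPS loopback
ExecStart=/usr/bin/ssh -N \
    -o ServerAliveInterval=30 \
    -o ExitOnForwardFailure=yes \
    -R 127.0.0.1:8443:localhost:443 \
    tunnel@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

On the VPS, the nginx reverse proxy with basic auth then points at 127.0.0.1:8443.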

EDIT: At some point I want to make a post explaining how this setup works in detail.
I think it's a really nice solution for hosting things like a NAS semi-publicly (publicly reachable for convenience, but only to be used by authenticated, trusted users). All it requires is the cheapest VPS capable of hosting a web server and an SSH server. And it traverses any kind of NAT, dynamically changing IPs, strict firewalls, etc. like they're not even there.

Thanks for your detailed and interesting information. I have been thinking about SSH-based access for some time too, and most of what you've mentioned seems to be offered by Teleport, which covers web applications, desktops, servers, etc. Since Teleport handles authentication and SSO and provides RBAC for desktop access, it could fit my use case.

By this, I believe you mean the connection is initiated from inside your home network to your VPS, so no open ports are required.

It would be awesome to have a post explaining how this setup works.

While writing this reply, I came across the topic of forward/reverse SSH tunnels with the -L and -R flags, and also dynamic port forwarding. I didn't know about this before. If a reverse SSH tunnel enables access to a web service while the tunnel is open, that would add to the list of advantages for SSH.
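For reference, the three flags look roughly like this (all hosts and ports are made-up examples; none of these commands can run without a matching remote server):

```sh
# -L (forward): local port 8080 tunnels to an internal service,
# as reachable from the proxy host
ssh -L 8080:nextcloud.lan:443 user@proxy.example.com

# -R (reverse): expose local port 3000 on the remote host's port 9000,
# so a service behind NAT becomes reachable from the remote side
ssh -R 9000:localhost:3000 user@vps.example.com

# -D (dynamic): a local SOCKS5 proxy on port 1080 that forwards
# arbitrary TCP connections through the remote host
ssh -D 1080 user@proxy.example.com
```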


Teleport looks very interesting. I’ll definitely give it a try soon.