I want to do some self-hosting. I will put my initial research below in case there is any advice about that, but my main question is whether I could separate one machine from the rest of my network - so that if that machine is infiltrated in some way, it won't have access to the rest of the network?
Setup for self-hosting - advice welcomed!
I have two machines, specs are not really relevant - one is running TrueNAS, the other is running nothing right now but will be an Ubuntu 20.04/22.04 server running Docker/Docker Compose. These two machines have no reason to talk to each other - ever.
On the Ubuntu machine I would set up the following in docker containers.
Domain name (already have; not in a container obviously, just saying)
Cloudflare nameservers (already have and properly linked to the domain)
Nginx proxy manager
WASM applications I want to give out to the world
From my reading, I think this is a reasonable setup - but I note that I am not including a firewall, and I am not sure that I need one if I am opening it to the world? If I do, I can look into that - does anyone have any pointers or recommendations?
But I would like this machine to simply not have access to the rest of the network - it doesn't need it. Obviously, I would still like to SSH into it to do updates and upgrades, make changes, and add new WASM projects.
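As a sketch, that stack might look like the following docker-compose file. The `jc21/nginx-proxy-manager` image and its port layout should be checked against the project's own docs, and `my-wasm-app` is a hypothetical placeholder for your own image:

```yaml
# sketch only - verify image tags and ports against the upstream docs
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest   # Nginx Proxy Manager
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      - "81:81"     # NPM admin UI - consider restricting this to LAN access
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt

  wasm-app:
    image: my-wasm-app:latest   # hypothetical image serving a WASM project
    restart: unless-stopped
    # no published ports: only reachable through the proxy
```

The admin UI on port 81 is the piece you would not want reachable from the internet.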
Make sure only to open ports for services you want to expose. Monitor traffic so you know who is accessing and what they’re looking for.
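As a sketch of that principle, a host firewall that only admits the exposed web services plus LAN-side SSH could look like this in nftables (the 192.168.1.0/24 LAN subnet is an assumption - substitute your own):

```
# /etc/nftables.conf - sketch, adjust subnet and ports to your network
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 80, 443 } accept                    # the exposed web services
    ip saddr 192.168.1.0/24 tcp dport 22 accept     # SSH from LAN only
    icmp type echo-request accept                   # optional: allow ping
  }
}
```

The default-drop policy means anything not explicitly listed (including the container admin ports) is invisible from outside.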
I’ve been hosting my private little website for many years and found that most hits on my webserver are either trying to exploit known vulnerabilities or to connect to bot command servers. These hits originate from a wide spread of international cloud service providers… I assume this is coordinated.
Some kind of firewall would be useful: you can use it for all kinds of grey-listing and logging, for preventing containers from reaching out to the internet when they’re not expected to, for controlling how they talk to each other, and so on.
Put it into a separate VLAN, and set up your firewall outside of it such that you can access the machine, but the host OS and anything else on it can’t reach out towards the internet or towards other networks.
Maybe you want to consider Proxmox or a different VM technology - you get a thinner API between VMs and between VM and host by default (relative to containers, where you can call anything the kernel exposes and anything the kernel can connect you to and it’s up to the kernel to figure out what security semantics apply), which means it’s more secure by default.
I can look at Proxmox - I was thinking about it, as I have seen it around a lot and it looked pretty straightforward to use. I liked the interface and deployment methods I have seen.
When you talk about a firewall, is CrowdSec sufficient?
It appears to be an open-source project - have you ever used something like Waze for GPS, where each user of the service effectively updates the service?
So if Person A gets attacked by IP address Y, that blacklisted IP address is shared and Person B is now protected from it, even though they were never attacked from that address - that is my basic understanding of the system.
Not a firewall, in my understanding of it, but it does appear to perform some of the function you are talking about - and it performs some other functions as well, such as detecting suspicious behaviour.
Depending on how suspicious the switch is, it will drop your traffic the moment that MAC does anything funny. VLAN-wise, the switch should be the one tagging the traffic. On most better brands, you can have pre-tagged traffic lead to the port shutting down completely.
Edit: On Cisco (and probably all the others), switchport port-security mac-address sticky (with a reasonable maximum) would be my default for any connection I do not expect to change during normal operation. I would also have shutdown as the method for handling security violations on that port. That way, someone has to “touch the switch” to get the offending machine back on the network.
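For reference, the setup described above would look roughly like this in IOS (the interface name and maximum are examples):

```
! sketch - interface name and maximum are examples
interface GigabitEthernet0/5
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown
```

With `violation shutdown`, the port goes err-disabled on a violation and stays down until an admin intervenes - the "touch the switch" behaviour.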
Container system/VM host: depends on how good you are at exploiting Meltdown, I guess? Not sure here.
DMZ, depends on how much time the security guy spent on the firewalls surrounding the DMZ, and how integrated the switch/router/firewall environment is.
It could be the Firewall telling the switch to kick the offending server off the network the moment the mail server (common in DMZ) sends anything on an unexpected port.
Best worst case, it is like attacking a perimeter firewall (and the servers behind it) from somewhere on the internet.
Worst worst case for the blue team, someone moved the any-any rule to the top of the firewall and left it there.
In case you do maintenance on any firewall anywhere and the first rule in the rules list permits any connection from anywhere to anywhere else, shut the thing down! Yell fire while running screaming for your backups and take them home that night!
Admittedly, I have limited experience with containers, maybe locking specific containers to IPs is very commonplace? If IP assignment is based on the containers’ metadata, which the container has no control over, that should be safe.
For an enterprise switch, it sounds like you can ban all but one MAC from each physical port? That would let you bind DMZ status or a specific IP to that device, since it cannot change its physical connexion, but it seems like a suspiciously convoluted solution.
If you do not have an enterprise switch/router, however, you are mainly relying on the DMZed machine to secure its MAC randomisation toggle, which somewhat defeats the whole point of the DMZ.
If the computer is not sufficiently compromised to change its MAC, what attacks is the DMZ actually stopping? I always thought DMZ was for the possibility of exposed machines becoming compromised at the OS level; if the OS were not compromised, its own protections would make the DMZ unnecessary.
Maybe something using 802.1x would work, but that too would need a fancier switch/router.
@Joe_Bloggs1 To keep on-topic, perhaps I should ask,
what kind of router do you have?
My impression was that you were likely dealing with a residential, maybe even ISP-provided one.
If your router has support for VLAN routing you could just stick it on its own VLAN, then use a managed switch to force all traffic off that port onto its own VLAN - that way the switch is what’s controlling access.
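On an OpenWrt-based switch, for example, pinning one port to its own VLAN is only a few lines of swconfig-style config (device and port numbers here are examples, and newer DSA-based releases use a different syntax):

```
# /etc/config/network - sketch for swconfig-style OpenWrt switches
config switch_vlan
        option device 'switch0'
        option vlan '10'
        option ports '0t 4'   # CPU port 0 tagged, server untagged on port 4
```

Anything plugged into port 4 then only sees VLAN 10, regardless of what the host tries to tag itself.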
IMO stuff like LXC and Canonical Multipass are the sweet spot between the overhead of virtual machines and the IaC workflow of Docker images (building and customizing containers can be a PITA sometimes).
If I want VMs but those VMs are all Linux hosts, I’d go with LXC or Multipass - Multipass having the advantage of being multi-platform (although on Mac and Windows it runs over a VM).
Consumer routers misappropriated and overloaded the term DMZ over the years.
On consumer routers, DMZ means forward incoming packets (well, … those that aren’t already associated with pre-existing connections) to some IP - it’s merely a “convenience” feature that saves people the hassle of upnp or nat-pmp and might actually make it easier to use certain kinds of VPNs (particularly those not using tcp or udp).
In corporate networking and in certain network security circles, the term DMZ denotes a private-ish network, separate from the users’ internal LAN, that both the external world and internal LAN users can access. DMZ hosts, however, cannot reach into the internal corporate LAN where the higher-value stuff is stored.
The idea is to limit the damage if hosts in DMZ were to become compromised - and act as a security buffer and a safer alternative to running internet facing services on your internal corporate network / same network where all the valuable stuff is.
VLAN is another term that is weird in practice. People talk about a separate DMZ VLAN, but they may or may not actually be referring to the typical 802.1q-encapsulated Ethernet frames, with a header to delineate different networks sharing the same Ethernet interface. Instead, they could just have a dedicated port on the router configured with some IP subnet, which causes all packets arriving there from outside the expected IP range to be ignored (i.e. reverse path filtering, rp_filter).
Usually what they mean when they say VLAN is just “a separate Ethernet segment with some unspecified security implications” - it could be implemented with VLANs or not.
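The rp_filter behaviour mentioned above is just a sysctl on Linux routers; a minimal sketch:

```
# /etc/sysctl.d/90-rpfilter.conf - enable strict reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
```

In strict mode (value 1) the kernel drops packets whose source address would not be routed back out the interface they arrived on.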
Typically yes, docker would preconfigure the interfaces before container code runs.
With containers and networking, typically every container will have its own little Linux networking stack. This means each container gets its own set of interfaces and routing tables and policies and firewalls. Your host as well as each container can and will do their own routing and firewalling, it’s really neat.
Docker (incl. docker compose), in its quest to make using containers easy, helps you set up networking for containers. By default, if you don’t specify anything for networking, you get a bridge that both the host and the containers can talk over unfettered, and then it even makes it easy to “expose ports”, which configures the host networking stack to forward packets arriving at the host to a particular container. This is great for development, because you can spin up a service and show it to a colleague.
For home hosting most people configure at least one custom bridge, attach all the containers to it, and then have one container with exposed ports to act as some kind of proxy to do http authentication.
Containers typically wouldn’t get direct access to a physical network or run a DHCP client; instead, on container startup, Docker passes in a pre-configured interface before anything in the container starts running.
More complicated stuff is possible, too.
You can spin up containers to use/share the host network directly, or with compose you can have multiple containers that share a network stack separate from the host.
You can write your own “network drivers” (docker-ism) that “docker engine” (basically dockerd), can call with whatever parameters are passed to it in order to configure networking, e.g. initializing Tailscale or Wireguard or moving physical interfaces from the host into containers or setting up VXLAN (ethernet over UDP) overlay networks - which is almost equivalent to having these giant private bridges that span multiple machines… and then one has to deal with distributed IP numbering.
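The bridge-plus-proxy pattern for home hosting can be sketched in compose; marking the shared network `internal: true` also covers the earlier point about stopping containers from reaching out to the internet when they’re not expected to (service and network names here are made up):

```yaml
# sketch: only the proxy is reachable from outside; the backend
# network is internal, so app containers have no route to the internet
services:
  proxy:
    image: nginx:alpine         # stand-in for whatever proxy you run
    ports:
      - "80:80"
      - "443:443"
    networks: [edge, backend]

  app:
    image: my-wasm-app:latest   # hypothetical app image, no published ports
    networks: [backend]

networks:
  edge:
  backend:
    internal: true              # no outbound route to the internet
```

The proxy straddles both networks, so it can still fetch certificates and reach the apps, while the apps can only talk to the proxy.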
Think of using 802.1x as VPN-ing into a LAN. It’s a way to identify what’s being plugged into a network port (or connecting to a WiFi access point), and it’s a way of securing the traffic going over the cable. It needs a fancy switch because the switch needs a bit of software that can handle reconfiguring the ports and some of the crypto.
If you’re not worried about people physically plugging random stuff into your switch, 802.1x is not helpful.
If your host gets compromised while using 802.1x, 802.1x won’t help secure the rest of the network.
Fancy switches have this thing they sometimes call “port security”, which is a combination of simple per-port MAC address filtering and some other features; usually it’s used to stop smarty-pants employees from plugging in random devices without the network admins knowing, potentially breaking the network and causing more work for the humans that maintain it. It stops the typical Jeremy from Marketing, but it doesn’t help against sophisticated attackers.
Really? I have never actually tested it; I assumed the DMZ-ed device was also isolated from the rest of the network - consumer routers already do that for “guest” WiFi/WLAN.
That seems like a good protection to have, even if only against someone accidentally plugging the cable for port 3 into port 2; in my mind, authenticating a device before assigning it an IP sounds like an excellent idea.
Anyway, 802.1x is too complex for what @Joe_Bloggs1 has in mind, apologies for the digression.
Exactly. That is what I was trying to say, albeit verbosely.
I have mainly dealt with consumer equipment, and read about OpenWrt/DD-WRT, so I was not aware such port-specific locking features were available on just the next tier of hardware up.
Maybe I missed something, but I had the impression that @Joe_Bloggs1 might only have a consumer router. Would OpenWRT on such a device let you implement the kind of protections we are talking about?
For router mine is consumer grade, it is the AX89X from ASUS.
It does have a specific set-up page for DMZ, but I think the concerns raised earlier about consumer-grade equipment are borne out in the documentation for this router: it appears to open all ports to that machine, not simply 443/80 or whatever ports you designate, and doesn’t appear to isolate the machine in any way. If all I want to do is web hosting, this appears to be opening up a can of worms that I don’t really want to open.
I figure port forwarding for 80/443 would probably be a better solution than DMZing the single machine with this router.
I plan to put ProxMox on the machine, with Nginx as the reverse proxy and CrowdSec to add a layer of protection.
I also have my ISP’s router handling the internet, and I noticed while looking at it the other day that there is a block feature within its firewall. So I was thinking about blocking access to an IP range - which would cover the rest of my own network - when the traffic comes from the IP of the web-hosting server, as it needs to pass through this ISP router.
So I may be able to block all internal traffic from that machine by simply blocking the IP range when it comes from that particular IP - which would effectively be a DMZ for that machine; it would isolate it from the network.
If you have two routers (ISP and the AX89X), and the ISP one has multiple ports, you could plug the AX89X into one port and the Ubuntu machine alongside it. Then plug everything you care about into the AX89X only; if it has it, turn the ISP router’s WiFi off. Ex:
WAN IP (ex: 203.0.113.5) → ISP subnet (ex: 192.168.8.x) → AX89X subnet (192.168.9.x)
ISP router would have:
an external 203.0.113.5 address
& an internal 192.168.8.1 address
Ubuntu server would have a 192.168.8.x address
AX89X router would have:
an external 192.168.8.x address,
& an internal 192.168.9.1 address
TrueNAS server would have a 192.168.9.x address
This would let your private machines and the AX89X router treat the Ubuntu server as if it was part of the internet, not the LAN.
There could be a performance penalty, since NAT happens twice, but it might not be significant; I have seen many people accidentally running double NAT without knowing it. If you want to compare, I would try something like DSL Reports’ speed test, since it reports more than just bandwidth.
Not quite sure this would do anything.
If AX89X is the only thing connected to your ISP router, and it has NAT enabled, the ISP router will only ever see the AX89X’s IP*.
It would help to know where AX89X is currently connected, and if the ISP router has WiFi or more than one port.
* there are caveats galore with IPv6; frankly, I do not know enough, and I am mainly researching privacy not security issues in IPv6 when I have the time.
Throwing a cheap switch after the ISP-router is not the hardest part.
Edit: To clarify the above:
Having a switch between ISP-Router and Internal-Router is easy. Since that switch will not have to do more than basic switching, any switch will do.
In case that switch can be managed, it may be a good idea to only allow management from the internal router.