I hope I have chosen the right category for my question.
My company used a public cloud based on VMware. Part of that cloud was a KaaS (Kubernetes as a Service) cluster. Since Broadcom changed the licensing, our cloud provider had to stop the KaaS offering. As an alternative, we are now hosting a K3s cluster with an NGINX layer 4 load balancer in front of it. All servers in the cloud are accessible through a WireGuard network. The load balancer has ports 80 and 443 open to the internet.
Now to my question. I want to deploy Rancher on the cluster for management, but I don't want the Rancher UI to be open to the internet. What is the easiest way to achieve this? If I use the same ingress controller for both the public and the private parts, would this allow a bad actor to "fake" the URL/Host header of requests against the public IP of the load balancer and get access to the Rancher UI?
I have enough knowledge of Kubernetes to set up and use a cluster for simple app deployments, but I lack the knowledge for more advanced usage.
Well, you have at least two options: an easy one and a correct one. Plus other options that I'm pretty sure exist.
- The easy one: use annotations at the Ingress object level to whitelist internal client IPs only. OSS ingress-nginx supports that, so I'm pretty sure the commercial version does too.
- The only aspect I don't like here is the partial commingling of potentially sensitive and public traffic, but since it happens in the internal network, it's probably OK. Ask your netsec team whether it's acceptable from their perspective.
- The correct one: full traffic isolation at the network level, plus two separate IngressClasses on the Kubernetes side so each ingress type is handled separately.
- Each ingress controller handles only its private or public group, so if isolation is configured properly, a request from the internet for an internal/private hostname will be correctly evaluated as an unknown host.
- Requests originating from the internal network are routed to the separate internal ingress by some other mechanism, be it wildcard DNS or something more complex.
- I am setting up an HA HAProxy cluster with a dedicated floating service IP per ingress.
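A minimal sketch of the two-IngressClass approach, assuming the OSS ingress-nginx controller. The class names, controller values, and the hostname are made up for illustration; the Rancher service/namespace names follow the usual Rancher Helm install but should be verified:

```yaml
# Two IngressClass resources, one per controller deployment.
# Names and controller values are illustrative, not official defaults.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-public
spec:
  controller: k8s.io/ingress-nginx-public
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal
---
# An internal-only Ingress opts into the internal class explicitly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher
  namespace: cattle-system
spec:
  ingressClassName: nginx-internal
  rules:
    - host: rancher.internal.example.com   # hypothetical internal hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher
                port:
                  number: 80
```

The key point is that each controller deployment must also be configured to claim only its own class; otherwise both controllers pick up every Ingress, regardless of `ingressClassName`.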
The only issue I found and have failed to get past so far is the OSS ingress-nginx documentation; I had the controllers misconfigured somehow.
Despite being deployed correctly to separate namespaces, the public ingress controller in namespace A was aware of, and routing for, an Ingress object created in namespace B that explicitly referenced IngressClass B.
In other words, isolation broke down at the Kubernetes cluster level.
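For what it's worth, that symptom usually means both controller deployments were started with the same IngressClass controller value, so each one claims every Ingress. A sketch of the Helm values that, as far as I know, scope one ingress-nginx release to its own class (parameter names should be verified against your chart version):

```yaml
# values-internal.yaml for the ingress-nginx Helm chart (sketch, not verified
# against every chart version) -- deploy the public release with its own
# distinct name and controllerValue.
controller:
  ingressClassResource:
    name: nginx-internal
    controllerValue: k8s.io/ingress-nginx-internal
  ingressClass: nginx-internal
  # Don't claim Ingresses that set no ingressClassName at all.
  watchIngressWithoutClass: false
```

If both releases end up with the default `controllerValue` of `k8s.io/ingress-nginx`, they will both reconcile every Ingress even when deployed to separate namespaces, which would match the behavior you describe.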
You already have the following connectivity:
- Public ingress through the L4 LB
- VPN to the internal network
Maybe I am missing something… with the VPN, can you not access the internal Rancher UI or console?
Are you using the nginx ingress controller? I’m not clear on that one.
If so, you can use the following annotation to whitelist CIDRs:
nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,192.168.0.0/16
Ingress controllers use host-based and path-based routing: the controller matches rules against the Host header and URI of the request. Note that this routing alone won't stop a bad actor, since anyone can send the internal hostname in the Host header to the public IP and match the rule. The whitelist annotation is what blocks such requests, because it is evaluated against the client's source IP regardless of the Host header.
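For completeness, a sketch of what that annotation looks like on a full Ingress object; the hostname, service name, and namespace are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher
  namespace: cattle-system
  annotations:
    # Only clients from these ranges pass; everyone else gets a 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/8,192.168.0.0/16
spec:
  ingressClassName: nginx
  rules:
    - host: rancher.internal.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher
                port:
                  number: 80
```

One caveat: behind an L4 load balancer, the source IP the controller sees may be the LB's, not the client's. The real client IP has to be preserved (for example via PROXY protocol on the LB plus the matching nginx setting, or `externalTrafficPolicy: Local` on the controller Service), otherwise the whitelist check is evaluated against the wrong address.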
Thanks for all your replies.
@greatnull Thanks for your options. I didn't know the first option was possible. Since I work for a small NPO, I am the whole IT department. The second option would be the better one in my opinion; I will look into it.
@mailman-2097 Yes, I have these two connectivity options. I haven't deployed Rancher yet. I first wanted a plan for how to isolate the Rancher UI from being accessed over the internet.
@SgtAwesomesauce I'm using an nginx ingress behind the L4 LB. Thanks for the clarification. I will look further into it.
I'm currently using the IP whitelisting method that I mentioned, but please do keep us updated; it sounds like the second option greatnull mentioned would be better. (I'd like to see a bit about how you wind up implementing it, since I'm planning to do this on my homelab one of these days.)
I will keep you updated. I don't know when I will have time to implement this; I have some other, more pressing issues to solve.
I hope I can give an update by the end of next week.
No rush, I’m just curious to hear back!
If you do choose to deploy multiple isolated nginx ingress controllers and succeed, would you please report back?
I am curious whether it's just me and my speed-reading skills at fault here, or whether the documentation really is that deficient.
I used the official Helm chart and documentation, and as I mentioned, isolation failed. I also encountered another issue that led me to believe there are outright errors in the current chart config:
static NodePort configuration is ignored when using the official config parameter; I had to use an old one from a two-year-old forum post that officially does not exist at all.
I will be revising that setup soon, but probably not within the next few weeks.