Token's lvl1 blog -- Token's rantings

For the life of me I couldn’t figure out why my vaultwarden lost connectivity (more specifically refused SSL traffic).

Checked:
- CNAME/domain
- SSL cert valid
- pfSense port forward (x4)
- NAS proxy settings (x4)
- Docker host firewall settings
- Docker network settings

Turned out to be a simple drop-down in the cert manager on the proxy host: moving the domain name from the default self-signed cert to the domain one…

I was going to turn off internet access to it anyhow and leverage it via NAT/tailscale from now on. But I couldn’t just shut down access until I figured out why it wasn’t working first LOL

Hmmm, can’t seem to get it to work with only LAN access. I’m fiddling with pfSense’s DNS Resolver host override so that mysub.mydomain.net is still used and resolves to the proxy that plays nice with the SSL. Also tried just https://ipaddressofhost:port used by Vaultwarden. No joy…
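For reference, what I’m poking at with the host override is, under the hood, just an Unbound local-data entry. A rough sketch, assuming the LAN-side proxy sits at a made-up 192.168.1.10 (the GUI does this for you under Services > DNS Resolver > Host Overrides):

```
server:
    # answer mysub.mydomain.net with the LAN address of the reverse proxy
    # instead of the public IP, so SSL still terminates at the proxy
    local-data: "mysub.mydomain.net. IN A 192.168.1.10"
```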

Very neat: the Google Chromecast with Google TV has a Tailscale app, so I installed it and logged in. That plus having both the Plex and Synology DS Video apps means I can easily watch home/self-hosted content from the hotel.

From the travel router I’m pretty much casting a lot of YouTube and doing casual browsing.

Why plex, why?

Greed… :wink:

So, looks like my attempted isolation of something on my IoT network (VLAN 40) has failed. Layers of failure.

I have IoT “blocked” from all RFC1918:
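In pf terms that block amounts to roughly this; the alias name and interface are hypothetical stand-ins for whatever the GUI rule actually uses:

```
# alias covering all private address space
table <RFC1918> { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }

# on the IoT interface (igc3 here), drop anything headed for private space
block in quick on igc3 from any to <RFC1918>
```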

Yet I see my Win10 laptop on the LAN having a chat with a device on IoT via ntopng.

Looks like that device might be pivoting through an NVR and/or its client software on the laptop.

What’s really interesting and also frustrating: to test, I made scorched-earth firewall rules AND killed states, and I still can’t shake this connection.

In pfSense I put explicit rules at the top blocking the laptop to the IoT device in the LAN rules and the IoT device to the laptop on the IoT interface, then nuked states, and the device and laptop don’t skip a beat at keeping a link. I wonder if I’d need to reboot the devices to create a break long enough for the states to effectively clear.
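For anyone wanting to try the same thing from a shell instead of Diagnostics > States, killing the states in both directions looks roughly like this (addresses are made up; 192.168.1.50 = laptop, 192.168.40.20 = the IoT device):

```
# kill any states from laptop -> IoT device and from IoT device -> laptop
pfctl -k 192.168.1.50 -k 192.168.40.20
pfctl -k 192.168.40.20 -k 192.168.1.50

# then check whether anything re-establishes
pfctl -ss | grep 192.168.40.20
```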

My janky “phase 1” of backing up my data begins. Not rsync like last time, but just using Syncthing.

Phase 1: Back up the Synology share files (not the Synology VMs, Docker, config etc. itself, just the share files). Syncthing shares between the Synology share folders and two big HDDs in my desktop pooled into one drive (see the .stignore sketch after this list).

Phase 2: Would be nice to do a “full backup” so if the Synology puffs the magic smoke, I could get it repaired or get another one and be back up not just on data, but on my Dockers and such. This might just require getting a HUGE external HDD and using Synology’s own software to do it.

Phase 3: Set up offsite backup to a friend’s NAS.
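For Phase 1 the only Syncthing-specific wrinkle I expect is keeping Synology’s housekeeping folders out of the sync. A minimal .stignore sketch for the desktop side; the folder names are standard Synology artifacts, the rest is just my guess at what’s worth skipping, so treat it as a starting point rather than a vetted ignore file:

```
// .stignore for the desktop copy of a Synology share
// Synology metadata and recycle bins -- no point mirroring these
@eaDir
@tmp
#recycle
// desktop clutter
(?d)Thumbs.db
(?d)desktop.ini
```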

So despite watching multiple vids, multiple times, I just could not get my Vaultwarden server to function correctly when not accessible from the internet.

I absolutely hated port forwarding to it, and followed Lawrence and other vids on using HAProxy and a wildcard SSL cert to allow air-gapped operation for the app and browser plugins.

SO

Transitioning to KeePassXC

Totally different angle, not as slick and shiny, but via Syncthing I should finally get my goal of LAN-only (except when necessary I’ll VPN in) self-hosted password management. Actually I’m assuming Syncthing is only over LAN, but it seems to have a Tailscale-like element baked into it (global discovery and relays).
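If I want to guarantee it never leaves the LAN, those parts can be switched off per device. A hedged sketch of the relevant options block in Syncthing’s config.xml; the same toggles exist in the web GUI:

```
<options>
    <globalAnnounceEnabled>false</globalAnnounceEnabled> <!-- no global discovery servers -->
    <localAnnounceEnabled>true</localAnnounceEnabled>    <!-- still find peers on the LAN -->
    <relaysEnabled>false</relaysEnabled>                 <!-- no public relay traffic -->
    <natEnabled>false</natEnabled>                       <!-- no UPnP/NAT-PMP hole punching -->
</options>
```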

Let’s look at it from a different perspective… How often is your password database modified?

If only sporadically or even very rarely, does it make sense to have live synchronization? When the file changes on any device, send it in an additionally encrypted form to a central email inbox, then download it on the other devices to stay in sync. :slight_smile:

With rare changes I don’t see a problem to do it manually.
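A hedged sketch of that manual flow, with gpg doing the extra encryption before the database ever touches a mailbox (file names are illustrative):

```
# encrypt the KeePass database with a passphrase before attaching it anywhere
gpg --symmetric --cipher-algo AES256 --output Passwords.kdbx.gpg Passwords.kdbx

# ... email/upload Passwords.kdbx.gpg by whatever means ...

# on the receiving device
gpg --decrypt --output Passwords.kdbx Passwords.kdbx.gpg
```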

With Syncthing it’s not even manual, just a bit clunky in comparison. I pretty much did this tutorial

Very good tutorial that I was set up to do since I already use Syncthing. The only thing I wasn’t too sure on was the Android app; it has way fewer downloads than the other one I was eyeballing. I suppose I can try both and delete the loser.

I’m now synced and working on two PCs (+ browser extension) and the phone, with the file on a NAS. If I had a more diverse array of machines this would get a bit messy IMO.

I miss Vaultwarden, but it was one hell of a defeat not getting the WAN-to-LAN “conversion” to work, and I was fed up with keeping the port forward open to keep it working. I find it ironic that for “security” such an insecure thing must be set. The project should have a more turn-key “LAN” mode for self-hosting.

Was watching a tech tuber’s vid on an ‘ultimate router’ where he virtualized pfSense, but also ran Pi-hole instead of pfBlockerNG with DNSBL. I think I get it.

For the last few weeks or so, I’ve noticed that on the hour, every hour, I lose internet for about a minute. I immediately suspected pfBlockerNG and/or DNSBL. Yahoo Mail is actually a great GUI to see this in real time, as it seems to have a pretty aggressive heartbeat and will let you know the second it’s down:
[screenshot: Yahoo Mail flagging the dropped connection on the hour, every hour]

I wish I could remember what I changed, because Occam’s razor, but my memory just isn’t that good. The only thing I can recall is adding KeePassXC to the whitelist; its URL was on a malware list…

I’ve found many a forum thread of people with this issue, none with a fix. I thought I was smart by finding the update cron setting: it was set to every hour. “AH HA! Found it!” Changed it to midnight, where it wouldn’t be an issue. Nope, the issue still persists at the top of every hour, as many a Reddit thread confirmed even with that setting.

Today, pfSense went the distance of actually alerting me to an error.
[screenshot: pfSense refresh showing the error log]

And then, to my excitement, I saw a pfBlockerNG-devel update available, so I updated.

Welp, it’s still an issue. Sometimes you don’t notice because YouTube streaming does a great job of caching, or I’m reading a page, doing something that doesn’t require any DNS query. But when I do need something new right on the hour, it shows its ugly face.

So, in the theme of this thread (rantings): I think I’m done with pfBlockerNG’s DNSBL. As a true LVL1 I am not technical enough to hunt down the precise log, issue, etc. to fix it or contribute to the team fixing it, so I rant lol. I suspected it right away because it’s just one of those programs that has issues often enough that it becomes the prime suspect whenever something like this happens.

So to you, Hardware Haven, who decided not to use pfSense’s native DNSBL but Pi-hole instead: I see you. I get it. I’ll probably be following suit.

The error says there is not enough allocated memory; have you tried to increase it? :wink:

Others say the problem went away after uninstalling the package without saving settings, doing a total purge and reinstalling… :confused:

There was a time I’d commit hours to googling and figuring out how to fix this, but my tech path has changed: I’m more of a user and not a modder now. The modding has never EVER gone well, OR it gets broken with updates, forgotten, etc.

Learning to just use what fits the use-case. In this case something you set and forget. DNSBL has been a pain since day one.

On that note, I suck at macvlan and am getting tired of Docker port conflicts, container update procedures, and the lack of easy backups. I want to stop adding containers to the Synology. I’m on the hunt for a type-1 hypervisor: back to good old brute-force virtualization with easy network settings/GUI and stupid-easy snapshots.

First attempt with a freebie old gaming setup is a strike-out; the Proxmox Debian build is not having that hardware, even with a Debian install plus updates and then Proxmox installed on top of that. So that box will likely become a Blue Iris box, because again, becoming a user, I want turn-key #justworks and not half-done, half-baked projects (especially for home security and incident-notification smarts). I have my old rack-mount stuff but that needs to get sold; I need something more power efficient.

I hear you, I have a similar approach. :slight_smile:
vm, pi-hole, and physical separation wherever possible. :slight_smile:

For Pi-hole, I use a ZeroPi as my DNS1 and an Odroid HC1 as DNS2. :wink:
For a home network it’s OK, unless you have some really heavy traffic, then you need something stronger.

For me, however, the little ZeroPi is bored; there is no need to keep the CPU at 1.37 GHz and you can easily drop it to 480 MHz, and there is no shortage of RAM despite only 512 MB. Onboard Armbian (Debian 11).

Small, quiet and economical… :wink:

[photo: the ZeroPi in its case]

In an extreme situation, it can even be powered by a power bank. Add a Wi-Fi card/LTE modem and you can take your little one for a walk. :wink:

Awesome setup, and I really like that case. My only issue with Pis is the SD cards and their high rate of failure. I got off the Pi train a long time ago when I started to use them for production; the only saving grace is the recent support (not the older janky support, I mean more mainstream support) for SSDs.

My second gripe with Pis is ARM. I just hate the state of ARM distros/forks, but that is in regards to making Pis do what *nix servers normally do (web server, NAS, etc.). For things like Home Assistant and Pi-hole, I dig it.

So to get Pi-hole more to where I want it, there are A LOT of steps in pfSense to make the Pi-hole logs more usable:

I’m not about to blow up my leases so TBD if the changes had the desired effect.
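As I understand those guides, the Pi-hole side boils down to conditional forwarding so Pi-hole can ask pfSense for the hostnames behind the IPs in its logs. Roughly, for a pre-v6 Pi-hole (subnet, pfSense address, and local domain below are hypothetical):

```
# /etc/pihole/setupVars.conf -- conditional forwarding back to pfSense
REV_SERVER=true
REV_SERVER_CIDR=192.168.1.0/24    # the LAN pfSense hands leases out on
REV_SERVER_TARGET=192.168.1.1     # pfSense, which knows the DHCP hostnames
REV_SERVER_DOMAIN=home.arpa       # whatever local domain pfSense uses
```

The catch is that this only works cleanly if pfSense doesn’t forward those same lookups right back at Pi-hole, which is an easy way to end up with a DNS loop.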

I’m on the hunt for a low-power, FULLY supported Proxmox hypervisor. Unfortunately my current inventory of white-box hardware has driver issues, namely the NICs and GPU.

Some of those AliExpress micro PCs look really tempting; it’s just a lot of filtering of YouTube tech channels to validate which ones are good to go with Proxmox.

My tech adventure has been like an AC waveform, going from big hypervisor to small NAS with Docker, and now somewhat back, but more moderate. IMO the Synology can be what I wanted it to be and do ALL THE things in one box (Home Assistant in a VM, lots of Docker services, then lots of native Synology apps), but I’m really put off by how technical one has to be with Docker to go further than a “hello world” type instance/tutorial. I need that good ole’ fashioned ease of administration that type-1 virtualization gives (good network GUI, snapshots, a different IP address for every VM, ease of OS updates, ease of VLANs, ease of virtual switches).

Again, it can be done on Docker, but you have to be one of those ‘BTW, I run Arch’ types. Maybe one day Portainer or Rancher will get to the point of letting me admin containers like I can with VMs. I doubt it though, due to the very nature of those two being containers themselves. I think it would need something running natively on the host with root access…

Like so many of my derp projects that basically band-aid and bubble-gum something together, I’ve gotten Pi-hole to a half-assed state. It appears to be doing work, meaning blocking, but the logging is useless since DHCP is handled by pfSense, and the couple of guides I’ve followed to somehow forward/share this info have failed (in one case causing a DNS loop lolz).

I don’t want the Pi-hole to also be my DHCP server, so I think I’ll just leave it as is. It’s blocking; I just can’t really use the logs if I feel like really digging in sometime. For example, that recent news of an Android TV box sold on Amazon with pre-loaded malware was uncovered by a user actually digging into their Pi-hole logs. I think I’m happy with just knowing ad-blocking is happening.

Maybe I’ll revisit DNSBL later, but the idea of nuking it and the config and installing fresh is messed up; my whitelist and additional-blocklist efforts would be gone. It’s easy to back up the whitelist, but I don’t think I’ll mess around with additional blocklists again.

Hmm, personally, so far I haven’t had any problems with the SD cards in my SBCs.

I’ve been using SanDisk A1 16GB and 32GB cards since 2018 with no issues for my SBC usage model. In Armbian, of course, I have the folder2ram plugin that limits SD writes so that swap is off by default, although I personally set the swap to 4GB anyway.

So for an SD card running the system + Pi-hole, I don’t see any problem. It will work a long time before the card dies.

Failing SD cards in SBC use are either poor cards or extreme write usage that kills them. Just use folder2ram, and with moderate use there shouldn’t be any problem with the life of the card.

I use SD cards in my SBCs, be it a NAS or a small DNS box or a small LAN server, and I really don’t see any terrible SD card failures.

And the ZeroPi is from FriendlyElec and has nothing to do with Raspberry Pi. :wink:

Yes, the software situation on ARM is sometimes worse than x86, but it’s not that bad.
I don’t have any problems with Armbian; it’s stable, and most things that Debian has on x86 are available on ARM.

Personally, I prefer to have hardware separation for services. With one big server holding all the VMs, when it crashes for any reason or needs an update/reboot, it takes down the whole ecosystem. Having DNS on a separate machine, even one as small as a ZeroPi, gives you separation and freedom of action.

I run various things on SD cards, from OpenMediaVault, through servers with nginx and lighttpd, Pi-hole, iptables, a desktop environment with Xfce and Firefox, uGet, LibreOffice, FileZilla, and many others. I even tested Home Assistant… no problem with SD. Yes, SD is not as fast as SSD/NVMe, but personally I do not observe any excessive deaths of SD cards.

Of course, I’m not saying to use an SBC with an SD card as the main server for VMs :wink:

The combination of Armbian + Pi-hole + nginx/lighttpd works out of the box and does not require any effort, just install… :wink:

My Pi-hole works with domains in the number of…
[screenshot: Pi-hole dashboard showing the domains-on-blocklist count]
and even the small ZeroPi manages, so a bit of separation from the big x86, why not. :slight_smile:

Do you really need DHCP on pfSense? If it’s not an absolute must, go for OpenWrt :wink:

Personally, I use DHCP to a very limited extent; I have it on OpenWrt, which I treat as a router and not a firewall. pfSense I usually try to limit to the role it was created for, i.e. a pure firewall with optional IDS/IPS.

In my LAN everything is added rigidly in DHCP, one IP per MAC per device. I don’t play with dynamic allocations because they’re just a headache the bigger the network gets.
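On OpenWrt that rigid assignment is just a static lease per host. A minimal sketch (name, MAC, and IP are made up):

```
# /etc/config/dhcp -- one of these blocks per device
config host
	option name 'win10-laptop'        # hostname shown in the lease table
	option mac  'AA:BB:CC:DD:EE:FF'   # the device's MAC
	option ip   '192.168.1.50'        # the address it will always get
```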

For this I have the philosophy that the firewall should be as tight as possible. The more services on such a machine, the bigger the attack surface.

If you decide to use Pi-hole instead of pfBlockerNG, then it shouldn’t matter that DHCP is on pfSense. That doesn’t affect the content of the logs.

As for DNSBL… it’s cool, but. It depends on what we want to achieve: protect the LAN side from the world, or the server from the world knocking on our door.

If you want to filter DNS traffic for LAN hosts, Pi-hole will do the job. Additional filtering per IP in 2023 has its downsides. It’s better to just have a policy of blocking everything and only allowing certain traffic. And an application-layer firewall per device!
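A sketch of what that default-deny policy looks like in raw pf terms, with a couple of illustrative allows (interface, Pi-hole address, and ports are placeholders):

```
# block everything inbound on the LAN interface by default...
block in on igc1 all

# ...then only pass what is explicitly wanted
pass in quick on igc1 proto { tcp, udp } from igc1:network to 192.168.1.53 port 53  # DNS to Pi-hole
pass in quick on igc1 proto tcp from igc1:network to any port { 80, 443 }           # plain web traffic
```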

Let pfSense do the DHCP if for some reason it must, but have it advertise the Pi-hole IP address to the hosts on the LAN as both DNS servers.

And set Pi-hole up as stand-alone DNS, even on a ZeroPi. :wink:
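On pfSense that’s just the DNS server fields in the DHCP server settings for the LAN; in dnsmasq terms (e.g. if OpenWrt were handing out the leases) the equivalent is a one-liner, with a made-up Pi-hole address:

```
# hand out the Pi-hole as the DNS server in every DHCP lease
dhcp-option=option:dns-server,192.168.1.53
```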

Maybe one of those minis that @wendell showed?

Or one of those mini ones that are on https://www.youtube.com/@ServeTheHomeVideo/videos
