In part this development process was started because of what I saw in log files, and the many days my hosting provider sent me emails warning me of 10 MB/s outbound traffic, then 25 MB/s outbound traffic, and the one that really pissed me off: 82 MB/s outbound traffic. Those were megabytes per second too, not megabits per second.
So after writing all the manual automation and some web-based review pages, I can trigger the generation of a new /etc/nginx/block.conf file and restart Nginx. The result: max 25 KB/s outbound traffic (and falling, see stats at the end of the post) and a 90%+ reduction in inbound bogus URLs.
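The post doesn't show the generated file, but a minimal sketch of what a generated block.conf could look like is just a deny list that gets included from the server block (the addresses here are documentation-range placeholders, not real offenders):

```nginx
# /etc/nginx/block.conf -- hypothetical generated output
# include this file inside a server{} or http{} block
deny 203.0.113.7;
deny 198.51.100.23;
allow all;
```

Regenerating the file and running `nginx -s reload` (or a full restart, as in the post) picks up the new list.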
It seems the automation scripts don't like any ISO being served as HTML, no matter how much they limit the returned data (as little as 4096 bytes), and neither do the individual users who try grabbing /.env and give up after 14 megabytes of pure Windows 11 ISO served as text/plain at 1 KB/s.
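The drip-feed trick described above can be done in nginx with `limit_rate`. A sketch, assuming the /.env probe path from the post; the ISO location is a made-up placeholder:

```nginx
location = /.env {
    default_type text/plain;      # serve the binary as if it were text
    limit_rate 1k;                # throttle to ~1 KB/s
    alias /srv/tarpit/win11.iso;  # any large file will do
}
```

`limit_rate` applies per connection, so each scraper ties itself up individually waiting on the trickle.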
So I guess it's working how I thought it might. Now I have also started processing the /var/log/messages logs, as it appears most of that 25 KB/s outbound traffic is actually failed sshd attempts. Counting IP addresses for "invalid user" lines, the web page review of that data shows me a 50-hit limit per 24 hours on almost all attempts (I saw only one at 52), which indicates scripted automation of some sort. I recorded each IP address as a file with its content as the number of failed attempts, and I wrote another script to process the folder and insert any IP with more than 39 hits into (e.g.) ip route add blackhole 167.99.133.28/32. The IP address then gets recorded in another folder with the contents as the date (so I can remove them after a month, if I feel like it). I can use the same ban script to add the odd HTTP intruder who "doesn't get the hint"; I have a total of 2 HTTP IPs banned atm.
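The counting-and-blackholing step above can be sketched in one pass. This is my own sketch, not the author's script: it assumes the standard sshd "Invalid user NAME from IP" log format, uses the 39-hit threshold from the post, and prints the `ip route` commands instead of running them (a dry run you could pipe to `sh` as root):

```shell
# Scan a messages-style log, count "Invalid user ... from <ip>" per IP,
# and print a blackhole command for any IP over 39 hits (threshold from
# the post). Dry run: pipe the output to sh to actually apply it.
blackhole_scan() {
    log="$1"
    awk '/sshd/ && /[Ii]nvalid user/ {
            ip = ""
            # log field layout varies, so scan for the "from" keyword
            for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i + 1)
            if (ip != "") count[ip]++
         }
         END { for (ip in count) if (count[ip] > 39) print ip, count[ip] }' "$log" |
    while read -r ip hits; do
        printf 'ip route add blackhole %s/32  # %s failed attempts\n' "$ip" "$hits"
    done
}
```

Usage: `blackhole_scan /var/log/messages` to review, `blackhole_scan /var/log/messages | sh` to apply.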
I don't restart the server very often; maybe I'll start doing it once a month. At least now I can easily re-add any IP address to ip-rules after a restart. The only real issue with my server banning an IP address at the front face is that it can no longer see what sort of threat presence that IP address has (nothing gets logged), so it's going to be hard to determine a threat de-escalation response.
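Since blackhole routes don't survive a reboot, re-applying them is a loop over the ban folder. A sketch under the post's layout (each banned IP is a filename, the file content is the ban date); the directory path is a made-up placeholder:

```shell
# Re-apply blackhole routes after a reboot. Each file in BANNED_DIR is
# named after a banned IP; its content is the date it was banned.
BANNED_DIR="${BANNED_DIR:-/var/lib/banned-ips}"

reapply_bans() {
    for f in "$BANNED_DIR"/*; do
        [ -f "$f" ] || continue
        ip=$(basename "$f")
        echo ip route add blackhole "${ip}/32"   # drop the echo to apply
    done
}
```

Run from a boot script (as root, with the `echo` removed) to restore the full ban list.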
The other thing that has irritated me is not being able to pipe a name/IP to a file whose contents increment per pipe encounter. That would be a lot easier and cleaner than a shell script having to iterate over the files, grab their contents, and add 1. I decided to cat /var/log/messages | grep $IP | wc -l > $IP instead, to speed things up, but it still has to do that once per IP, so it might take a while on a large or heavily used system. I would rather make something that can be used in place of .. | xargs touch and that is also useful in a pipe (including passing data on to the next pipe command).
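One alternative to the grep-per-IP approach is a single awk pass that counts every address at once and writes the file-per-IP layout in one go. This is my own sketch, not the author's tool; the output directory and the "first IPv4-looking token" heuristic are assumptions:

```shell
# Count every IP in one pass over the log instead of one grep | wc -l
# per address. Writes one file per IP whose content is the hit count.
count_ips() {
    log="$1"; outdir="$2"
    mkdir -p "$outdir"
    awk '{
            # take the first IPv4-looking token on each line
            for (i = 1; i <= NF; i++)
                if ($i ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/) { count[$i]++; break }
         }
         END { for (ip in count) print count[ip] > (dir "/" ip) }' dir="$outdir" "$log"
}
```

Usage: `count_ips /var/log/messages /tmp/ip-counts` — one read of the log regardless of how many distinct IPs appear.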
Anyway, I am about to stick this up (modified for general use) on GitHub and play-test it for the rest of the month, to see if I find any other edge cases that might break any of the scripts; then I'll look at the full automation parts. But I would also like to address the inode thrash that will affect some systems (e.g. RPi SD cards, other SSD & eMMC media).
stats:
max 20 KB/s outbound traffic | max 1% CPU usage | max disk I/O 1 block/s