[Devember 2021] brutally simple firewall

I just replaced 2 ranges with a single larger blocked range (3 days ago), because there were 30+ extra IPv4 addresses outside those 2 ranges, and the result (may not be an actual correlation) has been a halving of the per-day block rate.

A stats update:

3 days ago: 6-7 blocked IPv4 per day 
currently:  3 blocked IPv4 per day
total blocked IPv4 ranges: 273
total blocked IPv4: 6513

It took 2 months to get the 1st 4000 and 2 months to get the next 2000, while it has taken 3 months to get the last 500. After I blocked a couple of the new Microsoft Azure Cloud Service ranges (there were no MS ranges blocked before that), there were near 0 blocks per day for almost 2 weeks.


hmm… 3 per day lasted 3 days, back up to 10 per day atm. Looks like even hackers take Christmas and New Year's holidays (at least for a bit).

Just doing some range checking: I had no range entries for DigitalOcean's 178.128 address block. Out of 41 blocked IPv4 addresses (since late August, when I blocked most DigitalOcean ranges), 16 unique ranges are present (just a couple of the mid and high ones are missing), so I range banned 178.128.0.0/16 instead of the 19 individual ranges (most are /20 address blocks).
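For what it's worth, that kind of range check is easy to script. A minimal sketch (blocked.txt is a hypothetical one-address-per-line dump of individually blocked IPv4s, inlined here so the example runs standalone):

```shell
#!/bin/sh
# count blocked IPv4 addresses per /16 to spot range-ban candidates
# blocked.txt is a hypothetical dump of individually blocked addresses
cat > blocked.txt << 'EOF'
178.128.4.10
178.128.200.7
178.128.63.2
104.248.1.1
EOF

# the first two octets approximate the /16; highest hit counts float to the top
cut -d . -f 1-2 blocked.txt | sort | uniq -c | sort -rn
```

Anything with a big count is worth a whois to see if the whole /16 belongs to one ASN.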

While looking up some other IP addresses, I also saw a server address of: mongo.blahblah.mongodb.ondigitalocean.com

Not sure if that domain is bound to the same ASN though. I guess there are ways to check; I just thought it was interesting (that's either an SSH or a webserver attack coming from a MongoDB server at DigitalOcean).

EDIT: FYI

  • DigitalOcean have over 700 IPv4 ranges assigned to their (main) ASN, mostly in the US & UK, and a large number in NL, but their ranges cover a good part of the planet (country-assigned IP addresses).
  • You would think (via the media) that most IPv4 address traceroutes would end up in Russia, China (and the US if you read between the lines), but they don't; most go back to the Netherlands.
  • If attacks come from at least one IPv4 address in every range a specific ASN entity owns (surprisingly not DigitalOcean this time), and those ranges are used by various other (sometimes seemingly unrelated) hosting and ISP services (in 10 different countries), does that show collusion, a government agency, a private agency, or a criminal organisation of some sort?

blocked : 13 in last hour : 46 in last 24 hours
blocked : 11 in last hour : 39 in last 24 hours
blocked : 16 in last hour : 55 in last 24 hours

nah, business as usual. (yeah, that's 3 days in a row now :slight_smile: )

I must have just caught a break for some reason when I only got 3/24hrs for 3 days. Who knows, but it's still way higher than the averages I saw last year.

With that many new IPv4 addresses per day being added, chances are there are some more ranges that can be blocked too.

EDIT:
yeah, BIG patterns. The system log files have rotated since yesterday, and all 177 sshd entries are unique, but there are about 12 new ranges that can be blocked.

6898 blocked : 2 in last hour : 64 in last 24 hours

Got my first automated subdomain attack a couple of days ago: 420 subdomains in less than 1 minute.

Because they all went to the root of the web server (ie. legit requests), they are not factored into any IPv4 block analysis, so a manual block was used.

However, it gave me a chance to write an ASN prefix allocation (IPv4 range allocations) scraper, which I can pipe to the range blocker script. This in itself brought me up against something that has frustrated me on the odd occasion: CloudFlare.

This time I defeated CloudFlare's JS browser probe with PhantomJS, but I don't want to need that as a prerequisite for a firewall (python, qtwebkit, nodejs - all of which are fat), plus the online ASN tool I use streams a single line of content, so it's only really useful from my desktop machine. I got the data from somewhere much cleaner, but I do see a difference in their output.
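For the curious, the extraction side of such a scraper needs nothing fat at all. Assuming a locally saved announced-prefixes JSON dump (asn.json, the compact field layout, and the prefix values below are my assumptions, not the actual tool's output), plain grep and cut will do:

```shell
#!/bin/sh
# pull IPv4 prefixes out of a saved announced-prefixes JSON dump
# asn.json is a hypothetical local copy (inline fixture for the demo)
cat > asn.json << 'EOF'
{"data":{"prefixes":[{"prefix":"178.128.0.0/16"},{"prefix":"104.248.0.0/16"}]}}
EOF

# match the quoted prefix values, then take the 4th quote-delimited field
grep -o '"prefix":"[0-9./]*"' asn.json | cut -d '"' -f 4
```

That list can then be piped straight into a range blocker script, one prefix per line.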

On top of that, I realised the extra data I wanted to see (what and why a block occurred) is needed to create valid DShield logs (for SANS Internet Storm Centre submissions), so “cool”, that gives me something else to think about too (v2?).

And that brought me to the idea of the standalone Gerkas having the option of placing something funny in the message log, eg:

Jan 25 17:58:00 localhost cron.info crond[2078]: USER root pid 881 cmd /work/www/tools/unique_block-ipv4-redo.sh
Jan 25 17:59:13 localhost ssfw.info ssfw-ipv4-update[32192]: 17.11.0.14 and 4 others shot in the back, buried deep [added](5) 
Jan 25 18:06:25 localhost auth.info sshd[1329]: Failed password for root from 81.150.9.251 port 52661 ssh2
Jan 25 18:06:25 localhost auth.info sshd[1329]: Received disconnect from 81.150.9.251 port 52661:11: Bye Bye [preauth]
Jan 25 18:06:25 localhost auth.info sshd[1329]: Disconnected from authenticating user root 81.150.9.251 port 52661 [preauth]
Jan 25 18:06:35 localhost ssfw.info ssfw-sshd-gerka[11961]: 81.150.9.251 was shot down, Bye Bye urself, pesky haxor [root](1)
Jan 25 18:15:00 localhost cron.info crond[2078]: USER root pid 1807 cmd run-parts /etc/periodic/15min
Jan 25 18:15:00 localhost cron.info crond[2078]: USER www pid 1808 cmd /work/www/tools/unique-svr_cron-jobs.sh 15min
Jan 25 18:16:13 localhost ssfw.info ssfw-cron-nginx[32291]: 179.11.11.142 wiped the floor clean, guts everywhere [sshd](33)
Jan 25 18:16:33 localhost ssfw.info ssfw-cron-nginx[32291]: 6.114.0.141 decimated after barrage, 5kR1p7 kiddie [subdomain](420) 
Jan 25 18:33:05 localhost ssfw.info ssfw-sshd-gerka[11961]: 8.15.19.25 faught hard, died lonely, janator mode [root](16)
Jan 25 18:56:35 localhost ssfw.info ssfw-sshd-gerka[11961]: 1.150.9.51 sniped in 4 seconds, trophy hunter [user](8)
Jan 25 19:15:00 localhost cron.info crond[2078]: USER root pid 1809 cmd run-parts /etc/periodic/15min
Jan 25 19:15:00 localhost cron.info crond[2078]: USER www pid 1810 cmd /work/www/tools/unique-svr_cron-jobs.sh 15min
Jan 25 19:16:13 localhost ssfw.info ssfw-cron-sshd[37290]: 17.1.11.14 sniper on sniper action, keys recovered [kex](1)
Jan 25 19:16:33 localhost ssfw.info ssfw-cron-nginx[37290]: 176.114.0.141 posted death certificate, \0x that [haxor](3) 

Well, it's just an idea; those log files are so dry, it would make sandpaper thirsty. :slight_smile:
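If anyone wants to riff on it: a Gerka could hand such a line to the stock `logger` tool (`logger -t ssfw-sshd-gerka -p info "$msg"`). The sketch below just formats the line with printf-style shell so it runs without a syslog daemon; the tag and layout mimic the sample log above and are my assumptions:

```shell
#!/bin/sh
# format a "funny" ssfw log line; a real Gerka would pass $msg to logger instead
msg='81.150.9.251 was shot down, Bye Bye urself, pesky haxor [root](1)'
line="$(date '+%b %d %H:%M:%S') localhost ssfw.info ssfw-sshd-gerka[$$]: $msg"
echo "$line"
```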

Anyhoo

Cheers

Paul


I have come across what looks like (maybe) a professional hacker, or someone looking to make money off the results of what they find (I can't see it as just collecting the target results as an end game).

I noticed a couple of /.git/config probes in last month's logs (2 to be exact), and the only things they had in common were the HTTP_AGENT (python-requests/2.9.1) and the fact that they pulled the exact same amount of data (remember I have all haxor pulls linked to a 4Gb ISO file): 8192 bytes.

I went back over the previous logs, and the same sort of probes go back to August 2021, when I started the project.

What raised my eyebrows was that of all 25 probes, only 1 IPv4 came up 2x, and that was the 1st two probes, at the same time (Alpine does not record anything less than seconds).

Why I think there is something worse going on here than just a “script kiddie” being nosey or annoying: every single DNS lookup (that I checked) of those 24 unique IPv4 addresses resulted in a _something_.linodeusercontent.com URL.

When I did a geolocation lookup, every single one was from a different Linode service location around the world.

My thinking is that they are looking for private origin repos (ie cloned private repos) as opposed to public or cloned public repos.

I am not sure what to do about this. I know the support staff at Linode will look at it, but I don't know whether they are interested in tracking this person down.

I would say they have already been successful at it, which is why they keep trying. Since either intellectual property or money is involved, I don't think “a slap on the wrist” is the right thing to do here.

Then there is the problem of “from which country did they sign up”.

Hmm… (just going over the log entries again - I grepped them into a separate file)

It seems like 2 separate people: one pulls 8192 bytes with an HTTP_AGENT of Mozilla/5.0 (but always from a different platform, with a different Chrome version number), about once a month.

The other person, using the python agent, always pulls 4.5 - 5.5 MB before disconnect. Actually, they are often in groups of 2-3 probes (from different IPv4 addresses), and when they are in groups, they just about always start at 1:30am GMT/UTC (1 starts at 10am) and the last one is always between 5pm & 10pm GMT/UTC (when there are 3 probes, the middle one is always between 4pm and 5pm).
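Those start-time clusters are easier to see with a quick histogram. Assuming the grepped probe lines carry an access-log style timestamp (probes.log and its contents below are a made-up fixture, not my actual logs), the hour is simply the 2nd colon-separated field:

```shell
#!/bin/sh
# hour-of-day histogram from access-log style timestamps
# probes.log is a hypothetical pre-grepped file (inline fixture for the demo)
cat > probes.log << 'EOF'
138.197.1.2 - - [25/Jan/2022:01:30:12 +0000] "GET /.git/config HTTP/1.1" 200 8192
172.105.3.4 - - [26/Jan/2022:01:31:40 +0000] "GET /.git/config HTTP/1.1" 200 8192
96.126.5.6 - - [26/Jan/2022:16:12:03 +0000] "GET /.git/config HTTP/1.1" 200 8192
EOF

# the first ':' on each line sits inside the timestamp, so field 2 is the hour
cut -d : -f 2 probes.log | sort | uniq -c
```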

FWIW my server is in Texas, GMT is the default for Alpine Linux, and it's exactly 12hrs behind me (which is why I left it set that way - the clock on my desk is set to GMT/UTC).

FYI there is one Go-http-client/1.1 in there too, just once, near 1:30am, but not from a Linode server (but yes, the Netherlands again).

Anyways, something to think about …


This is what a coordinated, distributed and controlled, but sustained, attack looks like. You can't call it a probe, because the blocked IPv4 addresses are coming from failed sshd username attempts. You might call it a soft attack, as each SSH attempt comes from a different IP address, and there is (usually) a decent time difference before the next attempt.

Having checked, I had already range blocked 43.128.0.0/15 and 43.154.0.0/16, which is why none of those ranges are present. I am not 100% sure if this coordination is limited to this network, but ALL of these 43.* addresses list [email protected] as their “Abuse contact”, and they ALL list their location as either US or SG (2 of Tencent's international Cloud locations). There are 122 unique IPv4 addresses here. For every 10 addresses, 2 were root user attempts; the rest are a random collection of known possible Service or User username attempts.

( date -Iseconds | PID | sleep # | sshd Gerka | ip route add blackhole * )

2022-02-01T03:25:29+0000#28607#10#monitor#added: blackhole 43.155.106.140 
2022-02-01T05:30:53+0000#28607#10#monitor#added: blackhole 43.155.74.70 
2022-02-01T05:56:24+0000#28607#10#monitor#added: blackhole 43.254.156.42 
2022-02-01T09:17:01+0000#28607#2#monitor#added: blackhole 43.155.68.119 
2022-02-01T10:20:03+0000#28607#10#monitor#added: blackhole 43.134.201.169 
2022-02-01T15:45:00+0000#28607#10#monitor#added: blackhole 43.155.74.252 
2022-02-02T00:35:19+0000#28607#10#monitor#added: blackhole 43.134.187.246 
2022-02-02T05:19:31+0000#28607#10#monitor#added: blackhole 43.153.6.100 
2022-02-02T07:30:29+0000#28607#10#monitor#added: blackhole 43.155.60.53 
2022-02-02T22:26:39+0000#28607#10#monitor#added: blackhole 43.130.45.123 
2022-02-02T23:19:30+0000#28607#10#monitor#added: blackhole 43.155.68.111 
2022-02-03T00:45:10+0000#28607#1#monitor#added: blackhole 43.134.202.54 
2022-02-03T02:13:24+0000#28607#10#monitor#added: blackhole 43.156.42.138 
2022-02-03T08:22:08+0000#28607#5#monitor#added: blackhole 43.155.100.182 
2022-02-03T08:38:09+0000#28607#10#monitor#added: blackhole 43.155.92.208 
2022-02-03T16:03:26+0000#28607#10#monitor#added: blackhole 43.155.101.118 
2022-02-03T16:04:21+0000#28607#10#monitor#added: blackhole 43.130.44.186 
2022-02-03T18:26:40+0000#28607#10#monitor#added: blackhole 43.156.54.220 
2022-02-03T21:40:02+0000#28607#10#monitor#added: blackhole 43.155.68.187 
2022-02-04T05:17:23+0000#28607#10#monitor#added: blackhole 43.156.46.96 
2022-02-04T21:49:12+0000#28607#10#monitor#added: blackhole 43.155.115.152 
2022-02-04T22:38:49+0000#28607#10#monitor#added: blackhole 43.155.64.136 
2022-02-04T23:24:51+0000#28607#10#monitor#added: blackhole 43.134.202.87 
2022-02-05T00:39:33+0000#28607#10#monitor#added: blackhole 43.156.46.55 
2022-02-05T03:02:00+0000#28607#9#monitor#added: blackhole 43.155.93.82 
2022-02-05T03:05:05+0000#28607#10#monitor#added: blackhole 43.135.158.214 
2022-02-05T21:01:31+0000#28607#10#monitor#added: blackhole 43.155.60.117 
2022-02-05T21:48:07+0000#28607#10#monitor#added: blackhole 43.155.60.206 
2022-02-05T21:51:32+0000#28607#10#monitor#added: blackhole 43.156.48.174 
2022-02-06T05:03:12+0000#28607#10#monitor#added: blackhole 43.155.78.35 
2022-02-06T05:22:14+0000#28607#10#monitor#added: blackhole 43.155.72.149 
2022-02-06T08:02:39+0000#28607#10#monitor#added: blackhole 43.134.205.14 
2022-02-06T13:55:19+0000#28607#10#monitor#added: blackhole 43.155.97.128 
2022-02-06T14:34:55+0000#28607#10#monitor#added: blackhole 43.156.46.178 
2022-02-06T14:55:01+0000#28607#10#monitor#added: blackhole 43.155.118.244 
2022-02-06T15:27:57+0000#28607#10#monitor#added: blackhole 43.155.65.167 
2022-02-06T16:30:21+0000#28607#6#monitor#added: blackhole 43.134.207.150 
2022-02-07T14:46:46+0000#28607#10#monitor#added: blackhole 43.156.45.112 
2022-02-07T16:46:54+0000#28607#10#monitor#added: blackhole 43.153.33.120 
2022-02-07T21:24:42+0000#28607#10#monitor#added: blackhole 43.153.5.122 
2022-02-08T01:08:18+0000#28607#10#monitor#added: blackhole 43.134.237.89 
2022-02-08T05:20:49+0000#28607#10#monitor#added: blackhole 43.155.63.36 
2022-02-08T10:08:18+0000#28607#10#monitor#added: blackhole 43.155.114.143 
2022-02-08T15:33:46+0000#28607#10#monitor#added: blackhole 43.132.135.222 
2022-02-08T18:50:17+0000#28607#10#monitor#added: blackhole 43.153.32.58 
2022-02-09T06:08:47+0000#28607#10#monitor#added: blackhole 43.132.251.88 
2022-02-09T19:53:35+0000#28607#10#monitor#added: blackhole 43.132.251.145 
2022-02-09T20:30:41+0000#28607#10#monitor#added: blackhole 43.155.64.249 
2022-02-09T23:10:56+0000#28607#10#monitor#added: blackhole 43.156.47.247 
2022-02-09T23:12:31+0000#28607#10#monitor#added: blackhole 43.132.180.108 
2022-02-10T14:18:37+0000#28607#10#monitor#added: blackhole 43.155.71.41 
2022-02-10T18:03:23+0000#28607#10#monitor#added: blackhole 43.155.107.219 
2022-02-10T22:27:32+0000#28607#10#monitor#added: blackhole 43.153.15.29 
2022-02-11T03:36:08+0000#28607#10#monitor#added: blackhole 43.134.202.107 
2022-02-11T04:42:21+0000#28607#10#monitor#added: blackhole 43.155.67.153 
2022-02-11T07:40:35+0000#28607#10#monitor#added: blackhole 43.130.61.158 
2022-02-11T10:46:10+0000#28607#10#monitor#added: blackhole 43.155.94.254 
2022-02-11T11:19:56+0000#28607#10#monitor#added: blackhole 43.134.199.32 
2022-02-11T15:59:48+0000#28607#10#monitor#added: blackhole 43.132.246.223 
2022-02-11T16:20:44+0000#28607#10#monitor#added: blackhole 43.134.194.179 
2022-02-11T16:27:24+0000#28607#10#monitor#added: blackhole 43.153.21.119 
2022-02-11T18:31:58+0000#28607#10#monitor#added: blackhole 43.155.67.205 
2022-02-11T23:39:30+0000#28607#10#monitor#added: blackhole 43.131.91.178 
2022-02-11T23:40:06+0000#28607#8#monitor#added: blackhole 43.135.165.22 
2022-02-12T00:13:02+0000#28607#10#monitor#added: blackhole 43.155.60.208 
2022-02-12T02:37:13+0000#28607#10#monitor#added: blackhole 43.155.72.11 
2022-02-12T10:22:39+0000#28607#10#monitor#added: blackhole 43.134.224.138 
2022-02-12T16:48:40+0000#28607#10#monitor#added: blackhole 43.135.160.246 
2022-02-12T19:54:41+0000#28607#10#monitor#added: blackhole 43.255.30.192 
2022-02-13T00:01:43+0000#28607#10#monitor#added: blackhole 43.133.201.165 
2022-02-13T00:04:39+0000#28607#10#monitor#added: blackhole 43.130.227.235 
2022-02-13T00:44:30+0000#28607#10#monitor#added: blackhole 43.155.111.188 
2022-02-13T06:44:47+0000#28607#10#monitor#added: blackhole 43.153.30.8 
2022-02-13T07:02:02+0000#28607#10#monitor#added: blackhole 43.133.188.254 
2022-02-13T12:01:22+0000#28607#10#monitor#added: blackhole 43.155.102.117 
2022-02-13T12:14:03+0000#28607#10#monitor#added: blackhole 43.135.155.97 
2022-02-13T12:47:18+0000#28607#10#monitor#added: blackhole 43.135.166.27 
2022-02-14T06:33:23+0000#28607#10#monitor#added: blackhole 43.156.42.20 
2022-02-14T08:06:00+0000#28607#10#monitor#added: blackhole 43.135.166.170 
2022-02-14T15:30:30+0000#28607#6#monitor#added: blackhole 43.155.95.49 
2022-02-14T20:47:00+0000#28607#3#monitor#added: blackhole 43.134.195.243 
2022-02-14T23:17:31+0000#28607#10#monitor#added: blackhole 43.156.47.186 
2022-02-14T23:55:52+0000#28607#3#monitor#added: blackhole 43.156.42.69 
2022-02-15T01:44:05+0000#28607#3#monitor#added: blackhole 43.135.160.150 
2022-02-15T02:11:46+0000#28607#10#monitor#added: blackhole 43.134.29.71 
2022-02-15T03:33:29+0000#28607#10#monitor#added: blackhole 43.254.158.205 
2022-02-15T04:41:58+0000#28607#10#monitor#added: blackhole 43.130.40.251 
2022-02-15T04:51:19+0000#28607#3#monitor#added: blackhole 43.152.201.119 
2022-02-15T05:44:47+0000#28607#10#monitor#added: blackhole 43.134.201.159 
2022-02-15T06:44:34+0000#28607#10#monitor#added: blackhole 43.155.74.159 
2022-02-15T06:46:28+0000#28607#10#monitor#added: blackhole 43.155.105.151 
2022-02-15T08:43:06+0000#28607#10#monitor#added: blackhole 43.155.63.228 
2022-02-15T09:09:32+0000#28607#10#monitor#added: blackhole 43.155.65.87 
2022-02-15T09:58:32+0000#28607#10#monitor#added: blackhole 43.155.89.111 
2022-02-15T12:01:44+0000#28607#10#monitor#added: blackhole 43.152.197.68 
2022-02-15T13:56:03+0000#28607#10#monitor#added: blackhole 43.134.189.8 
2022-02-15T14:08:38+0000#28607#10#monitor#added: blackhole 43.156.46.58 
2022-02-15T20:30:07+0000#28607#8#monitor#added: blackhole 43.155.69.61 
2022-02-15T22:45:19+0000#28607#10#monitor#added: blackhole 43.254.158.247 
2022-02-15T23:19:56+0000#28607#10#monitor#added: blackhole 43.135.155.141 
2022-02-15T23:44:58+0000#28607#10#monitor#added: blackhole 43.156.46.175 
2022-02-16T01:01:08+0000#28607#10#monitor#added: blackhole 43.132.247.122 
2022-02-16T01:03:34+0000#28607#10#monitor#added: blackhole 43.153.33.202 
2022-02-16T01:07:39+0000#28607#10#monitor#added: blackhole 43.153.1.208 
2022-02-16T02:19:23+0000#28607#10#monitor#added: blackhole 43.130.45.216 
2022-02-16T02:39:19+0000#28607#10#monitor#added: blackhole 43.132.253.248 
2022-02-16T03:41:07+0000#28607#10#monitor#added: blackhole 43.155.115.206 
2022-02-16T03:48:23+0000#28607#5#monitor#added: blackhole 43.155.86.60 
2022-02-16T06:44:27+0000#28607#9#monitor#added: blackhole 43.134.201.127 
2022-02-16T07:24:34+0000#28607#10#monitor#added: blackhole 43.135.160.40 
2022-02-16T08:48:07+0000#28607#10#monitor#added: blackhole 43.155.74.56 
2022-02-16T17:50:14+0000#28607#10#monitor#added: blackhole 43.155.118.222 
2022-02-17T00:04:16+0000#28607#10#monitor#added: blackhole 43.153.31.162 
2022-02-17T07:17:57+0000#28607#10#monitor#added: blackhole 43.135.155.245 
2022-02-17T09:05:59+0000#28607#10#monitor#added: blackhole 43.255.29.95 
2022-02-17T10:45:12+0000#28607#10#monitor#added: blackhole 43.131.196.209 
2022-02-17T17:49:47+0000#28607#10#monitor#added: blackhole 43.155.84.158 
2022-02-17T19:28:04+0000#28607#10#monitor#added: blackhole 43.132.147.235 
2022-02-18T03:55:32+0000#28607#10#monitor#added: blackhole 43.155.92.145 
2022-02-18T06:22:16+0000#28607#10#monitor#added: blackhole 43.153.5.123 
2022-02-18T07:39:09+0000#28607#10#monitor#added: blackhole 43.153.9.186 
2022-02-18T10:35:26+0000#28607#10#monitor#added: blackhole 43.135.157.13 


This is only for the current month. Even at 2/10, that's still 24 separate attempts to crack the root user (easy to tick off from a password list), and we are only halfway through the month. These sorts of coordinated, but distributed, attacks are NOT seen by Automated Analytics or Firewall Protection Algorithms.
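A quick way to see the per-day spread without eyeballing the whole list: the ISO timestamp ends at the “T”, so cutting on it gives a daily block count (monitor.log here is a hypothetical file holding lines in the format above, with a tiny inline fixture):

```shell
#!/bin/sh
# daily blackhole count from the '#'-separated monitor log format
# monitor.log is a hypothetical file of lines like the ones above (fixture for the demo)
cat > monitor.log << 'EOF'
2022-02-01T03:25:29+0000#28607#10#monitor#added: blackhole 43.155.106.140
2022-02-01T05:30:53+0000#28607#10#monitor#added: blackhole 43.155.74.70
2022-02-02T00:35:19+0000#28607#10#monitor#added: blackhole 43.134.187.246
EOF

# everything before the 'T' is the date; count entries per day
cut -d T -f 1 monitor.log | sort | uniq -c
```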

It's these sorts of attempts that are the reason why I posted the original thread Automated Network Threat Response with an upstream component, because I can guarantee that my server is not the only one getting hit inside the network where this server is physically (location) and logically (network) located. This is the current failure of End Point Service Providers' Security Protection, whereas most other attempts can be blocked or protected against based on Maths, Time, Repetition, Patterns, or known block-lists.

These attempts only become a pattern AFTER a “Network Whois” has been done on each individual IP address (which is quite time consuming and resource intensive).

The other (more common) pattern of this type is to not even have the IP addresses look similar, or come from the same AS-ORIGINs. Then only the timing and username attempts can join those data points together, as they are often much more condensed in attack form.

As a Service Provider at their source, both of these “attack forms” are easier to see, because there is a master controller they are coordinating with, so there are bursts of traffic coming and going to the same external IP address - something that analytics could easily flag as “potential abuse” at least (and that often happens already, to a certain degree) - but obviously not in this case.

Anyway, after adding the new range block, I'll also add an over-range block, probably a /8 or /9 (depending on who else gets allocated a 43.* range - I already had another 105 blocked previously, before analysing these ones).


UPDATE:
after further analysis, I would say that, currently, everything (98%) coming into sshd is from the same controller, based on the logs (13th - 18th). Normally you see multiple entries for the same username when the attempts are not from the same controller, OR different usernames and multiple root attempts from the same IP address.

That would put the last 60-odd entries (listed above) mingled across 491 sshd entries for the same period. Here is the analysis of the usernames, so you can see what I mean; most are 1, and there are 272 unique usernames there (after root is removed):

      1 123456
      1 3
      1 ADMIN
      1 Guest
      1 abc
      1 adm
     11 admin
      1 ahmed
      1 alvaro
      1 amg
      1 anaconda
      1 anup
      2 api
      1 appadmin
      1 apple
      1 appltest
      1 apps
      1 arjun
      2 ark
      1 asu
      1 auditoria
      1 backend
      1 backup
      1 backupuser
      1 ball
      1 beatriz
      1 bianca
      1 bitrix
      2 bot
      1 bounce
      1 bran
      1 bruno
      1 business
      1 caja01
      1 carlos
      1 ccs
      1 christian
      1 client1
      1 cod4server
      1 coder
      1 csb
      1 culture
      1 daniel
      1 danilo
      1 danny
      1 data
      1 debian
      1 deploy
      1 deploy2
      1 deployer
      1 deployment
      1 deployop
      1 dev
      1 developer
      1 devteam
      1 devuser
      1 dieter
      1 dk
      1 dms
      2 docker
      1 docs
      1 dp
      1 ds
      1 ec2-user
      1 elasticsearch
      1 element
      1 elias
      1 els
      1 emil
      1 emily
      2 erika
      1 es
      1 felix
      1 felomina
      1 finance
      1 frederick
      1 ftpadmin
      2 ftpuser
      1 g
      1 gaurav
      1 gay
      1 gerencia
      1 getmail
      1 github
      1 gnats
      1 gp
      2 gpadmin
      1 guest
      1 guest2
      1 hadi
      1 hana
      1 haoyu
      1 hh
      1 house
      1 icinga
      1 infowarelab
      1 invent
      1 invoices
      1 irina
      1 itadmin
      1 ivan
      1 jasmin
      1 java
      1 jefferson
      1 jenkins
      1 jesse
      3 john
      1 johnny
      1 join
      1 jordi
      1 jt
      1 julia
      1 julian
      1 kfserver
      1 krishna
      1 ldm
      1 lehrer
      1 leo
      1 leticia
      1 lg
      1 libuuid
      1 linaro
      1 lisa
      1 local
      1 lxy
      1 magento
      1 mailserver
      1 mami
      1 manu
      1 manuel
      1 mario
      1 matlab
      3 mc
      1 mh
      1 michelle
      1 minecraft
      1 mingyang
      3 mob
      1 mobil
      1 mongod
      1 mos
      1 musicbot
      2 musikbot
      1 myftp
      1 mysql
      1 nagios
      1 ncs
      1 netflow
      1 noc
      1 nq
      2 odoo
      1 ok
      1 online
      1 opc
      1 open
      1 opu
      9 oracle
      1 orauat
      1 osm
      1 ov
      1 patrick
      1 payroll
      1 phpmyadmin
      1 portal
      4 postgres
      1 prakash
      1 prof
      1 project
      1 q3server
      1 qce
      1 qq
      1 quentin
      1 rachid
      1 ramesh
      1 rd
      1 reboot
      1 rex
      1 rik
      1 rob
      1 rolland
      1 romeo
      1 ronald
      1 samba1
      1 samir
      1 sdtdserver
      3 server
      2 sg
      1 shijie
      1 shiva
      1 silvia
      1 simon
      1 sk
      1 slave
      1 sms
      2 solr
      1 sonar
      2 sonarqube
      1 soporte
      1 splunk
      3 student
      1 student1
      1 student3
      1 suo
      1 super
      1 suporte
      3 support
      1 teacher
      2 teamspeak3
      1 temp
      1 template
      1 tes
      8 test
      3 test1
      1 test10
      1 test123
      1 testbed
      1 testdev
      1 tester
      1 testftp
      1 testmail
      3 testuser
      1 tiago
      1 todds
      1 tomas
      1 tommy
      1 toro
      1 trial
      1 ts
      1 ts3
      1 ttf
      1 tuser
      1 tv
      1 twang
      1 ty
      8 ubuntu
      1 uftp
      1 uno50
      1 urbackup
      4 user
      1 user02
      1 user1
      1 user13
      1 user15
      1 user2
      2 user5
      1 username
      2 usuario
      1 util
      3 vbox
      1 vboxuser
      1 viewer
      2 vision
      1 vsftpd
      1 webconfig
      1 weblogic
      1 webs
      1 whmcs
      1 wordpress
      1 www-data
      1 wy
      1 x
      1 xbmc
      1 xm
      1 yan
      1 yoko
      1 ys
      1 yx
      2 zabbix
      1 zhang
      1 zhangyt
      1 zhu
      1 zoom

FWIW I got that information from this command line, where messages.sshd contains pre-culled sshd log entries, and 146 root attempts (including the 16 in the previous post) are “grepped out” of the incoming results:

grep -v root ../logs/messages.sshd | cut -d \: -f 4 | cut -d \  -f 7 | sort | uniq -c > ../43.usernames.txt

messages.sshd is generated every 3 hours from the messages & messages.0 log files (this server rotates its logs every 200Kb, and 3 hours is the shortest rotation time I have recorded), before a cron job uses it to check for anything that the Gerka might have missed, as the Gerka only scans for duplicate IPv4s in the last 10-20 lines of the actual log file (which has a bunch of cruft in it as well).
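The culling step itself can stay dead simple. A sketch of the idea (the file names, match patterns, and fixture below are my assumptions, not the actual script):

```shell
#!/bin/sh
# cull the interesting sshd auth lines out of the rotating logs
# 'messages' is an inline fixture standing in for the real messages + messages.0
cat > messages << 'EOF'
Jan 25 18:06:25 localhost auth.info sshd[1329]: Failed password for root from 81.150.9.251 port 52661 ssh2
Jan 25 18:15:00 localhost cron.info crond[2078]: USER root pid 1807 cmd run-parts /etc/periodic/15min
Jan 25 18:20:01 localhost auth.info sshd[1340]: Invalid user admin from 43.155.60.53 port 40000
EOF

# keep only sshd lines that indicate a failed or invalid login
grep 'sshd\[' messages | grep -E 'Failed password|Invalid user' > messages.sshd
wc -l < messages.sshd
```

The cron side then only ever parses messages.sshd, never the cruft-filled raw log.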


result based on the previous post's log output:

  • 1st automated range block, adding 21 new 43.* ranges (sweet)
  • and the over-range block is 43.128.0.0/9 (bye bye Tencent's qcloud.net)
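As a sanity check on how wide that /9 is: 43.128.0.0/9 covers every 43.x.y.z address whose 2nd octet has its top bit set (128-255). A pure-shell membership test (the helper name is mine):

```shell
#!/bin/sh
# does an IPv4 address fall inside 43.128.0.0/9?
# a /9 fixes the first octet (43) and the top bit of the second (>= 128)
in_43_128_slash9 () {
  o1=${1%%.*}              # first octet
  rest=${1#*.}
  o2=${rest%%.*}           # second octet
  [ "$o1" -eq 43 ] && [ "$o2" -ge 128 ]
}

in_43_128_slash9 43.155.60.53 && echo "43.155.60.53 covered"
in_43_128_slash9 43.20.1.1 || echo "43.20.1.1 not covered"
```

Note the 43.254.* and 43.255.* entries in the log above are also inside the /9, while any future 43.0-127.* allocations are not.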

(This is not specific to SSFW:FragWhare, so I thought I'd post it for those who might find it useful.)

Added a utility script ldom.sh to make it easier to automate backups:

#!/bin/sh
# ldom.sh - created with 'mkshcmd'

if [ "$1" = "--help" -o "$1" = "-h" -o "$1" = "" ]; then
  echo "Last Day of Month"
  echo " outputs the last numerical day of the month"
  echo "usage: ldom.sh [now|_date_]"
  echo "options:"
  echo "  now           as of the current month"
  echo "  _date_        either '2022-02' or '2022-02-25'"
  exit 0
fi

D="$1"
if [ "$D" = "now" ]; then
  D="$(date -Iseconds | cut -d \T -f 1)"
fi

X="$(echo $D | cut -d \- -f 2)"
if [ "$X" = "" ]; then
  echo "error: not a valid date '$D'"
  exit
fi
if [ "$X" = "$D" ]; then
  echo "error: not a valid date '$D'"
  exit
fi
if [ $X -lt 1 ]; then
  echo "error: not a valid month '$D'"
  exit
fi
if [ $X -gt 12 ]; then
  echo "error: not a valid month '$D'"
  exit
fi
if [ $(echo $D | cut -d \- -f 1 | wc -c) -ne 5 ]; then
  echo "error: not a full year '$D'"
  exit
fi

isLeapYear () {
  if [ $(($1 % 4)) -eq 0 -a $(($1 % 100)) -ne 0 ]; then
    return 0
  elif [ $(($1 % 400)) -eq 0 ]; then
    return 0
  fi
  return 1
}

lastDayOfMonth () {
  Y=$(echo $1 | cut -d \- -f 1)
  M=$(echo $1 | cut -d \- -f 2)
  M=$(($M + 0))
  case $M in
    1)  DoM=31 ;;       # January
    3)  DoM=31 ;;       # March
    5)  DoM=31 ;;       # May
    7)  DoM=31 ;;       # July
    8)  DoM=31 ;;       # August
    10) DoM=31 ;;       # October
    12) DoM=31 ;;       # December
    2)  DoM=28          # February
        isLeapYear "$Y" && DoM=29
        ;;
    *)  DoM=30 ;;       # April, June, September, November
  esac
  echo "$DoM"
}

lastDayOfMonth "$D"

exit 0

Besides leap-year checks, it does century checks as well
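A quick cross-check of the same answers, for systems that do have GNU date (stock BusyBox date can't do this, which is part of why the script exists): the last day of a month is one day before the 1st of the next month.

```shell
#!/bin/sh
# GNU date cross-check of ldom.sh's answers for February
date -d "2022-03-01 -1 day" +%d   # non-leap year
date -d "2024-03-01 -1 day" +%d   # leap year
date -d "2100-03-01 -1 day" +%d   # century that is NOT a leap year
```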


FYI: at some point (years ago, while working on another project) I was creating so many .sh scripts per day that I found it easier to create a mkshcmd command to touch, edit (via e link) AND chmod the new command, plus supply generic “shell command” content (the 1st if block), generated into ~/bin/..., automatically for me, all in one go (ie. after editing, the command is usable).

Cheers

Paul

8980 blocked : 0 in last hour : 34 in last 24 hours

With almost 9000 blocked IPv4 addresses, the “redo” script that runs at 58min past the hour to ensure all “processed IPv4 blocks” are actually present in ip route show now takes minutes to finish, AND it is only going to get quadratically longer, because TWO lists of 9000 are cross-referenced.

I just made a tweak so grep stops after the 1st match (-m1).

I could cull the initial output reference (which goes to a file) so that the IPv4 is the only thing present per line, AND do a “pre-lookup” on the 1st number match (eg grep -nm1 ^192.), AND even “sub-cull” those out to a separate input file for grep. But this all adds to processing time, even if it reduces it overall.
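Another middle ground (not what the script does today, just a sketch) is a one-pass set difference: sort both lists once and let comm report what's missing, instead of running one grep per address. The file names here are hypothetical, with inline fixtures so it runs standalone:

```shell
#!/bin/sh
# one-pass diff: which processed blocks are missing from the `ip route show` output?
# processed.txt / routed.txt are hypothetical one-IP-per-line dumps (fixtures for the demo)
cat > processed.txt << 'EOF'
178.128.4.10
43.155.60.53
81.150.9.251
EOF
cat > routed.txt << 'EOF'
178.128.4.10
43.155.60.53
EOF

sort processed.txt > p.sorted
sort routed.txt > r.sorted
comm -23 p.sorted r.sorted    # lines only in processed.txt = blocks to re-add
```

That turns the quadratic scan into two sorts plus a single merge pass.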

A better way would be to not cross-reference at all, and rather just “add the IPv4 block” again, redirecting the errors from ip. ATM that would generate at least 9000 error messages per run, but at least it would be far faster.

The only good thing about this script is that it's using an “inline commandline grep match”, which means that even with 9K-plus lines of input, it's still only taking up “one line” in bytes of memory (which will never get over 24 bytes per line match) on top of what grep and the current shell use.

In the longer term, it would probably be simpler and faster to cross-reference blocked IP addresses if they were present in sysfs (/sys/) somewhere. But that in itself may or may not present its own drawbacks (RAM? Kernel time?).

Oh the Joys of Firewall Fluff

It's not needed for the current (and still only sshd) Gerka, just the hourly Web Server Log scanner, because that's run as the webserver user, and ip (by default) only works for the root user (which is how the Gerkas also run).

updated blocked lists:

9862 blocked : 0 in last hour : 12 in last 24 hours

Made a couple of tweaks over the last week, mostly more grep “.” related. But I did expand the webserver log “crawler”, which is an hourly check of what URLs have been captured. And I did add the sshd kex check before the hourly IPv4 check (that everything logged was actually instantiated/blocked in ip).

The kex blocks really only need to be run once per log file, just before rotation, but I have seen an increase in them, and it was annoying knowing I would miss some if I missed a log rotation (which is about every 10 days atm, but can be every 3 hours when getting hammered).

Did a couple of range blocks on MS Azure 40.* and an over-range block on an AFRINIC 156.224.0.0/11 that was being used from a few different (world) locations to perform sshd attempts.

It's a bit unnerving when the Gerka process stops creating (sshd) blocks, but it's only been just over 12hrs, and those ranges are not part of any Linode network, so I guess they are doing their job (for now), but the webserver log “crawler” is still picking things up.

Could be that Linode have added some upstream protection of some sort, but I doubt that would be the case without an announcement.

A couple of weeks ago I sent 2 “abuse” emails, both happened to still have active servers at the end point at the time of writing. The automated responses were “rotton” to say the least. One was “GoDaddy” where you need to “sign-up” to post an “abuse report” and both provided no way to report the type of abuse I was seeing, their best assumption being “abusive content”.

I know that some places are serious about abuse, often I dont even have to make a report as the IPv4 has been “de-listed” (it goes no where), and the server no longer present (ping, tracert, or DNS).

Might get a change at the Internet Storm Center too. They had listed some IPv4s that are used as "master infection servers" (as opposed to being source originators in log output), but the "comment" contains "- none -". They have the same info I have, and although these particular addresses change based on who is doing what, they do hang around for extended periods, and they never get used for probes, only as the "bounce" or "response" to a specific URL request, mostly targeting certain types of routers that have not been patched yet.

It would just be nice (for most people) to know what an address with such a low security rating (2) was actually getting captured for (they are normally Mozi.a, Mozi.m and jaws "GET" servers that will connect the device to a BotNet or similar command-and-control network).

9933 blocked : 0 in last hour : 10 in last 24 hours

sshd attacks have dropped right off, 2 blocked every 3 days, but the captured web URLs (haxor-access.log) are producing 10 a day (which makes sense, there are more checks now). And no kex entries either.

Overall, I would say (beyond what I have blocked already) sshd attacks worldwide have dropped off, as I can't see ALL of them being routed through Azure. ISC showed a slowdown too when I was there last (which I thought was odd at the time).

It may be that "higher up the router foodchain" (internationally) more blocks have been put in place, but I don't believe that either, as a lot of the "one off" sshd attacks were from random Asian sources.

Who knows, maybe it's just the "quiet before the storm"…

10023 blocked : 0 in last hour : 11 in last 24 hours

Well, the end of last month saw a severe slowdown in the number of sshd blocks produced by the Gerka, and this month (12 days so far) has seen no sshd blocks by the Gerka at all. So I checked the logs and sure enough, no sshd entries for root or invalid user, but still some kex (key exchange) failures.

However, what I do see are the odd "banner exchange: invalid format" entries. These are new to me, but I think they are backdoor overflow attempts (that's a guesstimate, based solely on the fact they fail, and there are no user attempts).

Again, I am puzzled by this lack of attack presence, especially considering the uptick in webserver blocks. I can't believe I have managed to fully block all sshd attackers. Remember that those 10k blocked IPv4 don't include the 700+ ranges. Then again, maybe those nearly 800 ranges are enough to cover most compromised botnet devices and virtual machine services (but I highly doubt that).

EDIT: hmm… those banner entries match the pid (process ID) of the kex failures - "banner line contains invalid characters". However the blocked kex IPv4 don't equate to those associated messages…
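
That PID match can be verified mechanically: build a pid-to-IP map from the lines that do carry an address, then label the banner errors with it. The syslog-ish layout below is assumed, not the box's exact format:

```shell
#!/bin/sh
cat > /tmp/sshd.log <<'EOF'
sshd[2211]: error: kex_exchange_identification: Connection closed by 203.0.113.5 port 40110
sshd[2211]: error: banner line contains invalid characters
sshd[2212]: error: banner line contains invalid characters
EOF

# For each line: remember the sshd PID, map any IPv4 seen to that PID, and
# on banner errors print the PID plus its (possibly unknown) source IP.
result=$(awk '
{
    if (split($0, a, /sshd\[/) > 1) { split(a[2], b, /\]/); pid = b[1] }
    if (match($0, /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/))
        ip[pid] = substr($0, RSTART, RLENGTH)
}
/banner line contains invalid characters/ {
    print pid, ((pid in ip) ? ip[pid] : "unknown")
}' /tmp/sshd.log)
echo "$result"
```

A pid that resolves to "unknown" is the case described above: a banner error with no kex line carrying the address.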

10138 blocked : 0 in last hour : 10 in last 24 hours

1 sshd attempt before log rotation 2 days ago, and 1 today, so the Gerka is definitely still working. A couple of kex blocks as well.

Still, a weird month, with more regular kex blocks than sshd password attempts, and those don't include the malformed banner entries either.

POST /scripts/WPnBr.dll HTTP/1.1

That's something new in the captured haxor log. Guess they are looking for Windows-based webservers? Shame the logs don't capture the POST data, but then again it could be megabytes long, so no point in worrying about it; just nice to know it still captures new stuff properly.

A note here: basically the only reason I can capture these is because I don't have POST set up on the downstream server, so it always returns a 405 error. I guess I could add a .dll check to the URL checks for when I do finally have POST set up. Not sure how well that would go down on a WSL server; would definitely have to test before making further presumptions.
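
The .dll check could bolt onto the URL checks as something like this. The combined-style log line and the haxor-access.log name are illustrative, not the project's exact format:

```shell
#!/bin/sh
# dll_sources LOG: unique source IPs that requested any *.dll path.
dll_sources() {
    grep -Ei '"(GET|POST|HEAD) [^"]*\.dll' "$1" | awk '{ print $1 }' | sort -u
}

# Fabricated demo log (combined-style format assumed):
cat > /tmp/haxor-access.log <<'EOF'
203.0.113.44 - - [01/May/2022:10:00:00 +0000] "POST /scripts/WPnBr.dll HTTP/1.1" 405 559
198.51.100.9 - - [01/May/2022:10:05:00 +0000] "GET /index.html HTTP/1.1" 200 1024
EOF
dll_sources /tmp/haxor-access.log
```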

10449 blocked : 0 in last hour : 8 in last 24 hours

Well, it's been a while (46 days since the last post), but the sshd attacks are finally back in full swing, since 11am UTC on the 9th of June. They were never fully gone; there were more than a few kex entries blocked in that time, plus that 1 sshd attempt, but it might have been close to 60 days of near silence in total.

EDIT: just checked the end of the block list, and it's all quiet again by 6am UTC on the 10th of June.

Nothing else to report atm, except that the web server log generated blocks are "raging a storm". The power situation here is fixed, but the weather has caused issues, so no further work done yet…

1 Like

10803 blocked : 7 in last hour : 69 in last 24 hours

still racking up the web server blocks, but since 27 July sshd attacks have ramped up again (from 0-ish), with the last week of logs being 90% sshd based blocks. Over the past month there has been an increase in attacks from Serverion, a Netherlands based server farm operator that also hosts non-European IP addresses (e.g. Russia), and has a direct connection to OOO SibirInvest in several countries. It almost looked like someone was targeting their servers for staging, as there were only one or two attempts recorded before another address from a different location (but the same operator) took over.

The over-range blocks (besides the various ones mentioned above) are mostly new DigitalOcean ranges and new Microsoft Azure/Cloud assigned server addresses (i.e. more of the usual).

The only other real news is that the ISC (Internet Storm Center) stopped being functional a couple of weeks ago (maybe the weekend of the 4th of July), which I only noticed after trying to contact staff about a possible new Mozi.m style infection server URL, spreading something called aqua.mpsl (with multiple architecture binaries found at the URL), very similar in size and attack type (presumption) to the jaws vectored z0r0 binaries found at the URL supplied in those web attacks.

EDIT: I did spend a couple of days putting together a page that makes it easier (visually) to decide if a block-range or over-range block is necessary, as it references everything in the log file (which gets 0'd each month), not just one type of block, which is what I have been using for xref up until now. This also allowed me to examine possible control panel integration with other more commonly used web hosted applications or suites (e.g. pihole). (ED - this is working out really well, highly useful, even if the process is still manual)
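
One slice of that consolidated view can be approximated by grouping the blocked addresses by prefix, so heavy /16s jump out as range-block candidates. The file name and flat one-IP-per-line format are hypothetical:

```shell
#!/bin/sh
# range_candidates FILE: count blocked IPv4s per /16, highest counts first.
range_candidates() {
    awk -F. '{ n[$1 "." $2]++ }
             END { for (p in n) print n[p], p ".0.0/16" }' "$1" | sort -rn
}

# Fabricated demo data:
cat > /tmp/blocked_ips.txt <<'EOF'
178.128.1.10
178.128.200.4
178.128.63.9
203.0.113.5
EOF
range_candidates /tmp/blocked_ips.txt
```

With the real list, the top lines of this output are essentially the same "should this be a range block?" question the page answers visually.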

EDIT 2:
I did find a bunch of 0 length files floating around in various places on the file system, but research showed it was (probably) due to the Alpine Linux Web Administration being started by default (I turned it off by default after I found a couple of entries). It's protected by password, but it appears that a valid user is not checked until after the file is created (and hence no content is recorded). I need to look into this with a new Alpine Server release, as this app is quite complex in how it hides actions from the web user. (Knowing me, I just found another bug, but we will see.)

1 Like

So it's been about a week since the last post, around the beginning of the month, which is when the logs are scrubbed (copied to an archive folder, then emptied or deleted - depending), and even though things were starting to change before then (a sudden uptick in sshd attacks), it has become even clearer that something else is going on, as the webserver blocks are way, way down, about a third of what they normally are, while the sshd blocks are up about 300% (if 100 blocks a week were regularly performed).

So what's going on here; is there any way to "divine" what is happening? Well maybe… but it's not due to anything directly related to what's in the log files, it's more about what's not there, and what we have learned from the last 12 months (yes, I did mention starting the project before #devember2021).

Most of web server attacks are either:

  1. from an infected source (part of a command and control network ), or
  2. command line tools (aka script kiddie attacks )

As it happens, a certain percentage of (normal?) sshd attacks fall into similar groupings (where script kiddie attacks try multiple usernames in rapid succession), along with the regular "security probe" (web) services (note that a lot of this data ends up in the wrong hands for monetary gain).

Without taking into account the possibility of the hosting service applying some form of screening, service blocking based on analytics, or IP region blocks, it appears there was a re-organisation of co-opted "command and control" networks.

Basically, whoever was running their own C+C either lost control of their network, sold it, or went through an organisational change; either way, "they are back in business".

On the webserver side, it is obvious when a "script kiddie" is trying to get in - 80 different URL paths from the same IP address. Webserver attacks from compromised units or C+C networks are not like that; they know the URLs they are trying to access are device specific, so there is at most one probe (possibly 2) from the same IP address.
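
That heuristic is easy to compute: count distinct request paths per source IP and eyeball the high counts. A sketch, assuming combined-format logs where the request path lands in field 7:

```shell
#!/bin/sh
# paths_per_ip LOG: print "count ip" (distinct request paths per source IP),
# highest first; a big count suggests a script-kiddie scan, a count of 1-2
# looks more like a targeted C+C probe.
paths_per_ip() {
    awk '{ key = $1 SUBSEP $7
           if (!(key in seen)) { seen[key] = 1; n[$1]++ } }
         END { for (ip in n) print n[ip], ip }' "$1" | sort -rn
}

# Fabricated demo log:
cat > /tmp/access.log <<'EOF'
203.0.113.7 - - [01/May/2022:10:00:00 +0000] "GET /a HTTP/1.1" 404 0
203.0.113.7 - - [01/May/2022:10:00:01 +0000] "GET /b HTTP/1.1" 404 0
203.0.113.7 - - [01/May/2022:10:00:02 +0000] "GET /c HTTP/1.1" 404 0
198.51.100.2 - - [01/May/2022:10:05:00 +0000] "GET /shell HTTP/1.1" 404 0
EOF
paths_per_ip /tmp/access.log
```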

Although I say above that webserver attacks are down to a third of their regular rate, 20 different IP addresses over a 7 day period is "quite normal". Those are the sorts of numbers (20-25 per week) I was seeing around the time after I started applying range-blocks.

As mentioned in a couple of previous posts, there looked to be some sort of organisational use of hosting services, particularly rotating through them, which may indicate that over the previous 2 months they were using those services again, and since their IP ranges are blocked, I don't see anything in the log files of the SSFW server.

As a side note, this leads to a problem covered in the initial couple of posts on both this project page and the original post thread - how to detect "threat level" at the interface, after blocks have been applied, so that the "threat response" can be lifted, or made permanent (i.e. don't check it again).

As you can see, there are no real answers here; we are "guesstimating" based on previous data, and research of that data, and that is the point at which some form of high level (or AI) analysis-over-time application would come in handy. Without a bigger data set, like that collected by the Internet Storm Center (ISC), a visualization tool like those used for repository commits might be more practical in the interim.

It is worth noting that a similar "down time" appeared over the Christmas - New Year period. School holidays have just finished in certain parts of the world, but that only accounts for a 2 week period (over the last month), not 2 months.

Well, I guess you might classify this as more of a "gut feeling" assessment rather than a thoroughly fact-based assessment, and some might even say that's just a "knee jerk reaction".

But it is clear there was a shift about 2 months ago, and there has been another shift again.

EDIT: with the research required for IPv4 range-blocks, there are some details that come through over time that portray a certain scene.

  1. certain attacks from South American countries are performed by the same person.
  2. certain attacks from Southeast Asia are performed by Chinese nationals.
  3. certain attacks from Ukraine, Crimea, Estonia are performed by Russian supporters.
  4. Dutch / Holland / Netherlands is still the highest individual attack source vector.
  5. England and USA are right up there with China and Russia as "threat antagonists".

EDIT2:
I suggest that any tool, more than just visualization, needs to assess not only a larger data set for any given time period, but also needs to xref geolocation and news story correlations, as that may show why an associated shift occurs when it does.

EDIT3:
after checking 50 IP addresses of the 300 newly blocked sshd spam, it appears that the majority of them are from "dial-up" service providers all around the world (200 services in 150 countries), and those that are not are "hosting" services in Russia or Hong Kong, and to a lesser degree Korea, Sweden, the Netherlands, UK & US (yes, there are a few universities in there), as well as multiple from Israel, Iceland & Iran.

Bumped with Brutality Bonus updated in the OP

get your "MWHahaha" shirt ready, and be prepared to assume the position chanting "I'm not worthy" if you are on the end of one of those entries :wink:

Geez, this would have been so much more effective if everyone was still on dial-up, but then again no one bothered to waste their valuable bandwidth hacking via web URLs (not to mention the months it would have taken to upload the 4.5GB file to the server for the Brutality Bonus to be of use).
:slight_smile:

OK, it's only been a few days since the previous 2 posts, but the new (consolidated) IPv4 block list view is making range-blocks and over-range blocks really easy to "see".

What's to note from today's range-block and over-range session (12-ish blocks): yes, they come from many places around the world, but there is often a pattern to the "probes", 3-5 days apart, 1 sshd and 1 webserver, each from a different IP within the same service (sub-net), AND the majority (50-60%) are from mobile (i.e. cell phone) service providers.

In the US, there is another round of FranTech / PonyNet attack sources. And more international Amazon AWS attack sources, this time India (last time it was Hong Kong).


For those who are interested:

I use CentralOps Domain Dossier (linked in output) to look up and cross-reference IPv4 haxor sources.

If I want to see some history of that same IPv4, I use the ISC Internet Storm Center (also linked in output; click the IPv4 in the resulting page to see the current history list).

If I want to check what ASN is assigned to a sub-net, I then use the DNSlytics IP2ASN API, and find other (sometimes associated) sub-nets assigned to them with Dan's BGP Lookup Tool.


So within 2-3 clicks I know where the IPv4 is (as well as whether it's still active, and if so, what basic services are running), who owns the sub-net, who runs that part of the sub-net, what the hostname and any aliases to that same server (if any) are, where the domain name owner is and its location (if different), and (visually) whether a sub-net over-range block (110.36.0.0/13) should be applied instead of just a sub-net block (110.39.34.0/22).
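
That visual over-range decision can also be checked numerically: a /22 sits inside a /13 exactly when both share the same top 13 bits. A plain-shell sketch (the function names are mine):

```shell
#!/bin/sh
# ip2int DOTTED: convert a dotted quad to a 32-bit integer.
ip2int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_range INNER_CIDR OUTER_CIDR: succeed if INNER lies inside OUTER.
in_range() {
    inner_ip=${1%/*}; inner_len=${1#*/}
    outer_ip=${2%/*}; outer_len=${2#*/}
    # INNER must be at least as specific as OUTER, and both network
    # addresses must agree under OUTER's mask.
    [ "$inner_len" -ge "$outer_len" ] || return 1
    mask=$(( (0xFFFFFFFF << (32 - outer_len)) & 0xFFFFFFFF ))
    [ $(( $(ip2int "$inner_ip") & mask )) -eq \
      $(( $(ip2int "$outer_ip") & mask )) ]
}

in_range 110.39.34.0/22 110.36.0.0/13 \
    && echo "110.39.34.0/22 is inside 110.36.0.0/13"
```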

This is the part (combined with the IPv4 block-list and what each entry was blocked for) where I want to apply some AI or other algorithmic automated block analysis (and in my case, automated over-range blocking).

This is really quite complex, not due to the data that needs to be analysed, but due to the fact that different whois nic services (RIPE, AFRINIC, etc) have differing data sets, or differing data construction sets.

Some of the US, UK and EU registries also apply privacy tags to the data, which makes tracing and tracking more difficult, and sometimes it's impossible to get a direct result without doing a search engine query.

For example: AKAM.NET. This is Akamai (the new owner of Linode, BTW), but there is no server or IP address assigned to that domain; however it is used for sub-domains, most of which are nameservers. And you can't get that info without doing a search engine query, as "Akamai" does not appear in any of the network records.

2 Likes

just spent a couple of hours blocking another bunch of ranges and over-ranges. This time they were just about all hosting services, and looking at the previous 2 sessions of range blocking, there is a concerted effort by individuals who are abusing network service owners (as opposed to compromised routers used in botnets).

What do I mean by this:

  1. OVH is a French hosting service provider; today it was their Canadian network, yesterday it was their French network, last week it was their (can't remember) network, last month it was their Singapore network, etc

  2. I have not blocked any Japanese ranges for a while (since the sshd slowdown 2 months ago), but today I blocked cnode.io in Japan, and cnode.io in Thailand.

Again, these are not multiple probes from a single IP address; they are multiple single probes from different IPv4s inside the same sub-net (inside the same server farm and/or DMZ - either way, the same physical service location).

I think today there was only one mobile related range block, and the IPv4 sources were dated yesterday (I ran out of time halfway through the list last night).

So there is definitely a concerted effort, and I would say coordinated as well, in an orchestrated way, as opposed to an organised associated way, i.e. more like how cells work: common goal, specific non-overlapping tasks, but without collusion between them (except maybe in the collecting of the resulting data).

On top of the obvious Russian (Eastern bloc) and Chinese (SE Asia) efforts, there seems to be another group that has (possibly) spent the last month (hence the lack of sshd attacks) developing a more in-depth strategy based on previous experience (e.g. I've mentioned blocking just about all of DigitalOcean near the beginning of this thread). The various Serverion and Hostinger and some other Dutch sources starting to show up gave me a heads up that something like this was happening in the background, but based on the numbers and dates in my logs, "it's fully under way", whatever that strategy is…


I mentioned DigitalOcean above not only because of the original abuse pattern I saw (at least 1 IPv4 from nearly every range they owned, assigned to different locations around the world), but also because today I found a new range in which one IPv4 was registered to a .ru service location. I saw similar network owner abuse at the beginning of the year with FranTech and their sub-net partner services (another PonyNet range was blocked yesterday).

EDIT: I found another new DigitalOcean range in the logs, but the sub-net is spread all over the place. One IPv4 traceroutes through New York City, another through a Telstra (AU) Singapore-China router, another through a different Telstra (AU) Singapore router (not China). I've seen this before, but I can't remember if it was a DigitalOcean IPv4 address I was researching.

FYI :
These are Telstra Global Southeast Asia backbone and peer routers, some of the biggest in the area. Telstra is Australian owned, and Australia is part of Five Eyes.

The Nginx web server at the end of the Singapore-China connection has a Quisk SSL certificate, “Quisk is a global technology company that partners with financial institutions and others to digitize cash and provide safe, simple and secure financial services and cash-less transactions for anyone with a mobile phone number.”

1 Like