Tool for Monitoring Linux Server Logs and Access?

The title is a bit generic, but ok. At work, I'm in charge of managing all of our customers' Linux servers.
I've grown into this position over time, and a lot of those servers were not configured by me. Since attacks are ramping up and exploits get tested ever faster (sometimes minutes after discovery), and a lot of those servers are, by design, accessible from the internet, I really want to step up my game in terms of keeping those things safe.

Now, I don't want to talk about security best practices here. I'm working to retrofit a lot of stuff on existing servers and have an extensive list of things I do on new servers to keep risk to a minimum. But this only goes so far.

I'd really like to put a system in place that monitors my servers for security-relevant events.
I already maintain a Nagios system to watch resource usage and services. This works as intended and makes sure the servers are up and healthy. A coworker installed Nagios Log Server a few months back, but the trial has expired and my company isn't sure yet whether to pay for the license.

What I'd like to have is a tool that reports any and all login attempts to a server (SSH, FTP, etc.), maybe reports on installed versions of relevant packages (Apache, nginx, PHP, etc.), and maybe even alerts when a newer version is available. Additionally, I'd like to be able to log/monitor access to certain sub-sites. Say the server has WordPress installed. In that case I'd like to see the number of logins to server.com/wp-admin.
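
For the WordPress case, even a quick one-liner over the access log shows roughly what I'm after (just a sketch; the log path and format obviously differ per server, and logins actually hit wp-login.php):

```
# count WordPress login attempts (POSTs to wp-login.php) per day
# from a combined-format access log -- the path is just an example
grep 'POST /wp-login.php' /var/log/nginx/access.log \
  | awk '{print $4}' | cut -d: -f1 | sort | uniq -c
```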

What would you set up to manage all this? I'm looking around at various options and am willing to test a lot of them; I have enough resources available for that. I'd prefer something that offers at least a free tier or a longish trial for testing purposes. Open source would be nice, but I'm not too picky if it gets the job done nicely.
Anything you can recommend, or have personal experience with?

1 Like

Not sure if anyone is interested in this kind of stuff, but here goes.

I've quickly set up a small Docker test system (4 cores, 4 GB RAM, 100 GB HDD).

I'm now running two systems for testing: an ELK stack (Elasticsearch, Logstash, Kibana) and Graylog.
It took some firewall configuration and such, but so far, so good. On 3 servers I have deployed Filebeat to ship to ELK and rsyslog to send to Graylog. Both are receiving data, and I immediately got notice of an error in one of the MySQL tables. Nice.
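
For reference, the rsyslog side is just a one-line forwarder (sketch only; the host and port are placeholders for whatever syslog input you configure in Graylog):

```
# /etc/rsyslog.d/60-graylog.conf -- forward everything to Graylog over TCP
# (@@ = TCP, a single @ would be UDP)
*.* @@graylog.example.com:5140;RSYSLOG_SyslogProtocol23Format
```

Drop that in, restart rsyslog, and the messages start flowing.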

I'm now looking into what either of those can do and how I can tear through the data most efficiently. Even with only 3 servers and basically just syslog set up, I'm averaging 500-1000 messages per minute. The Nagios server logging every check isn't helping here.

2 Likes

I don’t have any recommendations (sadly), but seeing as how I’m hopefully growing into a security position at my current job, I’d like to see what’s available too. I hope you can get some suggestions soon!

It also sounds super awesome what you’re doing with that Docker image, keep us updated!

@Novasty was talking about a web based monitoring system last night. Not sure if it will do what you want and I can’t remember the name off the top of my head. I’m sure he can chime in here.

Cockpit, @oO.o was playing with it also.

Cockpit is like if webmin and netdata had a less capable baby. I like it occasionally but it’s not appropriate here.

I'm actually curious about a good solution as well, because I don't really have one other than manually configuring email notifications.

1 Like

This question might be better off asked in the sysadmin thread, where more knowledgeable eyes can see it.

1 Like

Yeah maybe link the thread in there. @SgtAwesomesauce or @Eden might have better ideas.

You’re basically asking for a SIEM but with no budget? :smile:

The answer to that is probably down the road you're already travelling: ELK and co. It'll take pretty much anything you can throw at it, but the problem, as you probably already know, is actually doing something with the data.

It doesn't really do notification out of the box, though; you can set up triggers and feed them into your alarm system.
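
e.g. a dumb cron or Nagios check that queries Elasticsearch and complains when a count crosses a threshold (rough sketch; the index pattern, field names, and threshold all depend on how your beats are set up, and it assumes jq is installed):

```
#!/bin/bash
# Count "Failed password" events from the last 5 minutes and use Nagios-style
# exit codes, so the existing Nagios setup can do the actual alerting.
HITS=$(curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/filebeat-*/_count' -d '{
    "query": { "bool": {
      "must":   [ { "match_phrase": { "message": "Failed password" } } ],
      "filter": [ { "range": { "@timestamp": { "gte": "now-5m" } } } ]
    } }
  }' | jq -r '.count')

if [ "${HITS:-0}" -gt 20 ]; then
  echo "CRITICAL - $HITS failed SSH logins in the last 5 minutes"
  exit 2
fi
echo "OK - $HITS failed SSH logins in the last 5 minutes"
exit 0
```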

check_mk can do some stuff, but Elastic is honestly probably more flexible, if you're happy putting the pieces together yourself.

We've been using McAfee for our SIEM stuff, so I've not really touched much else recently.

On your package question: you want config management in the long run, in my opinion.

I'm sure @AnotherDev probably has some good insights.

4 Likes

Never said no budget; the budget just isn't determined yet. I would need some kind of trial to sell the company on the product.

In terms of packages: I'm working to consolidate configurations with Ansible, but with a lot of existing systems, it's a lot of work.
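
For now I mostly fall back on ad-hoc Ansible calls to at least see which versions are deployed where (just a sketch; the group name is an example, not a real inventory):

```
# quick and dirty version survey across a host group
ansible webservers -m command -a "nginx -v"
ansible webservers -m command -a "php --version"
```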

Not sure how to monitor for versions with vulnerabilities other than to maintain a “database” of those myself…

2 Likes

A vulnerability scanner can keep an eye on vulnerable packages and systems.

On Linux systems you can also routinely check for security updates via the package manager and have it report back to you which packages have security-related updates. You could use yum-cron, for example, or a similar method, or go via central management like Foreman, Katello, and co.
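
e.g. on the RHEL side, something as small as a cron.d entry that mails the pending security updates is already useful (sketch only; the recipient is a placeholder, and on Debian/Ubuntu you'd lean on unattended-upgrades/apt-check instead):

```
# /etc/cron.d/security-report -- mail pending security updates once a day
0 6 * * * root yum -q updateinfo list security 2>/dev/null | mail -s "$(hostname): pending security updates" admin@example.com
```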

You could probably go more complex, but you might only need to keep an eye on major internet-facing software for vulnerabilities that need manual fixes before a patch is out.

You could check this automatically as well via your own methods, or probably another tool.

You can also place a WAF or similar protections in front of your web-facing services; this can mitigate some issues as well.

How much you apply really depends on the risk, what's there, what it's for, the data behind it, etc.

Edit: we use McAfee SIEM for security monitoring. It's… McAfee. I guess it does the job; I've not heard from anyone who's in love with it.

1 Like

I would do ELK, absolutely. Elasticsearch is super powerful. Just be prepared to allocate lots of resources to *shudder* Java.

As far as access goes, it's hard to monitor; it will really depend on the application. We can go more in depth.

As far as monitoring vulns go:

  • openSUSE: zypper lp (grep for security)
  • RHEL: yum updateinfo list security
  • Ubuntu: /usr/lib/update-notifier/apt-check
  • Gentoo: glsa-check

I'd throw that into a cronjob and log it, set up Filebeat, and set up a Kibana filter.
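
Something roughly like this (a sketch only; the counting is approximate, and the log path is whatever you point Filebeat at):

```
#!/bin/bash
# Log a rough count of pending security updates in a greppable format,
# so Filebeat can ship it and Kibana can filter on it.
LOG=/var/log/security-updates.log

if command -v zypper >/dev/null; then
    COUNT=$(zypper -q lp --category security 2>/dev/null | wc -l)
elif command -v yum >/dev/null; then
    COUNT=$(yum -q updateinfo list security 2>/dev/null | wc -l)
elif command -v glsa-check >/dev/null; then
    COUNT=$(glsa-check -t all 2>/dev/null | wc -l)
elif [ -x /usr/lib/update-notifier/apt-check ]; then
    # apt-check prints "updates;security-updates" on stderr
    COUNT=$(/usr/lib/update-notifier/apt-check 2>&1 | cut -d';' -f2)
fi

echo "$(date -Is) host=$(hostname) pending_security_updates=${COUNT:-unknown}" >> "$LOG"
```

Cron that across the fleet and the Kibana side is just a filter on pending_security_updates.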

That said, I wonder if we can’t make our own tool, if you find those to be lacking. I feel like that’s a noble undertaking and my company would probably be willing to let me work on that in my spare time.

4 Likes

Just keep in mind that this will only list security updates for packages; it won't identify vulnerabilities in configuration or deployed code.

1 Like

Right, well, better than nothing.

Ok, sounds awesome. So I'm on the right track and “just” need to figure out what I want monitored and how to tear through the data of around 40-odd servers in production.

1 Like

Honestly, my dude, that's the fun part.

If you set up your importers correctly, this won’t be terribly difficult to filter.

Yes, absolutely. Do the things that can be done now; I just meant to literally keep it in mind. You can document what you haven't covered and why.

1 Like

Nah breh, just build up a three-node cluster and then have a forwarding node. Or use Logstash as a load balancer. :metal:

As @Eden implied, you can set up alerting through ELK. But I would get Graylog or Splunk or Datadog (bleh) if you want some serious alerting and log monitoring.

2 Likes

Yeah, that’s still a boatload of mem.

Too expensive for my blood.

1 Like

Maybe. We do 2 GB across the board, but the forwarder doesn't ever hold data.
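
For reference, the heap is pinned in jvm.options (path depends on how Elasticsearch was installed), so 2 GB across the board just looks like this:

```
# /etc/elasticsearch/jvm.options (package-install path; just an example)
-Xms2g
-Xmx2g
```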