Tool for Monitoring Linux Server Logs and Access?

It’s also ELK as a Service lmao.

We all dislike Java, but literally all the tools we use are in Java :smile:

1 Like

My company’s deployment is closer to 32GB of memory, but we’ve got 8 nodes per region and approx. 500K entries/hr across our whole system.

I really need to ask the devs to turn down the verbosity.

2 Likes

No shit dude. Elasticsearch, Jenkins, Solr…

2 Likes

You shouldn’t be using Jenkins when GitLab-CI exists.

1 Like

I think that’s the key. We grab Tomcat, IIS, and custom logs from muh proprietary apps.

Java vs Ruby :stuck_out_tongue_winking_eye:

Plus, I like how extensible Jenkins is compared to GitLab-CI (from my limited exposure to it, anyway). I can write everything in .sh or .ps1 for Jenkins whereas I think you always have to use YAML for GitLab.
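To be fair, the GitLab YAML is mostly a thin wrapper around shell anyway. A minimal `.gitlab-ci.yml` along these lines (job name and script path are made up) just calls out to an existing script:

```yaml
# Hypothetical .gitlab-ci.yml: the YAML only declares the pipeline;
# the actual work can still live in a plain shell script.
stages:
  - build

build-job:
  stage: build
  script:
    - ./scripts/build.sh   # your existing .sh, unchanged
```

So the YAML is unavoidable, but it can stay tiny.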

Also Groovy is… well… Groovy :wink:

Resource usage isn’t a major concern. Our current Nagios VM is 12 cores and 48G of RAM, and our Nutanix cluster has a bit of spare room. Since I’m planning to run the whole stack on Docker, scaling with demand shouldn’t be a big problem.
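As a rough sketch of what a single-node test stack on Docker could look like (the image tag and ports are assumptions, check the current Elastic docs), a minimal compose file might be:

```yaml
# Hypothetical docker-compose.yml for a single-node ELK test deployment.
# Security is disabled here for simplicity; don't do that in production.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

Scaling up from there mostly means splitting Elasticsearch out into a real multi-node cluster with proper heap sizing.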

Luckily, with most servers, I can do that myself. And certainly should.

Also, I highly appreciate you all chiming in here. That’ll be super helpful!

2 Likes

Sideline question: what would be the best (least terrible?) combination of tools to accomplish this entirely FOSS?

ELK or Graylog with Grafana.

3 Likes

I think I’d agree with this. It’s probably the most ‘complete’ solution without having to put serious effort into bolting different pieces together.

The Elastic backend also gives it a bit more flexibility in terms of what can pull data out of it: basically anything.

1 Like

@domsch1988, what variety of distros/OS’s are you working with across your 40 servers?

Literally everything.
Because Nutanix only certifies CentOS and Ubuntu, CentOS makes up most of them. Some are Ubuntu, and some are “older” servers we took over when customers came to us, mostly SUSE and RHEL.
So far, I’m pretty happy with CentOS.

1 Like

That’s not too bad. Any outlier BSDs or esoteric Linuxes?

Not really. At my company we are 50 people. I’m the one who brought Linux knowledge into the company, and with that, we started acquiring customers for this. We make sure that everything that gets set up new is as close to standard as possible. And most customers don’t know enough about Linux to care.
The wildest thing is probably my now-and-then Arch Linux workstation :wink:

1 Like

As others said, for logs it’s hard to beat the ELK stack, but it’s very expensive: disk, CPU, and RAM requirements are high, especially if you have high log volumes.

For vulnerability scanning, Vuls is pretty decent.

And probably OSSEC for HIDS.

1 Like

If you can’t afford Splunk, like others mentioned I’d do ELK. I have not built an ELK stack, but I imagine there are some constants/common “do’s”, such as deployment apps to be pushed. For example, installing a Splunk indexer and forwarders is just part one; you also need apps/add-ons/deployment app classes etc. so those *nix machines get crons set up, audit paths established, execute bits and audit levels set, so you actually get valuable data. You’re not going to catch that security-relevant event if the OS isn’t even set to log it, and to a place the SIEM can monitor.
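The ELK-side equivalent of rolling out Splunk forwarders would be something like a Filebeat config pushed to each box (the paths and Logstash host below are placeholders, not from this thread):

```yaml
# Hypothetical filebeat.yml: ship syslog and auth logs to a Logstash node.
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/syslog
      - /var/log/auth.log

output.logstash:
  hosts: ["logstash.example.internal:5044"]
```

Same caveat applies as with Splunk: if auditd and friends aren’t configured to write the events in the first place, shipping the log files won’t save you.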

Frankly, after using Splunk for a long time in the past, I would prefer to hire someone with expertise in Elasticsearch before paying more money to Splunk.

Also, Kibana + Grafana kicks most paid systems’ asses in terms of search and graphing.

I need to homelab some ELK to diversify, I’ve likely been on the Splunk train too long.

2 Likes

Grafana is great. I’ve been using it for a good year now to replace the stock Nagios dashboards.
I also do more advanced business dashboards for our Sales team based on our CRM data.