I wouldn’t call it massive or complicated; you can check for yourself by looking at the official KWin sources on KDE’s GitLab.
OVH: “VPS vps2020-essential-2-4-80”
Correct.
My clients are mostly me and a few people on a casual Discord; like I said before, it’s nothing “serious”.
I know it’s not making a difference, but it’s kind of about principles: I’m not interested in doing anything with Cloudflare unless I’m paid for it.
Too expensive for me. The entire point of using Gitea and fooling around with this stuff is so I can have a playground to work with Linux on the server side and not have to pay for GitHub premium.
Anyway, thank you everyone for the help and tips. I think we can consider this issue solved.
If you don’t mind, could you update with the outcome? I have some private Gitea and other services that I want to open up, so I’d like to hear whether robots.txt worked for you or whether you needed more steps (rate limiting, CrowdSec, fail2ban).
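To illustrate what I mean by those extra steps, here’s a minimal sketch (assuming nginx in front of Gitea on its default port 3000; the zone name and rates are just example values):

```
# robots.txt at the site root: politely asks all crawlers to stay away.
# Well-behaved bots honor it; aggressive ones won't, hence the rate limit below.
User-agent: *
Disallow: /
```

```
# nginx: throttle per-IP request rates.
# In the http {} context:
limit_req_zone $binary_remote_addr zone=gitea:10m rate=5r/s;

# In the server/location block that proxies to Gitea:
location / {
    limit_req zone=gitea burst=20 nodelay;
    proxy_pass http://127.0.0.1:3000;
}
```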
Welcome to the internet in 2024. Crawlers are the least of your issues. Once this stops, you will see a trickle of real attacks coming from across the world, systematically testing known vulnerabilities in web apps and web servers.
If you don’t want to spend the time required to host a web service publicly, then don’t do it. Do what sensible people do: host privately and share with friends via a VPN (Tailscale).
Just to add a counterpoint: security best practices don’t always matter. We’re just talking about a small VPS for playing around with some git repositories that had a bit of trouble handling normal bot traffic, not mission-critical infrastructure managing private information under attack from a sentient adversary.
I’d encourage anyone to grab a cheap VPS and run some open services. Experiencing issues along the way first-hand and resolving them is a great way to learn.
To the people saying (and thinking) “idgaf, this is not critical infrastructure”: I just want to raise that the maximum damage a threat actor can do post-compromise is to leverage your infrastructure for their own agenda (until a three-letter agency comes knocking). It’s not only someone else having access to your data that is at risk.
Do whatever doesn’t keep you awake at night, but for any public-facing infrastructure I’d recommend keeping the web server logs in case they’re needed down the line.
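For example, a logrotate sketch that keeps roughly a year of compressed logs (the nginx paths and retention period here are assumptions; adjust to your distro’s defaults):

```
# /etc/logrotate.d/nginx: keep ~1 year of compressed access logs
/var/log/nginx/*.log {
    weekly
    rotate 52          # keep 52 rotated files
    compress
    delaycompress      # leave the newest rotation uncompressed
    missingok
    notifempty
    sharedscripts
    postrotate
        # tell nginx to reopen its log files
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```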
Do you ever watch those YouTube videos where someone tries to see what happens when you take a Windows 98 machine and expose it to the internet?
Automated bots from across the world discover it within seconds and start hammering the OS with known exploits, until literally 30 seconds later your OS begins to grow like a cancer.
With specialized AI potentially being developed, or already deployed, to do this, it gets even more important to prevent zero-day exploits by not compromising at any step of the way.
It’s brutal, but this is the world we’re now living in.
Denial of Service (DoS) is effectively that: a service becoming unavailable due to some unforeseen circumstance. If multiple IP addresses are involved, it’s Distributed. You don’t need a botnet running the attack for it to be a DDoS, nor does the attack have to focus on leaving connections open. I’ve tested for DoS conditions on many websites, and the most interesting ones were through HTTP queries.
With all of this said, I would say Amazon isn’t really trying to target you in particular, so it’s not an “attack” as such, but you’re still left with a DoS condition. As people have suggested, limiting access through robots.txt and firewall-blocking IP addresses will help you a lot in this situation.
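As a sketch of the firewall side with nftables (the CIDR below is a placeholder; pull the real crawler ranges from your logs or the provider’s published IP lists, and adjust the table/chain names to your ruleset):

```
# Collect noisy crawler ranges in a set and drop them in one rule
nft add set inet filter crawlers '{ type ipv4_addr; flags interval; }'
nft add element inet filter crawlers '{ 198.51.100.0/24 }'   # placeholder range
nft add rule inet filter input ip saddr @crawlers drop
```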
Security practices apply in the context of what you’re doing. Would you walk onto a construction site without a helmet because nothing’s going to happen? A helmet IS a security practice. I wouldn’t wear it on the street, but I’m quite sure I’ll put it on once I get to that construction site.
That definition is so broad that it renders the term utterly useless… If two regular users visit a site and a third times out because the server is busy, is that a DDoS? No… The term DDoS implies a sentient attacker using multiple systems in a deliberate attempt to overwhelm a service. That is not at all what was happening here, which is why DDoS mitigations were inappropriate as a solution.
Sure… but that’s not the context. Anyone who rents a VPS will be handed at least a semi-current setup, or the vendor would be out of business in short order. The same goes for most, if not all, software that listens for incoming connections.
I just don’t see any point in spending time setting everything up to be airtight rather than focusing on whatever it is you’re actually interested in. If and when security issues are encountered, the motivation to deal with them will arise naturally, and the first-hand experience gained may or may not motivate future pre-emptive security measures as appropriate, rather than measures taken just to comply with dogma.
A DoS attack is an attack; DoS itself is not. The term is mostly used to describe an attack, but it’s also common to say that service X “DDoSed itself”, e.g. when a shop starts a sale that garners so much interest that normal users flood in and exceed the expected traffic, leading to a DoS.
This quick writeup from JL Aufranck nicely summarizes how bot spam or a real DDoS can be handled by a SaaS proxy like Cloudflare.
But what is really nice is the GoAccess analysis tool; I really wish I had had that a month ago. It’s a much nicer overall assessment of what’s happening to your poor server than a mishmash of plain bash scripts for log parsing.
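For anyone who wants to try it, something like this produces a standalone HTML report from a stock nginx access log (the log path and format are assumptions for a default setup):

```
# One-off HTML report from nginx's default combined log format
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html

# Or watch it live in an interactive terminal dashboard
goaccess /var/log/nginx/access.log --log-format=COMBINED
```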