The product is not really aimed at me, but I think I get it. I don’t know how much caching and slop/over-provisioning there is in Linux, but I’m sure it is measurable…
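(It is measurable, for what it’s worth. A quick sketch using nothing but the kernel’s own accounting, no special tooling assumed:

    # how much of the box's RAM is really just reclaimable page cache
    free -h
    grep -E 'MemTotal|MemAvailable|^Cached' /proc/meminfo

The buff/cache column is memory the kernel hands back under pressure, and MemAvailable is the honest number for what a new workload could actually get.)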
Personally I don’t have many memory worries, but that’s because I dumped too much RAM into my system, so I’d have plenty.
(4 GB should be fine, 8 GB to run all my apps; I use ZFS, so 16 GB should be enough, but I have 32 GB to remove any worry)
But my all-in-one router/NAS/server runs VMs and containers.
Almost all of them are over-provisioned and wasteful of resources “just in case”: the Pi-hole doesn’t need 2 GB of memory, etc.
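(The DIY version, at least for containers, is just putting a hard ceiling on each one. A sketch with Docker; the 512m figure is my guess at a sane Pi-hole cap, not a tested number, and a real deployment needs the usual port/volume flags I’ve left out:

    # cap the container instead of letting it grab whatever it likes
    docker run -d --name pihole --memory=512m pihole/pihole

    # roughly the same thing in docker-compose.yml:
    #   services:
    #     pihole:
    #       image: pihole/pihole
    #       mem_limit: 512m

But doing that per service, by hand, for everything is exactly the tuning work nobody bothers with.)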
The waste on my box is just a couple of sticks of RAM, already paid for. If I were more efficient, I might be able to run a bunch more machines on it, but I don’t really need to.
What the dude’s product does is let you min/max a machine’s memory and, as advertised, save costs by deploying fewer servers with smaller memory demands.
The app is a product, and allows remote monitoring and graphing (possibly even overriding each deployment’s learned limits?).
But these remote tools require extracting data from your machines.
And the product needs to be licensed, so checks need to be made that one doesn’t buy once and use it on all the machines.
Like, I am fine with paid open-source products; they can be forked if the price gets excessive. Software that simply works on Linux is rarer on the desktop than on the server, but I would not be surprised if these guys could legit save companies real money with it.
I know systems can be slimmed down to run on potatoes, but this might legit be a way to run slimmer without having to specifically tune for each of the dozen different tasks that a score of temporary VMs might have been spun up for in some corporate cloud?
Title is clickbait tho. (Small boo&hiss)