I wrote a thing for my startup and I am proud of it

Here’s the article: How I Saved $5,000/mo with a $5 Droplet - Earthly Blog

But seriously, Docker was hot on our tail for that sweet, sweet service account money, claiming we were doing 120k+ pulls per day. Never mind that this was mostly manifest requests…

After the cheap cache, we only show a couple hundred pulls per week to Docker, and the mafia (I mean sales) guy backed off. Won't even return my emails now! Success!
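For anyone wondering what the cache itself looks like: the stock `registry:2` image supports running as a pull-through cache out of the box. A minimal sketch (the hostname `cache.internal` is a placeholder, not from the article):

```shell
# Run the stock registry image as a Docker Hub pull-through cache.
# REGISTRY_PROXY_REMOTEURL is the env-var form of the registry's
# proxy.remoteurl config option.
docker run -d --restart=always -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# On each client host, point dockerd at the mirror via
# /etc/docker/daemon.json, then restart dockerd:
#   { "registry-mirrors": ["http://cache.internal:5000"] }
```

After that, pulls hit the cache first and only fall through to Docker Hub on a miss, which is why the pull counts Docker sees drop so dramatically.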


Wow, good read. That’s a great strategy and some impressive savings!

About 18 months ago, I actually implemented something similar at my company, but we did it per environment, because we were concerned about regular Docker Hub outages and image pull times… The first container pull was slow, but we got wire speed for all the rest, and that significantly improved our performance.


You reminded me of the time we set a pruning policy on our active repository… and then deleted most of the images used in production. We had a panicked couple of hours ssh-ing into all the k8s nodes, hoping they had the pieces we needed so we could put them back in the repository.


Oh, that’s horrifying. At my current company we’ve got a pruning script that accounts for active images. Jenkins kicks it off once a week, and a typical report is 45-60 images and ~90GB removed.
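The core trick is just a set difference between what the registry stores and what's actually running. A hand-rolled Python sketch (not the actual script; in reality the two sets would come from the registry API and from querying the cluster, here they're stubbed as plain sets):

```python
# Sketch of "prune everything except active images" logic.
# stored_images: tags present in the registry (stubbed set).
# active_images: tags referenced by running workloads (stubbed set).

def prune_candidates(stored_images: set[str], active_images: set[str]) -> set[str]:
    """Return images that are safe to delete: stored but not in use anywhere."""
    return stored_images - active_images

stored = {"api:v1", "api:v2", "api:v3", "worker:v7"}
active = {"api:v3", "worker:v7"}  # what's actually running in prod

print(sorted(prune_candidates(stored, active)))  # ['api:v1', 'api:v2']
```

Building the active set from the cluster first, and refusing to run if it comes back empty, is what prevents the "deleted everything in production" failure mode from the story above.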

That sort of stuff is really small potatoes when it comes to our datasets (we just removed 2.3PB, yes PB, of data from our S3 buckets this month).