Internalizer for Chocolatey packages

Here is a script I have been developing on and off for the past couple months.

It internalizes/recompiles chocolatey packages from chocolatey.org so they have the software installer included rather than downloaded separately during install.

It is more or less functional, although not yet stable or reliable. Also, the selection of packages supported is somewhat limited.


Seems interesting, but out of curiosity, is this for the Free or Paid version? Because the paid version also has local repositories you can manage on the network, which seems to me to solve a similar issue?

Free.

The chocolatey local repository is free, and you can also use a network share.

There are also other options for free local repositories.
https://inedo.com/proget
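Once you have any of these running (or even just a file share), pointing choco at it is a couple of commands. A hedged sketch; the source name "internal" and the share path are placeholders, not from this thread:

```shell
# A plain folder or SMB share works as a free local source.
choco source add --name="internal" --source="\\fileserver\choco-packages"
choco install some-internalized-package --source="internal"
```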



This is intended to work alongside a local repository. Some chocolatey packages do not have the installers in the chocolatey .nupkg file, and download them during install. This downloads the installers and puts them into the .nupkg, which is then put on a local repository.
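Conceptually, the manual version of that workflow looks something like the sketch below (this is a hand-rolled outline, not the actual script; the package name, version, and repository URL are placeholders):

```shell
# Fetch the community .nupkg directly (a .nupkg is just a zip archive).
curl -fsSL -o somepkg.nupkg "https://community.chocolatey.org/api/v2/package/somepkg"
unzip somepkg.nupkg -d somepkg
# Download the installer that the package would normally fetch at install
# time, place it in tools/, and edit tools/chocolateyInstall.ps1 so it
# installs from the embedded file instead of a remote URL. Then repack
# and push to the local repository:
cd somepkg && choco pack && cd ..
choco push somepkg/somepkg.1.2.3.nupkg --source="http://localrepo/nuget/internal" --api-key="..."
```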


Huh interesting… I always thought the local repositories were a Pro/Business Edition thing…

On the topic of this, I made a compact list of directions and cut-down template files to make creating local packages and setting up a local repository easier. Would actually welcome some feedback on it (I’m going to go through @TheCakeIsNaOH repo in a bit)


I’m new to chocolatey. In smaller networks (up to 10 or up to 50 installs), does just putting the temp dir on a Samba share do the job of saving most of the bandwidth? Or would a squid setup work?
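For reference, choco does expose its download cache directory as a setting, so it can at least be pointed at a share. A hedged sketch (the path is an example, and whether cached installers actually get reused across machines depends on the individual package):

```shell
choco config set --name="cacheLocation" --value="\\fileserver\choco-cache"
```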

Do any of those previously mentioned choco servers do request deduping … e.g. when more than one machine starts to fetch a new version of something, does it only get fetched once?

The term generally used is caching proxy, but yes, all but the chocolatey simple-server do. So Proget, Nexus, and Artifactory all do. Strongbox is still under development, so it technically exists but is buggy ATM.

If you are setting up chocolatey on a bunch of machines, a local repository is great. Also, there is some software that does not allow redistribution and does not have public download links, so chocolatey.org does not have it: for instance Adobe CC, some Office versions, etc. With a local server, packages for those can be created and used.

Proget is what I use. It just works, and was very easy to set up with Docker. It is not open source, unlike Strongbox and Nexus. It has a native Windows version, and a Linux version running on Mono in a Docker container.

Also, all of these need databases, so expect 1.0-1.5 GB of free RAM to be required just to start any of them up. The RAM requirement will scale with the number of packages and feeds.

Cool.

I was actually just wondering purely about the cache, and redistributing those adobe/office files internally.

I’ll try sharing TEMP caches first.

Because choco will validate the file hashes, the cache itself doesn’t need to be all that trustworthy, and something simple like nginx with a WebDAV directory could probably handle the job in 20 MB of RAM on your OpenWrt router, or running in a container alongside Home Assistant on a Raspberry Pi (both are capable of serving HDD data at tens of megabytes per second to a client or two).
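The nginx side of that could be as small as the config fragment below (a sketch, assuming nginx was built with `ngx_http_dav_module`; the port and paths are examples):

```nginx
server {
    listen 8080;
    root /srv/choco-cache;

    location / {
        # WebDAV uploads from whatever populates the cache
        dav_methods PUT DELETE MKCOL COPY MOVE;
        create_full_put_path on;
        dav_access user:rw group:rw all:r;
        autoindex on;   # plain directory listing for browsing the cache
    }
}
```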

Except for maybe the coordination aspect of deduping fetches, or “singleflight”-ness. E.g. machine1’s choco does a cache lookup, finds it’s missing some Adobe redistributable, and starts to download; machine2’s choco does the same lookup, finds the same file missing, and should wait on machine1 to populate the cache.
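On a Linux cache host, that coordination piece could be sketched with a per-package lock via flock(1) from util-linux. `fetch_once` and the cache path are hypothetical, not part of any choco tooling:

```shell
#!/bin/sh
# Single-flight fetch: concurrent requests for the same package serialize on
# a per-package lock file, so only the first one actually downloads.
CACHE="${CACHE:-/srv/choco-cache}"

fetch_once() {
    pkg="$1"; shift          # remaining args: the command that produces the file
    mkdir -p "$CACHE"
    (
        flock 9              # first caller proceeds; later callers block here
        if [ ! -f "$CACHE/$pkg" ]; then
            "$@" > "$CACHE/$pkg.tmp" && mv "$CACHE/$pkg.tmp" "$CACHE/$pkg"
        fi
    ) 9> "$CACHE/$pkg.lock"
}
```

In real use the command would be something like `curl -fsSL <installer-url>`; the point is that machine2’s request simply blocks on the lock until machine1 has populated the cache, then finds the file already there.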

I also found an IPFS support issue: https://github.com/ipfs/package-managers/issues/34
which would be nice, except for the bootstrapping problem: how do you mount IPFS without a package to install “mount ipfs”, and vice versa?

That is probably going to be the sticking point. Also, there are some packages that do not download to the cache and instead do their own thing; for instance, the nvidia driver package downloads to %temp%/nvidia or something.