Thinktank away!

I’m having problems at home right now. I can’t move the USB drive from one RPi4 to the other; apparently it doesn’t work even though it’s the same hardware. And I can’t use the same RPi4 in the other case, because the one I use has some extra features the other one hasn’t, so it’s not registered.

So I thought, why not install something newer on the other one and start copying files over to that system from the old one? That might work… OR NOT. They’re older files, and they don’t work with new updates of the applications they’re used for. So I have a big problem here. I thought, why can’t Linux have its base install and the applications in boxes that can be copied and applied to another computer? It would be so damn simple.

The only thing I can think of is to install the applications in a VirtualBox environment and save the file for later use, but then again, that’s going to use more resources than a Raspberry Pi can handle. What am I supposed to do?! My stationary computer is basically always on, so it could act as a server, but I don’t have any space left on it. I thought, well, I could reinstall it with Linux and just install the things I need as background applications; maybe that would be a good thing, but then again, if something happens, I’ll have to reinstall everything again. So maybe I could create a VM after all… It seems to be the best alternative, but how many cores do I have to dedicate to a terminal-based Ubuntu installation? The Raspberry Pi 4 was fast enough for one bot, and later on I had six bots on it; it can handle a couple of info bots without any problems, but when it comes to command bots…

I really hate this. When I’m on Linux, I want to play, and then there’s some game that seems fun, but I want to stream at the same time, and I don’t use OBS, I use SLOBS, which doesn’t work on Linux yet since they prioritized Mac… So I end up sticking to Windows. When I’m on Linux, I want to be on Windows; when I’m on Windows, I want to be on Linux…

Anyway, I need an easy way to handle my bots, so easy that when I reinstall the computer, I don’t have to reinstall the bots, but can just start them up again. I don’t want to waste time on the reinstall, the configuration, and the error messages.

So… your primary use case for a raspberry pi is some “bots”?

…and you wish they were containerized to begin with, but because they’re not, you’re having to do extra work to migrate them over to a different machine?


Btw, there’s also docker for windows that’ll run Linux containers in WSL2 VMs.


I’m sorry, but I think you need to heavily reword your post, because I couldn’t figure out what the issue is here.

When it comes to porting software between different architectures, as risk said, Docker is your best bet. Especially because you can back up your containers and deploy them with a one-line command.
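As a rough sketch of what that backup/restore can look like (the image name `mybot` is a placeholder, not anything from this thread):

```shell
# On the old machine: export the bot's image to a tarball
# ("mybot" is a hypothetical image name).
docker save mybot:latest -o mybot.tar

# Copy mybot.tar over (scp, USB stick, etc.), then on the new machine:
docker load -i mybot.tar
docker run -d --name mybot mybot:latest
```

Persistent data still has to be carried over separately (usually a mapped directory), but the application itself moves as a single file.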


(From what I can gather from your post, it’s a mess, please correct me if I’m wrong)

You have two rPi4s: one with some USB devices, and the other one with some setup.

Your first problem is that the rPis behave differently with that USB device (“it only works on one of the Pis”).

Then you tried to copy the filesystem from the other rPi to the one with the USB device (didn’t work).

You also run some bots on the rPis, and you want them to be portable (to be able to switch hardware etc., like in the scenario above).

Solutions and suggestions

I don’t know about the USB devices, but if they require a bit more power, that might explain the difference in how the rPis handle them. Try adding a power adapter to the USB device, or a powered USB hub in between.
Without more information (such as dmesg output when it works/doesn’t), it’s hard to say.

Next up, cloning a Linux installation.
I’m not entirely sure what files you tried to copy, or how. Please elaborate.
In general, it’s not a good idea to copy individual “system files” between different distributions/versions of Linux. You can, however, easily copy the entire system by copying all files (it is just files, after all; just make sure to preserve permissions etc.).
With application data it’s somewhat dependent on the specific applications you’re using.
But usually you can “upgrade” easily between versions, while you often can’t “downgrade”.

There are various ways to achieve what you’re trying to do.
First off, there is this concept of “containers”: separate Linux installations (or just parts of one) that can be launched, copied, stopped etc. just like virtual machines, but without the hardware cost (e.g. you don’t “give” RAM to a container; a container is more like a process that “uses” RAM like everything else).
This is very close to what you want, you should read up on LXC or Docker.

Alternatively, you can always just write a shell script that configures a fresh Linux machine to your liking (installs and configures software automatically). This combines well with containers, and is what Docker Compose does, although you can often achieve the same with just some shell scripts.
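For illustration, such a setup script might look like this (the package names, repo URL, and paths are all hypothetical placeholders for whatever your bots actually need):

```shell
#!/bin/sh
# Provisioning sketch for a fresh Debian/Ubuntu install.
set -e

# Install the runtime the bots depend on.
sudo apt-get update
sudo apt-get install -y git nodejs npm

# Fetch the bot and its dependencies (URL is made up).
git clone https://example.com/mybot.git /opt/mybot
cd /opt/mybot
npm install

# Restore saved configuration from a backup location.
cp /backup/mybot/config.json /opt/mybot/config.json
```

Re-running one script like this after a reinstall beats redoing every step by hand from memory.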
This makes you less dependent on specific software-versions, or a pre-created and never updated root filesystem, but is a little harder to do.

Both solutions require minimal resources. Linux containers have no CPU overhead, and just minimal RAM overhead. If you really want to, you can easily run 50 containers on a Pi.

Also keep in mind that using a package manager might be all you need, if your applications are available on your distribution. Then you just need to copy a few config files, if that (can be done by script).
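On Debian/Ubuntu-based systems, carrying the package list itself over can be as simple as (assuming apt on both machines):

```shell
# Old machine: record the packages you installed on purpose.
apt-mark showmanual > packages.txt

# New machine: reinstall that exact list.
xargs -a packages.txt sudo apt-get install -y
```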

I would refrain from using VirtualBox on the rPi (or Linux in general); it won’t run nicely, and if you’re running Linux inside VirtualBox, it’s completely pointless, because containers are just better.


It’s not only bots; I want ALL applications to be movable. Much like I think the future of games is probably that they’re either streamed or bought on a flash memory medium, such as an M.2 with top-of-the-line write protection, meaning you can only read from the medium: can’t format it, can’t write, can’t copy, can’t move.

ATM it’s bots, yes, but in this case it’s not Linux or the computer that is the biggest irritation; it’s the applications that use the bots that are the even bigger problem, since they keep changing, and I just want the bots to work and do their part. One of the bots actually broke in the process of copying it over, since it was registered to another computer or something, and because of that it doesn’t work as it should.

If I want to move a ready package between computers, servers etc., then the best choice is probably to use VMs, right? That’s the only movable medium that doesn’t change regardless of which host it’s on. But since a Raspberry Pi is already strained by the bots’ resource usage, that’s a whole other problem. It’s not movable from Pi to Pi because of the hardware IDs that change even if it’s the exact same machine, proven by the fact that I can’t just copy the stuff I already have on one Pi to the other without getting a very negative response from the other Pi.

I can’t use VMs on the Pi since they’re too resource intensive, but I could create a LiveCD and move that around, because that would run bare metal on the Pi. But that’s too late now, since the bots broke yesterday and I’ve been trying to figure out what happened to them.

Then I’ll have to look into Docker and learn about that. Thanks.

I will then try to learn about Docker, and see what I can do.

This is Linux; we don’t do that type of bu*****.

There also is no registry, or other “hidden global state”, or built-in DRM.
Yes, your Raspberry Pi has some identifiable features, like hardware serial numbers and MAC addresses.
But your OS doesn’t care at all.

I don’t know what kind of applications you’re running, but most applications won’t either
(with the exception of some copy-protection schemes, but I doubt somebody wrote that for some bots running on a rPi4).


I tried to upgrade the distribution, but somehow I can’t install anything on that version; it’s blocked from new applications. The internet works fine, but it won’t even upgrade the system. That’s why I bought two Pis, so that I could migrate everything.

Yes, it’s a mess. I noticed while writing it that it’s badly formulated, since I’m trying not to talk about the Pi as the centrepiece. It’s the applications that I want to have migratable. In Windows it’s kind of easy; it’s called a portable installation. It doesn’t connect to the system, it just uses its resources, and you can have it on a USB stick or lying around anywhere in the system.

Some of the bots are ‘partly’ portable; they can be moved from system to system. But the other ones cannot be moved in the same manner; even trying it broke the bot connection, which I have yet to fix. So I’m thinking maybe it’s better to skip the Pi entirely and create a VM that is then movable to another system, as long as it’s the ‘same’ version of, say, VirtualBox.

People have been pointing me in the direction of Docker, so I will look at that. Back when I was 17 and really enjoyed Linux, I started with Slackware, and I installed /home separate from the rest of the system in order to upgrade and install freely. But newer Linux distros are a bit more interconnected now than they were almost 20 years ago, and I barely use Linux anymore, which is why it’s a hazard for me to try to fix the bots. It might be once or twice per year, and I end up forgetting all the commands for the bots and Linux and have to relearn everything, since goldfish memory…

This would then be kind of like Windows Portable applications? Sounds like it.

It barely runs nicely on my computer, so I have no plans to even try it on a Pi that has far fewer resources.


Are the containers like sandboxes in the system, or are they actual containers like a zip file? Because if they’re like a sandbox, then it’s probably the same problem anyway: not very movable. I like game emulators; they can take a zip, rar, tar or whatever other container file and read it as-is, whether there are 1 or 50 files in it. It probably unpacks it for the time being, but I would very much like a container to be just that, to act like an emulator, because then it would truly be movable without any hazard.

I’m going to read up on docker and LXC. Thanks.

… You can change some config files, basically, but not how the bot works, unless you are a really good programmer, in which case you could probably rewrite the whole thing. It’s one of the most extensive bots out there, with self-hosting, and the dipshits that help people with it won’t even help me because it’s installed on a RPi4… “It’s not supported”… as if hardware has anything to do with how the damn bot works on Linux or Windows. I figured out how to make it work on a Raspberry Pi, and I got it working, but with LOTS of tweaking; so much, even, that I had to write a guide while learning myself. Which was really damn irritating, since I messed up and forgot to write down some of the steps regarding the changed files. Because yes, not only did I have to create a custom environment for the damn thing, I also had to change the bot’s files in order to have it work as I needed it to.

Here’s why I think it’s a hazard. I didn’t save an unchanged version of it, and even if I had, it wouldn’t work… 19.10 → 20.04 LTS. Ubuntu Server isn’t really my go-to environment, since I hate fixing things in a damn terminal that could go really fast with a graphical interface. The group creating the bot is really good at adding and changing stuff, which means that if you have made changes, you can’t have it auto-update: it’s not going to add text, it’s going to overwrite features and files, and the old files are for an old version and it’s very different now. While I was changing things yesterday, I fudged up by updating the original bot, so it’s no longer connecting to what it was doing, and while that happened, some of the other bots also felt like crashing since they got connected to the new system. So I have half the bots working on each Pi now, and I’m not very happy, since I thought it would be a quick intervention and then I could start using the other Pi for something that is much more fun than an autonomous brick.

EDIT: The case I have for the old Pi is an Argon40:One, and the new one uses an Argon40:Neo. The reason I have to switch systems is the chassis: the fan in the ONE is not working as it should, and since the Ubuntu Server is 19.10, it’s too old to get the fan install file re-downloaded. Which means that computer has a case that could have been used for my new Pi, which I have extra M.2 parts for. This is why I’m thinking in terms of movable applications between different hardware; Docker might be the answer, I don’t know. On top of this I have a neighbour who disturbs my thoughts about this problem with his personal problems that he is incapable of taking care of himself.

… Say what?! I started this…

“You should have at least three months of programming.” I think I found the wrong tutorial…

Docker seems to be an image emulator for applications without an OS. How does it work with the application dependencies, then? Do I need to install node and npm and pm2 and all of that? If I have to build the environment for the application in the OS, then it’s still counterintuitive. A VM is probably the best option; as I said in an earlier post, it’s probably easier to move a VM file from system to system than anything else, and all you need to install is the VM and import the image, nothing else. I only need one anyway for the six or so bots, and I can change the resources based on the computer hardware, since it’s not RAM intensive, it’s CPU intensive.

I find that everyone internalizes the word “container” slightly differently, depending on their own circumstances and reasons for starting to use them.

The Linux kernel supports “namespaces” for various things: filesystem namespaces, network namespaces, user namespaces and so on.

Most “container” things end up with some file-based payload and some metadata for things like Docker to read. Docker then manages fetching/deleting files and ordering things around in some staging directory, where it creates an environment for software to run, and then pokes the right syscalls in the right order to end up running the software in that environment (basically it’s just chroot on steroids).

For server-based things (services), Docker containers (run via docker run) are the “standard” thing to do these days.

The model forces you (heavily incentivizes you) to explicitly declare and think about how you’re keeping state, and where. Usually this is separate from the software itself and from the host OS, which makes backups, restores, and moves across machines much easier. The software running in the container usually doesn’t need, nor does it have, access to the entire host OS. It runs on the same kernel, but the kernel usually maintains a whole set of separate namespaces for the software running inside.

Docker also makes people ‘layer’ the software.

Typically, folks end up with a Dockerfile in a git repo that starts from some Ubuntu/Debian or Alpine Linux base (Alpine is fairly compact), and then they add various things from various places on top in order to produce an image.
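A minimal Dockerfile along those lines might look like this, assuming a Node.js bot (the file names and base image are illustrative, not from this thread):

```dockerfile
# Start from a compact Node.js-on-Alpine base image.
FROM node:18-alpine

WORKDIR /app

# Copy dependency manifests first, so this layer is cached
# between builds as long as the dependencies don't change.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Then layer the bot's own code on top.
COPY . .

CMD ["node", "bot.js"]
```

`docker build -t mybot .` then turns this recipe into an image that can be run, saved, and moved around.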

This means the Linux kernel can share the in-memory copy of a library across containers if the containers share any layers.

When running said container image (a bunch of layers of files that Docker manages), the expectation is that the end user will map directories from the host machine that are backed up into some place within the container where the image expects them. People usually check those instructions into the repo next to the Dockerfile.
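Concretely, that host-directory mapping is the `-v` flag (the paths and image name here are hypothetical):

```shell
# Run the bot image, mapping a backed-up host directory into the
# path where the image expects its state.
docker run -d \
  --name mybot \
  -v /home/pi/mybot-data:/app/data \
  mybot:latest
```

With state kept in /home/pi/mybot-data, moving the bot to another machine is just copying that directory and re-running this command there.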

It’s virtually impossible to guarantee forward compatibility of your Dockerfile/software/your own changes without some kind of continuous automated testing… people have these setups too; they’re kind of easier to set up if you don’t need to make VMs.

For desktop apps, there’s Snap by Canonical and Flatpak by Red Hat. I don’t use Linux on desktops, so I don’t know how these work. From what I’ve heard there’s a spec, and the expectation that the package maintainer will maintain/support yet another “standard” runtime environment.

There’s also distri, Stapelberg’s toy distro. Here’s an interesting blog post that might go into some detail: Hermetic packages (in distri) (2020) - Michael Stapelberg

For plain old bare-metal distros, the approach that usually works is some combination of package lists, files to overlay on top (config or data or whatever…), and scripts to tie everything together in whichever order. Sometimes these “scripts” come in the form of puppet/ansible/salt/whatever config files, sometimes people build their own packages (e.g. gentoo overlays :slight_smile: ), and sometimes it’s just a bunch of “instructions”/“wrote my own guide” type of things.

Taking the first step here is usually the hardest - looking at your carefully tuned machine and finding diffs compared to some “standard” setup…

Luckily, most package managers offer some way of checking installed file integrity and listing which files come from where. So if you have files scattered around that don’t come from any package, or files that differ, you can probably start with a list of explicitly installed packages plus a backup of those modified files.
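On a Debian/Ubuntu system that diffing can be sketched with dpkg (the config path in the example is arbitrary):

```shell
# List files whose checksums differ from what their package shipped,
# i.e. your local modifications.
dpkg --verify

# Find out which package (if any) owns a given file.
dpkg -S /etc/nginx/nginx.conf
```

Files that `dpkg -S` can’t attribute to any package are the ones you installed or generated yourself, and are the first candidates for a manual backup.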

Maintenance of software has traditionally always taken as much effort as initial development. The more you fork away from upstream, the less you can benefit from upstream maintenance.
