Poll - How do you host your Docker/Podman containers/services?

  • More VMs/LXCs hosting one or a few services each
  • A few bigger VMs/LXCs hosting lots of services each
0 voters

Question for the community here – how do you tend to host your Docker/Podman containers/services?

Do you tend to have separate VMs/LXCs running one or a few services each or do you tend to have one big(ger) VM/LXC, where it runs lots of services?

If you run a bigger VM/LXC where you run lots of services inside of it, do you worry about the risk that if the VM and/or LXC goes down, it takes all of the services that you’re running down with it?

Conversely, do you think that running lots of smaller VMs/LXCs is less efficient?

I’m trying to think through the pros and cons of each approach: I want to avoid “putting all your eggs in one basket,” but I also want to be more efficient with resources (CPU, RAM, disk space (even if LXCs are relatively tiny), and IPv4 address space).

Right now, I am running separate LXCs for the services that I am hosting, but there is a thought experiment about merging them all into one big(ger) LXC, and quote “putting all of those eggs into one basket”.

I don’t know if that’s a good idea or a terrible one, given that if the LXC has a problem, it will take all of those services down along with it.

Thoughts?

You need another category: no VMs, everything in its own container. Did you know you can have a container that runs other containers? It’s containerception. Why waste resources virtualizing a second host when one is all you need, with an army of containers?

So, for instance, you could run Docker in an LXC container, then have Docker run more containers.
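For illustration, a minimal sketch of that on Proxmox (CTID 101 is a made-up example, and the exact packages are assumptions):

# On the Proxmox host: allow nesting so the LXC can run Docker inside it
pct set 101 --features nesting=1,keyctl=1
pct start 101

# Inside the LXC: install Docker and run application containers as usual
pct exec 101 -- bash -c 'apt-get update && apt-get install -y docker.io'
pct exec 101 -- docker run -d --name web nginx:alpine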

1 Like

I don’t know how many people actually use LXCs, in practice/production environments.

I would surmise that for cloud providers, chances are, they are deploying VMs for the clients, and then the client can do whatever they want, inside that VM (vs. running LXCs directly on the host itself).

Yup. As I mentioned, I am running a Linux Container (LXC) that runs the Docker (application) containers for the services that I am hosting.

To this end, I’ve also thought about skipping the LXC part and running Docker on “bare metal” (i.e. on the Debian 12 that underpins Proxmox), but at that point the Proxmox middleware may not be needed at all, to be honest.

Of course, the downside with this is that if I am running the Docker (application) containers on “bare metal” (directly on the Debian 12 that underpins Proxmox), then I can’t back them up easily using the Proxmox Backup Server (because PBS still doesn’t back up the Proxmox host).

And at that point, I could also run TrueNAS Scale, or really any other Linux distro, as a bare-metal install, since I wouldn’t be using any aspect of Proxmox if I’m deploying Docker containers straight onto the host itself.

That’s also technically another option. Backing it up to PBS would be significantly more difficult, though (vs. having the Docker (application) containers inside the LXC, where I can back up the LXC), and if I want the data to be on persistent storage, I can also create a bind volume mount for the container that points to a location outside of said LXC.

That’s what I’m doing now. (Proxmox host → Linux Container (LXC) → Docker (application) container(s).)

I have a few LXCs that are only running maybe like 5 Docker containers, if that.

But again, the thought process is “what would be the advantages of migrating those Docker containers over to a single, big(ger) LXC, vs. running a bunch of smaller LXCs?”

The two advantages that I can think of would be:

  1. It would technically take up less disk space (especially since I tend to deploy the Ubuntu 22.04 LXC template over and over again). For the most part, the disk space usage is relatively minor/trivial, as the root disk just needs to be big enough to host the LXC itself; all other data is passed into the LXC via Proxmox bind mounts (see the sketch after this list), which are then passed into the Docker (application) containers as volume bind mounts. So, I’d save a few GB with each LXC I get rid of/consolidate.

  2. It would save on IPv4 address space, as I wouldn’t be pulling a separate IPv4 address for each LXC. Where this becomes more challenging is that if I want open-webui to be accessible on port 80 and InvokeAI’s web interface to also be accessible on port 80, I won’t be able to do that if I consolidate the two LXCs into one.
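For reference, the bind-mount setup mentioned in point 1 is only a couple of commands (a minimal sketch; the CTID, paths, and image name are made-up examples):

# On the Proxmox host: expose a host directory inside CT 101
pct set 101 -mp0 /tank/appdata,mp=/mnt/appdata

# Inside the LXC: pass that path into a Docker container as a volume bind mount
docker run -d --name myservice -v /mnt/appdata:/data my-service-image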

I have been taught that DNS doesn’t really handle/resolve ports, and part of the original idea behind pulling individual LXCs for services is so that the web interface for whatever service I’m hosting can have its own port 80, to make getting to it easier for the rest of my family (so that they only need to remember the service name rather than a bunch of IPv4 addresses and port numbers).

Cloud providers do both; you can rent a container or a VM from most cloud providers (AWS, Google, DigitalOcean…).

What??? Docker and LXC both create a network for containers, so that each container can have its own IPv4 address; suffice it to say there is plenty of private IPv4 address space. So outside of NAT considerations, I don’t see this as an issue.

Isn’t this the job of applications like Apache and nginx, in conjunction with DNS servers, when set up as a reverse proxy: to make many services appear as part of the same web page? You have a domain name, example.com; then you can set up a local DNS server to register your domain name and any CNAMEs you want, like jellyfin.example.com or immich.example.com, that all point to your nginx, Apache, or proxy of choice, which proxies the CNAMEs out to the correct IPs and ports of your services. There is no need to combine any services manually into custom containers, especially in a cloud setting where, when doing high availability, you need to scale the number of containers for each service based on the load of that service.

Idk what you’re using, but most containers I run are the size of the application in question plus any support files. Most container images are not bigger than 1 GB, and you only need one image to deploy multiple instances of said application, like nginx. Also, most container images use overlay layers to effectively share any common components.

I didn’t know that. Interesting.

(When I’ve priced out a GCP instance, it tends to state that it’s a VM rather than a Linux container.)

Sure, if you’re only thinking about the internal Docker network.

What this statement shows is that you’re not thinking about the --network host option.

(cf. Networking using the host network | Docker Docs)

In the example that is shown in the Docker example/documentation, if you want to have MULTIPLE nginx hosts serving different web pages on port 80, per your statement/comment, you can deploy multiple nginx containers – that’s not a problem.

However, if you then try to reach those containers via http://localhost:80, you will only connect to one of them. That assumes, by the way, that you even managed to deploy multiple nginx containers all bound to port 80 simultaneously (which you probably actually can’t do, but for the purposes of this discussion, let’s assume that you can): typing http://localhost:80 on the host system where said nginx Docker containers are running will still only connect to one of them.

Therefore, if you want to host different web pages via multiple instances of nginx, each host LXC will need a separate IP address.

Of course, you’re more than welcome to provide the deploy script if I’m wrong (where I can point multiple web UIs to the same http://localhost:80 address); that would be awesome!

You can try deploying four nginx containers, all with the --network host option and then try and see if you can get to all four nginx containers at the http://localhost:80 address.
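For example, roughly what happens (container names are arbitrary; this is a sketch, not a recommendation):

# With --network host, each nginx tries to bind the host's port 80 directly
docker run -d --network host --name web1 nginx
docker run -d --network host --name web2 nginx
# web2's container starts, but the nginx inside it exits with something like
# "bind() to 0.0.0.0:80 failed (98: Address already in use)",
# so http://localhost:80 only ever reaches web1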

I’d be interested in learning how one would go about executing this.

I’ve done that.

But if you’ve deployed jellyfin.example.com on port 80 (cf. -p 80:8096) and immich.example.com also on port 80 (cf. -p 80:2283, Source: Environment Variables | Immich), then going to http://<<IP_address_of_LXC>>:80 will only serve ONE of the two services. In fact, even if you have defined local DNS entries for jellyfin.example.com and immich.example.com, typing them into the address bar of your browser will likely reach one of the two services but be unable to reach both, even though you have your local DNS defined.
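The same conflict shows up with published ports on a single IP (a sketch; ignoring the other pieces Immich needs to actually run):

docker run -d --name jellyfin -p 80:8096 jellyfin/jellyfin
docker run -d --name immich -p 80:2283 ghcr.io/immich-app/immich-server
# The second run fails with something like
# "Bind for 0.0.0.0:80 failed: port is already allocated",
# which is why each service ended up in its own LXC with its own IP address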

I think that you are confusing Linux Containers (containerised Linux distros) vs. Docker (applications) containers.

I’m not quote “combining any services manually into custom containers”. i.e. I’m not dumping all of the services that I am running into one giant docker-compose.yml file.

I’m not certain that you understand the difference between starting multiple Linux Containers (which in Proxmox would be pct start <<CTID>>, if doing it via the host’s shell) vs. starting multiple Docker (application) containers, whether via multiple sudo docker run commands or via (multiple) docker-compose.yml files (you don’t have to name them that, but that’s an example), which you can separate out into different directories, one for each service that you want to run.

I’m not sure that you understand this distinction between running multiple Linux Containers vs. running multiple Docker containers, all within one Linux Container.

Again, you can literally try deploying multiple nginx containers, all within the same Linux Container, and you can try and see if you’re able to reach all of the nginx Docker (application) containers, where you could be serving different web pages via each of the nginx Docker (application) containers.

You’re more than welcome to try. If you’re able to pull this off, I’d love to learn from you in terms of how you managed to reach multiple nginx Docker (application) containers from one http://<<IP_address_of_LXC>>:80.

I thought that with scalable services, it will spin up more containers and then load balance, by hostname, no?

Isn’t that what like Docker Swarm and Kubernetes, etc. are supposed to be able to do or help you scale up and/or out? (cf. Docker Swarm: A Tale of Vertical and Horizontal Scaling | by Avnish Kumar Thakur | Medium)

Again, it is important to distinguish between Linux Containers (LXCs) vs. Docker (application) containers.

Varies.

The kasm/firefox Docker container itself is around 2.5 GB of disk space, and the number of vCPUs and the amount of RAM can be user defined, depending on how you’re deploying it (i.e. if you deploy said kasm/firefox via Kasm Workspaces, you can specify the amount of RAM that the Firefox container/application will get, and the max that it’ll be able to use). (cf. GitHub - kasmtech/workspaces-images)

If you skip deploying said kasm/firefox container via Kasm Workspaces and just deploy it via regular old sudo docker run, and/or you convert said sudo docker run command to a docker-compose.yml file, then feeding in the amount of vCPUs and/or RAM happens slightly differently, but generally produces the same result.
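For example (a hedged sketch; the tag, port, and limit values are illustrative, not recommendations):

# Cap the container at 2 CPUs and 4 GB of RAM; Kasm images also like a larger /dev/shm
docker run -d --name firefox \
  --cpus=2 --memory=4g --shm-size=512m \
  -p 6901:6901 \
  kasmweb/firefox:latest
# (plus whatever environment variables the image expects)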

The kasm/ubuntu-jammy-desktop was something like 3.4 GB. Something like that. I don’t remember exactly anymore. (I started skipping the deployment of the kasm containers via Kasm Workspaces because I want the containers to be more elastic with dynamic RAM allocation/consumption.)

It depends (greatly) on what you’re doing.

Docker (application) containers, yes.

Linux Containers, less so.

(But Proxmox has kernel samepage merging (KSM), so I don’t know if it will use that for LXCs as well, and/or as efficiently as it does with VMs. cf. Kernel Samepage Merging (KSM) - Proxmox VE)

I have four separate instances of Immich running right now, in different LXCs (it crashed trying to index about 10 million pictures, so I had to split it up), and therefore, to get to each instance, I go to that LXC’s respective IP address to be able to log in.

I haven’t found a (reliable) way for me to be able to have the local DNS handle going to the same IP address (of the LXC) and being able to log into all four instances in four separate browser tabs.

Conversely, having the four Immich instances deployed in separate Linux Containers, where each Linux Container gets its own IPv4 address, means that I can log into them separately and simultaneously.

Again, if you have a method where I can shove all four instances into a single Linux Container, separated by the port number, I’m all for it.

I did read that I could use a reverse proxy, but it was also my understanding that even with a reverse proxy, if I type in http://immich.example.com into my web browser, it still won’t let me connect to all four Immich instances simultaneously.

(I haven’t deployed the reverse proxy yet because I’m still trying to sort out whether the four instances should stay separated (i.e. each one running in its own individual Linux Container) vs. all four instances running within one Linux Container.)

One of the instances is currently consuming about 30 GB of RAM. Instance #2 is consuming about 9 GB of RAM. #3 is consuming about 6 GB of RAM, and #4 is consuming about 3 GB of RAM.

If I shove them all into the same Linux Container, then I can set up the Linux Container to have up to 384 GB of RAM (as the RAM consumption keeps climbing during the indexing process), and that way they won’t end up crashing during the indexing process as a result of not having enough RAM.

And I am also purposely allowing it to use up to 384 GB of RAM because I had started it with 16 GB of RAM initially, and then, when the Linux Container was starting to run out of RAM, I increased the maximum amount of RAM that the Linux Container can use. But that also meant that I had to restart Docker (sudo docker restart) for Docker to recognise the change, which in turn meant that the indexing job/task had to restart.

So, to prevent needing to restart, I just give the Linux Container lots of RAM so that if it needs it, it’s available to the Linux Container and/or the Docker containers.

OK, let’s start at the beginning.

To answer this again: I generally would run Docker or Podman directly on the host, not in an LXC or a VM. The only reasons to do otherwise are that you need separate IPv4 or IPv6 addresses for dealing with multiple A or AAAA record domain names, some other highly technical reason, or lastly because it’s how I’m renting cloud computing. If we’re talking about Proxmox at home in a homelab, unless you’re playing with the things I just mentioned, you should probably have exactly one LXC or VM with all your Docker/Podman containers in it; why shoot yourself in the foot with extra overhead? The fastest option is removing as much virtualization as possible and running as close to bare metal as possible.

This is all misguided. You’re thinking about this the wrong way. One nginx config can contain the information for multiple servers and how to proxy multiple applications. That’s not to mention that you can simply map the ports differently and have nginx instances proxying to another nginx instance on another port, which then proxies an application. Assuming your domain is governed by a master nginx container that is then proxying other nginx containers or applications, this is a non-issue. You don’t have to run everything on port 80; that’s the whole point of a reverse proxy, to redirect traffic to the correct place.

No, this is wrong. A DNS server for example.com contains a record of what IP address example.com resolves to; this is called an A or AAAA record, depending on whether it’s IPv4 or IPv6. All the CNAME records get sent to the same IP address, as they are subdomains pointing at the A or AAAA record. Domain name servers don’t deal in ports, only IP addresses. Whatever host is at said IP address is then running an instance of nginx, directly or in a container, tied to port 80, which then proxies to any applications or further instances of nginx. It does not matter what port the application is running on. Again: your DNS server → (load balancer, if you have one; nginx can do this →) main proxy → sub-proxy on some other port, or IP and port → eventually some application serving a specific request on any IP and port. Immich does not need to run on port 80, nor does Jellyfin. The proxy sends the request to the correct IP and port such that, to the user, they all seem to be on port 80, and the user of the web page is none the wiser. The user would not be able to tell how many containers or VMs or hosts it took to host the entire page.

No, I’m not. I typically don’t choose to run LXC containers because of the extra overhead, but that is not the point. As long as each service is available through some unique IP and port combination, when using a reverse proxy it is irrelevant what port the service is running on, or whether it’s on bare metal, a VM, an LXC or Docker container, or some other container technology. How it’s hosted is not relevant to the concept of proxying the request to the correct IP and port.

You can also deploy Docker containers in Docker containers. Again, not the point. As long as the multiple services are accessible on some port, regardless of how you traverse the NAT and what port the service is actually at, it doesn’t change how proxying works.

OK, both nginx and Apache can act as a load balancer; my example above assumes it was doing this. However, if you have a dedicated load balancer, your DNS record may point at it, which then points at your reverse proxies or applications directly.

In one browser tab you cannot connect to more than one Immich instance at a time, and Immich is not built for high availability, so really each Immich instance should have its own CNAME record. So for each instance you would have immich1.example.com, immich2.example.com, immich3.example.com. It’s that, or you have to have multiple domain names, which is silly, and which would change how it looks such that it’s immich.example1.com, immich.example2.com, immich.example3.com; not sure how this is better or less confusing. Assuming you’re a cloud service with a good load balancer, unless you needed multiple domain names for a different reason, you’re probably doing the first, so as not to pay for multiple domain names.

This really begs the question of why you have 4 instances of Immich. Each instance can support multiple users. How many users are there across the 4 instances?

I had ChatGPT generate an example:

Got it! Here’s a basic example of an NGINX configuration for a setup where your main domain is example.com, and you have three applications running under subdomains (CNAMEs):

apples.example.com

oranges.example.com

pairs.example.com

Each app is assumed to be running on its own internal service (like via Docker or on different ports). Let’s assume:

Apples app runs on localhost:3001

Oranges app runs on localhost:3002

Pairs app runs on localhost:3003

Here’s an example NGINX config file (/etc/nginx/conf.d/example.com.conf or similar):


# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com apples.example.com oranges.example.com pairs.example.com;
    return 301 https://$host$request_uri;
}

# Apples app
server {
    listen 443 ssl;
    server_name apples.example.com;

    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Oranges app
server {
    listen 443 ssl;
    server_name oranges.example.com;

    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://localhost:3002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Pairs app
server {
    listen 443 ssl;
    server_name pairs.example.com;

    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://localhost:3003;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
1 Like

But would this be related because you’re charged per LXC/VM instance, correct?

Therefore, if you don’t want to pay for the extra LXC/VM instances, then this would be why you’d want to cram everything into one LXC/VM, correct?

But this still also means that if that LXC/VM that you’re renting from a public cloud provider goes down for whatever reason, then it will take all of the services that you’re running on it, down along with it (vs. having it run on multiple LXC/VM instances), right?

Running it on multiple LXCs doesn’t incur any additional overhead vs. running it all in one LXC.

For the most part, either it runs the same (whether you’re running one LXC or two), or if there is a difference, I can’t measure it / it’s negligible/imperceptible when serving the services.

Agreed, but again, how much overhead is it really introducing to run an LXC when you’re already sharing the kernel with the host OS anyway?

But again, my second point, which is a con against this strategy, is about backing up your services: if I have them running inside one (or more) LXCs, then I can back them up using PBS.

If I run it on baremetal, I can’t.

If I run it in one LXC, I can back up that singular LXC, yes, but that still gets back to my earlier point: as far as I can tell, running two LXCs (for example) doesn’t incur any additional overhead over running one LXC.

Again, I could be wrong, and if you have a way to measure this reliably and repeatably, I can test it.

But now you’re actually incurring additional overhead, because the system needs to process the reverse proxy so that requests can be directed appropriately.

Maybe using nginx is a bad example because of the capabilities within nginx itself.

Again, going back to my example where I am running open-webui and InvokeAI - right now, I run them in separate LXCs so that I can go to http://openwebui.example.com and bring up the open-webui web interface and I can also open a new tab, and go to http://invokeai.example.com and bring up that interface.

I don’t need to specify different ports for this.

If I understood your point about using a reverse proxy, at that point I would have to give the two services different ports, because both can’t occupy port 80 at the same time, and the reverse proxy adds overhead for processing each proxied request.

Again, using these examples, tell me if I am still thinking about this the wrong way.

(To the best of my knowledge, neither open-webui nor invokeai can proxy itself to each other or vice versa, unlike nginx.)

Therefore, trying to deploy the nginx idea won’t work with these services/applications, right?

If I understand you correctly here, suppose that I am running http://openwebui.example.com on port 123 and http://invokeai.example.com on port 456; my understanding is that in the reverse proxy, you’d be redirecting the traffic such that (basically) a port 80 request to http://invokeai.example.com would be “mapped” to port 456, whilst a port 80 request to http://openwebui.example.com would be “mapped” to port 123, with both running on the same host LXC.
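Using those same example ports, the consolidated version would look roughly like this (a sketch; the image names and internal ports are assumptions based on the upstream projects, and only ports 123 and 456 come from the discussion above):

# Both UIs in the same LXC, each published on its own host port
docker run -d --name open-webui -p 123:8080 ghcr.io/open-webui/open-webui:main
docker run -d --name invokeai -p 456:9090 ghcr.io/invoke-ai/invokeai
# The reverse proxy then maps http://openwebui.example.com -> 127.0.0.1:123
# and http://invokeai.example.com -> 127.0.0.1:456, so both still look like port 80 to users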

(Like I said, I briefly looked into running a reverse proxy, but I didn’t dive too deep into it.)

Yes. This is my point.

If both services are running on port 80, and you’ve defined the CNAME/A record for each of the services such that they resolve to the same IPv4 address, then you’re still only going to be able to reach one of the two services running on port 80. (Yes, I am purposely ignoring the fact that the second service likely won’t even run if port 80 is already in use. I am assuming that you’ve managed to somehow jam both onto port 80.)

I know that DNS doesn’t deal with ports. That’s my point.

But you’re introducing overhead by introducing a reverse proxy to sort this out (vs. just running the services in different LXCs, where each LXC has its own IPv4 address and therefore doesn’t need said reverse proxy at all).

You stated that running multiple LXCs will incur overhead. But if you’re running a reverse proxy, you’re also incurring overhead.

I’d like to see the data which shows which method incurs the least amount of overhead, between the two, especially when immich is indexing 2.5 million files.

(For reference, AdGuardHome, right now, is taking 20002 ms to resolve mozilla.com.)

(Current load average on the server is 156.97, with 388.6 GB of RAM (out of 768 GB) used. And the server only has dual Xeon E5-2697A v4 processors in it (16-core/32-thread each).)

Wouldn’t it take less overhead, to go to a different IPv4 address than it would take to process the reverse proxy request, especially when the server is this busy (indexing)?

Your statement here proves otherwise.

(e.g. What custom containers are you talking about then if you aren’t mixing Docker containers vs. Linux Containers?)

What extra overhead?

(LXCs have less overhead than a VM, but in some of my limited testing there was a < 0.04% performance difference between running an app on bare metal vs. running it inside said LXC (probably due in part to the fact that it shares the kernel with the host OS itself).)

I also can’t tell if you’re running Proxmox or not, but if you are, then how are you backing up your Docker containers that you’re running directly on the host, with PBS?

LXC is a lot closer to bare metal than a VM is.

These two statements are internally inconsistent with each other: you argue that it should be as close to bare metal as possible (which LXC meets), and then you argue against running said LXCs because of the overhead, when an LXC is closer to running a service/app on bare metal than running the same service/app in a VM.

Presumably, you’d have to build a custom image for this, no?

I don’t doubt that you can. I can envision a relatively limited number of use cases as to why you might want to, but given that you have docker build commands, I’m sure that you can cobble an image together to execute/deploy this.

Agreed, but I’m not a cloud provider, so this doesn’t really apply.

(And I’m not renting from a public cloud provider either.)

Right, but again, if I want it to all be collapsed under a single immich.example.com, then I would need to add the overhead of a reverse proxy to be able to split the traffic back out to its respective instances, for me to be able to access each.

Therefore, if all 4 instances run in a single LXC, pursuant to your statement near the top of this reply, I’d have to tell my 4-year-old the port number of each instance, and/or if my 4-year-old ends up uploading pictures to each of the separate instances, they would need to be told, or remember, the different port numbers for each instance.

See above.

It’s not an issue with the number of users.

But again, this thread isn’t about why I am running 4 Immich instances. This thread is about whether people tend to run more LXCs/VMs with a few services in each, vs. running more services in fewer LXCs/VMs. Your argument for not running LXCs (at all)/running fewer LXCs is that LXCs incur overhead. But you haven’t been able to show that running two LXCs incurs any more overhead than running one LXC (or that running ten LXCs incurs more overhead than running one). I’d like to see the data that substantiates this claim of yours.

There is no doubt that running on bare metal is faster.

That’s not the question.

The question is regarding your statement about how you think that running multiple LXCs will incur a greater amount of overhead than running one LXC.

Furthermore, I’d be interested to see what the overhead picture looks like when the system is busy indexing 2.5 million files and has to process the reverse proxy requests at the same time.

(Hint: it should not take AdGuardHome 20 seconds to process mozilla.com, and yet, it does, when my system is busy, running the indexing job.)

Cool.

Now how would I apply this to the four immich instances that I am running, on the same LXC?

(This assumes that even with nginx as a reverse proxy, you won’t still only be able to “map” one CNAME record to multiple localhost:port1, localhost:port2, etc.)

Like you said, I’d need to pull 4 CNAME records to be able to split out the four instances, at which point it’d be largely an academic debate as to whether it’s better to run 4 LXCs vs. 1 LXC plus a reverse proxy, where the reverse proxy needs to process each request to route the traffic to where it needs to go.

I’d like to see the data for both scenarios as that’ll give us something concrete to talk about vs. talking about it from the theoretical perspective. (because we all know that theory and reality can end up being very different.)

(Sidebar: tally so far is 3 for more LXCs/VMs running a few services each and 1 for fewer LXCs/VMs, but running more services in it)

I just ran stress-ng (stress-ng --cpu 16 --cpu-method matrixprod --metrics-brief -t 30) and these are the results:

Top is an Ubuntu 22.04 LTS LXC. (104183 bogo ops or 3472.50 bogo ops/real seconds or 219.27 bogo ops/(usr + sys) seconds)

Bottom is the bare metal (Debian 12) Proxmox host. (102443 bogo ops or 3414.45 bogo ops/real seconds or 215.14 bogo ops/(usr + sys) seconds).

It would appear that the LXC is actually a little bit faster than the bare metal host (by 1.7-1.9%).

(This was performed on my AMD Ryzen 9 5950X, with 128 GB of DDR4-3200 RAM, on what I think is an Asus ROG Strix X570-E Gaming WiFi II motherboard. I ran this on one of my compute nodes because that way there is nothing else running that can interfere with the results.)

OK, let’s start with this: LXC containers are like Docker or Podman containers; all three are OCI compatible. All have a similar level of overhead. All Docker images, for the most part, are OCI compatible; if you’re running Docker in an LXC container, it’s simply because you want to use Docker for some reason inside an LXC container.

If you’re using Proxmox, which I do not, why are you running OCI-compatible Docker images in Docker in an LXC rather than just directly in an LXC? However, if you were to, then as I said before, stick them all in the same LXC so you’re not running multiple Docker or Podman daemons.

Again, I would not run Docker or Podman in an LXC, as I answered before.

Yes, but running Docker or Podman repeatedly in multiple containers would. And back to my answer above: if I were to run Docker in an LXC or VM, I would make sure all the Docker containers are together in that one LXC or VM. Furthermore, this would not be my first choice.

I have no clue how you have your storage set up; however, each container can have volumes pointed at any type of underlying base storage, which can then be backed up by your backup software of choice. This sounds like a limitation of your setup imposed by Proxmox and PBS.

Personally, I make a ZFS dataset for each service I run, and the containers relevant to that service store their data in that dataset. Then I just use ZFS snapshots and send/receive to back up to a separate pool.
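For what it’s worth, that workflow is only a few commands (a sketch; the pool and dataset names are made up):

# One dataset per service; the containers for that service keep their volumes here
zfs create tank/services/immich

# Snapshot it and replicate the snapshot to a separate backup pool
zfs snapshot tank/services/immich@2024-06-01
zfs send tank/services/immich@2024-06-01 | zfs receive backup/immich

# Later backups only send the delta between snapshots
zfs snapshot tank/services/immich@2024-06-02
zfs send -i @2024-06-01 tank/services/immich@2024-06-02 | zfs receive backup/immich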

OK, in my homelab I use exactly one nginx instance for everything. I don’t have a need for more. Each service has a CNAME, and nginx proxies to the service via the CNAME. Some services run on separate machines, and again, my singular nginx instance just proxies to the correct IP and port of the machine hosting that service.

What? How? I use no custom containers in my setup; my goal is that I don’t want to be doing any funny business with containers in containers, or building custom images.

Yes, it does. If you’re trying to host services in your homelab with nice names, you will be doing one of these, just like a cloud provider does, unless you intend to forever be using ip:port to access them.

Multiple users??? But for real, you don’t need more than one nginx instance for your homelab, as I said above. Nginx is lightning fast. If you’re implying each instance needs its own nginx, that’s wrong; each Immich instance should be bound to a different port, and then your main nginx instance should proxy to that port to reach that instance.

WTF. A load average of 156 on a system with 64 threads implies you’re thrashing a lot with a lot of context switching, or have extremely high I/O wait. Both imply either extreme load and/or poor configuration.

What all are you running to get this type of load?

I’m running all the services natively on TrueNAS Scale:

Best regards,
PapaGigas

1 Like

I’m running Proxmox as a hypervisor and one LXC per NIC for Docker containers, Portainer as the Docker manager, and one instance of nginx for the reverse proxy. There are 6 Proxmox LXCs in total with around 4-5 Docker containers in each.

1 Like

No, they’re not.

“LXC containers are often considered as something in the middle between a chroot and a full fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.”
(Source: Linux Containers - LXC - Introduction)

No, it’s not.

(Source: Using OCI templates in LXD - LXD - Linux Containers Forum)

Given that your first two statements are fundamentally incorrect, which undermines the basis for this statement: whilst Docker may be/is OCI compatible, as noted above, LXC is not (per Stéphane Graber, LXC/LXD Project Lead).

As I mentioned above, I can use PBS to back up a LXC. If I run Docker directly on the host, I can’t.

See above.

Podman is daemonless.


(Source: What is Podman?)

There has been no adverse effect identified with respect to running multiple Docker daemons (the Docker daemon itself is actually very lightweight and barely even registers in top or htop).

Your assumption behind this answer is incorrect. See above.

Not really.

I can repeat the exercise that I ran last night in order to provide the data which counters this claim.

But you can still run multiple LXCs, each running multiple Docker (application) containers via that LXC’s single Docker instance/deployment (i.e. you’re only starting one Docker daemon per LXC). And I can, just as easily, run multiple LXCs (as the current polling results show), where each LXC has its own Docker daemon running its own group/subset of the total services provided via the Docker (application) containers.

There’s a reason why people use PBS (because you can define it as one of the storage locations and then with a few clicks, tell it to send the backup to said PBS).
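For completeness, the same backup is also scriptable from the host. A one-line sketch, assuming the PBS datastore is configured as a storage named “pbs” and the container is CTID 101:

# Snapshot-mode backup of container 101 straight to the PBS datastore
vzdump 101 --storage pbs --mode snapshot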

Why run/create something else when the Proxmox team has already developed a backup solution that works for backing up Proxmox LXCs and VMs? Why re-invent the wheel or use another piece of software for a problem that has already been solved??? Seems redundant.

If you’re not using Proxmox, then yes, you would need the other software/backup solution.

But if you are running Proxmox, then like I said, Proxmox already has a backup solution - Proxmox Backup Server. So why would you reinvent the wheel for a problem that someone else has solved already? Seems redundant (and frankly a waste of time and energy) when you can just deploy the off-the-shelf solution.

You could do that.

Is that how you have your cloud LXC/VM set up that you’re renting?

Even if I were to be running a single nginx instance to deal with the reverse proxy, it’s still work that needs to be done (overhead) by said nginx reverse proxy, to be able to split the traffic back out between either: immich.example.com:3001, immich.example.com:3002, immich.example.com:3003 or immich1.example.com (which maps onto port 3001), immich2.example.com (which maps onto port 3002), immich3.example.com (which maps onto port 3003).

Either way you deploy it, nginx still needs to process the reverse proxy request, which is overhead that you’re adding.

I’m not building a custom LXC container image either.

I just deploy it from the template and then install whatever I need to, post-deployment.

(i.e. The Docker engine isn’t built into my Ubuntu 22.04 LTS LXC template. I install that post-deployment.)

Again, you’re still confusing Linux Containers (which you’re calling containers) with Docker (application) containers (which you’re also calling containers). (The emphasis that I added to your quote is what gives me the hint that you’re still confusing Linux Containers with Docker containers, when they’re very different things and completely separate animals.)

“a container template is just the rootfs as tarball”
(Source: How to create a new CT / LXC container template ? | Proxmox Support Forum)

Docker containers/images aren’t just a rootfs as a tarball.

And if I had to guess, you are probably assuming that the process for building LXC templates is the same process for building Docker container images.

And whilst they might be very similar, and I’m sure that some people build their own LXC templates, there’s a difference between deploying Linux Containers (LXCs) vs. Docker (application) containers.

(cf. Linux Container - Proxmox VE)

(i.e. you don’t use docker build to build LXC templates.) (cf. GitHub - lxc/distrobuilder: System container image builder for LXC and Incus)

I currently circumvent using ip:port to access services by running multiple LXCs so that each LXC has its own IPv4 address, as I originally stated, and therefore I can get to my four separate instances of Immich by going to their respective IP addresses.

No port required (since I am running everything on port 80, because, of course, as you well know, when you type in http://<IP address> there is already an automatic, implied :80 appended to that).

Therefore, no proxy overhead is incurred by the system. If I give it an English/human-readable/“nice” CNAME on my local DNS, it literally takes milliseconds to resolve the CNAME back to the respective IP address (i.e. http://immich1.example.com resolves to http://192.168.10.1).

It takes milliseconds for it to be able to do that.

If I collapse the four separate Immich instances from four separate LXCs down to one LXC, then I will need the reverse proxy to redirect the traffic to where it needs to go (e.g. http://immich1.example.com is reverse proxied to http://192.168.10.1:3001 and http://immich2.example.com is reverse proxied to http://192.168.10.1:3002, etc.).

By running it all in one LXC, then I will have to use ip:port and a reverse proxy to direct the traffic to where it needs to go, which is an additional overhead that this method will incur.

Read the part above where I wrote about how using nginx is probably a bad example, because of the inherent capabilities of nginx that don’t apply to other applications (e.g. Jellyfin/Immich/etc.).

(Furthermore, it is interesting how in the block quote that you cited of me, I explicitly talk about immich.example.com and you’re still talking about using the nginx example.)

You’re not even talking about the same thing.

Literally.

^Read above.

The answer to this question has already been provided.

(Current load average is 175.43. I/O wait is actually really low at only about 4.6%.)

And pretty much all of that I/O wait is because there is a ZFS scrub, running in the background, but that’s not the cause of the current load averages. I’ve already written about what’s causing the current load averages. ^Read above.

Yes, my “do-it-all” Proxmox/homelab server is very busy. A lot of tech/homelab YouTubers will talk about how most people’s homelabs tend to sit very close to idle, but I purposely built my “do-it-all” Proxmox server to “do-it-all”. (I wasn’t kidding.)

I upped the system from the 256 GB of RAM it had had ever since I executed my mass consolidation project of January 2023: I bought another 16 sticks of 32 GB DDR4-2400 ECC Reg RAM to take the total installed RAM capacity up to its present 768 GB. This way, I can shove even more services onto the system, as I am significantly more RAM limited than I am CPU limited, despite the ridiculously high load averages.

(The CPU will just slug through it. It takes a while, but it’ll chew through the data.)

I’ve tried offloading the CPU load onto my micro HPC cluster (my 7950X and two 5950X compute nodes), but the problem I ran into is that the data wasn’t local to the CPU anymore, so it ended up running way slower than if I just pushed the load averages up towards 400 (which it’s hit before). Even with a load average of 400, it was still faster than having one of my micro HPC compute cluster nodes try to access the data remotely.

But again, this is completely irrelevant to my OP.

People have talked about how sometimes, whenever they update TrueNAS Scale, it’ll end up inadvertently breaking how TrueNAS has implemented Docker.

Have you found this to be the case?

1 Like

I’m decently close to that.

I only have one LXC which is running portainer, that has I think like 13 Docker containers running right now.

But most are about a handful (5-6, like you said).

I also run dockge so that I can use it to convert the docker run commands into a docker compose file for me.

There are some things that I like about Portainer (e.g. its application/template library), but I probably don’t even touch more than 5% of everything that Portainer can do.

I think that one of the containers that I have running in my Portainer inside of my LXC, is supposed to have GPU passthrough, but I haven’t actually checked to make sure that’s working correctly.

I have found, for example, that it’s actually been easier/faster for me to deploy Docker containers (e.g. Collabora Online Development Edition, which is needed for ownCloud) via dockge than it is to click on “Stack”, dump the docker compose file into their web editor, and then deploy.

I’ve also found that once I have a stable docker compose file working, I can then move it to Portainer as a stack. But since Portainer doesn’t have the ability to automatically convert docker run commands to a docker compose file, that’s another reason why I end up using dockge: after converting it, the button to deploy the docker compose file is right there.
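For anyone following along, the conversion dockge does amounts to roughly this kind of translation (illustrative values; this is not an actual dockge command):

# A typical docker run command...
docker run -d --name web -p 8080:80 -v /srv/www:/usr/share/nginx/html:ro nginx:alpine

# ...corresponds to this compose service, which can then be deployed as a stack
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    container_name: web
    ports:
      - "8080:80"
    volumes:
      - /srv/www:/usr/share/nginx/html:ro
EOF
docker compose up -d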

1 Like

LXD is not LXC. Yes, LXD containers are not OCI compatible.

However, LXC is, and you can create an LXC container using the OCI template to make a container based on an OCI image.

Again, you do not need to run Docker in an LXC to run an OCI-compatible Docker image; on the other hand, if using an LXD container, you do, because LXD has no OCI template.

Furthermore, I’m very aware that all three of Podman, Docker, and LXC operate differently and are not exactly the same; I only said they are alike. Which they are, considering they are all containerization technologies.

I did. Your response is about LXD not being OCI compatible; while this is true, that has no bearing on the fact that LXC containers are OCI compatible.

I am not incorrect; are you sure you’re not the one confusing containerization technologies?

So far I haven’t had any issues… :wink:

Best regards,
PapaGigas

1 Like

Is there a reason not to use docker compose?

To each their own, but I don’t want to learn a new syntax when I can really just use Podman pods. Plus, Podman works with Kubernetes files, systemd unit files, or just plain-Jane bash scripts. If a container is going to be running long-term on one of my systems, it’s going in a Kubernetes or systemd container file.
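A minimal sketch of that pod-plus-systemd workflow (names and ports are examples; newer Podman releases prefer Quadlet .container files over generate systemd):

# Group containers in a pod and publish the pod's port
podman pod create --name web -p 8080:80
podman run -d --pod web --name site docker.io/library/nginx:alpine

# Generate systemd unit files so the pod runs long-term under systemd
podman generate systemd --new --files --name web
# then copy the generated units to ~/.config/systemd/user/ and enable them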