Poll - How do you host your Docker/Podman containers/services?

Okay. Let me ask this then – are the LXC templates from Proxmox OCI compatible?

Like I said, you can deploy Docker directly on the Debian 12 host and just run it from there.

Per your statement, either way you're running Docker. If you're not running Docker inside the Linux Container, then you're running it somewhere else, whether that's on the Debian 12 host that underpins Proxmox or somewhere/somehow else entirely. Either way, you're still running Docker.

I fail to see what benefit the OCI template will bring when you can just deploy Docker straight on the Debian 12 host that underpins Proxmox.

The evidence provided above calls that into question.

The method (and extent) by which they are containers is very different.

Furthermore, if you're running an Ubuntu Linux Container, it works very differently vs. running the Ubuntu Docker container image. (You can connect a Linux Container to Infiniband, for example (cf. Type: infiniband - Incus documentation). A simple Google search for "Docker infiniband" doesn't turn up any results from the Docker documentation that talk about how you can enable Infiniband in Docker containers.)
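For the curious, per the Incus "Type: infiniband" documentation, attaching an Infiniband interface to a container is a single device-add command. The container name and parent interface below are placeholders, not taken from an actual setup:

# hedged example based on the Incus docs; "c1" and "ibp4s0" are placeholders
incus config device add c1 ib0 infiniband nictype=physical parent=ibp4s0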

You have boldly and proudly declared that you don't use nor deploy Linux Containers (LXCs).

Therefore, how could you possibly imagine that you'd be able to speak from a position of experience and knowledge (based on said experience) with regard to something that you've declared you don't even use?

This cannot be any more abundantly clear and obvious based on this response of yours:

LXCs do not create their own network. (cf. How to create a network, How to create instances)

Again, how would you know this (or have experience with it) given that you’ve loudly and proudly declared that you don’t even use LXCs?

(Because if you are actually using it, then you’d know, from your own, actual experience with it, that what you said isn’t true.)

It’s true that Docker will create a network, but it’s not true for Linux Containers.

But you’d know that if you actually use it, which you’ve declared that you don’t.

(The Linux Containers introduction itself literally tells you, quote:

“…something in the middle between a chroot and a full fledged virtual machine.”

“A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.” (Source: https://www.docker.com/resources/what-container/)

Docker (application) containers are not "…something in the middle between a chroot and a full fledged virtual machine.")

You've stated yourself that you don't run Linux Containers, and therefore I can only surmise that you are talking about it based on what you think you know about it, rather than based on actual experience from using and deploying said Linux Containers.

(Note that out of everything I wrote, this is the only thing you're left to comment on. It provides further data/evidence for the point above: given that you have declared that you don't use/run/deploy Linux Containers, how can you speak from experience (about Linux Containers) that you don't possess? Wouldn't it be better to speak based on your actual experience with the technology?)

That is encouraging.

(I am currently running TrueNAS Scale as a VM because I am (still) testing how much extra space I need to allocate (or more importantly, how to predict how much extra space I need to allocate) for snapshots.)

1 Like

I use docker compose a LOT.

It's easier to make changes to the docker compose file as plain text than it is to try and edit a docker run command.
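For example (a minimal sketch only; the image, port, and paths are placeholders rather than anything I actually run), the same container can be written either way:

# one-off docker run form, with every option crammed into a single command
docker run -d --name jellyfin -p 8096:8096 -v /srv/media:/media jellyfin/jellyfin

# equivalent docker-compose.yml, editable as a plain text file
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    ports:
      - "8096:8096"
    volumes:
      - /srv/media:/media

After editing the file, docker compose up -d picks up the changes.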

2 Likes

Apparently (I was reading somewhere), there's podman-compose, so you can use docker compose files.

I'll put it to you this way – when I look up apps/services like Jellyfin/Immich/etc., lots of them have a docker run command that basically makes it so that idiots such as myself are able to run it (and which you can convert to a docker compose file).

And many of them will also have a basic docker compose template that includes everything you need to deploy the application.

There are many where, for a Podman deployment, they might only have a paragraph about it.
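To be fair, assuming the project publishes a compose file at all, podman-compose (or the newer podman compose wrapper) can usually consume the same docker-compose.yml unchanged, something along the lines of:

# assumes podman-compose is installed and a docker-compose.yml sits in the current directory
podman-compose up -d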

I was also looking up how to deploy Kubernetes, and quite a few tech YouTubers talk about how you can deploy it in a homelab so that people can learn how to work with it, but it isn't for the faint of heart.

(That's where stuff like Talos comes into play, which tries to make deploying a container platform across multiple nodes easier. Both Jim's Garage (https://www.youtube.com/watch?v=TP8hVq1lCxM) and Sidero Labs (the makers of Talos, https://www.youtube.com/watch?v=VKfE5BuqlSc) have videos that are < 20 minutes long that talk through the deployment process of Talos. In fact, Sidero Labs have been able to cut their video down to a 1-minute short (https://www.youtube.com/shorts/d3lCFM6WQDU).)
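For context, and purely as a sketch of what those videos walk through, the basic Talos bootstrap amounts to a handful of talosctl commands; the cluster name and node IP below are hypothetical:

# generate the cluster configs (cluster name and endpoint are placeholders)
talosctl gen config homelab https://192.168.1.10:6443

# push the control-plane config to a node booted from the Talos ISO
talosctl apply-config --insecure --nodes 192.168.1.10 --file controlplane.yaml

# bootstrap Kubernetes and fetch a kubeconfig
talosctl bootstrap --nodes 192.168.1.10 --endpoints 192.168.1.10
talosctl kubeconfig --nodes 192.168.1.10 --endpoints 192.168.1.10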

I haven't dived into Talos nor Kubernetes yet, as there's an internal debate raging about whether I want to create an entirely virtual Kubernetes/Podman cluster to try it out, and I haven't settled on the underlying hardware that I am going to be running/testing this on yet. (The Minisforum MS-A2 is supposed to be out sometime this month, so I am anxiously waiting for them to announce the price.)

1 Like

I used to run it as a VM too, in Proxmox, but now I run TrueNAS Scale on bare metal (and install all the services I need via Docker compose)! :wink:

Best regards,
PapaGigas

1 Like

oooh…so you aren’t using TrueNAS Scale’s (community) “app” store?

(I used to run TrueNAS Core before my mass consolidation project of January 2023, where I shoved everything into my current "do-it-all" Proxmox server. The result of said mass consolidation project was that I cut my power consumption roughly in half (going from 1242 W running everything, down to around 600 W now).)

I virtualise my TrueNAS Scale instance/VM only because, if I want to run multiple/different AI workloads, getting TrueNAS Scale to play nice and share a single GPU between Linux Containers is a PITA to set up.

(It’s significantly easier to get that set up with LXC in Proxmox.)
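For reference, sharing the host's NVIDIA GPU with a Proxmox LXC usually comes down to a few lines in the container's config. This is only a rough sketch: the device major numbers vary by driver version, and it assumes the NVIDIA driver is already installed on the Proxmox host and matched inside the container:

# hedged sketch of /etc/pve/lxc/<CTID>.conf additions; major numbers (195, 509) may differ
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file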

1 Like

No, I’m using this guide I’ve created to set everything up:

It’s still a work in progress, but it’s getting there… :slight_smile:

Maybe with Incus that might change… are you aware of the changes? :roll_eyes:

Best regards,
PapaGigas

Oh cool!

Maybe.

If you're running Docker (compose) on TrueNAS Scale "natively", then it might work.

I think it's silly that, if you create a VM in TrueNAS Scale and want to pass a GPU through to said VM, it wants you to reserve a GPU for the TrueNAS Scale host itself (i.e. according to that, you can't run TrueNAS Scale headlessly (yet)).

That’s just insane to me.

On an unrelated topic to my OP – the other issue that I had with TrueNAS (Core or Scale) was that if you wanted host ↔ VM communication, according to the TrueNAS dev team, you have to go through a network bridge rather than via virtio-fs. (And since TrueNAS Scale (so far) doesn't natively support LXCs, lxcfs isn't a thing in TrueNAS Scale either, which means that host ↔ LXC communication is also required to go through a virtual network bridge.)

Proxmox doesn’t have this limitation/restriction, which is how I ended up choosing it when I was having a three-way drag race between Proxmox, xcp-ng, and TrueNAS.

1 Like

I don't even know where to start or what to say. This has become extremely adversarial, which is not my intent. I simply tried to share a different opinion/way of doing things, and you keep trying to prove it's wrong or doesn't work.

Ok, your original post does not mention Proxmox, and my discussion about LXC containers is not specific to Proxmox. As I have previously stated, I don't use Proxmox, and if Proxmox has gimped LXC in any way, that's not my problem. Normally you would do this, which is how you use a Docker image to create an LXC container:

lxc-create -n <name> -t oci -- --url docker://alpine:latest
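For completeness (and assuming the OCI template's dependencies, such as skopeo and umoci, are installed), you would then start and enter the container with something like:

lxc-start -n <name>
lxc-attach -n <name>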

Honestly, I don't understand your deal here. I previously shared my opinion that you should only be running containers from one place, and I also expressed that I think it should be on the host, on bare metal. If you do things differently, great; if you disagree, that's fine. I don't have more to say here.

I never said I have never used LXC containers; you keep making leaps and assumptions. I only said I choose not to use LXC containers.

Ok, not to argue semantics, but whether you define the network or the Docker daemon defines the network, and whether it uses a virtual or physical adapter, does not invalidate what I said.

Also, I don't ever remember trying to knock LXC containers (I did say I use/prefer Docker). If the fact that you can bind a physical adapter to an LXC container floats your boat, more power to you; use LXC containers. My original point was that, in terms of having a large address space and lots of networking tools, software, and hardware, why be concerned with IPv4 addresses? All containers get an address.

Not sure of your point here. Docker is a containerization technology; the semantics of it being lightweight vs. heavyweight hold no sway in disproving the statement that LXC, Docker, and Podman are all containerization technologies.

Again, if LXC floats your boat and fits your needs, use it and be happy, and don't worry that I use Docker.

It works just fine, I moved everything to Docker and I couldn’t be happier! :wink:

Best regards,
PapaGigas

1 Like

The data and the evidence are what form the basis that counters the claims you are making.

Your claim:

(emphasis mine)

As the data shows, running stress-ng for 30 seconds inside the Linux Container is between 1.7% and 1.9% faster than running it on the native Debian 12 host that underpins Proxmox.

The evidence forms the basis on which the data disproves the claims that you are making.

If you want to explain why/how running stress-ng inside the Linux Container ends up being 1.7-1.9% faster, you’re more than welcome to do so.

Furthermore, the fact that it was relatively easy for me to run apt install -y stress-ng on both the Debian 12 host and in the Ubuntu 22.04 LTS LXC after a cold boot provides additional data/evidence that speaks to how easily your claim can be disproven.

In other words, I am not even trying to prove you wrong. The data that I collected (a cold boot, running apt install -y stress-ng, and then running stress-ng itself) speaks for itself.
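For anyone who wants to repeat it, the test boiled down to something along these lines on both the host and in the LXC; I'm not reproducing the exact stressor flags here, so treat this as a sketch rather than the verbatim commands:

# install the benchmark, then run a 30-second CPU stressor across all cores
apt install -y stress-ng
stress-ng --cpu 0 --timeout 30s --metrics-brief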

Pursuant to your claim, I was actually expecting the LXC to be slower, by upwards of 1-3% (because that's what the expected performance degradation is said to be when you're installing Proxmox).

However, without even trying, that turned out not to be the case.

Agreed, but I did mention it after your first reply. So you were told about it quite early on in the discussion.

I wouldn't necessarily call it "gimped" just because the templates that Proxmox provides don't use OCI.

OCI isn’t the be all/end all of anything.

In fact, if I were your department head, the question that's still in my head, based on your replies (about OCI), is "so what? What's the 'so what' of OCI?" (i.e. if I were your department head, why should I care (about OCI)?)

Just because you can create an LXC image that uses OCI:

a) doesn’t mean you have to and

b) if you’re not going to, that doesn’t automatically make it “gimped”.

The -t oci is an optional flag, not a required one, that you can add to the lxc-create command.

See my remark above re: “why should I care about OCI?” (i.e. “what’s the ‘so what’ about OCI?”)

Where did I say “never”? Cite me.

Yes.

Which is paraphrased in what I said here:

You said that you "choose not to use (LXCs)". I wrote "…that you don't use nor deploy LXCs". I didn't say "never" in there.

My statement is literally a paraphrase of your declaration.

If you create the LXC but you don't define the network (that you're going to attach said LXC to), then you won't be able to access said LXC over the network.

When you deploy a Docker container, yes, it will create the network for you (assuming that you're leaving the network option unspecified, as that is the default behaviour of Docker).

When you deploy an LXC, it does not create the network for you as the default behaviour. You have to create the network first and then attach said LXC to said network. Otherwise, you can still create the LXC without attaching it to any network at all, but I surmise that would be of very limited use.
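As a rough sketch of that difference (the container and bridge names here are placeholders, and this assumes no default profiles/networks have already been set up), compare:

# Docker: with no network specified, the container lands on the default "bridge" network
docker run -d --name web nginx

# Incus (similarly with LXD's lxc client): create a managed bridge, then attach the instance to it
incus network create incusbr0
incus launch images:ubuntu/22.04 c1 --network incusbr0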

In Proxmox, you have to assign the IPv4 address to the LXC.

If you’re using something like LXD and/or Incus, then it might self-assign an IPv4 address. (I don’t remember anymore. It’s been probably like 2 years or so since I last played around with Incus.)

(And of course, there are additional interactions in how you set up the networking for your LXC within LXD or Incus, such that if you put it on the host network, then it won't be bridged/NAT'd. On top of that, if it is using DHCP and you're hosting a service on it, then your reverse proxy would need to talk to your DHCP server so that it knows what the new IPv4 address is for the service that you're hosting, unless you mark it as a reserved address on your DHCP server. But that's beyond the scope of this topic/discussion here.)

…and how they go about it is very different.

That'd be like saying that laptops, desktops, and servers are all computers (which is true), but there are things that you can do with laptops that you can't (necessarily) do with servers, and vice versa.

(My server supports, I think, up to 3 TB of RAM with 3DS LRDIMMs. There is no way you're going to get that with laptops (not now, nor probably 10 years from now), no matter how hard you try.)

As I said, I use both.

That’s great to hear!

I might revisit this some other time.

(There is currently a study/proposal that I am working through, where I might end up picking up a dual AMD EPYC system with 1 TB of RAM so that I can put four 3090s in there, along with virtualising my Proxmox HA cluster, all entirely in one system. I dunno. It's making its way through its assessments. I'm trying to compile a list of all of the services that the cluster runs and the maximum resources that they need, so that I can see whether I can re-balance/re-distribute things; or, if I do end up pulling the trigger on this dual AMD EPYC system, then I'll end up shoving it all onto the new system instead. I dunno. We'll see.)

1 Like

This is as far as I made it through your reply; I'm just not interested in the tone of the conversation. Hopefully someone else will debate the finer points of container technology, and the pros and cons, with you. I have said my piece; I'm out.

1 Like

Niceeeee… I have an EPYC 32c/64t, 128 GB ECC, Optanes (7) for system/apps, a 4070 and a P10 for small/simple AI stuff, and a bunch of NVMe drives (4), SSDs (10), and HDDs (2), running on a Supermicro MB… for my needs it's more than enough! :wink:

1 Like

I haven't pulled the trigger on it yet because, well, primarily cost: even used, it's over $3500 USD for the kit (which is CPU, motherboard, CPU HSFs, and RAM – no chassis, no PSU(s)).

It's a pity that I haven't been able to get Nobara Linux 40 to work as an LXC, because if I could, then I would be able to share the GPUs. But right now, as it is, I have another system that's my "gaming" system, where I'm running Nobara Linux 40 as a VM so that I can give it a GPU ("shared" only in the sense that if I shut down Nobara, I can fire up my Win11 VM, which uses the same GPU).

Right now, with my two 3090s running in my 6700K system, it has just enough VRAM to be able to run the DeepSeek R1:70b model, but that also means that I can't run anything else while that model is running (so it can't do "in-place" image generation, where the LLM creates the prompt and Automatic1111, integrated directly into open-webui, generates the image right then and there as well).

The other (vastly cheaper) option would be for me to use something like a 9900K or a 9960X, as those were the last generation of systems that had 40+ PCIe 3.0 lanes, which means that if I drop in three of my four 3090s, the third one will be able to run the "in-place" Automatic1111. I dunno. Still thinking about/analysing it.

1 Like

A common theme, given the questions that you're asking and how I've already answered said questions before you even asked them.

Only dish out what one can take.

Based on assumptions that were easily disproven with little to no effort.