You’re best off using a single container per task, so if you want multiple instances of the same game server, run each one in its own container.
Mixing them in a single container means you’ll need to shell into the container to manage each server, whereas with a 1:1 deployment, you just start/stop the container as needed.
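As a rough sketch of what that 1:1 layout looks like with Compose (the image name and ports here are placeholders, not a real image):

```yaml
# docker-compose.yml — hypothetical example; substitute your actual image and ports
services:
  gameserver-1:
    image: example/gameserver:latest   # assumed image name
    ports:
      - "27015:27015/udp"
  gameserver-2:
    image: example/gameserver:latest
    ports:
      - "27016:27015/udp"              # different host port, same container port
```

Then `docker compose stop gameserver-2` takes down one instance without touching the other — no shelling in required.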
Are you running so many machines that you’ve exhausted a /16 subnet? Address space is cheap.
The resources consumed by duplicating them are negligible, but if you really want to optimize, you’re best off building minimal containers whose entrypoint is the application itself, rather than an init system.
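Something along these lines — the binary name is made up, but the shape is the point: the server itself runs as PID 1, with no init layer in between:

```dockerfile
# Hypothetical minimal image: the entrypoint is the server binary, not an init system
FROM alpine:3.19
COPY gameserver /usr/local/bin/gameserver   # placeholder binary name
EXPOSE 27015/udp
ENTRYPOINT ["/usr/local/bin/gameserver"]    # exec form, so signals go straight to the app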
I totally get it. I’m just coming from the perspective that separation of duties is super important. There are plenty of reasons it’s beneficial, and frankly, it boils down to control and ease of management.
When you move towards a containerized infrastructure, the smaller role each container can play, the better, within reason.
That said, I can completely understand your concerns. If you’re worried about squandering resources, you could make a virtual switch within the hypervisor and proxy sockets from a gateway VM or container. That way they’d all use a unique port and only take up a single IP.
But that would come at a small cost of memory and CPU time.
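One way to do that gateway piece would be something like nginx’s stream module on the proxy VM/container — the backend addresses here are hypothetical, just whatever your internal virtual switch hands out:

```nginx
# Sketch: a single gateway IP proxying unique ports to backends on an internal network
stream {
    server {
        listen 27015 udp;
        proxy_pass 10.0.0.11:27015;   # game server instance 1
    }
    server {
        listen 27016 udp;
        proxy_pass 10.0.0.12:27015;   # game server instance 2
    }
}
```

Clients all hit the one public IP, just on different ports, and the gateway fans them out.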
I’m still getting my head around containers and what they can be used for. I’m probably too fixated on them being like VMs with bits missing.
It’s not so much of a worry, more like learning what the “right” way is. I’m used to running several instances of a game server on one machine/VM sharing a single IP, and using different ports.
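For what it’s worth, that familiar pattern maps pretty directly onto containers via port publishing — one host IP, different published ports (image name is a placeholder):

```shell
# Two instances behind one host IP, each on its own published port
docker run -d --name srv1 -p 27015:27015/udp example/gameserver
docker run -d --name srv2 -p 27016:27015/udp example/gameserver
```

So you don’t actually give up the one-IP-many-ports approach; each instance just lives in its own container.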
It’s amazing how many IPs a 4-person household can use with all the phones, AV gear, consoles, PCs, and VMs. The unassigned IP blocks between different device roles are getting a lot smaller now.