
How do I access MariaDB from other Docker containers?

Trying to set up MariaDB for Nextcloud and WordPress (and more in the future). Right now I want each of them in its own container (“isql” for the db, “next” and “wp”). I didn’t find a solution searching with Startpage, but I understood it’s probably something to do with networks; I was unsuccessful in getting anything running correctly.
If next’s and wp’s configs are similar, I’d only need one. Could someone please guide me on how I would need to run MariaDB and, say, only Nextcloud from separate instances? Note: I’m also using Traefik.

In your Nextcloud config.php you just point it to the network address of your MariaDB instance and specify the db name, user and password; if you leave the port blank, it assumes the default port of 3306. My config, for example:

$CONFIG = array (
  'dbtype' => 'mysql',
  'version' => '',
  'dbname' => 'nextcloud',
  'dbhost' => '',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'ncuser',
  'dbpassword' => '***your password***',
);

I think the default MariaDB install binds to all interfaces, but you can make sure in your mariadb config. Mine for example (/etc/mysql/mariadb.conf.d):


# * Basic Settings

port            = 3306
bind-address            =

And in mariadb itself, you just have to make sure that the user you supplied to the nextcloud config has all the appropriate permissions to modify the db.

This doesn’t seem to be about Docker? I expected examples of docker configurations. If you think your answer still applies, could you elaborate?

I would recommend docker-compose: it automatically creates a private network for all the containers you specify, and container a can connect to container b using container b’s name. Otherwise you either have to set that up manually or give your MariaDB a public IP, which you probably should not do.


A docker-compose.yml would then look something like this

version: '3'
services:
  postgresql:
    image: "postgres"
    restart: always
    container_name: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - /home/max/docker/postgres/postgresql_data:/var/lib/postgresql/data/pgdata
  pgadmin:
    image: "dpage/pgadmin4"
    restart: always
    container_name: pgadmin
    ports:
      - "8081:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=[email protected]
      - PGADMIN_DEFAULT_PASSWORD=SuperSecret
    volumes:
      - /home/max/docker/postgres/pgadmin_data:/var/lib/pgadmin

That’s one I have running, sorta like this: a PostgreSQL database with a pgadmin container for GUI access. I also have the PostgreSQL port forwarded, since I connect to it from other machines on my network. It’s a testing db, not necessarily how I’d deploy it in production.

Though the cool thing about this is that you have a config file that sticks around when you kill your containers. docker-compose up -d will start them and detach your screen from the instance(s). docker-compose down will stop and delete the containers (though deletion does not necessarily mean your data is gone; containers ideally should not carry state, state should be saved outside the container). And with docker-compose pull you can pull the newest ‘:latest’ image (if that’s what you’re using). That’s about all the commands I’ve used so far with compose. It’s really simple to get started and is much saner to manage than copy-pasting huge docker commands. In this example, pgadmin would connect to the database at postgresql:5432, but you can’t use the name like that from outside the docker network. That’s just for connecting containers within it.

I would need the db accessible from other containers (for example Nextcloud’s docker-compose is at ~/docker/next and WordPress’s at ~/docker/wp). For the sake of simplicity, reliability (maintenance or crashes take down only one service) and containerization (the point of docker: easy deploy, easy scale), they can’t be in the same compose file. I would prefer to run a db for each service, but multiple instances of the same db are known to conflict with each other, and that would also remove the ability to scale. If it is possible, I am also fine with the db not being containerized (while services such as wp and nextcloud are).

Also, I’d note that I have decided that I will be using MariaDB.

Depends on how you think about scaling, and whether your programs even work like that. I frankly have never used WordPress, so… no idea. But docker does not magically scale things by itself. Programs have to be designed so that you can run multiple instances and load-balance between them, and you are probably gonna want Kubernetes or similar if you want to do any kind of automatic scaling.

I’m also not quite sure you’ve got docker-compose right. It’s not combining containers in a way so that if one crashes, all crash. It’s just a way to boot up multiple containers with a neat config file without having to bother configuring networking. They are all configured individually. You could stop one container, or kill it, and the rest would continue working. If you need one more container, you add one to the compose file and start that one too. All I ever did was ‘start everything’ and ‘kill everything’, but you don’t lose the ability to do things like update container xyz or add another container; compose can do those things, I’m pretty sure. But if you need a more advanced tool, you’re looking at stuff like Kubernetes. It is the thing that (can) automatically scale and orchestrate fleets of containers for you. But that is also a whole lot more involved than using compose, especially when you have to set it up yourself, unlike clouds where you click a button and have your Kubernetes cluster as a service.

Compose also works with docker swarm, so that may be interesting. Though if you just scale up more instances on the same machine, are you really scaling that well?

So you can either go down that route or kubernetes.

This article might be useful. A little comparison between kube and swarm.

I wrote a post on how to setup Nextcloud in docker - you might find it helpful. It uses docker-compose as suggested above and builds a MariaDB container. It was written for setting up in a cloud hosting environment (Linode) but should be easily adapted to run wherever you want.
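As a rough, hedged sketch of what such a MariaDB service can look like in a compose file (using the environment variables documented for the official mariadb image; the database name, user and passwords below are placeholders I chose, not values from that post):

```yaml
# Sketch only: names and passwords are placeholders.
services:
  mariadb:
    image: "mariadb"
    restart: always
    environment:
      - MARIADB_ROOT_PASSWORD=changeme
      - MARIADB_DATABASE=nextcloud   # created on first start
      - MARIADB_USER=ncuser          # automatically granted all privileges on that database
      - MARIADB_PASSWORD=changeme
    volumes:
      - ./mariadb_data:/var/lib/mysql   # keep the data outside the container
```

Other containers on the same compose network could then reach it as mariadb:3306.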

If you just want the docker compose file, you can find it here:


I’m slightly confused by what you’re asking. Here’s a guide I wrote that might be helpful in general, for NextCloud, using Traefik and MariaDB:

More specifically, have you tried using:

    links:
      - <mariadb docker container name>

In your NextCloud configuration? Links are a legacy feature that may be removed in the future, but they should work even without networks, so if you try it and it works, that tells you the problem is with your network configuration. Disable all your network settings to test, because if you use links and networks together, the containers need to have a network in common for links to work. Without any declared networks, links should work fine on their own.
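For context, a minimal sketch of where that would sit in a compose file (the service and container names here are assumptions, not from this thread):

```yaml
# Hypothetical nextcloud service using legacy links;
# "mariadb" is an assumed container name.
services:
  nextcloud:
    image: "nextcloud"
    links:
      - mariadb   # reachable as hostname "mariadb" inside this container
```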

Another thing: where did you declare the network? If you have two separate compose files and want to access the same network, you should create the network outside the compose files and reference it as an external network in both of them:


If set to true, specifies that this network has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn’t exist.
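As a sketch of that setup (the network name dbnet and the service names are placeholders I chose; the network is assumed to have been created beforehand with docker network create dbnet):

```yaml
# Goes in BOTH compose files (e.g. ~/docker/isql and ~/docker/next),
# with the appropriate service in each one. "dbnet" is a placeholder,
# created first with: docker network create dbnet
services:
  mariadb:            # or nextcloud, in the other project
    image: "mariadb"
    networks:
      - dbnet

networks:
  dbnet:
    external: true    # compose will not try to create it
```

With both services attached to dbnet, Nextcloud’s dbhost can simply be the MariaDB container’s name, even though the two live in separate compose projects.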