Help connecting a container to a host Postgres database

So I have been trying to figure out how to connect my containers to a database installed on the host. The main reason is that I've noticed I have a lot of duplicate database containers, and IMO that seems silly when I have PostgreSQL available on the host, which can handle multiple roles and multiple databases. I have been trying to figure this out for a while now. At first I was looking for material centered around Docker; now I am trying things centered around Podman.

I found a post on Stack Overflow about doing this sort of thing here. In it, two options are proposed. I like the first option, but I have limited experience with Unix sockets (assuming that a Unix domain socket is the same thing as a Unix socket; the first thing that comes to mind when I think of Unix sockets is the Docker socket, unix:///var/run/docker.sock), so I do not know how to get PostgreSQL to listen on a Unix socket rather than port 5432. It might already be listening on a port and a Unix socket at the same time, in which case maybe I don't need any extra configuration. Yet I don't even know how to check what's using Unix sockets (i.e., something like netstat -tunlp, but for Unix sockets). I have been googling this stuff, but I am not getting very far, so I decided to come here for help.

To be honest, the first option may not be the better one, but I figured it would be useful for me since I do not have much experience with Unix sockets; besides, I do not need my database listening over the network at all, since everything that connects to the database connects via localhost.
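For reference, a stock PostgreSQL server listens on both TCP and a Unix domain socket by default; the socket location is controlled by unix_socket_directories in postgresql.conf. A quick sketch for inspecting this, assuming a typical Linux install (paths and service names vary by distro):

# List listening Unix sockets (the Unix-socket analogue of netstat -tunlp)
ss -xlnp | grep postgres
# or, with net-tools:
netstat -xlnp | grep postgres

# Ask the server where its socket lives (often /var/run/postgresql or /tmp)
sudo -u postgres psql -c "SHOW unix_socket_directories;"

# Connect over the socket: a leading / makes psql treat -h as a socket directory
psql -h /var/run/postgresql -U postgres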

Another possible solution:
Keep using Postgres in Docker, keep using TCP, use a Docker network to connect the containers, and create all of the databases on a single container …
The Docker postgres image already listens on port 5432.


Replying to myself to give you more context: with Docker you can use a plain strategy of separate containers all participating in the same Docker network, where they can DNS-resolve each other by container name, or you can go one level up, use a compose file, and deploy a stack like this:

version: '3.8'

volumes:
  db_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/DBDATA/db
  db_init_data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/DBDATA
  db_certs:
    driver: local      
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/DBDATA/certs
  db_data_test_test:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-test/data
  db_init_data_test_test:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-test/init
  db_certs_test_test:
    driver: local      
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-test/certs
  db_data_test_development:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-development/data
  db_init_data_test_development:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-development/init
  db_certs_test_development:
    driver: local      
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/TEST/db/test-development/certs


  pgadmin:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /opt/nfs/shared/DBDATA/pgadmin  
      


networks:
  postgrest-backend:
  postgrest-frontend:

         
services:

  db:
    image: postgres:${PG_VERSION}
    ports:
      - "5432:5432"
    environment:
      # env vars specific to postgres image used on first boot
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_USER_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      # env vars useful for our sql scripts
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_USER_PASSWORD}
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - DB_ANON_ROLE=${DB_ANON_ROLE}
      - DEVELOPMENT=${DEVELOPMENT}
    volumes:
      - type: volume
        source: db_data
        target: /var/lib/postgresql/data
        volume:
          nocopy: true        
      - type: volume
        source: db_init_data
        target: /docker-entrypoint-initdb.d
        volume:
          nocopy: true        
      - type: volume
        source: db_certs
        target: /var/lib/postgresql/certs
        volume:
          nocopy: true        
    networks:
      - postgrest-backend
#    deploy:
#      endpoint_mode: dnsrr
    sysctls:
      net.core.somaxconn: 1024
      net.ipv4.tcp_syncookies: 0
      net.ipv4.tcp_keepalive_time: 600 
      net.ipv4.tcp_keepalive_intvl: 60 
      net.ipv4.tcp_keepalive_probes: 3

  db-test-test:
    image: postgres:${PG_VERSION_TEST}
    ports:
      - "5433:5432"
    environment:
      # env vars specific to postgres image used on first boot
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_USER_PASSWORD_DBTEST}
      - POSTGRES_DB=${DB_NAME}
      # env vars useful for our sql scripts
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_USER_PASSWORD}
      - DB_NAME=${DB_NAME_TEST_TEST}
      - DB_USER=${DB_USER_TEST_TEST}
      - DB_PASS=${DB_PASS_TEST_TEST}
      - DB_ANON_ROLE=${DB_ANON_ROLE_TEST_TEST}
      - DEVELOPMENT=${DEVELOPMENT}
    volumes:
      - type: volume
        source: db_data_test_test
        target: /var/lib/postgresql/data
        volume:
          nocopy: true        
      - type: volume
        source: db_init_data_test_test
        target: /docker-entrypoint-initdb.d
        volume:
          nocopy: true        
      - type: volume
        source: db_certs_test_test
        target: /var/lib/postgresql/certs
        volume:
          nocopy: true        
    networks:
      - postgrest-backend
#    deploy:
#      endpoint_mode: dnsrr
    sysctls:
      net.core.somaxconn: 1024
      net.ipv4.tcp_syncookies: 0
      net.ipv4.tcp_keepalive_time: 600 
      net.ipv4.tcp_keepalive_intvl: 60 
      net.ipv4.tcp_keepalive_probes: 3



  db-test-dev:
    image: postgres:${PG_VERSION_TEST}
    ports:
      - "5434:5432"
    environment:
      # env vars specific to postgres image used on first boot
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_USER_PASSWORD_DBTEST}
      - POSTGRES_DB=${DB_NAME}
      # env vars useful for our sql scripts
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_USER_PASSWORD}
      - DB_NAME=${DB_NAME_TEST_TEST}
      - DB_USER=${DB_USER_TEST_TEST}
      - DB_PASS=${DB_PASS_TEST_TEST}
      - DB_ANON_ROLE=${DB_ANON_ROLE_TEST_TEST}
      - DEVELOPMENT=${DEVELOPMENT}
    volumes:
      - type: volume
        source: db_data_test_development
        target: /var/lib/postgresql/data
        volume:
          nocopy: true        
      - type: volume
        source: db_init_data_test_development
        target: /docker-entrypoint-initdb.d
        volume:
          nocopy: true        
      - type: volume
        source: db_certs_test_development
        target: /var/lib/postgresql/certs
        volume:
          nocopy: true        
    networks:
      - postgrest-backend
#    deploy:
#      endpoint_mode: dnsrr
    sysctls:
      net.core.somaxconn: 1024
      net.ipv4.tcp_syncookies: 0
      net.ipv4.tcp_keepalive_time: 600 
      net.ipv4.tcp_keepalive_intvl: 60 
      net.ipv4.tcp_keepalive_probes: 3


  # PostgREST instance, is responsible for communicating with the database
  # and providing a REST api, (almost) every request that is sent to the database goes through it
  postgrest:
    image: postgrest/postgrest:latest
    environment:
      - PGRST_DB_URI=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      - PGRST_DB_SCHEMAS=${DB_SCHEMAS}
      - PGRST_DB_ANON_ROLE=${DB_ANON_ROLE}
      - PGRST_JWT_SECRET=${JWT_SECRET}
      - PGRST_PRE_REQUEST=${PRE_REQUEST}
      - PGRST_LOG_LEVEL=${LOG_LEVEL}
    networks:
      - postgrest-backend




  # OpenResty (Nginx + Lua) instance that sits in front of PostgREST.
  # All the requests coming into the system are first hitting this component.
  # After some processing/checks and transformation, the request is forwarded
  # to PostgREST down the stack.
  openresty:
    image: openresty/openresty:bullseye-fat
    command: ["/usr/bin/openresty", "-g", "daemon off; error_log /dev/stderr info;"]
    ports:
      - target: 443
        published: "${NGINX_LISTEN_PORT}"
        protocol: tcp
        mode: host
    environment:
      - DEVELOPMENT=${DEVELOPMENT}
      - POSTGREST_HOST=${POSTGREST_HOST}
      - POSTGREST_PORT=${POSTGREST_PORT}
    volumes:
      - "/opt/nfs/shared/DBDATA/postgrest-starter-kit/openresty/nginx:/usr/local/openresty/nginx/conf"
      - "/opt/nfs/shared/DBDATA/postgrest-starter-kit/openresty/html:/usr/local/openresty/nginx/html"
      - "/opt/nfs/shared/DBDATA/postgrest-starter-kit/openresty/lua:/usr/local/openresty/lualib/user_code"
    networks:
      - postgrest-backend
      - postgrest-frontend
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        max_attempts: 3
       
      
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-[email protected]}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'True'
    volumes:
      - type: volume
        source: pgadmin
        target: /var/lib/pgadmin
        volume:
          nocopy: true       
    depends_on:
      - db      
      - openresty  
    networks:
      - postgrest-backend


  postgraphile:
    ports:
      - "48500:5000"
    depends_on:
      - db-test-test     

    image: mattiarossi/postgraphile-plus:latest
    networks:
      - postgrest-backend
      - postgrest-frontend
    command:
      [
          "--no-ignore-rbac",
          "--no-setof-functions-contain-nulls",
          "--append-plugins",
          "@graphile-contrib/pg-simplify-inflector",
          "--enhance-graphiql",
          "--allow-explain",
          "--connection",
          "postgres://postgres:A@db-test-test:5432/dbfdev",
          "--schema",
          "api",
          "--watch",
          "--enhance-graphiql",
          "--skip-plugins",
          "graphile-build:NodePlugin,graphile-build:MutationPayloadQueryPlugin",
          "--simple-collections",
          "both",
          "--dynamic-json"
        
      ]





This is a compose file for three separate Postgres instances (db, db-test-test and db-test-dev), plus a bunch of containers that need to access the Postgres databases.
This is accomplished by creating Docker networks :slight_smile:

networks:
  postgrest-backend:
  postgrest-frontend:

and by linking the containers to these networks

Once the stack is up, external containers can join these networks, and they will be able to connect to the Postgres databases using the container name.

As an example, the Postgres connection string

"postgres://postgres:A@db-test-test:5432/dbfdev",

connects the container named ‘postgraphile’ to the database dbfdev hosted on the container named db-test-test, using port 5432 …
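A quick sketch of what joining such an external container looks like, assuming the stack was brought up with docker-compose under the hypothetical project name mystack (Compose prefixes network names with the project name; check docker network ls. For a swarm deploy, the networks would also need attachable: true.):

# Attach an already-running container to the stack's backend network
docker network connect mystack_postgrest-backend mycontainer

# Or start a throwaway container on that network and test the connection;
# the db container is reachable by name thanks to Docker's embedded DNS
docker run --rm --network mystack_postgrest-backend postgres:14 \
  psql "postgres://postgres:A@db-test-test:5432/dbfdev" -c "SELECT 1;"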

That isn't what I want though. I was using Docker before - Docker Compose, even. I want Postgres on the host because my 2021 Devember project actually uses a Postgres database. I also want to switch to Podman because there are some things I like better about Podman than Docker - plus we use it at work.

I am using the following approach to successfully connect to PostgreSQL on the host from Docker containers:

  • open the PostgreSQL port (5432 by default) on the host firewall
  • bind PostgreSQL to the host network and make it accessible to Docker containers using directives in pg_hba.conf and postgresql.conf (see the sketch below)
  • connect via TCP to <hostname>:5432
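A minimal sketch of those directives, assuming the containers reach the host from the default Docker bridge subnet 172.17.0.0/16 and the host runs firewalld; the subnet, auth method, firewall tooling, and service name all vary by setup:

# postgresql.conf: listen beyond localhost (or list specific addresses)
listen_addresses = '*'

# pg_hba.conf: allow password logins from the Docker bridge subnet
# (use md5 instead of scram-sha-256 on servers older than 10)
host    all    all    172.17.0.0/16    scram-sha-256

# on the host: open the firewall port and reload the server
sudo firewall-cmd --permanent --add-port=5432/tcp && sudo firewall-cmd --reload
sudo systemctl reload postgresql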

Using Compose Files with Podman.
You can run compose files on Podman?
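You can, with caveats. A sketch of the two usual routes, assuming a recent Podman; podman-compose is a separate package, and the second route uses Podman's Docker-compatible API socket:

# Route 1: podman-compose, a reimplementation of the compose CLI
pip3 install podman-compose
podman-compose up -d

# Route 2: point the regular docker-compose at Podman's API socket (rootless)
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d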
Also, running Postgres on the host that holds the containers looks like an unnecessary complication to me … there is no difference from a client standpoint between running Postgres on bare metal or in a container, other than that a container forces you to be more organized in your services layout. The ease of maintenance quickly outweighs the initial need to learn a slightly different way of deploying it. And you may learn a thing or two about networking in the meantime.

But anyway, what is it that you are trying to do with Podman containers that is not working?
I would stay away from using sockets; there's no benefit over using TCP/IP on the same machine, and there's a lot of complication in presenting and maintaining the socket for all of the containers …

I am confused now.

I tried that, but I quickly abandoned that route and decided against it. There was too much complexity there: namely, deciding whether to use podman-compose or the Docker Compose plugin. Neither one fully achieved what I wanted from the move to Podman, and it became information overload once Kubernetes got thrown into the mix. However, I have kept my old compose files to use in the future if I decide to go back to Docker or to Compose, podman-compose or otherwise.

I already need to run Postgres on the host for other, non-containerized applications. A single PostgreSQL server can handle multiple databases (particularly at the low overhead and low bandwidth I am running at). Plus, the host installation's footprint is smaller than even a single container instance. Add in the fact that I run more than one container/service on my homelab that needs database access, and that's a sizeable amount of storage wasted on containerization overhead. This does give me less flexibility in terms of versioning, but that is a tradeoff I am willing to make. I should acknowledge that my previous structure could have been optimized down to one Postgres container for everything, which would have achieved the same goal. The simple answer is that I wanted to use Postgres on the host; I feel like it gives me greater control over the configuration and the software. Perhaps I am completely wrong.

To the first point, that is valid, but with the additional maintenance complexity also comes greater control. Of course, there's always the podman exec command to effectively chroot into the container, but I kinda like learning how to maintain my database server. To that end, while I wouldn't say there isn't more I can learn about networking, there are diminishing returns there for me compared to learning database maintenance and management.

Fortunately, I actually accomplished everything I set out to do; I'd even forgotten that I created this post. I was also able to accomplish something with Podman that I could never replicate with Docker or Docker Compose, though in the end I discovered that the software's developer had never intended to let people do what I did, nor was it honestly worth it. Plus, I found that two of my services, Vaultwarden and Jellyfin, have some nice guides specifically tailored to Podman that capitalize on some of the things I like better about Podman than Docker. Don't get me wrong, Docker does some things better - namely Docker Compose - but it's honestly just personal preference which containerization software you use.

I don't know much on this point, but I was interested in learning more. One website (and granted, it was only one) that I looked at implied that Unix sockets are faster than TCP/IP, and considering the DB is on the same host as all of the other software that will be using it, the database doesn't need to be reachable over the network at all, which could be a security enhancement. I actually do try to keep my homelab fairly secure, as it is quickly becoming the central location for all of my most important data, with my Vaultwarden instance being the most important data I need access to. This just made me realize that I need to make a backup of my homelab and put the backup medium in my fire-resistant lockbox lol. Some of this is probably hardly necessary, but I hope to also mitigate against stupid shit I do.
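For anyone landing here later who wants the socket route with Podman, a minimal sketch, assuming the host's socket directory is /var/run/postgresql; myapp, myapp-image, and mydb are placeholders:

# Run the app container with the host's Postgres socket directory mounted in
# (on SELinux hosts you may also need --security-opt label=disable)
podman run -d --name myapp \
  -v /var/run/postgresql:/var/run/postgresql \
  myapp-image

# Inside the container, point libpq at the socket directory instead of a host
psql "postgres:///mydb?host=/var/run/postgresql"

# Note: socket connections are authenticated by the "local" lines in
# pg_hba.conf (e.g. peer or md5), not the "host" lines.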
