Using MinIO S3 for docker compose volumes

So, I’ve been researching this project for a couple of days and can’t find a proper solution.
Here’s hoping one of your sharp minds can help me out.

tl;dr:
I want to host a Docker volume on my MinIO S3 server.

My setup is kinda simple.
I have one server running docker and my TrueNAS server running MinIO.
I want to be able to write a docker compose file that connects a volume to a bucket on the MinIO server.

I can’t find a plugin that lets me put the S3 connection information in a compose file and the access tokens in an .env file.
There is one Docker plugin (mochoa/s3fs-volume-plugin) that might work, but it requires a lot of host-side setup, which defeats my goal of just writing the information into a file.
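For reference, the host-side setup for that plugin looks roughly like this — the exact variable names and options are from memory and may differ, so check the plugin’s README before copying:

```shell
# Install the plugin once per Docker host, disabled so it can be configured first
docker plugin install mochoa/s3fs-volume-plugin --alias s3fs \
  --grant-all-permissions --disable

# Credentials and endpoint — names assumed from s3fs conventions, verify in the README
docker plugin set s3fs AWSACCESSKEYID=minio-access-key
docker plugin set s3fs AWSSECRETACCESSKEY=minio-secret-key
docker plugin set s3fs DEFAULT_S3FSOPTS="url=https://truenas.local:9000,use_path_request_style"

docker plugin enable s3fs
```

That per-host `docker plugin set` dance is exactly the part that can’t be expressed inside docker-compose.yml itself.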

I’m at the point of “f it, I’ll just write the plugin myself”, but would love to hear if any of you have a suggestion.

The best option I’ve seen is Rexray as a plugin, although I don’t have it working yet. I’d give you a link but this board doesn’t allow that: look on hub_docker_com for rexray/s3fs. The Rexray docs for S3 are at rexray_readthedocs_io/en/stable/user-guide/schedulers/docker/plug-ins/aws/#aws-s3fs. You’ll need to specify S3FS_OPTIONS="url=<MinIO path>". It works via S3FS, so any options for S3FS should work with Rexray. Alternatively, use s3fs/s3fs-fuse on the host to create your own bucket mounts and point your volume mounts at those. I’m trying to make this work so I can just set the driver name in docker-compose.yml and keep it mostly transparent.
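If you do get that plugin installed, the compose side could look something like this. The bucket name, endpoint, and option string are illustrative placeholders, and the install line mirrors the Rexray docs linked above (verify the variable names there):

```yaml
# First, once per host (not part of this file):
#   docker plugin install rexray/s3fs \
#     S3FS_ACCESSKEY=minio-access-key S3FS_SECRETKEY=minio-secret-key \
#     S3FS_OPTIONS="url=https://truenas.local:9000,use_path_request_style"

services:
  app:
    image: nginx
    volumes:
      - mydata:/usr/share/nginx/html

volumes:
  mydata:
    driver: rexray/s3fs
    name: my-minio-bucket   # assumed pre-existing bucket on the MinIO server
```

The point being: once the plugin is installed and configured on the host, the compose file only needs the driver name, which is about as transparent as this approach gets.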

Why? That’s the exact opposite of kinda simple. Trying to use S3 directly like a unix volume is a rabbit hole of ugly, unsupported hacks so deep it isn’t funny. If you have to use it, do it natively or not at all.*

It can be done, but it really shouldn’t be done without good reason. Object storage is not a POSIX filesystem, but where there’s a will, there are tools.

If your goal is for Docker to handle S3 storage directly and present it to the container as a normal unix volume, then good luck. It’s a very exotic ask, and where it was supported at all, it was usually implemented somewhere other than the Docker level.

Your options are:

Easiest, as always, is for the containerized workload itself to be S3-aware and thus capable of using the bucket directly; just give it the right config where it expects it.
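For example, if the workload speaks S3 natively, you just hand it the endpoint and credentials. Here’s an illustrative sketch using MinIO’s own `mc` client as the workload (the bucket name, host, and credential variables are placeholders; `MC_HOST_<alias>` is mc’s standard way of taking connection info from the environment):

```yaml
services:
  backup:
    image: minio/mc
    environment:
      # alias "truenas" → your MinIO endpoint, credentials pulled from .env
      MC_HOST_truenas: https://${MINIO_ACCESS_KEY}:${MINIO_SECRET_KEY}@truenas.local:9000
    entrypoint: ["mc", "mirror", "/data", "truenas/my-bucket"]
    volumes:
      - ./data:/data
```

Note this gives you exactly what the OP asked for — connection info in the compose file, tokens in .env — without any volume plugin at all, as long as the app can talk S3 itself.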

Given you already have a TrueNAS server running, why not use Docker’s native NFS support instead? NFS has an inline volume driver, making it the easiest option to use.
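Concretely, with an NFS share exported from TrueNAS, the whole thing fits in docker-compose.yml using the built-in `local` driver — no plugin required. Hostname and export path below are placeholders for your setup:

```yaml
services:
  app:
    image: nginx
    volumes:
      - appdata:/usr/share/nginx/html

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=truenas.local,rw,nfsvers=4
      device: ":/mnt/tank/appdata"
```

Docker mounts the share the first time a container uses the volume, and unlike the S3 route, this is a fully supported, POSIX-semantics code path.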

I strongly recommend rethinking why S3, and why not NFS. Using S3 as hinted in the OP creates a much more complex setup with comparatively fragile and untested critical elements, for little to no gain (a K.I.S.S. violation compared to the alternatives).

* Unless using it is the point, as a POC or anti-POC** demonstration.
** I.e. a demonstration of why it’s a bad idea in practice.