Home Server Goals and Issues

I personally use multiple docker-compose files, one for each service normally, each in its own folder along with other config files as needed. I cd to the correct folder for normal operation, and in scripts I use `-f /path/to/docker-compose.yml`.
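
To sketch what I mean (all paths and the service name here are made up):

```shell
# Hypothetical layout: one folder per service, each with its own compose file.
#   ~/services/jellyfin/docker-compose.yml
#   ~/services/traefik/docker-compose.yml

SERVICE=jellyfin
COMPOSE_FILE="$HOME/services/$SERVICE/docker-compose.yml"

# In scripts, point compose at the exact file instead of cd-ing:
docker-compose -f "$COMPOSE_FILE" pull
docker-compose -f "$COMPOSE_FILE" up -d
```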

I see you have a single file with everything in it. So what do you see as the advantage to having one monolithic file for docker-compose rather than multiple? I’m curious.

I like it.

I have a slightly different situation: due to my limited upload bandwidth, a number of my self-hosted things are running on VPSs, so I use rsync over ssh with an exclude list to copy stuff over, then make a .tar.gz.
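
In script form it's roughly this (the host alias, paths, and exclude file are all made up):

```shell
# Mirror the remote tree locally, honoring an exclude list.
rsync -az --delete \
  --exclude-from="$HOME/backup-excludes.txt" \
  -e ssh vps:/srv/selfhosted/ "$HOME/vps-mirror/"

# Roll the mirror into a dated .tar.gz for cold storage.
tar -czf "vps-backup-$(date +%F).tar.gz" -C "$HOME" vps-mirror
```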

IMO, git would be a better solution. Proper version control.

Nope, not an issue. Or you could paste it directly into here, prevents link rot.

You can create code blocks with triple backticks.

And then create a collapsible block. Click the gear, then Hide Details.

[details="Clickme"]
Hidden text
[/details]

Overall, this is really neat, thanks for sharing what you are setting up. I’m also doing things like this, although I am not quite as far along as you are in a number of areas.

1 Like

It comes down to just a very few reasons, really.

The first is to not introduce unnecessary complexity into the system. I have limited time, money, and skill to put into this project (I still have to work, and as a writer, my professional skillset is somewhat divorced from computers, naturally). At the same time, though, the maximum feasible functionality is a goal.

A specific example of that is Traefik, SSO, the docker socket proxy, and the rest of what I want to implement on that level. It might just work to have these things in their own folders, and be fine. It also might break everything’s ability to talk to everything else, or something in between, and then you need more complexity to make them work together.

The second reason is, yes, VC. I’ll get to discussing more about that later on. Oh, and there are exceptions to the monolith stack, though they’re few.

I mean, that’s probably fine. I want the entire server in my home because for me, “connect to it from within the home” is the highest-priority use case. That’s why there’s no access from the internet yet. But I work from home, and all my highest-priority users are here. The rest will be shuffled in as we get there.

I have plans to put up a VPS and other stuff to help control and secure access from the internet, but that’s later.

For casually sharing it, not so much. For actually maintaining it, especially as it gets bigger, yes. I was thinking of Mercurial or Git, but I haven’t looked into either just yet. And this is that exception that I was thinking of, obviously it’s better to keep the VC for the server stack out of the server stack itself.

Thanks, mate. I know that the email stacks are going to require some manual writing and sending of “Oi, this isn’t a spam domain please whitelist so my people can talk to your people” nonsense emails, for example.

1 Like

Alright, that’s a good reason.
I use standard debian nginx on the host as my reverse proxy, rather than something in a docker container. So networking does not really factor into it for me.

Git for sure.

You can just use a local repository, or you can set it up as a GUI-less server with just git and ssh, or alternatively an HTTP server.
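
For instance, a bare repo plus ssh is about this much work (server name and paths are hypothetical):

```shell
# One-time, on the server: create a bare repository (no working tree, no GUI).
ssh homeserver 'git init --bare /srv/git/server-stack.git'

# On the workstation: put the compose tree under version control and push.
cd ~/server-stack
git init
git add .
git commit -m "Initial compose and config"
git remote add origin homeserver:/srv/git/server-stack.git
git push -u origin master
```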

And you can share things if you set up a GUI server like Gitea.

Although having a paste service for casual sharing is not bad.

Oh, some places are such a pain. Microsoft’s spam filters are just terrible.

I use mailcow; it was really easy to set up (as email servers go). It requires Docker, although that would not be an issue for you. It does its own thing in a couple of ways, so it may not integrate well with the rest of your compose configuration. It has you clone a git repo, edit a config file or two, then do docker-compose up -d, and you are off to the races in the web GUI. But updates are a pain: they require you to update via their script, which I am not a fan of.

1 Like

Very much appreciate all the info here, thank you!

1 Like

OK. So now the update, or “Why I haven’t posted in a week”:

Databases. Databases. Databases.

I’m working on moving everything I can to categorized Postgres multi-tenancy. So there’ll be a database for “content services”, for “work services”, for “security-sensitive services”.

But progress is slow, as I’ve never worked with databases before. I frankly don’t actually know what I’m doing, but that’s OK.
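
For reference, the rough shape of what I’m attempting — one shared Postgres instance with a database and owner role per category (all names made up, and again, I may be holding this wrong):

```sql
-- Hypothetical sketch: one role + one database per service category.
CREATE ROLE content_svc LOGIN PASSWORD 'changeme';
CREATE DATABASE content_services OWNER content_svc;

CREATE ROLE work_svc LOGIN PASSWORD 'changeme';
CREATE DATABASE work_services OWNER work_svc;

CREATE ROLE secure_svc LOGIN PASSWORD 'changeme';
CREATE DATABASE security_services OWNER secure_svc;
```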

1 Like

that’s not good

Attempting to implement Gitea, Psono, Firefly, and some other stuff this week.

I’ve given up on multi-tenancy for now, it’s taken too much time and if it doesn’t simply WORK with logical data fed to it, I’ll give things discrete Postgres (or whatever’s compatible) instances.

Most stuff is working well. Trilium sync is touchy. Vikunja is also touchy. I may have to replace these solutions. Further, I’ve noticed that the Jellyfin failure appears to be associated with journal files that aren’t cleaned up, sometimes because of unclean shutdowns of the host machine, sometimes because of a crash of the service. But frankly, the amount of stuff that just works over time is stupendous.

Hey hey, things are moving.

  • Server’s running again (Traefik, Socket Proxy, Organizr, Jellyfin, Planka, as of tonight - planka’s data was backed up elsewhere so it’s started fresh, and Jellyfin’s backup was pulled in order to return the core household user set to their profiles and watch histories – Big win!)
  • Got a Linode up for a Wireguard tunnel proxy (Thanks @PhaseLockedLoop - that name’s going to come up a bit and he deserves so much credit btw* - I can’t credit my partner K since she’s only on the Discord but she’s done a lot of the legwork here)
  • We’ve abandoned dedicated hardware for now, the market’s just… you know. I am watching for 1 & 2U used servers with decent amounts of drive bays but I’m content for now running it on this machine.
  • Multi-tenancy is dead - it’s not happening. I looked into the perf costs that @PhaseLockedLoop referred me to on Discord. They are a nightmare.
  • Working on Let’s Encrypt SSL - this should be done soon, and we’ll be back to re-implementing containers.
  • We’ve done the work to make Traefik be aware of multiple server stacks - this will allow different stacks for different purposes (and I’ll single-stack anything that needs redundancy).

I should have gitea or other VC up later this week with non-secrets versions of compose and so forth up and linked here within 2 weeks max - aiming for 1.

2 Likes

You’re welcome

Pretty much, and for a server of your design needs, it’s not really needed.

1 Like

Yes. But I might well be doing the multi-container/multi-db thing.

We’ll see what performance is like, and if I trace lag somewhere to thread limits or other container limits, I’ll spin up a compose of just those 2-4 containers for continuing operation.

1 Like

OK. Traefik 2 SSL is causing us some issues.

tls:
  stores:
    default:
      defaultCertificate:
        - certFile: [path-to-fullchain.pem]
          keyFile: [path-to-privkey.pem]

This is a segment of the current traefik.yml (a replacement for traefik.toml that removes an unnecessary filetype / language setup to remember and maintain) that is throwing an error. That error follows.

time="2021-07-05T13:47:45-06:00" level=error msg="Cannot start the provider *file.Provider: field not found, node: [0]"

Obviously this is required for HTTPS support on the server, which in turn is required to make users happy (normies freak out when their browser says “PROBABLY UNSAFE SITE TURN BACK” after all). Besides the actual security benefits.

Removing the block from the .yml removes the error, so it’s clearly here. Can anyone tell me what’s wrong with the block? We can’t figure it out.

Edit: it is not the traefik.yml, it’s the dynamic config. Excuse me, many text files to keep track of (another reason for VC).

1 Like

Well the stack trace says something about a file provider. Is it pulling time or parsing time from something like a file?

(At a rest stop)

Thank you for driving safely. I suspect after this year I’ll never be cavalier about that again.

We’re not really sure why it’s calling out file.Provider, to be honest. Some searching has turned up things that don’t seem to be leads.

At this page we have a simple update note:

File Provider
The file parser has been changed, since v2.3 the unknown options/fields in a dynamic configuration file are treated as errors.

Meanwhile, at this one we have a few references to how to write similar entries (mostly not about SSL).

Meanwhile, reading the docs led us here instead buuuuuuut…

That’s the same freaking config.

I won’t repost ours from above again so soon, but here’s the TLS cert/key config snippet from that section of the docs for easy on-forum comparison:

tls:
  stores:
    default:
      defaultCertificate:
        certFile: path/to/cert.crt
        keyFile: path/to/cert.key

The only difference is the single hyphen, which is supposed to associate a cert and key as a pair so that Traefik doesn’t fuck up the pairing - that form is sourced from elsewhere on the same page, as an attempt to future-proof for the possibility of multiple later domains (which may be required for user-access segregation).

1 Like

Having messed with the formatting somewhat, removed that odd hyphen, and (I think, but could be wrong: re-) tested the SSL configuration, we have made progress:

A new error.

time="2021-07-05T17:09:46-06:00" level=error msg="Error while creating certificate store: failed to load X509 key pair: tls: failed to find any PEM data in certificate input" tlsStoreName=default

Aand Traefik’s container CLI cannot see these .pem’s! Fascinating.

Further investigation proceeding.

1 Like

Certbot saved symlinks, not a cert/key pair, and we didn’t notice. FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

Running through the process again, see if we can set up a proper cert store and have actual, y’know, files.
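
If it helps anyone later: certbot’s `live/` directory holds symlinks into `archive/`, so copying with `-L` (dereference) is one way to end up with real files a container bind mount can see (paths hypothetical):

```shell
# live/ entries are symlinks into archive/ - a container that only mounts
# live/ may see dangling links. Dereference while copying:
ls -l /etc/letsencrypt/live/example.com/
cp -L /etc/letsencrypt/live/example.com/fullchain.pem /srv/traefik/certs/
cp -L /etc/letsencrypt/live/example.com/privkey.pem  /srv/traefik/certs/
```

Mounting all of `/etc/letsencrypt` (so `archive/` comes along for the ride) is the other common fix.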

1 Like

The SSL saga is over.

This was a misunderstanding of the docs (we can’t future-proof like that: no hyphen on a single cert/key pair, and certbot is apparently somewhat misleading).
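
For posterity, here’s how I now read the two shapes in the docs (paths made up, so take with salt) - the hyphen/list form belongs to the separate `certificates` section, which can hold several pairs, while `defaultCertificate` is a single mapping:

```yaml
tls:
  certificates:
    - certFile: /certs/example.com.crt
      keyFile: /certs/example.com.key
  stores:
    default:
      defaultCertificate:     # single mapping - no hyphen here
        certFile: /certs/example.com.crt
        keyFile: /certs/example.com.key
```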

SSL works now, but:

  • because it is a wildcard cert, Organizr cannot resolve at the root domain without throwing a bad-domain error
  • we’ll need to add a line in Traefik to redirect the root domain to www. for Organizr
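
Something like this middleware is what I have in mind for that root-to-www line (domain hypothetical, using Traefik’s redirectRegex as I understand it):

```yaml
http:
  middlewares:
    root-to-www:
      redirectRegex:
        # Hypothetical domain; send bare-root requests to the www. host.
        regex: "^https://example\\.com/(.*)"
        replacement: "https://www.example.com/${1}"
        permanent: true
```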

Ultimately frustrating, but now we’ll move on to implementing more containers.

1 Like

Okay, I’ve got a moment. Stopping near the southeast ID repeater station before heading into the passes (little to no signal).

Okay so I actually don’t use a wildcard cert. I register all my subdomains in a script with certbot because nginx allows me to do this
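
A sketch of that kind of script, for anyone curious (the domain, subdomain list, and flags are my guesses, not the actual script):

```shell
# Register one certificate per subdomain via certbot's nginx plugin.
DOMAIN=example.com
for sub in git rss media notes; do
  certbot certonly --nginx \
    -d "$sub.$DOMAIN" \
    --non-interactive --agree-tos -m admin@example.com
done
```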

Did everything get solved?

It works. I specifically chose a wildcard because of the sheer number of services, and therefore, subdomains. There may be other advantages to individually registered subdomains but holy hell I don’t want to deal with it, at least not at this stage.

We’re now working on Wireguard, adding a front-page newspost to Organizr, and adding Trilium/Shiori/FreshRSS, and Ombi, in that order.

Beta friends/family (i.e. non-household) accounts are going out manually so that people can change their passwords as they’re notified on the Wireguard being active.

1 Like

I navigate via radio towers as landmarks. They always have a public searchable known location lol

Do you have services that are admining or critical to infrastructure exposed via a subdomain?

Wait you made a site to site VPN peer for each? or are you talking about the services themselves?

No. Anything like Portainer or the Traefik dashboard (the dashboard is not… directly useful, just informational, but I could see it being a dreadful attack vector) can only be reached through localhost currently as a security measure. I’ll be extending that to LAN-connections-only later to ease administration and to future-proof the setup against the later move to rack hardware. Docker itself does not even have non-local-machine access yet, though it will have to have LAN administration later.

No, I mean I’m handing out logins so that when the single Wireguard tunnel is working, they can come in and change passwords.

1 Like