NAS, PiHole, LANCache, Plex, Git, Home Assistant, and probably a dedicated game server or two. I would consider myself a novice at this kind of thing but I love learning. How would you handle this? Is this a homelab?
I have a UDM Pro currently running PiHole via podman in UniFi OS, but I wouldn't mind moving it off the UDMP. I had Plex on my main system, but the drive died. The other services would be first-time setups.
The system I have available for this project is an old gaming PC: 4790K, 32GB DDR3, GTX 960 4GB, 1TB M.2 Intel 660p, and 12TB of HDD storage. I only have the built-in NIC but can get an add-in card if needed.
Users on the network can at times include 8-10 other PCs, 4 'smart' TVs, and who knows how many mobile devices.
Any recommendations welcome.
I imagine some version of Linux would be the best way to go about setting this up. I've used Ubuntu and Fedora before.
I would like to have one system do it all, but would it be better to have separate systems for some of this?
NAS → Samba and/or NFS
PiHole → Ditch it, run blocky or AdGuard Home instead (see the toy sketch of the blocking idea after this list)
LANCache → Makes little to no sense on such a small network (ditch it)
Plex → Plex (I guess, I’ve never touched it myself)
Git → Not sure what you mean here, just use an online server such as GitLab, GitHub, Codeberg, sr.ht?
Home Assistant → Run in a VM
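All three of those blockers share the same core trick: answer DNS queries for known ad/tracker domains with a null address so clients can't connect. A toy sketch of just that mechanism, with made-up blocklist entries and an unprivileged test port; a real resolver would forward non-blocked queries upstream instead of returning an empty answer:

```python
# Toy DNS sinkhole: the core idea behind Pi-hole/AdGuard Home/blocky.
# BLOCKLIST entries are made-up stand-ins, not a real blocklist.
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def qname(packet: bytes) -> str:
    """Decode the query name from a DNS question section."""
    labels, i = [], 12                      # question starts after 12-byte header
    while packet[i]:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode())
        i += n + 1
    return ".".join(labels)

def reply(packet: bytes, blocked: bool) -> bytes:
    qend = packet.index(0, 12) + 5          # end of QNAME + QTYPE + QCLASS
    header = packet[:2] + b"\x81\x80" + packet[4:6]
    if blocked:                             # one A record pointing nowhere
        header += b"\x00\x01\x00\x00\x00\x00"
        answer = (b"\xc0\x0c"               # pointer back to the QNAME
                  b"\x00\x01\x00\x01"       # TYPE A, CLASS IN
                  b"\x00\x00\x00\x3c"       # TTL 60s
                  b"\x00\x04" + bytes(4))   # RDATA: 0.0.0.0
        return header + packet[12:qend] + answer
    header += b"\x00\x00\x00\x00\x00\x00"   # no answers (a real resolver
    return header + packet[12:qend]         # would forward upstream here)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5353))                # unprivileged test port
while True:
    data, addr = sock.recvfrom(512)
    sock.sendto(reply(data, qname(data) in BLOCKLIST), addr)
```

You can poke it with `dig @127.0.0.1 -p 5353 ads.example.com` to see the 0.0.0.0 answer; the real tools add upstream forwarding, caching, and list management on top of this.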
In general I'd say that FreeBSD 14 would be a good candidate if you want ZFS and good documentation; otherwise go with whatever you prefer.
First, thanks for your input. I will definitely look into the ones I have never heard of.
I don’t plan on having redundancy for now. Will mainly be testing out setups to see what works best before committing.
LANCache is mainly for the convenience of some users with less storage, and for visitors bringing their own systems. They can get games/updates from the LAN instead of relying on Steam's servers. At least that's the plan.
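As I understand it, the mechanism is just a LAN-side HTTP cache: the first download comes from the CDN and is kept on disk, and everyone after that pulls the local copy at LAN speed. A toy sketch of only that caching step, with a made-up upstream URL and cache path; the real LANCache also needs DNS pointing the CDN hostnames at the cache box:

```python
# Toy illustration of the LANCache idea: first request fetches from the
# "CDN" and keeps a copy on disk; repeat requests are served locally.
# UPSTREAM and CACHE are made-up stand-ins, not real LANCache settings.
import hashlib, pathlib, urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://example-cdn.test"        # hypothetical CDN origin
CACHE = pathlib.Path("/var/cache/toy-lancache")

class CacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = CACHE / hashlib.sha256(self.path.encode()).hexdigest()
        if not key.exists():                # cache miss: fetch once from CDN
            with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
                key.write_bytes(upstream.read())
        body = key.read_bytes()             # cache hit: served at LAN speed
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    CACHE.mkdir(parents=True, exist_ok=True)
    ThreadingHTTPServer(("0.0.0.0", 8080), CacheHandler).serve_forever()
```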
And the Git part of it is for some of us working on projects together who want to keep the code completely private and within the LAN.
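From what I've read, that part doesn't require much: a bare repository on the server, reached over SSH, is already a complete private Git setup. A minimal sketch, with made-up paths and hostname:

```python
# Minimal private-Git-on-LAN setup: a bare repository reached over SSH.
# The path and hostname below are made-up examples.
import subprocess

REPO = "/srv/git/project.git"               # shared location on the server
subprocess.run(["git", "init", "--bare", REPO], check=True)

# Collaborators on the LAN then clone over SSH (run on each client):
#   git clone ssh://user@server.lan/srv/git/project.git
```

A web frontend (cgit, Gitea, etc.) can be layered on later if browsing is wanted; pushing and pulling needs nothing beyond SSH access.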
VMs are cheap these days. First comes security, then better management, better troubleshooting, and amazing portability (live migration is a thing). Having things "contained", be it in a VM or a container, is the cleaner and easier approach. Spinning up a dozen VMs, one per app, is made so easy with modern hypervisors.
I certainly don't want to hunt for problems on a server running 20 daemons under one OS, each stealing CPU cycles and memory from the others and ending up in an unmanageable mess. I've been through the 90s once, and I'm glad we have virtualization and 50 servers in one machine today.
The server seems fine for a first server. I think the CPU will be the limiting factor, and power consumption is far higher than you'd have with modern CPUs. Memory is surprisingly good for 4th gen. The on-board NIC, i.e. 1 Gbit, is usually fine, but don't expect filesharing/LANCache/Plex to service multiple users at once. I download from Steam via 1 Gbit WAN, so 1 Gbit LAN wouldn't do anything for me. 10 Gbit copper/SFP+ is a good upgrade, but it also requires corresponding network gear. And file transfers above 100 MB/s require somewhat more sophisticated storage.
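Rough numbers on that 1 Gbit point, assuming roughly 6% Ethernet/TCP protocol overhead (a ballpark figure, not a measurement):

```python
# Back-of-the-envelope throughput on a 1 Gbit/s link shared by N users.
link_mbps = 1000                       # 1 Gbit/s
usable = link_mbps / 8 * 0.94          # ~6% protocol overhead (rough)
for users in (1, 4, 8):
    print(f"{users} users: ~{usable / users:.0f} MB/s each")
# 1 users: ~118 MB/s each; 4 users: ~29 MB/s each; 8 users: ~15 MB/s each
```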
Just keep things manageable and easy to roll back, restore or migrate. Getting familiar with a NAS OS or hypervisor will be helpful and reduce administration to a minimum.
I’m a Proxmox guy so I recommend Proxmox. But whatever floats your boat.
Problems and bottlenecks show up in practice and you can then act accordingly. Without knowing details, nobody can tell if all will be fine. User behavior and workload will tell.
Check the load and upgrade accordingly. And without redundancy, do backups. Everyone makes mistakes and things go wrong; save yourself the trouble.
I see, I hadn't thought about having a VM per service, but that would make for a cleaner setup to manage since I'd know which VM is doing what.
But then, as you said, my 4790K would be the limiting factor even if I assign a core per VM. I could probably group some services together so they share a VM. I'm probably misunderstanding how the allocations work.
I've only used VMs for Windows, macOS/OS X, or Linux using VMware/VirtualBox, and containers that one time I used podman.
Proxmox looks very clean; I will definitely be trying a setup with it.
And so it seems it will come down to which has the best container/VM management and compatibility with the services I want to have.
Once you’re more comfortable with VM setup, you can get better resource efficiency with container orchestrators like Kubernetes or Nomad. This way, you don’t need a whole VM per service (which has significant RAM overhead for small services), but you can still have the compartmentalization.
But start with VMs first. Consider learning infrastructure-as-code tools like Ansible. These are fundamentals before moving on to container orchestrators.
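To make the Ansible suggestion concrete: the core idea it builds on is idempotent state enforcement, i.e. you describe the desired state and the tool changes the machine only if it has drifted. A toy illustration of that idea (the file path and config line are made-up examples, and this is not Ansible's actual code, just the concept):

```python
# Toy version of what configuration-management tools like Ansible do:
# declare desired state, check current state, act only on drift.
import pathlib

def ensure_line(path: str, line: str) -> bool:
    """Idempotent: appends `line` only if missing; returns True if changed."""
    f = pathlib.Path(path)
    current = f.read_text().splitlines() if f.exists() else []
    if line in current:
        return False                    # already converged, nothing to do
    f.write_text("\n".join(current + [line]) + "\n")
    return True

# Safe to run any number of times; only the first run modifies the file.
changed = ensure_line("/tmp/example.conf", "max_connections = 100")
print("changed" if changed else "ok (no change)")
```

Because every run converges to the same state, you can keep your whole setup in version control and replay it onto a fresh VM, which is exactly what makes rebuilds and migrations painless.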
Oh, very interesting. I looked this up a bit. So I could run every service in its own container and schedule them using Ansible to free up resources when needed?
Now VMs seem very clunky. But I imagine not all services will run well, or at all, in a container.
LANCache adds more complexity, and from a quick glance at the forums it doesn't seem to be as efficient as it used to be, which kinda makes sense as there's a lot more focus on security and integrity (HTTPS etc.) than before. If anything, just rip out the essential parts and integrate them into your setup, but I doubt it's worth the time and effort given that no one has even bothered to package it.
Git is fine but can be a lot of work, especially if you want to spin up something like GitLab compared to a simple Git server with a frontend like cgit.
Home Assistant isn't really made to be packaged, so for the sake of maintenance you're better off running it in a VM, as they provide and maintain a standalone distro (Home Assistant OS) for it.
Since this is going to be a rather small setup, I didn't recommend Proxmox and whatnot: it adds more complexity, it's not really recommended if you care about your data (the NAS part), and it's more work to maintain for little to no gain in your case. That's why I suggested bare metal, which works fine and is a lot easier to troubleshoot if something breaks. I'm not sure why performance would be an issue as long as you're relatively realistic about it. If you run with defaults for pretty much everything, it'll do fine unless you're transcoding media and running a game server at the same time…