Really hard to gauge how much storage or compute you’ll need. Everything on your list could theoretically run well enough on a Raspberry Pi, or you could find yourself happier filling a box full of disks.
If the latter, and you’re going for 6-8 drives or more, look up LSI controllers flashed to IT mode for ZFS on Linux… that way you get both Docker and native container support without awkward VM shenanigans.
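For reference, once the HBA is in IT mode and the bare drives show up in Linux, pool creation is a one-liner; here’s a hypothetical sketch for a 6-drive RAIDZ2 (device, pool and dataset names are made up, not from the poster’s setup):

```shell
# Use /dev/disk/by-id so the pool survives drive-letter reshuffles.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

zfs create -o compression=lz4 tank/media   # dataset for bulk storage
zpool status tank                          # verify all vdevs are ONLINE
```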
My setup has a dedicated hardware router, an R720xd for TrueNAS, and another R720 for vSphere. 10GbE NICs for those are like $20 on eBay, so you can have really fast storage access between the two, as well as local storage on the R720 if needed.
I had a hard time getting the included RAID controller flashed into IT mode, so I bought pre-flashed ones on eBay.
I also highly recommend Nextcloud for hosting your own stuff. It checked a lot of boxes for me: bookmark sync, file sync, password manager. Plenty of other good options too.
I personally run a Ryzen 5 1600 at home, and something around that should fit your setup perfectly as a NAS plus some VMs. As everyone else pointed out, do yourself a favor and buy a really small router box, something with a Celeron or better, and you’ll be golden with OPNsense or pfSense (or even OpenWRT).
My NAS, for example, runs 7 Linux containers (they’re like VMs without the overhead of their own kernel). One does Samba, another runs roughly 30 Docker containers (yes, containers in a container ;P), another FreeIPA, another backups, to name a few. Most of them have bind mounts to my BTRFS array, so they have direct access to it without any translation layer. All that works pretty well in Proxmox.
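For anyone curious what those bind mounts look like, Proxmox’s `pct` tool can add a mount point to an existing container; a minimal sketch with made-up container ID and paths:

```shell
# Give LXC container 101 direct access to the array mounted
# at /mnt/array on the Proxmox host (no network share in between).
pct set 101 -mp0 /mnt/array,mp=/mnt/array

# For unprivileged containers you may also need a UID/GID mapping,
# or chown the host path into the shifted range (100000+).
pct config 101   # confirm the mp0 entry was added
```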
Most of the VMs, on the other hand, are Kubernetes nodes. And all that on one little Ryzen. Just make sure to get enough RAM!!
I wanted something similar to you. I went for an x86 box for my router (OpenBSD 6.9, Asrock Rack IMB-191, Pentium G4560, 4GB RAM, Intel NICs), a Synology DS218+ for the NAS (I’d prefer self-built, but Surveillance Station for my 4x 4K PoE Hikvision cams won the day), and a Threadripper desktop for labbing/playing on *nix.
The NAS is only a low end 2 drive, but it self hosts almost our whole house in Docker. The router… routes, and handles firewalling (all hail pf) for our gigabit WAN. The NAS handles network shares of course (afp, ftp/sftp/ftps, nfs, rclone/rsync, smb etc), nginx reverse proxy for the whole domain/network, and lots of Docker containers including:
Ad blocking, DHCP and DNS (AdGuard Home)
Emby, Jellyfin and Plex (because why not?)
qBittorrent and Transmission
and anything else I want to play with. It’s not perfect, but it’s been rock solid and it’s very easy to administer and back up.
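As a rough idea of what one per-service entry in such an nginx reverse proxy looks like (hostname and port are assumptions for illustration, not the poster’s actual config):

```nginx
server {
    listen 80;
    server_name jellyfin.example.lan;

    location / {
        # Forward to the container listening on 8096,
        # preserving the original host and client address.
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```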
I have had my server up and running for about a week now. Proxmox is the base install with 2 VMs running. One VM is TrueNAS with a PCI-passthrough HBA card, and the second is a Docker host running Rancher, currently hosting Jellyfin, OpenVPN, Pi-hole, Minecraft and Heimdall.
I’ve yet to dabble with pfSense or OPNsense or any other router-type OSes. I’m probably gonna wait until I get a dedicated machine for that.
The only thing I still need to figure out is how to make Jellyfin use the Quadro K2200 from inside the Docker container, and how to set that up in Rancher.
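Not the poster’s setup, but the usual route for NVIDIA cards in plain Docker is the NVIDIA Container Toolkit plus the `--gpus` flag; a hedged sketch (paths are examples, and Rancher would express the same thing through its UI or a compose file):

```shell
# Assumes the nvidia-container-toolkit is installed on the Docker host.
docker run -d --name jellyfin \
  --gpus all \                        # expose the NVIDIA GPU to the container
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \   # example host paths, adjust to taste
  -v /srv/media:/media \
  jellyfin/jellyfin
# Then enable NVENC hardware transcoding under
# Dashboard -> Playback inside Jellyfin itself.
```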
WOW, sweet… can’t see you needing to upgrade soon! Nice choices. Mine is similar but way less powerful.
I set up my storage on Proxmox via ZFS with a guide from here.
I also run Plex in a Debian LXC container with GPU passthrough, which has been working out well for me so far. I may move to TrueNAS to make it a little easier to monitor my ZFS pools. It may be the easy way… but I can mess around more once I have some more knowledge.
I run Pi-hole with recursive DNS… I’m not entirely sure what that part does yet, but I made it work. I’m learning though.
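For context, “Pi-hole with recursive DNS” usually means pointing Pi-hole at a local unbound instance that resolves names itself starting from the root servers, instead of forwarding to Google or Cloudflare. A minimal sketch following the convention from the Pi-hole docs (the port number is just that convention):

```conf
# /etc/unbound/unbound.conf.d/pi-hole.conf
server:
    interface: 127.0.0.1
    port: 5335
    do-ip6: no
    harden-glue: yes
    harden-dnssec-stripped: yes
```

Pi-hole’s upstream DNS is then set to 127.0.0.1#5335.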
Not nearly as powerful as your rig (which is awesome btw), but the CPU is up for an upgrade… I just have to decide whether to put the 3700X or the cheap 3900XT in it. I may do the 3700X and use the 3900XT in my big rig with the new GPU.
I do have the same goals as you. I’m exploring the posts by @PhaseLockedLoop in the series he did.
I’m liking the push to self-host. The next step in any learning, I’d say, is how to avoid single points of failure, which can be quite catastrophic. I kind of want to pick @Dynamic_Gravity’s brain on avoiding SPOFs.
I do have high availability: a truly unlimited Plus plan with T-Mobile, full hotspot, good for 350 GB before throttling… in case Comshaft, the shittiest network in the history of networks, ever goes down.
Nope, it’s an old rate and I stay on it, and I upgrade my own phone… I don’t do anything through them.
Google Pixel 3 XL
OS: LineageOS 18.1, self-built, Titan M signed, no root
Main Store: Fdroid
Case: Spigen Tough Armor
Screen Protector: THICC tempered glass protector
Very much handle my own deal my dude
Almost all my stuff is self-hosted: media and music streaming, my VPNs, my NTP server, my own full recursive DNS server (top down). The list goes on. I don’t even use the LTE towers for accurate time; my NTP server is GPS-disciplined. (I use a high narrow-band-rejection antenna, given it sits in an RF-crowded area.)
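A GPS-disciplined NTP server is typically gpsd feeding chrony over shared memory; a sketch of the relevant chrony.conf lines, assuming gpsd is already running (the offsets and subnet are placeholders you’d tune for your hardware):

```conf
# SHM 0 is gpsd's NMEA time (coarse, used only as a sanity reference),
# SHM 1 is the PPS pulse (the precise one, preferred for discipline).
refclock SHM 0 refid NMEA offset 0.2 delay 0.5 noselect
refclock SHM 1 refid PPS precision 1e-7 prefer
allow 192.168.0.0/16   # serve time to the LAN (example subnet)
```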
@Dynamic_Gravity do you know anyone on the forum who has implemented Secure NTP, aka SNTP? It’s something I’m trying to implement so nobody can tamper with my time.
Debatable. My NAS is a VM inside Proxmox. It has 2 TB allocated out of the 14 TB the host has. It’s only used for storing lots of data and isn’t accessed that often. It depends on what you want from your NAS. If you want to run VMs on it, especially if you want HA without replication on multiple hosts’ internal storage, then a separate box makes a lot of sense. Otherwise, having it inside a VM is a fine alternative. And no, I don’t have passthrough enabled; it’s just a raw KVM disk image. A separate box is still a single point of failure anyway, but that shouldn’t be too high a risk.
If you have the option, run OCI containers (Docker / Podman / K8s / K3s) on Linux, or jails if you’re into TrueNAS / *BSD. If you want to fiddle with your configs often on Linux, try either LXC in Proxmox or LXD in VMs. I would go with LXD, just because I feel it’s better (I couldn’t find an option to live migrate LXC containers in Proxmox, while LXD uses CRIU to do so; offline migration is fast, yeah, you can do it in seconds, but it’s still a reboot of the container, which affects your uptime and availability, if that’s something you care about, like say for your mail server).
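For illustration, moving an LXD container between cluster members looks something like this (container and node names are hypothetical; live migration additionally needs CRIU installed and is famously finicky):

```shell
lxc config set web migration.stateful true   # allow memory/state transfer (needs CRIU)
lxc move web --target node2                  # live-migrate if stateful transfer works...

# ...or the fast-but-not-live fallback, which is the brief
# downtime mentioned above:
lxc stop web && lxc move web --target node2 && lxc start web
```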
TBCH, I wouldn’t. Maybe I’m a purist, or maybe it’s my autism kicking in, but I really don’t like a one-size-fits-all service like Nextcloud. Need a mail interface? Use Zimbra or SquirrelMail, or better yet, just use an email client. Need a file server? SFTP or Samba, preferably over a VPN. Need a password manager? KeePassXC with the database on your SFTP server, or Bitwarden_rs. I’m not a fan of bookmark syncing, so I don’t have a solution for that, but I see no reason why something like Syncthing wouldn’t work (though I’d rather use rsync or scp whenever possible).
I think it would be easier to host Jellyfin in an LXD Container for that, but I won’t spoil your fun. And I always like seeing neoflexes.
Just what I was saying (I’m reading and replying sequentially).
I think the 2700 is just fine. You should try to migrate your VMs to containers. If you have too many, you may want to automate it, which could be an interesting project (in theory it should basically be just mounting disk images and copying files over, but there may be issues with databases like MySQL if you use DBs).
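The “mount the disk image and copy files over” idea can be sketched with the libguestfs tools (all paths here are made up for illustration):

```shell
# Mount the VM's qcow2 image read-only on the host...
mkdir /tmp/vmroot
guestmount -a /var/lib/vz/images/100/vm-100-disk-0.qcow2 -i --ro /tmp/vmroot

# ...and copy the filesystem into the container's rootfs,
# preserving permissions, ACLs and xattrs.
rsync -aHAX /tmp/vmroot/ /var/lib/lxc/new-ct/rootfs/
guestunmount /tmp/vmroot

# Databases are the exception: dump and restore them (e.g. mysqldump)
# rather than copying raw datadir files, to avoid inconsistent state.
```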
I’m also interested in that, but I don’t want to run anything besides Chrony or OpenNTPD. I believe it should be doable with Chrony. Never tried it though.
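Worth noting that recent Chrony (4.0+) supports Network Time Security (NTS, RFC 8915) natively, which authenticates the time source; enabling it is a couple of chrony.conf lines (the server shown is one public NTS endpoint, pick your own):

```conf
server time.cloudflare.com iburst nts   # NTS-authenticated upstream
minsources 1
```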
That’s Simple Network Time Protocol. Secure NTP is NTPsec.
You could use one internet gateway, à la a Linode server: host a VPN (WireGuard) there and have your infrastructure be a “road warrior,” i.e. the infrastructure is always connected to the VPS’s VPN and answers on the public IP address of your VPS. You may have some issues, like your mail server being down when you move to a new location, but that shouldn’t be too big a deal if you plan carefully (or if you host the more critical services on the VPS and the rest in your LAN).
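In WireGuard terms, the “road warrior” side is just a peer config with a keepalive so the VPS can always reach the home side through NAT; a sketch with placeholder keys and addresses:

```conf
# /etc/wireguard/wg0.conf on the home server (the roaming peer)
[Interface]
PrivateKey = <home-server-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25   # keeps the NAT mapping alive so the VPS can reach us
```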
This is also what I’m interested in, but instead I want redundancy for my services (just for giggles; 99% of self-hosting at home can do without HA), so I’m thinking of building a Raspberry Pi Dramble. A PC case with 3.5" HDD trays should make for a fun “hot-swappable” Pi system, with an 8-port PoE switch inside. Nowadays I don’t have much use for VMs other than OpenBSD, so I could take those out of the equation and just run a bunch of LXC containers inside an LXD cluster on the Pi Dramble.

I have read stories of people mistakenly loading 100s of containers onto a single Pi 3 (bugs in the load-balancing deploy scripts), and the poor Pi ran lots of them for a long while before it crashed. I don’t remember if it was Docker or LXD (I believe it was LXD), but considering how many containers you can pack onto a Pi 3 without it even sweating, my project should be doable with 5 Pi 4s (4 or 8 GB variants) - or I could just try to get my hands on the Turing Pi 2 (for the RPi CM4), which would make much more sense (but still has a risky single point of failure, the board itself).

What I would need is a separate NAS box for the storage. Or maybe 3 or 4 boxes running Ceph, but then portability kinda goes out the window (even with just a RAID mirror of 2 disks, it will be quite bulky or use a lot of space). I think a separate Pi CM4 with a 4-port SATA PCIe card and a RAID 10 of 2.5" disks (be it SSDs or HDDs) would make for better portability and maybe even fit near the PoE switch (look up the Wiretrustee SATA).
With the advent of the Pi 4 and especially the Pi CM4, there are now lots of options for self-hosting folks. But if you can’t wait that long, I have an easier solution with no waiting required: 3x 2nd-hand Intel NUCs (the cubic ones, not the chungus latest ones), a 5-port switch, and maybe 1 more NUC as a router (with a USB NIC). The advantage is that you can put Proxmox on them and make a really compact cluster, and even run VMs if you really need an OS other than Linux.

I have done a “mini-infrastructure” using a NUC for OPNsense, one for Proxmox (which runs FTP, Samba and the Ubiquiti UniFi controller), and one on standby (we just had it lying around). We didn’t need HA. That little cluster has been running for half a year now with no issues.

Of course, you could just go with 2 or even just 1 NUC and an el-cheapo router that can run OpenWRT, but I would argue OPNsense / pfSense or pure OpenBSD / FreeBSD make more sense if you intend to have a part of your infrastructure permanently connected to a VPN. You can do that with OpenWRT too, but unless you also buy a managed switch to go along with it, you won’t be able to split your network, whereas with those, you should be able to just configure your Proxmox host to be VLAN-aware and have the rest of your network use the native / untagged VLAN. This setup is not the most secure, I understand (for more security, you’d also need a managed switch anyway), but if you want a portable self-hosting setup on the cheap, it should be fine.
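The VLAN-aware Proxmox bit boils down to a few lines in `/etc/network/interfaces`; a sketch with example interface names and addresses:

```conf
# VLAN-aware bridge on a Proxmox host; guests then get a VLAN tag
# per virtual NIC in their VM/CT config instead of per-bridge.
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```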
I didn’t intend for this post to be this long. Sorry for the wall-of-text!