Should be more than adequate; that chip is not a power hog. With 16GB of RAM you should be good.
You won’t have 100 users downloading things at the same time, and even if you did, you would hit your home connection’s upload bandwidth long before any other bottleneck (assuming the downloads don’t just finish almost instantly anyway). For a rough sense of scale, a typical 20 Mbps residential upload is only about 2.5 MB/s in total, shared across every active transfer.
I used to have a samba server with about 2TB of mostly small files (tens to hundreds of MB, mostly Excel) that ran on a Core 2 Duo with 8GB of RAM and 2x RAID 1 arrays of spinning rust. I was serving 70-80 users at once.
The only reason I upgraded it was that the hardware was running on borrowed time. I would have kept using it if I trusted the hardware to stay reliable, which I didn’t: that thing had close to 10 years of runtime on it (it ran CentOS 6) by the time I replaced it in 2019.
I migrated to an HP ProLiant MicroServer Gen8 with 12GB of RAM and a 2-core Celeron (which was way more power efficient). The only reason I chose it is that we already had it lying around after I retired it from duty as a virtualization NAS. Switched to 4x 2TB HDDs in RAID 10.
Nextcloud with 10 users will be peanuts. The 1700x is also overkill, but since you already have it, you might as well use it. Its TDP is about 95W, although that doesn’t translate directly into actual power consumption. Based on other people’s tests, idle should average around 25W, with a maximum average of about 40W in light workloads like web serving. The turbo on the 1700x helps here, because web serving is bursty: users load a page, then the server goes back to idling, especially since you don’t have thousands of users keeping it constantly loaded.
At full load you would probably see spikes around 140W, but with bursty workloads they won’t last long enough for you to notice. With a GPU, your idle might increase by about 30-40W, because today’s GPUs are power hogs. If you are going to do 4K encoding, you will probably want something beefier, like an RX 570 or so. But if you already have a GPU in that rig, you could keep it for encoding.
I see some people here and there who go by the rule that their old rig becomes their next server and they upgrade their current rig. I’d say the 1700x is efficient enough to justify not buying a new computer just for a home server. Of course, it is nowhere near ARM-level efficiency at idle (and idling is what most home servers do most of the time), but with 10 users and other stuff on the side, it should be fine.
Certainly cheaper than trying to split the workloads across multiple SBCs, like an Odroid M1 for Nextcloud, an N2+ for web development and something else for Jellyfin (encoding on-the-fly might be a stretch), although you do get resiliency (one breaks, the other two keep running). The idle power for 3 SBCs would be lower than any x86 CPU, at about 1.5W each for the whole package, but you give up compute power, because those ARM boards aren’t as efficient per watt, they just have low absolute power consumption (which is why I prefer them, but again, not for the faint of heart, and I would not recommend jumping all-in on ARM).
It is a bit early to talk about the workload split, but I would say that if you go the Ubuntu and LXD route, share all the resources and don’t pin each container to a fixed RAM or CPU allocation. The test and dev web servers won’t eat that much RAM (and if your web app really is that inefficient, you can always cap each one at 2 or 4GB of RAM via LXD just to be sure; a rough sketch of that is below).
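Purely as an illustration of how painless those caps are (the container name and the values here are made up, and it’s worth double-checking the config keys against the LXD docs for your version):

```
# create an unprivileged Ubuntu container for one of the web servers
lxc launch ubuntu:22.04 webdev

# optionally cap it at 2GB of RAM and 2 CPU cores
lxc config set webdev limits.memory 2GB
lxc config set webdev limits.cpu 2

# check what actually got applied
lxc config show webdev
```

You can add or remove these limits while the container is running, which is part of why I wouldn’t bother setting them up front.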
Supposedly you can pass a GPU through to a container (people do it all the time with OCI containers, and LXC shouldn’t be that much different, I’ve heard some success stories of GPUs on LXC), but I’ve also heard some headaches are involved, so for Jellyfin I’d just make a VM with 4 cores and 8GB of RAM and pass the GPU to it. It is the safer bet with these kinds of things. The vdisk for the VM can be 16 or 32GB; just mount the host’s media folder into the guest over NFS, something along the lines of the sketch below.
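A rough sketch of what that could look like with an LXD VM (the image, PCI address, export path and addresses are all placeholders, and full GPU passthrough to a VM also needs IOMMU/VT-d enabled in the BIOS, so treat this as a starting point rather than gospel):

```
# create the Jellyfin VM with 4 cores, 8GB of RAM and a 32GB root disk
lxc init ubuntu:22.04 jellyfin --vm -c limits.cpu=4 -c limits.memory=8GB
lxc config device override jellyfin root size=32GB

# pass the whole GPU through to the VM (PCI address is an example, check lspci on the host)
lxc config device add jellyfin gpu0 gpu pci=0000:0a:00.0
lxc start jellyfin

# on the host: export the media folder over NFS (path and subnet are examples)
echo "/tank/media 10.0.0.0/24(ro,no_subtree_check)" >> /etc/exports && exportfs -ra

# inside the VM: mount it where Jellyfin expects the library
mount -t nfs 10.0.0.1:/tank/media /mnt/media
```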
With NextCloud, the 2 web servers (those three as containers) and the Jellyfin VM, I would expect your RAM usage to land around 14GB worst case, leaving 2GB (assuming a max of 16GB) for the OS, which is plenty, even with ZFS. And that’s just guesstimating; in all honesty, I would be shocked to see all 3 containers go above a combined total of 1.5GB of RAM on a bad day (which, with the 8GB VM accounted for, would leave you with about 6.5GB for the OS and other containers or VMs).
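If RAM does get tight with ZFS in the picture, one knob worth knowing about is capping the ARC so the cache can’t eat everything; a minimal sketch for OpenZFS on Linux (the 4GiB figure is just an example, not a recommendation):

```
# cap the ZFS ARC at 4GiB (value is in bytes)
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf

# apply it immediately without rebooting
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```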
The CPU would absolutely be able to push more than that; you are more likely to hit a RAM limit with just 16GB, so if you plan on running more things, I’d go for 32GB just to be safe. But I think even the Jellyfin VM could be limited to 6GB of RAM and still be fine. The web servers won’t use more than 25% of one core each, and Nextcloud maybe 1 or 2 cores.
Jellyfin would be peanuts as well if you use GPU encoding. Speaking of which, I assumed you want to GPU transcode, but you do know that you don’t need to do that if you just pre-convert your media to h.264 and use native clients that can direct play it, right? (That’s a one-off conversion pass, roughly like the sketch after this paragraph.) The exception is if you are bandwidth limited on your other devices and need to lower the quality to watch. Even then, you can allocate 8 threads to the Jellyfin VM and it should do fine with CPU encoding, although it will increase your power consumption, especially if 3 or 4 people watch at the same time. With 10 people, you definitely want to avoid transcoding if you can, but if not, yeah, GPU accelerate it. That still leaves you with at least 6 threads that you can use on something else.
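If you do go the pre-convert route, it’s roughly one ffmpeg call per file (the quality settings here are purely illustrative; tune CRF/preset/audio to taste and loop it over the library):

```
# re-encode to h.264 video + AAC audio so most native clients can direct play it
ffmpeg -i input.mkv -c:v libx264 -crf 20 -preset medium -c:a aac -b:a 192k output.mkv
```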
I would still suggest you build an ARM backup server though. The RockPro64 4GB version is basically the ideal candidate: grab any SATA card (not the official one though) and the official case, and you get 2x 3.5" spinning rust for backup plus 2x 2.5" drives for whatever else. Frankly, you could realistically use the RockPro64 as the NAS, the Nextcloud server and the backup server all at once if you wanted to. The ideal OS for it would be FreeBSD, because ZFS is a first-class citizen there and the board is supported, although there should also be Ubuntu-based Armbian images for it if you are not comfortable with FreeBSD.
The reasons I’m suggesting it:
1) It gives you a low-stakes testing ground to dip your toes in
2) If the plan fails for any reason, it is still a perfectly serviceable backup server
3) Even as just a backup server, it is a low-power device on your network that can do things like waking the big box on demand via WOL (see the one-liner below) and so on, although you might not need that, given that you will probably want your server up 24/7 anyway
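For reference, waking the 1700x box from the SBC is just a magic packet (the MAC here is a placeholder, and WOL has to be enabled in the BIOS and on the NIC first):

```
# send a wake-on-LAN magic packet to the server's NIC
wakeonlan aa:bb:cc:dd:ee:ff
```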
The NAS would be served from the 2.5" SSDs, and the backups would live on the spinning rust. This way you even avoid network bottlenecks for backups, because with both pools in the same box, zfs send runs locally instead of over the LAN (rough sketch of the ZFS side below). The RK3399 is capable of doing all that. You’d just need to use jails for Nextcloud. I’m not that familiar with them, but jails are basically the predecessor of Linux containers (along with Solaris Zones).
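Purely as an illustration of that layout on the FreeBSD side (pool names, dataset names and device nodes are all made up, check what your disks actually enumerate as):

```
# mirrored pool on the two 3.5" disks for backups,
# mirrored pool on the two 2.5" SSDs for the live NAS/Nextcloud data
zpool create backup mirror /dev/ada0 /dev/ada1
zpool create tank mirror /dev/ada2 /dev/ada3

# periodically: snapshot the live data and replicate it to the backup pool,
# entirely locally, so the network never gets in the way
zfs snapshot -r tank/data@nightly
zfs send -R tank/data@nightly | zfs receive -uF backup/data
```

After the first full replication you would normally switch to incremental sends (zfs send -i old-snap tank/data@new-snap) so only the changes get copied.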
Yeah, I’d definitely invest in a backup server first, then use the 1700x as the home server for everything else.