Looking for input on a new homeserver

I am looking to upgrade my homeserver, and I am seeing used Xeon workstations on eBay for pretty cheap. I was hoping to get some input on any incompatibilities there may be, or any better suggestions. It has been a few years since I last researched building a server.

My current server is an old desktop with an i5, 8GB of RAM, a 128GB SSD boot drive, and a 2TB spinning hard drive. I am running Ubuntu Server with Nextcloud.

I am looking at a used HP Z440 with an E5-2683 and 64GB of RAM.
I was hoping to use M.2 SSDs with a PCIe riser adapter, but I have no experience with these.

My usage requirements would be as follows:

- Nextcloud
- 5+ TB of usable SSD storage
- Data redundancy (ZFS? I have used FreeNAS before)
- Web server for test websites and development

Features I would like but have not looked into enough yet:

- Remote video encoding / file conversion (FFmpeg?)
- Potentially a streaming server
- A hypervisor or a VM for testing or running multiple applications
- Might make a slower server with spinning drives as a backup

I am pretty comfortable using a server headless over SSH, so any video card I add would only be for remote encoding.

Any suggestions or things to look out for would be appreciated. Thanks.


AM4 5700G CPUs are pretty cheap ATM.

I only steer away from old gear these days due to power draw.

How much RAM do you think you would actually need? What's your budget?

They are just adapters; remember, M.2 is just PCIe. You will probably not be able to do 4 drives on one x16 slot, though, unless the board supports bifurcation, and most likely you would only be able to do one M.2/U.2 per slot.


I am not sure just how much RAM I will need. I do know that ZFS is pretty RAM hungry, and DDR4 ECC RAM is pretty cheap on eBay right now. Right now I am finding Nextcloud a bit slow, but looking at CPU, hard drive, and RAM, I don't see any one of them maxing out during light use.

Price range is $1000-$1500 USD. Cheaper would be better, but I don't want something slow.

Last I heard, FreeNAS or ZFS didn't have the best compatibility with Ryzen. It has been a while though, so things may have changed.


32GB sticks are still pretty pricey, no? Last I checked they were like $150 a pop (x4 = $600 in RAM).

I run FreeNAS in a VM on my Proxmox server with a 3900X and it's fine; I just pass through the disks. I use ZFS for my VM storage on Proxmox as well.
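For reference, passing whole disks through on Proxmox is just a `qm set` call per disk; a minimal sketch (the VM ID and disk IDs below are placeholders, use your own from /dev/disk/by-id):

```
# Find stable identifiers for the disks you want to hand to the VM
ls -l /dev/disk/by-id/ | grep -v part

# Attach each disk to the FreeNAS/TrueNAS VM (VM ID 100 and serials are examples)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD80EFAX-XXXXXXXX
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD80EFAX-YYYYYYYY
```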

As the current price for M.2 drives is not much higher than SATA, I would like to go with them if possible. I would like to be able to saturate my 1Gbps uplink, as I occasionally send out a link from my Nextcloud server.

The issue with M.2 is having the available PCIe lanes.


Maybe I am missing something, but I am seeing 32GB sticks of DDR4 ECC RAM for $45 on eBay.


Link?

I'll go look (maybe you are looking at registered sticks).
Yeah, probably registered sticks; for a mainstream platform you need unbuffered.

I did not realize there was a difference between registered and unbuffered. My account is too new to post a link.

I bumped your trust level; you should be able to post links now. But yes, there is a difference.

It turns out the RAM I was looking at was registered.

Welcome to the forum!

I would stay away from old servers because of power consumption. Also, I would definitely evaluate your needs carefully before you jump head-first into buying stuff. Enterprises always do a small study of their usage and requirements and buy servers based on their needs; not all businesses buy the biggest chungus EPYC server just because they can afford it. People at home should do the same.

For the OS, I would say stick with Ubuntu, since that is what you are used to. Just install libvirt and QEMU, and use virt-manager on another computer to manage the box. That's what I do.
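The setup is only a couple of packages on the server, and virt-manager can connect over SSH from your desktop; roughly like this (package names are the usual Ubuntu ones, and the user/hostname are placeholders):

```
# On the Ubuntu server
sudo apt install qemu-kvm libvirt-daemon-system
sudo adduser "$USER" libvirt          # let your user manage VMs without root

# On your desktop, point virt-manager at the server over SSH
virt-manager -c 'qemu+ssh://youruser@yourserver/system'
```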

I would say go for 2x Samsung QVO 8TB SSDs in RAID 1. No need to go overboard, you are a single user. Ubuntu has ZFS support on the desktop; the server image lacks it out of the box, although you should be able to just install the ZFS packages to get the commands (I think). Otherwise, just go with Proxmox.
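If you stay on Ubuntu, a two-SSD ZFS mirror is only a couple of commands. A rough sketch, with a made-up pool name and disk IDs (check the exact package name for your release):

```
sudo apt install zfsutils-linux        # ZFS userland + kernel module on Ubuntu

# Mirror the two 8TB SSDs (use your own /dev/disk/by-id paths)
sudo zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_AAAA \
    /dev/disk/by-id/ata-Samsung_SSD_870_QVO_8TB_BBBB

sudo zfs create -o compression=lz4 tank/nextcloud   # dataset for the Nextcloud data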

I am biased towards LXD, and Ubuntu can install it from the snap repo. With that added, you can create containers on your system instead of VMs: your Nextcloud server, a MySQL / MariaDB container, web server containers. You can also use the Ubuntu image for the containers, since that is what you are used to. That said, Proxmox has LXC built in; I find the Proxmox offering a bit lacking, although you can make it work.
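Getting started with LXD looks roughly like this (the container names and image alias are just examples):

```
sudo snap install lxd
sudo lxd init                      # accept the defaults, or point storage at your ZFS pool

# One container per service, all Ubuntu
lxc launch ubuntu:22.04 nextcloud
lxc launch ubuntu:22.04 mariadb
lxc launch ubuntu:22.04 webdev

lxc exec nextcloud -- bash         # shell into a container to set it up
```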

You might want to consider a VM and do a GPU passthrough. Intel GPUs are hot for their QuickSync encoding; I am not sure of the availability where you live, but I'd say go with AMD. If you plan on running Jellyfin, it supports Intel QuickSync (QSV), AMD AMF, and NVIDIA NVENC, as well as VA-API.
https://jellyfin.org/docs/general/administration/hardware-acceleration.html

As noted in the hardware acceleration page, QuickSync uses a forked version of VA-API, so that might give you some headaches. Go with any AMD GPU that has encoding capabilities, so an RX 500 series card, or an RX 6600 or newer.
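For the remote encoding / file conversion item, a VA-API transcode with FFmpeg looks roughly like this (the render node path and bitrate are examples; check `vainfo` for what your GPU actually supports):

```
# Hardware decode + hardware H.264 encode through VA-API (AMD or Intel)
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
    -i input.mkv -c:v h264_vaapi -b:v 6M -c:a copy output.mkv
```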

I think your old desktop should still do just fine; just upgrade to 16GB of DDR3. Web development isn't very resource intensive. I used to manage an infrastructure for web development on Tomcat, Wildfly (and old JBoss), WebLogic, and at some point we almost introduced Payara (a fork of Glassfish). Our VM specs? 4 cores + 16 / 32GB of RAM for the ones running the hoggiest of the web servers. We had about 20 users per instance or so; those were more like a pre-UAT environment. Normal VM specs for testing? 2 cores + 2 / 4GB of RAM. We had literally hundreds of those small VMs (we reached about 370 total VMs IIRC, but managed to cut them down to about 240; the VMs bigger than that were only in the low tens).

I would highly doubt you will need more than a 6-core 5600X, and even that I find a bit overkill, given that you will mostly use GPU acceleration for the encoding, which will be the most intensive workload by a long shot.

ECC is a bit overrated.

The idea that ZFS needs tons of RAM is not true, and it's unfortunate that it still prevails, especially for home usage. I will be running ZFS on a 4GB RockPro64 full time, because I know it will run fine. I didn't manage to get ZFS on my Odroid HC4, so that will be an experiment for another time. I would suggest you use something like a RockPro64 for your backup server: just get 2x 10TB spinning rust and do zfs-send from your main box to your backup box; backups will take minutes if you do incremental backups daily.
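The incremental zfs-send workflow is short enough to put in cron. A minimal sketch, assuming a dataset called tank/data on the main box and a pool called backup on the RockPro64 (names and dates are placeholders):

```
# On the main box: take today's snapshot
zfs snapshot -r tank/data@2024-06-02

# Send only the delta since yesterday's snapshot to the backup box over SSH
zfs send -R -i tank/data@2024-06-01 tank/data@2024-06-02 | \
    ssh backupbox zfs receive -F backup/data
```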

ZFS will definitely use as much RAM as you give it, but if the system requires RAM for something else, ZFS will just give the RAM back. People have run ZFS on Core 2 Duos with 2GB of RAM. You might need more RAM if you plan on doing deduplication, but even that shouldn't require a lot, something like 1GB per TB, so you could get away with 16GB of RAM for 16TB of storage (total, not usable, if I'm not mistaken). And the higher you go, the less RAM you proportionally need, really. 64GB of RAM for a 64TB NAS is a bit overkill even with dedup, unless you have tons of users and can make good use of the ARC.
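If you ever do want to stop ZFS from grabbing most of the RAM, the ARC can be capped with a module option. A sketch for Linux; the 8GiB figure is just an example:

```
# Cap the ZFS ARC at 8 GiB (value is in bytes)
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u    # so the option also applies if zfs loads from the initramfs

# Takes effect after a reboot; check with:
cat /sys/module/zfs/parameters/zfs_arc_max
```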

As for the server itself, if you really want ECC, I'd still go with Ryzen; just pick a motherboard that has been tested to work with ECC. My home server is a TR 1950X I bought last summer. I wish I could have gotten the 1900X (for less power consumption), but it was a steal from a fellow forum member. Just go for unbuffered memory, as Ryzen doesn't support registered ECC (neither does TR).

People nowadays really overspec their home servers. I get that you can buy cheap enterprise servers, but those tend to use more power, produce more heat, and make more noise than they're worth. In 2 years of continuous use, you will have paid the price difference of a lower-power box in electricity. I know, because I've used one. In all honesty, I wasn't even pushing my Xeon X3450 that far, but the power bill always killed me when that thing was on 24/7. Nowadays, if I don't have a need to have my TR on, I power it off too, even though its power consumption is small.


If you feel a bit adventurous, you could get a small 1Gbps switch and a few single-board computers. You can realistically run all but the Jellyfin server on 2 RockPro64s (1 NAS, 1 backup server) and 1 Odroid M1 (the multi-purpose container server). But ARM is surely not for the faint of heart, although it is cheap enough to get into that the purchase won't feel like a waste.

I would say that probably even an older Ryzen 3600 system with 32GB of RAM should be more than enough for what you need.


Thanks for the info. I will look more into it.

I am intending on having maybe a dozen active users who upload to the Nextcloud instance and up to 100 users who could get a link to download files.

I just realized I still have a Ryzen 1700X after a recent upgrade. Would that work well for power efficiency?


Should be more than adequate; that chip is not a power hog. If you have 16GB of RAM, it should be good enough.

You won't have 100 users downloading things at the same time, and even if you did, you would probably hit the upload bandwidth limit of your home connection before any other bottleneck (a 1Gbps uplink split across 100 simultaneous downloads is only about 1.25MB/s each, assuming the downloads don't finish almost instantly anyway).

I used to have a Samba server with about 2TB of mostly small files (tens to hundreds of MB, mostly Excel files) that ran on a Core 2 Duo with 8GB of RAM and two RAID 1 arrays of spinning rust. It was serving 70-80 users at once.

The only reason I upgraded it was that the hardware was running on borrowed time. I would have kept using it if I knew the hardware would stay reliable, which I wouldn't have trusted; that thing had probably close to 10 years of runtime (it ran CentOS 6). I replaced it in 2019.

I migrated to an HP ProLiant MicroServer Gen8 with 12GB of RAM and a 2-core Celeron (which was way more power efficient). The only reason I chose that is because we already had it lying around after I retired it from use as a virtualization NAS. I switched to 4x 2TB HDDs in RAID 10.

Nextcloud with 10 users will be peanuts. The 1700X is also overkill, but since you already have it, you can use that. Its TDP is about 95W, although that doesn't translate perfectly to power consumption. From other people's tests, idle should be about 25W on average, with a maximum average of about 40W in easy workloads, like web servers. The turbo on the 1700X can help you, as web serving is bursty: users load the pages, then the server idles, especially because you don't have thousands of users keeping the server loaded constantly.

At full load, you will probably see spikes of 140W, but bursty workloads mean they won't last long enough to notice. With a GPU, your idle might increase by about 30-40W, because today's GPUs are power hogs. If you are going to do 4K encoding, you will probably want something beefier, like an RX 570 or so. But if you already have a GPU for that rig, you could keep that for encoding.

I see some people here and there who go by the rule that their old rig becomes their next server when they upgrade their current rig. I'd say the 1700X is efficient enough to justify not buying a new computer just for a home server. Of course, it is not ARM-efficient at idle (and idling is what most home servers do most of the time), but with 10 users and other stuff on the side, it should be fine.

It is certainly cheaper than trying to split the workloads across multiple SBCs, like an Odroid M1 for Nextcloud, an N2+ for web development, and something else for Jellyfin (encoding on the fly might be a stretch), although you do get resiliency (one breaks, the other two keep standing). The idle power for 3 SBCs would be lower than any x86 CPU's, at about 1.5W each for the whole package, but you give up compute power, because ARM isn't necessarily more efficient per watt, it just has low absolute power consumption (which is why I prefer it, but again, it is not for the faint of heart, and I would not recommend jumping all-in on ARM).


It is a bit early to talk about the workload split, but I would say that if you go with the Ubuntu and LXD route, share all the resources and don't limit each container to a fixed RAM or CPU allocation. The test and dev web servers won't eat that much RAM (and if your web app really is that inefficient, you can certainly limit each one to 2 / 4GB of RAM via LXD, just to be sure; see the sketch below).
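For completeness, per-container limits in LXD are one-liners (the container name and values are examples):

```
# Cap a dev web server container if it misbehaves
lxc config set webdev limits.memory 4GB
lxc config set webdev limits.cpu 2
```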

Supposedly you can pass through a GPU to a container (people do it all the time with OCI containers, and LXC shouldn't be that much different; I have heard some success stories of GPUs on LXC), but I have also heard some headaches are involved, so for Jellyfin I'd just make a VM with 4 cores and 8GB of RAM and pass the GPU to it. It is a safer bet with these kinds of things. The vdisk for the VM can be a 16 / 32GB one; just mount the host's media folder via NFS in the guest VM.
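Sharing the host's media folder with the Jellyfin VM over NFS is only a couple of lines (the paths, hostname, and subnet are examples):

```
# On the host: export the media folder to the VM's network
echo "/tank/media 192.168.1.0/24(ro,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# In the guest VM's /etc/fstab:
# host.lan:/tank/media  /mnt/media  nfs  defaults,ro  0  0
```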

With Nextcloud, 2 web servers (these 3 as containers) and the Jellyfin VM, I would expect your RAM usage to be around 14GB, leaving 2GB (assuming a max of 16GB) for the OS, which is plenty, even with ZFS. And that's just guesstimating; in all honesty, I would be shocked to see all 3 containers go above a total of 1.5GB of RAM on a bad day, combined (leaving you with about 6.5GB for the OS and other containers or VMs).

The CPU would absolutely be able to push more than that; you are more likely to run into a RAM limitation with just 16GB, so if you plan on running more things, I'd go for 32GB just to be safe. But I think even the Jellyfin VM could be limited to 6GB of RAM and it should be fine. The web servers won't use more than 25% of one core each. Nextcloud could use maybe 1 or 2 cores.

Jellyfin would be peanuts as well if you use GPU encoding. Speaking of which, I assumed you want to GPU transcode, but you do know that you don't need to do that if you just pre-convert your media to H.264 and use native clients, right? Unless you are bandwidth limited on your other devices and need to lower the quality to watch media. And even so, you can allocate 8 threads to the Jellyfin VM and it should do fine with CPU encoding, although it will increase your power consumption, especially if 3 or 4 people watch at the same time. With 10 people, you definitely want to avoid transcoding if you can, but if not, yeah, GPU-accelerate it. That leaves you with at least 6 threads that you can use for something else.

I would still suggest you build an ARM backup server, though. The RockPro64 4GB version is basically the ideal candidate: just get any SATA card (not the official one, though) and the official case, and you can fit 2x 3.5" spinning rust for backup and 2x 2.5" drives for whatever else. Frankly, you could realistically use the RockPro64 as the NAS, the Nextcloud server, and the backup server if you want to. The ideal OS for it would be FreeBSD, because ZFS is a first-class citizen there and the board is supported, although there should be Ubuntu-based Armbian images for it if you are uncomfortable with FreeBSD.

The reasons for suggesting it:
1) It gives you a testing ground to dip your toes in
2) If the plan fails for any reason, it is still a perfectly serviceable backup server
3) Even as a backup server, it can serve as a low-power device on your network that can do things like on-demand power-on via WOL, although you might not need that, given that you will probably want your server on 24/7 anyway

The NAS would be served from the 2.5" SSDs, and the backup would reside on the spinning rust. This way, you can even avoid network bottlenecks when doing zfs-send. The RK3399 is capable of doing all that. You would just need to use jails for Nextcloud. I'm not that familiar with them, but they are basically the predecessor of containers (along with Solaris Zones).

Yeah, I'd definitely invest in a backup server first, then use the 1700X as the home server for everything else.


The workstation style of this Xeon machine is much quieter than the rack-mount server style. You can really load it up with the very cheap ECC RAM available on eBay. Better still, buy one with 256GB of RAM.

The board seems to have a couple of full-length slots and an open-ended x8 slot. You may need one for an NVIDIA GPU for your video streaming, one for a 10Gb NIC, and one for an HBA so you can use cheap SAS drives. Yes, you might want M.2 drives for VMs, certainly SSDs.

Either Proxmox or TrueNAS will do the job, or even both: Proxmox is the master of VMs and TrueNAS is the master of file serving.

  1. Using a pair of small second-hand Intel SATA SSDs, install Proxmox. Allow Proxmox the use of your big SSDs to create a pool for your VMs to use.

  2. Create a VM for TrueNAS Core and pass the controller for the big hard drives through to it via PCIe (a minimal sketch follows this list). Install TrueNAS on that VM and it should see the hard drives as real hardware.

  3. Log into the TrueNAS (formerly FreeNAS) GUI and create your file server pool and whatever else. You probably want an SMB share for Proxmox ISOs, backups, and templates, but not for the VMs themselves.
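A minimal sketch of the controller passthrough in step 2 (the VM ID and PCI address are placeholders; IOMMU has to be enabled in the BIOS and on the kernel command line first):

```
# Find the PCI address of the SATA/SAS controller
lspci | grep -i -e sata -e sas

# Pass the whole controller to the TrueNAS VM (VM ID 101 and 0000:03:00.0 as examples)
qm set 101 -hostpci0 0000:03:00.0
```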

I've currently got Proxmox and TrueNAS on separate servers, but I have had them all in one. The mistake I made was to export a dataset over iSCSI to Proxmox to use for a VM. It worked, but don't do that.

I have used old servers in the past because they are very powerful, reliable, and cheap, but they are too noisy. My current servers are all Ryzen-based in rack cases. Very quiet. A workstation is a good solution: cheap and powerful like a server, but quiet like a PC.


My servers are all Zen 1 and Zen+ systems; that's about all they're good for. The problem with AM4 is the lack of full-sized PCIe slots. I have a cheap MSI X470 motherboard with three x16 slots (not all wired for 16 lanes, though). This allows me to fit an HBA, a 10Gb SFP+ card, and a 4-port NIC. I chopped down a small GPU to PCIe x1 just for some video output.


I would avoid complicating the setup more than necessary. Here is where I differ in philosophy with many: instead of running classic VMs and literally wasting CPU and RAM, it is better to use containers whenever possible. Also, Proxmox does just what TrueNAS does, just without the fancy GUI, and OP said he is comfortable with the CLI, so I think having Proxmox serve as both the hypervisor and an NFS NAS for both the local network and the guest VMs should be fine.

Containers aren't perfect, but for home use you can cram so much more into a weaker machine, it's crazy. Also, I doubt OP needs 10G. It might be a nice bonus, or at least 2.5Gbps would be, but that means a lot of upgrades. Sure, enthusiasts can just throw money at the problem, but I think gigabit should serve well for now. I have run more demanding infrastructure on 2x 1Gbps LACP (or, when that wasn't available, balance-alb) and it held up really well.

The reason I say the upgrade may not be necessary is that the upload speed on a home connection would definitely be the bottleneck, so that 10G would just be a wasted resource, doing nothing. Even as I reconsider my router choice and whether to go with 2.5Gbps, I might just stay with gigabit (I am / was? planning to get a 2x 2.5GbE adapter for some active-standby shenanigans and maybe use the integrated NIC that currently runs as router-on-a-stick as a dedicated CARP or keepalived pipe).


Proxmox LXC containers are wonderful. You hardly need any resources.
I would not have Proxmox serve your files; just use a file server LXC if it's simple and you don't need all the fancy ZFS features on the file server.

Genuinely curious what your reasoning is, since that is what TrueNAS would be doing if it were running on bare metal. :slight_smile: Even as a VM with HDD passthrough, it still functions the same way; it's just a VM. IMO there is nothing inherently wrong with doing it this way.

Just like on TrueNAS, you create a ZFS dataset or volume, share it via NFS, and you're good to go. ZFS even has NFS sharing built in (although I still prefer the exports way; I should probably get used to the ZFS way).
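Both ways are roughly a one-liner each (the dataset name and subnet are examples):

```
# The ZFS way: let ZFS manage the NFS export
zfs set sharenfs="rw=@192.168.1.0/24" tank/shared

# The classic way: a plain /etc/exports entry for the dataset's mountpoint
echo "/tank/shared 192.168.1.0/24(rw,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra
```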

I prefer LXD, but that is just my biased choice, because I find Proxmox's LXC offering a bit lacking. LXD offers a lot more images, with more architectures and more versions of the same distros, to the point of being almost overwhelming if you don't know what you are looking for. I'm probably biased towards LXD because I'm used to its CLI, because I have it easily accessible in my distro's main repo, and because I mostly use ARM, so I barely touch Proxmox these days.

I am aware of PiMox, although I would rather just use the CLI. Besides, for PiMox you need to download aarch64 LXC containers anyway, so it is just easier to use LXD at that point. Just my $0.02.

Proxmox is a hypervisor. Yes, I know it's based on Debian, but you should not think of it as a general-purpose server any more than you think of an Xbox as a PC.
TrueNAS is primarily a file server with some VM features bolted on.