Looking for input on a new homeserver

Thank you for all the input. I think I will need to research Proxmox vs LXD some more. For my use case, I think I might be better off with containers compared to a VM. Once I better understand my use case, I can build a server for my needs. The type of transcoding I am interested in is more like what YouTube does to an uploaded video: taking 1080p and rendering it down to 720p, 480p, etc. Nextcloud does have a plugin for this, but since I am currently running the snap version I can't enable it on my current setup.


With new hardware coming out, I would actually consider an AM5 system for your use case. Now, this would be an investment, but hear me out:

  • The Gigabyte B650E AORUS MASTER offers 4 NVMe slots on the board itself. This means you can easily fit a few 4TB sticks in there…
  • … And that same motherboard allows you to install a quad-NVMe PCIe card, giving you a total of at least 8 NVMe drives…
  • … And still have two physical x16 slots running PCIe 4.0, one wired for x4 lanes and one for x2
  • The AM5 platform will allow you to upgrade to a bigger, badder CPU, hopefully letting you run up to Zen 6 CPUs.
  • All AM5 CPUs come with integrated graphics → Headless is a breeze
  • The jury is still out on VFIO, but it looks like it will be great even on B650

Quite frankly, this seems like a crazy good server platform to me; the only question is the price, which is admittedly at a premium right now. Here is a suggestion for a high-performing home server, though personally I would probably skip the HDDs and put it in a 1U case, but that's just me…

AM5 home server

Type | Item | Price
CPU | AMD Ryzen 5 7600X | $299.00
CPU Cooler | Noctua NH-L9x65 SE-AM4 | $59.95
Motherboard | Gigabyte B650E AORUS MASTER | $349.99
Memory | Corsair Vengeance 2x32 GB 5200 MHz CL40 | $279.99
Storage | Crucial P3 Plus 4 TB NVMe | $399.00
Storage | Crucial P3 Plus 4 TB NVMe | $399.00
Storage | Crucial P3 Plus 4 TB NVMe | $399.00
Storage | Crucial P3 Plus 4 TB NVMe | $399.00
Storage | Seagate IronWolf Pro 18 TB 3.5" HDD | $299.99
Storage | Seagate IronWolf Pro 18 TB 3.5" HDD | $299.99
Case | Fractal Design Meshify C | $113.98
Power Supply | SeaSonic FOCUS GX 550W | $82.03
Total | | $3380.92

I would wait until January to pull the trigger on this system, but this is just some food for thought on what is coming up. :slight_smile:

Hey, I posted an update today about a TrueNAS hardware upgrade I made, in this thread here

I managed to source a used Lenovo P510 workstation (usually waay cheaper than used Dell or HP)

€ 280 (tax included)

  • Lenovo P510 workstation (came with a Xeon E5-1603 v4 + 16 GB 2400 ECC RAM, Lenovo-branded but the chips are Hynix)
  • SK Hynix 256 GB SATA SSD for the boot drive
  • 650 W Gold PSU

and added these upgrades

€ 65 (tax included, AliExpress)

  • Xeon E5-2650 v4
  • 16 GB SK Hynix 2400 ECC

Also in it is a 2 GB PNY Quadro P400, which is assigned to my Plex containers to do the transcoding; it can also be used for encoding.

The HP Z440 with an E5-2683 and 64 GB of RAM sounds like a pretty sweet deal; it'd be nice if it's a v4 :slight_smile:

I was picking between a Dell and a Lenovo… what I like about the Lenovo P510 case is the drive bays; it can easily fit 4x 8 TB (or more)

The PCIe lane math on this didn’t look quite right, so I looked through the manual (and kudos to Gigabyte for having a block diagram): two of the M.2 slots only function as an x8 split from the PCIe5 x16 slot.

So that allows for a total of "only" 6 PCIe5 NVMe drives: 4 on board + 2 on a card, or 2 on board + 4 on a card. There aren't enough lanes on the platform to support 8 full-speed drives.


To sanity-check this in general (a quick lane tally is sketched after this list):

  • AM5 CPU provides 24 PCIe5 lanes
    • boards may run some of these lanes at PCIe4 for non-E chipsets
  • B650 provides 8 PCIe4 lanes, sharing a single PCIe4 x4 uplink
    • and 4 PCIe3 lanes or 4 SATA ports (or combination in units of 2)
  • X670 provides 12 PCIe4 lanes, sharing a single PCIe4 x4 uplink
    • and 8 PCIe3 lanes or 8 SATA ports (or combination in units of 2)

Most boards are going to use some of the chipset PCIe lanes to provide things like networking.
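
To make that arithmetic concrete, here is a minimal sketch (Python). The lane counts come from the list above, and the slot wiring reflects my reading of the block diagram, so treat the constants as assumptions rather than a spec sheet:

```python
# Back-of-the-envelope lane tally for the B650E AORUS MASTER claim above.
# Numbers follow the list in the post; slot wiring is assumed from the
# block diagram -- an illustration, not a spec sheet.

CPU_LANES = 24            # usable PCIe 5.0 lanes from the AM5 CPU
X16_SLOT = 16             # primary slot; shares its lanes with two M.2 slots
DEDICATED_M2 = 2          # M.2 slots wired straight to the CPU
LANES_PER_DRIVE = 4       # a full-speed NVMe drive wants x4

# The budget has to add up: 16 slot lanes + 2 dedicated x4 M.2 = 24 lanes.
assert X16_SLOT + DEDICATED_M2 * LANES_PER_DRIVE == CPU_LANES

# Option A: keep the two shared M.2 slots (x8 split from the slot) and put
# a 2-drive riser in the remaining x8.
option_a = DEDICATED_M2 + (8 // LANES_PER_DRIVE) + (8 // LANES_PER_DRIVE)

# Option B: give the whole x16 to a quad-NVMe riser and give up the shared M.2.
option_b = DEDICATED_M2 + (X16_SLOT // LANES_PER_DRIVE)

print(option_a, option_b)  # -> 6 6, matching the "only 6 full-speed drives"
```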


Hmmm… That does put a slight damper on the plans, though in theory someone could make an x8 PCIe 5.0 card that breaks out into 4x PCIe 4.0 x4, and rumor has it that 5.0 NVMe drives will move to x2 lanes. Of course, such a setup would require x2/x2/x2/x2 bifurcation from the motherboard, and I'm not sure the motherboard market supports that yet.

Still, 6 potential NVMe spots is more than enough for many home servers. I still see it as a viable strategy.

On a related note, it will be interesting to see what the ITX market has to offer this time around. For ITX, B650 offers more than can possibly fit on a regular ITX board… Will they make more backside slots, perhaps? I could definitely see a B650 Master in an SFFTime P-ATX case as a viable home server alternative. :slight_smile:

PCIe NVMe risers themselves are basically just PCBs; there isn't much to go wrong. The issue lies with the motherboard: if it was made before NVMe came into play, the odds are good the BIOS doesn't support booting from NVMe devices. You can still use them as storage, though. I modded NVMe modules into the BIOS on my system so it could boot from NVMe, but whether that's possible on a given system sounded VERY platform-dependent. I imagine it would be even weirder on server motherboards.
In general, I'm not a fan of mixing shared storage and virtualization on a single box. If you go that route, decide which way your Matryoshka dolls are going to stack: whether the storage server will be a VM on the hypervisor, or the VM platform will reside on the storage server. The former may not be possible to do properly, depending on your motherboard's hardware passthrough features, as you want the storage server to have direct access to the drives it will be using. The latter is easier, but somewhat clunky.


I agree about going with a more modern server CPU, rather than old enterprise gear from Dell/HP. I currently use old enterprise gear (have an HP ML350 and a Dell T630). I’m replacing that with custom-built servers using eBay-sourced EPYC CPUs and a modern mobo.

If my calculations are right, the new server should cut energy consumption in half. I reckon that will save me USD $80 per month on electricity at current energy prices (I live on an island with a diesel power plant, so I'm being shafted atm). That said, the logic now applies to everyone: while a custom build will bring your budget up by USD $500-1000, in the long term you're saving money.
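
For anyone wanting to run the same payback math, here is a rough sketch; the wattage and tariff figures are purely illustrative assumptions chosen to land near the quoted $80/month, not numbers from the post:

```python
# Illustrative payback math only -- the draw and tariff figures below are
# assumptions picked for the example, not measurements from the post.

OLD_DRAW_W = 500            # hypothetical combined draw of the two old servers
NEW_DRAW_W = 250            # "cut energy consumption in half"
TARIFF_USD_PER_KWH = 0.45   # assumed island / diesel-grid electricity price
HOURS_PER_MONTH = 24 * 30

saved_kwh = (OLD_DRAW_W - NEW_DRAW_W) / 1000 * HOURS_PER_MONTH
saved_usd = saved_kwh * TARIFF_USD_PER_KWH
print(f"~{saved_kwh:.0f} kWh and ~${saved_usd:.0f} saved per month")  # ~180 kWh, ~$81

# Even a rough estimate like this shows how a $500-1000 build premium can pay
# for itself within a year or two at high electricity prices.
```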


I think I might start with making a backup server. Just curious, why do you recommend not getting the official SATA card?

I am also looking at having the main server run on an SSD, with a local hard drive taking daily or hourly snapshots. It would be a lot cheaper and make it easier to roll back if needed. I would think it would also sidestep the wear-leveling concerns of SSDs in RAID 1.

I am thinking that I need to find out why my Nextcloud is currently running slow, because the hardware it's on should be faster. It seems slow to wake up over the network when I use the web interface, but when I use SSH to access it, the server wakes up much quicker. I think if I build the backup server before putting it into service as a backup, I can experiment with Nextcloud on it.

Because the official Pine64 SATA card, for the longest time, didn't even work on the board. It would work on x86 or other SBCs, but not on the RockPro64 for whatever reason. I don't know if that has been fixed yet. Just last week I bought an x4 ASMedia card from StarTech and it worked out of the box (more details in my Journey into SBCs thread, towards the bottom, in the October posts).

I remember you were planning on running ZFS on it, right? I don't remember which distro supports it out of the box. FreeBSD works OOTB; ZFS is a first-class citizen there. Armbian Ubuntu might build zfs-dkms. I had issues with Armbian Debian and ZFS, and haven't tried official Debian + ZFS (yes, Debian officially supports the RockPro64 via the minimal installer). I managed to build zfs-dkms on Void, but it is hard if you don't know what you are doing and where to look (some of the tools needed to build ZFS were absent, so I had to download them manually).

That said, you could run NextCloud in a jail on FreeBSD, but migration might not be as straightforward as moving from one distro to another or from one Docker host to another.

You have a few users on NextCloud - are you using a DB like MySQL or Postgres, or are you just using something like SQLite? If you are using SQLite, that could be the problem.

Not sure what you mean by this. Is the web interface slow, but SSH fast? What do you mean by "wake" - as in loading? Or is the server sometimes down, and launching it from a GUI (like, idk, Portainer) is slow, while launching it via SSH is fast?

First off, the Nextcloud I am using is the Ubuntu snap. It was supposed to be a "test" server to see if Nextcloud was going to be useful; it has ended up being in use for over a year now. What I mean by slow is that if you open the web interface, it likes to pretend there is no web server for a couple of minutes. If I open SSH in a terminal, it wakes up and the website instantly responds. The web UI has always been a bit slow with 1 user, and transfers on the LAN were slower than expected.

I have used ZFS with FreeNAS for another project in the past. Not sure whether something like Btrfs with frequent snapshots wouldn't be better if I am only running a few drives and not a large RAID-style array.

I have tried using ownCloud in a jail before, but the experience was pretty bad as it was a pre-made image that was not properly updated. I never tried setting up my own jail, though.

For distros, I have mostly used Ubuntu for server usage but run Arch on my laptop, so I wouldn't be opposed to using a similar distro if it works well as a server.

Ok, that could explain some of the weird behavior. I know snaps are easy to set and forget, but they are probably among the worst ways you could set it up, at least IMO. You gain easy setup, but lose the ability to customize. Even Docker would be better than snap in this regard.

I would say to find a way to back up your Nextcloud instance; most of the data must be residing in the snap's squashfs container, I think. Get your data out and safe, then set up Ubuntu, Debian, or whatever you are comfortable with on the RockPro64 and just set up Docker / Portainer or LXD (the non-snap versions). LXD gives you the traditional Linux management style: just exec into a Linux image and set up Nextcloud, nginx, and a DB, plus an SSH server if needed. Then you can just transfer the data over to it. Docker should be somewhat similar in that regard. Both allow you to transfer the offline container image somewhere else and launch it on another system, making migration easy.

I am biased; I wish other people would respond here. LearnLinuxTV has some good NextCloud tutorials, one updated about 2 months ago. Unfortunately, I don't know how you'd get the data out of the snap; if Nextcloud has a backup utility built in, that'd be nice, otherwise you would probably need to unsquashfs the image and grab the data from there.
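
As a starting point, here is a minimal sketch of pulling the files and config out of a snap-based install before migrating. The paths are assumptions about where the snap keeps its writable data, so verify them on your own box (and put the instance in maintenance mode) before relying on this:

```python
"""Minimal sketch of copying the user data out of a snap-based Nextcloud
install before migrating. The paths below are assumptions about where the
snap keeps its writable data -- verify them on your own system first, and
stop writes (maintenance mode) before copying so files aren't changing."""

import tarfile
from datetime import date
from pathlib import Path

# Assumed locations; adjust to whatever actually exists on your system.
SOURCES = [
    Path("/var/snap/nextcloud/common/nextcloud/data"),     # user files
    Path("/var/snap/nextcloud/current/nextcloud/config"),  # config.php etc.
]
DEST = Path(f"/mnt/backup/nextcloud-export-{date.today()}.tar.gz")

with tarfile.open(DEST, "w:gz") as tar:
    for src in SOURCES:
        if src.exists():
            tar.add(src, arcname=src.name)  # keep each tree under its own name
        else:
            print(f"skipping missing path: {src}")

print(f"wrote {DEST}")
# The database (users, shares, metadata) lives separately and needs its own
# dump with whatever DB the snap bundles; this only grabs files and config.
```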

ZFS runs just as well on a single disk, and you still get its snapshot capabilities and zfs send / receive. Both ZFS and Btrfs have filesystem compression, but I'd go with ZFS and zstd; again, my bias shows. This also makes backups via zfs send much easier if you are running ZFS on the other device too.
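
Since the earlier question was about a local disk taking daily or hourly snapshots, here is a minimal sketch of that snapshot-plus-send loop, wrapping the standard zfs CLI from Python. The dataset names are placeholders and the error handling is deliberately thin:

```python
"""Tiny sketch of the snapshot-and-send routine described above, the kind of
thing you'd drop into cron. Pool/dataset names ("tank/nextcloud",
"backup/nextcloud") are placeholders -- substitute your own, and note that
the very first run needs a full (non-incremental) send."""

import subprocess
from datetime import datetime

DATASET = "tank/nextcloud"    # hypothetical source dataset
TARGET = "backup/nextcloud"   # hypothetical dataset on the backup disk

def zfs(*args: str) -> str:
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

# 1. Take a timestamped snapshot of the live dataset.
snap = f"{DATASET}@auto-{datetime.now():%Y%m%d-%H%M}"
zfs("snapshot", snap)

# 2. Find the previous auto snapshot to send incrementally from.
snaps = [s for s in zfs("list", "-H", "-t", "snapshot", "-o", "name",
                        "-s", "creation", DATASET).splitlines()
         if "@auto-" in s]
prev = snaps[-2] if len(snaps) > 1 else None

# 3. Pipe `zfs send -i prev snap` into `zfs recv` on the backup pool.
send = ["zfs", "send"] + (["-i", prev] if prev else []) + [snap]
recv = ["zfs", "recv", "-F", TARGET]
p1 = subprocess.Popen(send, stdout=subprocess.PIPE)
subprocess.run(recv, stdin=p1.stdout, check=True)
p1.wait()
```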

You lose redundancy, which I would recommend on any system that is supposed to run 24/7, but at least you get the snapshot backups. Unfortunately, without at least a RAID mirror or parity to pull extra information from, ZFS scrubs (and probably Btrfs scrubs too) can only flag data whose checksum doesn't match; they can't repair it.

The official RockPro64 case has slots for 2x 3.5" drives and 2x 2.5" drives, which I think should be used. I personally find redundancy more important than ECC for data integrity: RAM corruption happens rarely, but disk corruption happens more often. And without redundancy to cross-check which copy matches the checksum, you cannot recover that data during scrubbing. Just my $0.02.

It's a real shame, but the electricity these well-built, perfectly usable old machines use costs more than replacing them with more expensive hardware.

I did set up a machine to only run at night on cheaper electricity. It would turn on via the BIOS in the late evening and switch off with a crontab entry in the early morning. It did the job well as a backup server.

I installed NextCloud using a snap and it eventually stopped working. NextCloud is as easy to install on a web server as WordPress, so there is no need to use a snap.

Watch out with those - that's what put me off buying a T630 - Dell disabled bifurcation (which is necessary for those cheap M.2 PCIe riser cards to work) in the BIOS, so it wouldn't work. Do your research on whether it will work before buying.

ZFS uses all the RAM it can get, but also works fine with only 8 GB system memory.

