First Home Server Build Advice!

Hey All,

I'm looking for some advice ahead of the arrival of my components.
I've wanted to build a home server for a while, but the plan got expedited by my photo storage costs on Google Drive.
So my initial use case is storing my family photos, but I'm also going to set up Jellyfin, and I'm sure my use cases will grow. All in all, I have decided to go with Proxmox.

I've got 2x Seagate IronWolf 4TB NAS internal hard drives to start, and I ended up going with a Topton 6-bay N100/i3-N305 unit with the included 32GB DDR5 and 1TB NVMe.

I'm thinking now that I should use those Seagate drives for the photos/videos and get another 2 for the Jellyfin video collection (where I'm less concerned about losing data).

What I'm looking for here is insight into what I should do for a boot drive: should I go ZFS and use the included 1TB NVMe as a caching drive? Is it a good idea to get 2 more drives for storage beyond my photos/videos? Also, what RAID type should I plan for with these 2 storage pools? Is it worth getting a 3rd drive for the photos and going RAID 5?

I'm also welcoming any other tips or advice ahead of this.
Thanks all!

I had the same concerns as you when I was building my Proxmox systems, and I am not experienced when it comes to automation (automatic backups etc.) or the terminal.

So I would opt for the easiest option: play to the strengths of the hypervisor and avoid unnecessary complications.

  • Boot drive and storage for containers / VMs:
    the 1TB NVMe
    This way the host OS boots really fast, and the default install creates a local storage for containers / VMs on it. VMs need all the IO they can get.

  • Storage: one big ZFS pool of multiple mirrored HDDs, with cheap SATA SSDs as cache (an NVMe would be a waste unless you are doing something really demanding like video editing off these drives).
    Advantages: high reliability and high performance, even when the pool is degraded; it can be expanded by throwing in another mirror pair, and it can also be grown by replacing the disks of a pair one by one with bigger HDDs.
    Disadvantage: you lose half of your total capacity. This configuration saved my ass though, because I started with cheap used desktop drives and one drive of each pair failed. I would have been boned with RAID-Z1, for example.
    I only do multiple pools if I have drives of different technology - I sort them into NVMe/PCIe SSD (hot), SATA SSD (warm) and mechanical HDD (cold) and name my pools accordingly (hottank, warmtank, coldtank).
    I would recommend enabling the following features for your ZFS pool: compression (lz4), thin provisioning and autoexpand - see the first sketch after this list.

  • Storing the files, networking and sharing data with the host / between VMs:
    I would recommend storing your data within a virtual disk of a VM; otherwise you will be working outside the feature set of Proxmox and have to set up backup scripts etc. manually for files stored directly on a ZFS dataset.
    You can create VMs which have a second virtual disk placed on your HDD pool/dataset just to store your data (see the second sketch after this list).
    Use a network service within the VM/container (Windows SMB sharing, NFS) to share the files with your other VMs, your PCs or even the host itself. As long as you are accessing VMs within the same Proxmox host / network bridge, you are running at bus speed. Make sure to select the paravirtualized network adapter option for your “fat” VM.

  • Backing up data
    You can easily add SMB / Windows shares in the Proxmox web interface and create backups on a target machine of your choice. The built-in backup is a bit lacking and can only manually back up one VM at a time, but it is a start. Proxmox also offers a dedicated backup server (Proxmox Backup Server) which has more features.
    I would advise using fat VMs for data storage, because then the Proxmox backup is block based, which takes a lot less time than backing up a huge LXC volume - that one is file based and slooow for anything above a few GB.
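To make the mirror-pool idea above concrete, here is a minimal sketch of creating such a pool and enabling the recommended options. The pool name (coldtank) and disk IDs are placeholders, not your real devices, and thin provisioning is set on the Proxmox storage entry rather than on the pool itself (see the second sketch).

```bash
# placeholder disk IDs - check `ls /dev/disk/by-id/` for your actual IronWolf serial numbers
zpool create -o ashift=12 coldtank mirror \
    /dev/disk/by-id/ata-ST4000VN006_SERIAL1 \
    /dev/disk/by-id/ata-ST4000VN006_SERIAL2

# the features recommended above
zfs set compression=lz4 coldtank      # cheap inline compression, inherited by child datasets
zpool set autoexpand=on coldtank      # lets the pool grow once both disks of a mirror are replaced with bigger ones
```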
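And a rough sketch of the "fat VM with a second virtual disk" idea: register the HDD pool as Proxmox storage, attach an extra disk to the VM and make sure it uses the virtio NIC. The storage ID, pool name, VM ID 101 and the 2048 GB size are assumptions for illustration; the same steps can be done in the web interface.

```bash
# register the HDD pool as a Proxmox storage for VM disks (sparse = thin provisioning)
pvesm add zfspool coldtank-vm --pool coldtank --content images,rootdir --sparse 1

# give an existing VM (ID 101 here) a second 2048 GB virtual disk on that storage
qm set 101 --scsi1 coldtank-vm:2048

# use the paravirtualized (virtio) network adapter on the default bridge for the "fat" file-server VM
qm set 101 --net0 virtio,bridge=vmbr0
```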

1 Like

Something like the Asustor Flashstor all-SSD NAS could easily satisfy your needs for something like $800-$1200 including drives - the only real drawbacks are the lack of ECC and the lack of affordable high-capacity (above 4TB/disk) SSD storage. For “just a home NAS” it fits the bill, but if you want to do some serious Docker stuff you might want to add an SBC (Raspberry Pi or clones) or a NUC.

1 Like

For Proxmox, I’d say use a single 256GB-or-smaller SSD and go with either ext4 or ZFS (it won’t make any difference; I prefer ext4 for a single drive since it’s more tested and stable in the Linux world, but ZFS should be fine as the root filesystem too). You don’t need ZFS caching, neither L2ARC nor a separate ZIL (SLOG) device. Those are features for enterprise users, for users running heavy workloads on their servers, or for many, many users. Literally more cost for no benefit.

Since you already have the 1TB SSD, use it both for the Proxmox boot and root partitions and for the VMs’ root volumes. Use the zpool for bulk data storage (a 2nd vdisk for VMs, or an SMB / NFS share from Proxmox).
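If you go the "share straight from Proxmox" route, a dataset plus an NFS export is roughly all it takes - a minimal sketch, assuming a pool called tank, an NFS server installed on the host, and a 192.168.1.0/24 LAN (all placeholders):

```bash
# bulk-data dataset on the HDD pool, exported over NFS from the Proxmox host
# (requires nfs-kernel-server on the host; names and subnet are placeholders)
zfs create tank/media
zfs set compression=lz4 tank/media
zfs set sharenfs="rw=@192.168.1.0/24" tank/media
```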

For 3 drives, no. Just go with a 2-disk mirror. 5 drives in RAID-Z1 is the sweet spot between capacity and redundancy IMO (and 6 for RAID-Z2, with less capacity but a bit more redundancy, which is overkill for most home labs anyway).

I’d say:

  • mirror for photos & personal videos
  • single disk for movies
  • backup server with a 2-disk mirror for important data and a single disk for unimportant data (like movies); the latter you can of course skip

If you’re feeling adventurous, you can buy an ODROID HC4 and use its 2 HDD slots for important data backups, plus a (powered) USB HDD for your video collection (or, you know… just slap the USB drive inside your main system and back up the movies there, without having to go through your network). If not, an ODROID H3 with the type-1 case should do (you could even run Proxmox Backup Server on the H3). That’s if you want to save some power by keeping your backup server and its spinning rust powered off instead of idling. It’s also technically more secure to use a different box, so if something catastrophic happens to the main server, the backup server stays intact (and powered off).

Do be mindful that if you want to expand in the future, you’ll want mirrors. You can have 2 disks in a mirror now and add 2 or 4 more disks, each pair as a 2-way mirror, striping all 4 / 6 of them together (note that your pool won’t be balanced, meaning existing data will still reside on the original mirror; if you delete data and copy it back, it gets spread across the vdevs again - for picture content it wouldn’t be a problem and you don’t need to worry about it being unbalanced).
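In ZFS terms, that expansion is a single command per new mirror pair - a sketch, with the pool name "tank" and the disk IDs as placeholders:

```bash
# stripe a second 2-disk mirror into the existing pool
zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# existing data stays on the original mirror; new writes favour the emptier vdev
zpool status tank
```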

Back to the backup server: if you want to save a buck, you can just run backups to a different pool that isn’t affected by the main pool. You can take snapshots and zfs-send them to the other pool, all locally, avoiding the network, which is nice. So you can basically ignore the HC4 / H3 if you don’t care about expanding with more drives (and instead expand later with higher-capacity drives).
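A minimal sketch of that local snapshot-and-send workflow, assuming a main pool "tank" and a backup pool "backuppool" (both names and the dataset are placeholders):

```bash
# initial full replication of the photos dataset to the backup pool
zfs snapshot tank/photos@2024-06-01
zfs send tank/photos@2024-06-01 | zfs recv backuppool/photos

# later runs only send the delta between two snapshots
zfs snapshot tank/photos@2024-07-01
zfs send -i tank/photos@2024-06-01 tank/photos@2024-07-01 | zfs recv backuppool/photos
```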

3 Likes

When using ZFS, you do not want hardware RAID underneath it (ZFS brings its own: RAID-Z :wink: ).

I ran into similar issues as you regarding NAS size. I am running a (janky) setup based on an ODROID H3 (2 HDDs local) and a 4x USB-HDD enclosure (6 drives total in two “pools”). BTRFS RAID1 for both “pools”. Works well enough.
The backup NAS is an ODROID HC4 with 2x 16TB drives.

I did not bother with cache drives, but my workflow is built around the NAS not being fast enough to edit off of, so lots of loooooooong file copies.

1 Like

I doubt caching would’ve helped you edit off of it. If you went with a main SSD pool and a secondary spinning rust pool that you copy data to, then maybe (also, does btrfs even have something as nifty as zfs-send?).

1 Like

It does – btrfs-send(8) — BTRFS documentation. But when I looked into it, it was a bit fussy (e.g. btrfs send from a Linux box to a Synology didn’t work because of btrfs version mismatches). However, that was just research; I haven’t had a chance to try it out and see if it’s (currently) still fussy. Likewise, I don’t know why it wouldn’t work in a ‘roll your own’ scenario.
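For reference, the btrfs equivalent of the zfs-send workflow looks roughly like this (paths are placeholders; the receiving side must also be btrfs):

```bash
# send a read-only snapshot of a subvolume to another btrfs filesystem
btrfs subvolume snapshot -r /mnt/data /mnt/data/@snap-2024-06-01
btrfs send /mnt/data/@snap-2024-06-01 | btrfs receive /mnt/backup

# incremental sends work too, given a common parent snapshot on both sides
# btrfs send -p /mnt/data/@snap-2024-06-01 /mnt/data/@snap-2024-07-01 | btrfs receive /mnt/backup
```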

/tangent, sorry OP :slightly_smiling_face:

1 Like

That was never the design goal, so I never even tried.

No idea. I just used BTRFS because having redundancy with it is a single line to configure. After running a storage cluster, I wanted simple.

1 Like

Hey @ThatGuyB

So the server is running, but I haven’t done anything except install Proxmox on the faster M.2.
I’ve added a cheap second M.2 and also got a 3rd 4TB IronWolf.
I have worked a bit with VMs in vSphere, but I have very little know-how about how environments like that are built. This is awesome though, because I love DevOps work and I’m eyeing it up.

That being said, you can see the storage in the screenshot, and now I’m wondering about next steps: do I make 2 storage pools, one with 2 of my disk drives in a mirror for PhotoPrism and then one for Jellyfin?

Once I do, am I spinning up a VM to host Jellyfin and another for PhotoPrism, and assigning them to the relevant storage pools?

Also, for the second 1TB NVMe, do you have suggestions for its use in this scenario, if not as a cache drive?

Thanks again for the advice. I know these are very noobish questions, but I feel like I’d be an idiot not to pick that brain of yours.

You click on the empty disks and press “Initialize Disk with GPT” (partition table). Create partitions of equal size and then make a zpool from those partitions.
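The CLI equivalent, if you prefer it over the web interface - a sketch with placeholder device names (double-check with lsblk before zapping anything):

```bash
# wipe and re-label both disks with GPT, one full-size partition each
sgdisk --zap-all /dev/sdb && sgdisk -n 1:0:0 /dev/sdb
sgdisk --zap-all /dev/sdc && sgdisk -n 1:0:0 /dev/sdc

# mirror pool built from the two partitions (ashift=12 for 4K-sector drives)
zpool create -o ashift=12 tank mirror /dev/sdb1 /dev/sdc1
```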

1 Like

I usually group the storage by speed (NVMe, SATA SSD, mechanical HDD) into different ZFS pools.

If you aren’t replacing the IronWolf 4TBs anytime soon and aren’t planning to extend or resize the pool, you could do a RAID-Z1; this would be the sweet spot between storage space efficiency and data loss prevention IMO. You cannot resize this pool, though.

Otherwise:
You could also do a pair of 2-disk mirrors, but then you’ll need another 4TB HDD. The advantage is that you can extend the pool (by adding another pair) or resize it (by replacing the HDDs in a pair with bigger ones, one after another). Even if an HDD in the pool fails, it should keep working at the same speed. You’ll lose 50% of your storage capacity with this layout.
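The two layouts side by side, as a sketch (pool name and disk IDs are placeholders):

```bash
# option 1: 3-disk RAID-Z1 - roughly 8 TB usable out of 3x 4 TB, but not resizable as noted above
zpool create tank raidz1 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3

# option 2: striped mirrors - needs a 4th disk, roughly 8 TB usable out of 4x 4 TB, grows pair by pair
zpool create tank mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
                  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
```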

1TB SSD:
If you have plenty of RAM (32GB and above), then I think a cache is not necessary - I would rather use it as VM storage. If you are hammering the 4TB disks, then I would try it as cache. A cache device can be added / removed without data loss, so you could repurpose it if this turns out not to be worth it.
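If you do want to experiment with the NVMe as cache, adding and removing it is non-destructive - a sketch with placeholder names:

```bash
# attach the spare NVMe as an L2ARC cache device
zpool add tank cache /dev/disk/by-id/nvme-YOUR_DRIVE_ID

# if it doesn't help, detach it again without any data loss and reuse it as VM storage
zpool remove tank /dev/disk/by-id/nvme-YOUR_DRIVE_ID
```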