Suggestions for a home everything server. What are some good directions to take?

Introduction

Hi, I recently bought a workstation kind of on a whim, with the express intent of turning it into a home server.

I had been thinking about doing it for a while, and seeing that my bank account was looking a bit too sunny, I set out to buy a cheap computer for a server to bring it back down to a slightly more comfortably cynical level.

And so I bought this:

The specs as of now:

  • CPU: Xeon E5-2699 v3, 18 cores / 36 threads @ 2.3 GHz, boost up to 3.6 GHz
  • RAM: 128 GB DDR4-2400 ECC
  • GPU: NVS 510 (an absolute potato)
  • Networking: built-in 1 Gbit
  • Storage: 600 GB SATA SSD boot drive and currently 1× 4 TB HDD

In general, for the price I paid it was really cheap (I think…) for what I got, but sadly I only thought about power consumption after the fact, when a sysadmin-in-training friend of mine pointed it out.

That blunder aside, I have gotten quite far on my own learning how to set it up for my use cases. I already have some experience with containerization and virtual machines on my local machines for testing purposes. I'm not an expert, but I know enough to use, create and configure them. I also have a good amount of Linux experience. I barely have any experience with servers, however.

Where I am now

What I currently have set up is Fedora Server 40 as the base OS, with Jellyfin and Git running as Podman containers.

Through research and trying things out, I've grown a liking to running the containers as rootless Quadlets (Podman's sort-of-but-not-really equivalent of Docker Compose, tied to systemd). Each container, and the set of functions related to it, is set up under a different user account with its own restrictions and permissions.
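For reference, a minimal sketch of what one of these looks like (the image, port and paths here are placeholders, not my actual setup):

```
# ~/.config/containers/systemd/jellyfin.container (rootless, per-user quadlet)
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/jellyfin.container <<'EOF'
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
Volume=%h/media:/media:Z
PublishPort=8096:8096

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# Quadlet generates jellyfin.service from the .container file on reload
systemctl --user daemon-reload
systemctl --user start jellyfin.service
```

After that, systemd treats the container like any other service under that user.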

I got here kind of organically and I'm quite happy with where I've gotten so far, but I've reached the point where I'm not sure what to do next, what options are available, and which ones I should go with.

My questions

  • Is it a good idea to mix mirrored pairs of drives of the same size but different brands/models?

The server currently has a 4 TB Barracuda, but for data-safety reasons I would want some redundancy. I have a Toshiba 4 TB drive in my main system that I could mirror with it, but despite the two having the same capacity they're completely different: one is (I think) a 5400 RPM compute drive and the other a 7200 RPM consumer drive.
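From what I've read, mixing models in a mirror should just work, with the pair running at the speed of the slower drive. I assume the ZFS version of it would look something like this (the by-id names are placeholders for the real serials); please correct me if I'm wrong:

```
# Sketch: mirror two 4 TB drives of different models into one ZFS pool.
# /dev/disk/by-id paths survive device renumbering across reboots.
zpool create tank mirror \
  /dev/disk/by-id/ata-ST4000DM004-XXXXXXXX \
  /dev/disk/by-id/ata-TOSHIBA-XXXXXXXX

zpool status tank   # should show a single mirror-0 vdev with both disks
```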

  • What is the difference between ZFS and LVM2+XFS in features and functionality, and which file system/volume manager should I use in general?

I've seen a lot of people talk about ZFS as a file system on these forums, and I've known for a while that ZFS is the filesystem most people use for their NASes, but I never stopped to look at what it does exactly. Fedora Server 40 came with LVM2+XFS by default. I've read up on both, and I know that ZFS and XFS have very different priorities, but LVM2 adds to XFS a lot of the features that ZFS has. Still, I feel like I'm not seeing the full picture of what exactly the differences between the two are.
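To make my confusion concrete: as far as I can tell, both stacks do snapshots for example, just at different layers. This is my untested understanding from the docs (pool, volume-group and dataset names are made up):

```
# ZFS: snapshots, rollback and integrity checks are filesystem features
zfs snapshot tank/data@before-upgrade
zfs rollback tank/data@before-upgrade
zpool scrub tank    # walks every block and verifies its checksum

# LVM2+XFS: snapshots live in the volume layer, underneath the filesystem
lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data
```

The checksum/scrub part seems to be the bit LVM2+XFS doesn't really replicate, as far as I can tell.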

  • Is it worth it to get a dedicated NAS OS? Do NAS OSes provide features that a Linux server distro can't, or is it just an ease-of-use thing with a purpose-built management GUI?

I had been thinking of adding TrueNAS in a VM to handle the NAS portion of the server, but I stumbled on the fact that a lot of the features are already in Linux or Fedora Server. I had the naive understanding that for NAS stuff you needed a NAS OS, but now I realize it might not be necessary. Is it still a good idea to use TrueNAS for the NAS stuff, or even to set it up as my base OS?

  • I am using Cockpit to manage my containers, but because of my configuration the Podman containers section shows no containers unless I log in as one of the accounts to see the containers running under that specific user. This isn't major, but is there a way around it?

Again, this isn't a disaster. I'm used to the command line, but having a good overview is nice at a glance and for reacting quickly to issues. I would at the very least like to be able to monitor all the containers from one interface.
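My current stopgap is a shell loop over the service accounts (the user names here are examples), but I'd still prefer a single pane of glass:

```
# List every service user's rootless containers from one root shell
for u in jellyfin git; do
  echo "== $u =="
  sudo machinectl shell "$u@" /usr/bin/podman ps --format '{{.Names}} {{.Status}}'
done
```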

  • What are the pros and cons of my approach of containerization using rootless Quadlets compared to the alternatives?

I like this solution because it means I can lock down features per user, and the tight integration with systemd means I can manage everything as if it were part of the OS. The added segmentation is also nice, as it means I can have separate containers, features, rules, restrictions and access per user. It does come with a couple of downsides, like a lack of overview and some inconvenience in managing the containers. Since I'm new to this whole server thing, though, I was wondering what others with more experience think of what I'm doing.
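One operational detail I ran into, in case it helps anyone judge the approach: the per-user services only start at boot if lingering is enabled for each account (user names are examples again):

```
# Without lingering, a user's quadlet services stop when the user logs out
# and don't start at boot
sudo loginctl enable-linger jellyfin
sudo loginctl enable-linger git
```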

  • What is Proxmox exactly? Is it some kind of virtualization-management underlayer? Is it a good idea to use?

I have seen people talking about it on the forums, but honestly never knew such a thing existed. Is it a good idea, or is what I have good enough? I know there are a lot of ways to skin a cat, but I was wondering why people would pick Proxmox over other solutions. I'm also not sure what the difference is between Proxmox and something like TrueNAS.

  • Is the NVS 510 too much of a potato for transcoding?

I didn't look at the GPU specs when I bought it, figuring that for transcoding even iGPUs are usually fine. But when I looked at the specs after the fact, I realised it doesn't even have GDDR memory; it uses DDR3. What also stung was that it doesn't have NVENC. I'm going to stick with it for now, but is it better to just do software transcoding, considering the specs of my CPU?
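Before I decide, I figure I can just benchmark a CPU-only transcode; something like this should show whether it keeps up (the file name is a placeholder):

```
# CPU-only transcode test: 'speed=1x' or better in ffmpeg's progress output
# means the CPU keeps up with real-time playback
ffmpeg -i sample-movie.mkv -c:v libx264 -preset veryfast -c:a copy -f null -
```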

Closing thoughts

I probably have more questions to come as I learn more and get to try more things. I don't mind getting dirty with the command line and configuration files, so highly configured and custom solutions to my questions are welcome. It helps that I have experience in software development and in daily-driving minimal Linux installs.

It's been fun overall, tinkering with this ridiculously overkill (for me) machine I bought. I probably need to figure out a way to shut it down on a timer at night because of its power consumption. I haven't measured it yet, but I've read somewhere that hardware like this uses a lot even at idle; I'll get there when I get there. I have some ideas on how to do it, and the house has solar panels.
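My current idea, untested, is a systemd timer plus rtcwake so it can power itself back on in the morning (the times are placeholders, and it assumes the board's RTC wake actually works):

```
# /etc/systemd/system/nightly-off.service (run all of this as root)
cat > /etc/systemd/system/nightly-off.service <<'EOF'
[Unit]
Description=Power down overnight, wake via RTC alarm

[Service]
Type=oneshot
ExecStart=/usr/sbin/rtcwake -m off --date 07:00
EOF

cat > /etc/systemd/system/nightly-off.timer <<'EOF'
[Unit]
Description=Trigger the nightly power-down

[Timer]
OnCalendar=*-*-* 01:00:00

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now nightly-off.timer
```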

I'm looking forward to all of your suggestions.

Cheers.


Welcome to a comfortable addiction, you’re gonna love it…

Sure!

but not like this…

You’ll be slowing down the fast drive to 5400 RPM

and skip the hardware RAID that's built into that machine, which it looks like you're already doing…

ZFS is an enterprise-ready solution for maintaining data integrity.

XFS is a journaling file system, comparable to ext3.

ZFS is the de facto standard for maintaining data integrity.

Off the top of my head, I cannot think of anything you'd want to do to your data drive that cannot easily be done with ZFS.
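The day-to-day stuff is all one-liners, for example (pool, dataset and host names here are just examples):

```
zfs set compression=lz4 tank/data    # transparent compression
zfs snapshot tank/data@nightly       # instant snapshot
zpool scrub tank                     # verify every block against its checksum
# replication to another box is a pipe:
zfs send tank/data@nightly | ssh backupbox zfs recv backup/data
```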

Let’s not do that, TrueNAS is what you want for a file server with VM’s.

If you don’t have one already, I’d strongly recommend investing in one. Your current Z440 would be the ideal candidate. But be warned: TrueNAS is a NAS OS first and foremost, ProxMox is a VM hypervisor first and foremost.

TrueNAS, or anything handling ZFS, needs block-level access and thus requires direct hardware access. There are guys here virtualizing TrueNAS, but it is strongly advised to run it as the bare-metal OS.

Fedora is not for prod; it's sweet for testing and homelab use.

Debian server is solid for hosting an internal-facing web server.

And Ubuntu is… generally not well liked outside of Reddit, due to their continued corporate decisions regarding data privacy and their general straying from the Linux ethos of lots of small, simple utilities that each serve a given purpose well.

The con of containerization is the shared kernel with the host OS.
No matter what you do to "secure" it, the foundation is fundamentally impossible to truly secure. EVERY container solution discloses this in the fine print.

Only full virtual machines with dedicated roots can be secured.

You can easily spin up a VM container host on any hypervisor and be good to go.

Hypervisor, and voted most likely to replace VMware ESXi.

Not if you’re transcoding 144p streams…

I use a similar machine with dual CPUs and GPUs as my primary work workstation; it won't be overkill for long.

lurk more and use the search function

There’s a wealth of knowledge here from professionals in the field to homelab lunatics virtualizing everything you can imagine.

P.S. good first post


Amazing piece of hw (in 2014). Still very interesting for homelabs.

Haven’t we all …

I think you did the hard thing of figuring out the core technology first. Chapeau!

Here are some suggestions:

  1. Go down the apps rabbit hole. That is usually the exciting bit for most homelabbers. Try apps like Portainer, Pi-hole, Jellyfin, Home Assistant, GitLab, Watchtower, Vaultwarden, Frigate, etc. If none of these names ring a bell, don't worry. Google them or search on this (or another) forum. You'll be amazed.
  2. Go down the virtualization rabbit hole. You figured out containers with Podman. Yay! Next in line are KVM, LXC, K8s, etc. (see the KVM sketch after this list). You seem to be driven by technology, so maybe not the worst next step. A serious shortcut is to switch from your trusty Fedora to another OS that offers virtualization. Proxmox and TrueNAS Scale would be first on my mind, but Unraid and others are good, too.
    I like Fedora because it's stable and continuously supplied with the latest tech. Proxmox and/or TrueNAS are probably better suited to run a homelab. (Tip: run Fedora in a VM on either to keep going.)
  3. Go down the alternative processing unit rabbit hole. GPU, NPU, APU, etc…
    hw assisted video transcoding (jellyfin/plex)? check!
    hw assisted AI detection (frigate)? check!
    hw assisted 3D apps virtualized (Looking Glass)? check!
  4. Go down the storage rabbit hole. I'm putting this one last on purpose. I haven't seen anything in your post that says you need a lot of storage. It's quite a rabbit hole and potentially expensive.
    First, find out what you need (want) and let that guide you. 99% of needs are likely covered by a single large NVMe drive.
    You asked about file systems. Awesome. Start reading… about ext2/FAT … then journaling file systems (NTFS, ext3/4, XFS, JFS, etc.) … copy-on-write file systems (btrfs, ZFS, etc.) … then how to manage file systems across multiple devices (mdraid, LVM, etc.) … then how to manage file systems across multiple machines (Ceph, GFS2, OCFS, GPFS, CSV, etc.).
    Still with me? Try out the ones that didn't bore you to death or that you find an actual need for. Learn about partitioning (GPT, MSDOS, mac, aix, bsd, etc.) so you don't have to buy a bunch of hw to try things out.
    Or figure out which storage devices you actually want to try: HDD, SSD, SATA, SAS, NVMe, Optane, tape drives, floppy, etc.
    Remember when I said it was a rabbit hole? Figure out what you need/want first!
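Since you're already on Fedora, a minimal KVM starting point for item 2 might look like this (the ISO path and OS variant are placeholders; adjust to taste):

```
# Fedora: install the virtualization stack, then spin up a throwaway VM
sudo dnf install @virtualization
sudo systemctl enable --now libvirtd
sudo virt-install --name fedora-test \
  --memory 4096 --vcpus 4 \
  --disk size=20 \
  --cdrom ~/Downloads/Fedora-Server-dvd.iso \
  --os-variant fedora-unknown
```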

With that attitude the sky is the limit.

In a non-demeaning way, I give you the advice I give my kids:
Stay safe!
Have fun!


NAS OSes are typically based on a Linux server distro; e.g. both Proxmox and TrueNAS Scale are based on Debian.

I see the NAS OSes as providing additional layers above the (already quite complicated) worlds of e.g. ZFS and KVM/qemu/libvirt. They look good, but they complicate things and cause problems, in my worldview (which probably differs from most people's in this respect ¯\_(ツ)_/¯).


Hey, thanks for the replies.

From the replies in general, I'm getting the sense that I should just try different things.

I'm going to save what I've got now as an image and, per your suggestions, try the following:

  • TrueNAS Scale with all my stuff on top
  • Proxmox with all my stuff on top
  • carrying on with my current solution to see where I end up and what I can do with it (although I can do that from within the other solutions too)

I'm going to save them as separate images so I can come back to them when I feel like I can make a decision. Thanks @jode for pointing out that I can just run what I have on top of the other solutions.


I’ve been running my NAS in a VM for over a decade and it works very well…

You’re correct with:

Absolutely agree. I pass through an HBA to give the VM direct access to the drive controller; I've been using an Adaptec 71605 for the last 4 years, after a few iterations with various LSI HBAs.
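For anyone replicating this on plain libvirt, the passthrough boils down to handing the VM the HBA's PCI address. A rough sketch (the address and VM name below are placeholders; find yours with lspci):

```
lspci -nn | grep -i 'adaptec\|sas'   # note the HBA's PCI address, e.g. 05:00.0

cat > hba.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# takes effect at the VM's next boot; 'truenas' is whatever the VM is called
virsh attach-device truenas hba.xml --config
```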

The structural advantage of running the NAS inside a VM is that the "network" connection between the NAS and the VMs is massively fast, so along with benefits like being able to easily experiment with alternate ZFS solutions, upgrades, etc., you also get the ability to back your other VMs with effectively local-speed storage via iSCSI or NFS.
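Concretely, backing another VM (or the host) with that storage is just a mount over the internal bridge; a sketch with made-up IP and paths:

```
# From the host (or another VM) on the internal bridge:
sudo mount -t nfs 192.168.122.10:/mnt/tank/vmstore /var/lib/libvirt/images/vmstore
```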

For the underlying server OS, I've got RHEL (the free Red Hat Developer Subscription for Individuals, which includes updates) on RAID-1 on a pair of NVMe drives, and I use ZFS on Linux to manage a mirror pool on the remainder of those two NVMe drives for the local native storage needed (the TrueNAS Scale VM, install ISOs, and some base host images to clone). The majority of my VM storage comes out of the TrueNAS VM via a local bridged network (i.e. all inside the bare-metal host, not through the actual network).
