Setting up a homelab like a masochist: why Fedora Server wasn't the best choice

Context

About a month ago I embarked on a new hobby: setting up a home server. Back then I made a post here about that endeavor.

This one:

In the replies I got a lot of advice on what I should try, and on what was good and bad about the work I had done so far.

Some of these suggestions were:

  • Try Proxmox
  • Don't run TrueNAS in a VM
  • Run TrueNAS in a VM
  • Don't mix drives with differing RPMs

etc…

Basically what it came down to was try a bunch of stuff and see what you like.

I didn't have a lot of time in the last month, but in that time I tried and experimented with a lot of the suggestions, and I eventually stuck with my original approach, even if in hindsight it's maybe the most difficult of the options I know of.

Trying alternatives

The first time I booted Proxmox I thought “wow, this is enterprise!”, but after experimenting with it that reaction became “wow, this is enterprise… I don't need it”. It's a good option to keep in mind for the future if I ever need to run clusters or something like that, but I don't need it. I am mostly going to use containers anyway, so I don't see the point in the hypervisor features. Also, Proxmox uses LXC containers, which I learnt don't fit my use cases very well.

As a test I tried TrueNAS in a VM on Proxmox, since I might as well, and yeah, it's great. It's easy; look at all those plugins and buttons. Except containers are still kind of an issue, and furthermore, well… it's boring. I hadn't considered it until this point, but with my first approach I was enjoying figuring stuff out and feeling the burn of having to work out why my Git container really likes to crash at startup.

I had also thought of trying OpenMediaVault, but that would have a lot of the same problems as TrueNAS.

So back to Fedora Server I went.

Fedora Server was a bad idea

I had an ISO from the previous attempt, but I thought it would be nicer to start fresh, since the last installation ended up a bit messy.

So with all the things I learnt previously, I had all the containers set up surprisingly quickly and a lot more cleanly. Next was to apply some of the feedback you all gave me.

LTS Kernel

The first step was getting some sort of stable base.
This should be easy; even Arch Linux provides alternative kernels in its repositories, so surely Fedora does as well… hm… the LTS kernel isn't there.

I eventually found someone uploading and updating LTS kernels in the COPR repositories (third-party repositories). It's a bit weird having to rely on an unofficial volunteer for my security updates, but he seems to be very on the ball and the repo was popular. I have more examples of this, like installing proprietary NVIDIA drivers or, god forbid, Flatpaks that aren't blessed by the Fedora team, but this one was, in my opinion, the most egregious.
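For anyone following along, enabling a kernel COPR is only a couple of commands. The repo and package names below are placeholders for illustration, not the exact ones I used; they depend on how the repo maintainer splits things up:

    # the COPR plugin ships with dnf-plugins-core
    sudo dnf install dnf-plugins-core
    # enable a hypothetical COPR that packages an LTS kernel
    sudo dnf copr enable someuser/kernel-longterm
    # install the kernel and its devel package from that repo
    sudo dnf install kernel-longterm kernel-longterm-devel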

A smidgen iffy but should be fine.

Installing ZFS

Next, and where I'm currently at: ZFS.
In my first post I read that it was heavily recommended and something I should use, so I found it in the official repos and installed it, only to find out that this version of ZFS is ancient. Ancient to the point that the outdated ZFS dashboard plugin I wanted to use considered it too old. I looked at the COPR repositories for packages, but they were all out-of-date experiments by randos, so that wasn't an option either.

That's when I did a bit of research and learnt that OpenZFS plays a dangerous licensing game of hokey-pokey with Oracle (them again…), which is why OpenZFS isn't packaged with Fedora. That's fine, but what made it the most annoying was that the official methods for installing ZFS on Fedora don't work or are outdated. I eventually had to compile it from source to get a functional installation.
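For reference, the from-source route looked roughly like this. It's a sketch from memory based on the generic OpenZFS build steps; the dependency list is approximate, so check the OpenZFS docs for the real one:

    # approximate build dependencies
    sudo dnf install gcc make autoconf automake libtool rpm-build \
        kernel-devel libuuid-devel libblkid-devel libtirpc-devel \
        openssl-devel zlib-devel
    # 2.2.6 was the latest stable release at the time
    tar xf zfs-2.2.6.tar.gz
    cd zfs-2.2.6
    ./configure            # release tarballs ship a ready-made configure script
    make -j"$(nproc)"
    sudo make install
    sudo depmod -a         # register the freshly built kernel modules
    sudo modprobe zfs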

COPR

I tried solving a lot of my issues through COPR, and while it sometimes worked, a lot of the time the state of packaging on COPR isn't very good. There are a lot of duplicate uploads and a lot of packages that aren't maintained. It's not like the AUR, which looks to be in a better state. Not that it's a bad experience overall; it's very easy to see which packages to trust, but it makes me a lot less comfortable relying on it, especially for packages like the bloody kernel. It wouldn't be so bad if I didn't have to turn to it as often as I have.

The Good

I have more complaints, but they boil down to more of the same: either it's not available or I have to jump through hoops to get it. However, if you play by Fedora's rules, it might be one of the smoothest Linux experiences out there. If… if you play by their rules, which is why it is so popular despite the shortcomings, I guess. The package manager is very feature-rich and easy to operate at the same time. The installation experience might be the nicest out there, and the installer's package configurations are genuinely useful and not full of bs. The feeling of polish comes through even in the terminal interface. There is literally no weirdness, until, that is, you start to try some things.

Summary on Fedora Server

In practice, post-install Fedora is a frustrating distro to use. Arch and Gentoo might be harder to install, but once installed they're straightforward, especially if you have a decent amount of Linux experience. It basically boils down to packaging.

With Arch and Gentoo the packaging is far more lenient with licenses than Fedora's is. They also keep generally useful alternatives like GPU drivers and alternative kernels in their main repositories, making them a lot more flexible. Fedora doesn't like giving easy access to alternatives, it seems. You have to play by the rules.

Sticking with Fedora?

Which is why it's strange that I'm sticking with it. I looked for alternatives, but as I said, I enjoy the pain to a certain extent, and I feel like I learn a lot more from this than if I were to go with something like Ubuntu Server.

I was going to ask about ZFS configurations and what VPN to use, but this ended up more like a blog post. Not sure if people are interested in what I have to write about, but I guess I'll see.

I'll split my questions into another post.

Most people will have a good time with complex technology as long as they’re following the beaten path.
Fedora is advertised and designed to be a distro that adopts change quickly. This is very well supported, and while it seems painful to upgrade kernels basically weekly, it has been very reliable for me for 15+ years.
The idea of choosing an LTS kernel from a COPR repo feels wrong to me on several levels. If you want/need a slowly changing system, go to Debian.

Follow Fedora — OpenZFS documentation
Avoid ZFS for root and you'll have no issues.

In my experience, Fedora is very stable as long as you’re sticking to the main repos accessed by the package manager “DNF”.
I generally turn off Flatpak support and avoid COPR repos on servers. I learned to trust the RPM Fusion repo for desktops (NVIDIA, CUDA, all the media support that Fedora doesn't want to ship out of the box).

Yep, that's it. You make it sound dystopian, like you have to follow the rules, but similar rules exist for any complex piece of software. You need to learn what the rules are and what the cost is of deviating from them.

Why? I run ZFS on root on Fedora and it works just fine.

Sorry - I didn’t mean to say that it doesn’t work. IMHO it’s too complicated for what it offers. I stated my KISS assessment.

The reason behind it: because of the delicate dance between the OpenZFS project and the kernel, it sometimes takes a little while before a new kernel version is supported by OpenZFS.
With Fedora adopting new kernel versions quickly, I found myself several times in the situation where upgrading on Fedora's schedule would have rendered my ZFS pool inaccessible until the next release of OpenZFS.

The solution is to stay on the latest working version and update with the dnf parameter “-x kernel*”.
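Spelled out, that looks something like the following; the versionlock plugin is an optional extra if you'd rather pin the kernel packages more permanently (package name as I recall it on Fedora):

    # upgrade everything except kernel packages
    sudo dnf upgrade -x 'kernel*'
    # optional: pin the currently installed kernel packages instead
    sudo dnf install python3-dnf-plugin-versionlock
    sudo dnf versionlock add 'kernel*'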

That, of course, can still work with the root filesystem on ZFS, but I feel that avoiding root on ZFS guards against the machine not being able to boot.
Again - not technically necessary, but I didn't want to deal with it.

The main reason I went with an LTS kernel is that Fedora moves too fast for OpenZFS, especially since I need to compile it from their download site. Otherwise I wouldn't have gone with an install from COPR either. That's, I guess, what I get for learning as I go.

Also, I have tried the exact same instructions you posted; they don't work with Fedora 41. The ZFS repo link doesn't match up with their actual repos, and the repos themselves are out of date as of now. I had to get the source files from their official download site. It's as much a Fedora problem as a ZFS problem. I think… unless I did something stupid, but I literally tried copying and pasting the commands, even modifying the link to the git repos to make it work, but even when the RPM downloads, it fails when I try to install it. I get a “not an rpm” error, and I wasn't going to learn the RPM spec well enough to troubleshoot that.

Edit: you describe the same experience in your follow-up comment, although yours relies on already having the older kernel versions installed.

It's not my intention to sound doomery :sweat_smile:. It was merely an observation, and I should probably tone down the dramatic flair a little bit. I'm a DIY-distro user, so that's my perspective. Also, yeah, you're right, I learnt the rules the hard way :smiley:.

Do you have another suggestion, other than Debian, for a slower release cycle that uses systemd (otherwise I would've gone with Void)? My experience with Debian is painful because it's the opposite extreme, and I really don't like using Debian for that reason. Something with the package cycle of Gentoo would be ideal, but I'm not going to use Gentoo because of uptime.

The reason I want to stick with it is mostly because I don't want to start over and I want to actually have something working. Although maybe I should listen to my gut and find something else.

or so you think…

By design, TrueNAS is an end-all, be-all storage appliance. That's their niche.

This is like finding the crystal meth tweaker least likely to steal your bike.

Debian is stability refined.

Fedora is bleeding-edge Linux. New hardware compatibility and features supersede things like uptime or stability.

Love it for that, but not a good foundation on which to build anything mission critical.

this is gonna get fuckey

You do see the irony of having to compile code and push it into prod in an effort to maintain data integrity?

Stockholm syndrome

If you can swing it:

  • Have a dedicated NAS for data integrity / backups.
  • Run this server for all your shenanigans.
  • Take regular backups and practice restoring, so when an update bricks the install you can recover.

But above all: have fun.

If you want ZFS on Fedora 41, you're just going to have to wait. OpenZFS just isn't compatible with kernel 6.11 or Fedora 41 yet. You just need to download Fedora 40, be ready to stay on it for a while, and use this upgrade command in the terminal:

    dnf update --exclude=kernel*

Other than that, it’s easy peasy lemon squeezy.

You sort of did. ZFS master, which you likely downloaded from GitHub, is not something anyone should be using for anything but testing and development. The reason those commands did not work is that ZFS is not compatible with Fedora 41 and kernel 6.11 just yet. It will be soon-ish; they're working on it, and you can track the progress on GitHub. The best bet is to download Fedora 40, which comes with kernel 6.8, and then download ZFS the right way.

edit: All that said, if you want stability and ZFS, either run Ubuntu, or go all in and run TrueNAS. IMO, that makes the most sense for a home server.

Ouch. You just arrived at the perfect time to fall through the cracks. Hmm, I just installed ZFS on an F40 machine yesterday. Maybe that's a way to enter Fedora, if you still want to…

I mentioned Debian as the example of slow upgrade cycles. I think almost any other distribution will fall somewhere in between.
I started using Fedora around version FC9 (it used to be called Fedora Core). All the stuff you're learning painfully now, I had a couple of years to learn painfully myself. That's why I feel comfortable providing guidance on Fedora.
I have limited experience with other distros, and as you know, there are many reasons to choose a distro other than technical ones.

If you're still in the market, I'd give Alpine a look - it's designed to be small, and it's quite fast as a result. It's commonly used as a base image for containers, but I find it enjoyable for server-type (CLI-type) installs.

Also, I am in the process of evolving my home lab from a server-based architecture into a service-type architecture, and I am in the process of learning painful lessons around Proxmox after getting fed up with TrueNAS. :slight_smile:

I didn't like TrueNAS's Docker system, so I went for Debian with Docker and ZFS. I'm using Cockpit as a web interface to monitor things, similar to how the web interface for TrueNAS works.

It has been stable, and figuring stuff out without too many layers of abstraction has been nice. I also like having some applications, like Plex, running directly, so GPU encoding is easy to do (virtual machines can be problematic).

Thanks for all of the replies.

“Crystal meth tweaker” - that gave me a good laugh, but as I explained in a previous post, I really don't like Debian as it's the opposite extreme. I end up doing a lot of the same things as I am doing with Fedora, but in the opposite direction.

I didn't use the GitHub master version; I know enough about software development to know that's a bad idea. I'm using the latest stable release from their official download site rather than GitHub, version 2.2.6. It's still source code, but at least not a development build.

Thanks for the suggestion, but Alpine uses OpenRC as its init system. There are a lot of really good minimalist options out there, but most of the viable alternatives I know of use either runit or OpenRC as their init system. Not that there's anything wrong with that, but I would like to leverage some of the big systemd features to make my life easier and to not start from zero again if I am switching distro.

The only alternative I can think of is Arch Linux, but I am not insane. The other alternatives I know of are Nix, but Nix is a rabbit hole I am not mentally prepared for and not really built as a server distro, and Gentoo with systemd, but that's a source distro.

Edit: I guess it would be useful to create a list of requirements.

My list of requirements:

  • Latest LTS packaging: basically the newest versions of the long-term support releases, at least for the core of the OS like the kernel, init system, etc.
  • Flexible enough: multiple versions of the kernel and other crucial packages available.
  • Good support, so popular enough.
  • systemd, although I am slowly changing my mind on this, since it really seems to be the biggest damper on a lot of good options.

This is not meant to be critical, more curious: why do you want all the newest versions of stuff? For a NAS or server I would think something a little older but solid and stable would be enough.

Only for my desktop would I go for up-to-date stuff like Fedora.

@nutral It's a fair point to make. It comes from experience using point-release distros in the past; I have had them break because of the age of their packages as much as really fast-moving distros have. It's fine if you plan on only doing NAS things with your server, but I'm not only going to do NAS things. I'm also going to use it for things like offloading compile workloads and running game servers.

The server is very overkill for just a NAS :P. It would be a waste if I just used it for that.

Ah, I see! I agree if you want to do more stuff than that. It is the same for my use case: I have a NAS running Debian and an “application” server running Proxmox.

I spin up virtual machines for specific things like game servers or CI workloads. But it is also running all my home server stuff in about 32 Docker containers inside separate LXC containers, so a rogue application can't take all the CPU cores/memory of my home automation stuff.
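For anyone wondering what those limits look like: on Proxmox, capping an LXC container is just a couple of settings. The container ID and values below are made up for illustration:

    # cap a hypothetical container 105 to 2 cores and 2 GiB of RAM
    pct set 105 --cores 2 --memory 2048
    # the same limits end up in /etc/pve/lxc/105.conf as:
    #   cores: 2
    #   memory: 2048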

But this is already 4 years into my home server story. I started on a single Ubuntu Server with Docker, and that worked pretty well.

Maybe I should use Proxmox after all :sweat_smile:.

I could run a simple, stable base system and put VMs and containers on top of it, but compiling in VMs doesn't always work well.

Edit: I did a bit of research (still learning). Compiling in a VM should mostly be fine. I have had it go wrong before, but that's probably because I was trying to cross-compile for an uncommon architecture.
