So the fedora systems are going to run as VMs inside your gentoo system??
did I read that correctly?
Yeah. I don’t want to reinstall the base system (Gentoo) but I want something easily supported by Kubeadm for the VMs (Fedora)
if you’re only using one machine, why not use plain old docker?
I just want to learn kubernetes. And I plan on having more machines once I get the moneys
You should probably start with minikube and get comfortable with kubernetes as a user of it, before you go and run your own cluster.
In a typical on-prem kubernetes setup, you’ll usually have an admin host or two to help bootstrap a cluster, or you’ll bootstrap one cluster off of another. You’re also unlikely to really experience what it takes to set one up until you start having to deal with remote storage, overlay networks, and gateways. Try minikube, get machines (could actually be 5-10 Debian VMs if you have the RAM and patience), and then look at the setting-it-up-from-scratch docs (start with an etcd cluster and so on).
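If it helps, the "start as a user of it" path is only a couple of commands. A minimal sketch, assuming minikube and kubectl are already installed (the kvm2 driver and the nginx deployment name are just examples):

```shell
# Spin up a local single-node cluster; pick the driver that fits your setup
minikube start --driver=kvm2

# Then just use it like any cluster
kubectl get nodes
kubectl create deployment nginx --image=nginx
kubectl get pods
```

Once that feels boring, tearing it down (`minikube delete`) and rebuilding the same thing with kubeadm on real VMs is where the actual learning starts.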
I don’t believe you. I am going to try this and return to either shame you as a liar or thank you endlessly.
I have been wanting to try this for a while.
x2 on kubeadm, though. I love it.
I hate the samba/ldap integration way… it gives crazy UIDs, and AD group & OU naming schemes weren’t designed (by most orgs, at the start) with linux integration in mind.
Here’s a sample auth line I use in my kickstart scripts:
auth --enableshadow --passalgo=sha512 --krb5realm=EXAMPLE.ORG --krb5kdc=*,EXAMPLE.ORG --krb5adminserver=EXAMPLE.ORG --enablekrb5
Assuming this scrub finishes without errors, my plan is to upgrade the box as is… then I’m not sure if I will try to upgrade directly to Fedora 28 or 29…
edit:
made it…
Of course zfs is broken after a distro upgrade…
modprobe zfs
modprobe: FATAL: Module zfs not found in directory /lib/modules/4.20.8-100.fc28.x86_64
dkms status
spl, 0.7.12, 4.20.8-100.fc28.x86_64, x86_64: installed
zfs, 0.7.12: added
Just found this in the install log…
Building initial module for 4.20.8-100.fc28.x86_64
Error! Bad return status for module build on kernel: 4.20.8-100.fc28.x86_64 (x86_64)
Consult /var/lib/dkms/zfs/0.7.12/build/make.log for more information.
warning: %post(zfs-dkms-0.7.12-1.fc28.noarch) scriptlet failed, exit status 10
Non-fatal POSTIN scriptlet failure in rpm package zfs-dkms
Non-fatal POSTIN scriptlet failure in rpm package zfs-dkms
Installing : zfs-0.7.12-1.fc28.x86_64
more errors in that make.log file…
CC [M] /var/lib/dkms/zfs/0.7.12/build/module/zfs/vdev_raidz_math_ssse3.o
CC [M] /var/lib/dkms/zfs/0.7.12/build/module/zfs/vdev_raidz_math_avx2.o
CC [M] /var/lib/dkms/zfs/0.7.12/build/module/zfs/vdev_raidz_math_avx512f.o
CC [M] /var/lib/dkms/zfs/0.7.12/build/module/zfs/vdev_raidz_math_avx512bw.o
LD [M] /var/lib/dkms/zfs/0.7.12/build/module/zfs/zfs.o
make[3]: *** [Makefile:1566: module/var/lib/dkms/zfs/0.7.12/build/module] Error 2
make[3]: Leaving directory ‘/usr/src/kernels/4.20.8-100.fc28.x86_64’
make[2]: *** [Makefile:27: modules] Error 2
make[2]: Leaving directory ‘/var/lib/dkms/zfs/0.7.12/build/module’
make[1]: *** [Makefile:739: all-recursive] Error 1
make[1]: Leaving directory ‘/var/lib/dkms/zfs/0.7.12/build’
make: *** [Makefile:608: all] Error 2
========
not going to mess with it too much. just going to download Fedora 29 directly and then install zfs again… then import my pool… then restore a few conf files
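For what it’s worth, the import step itself is short. A sketch, where `tank` stands in for whatever your pool is actually named:

```shell
# Scan attached devices for pools that can be imported
zpool import

# Import by name; -f may be needed since the pool was last active on the old install
zpool import -f tank

# Sanity-check pool health after the import
zpool status tank
```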
Any reason for not going with Ubuntu?
See my red hat?
I also have a lot of Fedora at home.
I can confirm those rails do fit on the L4500 case. I’ve got two of them. They’re pretty fiddly getting them mounted both on the case and in a rack, but they are nice high quality rails.
So after a fresh install… still errors…
[root@Storage ~]# zpool list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
[root@Storage ~]# modprobe zfs
modprobe: FATAL: Module zfs not found in directory /lib/modules/4.20.10-200.fc29.x86_64
I’m going to bed… not fighting this mess anymore today.
So… I’m still messing with this crap…
Found the solution… going to try the patch now.
I default to rhel world as well, but if ZoL is in the mix, I go with Ubuntu for the native support.
I understand though. You gotta keep up appearances.
There must be something missing.
Not having any luck with those steps yet…
ll /lib/modules/4.20.10-200.fc29.x86_64/
total 16940
-rw-r--r--. 1 root root 258 Feb 15 13:57 bls.conf
lrwxrwxrwx. 1 root root 40 Feb 15 13:57 build -> /usr/src/kernels/4.20.10-200.fc29.x86_64
-rw-r--r--. 1 root root 201371 Feb 15 13:56 config
drwxr-xr-x. 2 root root 42 Feb 22 22:28 extra
drwxr-xr-x. 14 root root 157 Feb 22 21:13 kernel
-rw-r--r--. 1 root root 1074886 Feb 22 22:28 modules.alias
-rw-r--r--. 1 root root 1048515 Feb 22 22:28 modules.alias.bin
-rw-r--r--. 1 root root 1692 Feb 15 13:57 modules.block
-rw-r--r--. 1 root root 8296 Feb 15 13:57 modules.builtin
-rw-r--r--. 1 root root 10721 Feb 22 22:28 modules.builtin.bin
-rw-r--r--. 1 root root 394505 Feb 22 22:28 modules.dep
-rw-r--r--. 1 root root 544122 Feb 22 22:28 modules.dep.bin
-rw-r--r--. 1 root root 363 Feb 22 22:28 modules.devname
-rw-r--r--. 1 root root 153 Feb 15 13:57 modules.drm
-rw-r--r--. 1 root root 69 Feb 15 13:57 modules.modesetting
-rw-r--r--. 1 root root 2667 Feb 15 13:57 modules.networking
-rw-r--r--. 1 root root 136744 Feb 15 13:57 modules.order
-rw-r--r--. 1 root root 562 Feb 22 22:28 modules.softdep
-rw-r--r--. 1 root root 452719 Feb 22 22:28 modules.symbols
-rw-r--r--. 1 root root 553036 Feb 22 22:28 modules.symbols.bin
lrwxrwxrwx. 1 root root 5 Feb 15 13:57 source -> build
-rw-------. 1 root root 4111527 Feb 15 13:56 System.map
drwxr-xr-x. 2 root root 6 Feb 15 13:54 updates
drwxr-xr-x. 2 root root 40 Feb 22 21:13 vdso
-rwxr-xr-x. 1 root root 8753352 Feb 15 13:57 vmlinuz
It’s not showing up… I guess I have to wait until the next zfs release… or go back a kernel…
I’m just going to stop tonight… for real this time.
Edit 1
Actually got it…
I imported my pool and now a disk is not showing up…
EDIT 2:
The steps are:
Make sure that you are running the kernel that you want to build the module for:
uname -srm
Linux 4.20.7-100.fc28.x86_64 x86_64
Reinstall zfs and spl packages to have a clean environment:
dnf reinstall zfs-dkms spl-dkms zfs
Download ubuntu’s zfs-linux tarball:
wget https://mirrors.edge.kernel.org/ubuntu/pool/main/z/zfs-linux/zfs-linux_0.7.12-1ubuntu5.debian.tar.xz
Extract the archive
tar xJf zfs-linux_0.7.12-1ubuntu5.debian.tar.xz
Enter the patches dir:
cd debian/patches
Apply the patch to the zfs source:
sudo patch -p1 /var/lib/dkms/zfs/0.7.12/source/include/zpios-ctl.h <3204-Add-4.20-timespec-compat-fix.patch
Remove the modules that dnf built upon installation of the package:
sudo dkms remove spl/0.7.12 --all
sudo dkms remove zfs/0.7.12 --all
Build the modules anew from the patched source code:
sudo dkms --force install spl/0.7.12
sudo dkms --force install zfs/0.7.12
Load the module: modprobe zfs
Now you should be able to restart the zfs services and mount your pool(s), or just reboot the machine to make the changes take effect.
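The steps above, strung together as one script sketch (same versions and paths as in the list; run from a scratch directory, and everything here assumes the 0.7.12 DKMS packages are what’s installed):

```shell
# Confirm the running kernel is the one the module will be built against
uname -srm

# Reinstall the packages to get a clean DKMS source tree
dnf reinstall zfs-dkms spl-dkms zfs

# Fetch and unpack Ubuntu's zfs-linux packaging, which carries the 4.20 timespec fix
wget https://mirrors.edge.kernel.org/ubuntu/pool/main/z/zfs-linux/zfs-linux_0.7.12-1ubuntu5.debian.tar.xz
tar xJf zfs-linux_0.7.12-1ubuntu5.debian.tar.xz
cd debian/patches

# Patch the zfs DKMS source in place
sudo patch -p1 /var/lib/dkms/zfs/0.7.12/source/include/zpios-ctl.h <3204-Add-4.20-timespec-compat-fix.patch

# Drop the modules dnf built at install time, then rebuild from the patched source
sudo dkms remove spl/0.7.12 --all
sudo dkms remove zfs/0.7.12 --all
sudo dkms --force install spl/0.7.12
sudo dkms --force install zfs/0.7.12

# Load the freshly built module
modprobe zfs
```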
I’m in the same boat. Was working on a jbod. Plugged it back in and half the drives are degraded or unavailable. Think this guy might be on the way out.
I think we’re all clear now…
Just have to restore a few config files
Are you still up?