Fedora Server 31 Recovery

So, I have run into a little issue on a server running Fedora Server 31 that I recently built at home. I have only used it to run Docker so far. If you are not familiar with Fedora Server, it has a nice web GUI, which is one of the reasons I chose it. That, and I usually run Fedora Desktop on one of my other machines.

I noticed earlier today that my root file system partition was completely full. The OS is installed on a 256 GB SSD, but this partition was only allocated 15 GB automatically. I started shutting down the few containers that I had installed to see what was going on. I clicked back over to the Storage section in the GUI to see if any space on the partition had been freed, which it had. I noticed the option to rename the partition at the top of the screen and decided to change the name to something that made more sense, so I renamed it and restarted the server.

Now it won’t boot; it goes into emergency mode. If I hit Ctrl+D, it says “Not all disks have been found.” and “You might want to regenerate your initramfs.” I’ve tried to find instructions on how to do this but haven’t found anything that works, because they all assume the system has already booted into a kernel. I tried running “dracut --regenerate-all --force” but it doesn’t recognize the dracut command.

I checked the GRUB boot parameters by pressing “e” on the kernel entry at boot, and it shows that it’s still looking for the old partition name.

I’ve tried using a live USB to boot into Rescue Mode, but I can’t mount the partition to be able to edit anything. Choosing option “1” to automount leads to a “Pane is dead” message. If I choose option “3” to skip the automount, I can get to a prompt and use fdisk to list the drives and volumes, but I still can’t get the one I need mounted.

Is this a lost cause or can I actually get this thing back to where it will boot without having to reinstall from scratch?

You might want to check that the UUIDs listed in both /boot/grub2/grub.cfg and /etc/fstab match your expected devices, especially now that you’ve made changes (renaming) to your block device.
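
For example, once you can see the installed system from a live USB or rescue shell, something like this would show whether they line up (these paths assume the installed root is mounted at /mnt and are the usual Fedora locations; adjust to your layout):

    sudo blkid                                            # UUIDs and labels the devices actually report
    sudo grep -v '^#' /mnt/etc/fstab                      # what the installed system expects to mount
    sudo grep 'root=' /mnt/boot/grub2/grub.cfg            # BIOS installs
    sudo grep 'root=' /mnt/boot/efi/EFI/fedora/grub.cfg   # UEFI installs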

Did you use LVM to install your partitions, or did you do it manually?

I didn’t set any of the partitions up manually. I only have one SSD in the system, so I let it partition everything automatically during installation. Bulk storage is coming from my FreeNAS via NFS.

But did Fedora set up LVM automatically?

It should have, according to the documentation, and that’s what I see them labeled as when I run “fdisk -l” from the console in rescue mode.

So in GRUB you can boot with kernel parameters; I would suggest emergency or rescue mode.
https://www.gnu.org/software/grub/manual/grub/grub.html
Search for kernel parameters.
If you are able to boot into either, you can start your repair.
https://gnu.huihoo.org/grub-0.90/html_chapter/grub_12.html
It will get you into an early boot and drop you into a single console with no tty, single-user root only.
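
In case it helps, on a systemd-based Fedora that usually means pressing “e” on the boot entry in GRUB, finding the line that starts with “linux” (or “linuxefi”), appending a target at the end, and booting with Ctrl+X. The kernel version and existing options below are placeholders, not your actual line:

    linuxefi /vmlinuz-5.3.7-301.fc31.x86_64 root=UUID=... ro rhgb quiet systemd.unit=rescue.target
    # or, for the most minimal early shell:
    linuxefi /vmlinuz-5.3.7-301.fc31.x86_64 root=UUID=... systemd.unit=emergency.target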

If you changed the label or name of the partition, it’s possible you changed the filesystem label or other attributes that the boot configuration depends on.

(I’m trying to do this from Firefox on an Android)

By partition name, do you mean a UUID?
You can temporarily bypass this by listing the /dev/ location of the drive.
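
For example, in the GRUB edit screen the change might look something like this; the “fedora” volume group name is the installer default and an assumption here, so check what lvscan reports before relying on it:

    # before (illustrative only):
    linuxefi /vmlinuz-... root=UUID=1234abcd-... ro rhgb quiet
    # after, pointing straight at the device-mapper node and leaving any rd.lvm.lv= options alone:
    linuxefi /vmlinuz-... root=/dev/mapper/fedora-root ro rhgb quiet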

@Knight26 - I had a similar experience on a CentOS box and resolved it as follows. This isn’t a cookbook recipe since your situation is different, but it may help you find a way forward.

Not to belabor the obvious, but if you have important data that isn’t backed up, back it up as early in this process as possible. You could even do some sort of disk copy or clone before starting, perhaps using dd. Working at this level, there is significant risk of unrecoverable loss.
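
As a rough illustration, a whole-disk clone from the Live USB could look like this; the device and destination paths are hypothetical, so confirm them with lsblk first and make sure the destination has at least as much free space as the SSD:

    lsblk                                   # identify the system SSD before doing anything
    sudo dd if=/dev/sda of=/mnt/backup/ssd.img bs=4M status=progress conv=sync,noerror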

But wait! After typing most of what follows, I realized you might be able to simply change the partition name back to what it was originally, working in the Live USB Linux. Googling “linux partition name” turns up info on how to do this, though I don’t have experience with it. You might need to use the first step below to find the LVM partition.
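
If the “name” Cockpit let you change turns out to be the filesystem label (I’m not certain that’s what it changes), putting it back from the Live USB might be as simple as one of these, once the logical volume has been found but before it is mounted. The device path and label here are assumptions:

    # XFS root (the Fedora Server default); the filesystem must be unmounted:
    sudo xfs_admin -L originalname /dev/mapper/fedora-root
    # ext4 root:
    sudo e2label /dev/mapper/fedora-root originalname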

Working in the Live USB Linux:

  1. Figure out how to mount the original drive’s root partition, or gather more info and tell us more about the problem. If LVM was used on your drive, the commands “pvscan”, “lvscan”, and “vgscan” should show the available partitions within LVM (there’s a rough command sketch after this list).

    • If you learn the volume group and logical volume, then “sudo mount /dev/volgroup/logicalvol mountpoint”
  2. If there is a dedicated /boot partition, mount it as well.

  3. From here, I use “/mnt” as shorthand for wherever the relevant partition is mounted.

  4. Examine the file /mnt/boot/efi/EFI/fedora/grub.cfg. Find the boot options, particularly “root=/dev/…”. On my CentOS box there were multiple “linuxefi” lines, each with boot options; I also have a Fedora system, and there it looks different: look for “set root” (it appears multiple times, once per boot menu entry) and “default_kernelopts”, which appears to carry the “root=/dev/…” used by all the boot menu entries.

  5. Can you tell what needs to happen here, how the option(s) need to change to fit the changed partition name? If so, edit it. If not, come back with more info for more advice.

  6. You may also need to edit /mnt/etc/fstab and update the info for the root filesystem. Or, if using LVM, it may be okay.
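
To make steps 1, 2, and 4 a bit more concrete, here is a rough sketch of the commands from a Live USB shell. The volume group name and partition numbers are guesses based on a default Fedora install; use whatever pvscan/lvscan and lsblk actually report on your system.

    # step 1: find and activate LVM, then mount the root logical volume
    sudo pvscan
    sudo vgscan
    sudo lvscan
    sudo vgchange -ay                          # activate any volume groups that were found
    sudo mount /dev/fedora/root /mnt           # or /dev/mapper/fedora-root; the name may differ

    # step 2: mount /boot (and the EFI partition on UEFI installs); verify the devices with lsblk
    sudo mount /dev/sda2 /mnt/boot             # /boot is often the second partition
    sudo mount /dev/sda1 /mnt/boot/efi         # EFI system partition, UEFI installs only

    # step 4: see which root= the boot entries are actually using
    sudo grep -n 'root=' /mnt/boot/efi/EFI/fedora/grub.cfg
    sudo grep -n 'default_kernelopts' /mnt/boot/efi/EFI/fedora/grub.cfg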

Good luck with this issue, and let us know how it goes.

P.S. If you’d like to change the partition name back, but aren’t sure of the original name, you might be able to learn it at step 4 in my previous post.

I haven’t gotten back to troubleshooting this yet. Thanks for the advice. I was still in a trial-run stage with this setup, and the apps that were running in Docker are also still on my FreeNAS, so I haven’t been in a hurry to get it fixed.

I did do some more research into the original problem I was trying to figure out before I screwed up the partition name, though. I’ll have to give Docker its own volume to write to so it doesn’t fill up the root partition; this was not mentioned in any of the Docker installation guides I read. There was a kernel bug with device-mapper that prevented disk space from being freed after a container was deleted, which is supposed to be fixed now. I don’t know if that is what was happening in my case, but I should be able to get around it by giving Docker its own volume and occasionally running a prune command to delete any stopped containers and their associated data if I see the volume getting full.
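
Both pieces of that plan look roughly like this; the mount point below is a made-up example, “data-root” needs a reasonably recent Docker, and any existing daemon.json should be backed up before overwriting it:

    # point Docker's storage at a dedicated volume, then restart the daemon
    echo '{ "data-root": "/mnt/docker-data" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

    # occasional cleanup of stopped containers, dangling images, and unused networks
    docker system prune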

P.P.S. If you do modify grub.cfg to get the system to boot, make a corresponding change to /etc/default/grub so that the next time the boot files are regenerated, they will be correct.
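
On Fedora 31 that regeneration is normally done with grub2-mkconfig, run from the repaired system once it boots (or from a chroot); the output path depends on whether it’s a UEFI or BIOS install, so pick whichever matches:

    sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg   # UEFI
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg            # BIOS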

Thanks for your response with info on the status; glad this isn’t an emergency.