(beating my anti-UUID drum again, sorry) IMO if one is manually maintaining /etc/fstab, one should use labels, not UUIDs. One has to label partitions carefully (a good idea anyway), but the result is a readable /etc/fstab, and thus less error-prone. For example:
ls -l /dev/disk/by-uuid
Get the ID of the partition (so that would be /dev/sdc1). 32-character UUIDs are usually ext4; 8-character ones are usually FAT32 or exFAT. Not sure why you’re using exFAT on an SSD, though. Also, not all kernels have exFAT support built in.
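To make the label approach concrete, here’s a sketch (device name, label, and mountpoint below are made up, adjust to your system):

```shell
# Label an ext4 partition (hypothetical device and label)
e2label /dev/sdc1 bulkdata

# /etc/fstab entry using the label instead of a UUID
LABEL=bulkdata  /mnt/bulkdata  ext4  defaults  0  2
```

For FAT32/exFAT you’d set the label with the appropriate tool (e.g. fatlabel or exfatlabel) instead of e2label.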
This actually makes sense to me. Readability is very important. My only concern with a dual-boot system is that it’s way too easy to change volume labels in Windows (intentionally or by accident) and then get dumped to an emergency shell when you try to boot Linux.
Honestly I don’t know. Whichever is the default method of exFAT support in Fedora 33 is the most likely answer. I do not remember deliberately installing fuse, but this root partition has been around the block a few times.
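One way to check which exFAT implementation is actually in play (a sketch; package names may differ on Fedora 33):

```shell
# Is the in-kernel exfat driver available?
modinfo exfat

# Which userspace exfat packages are installed (fuse-based or not)?
rpm -qa | grep -i exfat
```

If fuse-exfat or exfat-utils show up, you’re likely on the FUSE path; if only the kernel module exists, you’re on the native driver.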
/dev/sdc(1) is not the SSD. It was displaced from /dev/sdb(1) by the SSD. sdc is actually spinning rust on which I wanted to use a filesystem accessible from both Windows and Linux, but that’s a whole other rant.
If it says it’s linked against fuse, you need to upgrade to rpmfusion’s exfat-utils instead (not sure about Fedora 33, specifically).
If not, you could try going the other way and set the partition type as exfat-fuse. Just a thought…
Seems others have had similar problems to yours, so may well be a bug in systemd that doesn’t get along with the shiny new exfat code. Might work better with more recent releases.
[root@localhost ~]# ldd /usr/sbin/mount.exfat | grep -i fuse
ldd: /usr/sbin/mount.exfat: No such file or directory
EDIT: Weird, it mounted with the PARTUUID in /etc/fstab for a change. I’ll try a couple of reboots and see if that remains true. I’ve been seeing a LOT of updates lately. Something may have changed…
EDIT2: No, it didn’t automount /dev/sdc1 a second time. And exfat-fuse is not recognized as a valid fstype.
You could try adding _netdev to the mount options… see if mounting later in the boot process works better at picking up the UUIDs. Just a shot in the dark, though.
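Something like this in /etc/fstab (UUID and mountpoint are placeholders):

```shell
# _netdev delays the mount until later in boot; nofail keeps a missing
# device from dropping you into the emergency shell
UUID=ABCD-1234  /mnt/data  exfat  defaults,nofail,_netdev  0  0
```

Adding nofail as well at least makes a failed mount non-fatal while you experiment.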
The thing to do is boot up with init=/bin/bash, or even drop into the initrd shell, and try some commands there, instead of after the system has booted.
That, or upgrade to Fedora Rawhide, and file a bug report if it hasn’t already been fixed.
Hi,
Here is a small Linux solution I thought I’d share, since I didn’t find the answer online; maybe Google can pick it up.
I am running Fedora 33 Workstation and was having an issue running a Podman container: error while loading shared libraries: libc.so.6: cannot change memory protections
I reinstalled Podman and the SELinux packages, and reset some permissions; no luck.
My containers folder was in the default location, /home/vlycop/.local/share/containers/, and I didn’t want to move it to /var/lib/containers.
The issue was obviously with SELinux: the wrong type was set on the directory. I fixed it by doing:
sudo semanage fcontext -a -t container_home_t "/home/vlycop/.local/share/containers"
sudo semanage fcontext -a -t container_file_t "/home/vlycop/.local/share/containers/(.*)?"
sudo restorecon -R -v /home/vlycop/.local/share/containers
It’s maybe not the perfect type, but it works, and I am just getting started with SELinux.
I run a NAS with Fedora OS and btrfs on the HDD array. I also use a small 250 GB Samsung M.2 as the boot and OS device.
I was having a lot of problems with the root partition running out of space overnight. All kinds of services would crash and leave things in a bad state.
I finally tracked it down to two things: the default install of mlocate and the new btrfs subvolume and snapshots I had added last week.
I use snapper to make timeline snapshots of important directories and I had recently moved my /home/zlynx/Backups into its own subvolume and snapshot setup. This is where my other devices write their backup files.
It turns out that mlocate's database builder recurses into everything, including each snapshot. It was bad before, but I hadn’t noticed. Once it started trying to build 20 GB database files, which it does in a temporary copy (so that’s 40 GB total), things were no longer sustainable.
The solution? Rather than figure out exclusion masks for mlocate I removed the package. I never use it anyway.
If you have a LOT of files on a system, watch out for this.
I was in the habit of using locate, a habit gained in the days of spinning drives. When I adopted btrfs and snapper, /var/lib/mlocate/mlocate.db quickly became a problem. In /etc/updatedb.conf, PRUNE_BIND_MOUNTS was no good, because updatedb didn’t distinguish a subvolume mount from a bind mount and so skipped /home. So I’ve had to keep /etc/updatedb.conf revised, with PRUNENAMES including the names of directories typically used for snapshots (like snapper’s “.snapshots”) and PRUNEPATHS listing the places I manually put them, plus other problematic mounts such as the btrfs top-level mount.
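For reference, the relevant part of such an /etc/updatedb.conf ends up looking roughly like this (the paths here are examples; adjust to wherever your snapshots and top-level mount actually live):

```shell
# /etc/updatedb.conf (excerpt)
# Skip any directory with these names, wherever it appears:
PRUNENAMES = ".git .hg .svn .snapshots"
# Skip these absolute paths entirely:
PRUNEPATHS = "/tmp /var/tmp /mnt/btrfs-top"
```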
IMO, if btrfs is used, distros should not install problematic packages like mlocate by default.
Say I have foo: bar in default.yml. How do I access it later? I assume that without "{{ item }}" I would just use "{{ foo }}", or if I used - include_vars: my_vars, I would use "{{ my_vars['foo'] }}" or "{{ my_vars.foo }}". But I’m not sure what it means to put a variable after include_vars: that doesn’t appear to have been defined yet. Or does the item variable have some implicit value?
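For what it’s worth, the usual place you see "{{ item }}" after include_vars is inside a loop, where item is set by the loop itself rather than defined by you. A sketch (file names here are hypothetical):

```yaml
# item takes each candidate filename in turn; the first file that
# exists is loaded, so default.yml acts as the fallback
- name: Load per-distro vars, falling back to default.yml
  include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_distribution }}.yml"
    - default.yml
```

After that runs, foo from default.yml is just "{{ foo }}" unless you also passed a name: to include_vars, in which case it’s namespaced as you described.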
Does RHEL/CentOS 7 not have a Python 3 library for SELinux? I installed Python 3 and set the Ansible interpreter to it, but it’s erroring out with No module named 'seobject'. I’ve installed every python/selinux/policy package I can think of or find in yum, and still no luck.
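If I remember right, on RHEL/CentOS 7 the seobject module only ships for Python 2 (in policycoreutils-python); the base repos have no python3 build of it. A workaround sketch:

```shell
# Python 2 SELinux bindings (the only ones in base RHEL/CentOS 7, AFAIK)
yum install -y policycoreutils-python libselinux-python

# Then point Ansible at the system Python 2 for the SELinux tasks,
# e.g. as a host/group variable in inventory:
#   ansible_python_interpreter=/usr/bin/python2
```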
Jul 22 03:25:07 pve01 kernel: Uhhuh. NMI received for unknown reason 3d on CPU 15.
Jul 22 03:25:07 pve01 kernel: Do you have a strange power saving mode enabled?
Jul 22 03:25:07 pve01 kernel: Dazed and confused, but trying to continue
I recently had cause to run Memtest86 anyway (my CMOS battery died, and you get memory errors with my RAM and mobo combo unless you manually set the memory speed to 1333). So the RAM has recently had 10+ full passes of a mix of Memtest86 8.4 and 9.1.
I also ran another three passes of Memtest86 9.1 overnight, last night, without errors. I’m now running more passes with 9.1, but with just tests 5 and 8 enabled, as I’ve read they’re the tests that can find errors with the CPU’s memory controller.
The errors occurred when the system was just going about its daily business of taking ZFS snapshots and running a handful of LXCs, so system load would have been light. Which makes me wonder whether the issue is to do with power saving? I found one or two users with similar-looking syslog messages who said that disabling C-states stopped the error.
There was no apparent problem with the server itself. It didn’t reboot or freeze (unless it froze and recovered by the time I happened to see the errors, about an hour after they occurred).
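If you want to test the C-state theory without touching the BIOS, limiting C-states on the kernel command line is a common experiment (the parameters below are the usual ones for Intel CPUs; treat this as a sketch, not a recommendation):

```shell
# /etc/default/grub (excerpt) - limit CPU C-states to C1
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=1"

# Then regenerate the grub config and reboot, e.g. on Debian/Proxmox:
#   update-grub
```

Expect somewhat higher idle power draw while testing; if the NMIs stop, that at least narrows it down.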