OS: Ubuntu 20.04 LTS
Software of interest: MergerFS and SnapRAID
Continuing my “series” of dumb questions on here, I figured I’d ask one that I really have no clue about. This might be related to me using MergerFS (which I still haven’t gotten up and running due to some permission issues), but I do wonder what is supposed to happen when I actually mount a drive.
So what I’m on about is this:
I partition my drives using gdisk and then format the partitions as ext4.
I set up fstab by opening it in nano (I had some weird issues with gedit, so I just used nano instead).
Then I run the mount -a command to mount all the drives.
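For anyone following along, that partition/format/UUID workflow can be rehearsed safely against a disk image file instead of a real drive. This is just a sketch: the image file stands in for a partition like /dev/sdb1, and mkfs.ext4/tune2fs are from e2fsprogs.

```shell
#!/bin/sh
set -e

# Create a small sparse image file to stand in for a real drive
# (on real hardware this would be a partition such as /dev/sdb1).
truncate -s 64M disk.img

# Format it as ext4. -F is needed because it's a regular file rather
# than a block device; -q suppresses the usual progress output.
mkfs.ext4 -F -q disk.img

# Read the UUID back out of the superblock -- this is the value that
# goes into the UUID=... field of /etc/fstab.
tune2fs -l disk.img | grep 'Filesystem UUID'

rm -f disk.img
```

On a real system, blkid prints the same UUID for every block device, and mount -a then mounts whatever fstab describes.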
At which point the drives start working their ass off as if I was reading or writing content to them. This continues even if I restart the system, which leads me to believe it’s an issue with how fstab is set up or maybe MergerFS is causing some issues?
Anyway, here is how my fstab is currently looking:
# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=148d7b6f-f9e9-4ee5-855f-8f9bbe5b825c / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=E5D2-1786 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
# Storage Drives
UUID="18b58779-ad4b-45da-97b9-586f002566e5" /mnt/disk1 ext4 defaults 0 0
# Parity Drives
UUID="931acf58-50c1-42d3-8677-2f19502e1060" /mnt/parity1 ext4 defaults 0 0
# MergerFS setup
/mnt/disk* /mnt/storage fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,fsname=mergerfs 0 0
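For what it’s worth, one quick sanity check on an fstab like this is to count fields on every non-comment line; each entry should have exactly six (fstab technically allows the last two to be omitted, but this file uses all six everywhere). This is just a crude awk sketch, not a full validator; on recent util-linux versions, findmnt --verify does a much more thorough job.

```shell
#!/bin/sh
# Flag any non-blank, non-comment fstab line that doesn't have
# exactly six whitespace-separated fields.
awk 'NF > 0 && $1 !~ /^#/ && NF != 6 {
    printf "line %d has %d fields (expected 6): %s\n", NR, NF, $0
}' /etc/fstab
```

No output means every entry at least has the right shape; it says nothing about whether the UUIDs or options are correct.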
If anyone knows what is going on then I’d be really grateful for a helping hand, or even just a nudge in the right direction.
just open a root terminal and you should be good for any permissions you need.
the only other time I have issues with permissions is with chmod *** filename, as certain files need a specific three-digit mode or the system won’t run them. (There are chmod calculators online for this.)
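Those three digits are just octal permission bits (read=4, write=2, execute=1, summed per digit for owner, group, and other), so no calculator is really needed. A quick demo on a throwaway file:

```shell
#!/bin/sh
set -e

touch demo.sh
chmod 754 demo.sh      # owner rwx (4+2+1=7), group r-x (4+1=5), other r-- (4)

# stat -c '%a' prints the mode back in octal (GNU coreutils).
stat -c '%a' demo.sh   # prints: 754

rm -f demo.sh
```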
The permission issues are a different matter entirely, and require me to run things as root even after I do “chown” on the specific folders. For now my main issue is just the hard drives behaving like I’m transferring thousands of files even though they are virtually empty (freshly formatted).
Sadly that changed nothing. I’m also not entirely sure why I needed to create the folder “storage” instead of just having it directly inside “mnt”. That said, I followed guides, so I know very little to start with.
I wanted to make sure you’d started afresh. Apparently that didn’t work and something else is amiss. What exactly, I don’t know, it’s very difficult troubleshooting remotely w/o direct access. (no, I don’t want to ATM, thx!)
Remove all the changes I proposed and revert to the settings you had before.
Alright, so I did a fresh install yet again. This time I just used the GUI to format and mount my drives, to check whether the issue would persist after doing that. Turns out it did, and the drives start working like crazy the second they get mounted in Ubuntu 20.04 LTS…
I’m thinking my next test is going to be to install Windows 10 on the machine and see if I can get them working there without an issue. If I can’t then I suspect this might be a hardware issue more than anything else.
Edit: If nothing else, the drives seem to still be usable even if they are always in a state of “working” when not in use, which is odd.
It’s decided… I tried in Windows and none of the mentioned issues arose. I then tried Pop!_OS to see if anything was different there compared to Ubuntu 20.04, and it seems everything works fine there.
I’ll try some different distros, including Ubuntu 21.10, to see if it’s just an issue with 20.04 or with Ubuntu as a whole. I also have Manjaro ready so I can test something non-Debian-based as well, though at this point I don’t know if that is going to help all that much. At any rate, I’m fairly certain that as long as it’s one of the Debian-based distros I should be able to set it all up how I want it, even if it’s something that isn’t designed for server work from the start.
Generally, simply mounting a drive just means opening the filesystem and making the files accessible. A couple of reads, some journal grooming, but it should all be over in a second. The detailed mechanism depends on the filesystem, but in general it shouldn’t start doing any weird background scans out of the box (unless btrfs or ZFS resilvering or scrubbing kicks in, but there’s none of that with ext4).
There might be something like SMART, fstrim, or fsck running in the background that gets automatically activated as soon as you state in fstab that you want to use the drives, but that would be weird. I wouldn’t put it past Canonical; they’ve done weirder stuff in the past, but the odds are still low.
It might be the OS (smartctl specifically) asking the drives to do a self test in the background. In this case you’ll be able to notice the drives reading, but it’s not the OS or anything running on it that’s issuing the reads.
Typically you can use some version of top, e.g. atop, to sort the processes running on the system by drive activity, and you should be able to see what’s issuing the reads/writes.
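If atop or iotop isn’t installed, the raw counters behind them live in /proc/diskstats, so a crude before/after diff shows which device is actually busy. A bash sketch (field 4 is reads completed and field 8 is writes completed, per the kernel’s iostats documentation):

```shell
#!/bin/bash
# Snapshot per-device read/write completion counters, wait a few
# seconds, snapshot again; devices whose counters moved are busy.
snap() {
    # device name, reads completed, writes completed
    awk '{ print $3, $4, $8 }' /proc/diskstats
}

before=$(snap)
sleep 5
after=$(snap)

# Show only devices whose counters changed between the snapshots.
diff <(echo "$before") <(echo "$after") | grep '^>' \
    || echo "no disk activity in the sample window"
```

This only tells you which device is busy, not which process; iotop (run as root) is still the quickest way to pin the activity on a process or kernel thread.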
This kind of thing is easier to figure out while it’s happening.
I’ll look into both of those things. I did try 21.10 as well, and the issue seems to happen there too. The strange thing is that Pop!_OS did do something with the drives, but it was more of a “standby” noise, if that makes sense, rather than what it sounds like when they read/write a bunch.
Now as for the drives and corrupted files… Windows seems to handle them perfectly fine without zeroing them, though I’ll give it a try in Ubuntu. The strange thing here is that the drives are brand new (Toshiba MG07ACA14TE drives), so there shouldn’t really be anything on them before I mounted them in Ubuntu the first time.
I’ll look into what is listed as going on with the drives as well like @risk mentioned, maybe it can give me some sort of answer to it all.
Lastly, I think it’s time I run a long SMART scan on the drives to see if they are alright. The short SMART scan showed them as being fine, but if this is hardware related then hopefully the scan will show it.
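For reading the results of that scan: the long test is kicked off with something like sudo smartctl -t long /dev/sdb, and the attribute table from smartctl -A is where the early-warning counters live. The sample below is made-up output purely to illustrate the awk filter for the classic failure-predicting attributes; real values will differ.

```shell
#!/bin/sh
# On a real system this would be:  sudo smartctl -A /dev/sdb
# The heredoc stands in for that output (values are illustrative only).
smart_sample() {
cat <<'EOF'
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
EOF
}

# Non-zero raw values on these three attributes are the usual
# early signs of a dying drive.
smart_sample | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {
    print $2, "raw =", $NF
}'
```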
Independently of mounting stuff, you may want to run badblocks on the drives before you put any valuable data on them, and while keeping an eye on interesting SMART counters.
I don’t think that particular drive model is bad or anything, but 1% or 2% drive mortality within the first year is common, and half of it occurs in the first few weeks… burn-in is a way to pre-empt that failure before you need to worry about the data, and while it’s still easy to return the drives.
This burn-in with badblocks will render the drive contents unusable, and the drive will have to be repartitioned and reformatted (gdisk, mkfs).
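The destructive write test can be rehearsed harmlessly on a small image file first, using the same flags as on a real device; a sketch (on real hardware the target would be something like /dev/sdb, and a full pass over a 14 TB drive takes days):

```shell
#!/bin/sh
set -e

# A tiny stand-in "drive"; badblocks also accepts regular files.
truncate -s 4M fake-drive.img

# -w: destructive write test (writes four patterns, reads each back)
# -s: show progress (goes to stderr, not stdout)
# Bad block numbers, if any, are printed to stdout.
badblocks -w -s fake-drive.img 2>/dev/null > bad.txt

# An empty output file means no bad blocks were found.
if [ -s bad.txt ]; then echo "bad blocks found"; else echo "no bad blocks"; fi

rm -f fake-drive.img bad.txt
```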
See the reasoning and a pointer to a helper script here:
Yeah, I read about that, but my main issue was that I have 14 TB drives, and he mentions an 8 TB drive taking a week to get through it. I don’t really have a viable setup to do that at the moment since it’s all set up as a “testbench” while trying to get things to work. Once I do have it up and working, I figure I might do prolonged testing like that.
Thanks, I mostly just used this as instructions for setting it up.
Like I mentioned in my edit above, though, I think it’s all down to jbd2/sdb1-8 and jbd2/sdc1-8. From what I understand, that should be the filesystem journal (just what a random Google search gave me), though I don’t quite understand why it needs to update it constantly.
Edit: While I don’t know for certain that these are the processes responsible, they’re what makes sense based on the naming.
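That reading is right as far as I know: each jbd2/sdX1-N kernel thread commits the ext4 journal for one mounted partition, by default every 5 seconds. The journal’s presence is easy to confirm on any ext4 filesystem; a sketch against a throwaway image so nothing real is touched (the fstab line at the end is a hypothetical example, not from this thread’s config):

```shell
#!/bin/sh
set -e

# Make a scratch ext4 filesystem in a file.
truncate -s 64M scratch.img
mkfs.ext4 -F -q scratch.img

# has_journal in the feature list is what spawns a jbd2 kernel
# thread when the filesystem is mounted.
tune2fs -l scratch.img | grep -o 'has_journal'

# If the default 5-second commit interval were ever the culprit, it
# could be raised via the commit= mount option in fstab, e.g.:
#   UUID=... /mnt/disk1 ext4 defaults,commit=30 0 0

rm -f scratch.img
```

A healthy jbd2 thread should be near-idle on an empty filesystem, though, so constant heavy activity would still point at something else issuing writes.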