(solved) Noob asking for some help: What is supposed to happen after mounting a drive?

Some info:
OS: Ubuntu 20.04 LTS
Software of interest: MergerFS and Snapraid

Continuing my “series” of dumb questions on here, I figured I would ask one that I really have no clue about. Now this might be related to me using MergerFS (which I still haven’t gotten up and running due to some permission issues), but I do wonder what is supposed to happen when I actually mount a drive.

So what I’m on about is this:

  • I partition my drives using gdisk and format them with ext4.
  • I set up fstab, using nano to edit it (I had some weird issues with Gedit, so I just used nano instead).
  • Then I run the mount -a command to mount all the drives.
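(Roughly the command sequence I’m following, with an example device name rather than my exact commands:)

sudo gdisk /dev/sdb        # create a single Linux partition (sdb is just an example here)
sudo mkfs.ext4 /dev/sdb1   # format the new partition as ext4
sudo blkid /dev/sdb1       # grab the UUID to put into fstab
sudo nano /etc/fstab       # add the UUID=… /mnt/disk1 ext4 defaults 0 0 line
sudo mkdir -p /mnt/disk1   # make sure the mount point exists
sudo mount -a              # mount everything listed in fstab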

At which point the drives start working their ass off as if I was reading or writing content to them. This continues even if I restart the system, which leads me to believe it’s an issue with how fstab is set up or maybe MergerFS is causing some issues?

Anyway, here is how my fstab is currently looking:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
UUID=148d7b6f-f9e9-4ee5-855f-8f9bbe5b825c /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=E5D2-1786  /boot/efi       vfat    umask=0077      0       1
/swapfile                                 none            swap    sw              0       0

# Storage Drives
UUID="18b58779-ad4b-45da-97b9-586f002566e5" /mnt/disk1   ext4 defaults 0 0
# Parity Drives
UUID="931acf58-50c1-42d3-8677-2f19502e1060" /mnt/parity1 ext4 defaults 0 0
# MergerFS setup
/mnt/disk* /mnt/storage fuse.mergerfs defaults,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,fsname=mergerfs 0 0

If anyone knows what is going on then I’d be really grateful for a helping hand, or even just a nudge in the right direction.

permission issues?
just open a root terminal and you should be good for any permissions you need.
the only other time i have issues with permissions is with chmod *** filename, as certain files need certain 3-number configs or the system won’t run them.
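e.g. something like this (just illustrative numbers):

chmod 755 somescript.sh   # owner: rwx, group: r-x, others: r-x
chmod 644 somefile.txt    # owner: rw-, group: r--, others: r--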
chmod calculator

The permission issues are a different matter entirely, and require me to run things as root even after I do “chown” on the specific folders. For now my main issue is just the hard drives behaving like I’m transferring thousands of files even though they are virtually empty (freshly formatted).

Comment out the MergerFS line (just add a # in front of it) to disable it, for now.

On both storage disk lines, the UUID is enclosed in quotation marks; remove those, like on the / partition a few lines up. Reboot.

(like this, i.e. your existing disk1 line with the quotes dropped:)
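UUID=18b58779-ad4b-45da-97b9-586f002566e5 /mnt/disk1   ext4 defaults 0 0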

I noticed a wildcard in the source path of the MergerFS line. Don’t. It’s most likely the reason you’re having issues.

1 Like

I did as you mentioned, but the issue is still there. As for the wildcard, which one was it that you were thinking about?

It’s in the last line. Copy these commands line by line; we’re slightly altering your fstab as well.

sudo su #provide password on request
apt install libfuse-dev
mkdir /storage
mkdir /storage/disk1
mkdir /storage/parity1

Next, copy the contents below into /etc/fstab and comment out the relevant existing lines:

# Storage Drives
UUID=18b58779-ad4b-45da-97b9-586f002566e5 /storage/disk1   ext4 defaults 0 0
# Parity Drives
UUID=931acf58-50c1-42d3-8677-2f19502e1060 /storage/parity1 ext4 defaults 0 0
# MergerFS setup
/storage/disk1 /mnt/storage fuse.mergerfs defaults 0 0

Reboot your system. Report back on success or failure.

1 Like

Sadly that changed nothing. I’m also not entirely sure why I needed to create the folder “storage” instead of just having it inside “mnt”. That said, I followed guides, so I know very little to start with.

I wanted to make sure you’d started afresh. Apparently that didn’t work and something else is amiss. What exactly, I don’t know; it’s very difficult troubleshooting remotely w/o direct access. (no, I don’t want to ATM, thx!)

Remove all the changes I proposed and revert to the settings you had before.

1 Like

Yeah I know, it’s always difficult to do so remotely to start with. I also would never willingly allow someone access to my computer remotely, so no need to worry about that.

Thanks for taking a look at it though, if nothing else I do learn from it all so it’s a net positive at the end of the day even if it doesn’t fix the underlying issue.

I’ll probably reinstall Ubuntu and start over again, though at the end of the day I am starting to suspect that the guides I am following are the issue. Unless of course I just have faulty hardware.

1 Like

Alright, so I did a fresh install yet again. This time I just used the GUI to format and mount my drives, to check if the issue would persist after doing that. Turns out it did, and the drives start working like crazy the second they get mounted in Ubuntu 20.04 LTS…

I’m thinking my next test is going to be to install Windows 10 on the machine and see if I can get them working there without an issue. If I can’t then I suspect this might be a hardware issue more than anything else.

Edit: If nothing else the drives seem to still be usable, even if they are always in a state of “working” when not in use, which is odd.

It’s decided… I tried in Windows and none of the mentioned issues arose. I then proceeded to try Pop!_OS to see if anything was different there than in Ubuntu 20.04, and it seems everything works fine there.

I’ll try some different distros, including Ubuntu 21.10, to see if it’s just an issue with 20.04 or Ubuntu as a whole. I also have Manjaro ready, to get something non-Debian-based into the testing as well, though at this point I don’t know if that is going to help all that much. At any rate, I’m fairly certain that as long as it’s one of the Debian-based distros I should be able to set it all up how I want it, even if it’s something that isn’t designed for server work from the start.

Generally, simply mounting the drive just means opening the filesystem and making the files accessible. A couple of reads, some journal grooming, but it should all be over in a second. The detailed mechanism depends on the filesystem, but in general it shouldn’t start doing any weird background scans just out of the box (unless btrfs or ZFS resilvering or scrubbing kicks in, but there’s none of that with ext4).
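If you want to double-check what actually happened at mount time, something like this should do it (just a sketch, using your disk1 mount point as the example):

sudo dmesg | grep -i ext4   # kernel messages from mounting, journal recovery, etc.
findmnt /mnt/disk1          # which device is mounted there and with what options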

There might be something like SMART or fstrim or fsck running in the background that gets automatically activated as soon as you state in fstab that you want to use the drives, but that would be weird. Wouldn’t put it past Canonical, they’ve done weirder stuff in the past, but odds are still low.

It might be the OS (smartctl specifically) asking the drives to do a self-test in the background. In this case you’ll be able to notice the drives reading, but it’s not the OS or anything running on it that’s issuing the reads.

Typically you can use some version of top, e.g. atop, to sort the processes running on the system by drive activity, and you should be able to see what’s asking the drive for reads/writes.
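For example (iotop is a separate package on Ubuntu; this is just one way to do it):

sudo apt install iotop
sudo iotop -oPa   # only show processes actually doing I/O, per process, with accumulated totals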

This kind of thing is easier to figure out while it’s happening.

2 Likes

That’s what can happen if there’s a corrupt file system. Are you sure you recreated and formatted all partitions? If so, maybe zeroing out the drives is needed.
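Something along these lines (the zeroing is destructive, and sdX/sdb1 are placeholders for your actual devices):

sudo fsck.ext4 -f /dev/sdb1                              # force a full check of the filesystem
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress   # wipes the whole drive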

I had a corrupt btrfs once, and any attempt to mount it restarted some failing thread in the kernel with 100% i/o, even when booting from a really old systemrescuecd USB stick.

1 Like

I’ll look into both of those things. I did try 21.10 as well and the issue seems to happen there too. The strange thing is Pop!_OS did do something with the drives, but it was more of a “standby” noise, if that makes sense, rather than what it sounds like when they read/write a bunch.

Now as for the drives and corrupted files… Windows seems to handle them perfectly fine without zeroing them, though I’ll give it a try in Ubuntu. The strange thing here is that the drives are brand new (Toshiba MG07ACA14TE drives), so there shouldn’t really have been anything on them before I mounted them in Ubuntu the first time.

I’ll look into what is listed as going on with the drives as well like @risk mentioned, maybe it can give me some sort of answer to it all.

Lastly, I think it’s time I run a long SMART scan on the drives to see if they are alright. The short SMART scan showed them as being fine, but if this is hardware related then hopefully the scan will show it.
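(For the long test I’m assuming it’s something like this with smartmontools, per drive:)

sudo smartctl -t long /dev/sdb   # start the long self-test; it runs inside the drive
sudo smartctl -a /dev/sdb        # check attributes and the self-test log once it’s done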

Independently of mounting stuff, you may want to run badblocks on the drives before you put any valuable data on them, and while keeping an eye on interesting SMART counters.

I don’t think that particular drive model is bad or anything, but 1% or 2% drive mortality within the first year is common, and half of it occurs in the first few weeks… burn-in is a way to pre-empt that failure before needing to worry about the data and while it’s easier to return the drives.

This burn-in with badblocks will render the drive contents unusable, and the drive will have to be repartitioned and reformatted (gdisk, mkfs).

See reasoning and a pointer to a helper script here:

https://perfectmediaserver.com/hardware/new-drive-burnin.html#badblocks
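The core of it is the destructive write-mode test, roughly like this (it wipes the drive; sdX is a placeholder):

sudo badblocks -b 4096 -wsv /dev/sdX   # write-mode test, show progress, verbose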

1 Like

Yeah I read about that, but my main issue was that I have 14 TB drives and he mentions an 8 TB drive taking a week to get through it. I don’t really have a viable setup to do that at the moment, since it’s all set up as a “testbench” while trying to get things to work. Once I do have it up and working I figure I might do prolonged testing like that.

:+1: - putting things into some enclosure and into a closet/corner first before messing with software and doing a long burn-in sounds good.

badblocks takes “first block” and “last block” as command line args - it’s possible to use those to get “resume”-like functionality, but I don’t know if there’s a good way to remember how far it got.

It can be scripted for sure, e.g. split the disk size into 10G chunks and run a for loop over them.
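A rough sketch of that idea (untested; DEV and the progress file are placeholders):

DEV=/dev/sdX
BS=4096
TOTAL=$(( $(sudo blockdev --getsize64 $DEV) / BS ))   # drive size in 4K blocks
CHUNK=$(( 10 * 1024 * 1024 * 1024 / BS ))             # blocks per 10 GiB chunk
for (( start=0; start<TOTAL; start+=CHUNK )); do
    end=$(( start + CHUNK - 1 ))
    [ $end -ge $TOTAL ] && end=$(( TOTAL - 1 ))
    # badblocks takes last_block then first_block as positional args
    sudo badblocks -b $BS -wsv $DEV $end $start && echo $end >> badblocks.progress
done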

1 Like

I installed atop to check that out, and from what I can see both the drives are currently listed as “busy writing 1%”. Not that I have any clue how I can figure out what is writing to them though.

Actually, scratch that… I think it’s all down to jbd2/sdb1-8 and jbd2/sdc1-8, which from what I can see should be the filesystem journal. Not that I quite get what that does yet though.

e.g.

[ugly screenshot from a phone ssh session, yuck]

If you type “?”, you’ll get a screenful of densely packed instructions, including how to reorder columns, change sorting, change the reporting interval etc… I don’t know offhand, will check now.

Edit: capital “D” changes the sorting key so processes using the most disk IO are at the top (default is CPU), lowercase “i” changes the reporting interval (default is 10s).

1 Like

Thanks, I mostly just used this as instructions for setting it up.

Like I mentioned in my edit above though, I think it’s all down to jbd2/sdb1-8 and jbd2/sdc1-8. From what I understand that should be the filesystem journal (just what a random google search gave me). Though I don’t quite understand why it needs to always update that.

Edit: While I don’t specifically know that these are the processes responsible, I think that’s what makes sense based on the naming.
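(One thing I might try, if I’m reading things right, is peeking at the journal stats to see how often it’s committing:)

cat /proc/fs/jbd2/*/info   # per-device journal transaction stats, if that proc path exists here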