(solved) Noob asking for some help: What is supposed to happen after mounting a drive?

I wonder what’s included in “defaults” mount options for ext4.

e.g. there’s this atime, or access time, thing: every time you read a file or list a directory, even if it’s served from RAM, the filesystem can record that it’s been accessed, which eventually causes a write to disk. And because you want your filesystem metadata to stay consistent even if you unplug the disk halfway through a metadata update, the filesystem uses a “journal”: it first writes a checksummed entry to a log-like place somewhere, e.g. “at 2pm I was about to change the access time of file Foo from 1:49pm to 2pm”, and only then overwrites the metadata in place.
… this way, when power comes back, the filesystem can look at the journal: it will either have a bogus entry whose checksum doesn’t match, which gets skipped, or an ok entry, and then it can check whether the on-disk metadata for file Foo matches what it should be.
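You can confirm the journal described above exists on a real ext4 volume by reading the superblock header (the device path here is just an example — substitute your own):

```shell
# Print the ext4 superblock header and show the journal-related fields
# (look for "has_journal" in the feature list and "Journal inode: 8")
sudo dumpe2fs -h /dev/sdX1 | grep -i journal
```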

This is a good thing; it’s just annoying and slow, and lots of people just disable access time updates by passing noatime in fstab… or by passing relatime, which only updates (and therefore only journals) the access time when it’s older than the file’s modification time, or when the previous update was e.g. a day or more ago.
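If you want to try noatime without editing fstab or rebooting, a remount works on a live system (the mount point is an example):

```shell
# Switch an already-mounted filesystem to noatime on the fly
sudo mount -o remount,noatime /mnt/disk1

# Check which atime-related option is actually active on that mount
findmnt -no OPTIONS /mnt/disk1
```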

There are other options that might be too conservative and would be worth checking.


This time I set up the drives purely using the GUI rather than fstab (mostly because I wanted to test it quickly), so I haven’t actually set things up myself in fstab. Though I assume it’s set up with defaults.

Do you have an example of how I would set up a drive to not continuously update the metadata? I figure it might be worth a check to see if that is what is causing this issue or not.

# Storage Drives
UUID="18b58779-ad4b-45da-97b9-586f002566e5" /mnt/disk1   ext4 noatime 0 0
# Parity Drives
UUID="931acf58-50c1-42d3-8677-2f19502e1060" /mnt/parity1 ext4 noatime 0 0
# MergerFS setup
/mnt/disk* /mnt/storage fuse.mergerfs noatime,nonempty,allow_other,use_ino,cache.files=off,moveonenospc=true,dropcacheonclose=true,minfreespace=200G,fsname=mergerfs 0 0

defaults implies “async” which I don’t “like”, but who am I to judge.

All options here: ext4(5) — e2fsprogs — Debian unstable — Debian Manpages

… and in man mount and in https://www.kernel.org/doc/Documentation/filesystems/ext4.txt


I see, so you just replace defaults with noatime then. That makes sense. Thanks, I’ll look into trying this out later today.


So I finally got around to doing the fstab test with defaults switched to noatime and it seems to have done nothing. I also did a full format of the drives in question using the “Overwrite existing data with zeroes (Slow)” option in GNOME Disks, which btw took me 2 days to finish… In either case that did nothing either.

I’m really stumped on what the issue is to be honest. I’ll update the thread OP and header to reflect what actually seems to be the issue though. Also I’ll upload an audio recording of how the hard drive sounds (just my phone placed 15cm away from the drive while recording). No idea if that will help, but this is at least how both drives sound constantly after mounting them, no matter which mount options get used.

Couldn’t find somewhere people would trust to go to, so I just uploaded the soundfile to soundcloud. Maybe Hard-Drive noises could become the new trending music? Anyway, here is a link: Stream episode Just Hard-Drive Noise by Northman podcast | Listen online for free on SoundCloud

That doesn’t sound like a healthy “hard drive is working” type of noise at all, hmmm.

I’d be worried to have them continue to do that.


I wonder, if you were to ask the kernel to shutdown the drives/ports on the controller, would the drives stop … or would they keep sounding stupid.
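One way to try that without yanking cables: hdparm can ask a drive to spin down immediately (the device is a placeholder, and whether it stays down depends on what keeps poking the disk):

```shell
# Issue an immediate standby (spin-down) command to the drive
sudo hdparm -y /dev/sdX

# Query the drive's current power state afterwards
sudo hdparm -C /dev/sdX
```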

No idea, but they stop immediately if I un-mount them. They also do not do that stuff when mounted in Win 10 and formatted as NTFS, so it’s a Linux thing, even if I have no idea what that thing is.

Didn’t actually end up updating the OP and thread title, but I am narrowing down things.

I just installed Fedora 35 Workstation and set that up to see if I could get a different result than in Ubuntu and Pop!_OS. I figured since Fedora is Red Hat based instead of Debian based, it might give me a different result. So I installed it like normal and went through everything fine as expected, until I came to mounting those 14 TB drives. I formatted them to ext4 and mounted them, just to end up with the exact same issue as in Ubuntu and Pop!_OS.

This had me scratching my head for a while, as it should be distanced enough from Ubuntu/Debian to give me a different result (though not as different as Windows). That had me trying to figure out what was in common each time I had the issue, and one thing was always there: ext4… So knowing that it worked fine with NTFS, I figured I would try a different filesystem and see if that would fix things. I remembered that the PMS guide mentioned that both ext4 and XFS were good choices for this type of server, so I figured I would give XFS a try before resorting to NTFS in Linux. And… it works now. None of the issues I had with ext4 have shown up when using XFS, and I have no idea why.
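For anyone wanting to reproduce the switch: formatting as XFS is a one-liner, and mkfs.xfs writes its metadata up front, so there is no background init thread after mounting (the device and label here are examples — this wipes the device):

```shell
# Destructive: formats the whole device. Double-check /dev/sdX first.
sudo mkfs.xfs -f -L disk1 /dev/sdX
sudo mount /dev/sdX /mnt/disk1
```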

If anyone has any clue why ext4 would cause these issues while XFS and NTFS do not, then I would love to hear some ideas/speculation.

Mostly just me posting answers to my own thread here, but I finally heard back from Toshiba regarding the matter of the drive’s behaviour in ext4 compared to xfs and ntfs.

This is a direct quote:

If you format a drive with the ext4 Format, an indexing of the whole drive will start in the background.
Such kind of index process will create noises like the one you recorded.

As the drive is 14TB it can take quite a few hours before this process will be finished.

After a few days the process should be completed and the drive will work more silent, as long as no user data will be transferred.

As other file systems, like XFS or NTFS do not start this index process, the drive will be silent as soon as the format process will be finished.

In this case the mentioned noise is to be considered as normal and no sign of a defective drive.

So I suppose the indexing process would take quite some time to finish then, but given the noises I was somewhat too worried to leave it working like that for a few days.


Thank you, this was news to me.

Now, I found a slightly more detailed explanation of this at Hectic Geek.

The building of the inode (index node) tables is rate-limited so that performance while using the drive is not affected much. The article says this limit is “16 Mb/s”, but I expect that should be 16 MB/s; megabits per second seems not applicable to storage. For 14 TB this is of the order of a million seconds, 12 days or so! Ridiculous, IMO. Perhaps the rate limit is higher these days, but still.
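Checking the arithmetic behind that estimate (this treats the full capacity as if it all had to be written at 16 MB/s — an upper bound, since the lazy init only zeroes the inode tables, a small fraction of the disk, which would fit Toshiba’s “few hours” better):

```shell
# Upper bound: whole 14 TB written at 16 MB/s (decimal units, as drives are sold)
bytes=$((14 * 1000**4))
rate=$((16 * 1000**2))
secs=$((bytes / rate))
echo "$secs s = $((secs / 86400)) days"   # 875000 s = 10 days
```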

It would annoy me greatly, so I’ll be using -E lazy_itable_init=0,lazy_journal_init=0 if I’m ever making a big ext4 volume. I normally use btrfs, but I’ve found it slower when copying VMs around, so I use ext4 for them.
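For reference, those flags go on the mkfs command line; the zeroing then happens during the format instead of in the background after mount, at the cost of a noticeably slower mkfs (device is a placeholder):

```shell
# Destructive: formats the device. Writing the inode tables and journal
# up front means no ext4lazyinit kernel thread runs after the first mount.
sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX
```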

12 days seems really freaking long. It only took me 6 days to do the “burn-in” on the new drives, after all, using the following command:

badblocks -b 4096 -wsv /dev/sdX

At any rate, I have more or less just started to use XFS on my drives at this point instead of ext4. Just the ability to skip indexing seems to make it worth it, and the benefits vs drawbacks of using XFS compared to ext4 in a storage setting seem to be more or less equal before you take indexing into account (from my understanding anyhow).

The weird thing, though, is that the 2 TB drive I have set up took less than a minute to index using ext4, so I am not entirely convinced that indexing is the only issue, as Toshiba claims.
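One way to check whether this background init is actually what keeps a drive busy: on mainline kernels the worker shows up as a kernel thread named ext4lazyinit, and per-disk write activity can be watched alongside it (iostat comes from the sysstat package; if nothing matches the grep, something else is writing):

```shell
# The ext4 lazy-init worker runs as a kernel thread named ext4lazyinit
ps -eo comm | grep -i lazyinit

# Watch per-disk write throughput every 5 seconds while the drive is noisy
iostat -d 5 /dev/sdX
```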