Looking at my RAID, I see that the status bounces quite a lot between "clean" and "active". There is a lot of activity on the array.
It's a RAID 6 with 8 hard disks.
My guess is that this is OK. From what Google turned up, though it was a little murky, it seems to be.
Is this happening because writing data causes the parity to be updated, so the array shows "active", and then it falls back to "clean" once the writes finish?
It's just not crystal clear to me.
It's not resyncing, is it? I see no indication that it is.
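For context, here's how I sanity-checked the resync question (a sketch assuming Linux md/mdadm; the /proc/mdstat snippet below is an illustrative sample, not my exact output, and md0 is a placeholder name):

```shell
# Snapshot of /proc/mdstat (illustrative sample, not my exact output):
mdstat='md0 : active raid6 sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      11720294400 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]'

# A resync/recovery in progress adds a progress line, e.g.:
#   [=>...................]  resync = 12.3% (...) finish=90.1min
# so grepping for those keywords shows whether one is running.
if printf '%s\n' "$mdstat" | grep -Eq 'resync|recovery|reshape'; then
  echo "array is resyncing"
else
  echo "no resync in progress"
fi

# On the live system you can also check:
#   cat /proc/mdstat
#   mdadm --detail /dev/md0             # look at "State :" / "Rebuild Status :"
#   cat /sys/block/md0/md/sync_action   # reads "idle" when nothing is running
```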
Thank you all so much for your help!
With parity RAID, you're most certainly going to see a lot of I/O because of parity updates. Let's also assume the drives themselves aren't throwing bad sectors left and right AND are "RAID-friendly" (TLER-capable, non-shingled, etc.). I would be suspicious of any thin-provisioning mechanisms on the array, especially if the base filesystem is, say, ZFS or Btrfs. Did you configure it with LVM first and then create the filesystem on top?
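To make the thin-provisioning question concrete, here's a sketch of how you'd inspect the stack (the device and VG names in the sample below are made up for illustration; run the real commands listed in the comments on your box):

```shell
# Illustrative lsblk-style view of a thin pool sitting on top of an md array
# (device/VG names are made up):
stack='md0                 raid6
└─vg0-pool0         lvm   thin-pool
  └─vg0-data        lvm   ext4   /srv/data'

# A "thin-pool" type device in the stack means writes pass through LVM thin
# metadata before they ever reach the parity layer.
if printf '%s\n' "$stack" | grep -q 'thin-pool'; then
  echo "thin provisioning sits on top of the array"
fi

# On the live system:
#   lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
#   lvs -o lv_name,lv_attr    # attr starting "t" = thin pool, "V" = thin volume
#   dmsetup table | grep thin
```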
Regardless, thin provisioning + parity on spinning rust in this day and age is not ideal, IMO, unless it's being used for archival/dormant storage. Just my 2c.