Which SSD filesystem for static files storage

I’m looking to move my instrument/plugin collection to a separate disk, and it’s in the order of hundreds of gigabytes.
I want it to be an SSD, but I don’t want Linux to keep writing to it, journaling, etc. It’s just static data, except when I install a new plugin, and there’s no need for the OS to write anything to it, ever, unless I copy files to that disk myself. And I don’t want to mount it as ‘ro’.

What filesystem/configuration should I use for that disk?

Is your concern more about bitrot or performance? The answer that jumped to mind is ZFS, but it will need to scrub periodically to guarantee the data. Not sure if that’s what you’re looking for?

My concern is to have fast loading of the (mostly large) files, without the degradation of the SSD caused by the constant writing that the OS does all on its own.
Once the files are installed on the disk, there’s no reason for any writes to occur, unless I install new files on it.

If journaling is an issue, then the only FS left is ext2. JFS and XFS have good results in performance on large and not-so-large files, but both are journaling file systems. So is BTRFS, but as it’s a COW FS, the original data set remains untouched. Still, it’ll probably write to it at some point, and apparently you’re quite adamant about preventing that. No idea why; modern SSDs are perfectly capable of handling journaling file systems w/o ever getting low on remaining cell capacity.
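
If you do want to go the non-journaling route, creating the filesystem is simple enough. A rough sketch, assuming the new disk shows up as /dev/sdb with a single partition (the device name and label here are just placeholders):

# non-journaling ext2 on a hypothetical partition
sudo mkfs.ext2 -L samples /dev/sdb1

# for what it's worth, ext4 can also be created without a journal
sudo mkfs.ext4 -O ^has_journal -L samples /dev/sdb1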


Maybe I’m over-concerned about writes - one of my old SSDs gives me a S.M.A.R.T. warning on every boot, because of power cycle count, not bad blocks…

I don’t know what you’ve experienced in the past to make you worry about this. I don’t see writes to my drives unless I write something.

One exception is “atime”, which is short for “access time”. That will write an updated timestamp every time a file is read. That’s why I always mount my drives with “noatime” as a mount option. There’s also “relatime” if you like that better.

Then there’s the SELinux relabel, which happens whenever your operating system updates its SELinux policy packages. Only relevant if you use SELinux. I think AppArmor just uses file paths, so it isn’t a thing there.
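
If you’re curious what a relabel would actually touch on that disk, a dry run with restorecon lists it without writing anything (the mount point is just an example):

# recursive, verbose, dry-run: show which labels would change, write nothing
sudo restorecon -Rnv /mnt/samples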

Some file indexers write extended attributes to the files as they are processed. That helps them know if the file was already indexed and if so, what data was found in it.
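
You can check whether an indexer has tagged a file that way with getfattr from the attr package. A quick example, with a made-up file path:

# dump all extended attributes (user.*, security.*, etc.) on one file
getfattr -d -m - /mnt/samples/some-plugin.so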

I happen to like using btrfs as my usual filesystem. I would recommend it to you as well. Since you are only doing reads, for the most part, its sometimes slow performance during writes will not bother you. And the CRC checks for data errors are really great.
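
If you do go with btrfs, the per-device error counters make it easy to confirm those checks keep passing. A small example, with the mount point assumed:

# show read/write/flush/corruption/generation error counters per device
sudo btrfs device stats /mnt/samples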

For the rest of the potential writes I mentioned, I don’t think they’re a big deal. I personally don’t care if Fedora wants to relabel every file now and then for SELinux updates.

But definitely try adding “noatime” to your /etc/fstab lines.
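
As a sketch, an fstab entry for the sample disk could look like this (the UUID, mount point, and filesystem type are placeholders for whatever your setup actually uses):

# /etc/fstab - mount the sample disk without access-time updates
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/samples  btrfs  defaults,noatime  0 0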


I really understand the fear of prematurely killing your SSD.
But a bit more over-provisioning and making sure TRIM is set up properly should probably be enough to last until the next upgrade.
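
Making sure TRIM runs periodically is usually just a matter of the fstrim timer; a minimal check, assuming a systemd-based distro and /mnt/samples as the mount point:

# enable periodic TRIM of all mounted filesystems that support it
sudo systemctl enable --now fstrim.timer

# or trim one mount point manually and report how much was discarded
sudo fstrim -v /mnt/samples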

And if you still worry then, maybe it’s just better to channel that paranoia into backups. :stuck_out_tongue_winking_eye:

frhun, when I think about it, the drive will probably fail from other causes first. And the data does not need backing up, just re-installing if it gets damaged.

Manufacturers program (fairly) arbitrary values into various SMART attributes, and the SMART tools out there only react to whatever value is programmed. I have an old Crucial SSD that exceeded its rated lifespan about 3x over and still worked fine. This was an OS disk (so lots of R/W action!) in a server. I’ve since replaced it, but IIRC it had a cumulative uptime of 65-70k hrs; the replacement SSD has already clocked 20,000+ hrs in a different setup.
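
For reference, those raw attributes (including the wear/lifetime counters) come straight out of smartmontools; the device names below are just examples:

# print SMART attributes, including wear-leveling / lifetime counters
sudo smartctl -A /dev/sda

# NVMe drives report the same idea via their health log
sudo smartctl -a /dev/nvme0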

Yes, my old SSD with the warnings works perfectly too; it’s just the annoying pop-up warning of imminent failure, due to power cycle count, that has been appearing on every boot for a couple of years now…

Consider disabling these popup warnings.

I’m using an NVMe boot drive in my home NAS and it’s a funny thing. Its statistics are mostly writes! Because a boot drive is small and servers have a ton of RAM, it mostly gets cached on boot, so the disk is essentially read just once at boot.

But then every time there’s a package update it gets written again. And of course the various log files which are mostly write-only.

Yep, that drive is 30 TB read and 35 TB written.

The read number is even inflated because I run a “btrfs scrub” every week on all of the drives. Without that it would be even lower.
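
In case it’s useful, the weekly scrub is nothing fancy; a sketch of how one might run it by hand (paths and scheduling are whatever you prefer):

# run a scrub in the foreground on the filesystem mounted at /
sudo btrfs scrub start -B /

# check progress / results of the most recent scrub
sudo btrfs scrub status /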

Oh yeah, here’s from a workstation with a 500 GB NVMe boot drive and using the XFS filesystem, so no weekly scrubbing:

Data Units Read:                    1,003,677 [513 GB]
Data Units Written:                 5,779,452 [2.95 TB]

XFS typically has the edge when it comes to large file read performance, and will do minimal background things compared to a cow fs. Just be sure to use bare XFS with no LVM.

As mentioned previously, I wouldn’t worry about journaling killing your SSD. It’s not going to cause enough writes to matter.
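
A minimal sketch of that setup, assuming the data disk is a hypothetical /dev/sdb1 and you want the read-mostly, low-write behaviour discussed above:

# plain XFS straight on the partition, no LVM layer in between
sudo mkfs.xfs -L samples /dev/sdb1

# mount without access-time updates
sudo mount -o noatime /dev/sdb1 /mnt/samples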

Okay, thanks everyone - Now I think I’m just not going to worry about it, and install the disk as usual.

Now I worry, because you just described my boot drive setup.
o_O

What is it that LVM does to make that combination worse?

Nothing is wrong with LVM. The question as I understood it was how to eke out every last bit of performance for large file reads and minimize writes. The starting point for that is XFS with no additional layers, and then you’d want to tweak from there (turn off atime, etc).

If you want volume management, snapshots, etc., then LVM is what you’d typically use on most Linux distributions, especially for the boot/OS drive.
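
For completeness, the LVM layering that gives you those features looks roughly like this (device, volume group, and volume names are placeholders):

# put the partition under LVM, then carve out a logical volume for the data
sudo pvcreate /dev/sdb1
sudo vgcreate samples_vg /dev/sdb1
sudo lvcreate -n samples -l 100%FREE samples_vg
sudo mkfs.xfs /dev/samples_vg/samples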

There, 4TB SSD installed, with just plain old Ext4.
It just struck me, though - I forgot to partition it before formatting, but it still worked - the disk is now one big partitionless /dev/sdb.
This means it does not show any partition flags in gparted. It seems to be working ok, though - is this a problem, except for the fact that it’s now outside of LVM?
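
For what it’s worth, something like this should confirm what you ended up with (device name assumed):

# show the filesystem sitting directly on the whole disk, with no partition table
lsblk -f /dev/sdb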