NTFS on Linux for a backup drive

So I recently bought an external 8 TB WD Elements drive for backups from my Linux NAS. I decided to try NTFS as the filesystem on it, since I wanted the backups to be as compatible as possible with both Linux and Windows.
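
For reference, creating the NTFS filesystem from Linux looks something like this (a sketch; the device path and label are placeholders):

     # Sketch: quick-format the drive as NTFS from Linux
     # (/dev/sdX1 and the label "8tb" are placeholders).
     sudo mkfs.ntfs -Q -L 8tb /dev/sdX1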

My backup script is below:

     newdate=$(date +%y%m%d)     # %y%m%d -> e.g. 240315; the original %g%m%e mixes the ISO week-year with a space-padded day
     olddate=PREVIOUS_SNAPSHOT   # placeholder: name of the previous snapshot directory
     mkdir /mnt/8tb/"$newdate"
     cp -al /mnt/8tb/"$olddate"/* /mnt/8tb/"$newdate"/   # hard-link the previous snapshot
     rsync -avh --delete /sharedfolders/ /mnt/8tb/"$newdate"
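
For what it's worth, rsync can do the hard-link rotation in one step with --link-dest, instead of the separate cp -al pass; a sketch using the same placeholder paths:

     # Sketch: equivalent snapshot in one rsync call. Files unchanged since
     # the previous snapshot are hard-linked from it instead of re-copied.
     rsync -avh --delete --link-dest=/mnt/8tb/"$olddate" /sharedfolders/ /mnt/8tb/"$newdate"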

Rsync runs at about 30-40 MB per second, with about 70-80% of one core in use. That sucks, since the drive is capable of ~150 MB per second on Windows.
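
If you want to confirm the CPU use is the ntfs-3g FUSE process rather than rsync itself, something like this sketch works (pidstat comes from the sysstat package; the process name is an assumption based on how the drive was mounted):

     # Sketch: sample per-process CPU every 5 seconds during a transfer.
     # ntfs-3g is a userspace (FUSE) driver, so its overhead shows up as
     # CPU time on this process rather than in the kernel.
     pidstat -u -p "$(pgrep -d, ntfs-3g)" 5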

Also, after backing up my 5+ TB of data, I moved the drive to a Windows machine and discovered that it was 88% fragmented! That is crazy.

My mount options (see the example fstab line after this list):

defaults
noauto - only mount when I want it to
user - allow a regular user to mount it
uid=1000, gid=100, and umask=000 - set permissions wide open, as desired for this drive
hide_dot_files and hide_hid_files - keep the two hidden-file conventions (leading-dot names vs. the DOS hidden attribute) in sync
windows_names - make sure Windows does not puke on unsupported characters
noatime and big_writes - somewhat successful tuning parameters
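
Pulled together, those options look something like this as an /etc/fstab entry (a sketch; the volume label and mount point are assumptions for illustration):

     # Sketch /etc/fstab entry combining the options above
     # (the label "8tb" and the mount point are placeholders).
     LABEL=8tb  /mnt/8tb  ntfs-3g  defaults,noauto,user,uid=1000,gid=100,umask=000,hide_dot_files,hide_hid_files,windows_names,noatime,big_writes  0  0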


So overall, I will probably go with another FS next time and just live with the drive not being accessible on Windows, or only accessible after installing a third-party program.


Once it was all copied and set up, I presume the drive worked fine?
How long does a defrag take on a large USB drive like that?

It is working fine on Linux, if a bit slow.

The defrag takes like two days for the 5 TB of 88% fragmented data.

The initial run of defrag would take a very long time, but after that it is shortened dramatically!
Partitioning the drive into multiple partitions gives the advantage that data gets written to localized sections, and it is easier to defrag one partition than the whole drive.

The defrag time is to be expected for the speed and size of the drive, and the amount and fragmentation of the data.

My problem is not the defrag time; it is that NTFS-3G writes the data so poorly that it ends up badly fragmented.

That's usually why I only write FAT partitions if I want them to be readable by any flavour of Windows.
(Actually, any OS can read a FAT partition.)
The NTFS format writes data not only to the block itself but also in the margins of the block borders.
This does allow more data to be stored under that format scheme, but it often poses problems with other OSes.
The other downside is that it virtually doubles the fragmentation rate.
Considering that the Windows OS partition covers nearly the entire drive (unless you specifically partition the drive before installing Windows), read/write operations will span the entire partition. While this initially speeds up the read/write process, it gets drastically worse as the drive gets fragmented.
Regular defragmentation keeps it at a manageable level! (I recommend running defrag every week or two, even if Windows says it does not need it.)

As I mentioned before, setting up multiple partitions on very large drives not only protects data but makes the defrag process much easier and faster.
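
If you wanted to set up a large external that way from Linux, it might look roughly like this (a sketch; the device path and split points are assumptions):

     # Sketch: split the drive into two halves with parted
     # (/dev/sdX is a placeholder -- double-check the device first,
     # since mklabel wipes the existing partition table).
     sudo parted /dev/sdX -- mklabel gpt
     sudo parted /dev/sdX -- mkpart backup1 ntfs 0% 50%
     sudo parted /dev/sdX -- mkpart backup2 ntfs 50% 100%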

NTFS-3G, as used by Linux, is a poor implementation of the format scheme, and you would be better off using FAT.

Yep.

I actually just reformatted the drive to Btrfs, since the Windows driver looks far enough along for at least a read-only mount, if not a read/write one. The drive is now writing at the full 150 MB per second it is capable of.
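
For anyone following along, the reformat itself is simple (a sketch; the device path and label are placeholders):

     # Sketch: create the Btrfs filesystem and mount it
     # (/dev/sdX1 and the label are placeholders).
     sudo mkfs.btrfs -L 8tb /dev/sdX1
     sudo mount /dev/sdX1 /mnt/8tb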


Please excuse my ignorance, but isn't FAT limited to like 2 TB? Or is that exFAT? Or do I have it wrong altogether?

FAT32 is limited to a 2 TB partition and a 4 GB file size.
exFAT allows up to a 128 PB partition and a 128 PB file size.

Many OSes impose somewhat smaller limits when actually creating partitions and files. exFAT also generally requires a separate package install on Linux.
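
On a Debian/Ubuntu-style system that would look roughly like this (a sketch; package and tool names vary by distro, and newer kernels also ship a native exFAT driver):

     # Sketch: install the exFAT tools and format the drive
     # (Debian/Ubuntu package name; /dev/sdX1 and the label are placeholders).
     sudo apt install exfatprogs
     sudo mkfs.exfat -L 8tb /dev/sdX1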


Thanks. Didn’t realise a separate package was sometimes needed.

I've used NTFS for drives shared between Windows and Linux, but never noticed, or looked for, fragmentation.
Typically I'd share a Steam drive/partition, so I will be checking that out tonight.

Correct, and that's why I multi-partition large drives.
It may seem like a pain in the butt to do, but it's a safer bet for file-recovery options as well.
Getting systems in the shop for file recovery and a general tune-up does make money, but nothing burns my biscuits more than having to defrag a large drive that has never been defragged before!
It takes a lot of bench time and power usage, and fast turnover makes more money!

Apart from the standard rule that you should always have more than one backup :wink: I wouldn't trust exFAT with the data, since it's not journaled.

I don't know which OS will be reading/writing more, but I would recommend using the better-supported FS for the OS you write from the most. Since there are ext4 drivers for Windows, you could use ext4 instead of NTFS and have support on both operating systems.
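
On the Linux side that is the usual one-liner (a sketch; the device path and label are placeholders, and the Windows-side driver is a separate install):

     # Sketch: format the backup drive as ext4
     # (/dev/sdX1 and the label are placeholders).
     sudo mkfs.ext4 -L 8tb /dev/sdX1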

However, it has been a while since I used Windows, so I have no idea how good those drivers are :wink: