TerraMaster DAS - data integrity and SMART

Hello. This is my first post here.

I’ve got an improvised NAS built from a fully populated RAID5 DAS (“direct-attached storage”) enclosure - a TerraMaster D5-300 - connected to a Raspberry Pi 4.

It’s been working great for a year so far, but the enclosure doesn’t let me read the SMART data off the drives - it acts as hardware RAID only, behind a JMS576 SATA-to-USB bridge. The filesystem on it is ext4. I want some way of checking that the drives are OK and that the data doesn’t rot, but there is no way to “scrub” on ext4, as far as I know.

The only thing I found is the option of setting up ext4 metadata checksums, but I’m not sure whether that’s still a current feature (or viable on a setup like this).

Thanks in advance!

Anyone?

I just saw there was a topic on whether to be polite here: “Am I wrong to be polite on here?”. I am usually polite. What more can I do to find someone who could help?

In the meantime I found out that on newer kernels (and newer e2fsprogs) metadata checksumming may be enabled by default anyway, but it’s hard to tell whether it actually is. Can anyone at least point me to where I can check this?
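Edit: if I’m reading the e2fsprogs man pages right, the superblock’s feature list can be dumped with tune2fs or dumpe2fs - a sketch only, with /dev/sda1 as a placeholder for whatever block device the filesystem actually sits on:

```bash
# Dump the superblock info and look for "metadata_csum" in the
# feature list (run against the raw device, not the mount point)
sudo tune2fs -l /dev/sda1 | grep -i features

# dumpe2fs -h shows the same header information
sudo dumpe2fs -h /dev/sda1 | grep -i features

# As I understand it, enabling it after the fact would be
# "tune2fs -O metadata_csum" on the *unmounted* filesystem
```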

On a different tech forum I posted about my fight to fix/diagnose a GPD P2 Max. It was very detailed, with screenshots, photos, etc. Not one answer followed. What am I doing wrong?

Thanks…

Hello,
You are correct that the JMicron controller is obscuring the HDD SMART data from the host because it is set to RAID mode. If you want to view SMART data, you should be able to configure the enclosure to run in single-disk mode via the TerraMaster RAID manager (macOS and Windows only), but then you would lose the hardware RAID functionality.
In my opinion this would not be much of a loss; hardware RAID on anything but Areca/Broadcom/Microchip/ATTO chipsets is asking for trouble.
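If you do switch it to single-disk mode, this is roughly how I’d expect SMART queries to work - a sketch, assuming the JMS576 passes SCSI/ATA Translation (SAT) commands through, and with /dev/sda as a placeholder for whichever device node the disk gets:

```bash
# Ask smartctl to talk through the USB bridge's SAT layer
sudo smartctl -d sat -a /dev/sda

# If that errors out, let smartctl try to autodetect the bridge type
sudo smartctl -a /dev/sda
```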

Scrubbing is a RAID-layer feature, as opposed to a filesystem feature. I’m pretty sure the TerraMaster doesn’t support scrubbing.

ext4 metadata checksumming isn’t really that useful for data integrity; it doesn’t cover your actual file contents. It’s more useful for making sure that filesystem structures don’t get wiped out by an errant program.

If it were me, I’d set the enclosure to single-disk mode and let the Linux on the Raspberry Pi handle the RAID 5 via software RAID (mdadm). That way you can scrub and have access to the SMART data (scrubbing is way more important than SMART data, IMO).
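Roughly what that looks like - a sketch only, with placeholder device names (double-check them with lsblk first, since --create is destructive):

```bash
# Assemble the five enclosure disks into a software RAID 5
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Scrub: md re-reads every stripe and verifies parity
echo check | sudo tee /sys/block/md0/md/sync_action

# Watch progress, then check for parity mismatches once it finishes
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```

I believe Debian-based distros even schedule that “check” monthly for you via a checkarray cron job.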


Thank you! But wouldn’t the RAID parity calculations be a bit too much to handle for a mere Raspberry Pi 4?

I considered what you suggested before setting up the device, but thought it would be a lot of hassle, and besides, I assumed that running RAID5 over USB3 would saturate the bus - wouldn’t much more data need to travel back and forth in that situation?

There would definitely be some overhead from the parity calculations. I’m having a surprisingly difficult time finding mdadm RAID 5 benchmarks for the Pi 4, but my assumption would be that you could still get a couple hundred megabytes per second out of a software RAID 5 on the Pi 4.

You won’t have to worry about saturating the USB3 bus; it will be the CPU that bottlenecks you first. Software RAID 5 won’t create much extra traffic over the bus - the host computes parity itself and writes roughly one extra chunk per stripe (about 25% overhead on a 5-disk array), which USB3’s ~5 Gbit/s handles easily.
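If you want to verify that on your own hardware, a quick fio run against the mounted array shows the throughput, and watching htop alongside shows whether the CPU pegs first - a sketch, with the test file path as a placeholder (fio is an apt install away):

```bash
# Sequential 1 MiB writes, bypassing the page cache so the array
# itself is measured rather than RAM
sudo fio --name=seqwrite --filename=/mnt/raid/fio.test \
    --rw=write --bs=1M --size=2G --direct=1

# Clean up the test file afterwards
sudo rm /mnt/raid/fio.test
```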


Regarding getting more responses faster on forum posts: I’ve found it helps to have an inflammatory title, lol.
It’s kind of along the vein of Cunningham’s Law: “the best way to get the right answer on the internet is not to ask a question; it’s to post the wrong answer.”

For example, I wanted to create a thread asking about LinuxCNC performance and was going to title it “Linux CNC Performance?” but then decided on “Has LinuxCNC caught up to Windows XP yet?”

So you suggest converting this to a JBOD. I will probably need to test-drive this first, maybe using some flash drives.

But quick Googling suggests that ZFS, at least, isn’t a good idea: https://www.reddit.com/r/raspberry_pi/comments/ljwkl9/zfs_nas_experiment/

Yes, JBOD is what you want. The only reason I mentioned single-disk mode is that I’ve seen some of the weirder controllers say “JBOD” but really implement a JBOD span (as opposed to JBOD as independent disks, which is normally what I think of when I hear JBOD).
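An easy way to tell the two apart once you’ve flipped the mode: list the block devices the Pi actually sees. Five separate drives means independent disks; one big device means you got a span. Plain lsblk, nothing enclosure-specific:

```bash
# Each physical drive should appear as its own device with its
# own model/serial if the enclosure is doing independent disks
lsblk -o NAME,SIZE,MODEL,SERIAL
```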

Yeah, ZFS isn’t a good idea on the Pi; it has so much overhead it isn’t even funny.
I found a YouTube video benchmarking a degraded RAID 5 array on a Raspberry Pi 4 running at ~66 MB/s, so hundreds of MB/s in a healthy state should be achievable, even with the parity calculations.

