Help me think here. If I have a file server with ECC memory, does the machine that writes to the file server also need ECC? Everyone says you need ECC for ZFS, but no one mentions the computers that write to that server. Or is everyone just assuming you're running everything on the same box, or that only the main server needs ECC?
It wouldn’t hurt, but it won’t make a difference to any data transferred over the network. If the data is good when it leaves one machine, but is bad when received by another, then that’s not something ECC is going to fix.
Sometimes there’s corruption that sneaks in during the transfer from other sources, and sometimes the transfer just dies. That’s a whole other rabbit hole.
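One cheap way to catch in-transit corruption is to hash the file on both ends and compare. This is just a sketch of the idea in Python (the `sha256_of` helper name and the local copy standing in for a network transfer are mine, not from the thread):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a "transfer": copy a file, then verify both ends match.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "source.bin")
    dst = os.path.join(tmp, "received.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(1024 * 1024))

    before = sha256_of(src)   # hash on the sending side
    shutil.copy(src, dst)     # stand-in for scp/rsync/SMB copy
    after = sha256_of(dst)    # hash on the receiving side

    print("transfer ok" if before == after else "CORRUPTION DETECTED")
```

Tools like `rsync --checksum` do essentially this for you, but a manual hash comparison works with any transfer method.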
I can say from firsthand experience that backing up data that a non-ECC system corrupted just means you’ll get the same garbage back out of the one that does have ECC. So no, you’re not thinking incorrectly about it. Just have to determine which battles you’re willing to fight, and where.
How much do you care?
Anything mission critical is running ECC
If an office worker opens a file, makes an entry, resaves, and another worker can't open it afterward, then corruption occurred, but any decently redundant server can roll back the change.
If an employee keeps the same file open for a week and pushes a corrupt file to the server, we have a different problem.
Everything from forced logoffs at end of day to hardware life cycles and regular maintenance with stress testing is in place to prevent this type of lost productivity.
Checksums (including the integrity checks built into encrypted transports) are a great sanity check for network data transfers, but if a bad stick of RAM is introducing errors locally before the data is ever hashed, you put trash in and get trash out.
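The trash-in-trash-out point is worth making concrete: if RAM flips a bit *before* the checksum is computed, the transfer verifies perfectly and still delivers garbage. A small sketch (the variable names and the simulated bit flip are mine, just to illustrate):

```python
import hashlib

original = bytearray(b"quarterly-report-data")

# Pretend a bad DIMM flips a bit *before* the file is hashed and sent.
corrupted = bytearray(original)
corrupted[0] ^= 0x01  # single-bit flip

sent_hash = hashlib.sha256(corrupted).hexdigest()      # sender hashes the garbage
received_hash = hashlib.sha256(corrupted).hexdigest()  # receiver gets the same garbage

print(received_hash == sent_hash)                         # True: transfer looks clean
print(hashlib.sha256(original).hexdigest() == sent_hash)  # False: data is still wrong
```

This is exactly why transfer-level checks don't substitute for ECC on the machine that produces the data: the checksum can only vouch for the bytes it was given.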