Magnetic disk rejuvenation: using Linux "dd" with if=/dev/sda and of=/dev/sda?

Hello to all,

First off, I apologize if this is in the wrong section of the forum. It is both a hardware and software question.

After trying to find an answer with various search engines, it was suggested that I try this forum. This query centers on magnetic disk rejuvenation.

Background:
I discovered this entry on “Stack Exchange”:
(seems that http/https links are not permitted in posts here)
“superuser.com_questions_1198135_is-it-safe-to-use-dd-to-rejuvinate-a-hard-drive-by-setting-of-if”

The question was: can you run sudo dd if=/dev/sda1 of=/dev/sda1 ?? <<<Note, source and destination are the SAME!

The article suggests that the author of the question had to answer his own question with some empirical work of his own; there were no other replies confirming that this works. So the question remains: does this really work, and where is the proof? If the sectors read off the hard disk were buffered somewhere (a ramdisk, say), one could know with certainty that the outcome was what was intended, namely that the magnetic fluxes are re-written. An in-place refresh would also save a lot of time, since SATA to SATA will always be faster than SATA to USB 2.0 (in some of my cases). Windows is very “particular” about messing with boot partitions; I do not know if I would have the courage to re-image a C:!!
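For what it's worth, the in-place behavior can be rehearsed safely on an ordinary throwaway file before ever pointing `dd` at a real device. This is only a sketch of the mechanics, using a temp file as a stand-in for /dev/sda1; note `conv=notrunc`, which stops dd truncating a regular file before it reads it (block devices can't be truncated, so the flag only matters for this file-based test):

```shell
# Safe dry run of "dd if=X of=X" on a throwaway file instead of /dev/sda1.
# conv=notrunc stops dd truncating the regular file to zero length before
# the first read; a block device cannot be truncated, so on real hardware
# the flag is harmless but unnecessary.
tmp=$(mktemp)
head -c 1048576 /dev/urandom > "$tmp"             # 1 MiB of random data
before=$(sha256sum "$tmp" | cut -d' ' -f1)
dd if="$tmp" of="$tmp" bs=64K conv=notrunc status=none
after=$(sha256sum "$tmp" | cut -d' ' -f1)
[ "$before" = "$after" ] && echo "data survived the in-place rewrite"
rm -f "$tmp"
```

This works because dd reads each block before it writes that same block back, so the data is never clobbered ahead of the read position; it does not, of course, prove anything about whether the drive's cache short-circuits the physical re-write, which is the real open question here.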

So I kindly ask: could an experienced user here with in-depth knowledge of “dd” shed some light on my question? This question is mostly irrelevant for SSDs, but not for magnetic storage.

Thanks for your time and patience.

This concept came up in the SSD data retention thread:

Most SSDs will let their cells rot away, so this is very relevant to them. hdsentinel, badblocks, diskfresh, and I think SpinRite were all mentioned in that thread as ways to refresh data on drives. Very convincing results can be gleaned from the thread on the performance degradation most SSDs suffer after having data at rest for more than a year or two.

I wouldn’t expect magnetic annealing to affect HDDs for at least 2-3 decades, assuming they weren’t operating in some kind of strong external magnetic field, so this isn’t particularly helpful to them.
One of the reasons HDDs are so robust against magnetic-annealing-induced bitrot is that there is only an “off” and an “on” state, while SSD cells commonly use up to 16 different charge states to store information; if HDD manufacturers decided to encode data in the intensity of a magnetic bit on the drive, then I would be much more worried about refreshing them.


“if HDD manufacturers decided to encode data in the intensity of a magnetic bit on the drive then I would be much more worried about refreshing them.”

The question of course is can “dd” do the refresh trick in situ for magnetic disks?

But as to your comment, what about SMR (shingled magnetic recording)? AFAIK the tracks there are layered, hence intensity plays a part. I have avoided SMR for many reasons.

Thanks for your reply!

QLC is the worst example of charge states and why NOBODY should use a QLC SSD for cold storage. That is data suicide.

I don’t see any reason why it wouldn’t work, but I wouldn’t trust it. You’d be better off imaging the disk to a copy and then re-imaging.
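The copy-then-restore route is also easy to rehearse on a scratch file before trusting it with a real device. In this sketch, disk.img is a hypothetical stand-in for an unmounted partition (e.g. /dev/sdb1) and backup.img plays the external copy; `cmp` verifies the image byte-for-byte before anything is written back:

```shell
# Rehearsal of the image-out / verify / re-image approach on a scratch file.
# On real hardware disk.img would be an unmounted partition and backup.img
# would live on another physical drive.
head -c 2097152 /dev/urandom > disk.img                      # fake 2 MiB "partition"
dd if=disk.img of=backup.img bs=1M status=none               # image out
v1=$(cmp -s disk.img backup.img && echo ok)                  # verify the copy
dd if=backup.img of=disk.img bs=1M conv=notrunc status=none  # write it back
v2=$(cmp -s disk.img backup.img && echo ok)                  # verify the restore
rm -f disk.img backup.img
[ "$v1" = ok ] && [ "$v2" = ok ] && echo "copy and restore both verified"
```

The advantage over the in-place one-liner is that a power cut mid-write leaves you with an intact backup image instead of a half-rewritten partition.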

Yes, it can with the right command; but it does more harm than good if the reason it’s being done is to “refresh” the disk.

tangent

I feel like the reason this magnetic refresh is talked about is a remnant of the MFM and RLL drives of yore. Back then the HDD geometry often had to be manually specified, and that geometry could be written out to the drive via a low-level format. It was sometimes advantageous to re-write a HDD to enforce sector/track boundaries (and the data in general, as the magnetic media back then was low-coercivity).

Today, the term “low level format” has lost all meaning. Geometric formatting is done at the factory and cannot be changed by the end user.

SMR does have some track overlap, but there is effectively no difference in magnetic field strength among the actual bits.
I think SMR could have been decent if there were a mature software stack that managed the unique way it likes to be written to, but since that didn’t happen and people expect it to work within existing paradigms, it never had a chance.

Yeah, QLC is much worse than TLC, but a well-designed controller can help mask a lot of the problems it presents.
I’d prefer a QLC SSD with a built-in weak-cell charge-fixing algorithm over a TLC SSD that never went back and re-wrote weak cells… well, at least I’d prefer it if I were expecting to keep the drive for multiple years and didn’t plan on many hundreds of TB written.

Do remember that if it does have that algorithm, its TBW rating goes down the more aggressive it is.


This has me wondering how manufacturers factor the extra wear that algorithm creates into their TBW ratings, because technically it doesn’t decrease the TBW over a short period of time; only once data is old does it start using up write cycles.


I am referring to magnetic drives ONLY. I am not talking about SSDs, of course not!

Well here is the thing about the imaging/refresh for magnetic drives:

It should be clear that I have already backed up the files BEFORE doing any of this!

a) Select the partition you wish to image and re-size it to be as small as the O/S will allow. This saves image space and copy time, though of course the resize itself takes time.

b) Run sha256sum against that partition and save the hash.

c) Use “dd” to save the partition image to an external device.

d) Run sha256sum against the partition image created in c) and save the hash.

e) Compare the hashes created in b) and d) and MAKE SURE they are exactly the same.

f) Refresh the partition selected in a) with the image that was created in c) using “dd”.

g) Run sha256sum against that refreshed partition and save the hash.

h) Hope and pray that ALL hashes matched!

i) Resize the partition selected in a) to the desired size.

As you can see, this is why using “dd” to do an in situ refresh is so alluring: it is a single step! >>>>>> sudo dd if=/dev/sda of=/dev/sda
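In case it helps anyone following along, steps b) through h) above condense into a short script. This is a rehearsal against an ordinary file: PART and IMAGE are hypothetical placeholders, where on real hardware PART would be the unmounted partition (e.g. /dev/sda3) and IMAGE a file on the external drive. The resize steps a) and i) are left to your partition tool and are not shown:

```shell
#!/bin/sh
set -e                                      # abort on the first failure
PART=part.img                               # stand-in for /dev/sdXN (unmounted!)
IMAGE=backup.img                            # image file on the external device
head -c 4194304 /dev/urandom > "$PART"      # fake 4 MiB partition for rehearsal

h_before=$(sha256sum "$PART"  | cut -d' ' -f1)             # step b
dd if="$PART" of="$IMAGE" bs=1M status=none                # step c
h_image=$(sha256sum "$IMAGE" | cut -d' ' -f1)              # step d
[ "$h_before" = "$h_image" ]                               # step e: bail out on mismatch
dd if="$IMAGE" of="$PART" bs=1M conv=notrunc status=none   # step f
h_after=$(sha256sum "$PART" | cut -d' ' -f1)               # step g
[ "$h_before" = "$h_after" ] && echo "all hashes match"    # step h
rm -f "$PART" "$IMAGE"
```

With `set -e`, step h) is no longer hope and prayer: any hash mismatch stops the script before the next write happens.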

If I am doing something stupid here (steps a) to i)), please tell me, because I cannot see any other way. In previous years I copied all the files off and then copied them back, but that changes the creation dates etc., and it is of course very tedious; I found that some external devices DID NOT do as asked with those many hundreds of thousands of files. Things were getting lost or NOT copied.

Thanks for reading. I hope somebody wiser than myself has a different answer.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.