I have an old NetApp stack with 200+ non-self-encrypting SSDs. Unfortunately, all the drives have already been pulled from the drive shelves, so I cannot use the NetApp controller to initiate a wipe. I’ve been given a choice: shred them, or prove they have been wiped and take them home for a home lab. None of the drives are recognized when I put them in a system with a hardware array controller or connect them to a standard SATA port. From the research I’ve done, the issue is likely either NetApp’s native firmware locking me out or the drives being formatted with 520-byte sectors instead of 512-byte sectors.
The hardware is in a data center; I have to schedule my time there and typically don’t have much of it while I’m onsite. My ask is:
1: Assistance in either confirming my game plan or helping me correct it. 2: Help finding a way to securely erase the SSDs to DoD 3-pass or equivalent. Based on what I’ve found, securely erasing an SSD is not as simple as writing 1s, then 0s, then random 1s and 0s, because of wear leveling.
It pains me to think that these drives will be shredded soon if I don’t come up with an acceptable solution for my employer. This is my current game plan:
Use a Linux distribution (I’m most familiar with Ubuntu):
1: Pull the drives from their NetApp sleds. 2: Put them in SuperMicro sleds and insert them into a chassis running a vanilla install of Ubuntu 24.04 LTS. 3: Run this command to reformat the drives:
4: Use an Active@ KillDisk boot disk to wipe the drive. (I’m not sure this is sufficient, because it is an SSD.)
If it is the firmware that needs to be swapped, then I’m at a loss on how to do that. I typically use vendor tools for this, and this would definitely be an off-label use of any official tool from NetApp.
These may be SAS drives, not SATA. Assuming that is correct, then yes, they will not be recognized by a SATA controller.
Find a computer with a SAS controller and hook them up to that; they should then be “recognized” (listed by the controller) even if they cannot be accessed due to the 520-byte block size. If they are listed, the sg_format command should work.
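To make that concrete: sg_format ships in the sg3_utils package and can both report the current block size and reissue a low-level FORMAT UNIT at 512 bytes. A sketch, assuming the drive shows up as /dev/sg2 (the device name is a placeholder for whatever lsscsi shows on your box):

```shell
sudo apt install sg3-utils     # provides sg_format, sg_inq, sg_readcap

# Report only: prints the current logical block size (likely 520 on NetApp drives)
sudo sg_format /dev/sg2

# Reformat to 512-byte sectors; destroys all data and can take a long time per drive
sudo sg_format --format --size=512 /dev/sg2
```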
Did you agree with the owner on what constitutes “prove they have been wiped”? Think that through so you’re not taking on unnecessary risk when you take the drives.
Thank you for the quick reply. I initially tried them in a 36-bay chassis with a SuperMicro (Broadcom SAS 3108) RAID controller. It was running Windows, though, and the disks didn’t show up in the OS, so I don’t think SAS vs. SATA is the issue.
They have asked for a confirmed DoD 3-pass wipe or equivalent (a pass of all 1s, a pass of all 0s, and a pass of random data). We have used the Active@ KillDisk utility for this in the past and supplied the application’s report as proof.
I am a little worried because SSDs use firmware-based wear leveling and over-provisioning. Typically, manufacturers supply a utility to ensure wear leveling does not get in the way of a wipe and that the over-provisioned portion of the disk is included as well. Unfortunately, since this is off-label use for these drives and I don’t have the NetApp controller to do the wiping, I can’t go the official NetApp way.
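One avenue worth checking for exactly this worry: SAS SSDs often implement the SCSI SANITIZE command, which asks the drive’s own firmware to erase every block, including the over-provisioned area that host writes can’t reach. sg_sanitize from sg3_utils issues it; a sketch, assuming the drive is /dev/sg2 and supports the block-erase method (consult the sg_sanitize man page and the drive’s reported capabilities first):

```shell
# Ask the drive firmware to erase all NAND, including over-provisioned space.
# --block selects the block-erase method; --wait polls until the drive reports completion.
sudo sg_sanitize --block --wait /dev/sg2
```

If block erase is unsupported, sg_sanitize also offers an overwrite method; whether either satisfies your employer’s “DoD 3 or equivalent” wording is something to agree on beforehand.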
Write a script; something like the one below might do:
#!/bin/sh
# Usage: wipe.sh /dev/sdX  (the whole-disk device, not a partition)
WDEV=$1
[ -b "$WDEV" ] || { echo "usage: $0 /dev/sdX" >&2; exit 1; }
TMP0=/tmp/all0bits.dd
TMP1=/tmp/all1bits.dd
echo "Preparing pattern files"
dd if=/dev/zero of="$TMP0" bs=64M count=1
tr '\x00' '\xff' <"$TMP0" >"$TMP1"   # flip every zero byte to 0xff
echo "Overwriting $WDEV with all 0s"
blkdiscard -f "$WDEV"
# Each fill pass ends with dd reporting "No space left on device"; that is expected.
while true; do cat "$TMP0"; done | dd of="$WDEV" obs=4M
echo "Overwriting $WDEV with all 1s"
blkdiscard -f "$WDEV"
while true; do cat "$TMP1"; done | dd of="$WDEV" obs=4M
echo "Overwriting $WDEV with random data"
blkdiscard -f "$WDEV"
# iflag=fullblock avoids short reads from the random source
dd if=/dev/urandom of="$WDEV" bs=4M iflag=fullblock
echo "Unmapping drive sectors"
blkdiscard -f "$WDEV"
echo "Done."
As far as I understand it, wear leveling works by swapping worn blocks for blocks from the reserve when they need to be rewritten. Therefore, by overwriting the whole drive several times, the writes should eventually cycle through the over-provisioned, held-in-reserve blocks as well.
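For the “prove it” part, you can also spot-check a pass yourself by listing every distinct byte value present on the device: a clean zero pass prints a single value. A minimal sketch, using a 1 MiB test file in place of the real /dev/sdX:

```shell
# Stand-in for the wiped device: 1 MiB of zeros (swap in /dev/sdX on real hardware)
dd if=/dev/zero of=/tmp/wipetest.img bs=1M count=1 2>/dev/null

# List every distinct byte value present; a clean zero pass prints just "00"
od -An -v -tx1 /tmp/wipetest.img | tr -s ' ' '\n' | grep -v '^$' | sort -u
```

Any stray value in that output means a region survived the pass.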
Was the card running in HBA/IT mode? I’m asking because you specifically wrote “RAID controller”. In RAID mode the OS only sees the configured logical drives, not what is physically connected.
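If reflashing isn’t an option, Broadcom’s storcli utility can still enumerate the physical drives behind a SAS 3108 even in RAID mode. A sketch, assuming the controller enumerates as index 0 and the binary is on your PATH (on Linux it often installs as /opt/MegaRAID/storcli/storcli64):

```shell
# Controller summary: firmware, personality (RAID vs JBOD), drive counts
sudo storcli64 /c0 show

# Every physical drive in every enclosure/slot, with state and sector size
sudo storcli64 /c0/eall/sall show
```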
Before flashing a card to IT mode (or buying one), you could check the MPT utility during boot (I don’t know the shortcut you need to press; Ctrl+something? F2?). There should be an option to list all attached drives.
I’m also pretty sure they are SAS. Just search Google for the model number and you should get the interface speed (ours are listed as 12 Gb/s, 6 Gb/s compatible, so definitely SAS3).
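From a Linux box you can confirm both points directly: lsscsi lists what the HBA sees (transport, model, sg pass-through node) and sg_readcap reports the logical block length, which will read 520 rather than 512 on NetApp-formatted drives. A sketch; device names are placeholders:

```shell
sudo apt install lsscsi sg3-utils

# Map each SCSI device to its /dev/sgN pass-through node
lsscsi -g

# "Logical block length=520 bytes" here would confirm the NetApp formatting
sudo sg_readcap /dev/sg2
```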