Hi, so I accidentally-ish bricked a brand new 20TB HDD.
Even the newest and biggest external 3.5" USB drives still ship with a 512e sector size (4096 bytes physical, 512 bytes logical).
If you’re thinking “isn’t that inefficient?” … or “whyyyyy” … or “I’ll just change it” … don’t.
Either the firmware on the drive or the firmware of the enclosure these large external drives ship in is dumb, and changing the sector size will most likely brick your drive.
Most of the repair tools only work on SATA drives. I haven’t tried shucking the drive to check, since I need/want the USB interface, the drive is a bit expensive, and Kapton tape might need to be applied over some of the pins.
Why might it be more efficient to use 4k sectors?
Well, you’re sending commands over USB, and if you need to write e.g. 1MiB of data, you can send either 2048 write commands (with 512-byte sectors) or 256 write commands (with 4k sectors). Fewer commands means less per-command overhead.
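The arithmetic, spelled out in plain shell (one write command per logical sector, for the 1MiB example above):

```shell
# How many write commands does 1 MiB of data take at each logical sector size?
BYTES=$((1024 * 1024))          # 1 MiB payload
CMDS_512=$((BYTES / 512))       # one command per 512-byte logical sector
CMDS_4K=$((BYTES / 4096))       # one command per 4096-byte logical sector
echo "512b sectors: $CMDS_512 commands"
echo "4k sectors:   $CMDS_4K commands"
```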
So you have the latest hdparm and you run:
hdparm -I /dev/sdi
And it shows:

...
CHS current addressable sectors:     16514064
LBA    user addressable sectors:    268435455
LBA48  user addressable sectors:  39063650304
Logical  Sector size:                     512 bytes [ Supported: 4096 512 ]
Physical Sector size:                    4096 bytes
Logical Sector-0 offset:                    0 bytes
device size with M = 1024*1024:      19074048 MBytes
device size with M = 1000*1000:      20000588 MBytes (20000 GB)
cache/buffer size  = unknown
Form Factor: 3.5 inch
Nominal Media Rotation Rate: 7200
...
All good stuff - a 4k logical sector size is supported.
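If you want to script that pre-flight check instead of eyeballing it, a minimal sketch (the hdparm lines below are a canned sample copied from above; on a live system you’d capture them with OUT="$(hdparm -I /dev/sdX)"):

```shell
# Does the drive advertise 4096-byte logical sectors? Parse the
# "Supported: ..." list out of hdparm -I output (canned sample here).
OUT='Logical  Sector size:                     512 bytes [ Supported: 4096 512 ]
Physical Sector size:                    4096 bytes'
SUPPORTED=$(printf '%s\n' "$OUT" | sed -n 's/.*Supported: *\(.*[0-9]\) *].*/\1/p')
HAS_4K=no
case " $SUPPORTED " in *" 4096 "*) HAS_4K=yes ;; esac
echo "supported logical sector sizes: $SUPPORTED (4k: $HAS_4K)"
```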
… and you read the hdparm manual and you issue:
hdparm --set-sector-size 4096 /dev/sdi

It shows you a warning that this will scramble your data, and since there’s no data on the drive, you follow up with:
hdparm --set-sector-size 4096 --please-destroy-my-drive /dev/sdi
… what follows is a “success” message (I didn’t capture it, I’m sorry) and then an error message/crash that partially freezes your USB stack; dmesg fills up with “hung task” warnings and timeouts talking to the device.
… plugging the disk into another machine yields the following in dmesg:
[1733124.832014] sd 6:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[1733124.832020] sd 6:0:0:0: [sde] Sense not available.
[1733124.832027] sd 6:0:0:0: [sde] 0 512-byte logical blocks: (0 B/0 B)
[1733124.832030] sd 6:0:0:0: [sde] 0-byte physical blocks
[1733124.832036] sd 6:0:0:0: [sde] Write Protect is off
[1733124.832040] sd 6:0:0:0: [sde] Mode Sense: 00 00 00 00
[1733124.832046] sd 6:0:0:0: [sde] Asking for cache data failed
[1733124.832048] sd 6:0:0:0: [sde] Assuming drive cache: write through
[1733124.942144] sd 6:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[1733124.942155] sd 6:0:0:0: [sde] Sense not available.
[1733124.942178] sd 6:0:0:0: [sde] Attached SCSI disk
[1733128.012128] usb 1-1: new high-speed USB device number 7 using ehci-pci
[1733128.214080] usb 1-1: New USB device found, idVendor=1058, idProduct=25a3, bcdDevice=10.31
[1733128.214090] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1733128.214095] usb 1-1: Product: Elements 25A3
[1733128.214100] usb 1-1: Manufacturer: Western Digital
[1733128.214103] usb 1-1: SerialNumber: 3~~~~~~~~~~~~~4B
[1733128.215383] usb-storage 1-1:1.0: USB Mass Storage device detected
[1733128.216550] scsi host6: usb-storage 1-1:1.0
[1733129.243066] scsi 6:0:0:0: Direct-Access WD Elements 25A3 1031 PQ: 0 ANSI: 6
[1733129.243911] sd 6:0:0:0: Attached scsi generic sg4 type 0
[1733129.247977] sd 6:0:0:0: [sde] Unit Not Ready
[1733129.247990] sd 6:0:0:0: [sde] Sense Key : Hardware Error [current]
[1733129.248001] sd 6:0:0:0: [sde] ASC=0x30 <<vendor>>ASCQ=0x81
[1733309.270105] sd 6:0:0:0: tag#0 timing out command, waited 180s
[1733489.286779] sd 6:0:0:0: tag#0 timing out command, waited 180s
[1733669.324287] sd 6:0:0:0: tag#0 timing out command, waited 180s
[1733669.324325] sd 6:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[1733669.324330] sd 6:0:0:0: [sde] Sense Key : Hardware Error [current]
[1733669.324337] sd 6:0:0:0: [sde] ASC=0x30 <<vendor>>ASCQ=0x81
[1733669.324343] sd 6:0:0:0: [sde] 0 512-byte logical blocks: (0 B/0 B)
[1733669.324346] sd 6:0:0:0: [sde] 0-byte physical blocks
[1733849.341196] sd 6:0:0:0: tag#0 timing out command, waited 180s
[1733849.341294] sd 6:0:0:0: [sde] Test WP failed, assume Write Enabled
[1734029.387442] sd 6:0:0:0: tag#0 timing out command, waited 180s
[1734029.387488] sd 6:0:0:0: [sde] Asking for cache data failed
[1734029.387493] sd 6:0:0:0: [sde] Assuming drive cache: write through
[1734029.469972] sd 6:0:0:0: [sde] Unit Not Ready
[1734029.469985] sd 6:0:0:0: [sde] Sense Key : Hardware Error [current]
[1734029.469995] sd 6:0:0:0: [sde] ASC=0x30 <<vendor>>ASCQ=0x81
[1734209.513860] sd 6:0:0:0: tag#0 timing out command, waited 180s
… the drive might still be fine if shucked and connected to a decent SAS/SATA controller, where you might be able to format it, but as an external drive it seems bricked for all practical intents and purposes. … a pity for a 20TB drive to end its life like this (about to go back to Amazon by regular mail, RMA’d as broken).
n.b. I no longer have contacts at WD; the people I knew there have moved on… there’s no one left I know in a position to reproduce the issue at basically no cost to them other than time, or to offer tooling in the form of Python scripts that send mysterious SCSI commands.
n.b.b. … sector writes are never atomic, regardless of sector size. By that I mean: if you yoink power from a drive in the middle of writing a sector, you may end up with an unreadable/bad sector that’s recoverable by overwriting it; that’s true for any sector size. This is only a problem if you rely on software that depends on individual random 512-byte writes not blurring nearby sectors. As long as you use a CoW filesystem (ZFS, Btrfs, bcachefs, …), or a filesystem that just happens to work in at least 4k chunks (ashift=12 or higher; realistically most filesystems read/write 4k blocks), or a 4k device-mapper volume, there’s nothing to worry about.
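To make the “blurring nearby sectors” point concrete: on a 512e drive, eight 512-byte logical sectors share one 4096-byte physical sector, so a torn write to any one of them can take out its seven neighbours. A quick sketch of the mapping (pure arithmetic; LBA 10 is an arbitrary example):

```shell
# Which 4096-byte physical sector does a 512-byte logical write land in,
# and which other logical sectors share it?
LBA=10                              # 512-byte logical sector being written
PHYS=$((LBA * 512 / 4096))          # physical sector it falls inside
FIRST=$((PHYS * 4096 / 512))        # first logical sector in that physical one
LAST=$((FIRST + 7))                 # last of the 8 co-located logical sectors
echo "logical $LBA lives in physical $PHYS (shared with logical $FIRST..$LAST)"
```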
Personally, I align my stuff on 1MiB boundaries and use 4k-sector LUKS on top of LVM - it works fine.
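On the LUKS side that’s cryptsetup luksFormat --type luks2 --sector-size 4096, and the alignment part can be sanity-checked with plain shell arithmetic. A sketch with a hypothetical partition start (on a real system you’d read it from /sys/block/sdX/sdX1/start or parted’s unit B print):

```shell
# Is the partition start aligned to a 1 MiB boundary?
START_SECTORS=2048                       # partition start, in 512-byte units (example value)
START_BYTES=$((START_SECTORS * 512))
ALIGNED=$(( START_BYTES % (1024 * 1024) == 0 ))   # 1 = aligned, 0 = not
echo "start=${START_BYTES}B 1MiB-aligned: $ALIGNED"
```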