Samsung PM9A3 - strange ZFS error message - non-native block size

Hi,
today I got a Samsung PM9A3 U.2 and changed the sector size from 512 B to 4 KiB. After creating a zpool, I got the following error message:

“One or more devices are configured to use a non-native block size.”

[manja-02 ~]# zpool create -o ashift=12 tank02 /dev/nvme2n1 
[manja-02 ~]# zpool status -v
  pool: tank02
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
config:

        NAME        STATE     READ WRITE CKSUM
        tank02      ONLINE       0     0     0
          nvme2n1   ONLINE       0     0     0  block size: 4096B configured, 32768B native

errors: No known data errors

Two weeks ago I did the same with an Intel P4618 and had no problem with it.
And the drive itself reports that 4096-byte sectors are supported:

[manja-02 ~]# nvme id-ns -H /dev/nvme2n1
.....
LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best 
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best (in use)
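A note on where the "native" number likely comes from (an inference, not something confirmed in this thread): on Linux, ZFS does not read the NVMe LBA format list directly; it uses the block sizes the kernel reports for the device, the same values `lsblk -o PHY-SEC` prints below, which live in sysfs under the device's `queue` directory. A minimal sketch of reading those attributes (the `queue_dir` path, e.g. `/sys/block/nvme2n1/queue`, is the standard sysfs layout):

```python
from pathlib import Path

def read_block_sizes(queue_dir):
    """Read the kernel-reported block sizes for a block device.

    queue_dir is the device's queue directory, e.g.
    /sys/block/nvme2n1/queue on a standard Linux sysfs layout.
    Returns (logical_block_size, physical_block_size) in bytes.
    """
    q = Path(queue_dir)
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical
```

If `physical_block_size` reads 32768 here, ZFS will report a 32768 B native block size regardless of the 4096 B LBA format that is in use.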

[manja-02 ~]# lsblk -o NAME,MOUNTPOINT,PHY-SEC
NAME        MOUNTPOIN PHY-SEC
sda                      4096
├─sda1                   4096
├─sda2                   4096
├─sda3                   4096
└─sda4                   4096
sdb                       512
└─sdb1                    512
sdc                    131072
sdd                    131072
nvme0n1                  4096
├─nvme0n1p1              4096
└─nvme0n1p2              4096
nvme1n1                   512
├─nvme1n1p1 /boot/efi     512
└─nvme1n1p2 /             512
nvme2n1                 32768
├─nvme2n1p1             32768
└─nvme2n1p9             32768
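For context on the numbers in the warning: ashift is simply log2 of the block size, so the pool's `ashift=12` means 4096 B blocks, while the 32768 B that the kernel reports for nvme2n1 above would correspond to ashift 15 — hence "4096B configured, 32768B native". A quick sketch of that relationship:

```python
def ashift_for(block_size):
    """Return the ZFS ashift (log2 of the block size) for a
    power-of-two block size in bytes."""
    if block_size <= 0 or block_size & (block_size - 1):
        raise ValueError("block size must be a power of two")
    return block_size.bit_length() - 1

# ashift=12 -> 4096 B blocks (as configured on the pool),
# while the kernel's 32768 B physical block size -> ashift 15.
print(ashift_for(4096))   # 12
print(ashift_for(32768))  # 15
```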

Edit: I formatted back to 512 B and created the zpool without specifying ashift — now it supposedly has 4096 B blocks. WTF?

[manja-02 ~]# nvme format --lba=0 /dev/nvme2n1
You are about to format nvme2n1, namespace 0x1.
WARNING: Format may irrevocably delete this device's data.
You have 10 seconds to press Ctrl-C to cancel this operation.

Use the force [--force] option to suppress this warning.
Sending format operation ...
Success formatting namespace:1
[manja-02 ~]# zpool create tank02 /dev/nvme2n1 
[manja-02 ~]# zpool status -v
  pool: tank02
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank02      ONLINE       0     0     0
          nvme2n1   ONLINE       0     0     0

errors: No known data errors
[manja-02 ~]# 
[manja-02 ~]# lsblk -o NAME,MOUNTPOINT,PHY-SEC
NAME        MOUNTPOIN PHY-SEC
sda                      4096
├─sda1                   4096
├─sda2                   4096
├─sda3                   4096
└─sda4                   4096
sdb                       512
└─sdb1                    512
sdc                    131072
sdd                    131072
nvme0n1                  4096
├─nvme0n1p1              4096
└─nvme0n1p2              4096
nvme1n1                   512
├─nvme1n1p1 /boot/efi     512
└─nvme1n1p2 /             512
nvme2n1                  4096
├─nvme2n1p1              4096
└─nvme2n1p9              4096
[manja-02 ~]# nvme id-ns -H /dev/nvme2n1
.....
LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best
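As a side note, the `--lba` argument to `nvme format` is the LBA Format index from the `id-ns` listing above (index 0 = 512 B, index 1 = 4096 B on this drive). A small sketch — assuming the `nvme id-ns -H` line format shown in this thread — that picks out the data size of each format and which one is in use:

```python
import re

# Sample lines in the format printed by `nvme id-ns -H`
# (copied from the output above).
ID_NS_OUTPUT = """\
LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best
"""

def parse_lba_formats(text):
    """Return {index: (data_size_bytes, in_use)} for each LBA format line."""
    pattern = re.compile(
        r"LBA Format\s+(\d+).*?Data Size:\s*(\d+) bytes.*?(\(in use\))?$"
    )
    formats = {}
    for line in text.splitlines():
        m = pattern.search(line)
        if m:
            formats[int(m.group(1))] = (int(m.group(2)), m.group(3) is not None)
    return formats
```

Here format 0 (512 B) is in use after the re-format, matching the `--lba=0` argument used above.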

I just ordered a dedicated server from Hetzner which contains a couple of Samsung PM9A3 SSDs, and I have the exact same issue as you. I set them both to a 4K sector size and created a pool with ashift=12, which caused the exact same "block size: 4096B configured, 32768B native" warning.

Searching for this led me to your post, so I guess this is a problem specific to ZFS and this SSD. Most likely ZFS is mis-detecting something, or the SSD is lying to ZFS.


Update: This issue appeared while I was in a kexec'd NixOS image with the ZFS kernel module loaded well after boot. Once my system was installed and I booted straight into NixOS, the problem disappeared:

[root@lava:~]# zpool status
  pool: zroot
 state: ONLINE
config:

	NAME                                                 STATE     READ WRITE CKSUM
	zroot                                                ONLINE       0     0     0
	  nvme-eui.36344830525081210025384500000001-part2    ONLINE       0     0     0
	  nvme-eui.36344830525081070025384500000001-part2    ONLINE       0     0     0

errors: No known data errors