NVMe Namespaces - Little Known Cool Features of (most) NVMe and User-Programmable Endurance

Samsung PM953 and PM963

Fortunately, Samsung properly supports nvme-cli:

# nvme id-ctrl /dev/nvme0
NVME Identify Controller:
vid       : 0x144d
ssvid     : 0x144d
sn        :       
mn        : SAMSUNG MZ1LB1T9HALS-00007              
fr        : EDA7202Q
rab       : 2
ieee      : 002538
cmic      : 0
mdts      : 9
cntlid    : 0x4
ver       : 0x10200
rtd3r     : 0x7a1200
rtd3e     : 0x7a1200
oaes      : 0
ctratt    : 0
rrls      : 0
cntrltype : 0
fguid     : 
crdt1     : 0
crdt2     : 0
crdt3     : 0
oacs      : 0xf
acl       : 7
aerl      : 3
frmw      : 0x17
lpa       : 0x3
elpe      : 63
npss      : 0
avscc     : 0x1
apsta     : 0
wctemp    : 359
cctemp    : 360
mtfa      : 0
hmpre     : 0
hmmin     : 0
tnvmcap   : 1920383410176
unvmcap   : 0
rpmbs     : 0
edstt     : 0
dsto      : 0
fwug      : 0
kas       : 0
hctma     : 0
mntmt     : 0
mxtmt     : 0
sanicap   : 0
hmminds   : 0
hmmaxd    : 0
nsetidmax : 0
endgidmax : 0
anatt     : 0
anacap    : 0
anagrpmax : 0
nanagrpid : 0
pels      : 0
sqes      : 0x66
cqes      : 0x44
maxcmd    : 0
nn        : 1
oncs      : 0x1f
fuses     : 0
fna       : 0x4
vwc       : 0
awun      : 1023
awupf     : 7
nvscc     : 1
nwpc      : 0
acwu      : 0
sgls      : 0
mnan      : 0
subnqn    : 
ioccsz    : 0
iorcsz    : 0
icdoff    : 0
ctrattr   : 0
msdbd     : 0
ps    0 : mp:8.00W operational enlat:0 exlat:0 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-

Note: the controller ID is a field in this output, abbreviated as cntlid … in our case it’s 0x4.

# nvme list 
/dev/nvme6n1            SAMSUNG MZ1LB1T9HALS-00007               1           0.00   B /   1.60  TB    512   B +  0 B   EDA7202Q
/dev/nvme7n1            SAMSUNG MZ1LB1T9HALS-00007               1           0.00   B /   1.92  TB    512   B +  0 B   EDA7202Q

So this output might seem odd. It’s not, though. The “raw” capacity of this NVMe SSD is 1.92 TB, yet one drive is reporting as 1.6 TB and the other as 1.92 TB. The endurance of the 1.6 TB configuration is significantly higher than that of the 1.92 TB one. If endurance is important to you, you can use nvme-cli to shrink the NVMe namespace; the controller is aware of the “unprovisioned” space and will wear-level across it. It’s a pretty cool feature of NVMe. It’s kind of like short-stroking an NVMe drive (okay, not really, but to an extent).
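A quick sanity check on those numbers, using plain shell arithmetic (the byte counts are the tnvmcap and nvmcap values from the id-ctrl and id-ns output in this post):

```shell
# Raw capacity (tnvmcap) vs. the shrunken 1.6 TB namespace (nvmcap), in bytes
raw=1920383410176
ns=1600321314816

# Space handed back to the controller for wear leveling
echo "$((raw - ns)) bytes held back"

# As a percentage of raw capacity (integer math, so it truncates)
echo "$(( (raw - ns) * 100 / raw ))% over-provisioned"
```

That works out to roughly 16% of the flash sitting idle for the controller to rotate writes through, on top of whatever spare area the factory already reserves.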

To change the namespace sizes:

# it goes without saying literally everything in this guide will destroy all the data here . . . 
nvme delete-ns /dev/nvme0 --namespace-id=1

# if you had /dev/nvme0n1 after running this command that should be gone now. 

# The nvme tool is not super consistent: sometimes it reports in decimal, sometimes in hex.
nvme create-ns /dev/nvme0 --nsze=$((0xdf8fe2b0)) --ncap=$((0xdf8fe2b0)) --flbas=0 --dps=0 --nmic=0

# finally, attach the namespace you created; the controller ID comes from the previous nvme id-ctrl command...
nvme attach-ns /dev/nvme0 --namespace-id=1  --controllers=0x4

# now ls /dev/nvme0n1 should work. 
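If you want a different namespace size, the --nsze and --ncap arguments are just LBA counts: target bytes divided by the LBA size of the chosen format. A sketch of the math, assuming the 512-byte format (lbaf 0, lbads:9) and a 1.6 TB target:

```shell
# Target namespace size in bytes, and the LBA size (lbaf 0 -> lbads:9 -> 512B)
target=1600321314816
lba=512

# LBA count in decimal and hex; the hex form is what shows up as nsze in id-ns
count=$((target / lba))
printf 'nsze decimal: %d\n' "$count"
printf 'nsze hex:     0x%x\n' "$count"
```

That prints 3125627568 and 0xba4d4ab0, matching the nsze in the 1.6 TB id-ns output further down.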

1.6 TB namespace size example:

# nvme id-ns /dev/nvme1n1
NVME Identify Namespace 1:
nsze    : 0xba4d4ab0
ncap    : 0xba4d4ab0
nuse    : 0
nsfeat  : 0x2
nlbaf   : 1
flbas   : 0
mc      : 0
dpc     : 0
dps     : 0
nmic    : 0
rescap  : 0
fpi     : 0x80
dlfeat  : 0
nawun   : 1023
nawupf  : 7
nacwu   : 0
nabsn   : 1023
nabo    : 0
nabspf  : 7
noiob   : 0
nvmcap  : 1600321314816
nsattr	: 0
nvmsetid: 0
anagrpid: 0
endgid  : 0
nguid   : 
eui64   : 0000000000000000
lbaf  0 : ms:0   lbads:9  rp:0 (in use)
lbaf  1 : ms:0   lbads:12 rp:0 
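You can cross-check this output yourself: nsze is an LBA count, and the in-use format (lbaf 0, lbads:9, i.e. 2^9 = 512-byte sectors) multiplies out to exactly the nvmcap byte value:

```shell
# nsze from the id-ns output above, times the 512B sector size (lbads:9 -> 1<<9)
echo $(( 0xba4d4ab0 * (1 << 9) ))
```

which prints 1600321314816, the same number reported in nvmcap.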

1.92 TB namespace size example:

# nvme id-ns /dev/nvme0n1
NVME Identify Namespace 1:
nsze    : 0x2e9352ac
ncap    : 0x2e9352ac
nuse    : 0x2e9352ac
nsfeat  : 0x2
nlbaf   : 1
flbas   : 0
mc      : 0
dpc     : 0
dps     : 0
nmic    : 0
rescap  : 0
fpi     : 0
dlfeat  : 0
nawun   : 255
nawupf  : 0
nacwu   : 255
nabsn   : 255
nabo    : 0
nabspf  : 0
noiob   : 0
nvmcap  : 400080328704
nsattr	: 0
nvmsetid: 0
anagrpid: 0
endgid  : 0
nguid   : 
eui64   : 
lbaf  0 : ms:0   lbads:9  rp:0x2 (in use)
lbaf  1 : ms:0   lbads:12 rp:0x1 



Suddenly the “weird” naming makes sense.

Is ns=0 reserved? How many namespaces can it support?


Yes, ns0 is reserved; valid namespace IDs start at 1.

In a virtualized scenario you can assign VMs to namespace slices; the garbage collector ensures that namespaces do not share blocks, and other security measures such as hardware crypto are enforced. Whereas with a simple partition it may be possible to do an out-of-bounds read into someone else’s partition, this handily prevents that whole class of problems.

It also makes it super easy to have a low-overhead way of enforcing QoS across VMs that share an NVMe device. It’s why this exists.
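As a concrete sketch of that: once each namespace shows up as its own block device, handing one to each guest is just a matter of pointing the VM at /dev/nvme0nX. The QEMU flags below are standard, but the device paths and VM settings are placeholder assumptions, not a tested setup:

```shell
# Hypothetical example: give each guest its own namespace as a raw virtio disk.
# /dev/nvme0n1 and /dev/nvme0n2 are assumed to be two namespaces created as above.
qemu-system-x86_64 -m 4G -enable-kvm -name vm1 \
  -drive file=/dev/nvme0n1,format=raw,if=virtio,cache=none

qemu-system-x86_64 -m 4G -enable-kvm -name vm2 \
  -drive file=/dev/nvme0n2,format=raw,if=virtio,cache=none
```

Because the isolation is enforced by the controller, neither guest can reach blocks belonging to the other namespace even with raw access to its own device.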


I remember you trying to edit the namespaces of those Toshiba drives that changed company hands three times.


I’ve always been unclear on ssd provisioning, trim, endurance, etc. There was an OpenBSD thread about them not supporting trim. They basically said SSDs didn’t implement it correctly/consistently and that all you need to do is leave some unpartitioned space on the drive. But I always imagined it would take something more involved like this… is there a way to do this with vanilla 2.5” SSDs?


No, and not all NVMe drives actually support namespaces. Some controllers ‘peek’ into the disk layout by trying to understand filesystems and the partitioning (Samsung has tons of ‘optimizations’ in their firmware for this). It used to be good advice, but the ‘correct’ way to underprovision now is NVMe namespaces.


Can you do this with optane (not that you’d need the endurance boost, but just curious)?


Good question. Probably? Not for endurance, but for the ‘hard partitioning’ and QoS capabilities. You can pass nvme0n1, nvme0n2, nvme0n3, etc. through to VMs transparently, and with low overhead that way.


I wonder if this means anything for the chia people.


Yeah, it’s actually ideal for creating multiple ZFS SLOGs from a pair of larger Optane drives, such as when there are multiple pools you want them for.


I feel like leaving unpartitioned space still works…
I’ve always left 33% of the space unpartitioned, and I’d say my SATA drives have lasted way longer than I think they should have.

I’ve done it longer with my SATA SSDs; I have two SanDisk 960G drives, and one has almost 43k hours.

[root@Storage ~]# smartctl -a /dev/sdf | grep -i -e avail -e power_on
TRIM Command:     Available, deterministic, zeroed
SMART support is: Available - device has SMART capability.
  9 Power_On_Hours          0x0032   000   100   000    Old_age   Always       -       37162
232 Available_Reservd_Space 0x0033   100   100   004    Pre-fail  Always       -       100
[root@Storage ~]# smartctl -a /dev/sdg | grep -i -e avail -e power_on
TRIM Command:     Available, deterministic, zeroed
SMART support is: Available - device has SMART capability.
  9 Power_On_Hours          0x0032   000   100   000    Old_age   Always       -       42867
232 Available_Reservd_Space 0x0033   100   100   004    Pre-fail  Always       -       100

I’m doing this on some Supermicro SATADOMs in 2 OpenBSD gateways, so hopefully they have some version of:

Although there’s not much disk I/O going on, so maybe it’s fine…

Would I be able to use NVMe namespaces to separate my PC’s OSes?

Currently I’m dual-booting Linux and Windows; those are installed on partitions of the same SSD.

I guess the mobo would need to be able to handle namespaces in order to dual boot from those instead of from partitions.

Would a usual AMD B550 mobo be able to do this?

What I’m trying to figure out is whether I can dual boot AND virtualize Windows. Not simultaneously, of course, but I’d like to do maintenance and install Steam updates inside a VM while I’m using my Linux daily driver. This way I could maintain an up-to-date gaming OS, and play more because I’m not demotivated by waiting for updates to install.

Regarding namespaces/nvme-cli, I noticed an issue (/feature) with NVMe drives attached to a Broadcom 9500-16i tri-mode HBA. The NVMe drive is presented to the OS as a SCSI device, and nvme-cli refuses to work with it. Perhaps you can send some raw command directly to the drive, but I don’t have enough knowledge for that.
Not sure if the new Adaptec HBA 1200 has the same issue. Does anyone have the Adaptec/Microsemi HBA 1200 in use?

I don’t have any issue with the Micron 7400 Pro M.2 connected via M.2 slot (7400 also supports namespaces, and different block sizes).

Excuse me, how can I know which controller the namespace is activated by?
I activated the namespace with a controller number other than cntlid.

Hmm, and it seems like some of the NVMe drives that support namespaces only support one (1).

From an interesting explanation I can now only find on an archived site:

The nn attribute indicates the maximum number of namespaces your disk supports. The device nvme0 is a U.2 drive that supports 32 namespaces and nvme1 is my M.2 boot device that only supports a single namespace.

[root@smc-server thorst]# nvme id-ctrl /dev/nvme0 | grep nn
nn        : 32
[root@smc-server thorst]# nvme id-ctrl /dev/nvme1 | grep nn
nn        : 1

– NVMe Namespaces · Drew Thorstensen

There’s a list on that page of 2020 era enterprise drives that support 16-128 namespaces.

Looks like your PM983 only supports 1 namespace; I get a similar nn = 1 looking at a Samsung 970 Evo Plus. I’ll check some newer NVMe drives when I get a chance.

For Samsung there is the Samsung DC Toolkit, which “may” let you create more namespaces.

Similarly, Toshiba enterprise drives need some special commands to control namespace resizing, but it’s possible on drives like the XG5, as I’ve done it.

One other interesting thing about this output is that this drive appears to support multiple sector sizes. From some nvme-cli info I found, which includes an explanation of how to use ‘nvme format’ to change from, say, 512B to 4096B:

nvme id-ns /dev/nvme0n1

NVME Identify Namespace 1:
… (truncated).
lbaf 0 : ms:0 lbads:9 rp:0x2 (in use)
lbaf 1 : ms:0 lbads:12 rp:0

This drive is currently using 512B formatting and 4096B formatting is supported.
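The lbads field is the log2 of the sector size, so decoding those lbaf lines is a one-liner; the actual switch would be done with nvme format (destructive, and the --lbaf index below is an assumption based on the lbaf 1 line in this output):

```shell
# lbads is log2(sector size): 9 -> 512B, 12 -> 4096B
echo $(( 1 << 9 ))
echo $(( 1 << 12 ))

# To actually switch this namespace to the 4096B format (DESTROYS ALL DATA):
# nvme format /dev/nvme0n1 --lbaf=1
```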

The full discussion, as well as reformatting details are here:

Conversely, looking at the same on a Samsung 980 Pro, I just see:

$ sudo nvme id-ns /dev/nvme1n1
NVME Identify Namespace 1:

lbaf 0 : ms:0 lbads:9 rp:0 (in use)

… so no extra format support there.

Anyway, while looking around at this stuff and trying to find info (and coming to the realization that the non-enterprise NVMe drives I have are not going to let me play with setting up namespaces to create “drives” to use with ZFS), I put together this table, plus a list of sources, detailing namespace support for some drives:


| Model | Form factor | Supported namespaces | Reference |
| --- | --- | --- | --- |
| WD Blue SN570 | | 1 | 1 |
| WD WDS500G2X0C (SN700) | | 1 | 1 |
| WD Ultrastar SN640 | U.2 | 128 | 3 |
| WD Ultrastar SN840 | U.2 | 128 | 3 |
| HGST SN260 | | 128 | 1 |
| Intel Optane 900P | | 1 | 1 |
| Intel Optane P1600X | | 1 | 2 |
| Intel Optane P4801X | | 1 | 2 |
| Intel Optane 5800X (fw. > L0310200) | | 128 | 2,4 |
| Intel P4500 | | 1 | 7 |
| Intel P4510 / P4610 | U.2 | 128 (non-Opal SKUs, maybe fw dependent) | 3,6,7 |
| Intel P4610 | | 128 | 1 |
| Intel P5500 (Dell firmware) | | 128 | 1 |
| Intel D7-P5510 (SSDPF2KX076TZ) | U.2 | 128 | 4 |
| Kingston DC1500M | U.2 | 64 | 8 |
| Kioxia CM6 | U.2 | 64 | 3 |
| Kioxia CD6 | U.2 | 16 | 3 |
| Micron 7400 (MTFDKBG3T8TDZ) | M.2 22110 | 128 | 4 |
| Micron 9200 | U.2 | 1 | 1 |
| Micron 9300 | U.2 | 32 | 1,3 |
| Samsung 970 EVO Plus | M.2 | 1 | 1 |
| Samsung 980 Pro | M.2 | 1 | 4 |
| Samsung PM983 | | 1 (maybe fw dependent, upgrade in DC Toolkit?) | 4 |
| Samsung PM9A3 | U.2 | 32 | 1 |
| Samsung PM1725a | AIC | (depends on fw?) | 5 |
| Samsung PM1733/PM1735 | U.2 / AIC | 64 | 3 |
| SK Hynix PC601 HFS512GD9TNG-L2A0A | | 1 | 1 |

1 NVMe SSD namespaces support table | TrueNAS Community
2 Which Intel® Optane™ Data Center Drives Support Multiple...
3 https://web.archive.org/web/20221124073644/https://www.drewthorst.com/posts/nvme/namespaces/readme/
4 nvme-cli test
5 List of NVMe drives that support namespaces or other ways to divide one up | ServeTheHome Forums
6 https://web.archive.org/web/20210117005416/https://www.intel.com/content/www/us/en/support/articles/000038017/memory-and-storage/data-center-ssds.html#blade-product-list-show-content
7 How can I create multiple namespaces on DC P4510 o... - Page 2 - Solidigm - 9918
8 https://www.kingston.com/datasheets/sedc1500m_en.pdf

Here’s something about UEFI support for namespaces, although it talks about “management” of namespaces, not about UEFI support for bootable devices in multiple namespaces:
AMI Enables NVMe Namespace Management in Aptio® V UEFI BIOS Firmware



I’ll go ahead and add here that the Intel D7-P5510 (SSDPF2KX076TZ, 7.68TB NVMe U.2 15mm) drives, which I have a few of, do support namespaces:

# nvme id-ctrl /dev/nvme0 | grep nn
nn        : 128

The Optane P5800X (400GB) drives I have also support 128 namespaces:

# nvme id-ctrl /dev/nvme2 | grep nn
nn        : 128

Thanks for the extra input – I went ahead and edited the post above, adding your info (and converting the table I had to markdown, so it can be better copied/amended/etc…)