Note: Controller ID is a field, but abbreviated as cntlid … in our case it's 0x4.
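For reference, you can pull it straight from the identify-controller output (the device path here is just an example):
# cntlid is the controller ID; nvme-cli prints it near the top of the id-ctrl output
nvme id-ctrl /dev/nvme0 | grep cntlid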
# nvme list
/dev/nvme6n1 SAMSUNG MZ1LB1T9HALS-00007 1 0.00 B / 1.60 TB 512 B + 0 B EDA7202Q
/dev/nvme7n1 SAMSUNG MZ1LB1T9HALS-00007 1 0.00 B / 1.92 TB 512 B + 0 B EDA7202Q
So this output might seem odd. It's not, though. The "raw" capacity of this NVMe SSD is 1.92 TB. However, one is reporting as 1.6 TB and the other as 1.92 TB. The endurance of the 1.6 TB one is significantly higher than that of the 1.92 TB one. If endurance is important to you, you can use nvme tools to change the NVMe namespace size; the controller is aware of the "unprovisioned" space and will wear-level across it accordingly. It's a pretty cool feature of NVMe. It's kind of like short-stroking an NVMe drive (okay, not really, but yeah, kinda to an extent).
To change the namespace sizes:
# it goes without saying: literally everything in this guide will destroy all the data on the drive . . .
nvme delete-ns /dev/nvme0 --namespace-id=1
# if you had /dev/nvme0n1, it should be gone after running this command.
# The nvme tool is not super consistent; sometimes it reports in decimal, sometimes in hex.
nvme create-ns /dev/nvme0 --nsze=$((0xdf8fe2b0)) --ncap=$((0xdf8fe2b0)) --flbas=0 --dps=0 --nmic=0
# finally attach the namespace you created, controller id comes from the previous nvme id-ctrl command...
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0x4
# now ls /dev/nvme0n1 should work.
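If you're wondering where a magic number like 0xdf8fe2b0 comes from: nsze and ncap are counts of logical blocks, not bytes. A quick sketch for a 512 B LBA format (the 1.6 TB target below is just an example value, not taken from the output above):
# blocks = target bytes / LBA size; print in hex to match the flags above
printf '0x%x\n' $(( 1600000000000 / 512 ))
# -> 0xba43b740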
In a virtualized scenario you can assign VMs to namespace slices, and the garbage collector ensures that namespaces do not share blocks, while other security measures such as hardware crypto are enforced. Whereas with a simple partition it may be possible to do an out-of-bounds read into someone else's partition, this handily prevents that whole class of problems.
It also makes it super easy to enforce QoS, with low overhead, across VMs that share an NVMe. It's why this exists.
I've always been unclear on SSD provisioning, TRIM, endurance, etc. There was an OpenBSD thread about them not supporting TRIM. They basically said SSDs didn't implement it correctly/consistently and that all you need to do is leave some unpartitioned space on the drive. But I always imagined it would take something more involved like this… is there a way to do this with vanilla 2.5" SSDs?
No, and not all NVMe drives actually support namespaces. Some controllers "peek" into the disk layout by trying to understand filesystems and partitioning (Samsung has tons of "optimizations" in their firmware for this). Leaving unpartitioned space used to be good advice, but the "correct" way to underprovision now is NVMe namespaces.
Good question. Probably? Not for endurance, but for the "hard partitioning" and QoS capabilities. You can pass nvme0n1, nvme0n2, nvme0n3, etc. through to VMs transparently, and with low overhead that way.
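As a sketch of what that can look like with plain QEMU (the namespace path and the rest of the VM config are assumptions, not from this thread):
# hand the second namespace to a guest as a raw virtio disk
qemu-system-x86_64 -enable-kvm -m 4G \
  -drive file=/dev/nvme0n2,format=raw,if=virtio,cache=none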
Yeah, it's actually ideal for creating multiple ZFS SLOGs from a pair of larger Optane drives, such as when there are multiple pools you want them for.
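For instance, something like this, assuming the two namespaces already exist (the pool names are made up):
# one namespace per pool as a dedicated SLOG device
zpool add tank1 log /dev/nvme0n1
zpool add tank2 log /dev/nvme0n2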
I feel like leaving unpartitioned space still works…
I've always left 33% of the space unpartitioned, and I'd say my SATA drives have lasted way longer than I think they should have.
I've done it longer with my SATA SSDs; I have two SanDisk 960 GB drives, one with almost 43k hours.
[root@Storage ~]# smartctl -a /dev/sdf | grep -i -e avail -e power_on
TRIM Command: Available, deterministic, zeroed
SMART support is: Available - device has SMART capability.
9 Power_On_Hours 0x0032 000 100 000 Old_age Always - 37162
232 Available_Reservd_Space 0x0033 100 100 004 Pre-fail Always - 100
[root@Storage ~]# smartctl -a /dev/sdg | grep -i -e avail -e power_on
TRIM Command: Available, deterministic, zeroed
SMART support is: Available - device has SMART capability.
9 Power_On_Hours 0x0032 000 100 000 Old_age Always - 42867
232 Available_Reservd_Space 0x0033 100 100 004 Pre-fail Always - 100
Would I be able to use NVMe namespaces to separate my PC's OSes?
Currently I'm dual-booting Linux and Windows; those are installed on partitions of the same SSD.
I guess the mobo would need to be able to handle namespaces in order to dual boot those instead of partitions.
Would a usual AMD B550 mobo be able to do this?
What I'm trying to figure out is whether I can dual boot AND virtualize Windows. Not simultaneously, of course, but I'd like to do maintenance and install Steam updates inside a VM while I'm using my Linux daily driver. This way I could maintain an up-to-date gaming OS and play more, because I'm not demotivated by waiting for updates to install.
Regarding namespaces/nvme-cli, I noticed an issue (/feature) with NVMe drives attached to a Broadcom 9500-16i tri-mode HBA. The NVMe drive is presented to the OS as a SCSI device, and nvme-cli refuses to work with it. Maybe you can send some raw command directly to the drive, but I don't have enough knowledge for that.
Not sure if the new Adaptec HBA 1200 has the same issue. Is anyone using the Adaptec/Microsemi HBA 1200?
I don't have any issue with the Micron 7400 Pro M.2 connected via an M.2 slot (the 7400 also supports namespaces and different block sizes).
Hmm, and it seems like some of the NVMe drives that support namespaces only support one (1).
From an interesting explanation I can only find live on an archived site:
The nn attribute indicates the maximum number of namespaces your disk supports. The device nvme0 is a U.2 drive that supports 32 namespaces and nvme1 is my M.2 boot device that only supports a single namespace.
[root@smc-server thorst]# nvme id-ctrl /dev/nvme0 | grep nn
nn : 32
[root@smc-server thorst]# nvme id-ctrl /dev/nvme1 | grep nn
nn : 1
There's a list on that page of 2020-era enterprise drives that support 16-128 namespaces.
Looks like your PM983 only supports 1 namespace; I get a similar nn : 1 looking at a Samsung 970 Evo Plus. I'll check some newer NVMe drives when I get a chance.
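Side note: besides nn, the oacs field in the same id-ctrl output should tell you whether a drive supports namespace management at all; per the NVMe spec, bit 3 of OACS is the namespace management/attachment capability. A quick check (device path assumed):
nvme id-ctrl /dev/nvme0 | grep oacs
# if bit 3 (0x8) is set in the reported value, create-ns/delete-ns/attach-ns should work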
One other interesting thing about this output is that it looks like this drive supports multiple sector sizes. From some nvme-cli info I found, which includes an explanation of how to use "nvme format" to change from, say, 512 B to 4096 B:
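Roughly (a sketch from memory, not the original link; which lbaf index maps to 4096 B varies per drive, so the 1 below is an assumption):
# list the LBA formats the namespace offers; lbads:9 means 2^9 = 512 B, lbads:12 means 4096 B
nvme id-ns /dev/nvme0n1 | grep lbaf
# reformat the namespace to the 4 KiB format (destroys data; index 1 assumed)
nvme format /dev/nvme0n1 --lbaf=1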
Anyway, while looking around at this stuff and trying to find info (and coming to the realization that the non-enterprise NVMe drives I have are not going to let me play with setting up namespaces to create "drives" to use with ZFS), I put together this table and list of sources detailing some drives' namespace compatibility:
Thanks for the extra input. I went ahead and edited the post above, adding your info (and converting the table I had to markdown, so it can be better copied/amended/etc.).