NVMe Namespaces - Little Known Cool Features of (most) NVMe and User-Programmable Endurance

The Micron 7400 M.2 22110 NVMe SSD also supports 128 namespaces:

root@zephir:~# nvme id-ctrl /dev/nvme3 | grep -E "(^mn |^nn )"
mn : Micron_7400_MTFDKBG3T8TDZ
nn : 128
root@zephir:~#
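For reference, the basic nvme-cli flow for carving a namespace out of a drive like this is short. This is only a sketch: the device path /dev/nvme3, the 100 GiB size, and the 512-byte LBA format are assumptions, and note that nsze/ncap are counted in LBAs, not bytes:

```shell
# Sketch: carve a 100 GiB namespace out of a controller that supports NS management.
# nsze/ncap are given in LBAs; with the common 512-byte LBA format:
SECTORS=$(( 100 * 1024 * 1024 * 1024 / 512 ))
echo "$SECTORS"   # 209715200 LBAs for 100 GiB

# The actual nvme-cli calls (commented out here, since they need real hardware):
# nvme create-ns /dev/nvme3 --nsze=$SECTORS --ncap=$SECTORS --flbas=0
# nvme attach-ns /dev/nvme3 --namespace-id=1 --controllers=<cntlid from 'nvme id-ctrl'>
# nvme ns-rescan /dev/nvme3
```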


Note to future users: devices like Sabrent’s EC-SNVE (Realtek RTL9210) (https://www.amazon.com/Sabrent-Type-C-Tool-Free-Enclosure-EC-SNVE/dp/B08RVC6F9Y/) do NOT support nvme-cli, at least not on any of the systems I tried it on. Don’t buy one for that reason. It does read/write to a Samsung 980 Pro, though, so it isn’t total junk.


Amazing, here, it says:
The PM9A3 is available in E1.S, U.2 and M.2 form factors

But here, the only parts listed are U.2.

But, searching around, I can indeed find M.2 22110 PM9A3 drives.

Is the M.2 variant OK, or did it have problems and get deprecated/pulled? Given the dearth of PCIe 4.0 U.2 cables, an M.2 PCIe 4.0 NVMe drive might be a good solution – but not if it has other flaws (like overheating/throttling, or less feature support, such as namespaces or Opal).

Is there a place for comprehensive actual technical (non-marketing) information for hardware these days, or do manufacturers just make web sites with glossy pictures and dare you to engineer a solution with just guesswork and trial and error?

I highly recommend avoiding Samsung’s so-called “data center” drives.

I used to have the opposite opinion, since there are earlier models cheap on eBay with 99% of the life left in the flash. But Samsung’s firmware is hot garbage and I found out firsthand they like to get stuck in some bad state with “ERRORMOD” as the firmware version, and only showing a 1GB partition. Of course if you ask for firmware to try and reload it, Samsung will just tell you to fuck off. “Support” will only be from whoever sold it to you, which is basically going to be non-existent.

Their newer PM9A3 model is still full of the same issues, though the unofficial Samsung Magician might at least work for updating firmware sometimes.

See: PM9A3 Firmware / ERRORMOD related to fw version | ServeTheHome Forums

If you still want to play chicken with Samsung’s firmware, have backups and hardware redundancy like I fortunately did.

TL;DR: Stay away.


Well, to be fair, when researchers engage in a snark fest because hacking a drive’s firmware can bypass the Opal implementation – and recommend that responsible drive manufacturers sign their code and defend against rogue firmware mods – it’s not surprising that firmware distribution gets tight, only comes from controlled channels, and can only be flashed by official apps:

So, do you recommend the U.2 Intel P5510? What connection/cabling solution have you found to get PCIe 4.0 speed out of it (and the Optane P55800X)?

Great list!

Based on its datasheet the U.2 Kingston DC1500M supports namespaces (and PLP):

I was eyeing it before discovering from your list that M.2 Micron 7400 Pro also can do namespaces (and PLP as well, based on its datasheet), which can spare me from buying adapters and cables for U.2.


Can’t find its idle power consumption figures. Do you have them by any chance?

Sorry, no, I don’t have them. I only see in the documentation that they mention 8.25 W power consumption during sequential read/write for the M.2 drives.

Update:
ps 0 : mp:8.25W operational enlat:0 exlat:0 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:3.10W active_power:-
ps 1 : mp:7.50W operational enlat:10 exlat:10 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:3.10W active_power:-
ps 2 : mp:7.50W operational enlat:10 exlat:10 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:3.10W active_power:-
ps 3 : mp:7.50W operational enlat:10 exlat:10 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:3.10W active_power:-
ps 4 : mp:5.50W operational enlat:10 exlat:10 rrt:0 rrl:0
rwt:0 rwl:0 idle_power:3.10W active_power:-

… you can also use multiple nvme name spaces it’s like a partition but it’s a hardware partition so you can get a couple of cheap two terabyte nvme drives and let’s say make 512 gigs or a terabyte of the nvme one name space and use a mirror of those for your metadata and then use the rest of the nvme for you know a raid z mirror for your virtual machine storage or your docker container storage or whatever you want to do…"
TrueNAS: Full Setup Guide for Setting Up Portainer, Containers and Tailscale #Ultimatehomeserver - YouTube

Has anyone actually done this? That is, created namespaces on an NVMe drive to use as “available disks” in ZFS pools – making namespace devices to serve as cache, log, or special metadata devices, and/or as devices for a data vdev?

Is there any limiting factor when doing this? Do all NVMe drives that support namespaces have whatever is needed under the hood (queues, caches, ability to flush writes, etc.) for ZFS devices to work as well in this configuration as on a whole drive?

I’d imagine an OS kernel supporting ZFS would typically support NVMe namespaces as well – so compatibility isn’t an issue for a data drive, but what about boot devices?

Can a namespace device be used for an EFI partition? Would that depend on whether the UEFI of your machine supports NVMe namespaces, or would any UEFI that can find an EFI partition on an NVMe drive be able to find it on a namespaced NVMe drive (because that’s part of the spec, or whatever)?

What about Windows? Does Windows see NVMe namespaces as separate devices? If so, only as data drives, or could an NVMe namespace be used as the Windows install/boot device?

How about compatibility with SED (self-encrypting drive) functionality (e.g., Opal, Enterprise, IEEE 1667)? I think the SED specs allow some number (16, maybe) of “locking ranges” to be defined on a drive, so that ranges of the disk can be hardware-encrypted rather than the whole disk (and different users can have different access to those ranges).

Do NVMe namespaces perhaps work compatibly with SED locking ranges, where you can just define the encryption to go along with the namespace divisions?

Thanks for any thoughts, pointers to things to research.

I have been playing around with NVMe namespaces over the past couple of days in TrueNAS CORE.

root@freenas[~]# nvmecontrol ns create -s 8192000 -c 8192000 -n 0 -L 0 -d 0 nvme1
namespace 1 created
root@freenas[~]# nvmecontrol ns create -s 8192000 -c 8192000 -n 1 -L 0 -d 0 nvme1
namespace 2 created
root@freenas[~]# nvmecontrol ns create -s 8192000 -c 8192000 -n 1 -L 0 -d 0 nvme1
namespace 3 created
root@freenas[~]# nvmecontrol ns create -s 8192000 -c 8192000 -n 1 -L 0 -d 0 nvme1
namespace 4 created
root@freenas[~]# nvmecontrol ns attach -n 2 nvme1
namespace 2 attached
root@freenas[~]# nvmecontrol ns attach -n 3 nvme1
namespace 3 attached
root@freenas[~]# nvmecontrol ns attach -n 4 nvme1

This worked yesterday, and I was able to use all 4 namespaces, but now I am only seeing 2 of the 4 namespaces in the TrueNAS UI, and I am seeing this in the kernel log:

May 30 10:48:24 freenas.fusco.me nvme3: RESERVATION REPORT sqid:3 cid:127 nsid:1
May 30 10:48:24 freenas.fusco.me nvme3: INVALID OPCODE (00/01) sqid:3 cid:127 cdw0:0
May 30 10:48:24 freenas.fusco.me nvme0: RESERVATION REPORT sqid:4 cid:127 nsid:1
May 30 10:48:24 freenas.fusco.me nvme0: INVALID OPCODE (00/01) sqid:4 cid:127 cdw0:0

@wendell or anyone have some suggestions?

I can see all 4 namespaces in nvmecontrol

root@freenas[~]# nvmecontrol devlist
 nvme1: SAMSUNG MZWLL1T6HEHP-00003
    nvme1ns1 (4000MB)
    nvme1ns2 (4000MB)
    nvme1ns3 (4000MB)
    nvme1ns4 (4000MB)

(don’t ask me why 8192000 = 4 gig, I have no idea, I was just plugging in numbers…)

EDIT: it’s because it’s 512 bytes per sector × 8192000 sectors :slight_smile:
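That arithmetic checks out – nsze/ncap are counts of LBAs, and these namespaces use the 512-byte LBA format:

```python
# nsze/ncap are in LBAs; with 512-byte sectors, 8192000 LBAs is just under 4 GiB
sectors = 8_192_000
size_bytes = sectors * 512
print(size_bytes)                    # 4194304000 bytes
print(round(size_bytes / 2**30, 2))  # 3.91 GiB, which geom displays as "3.9G"
```

(geom’s Mediasize below reads 4195328000, slightly more than 8192000 × 512 – presumably the controller rounded the namespace up to an internal allocation granule.)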

Devices show up in geom. Notice how the LUN ID is different but the ident is the same. What I can’t explain is why it worked at all yesterday. Now only one shows up in the UI.

Geom name: nvd1
Providers:

  1. Name: nvd1
    Mediasize: 4195328000 (3.9G)
    Sectorsize: 512
    Mode: r0w0e0
    descr: SAMSUNG MZWLL1T6HEHP-00003
    lunid: 334844304bb012480025385800000001
    ident: S3HDNX0KB01248
    rotationrate: 0
    fwsectors: 0
    fwheads: 0

Geom name: nvd2
Providers:

  1. Name: nvd2
    Mediasize: 4195328000 (3.9G)
    Sectorsize: 512
    Mode: r0w0e0
    descr: SAMSUNG MZWLL1T6HEHP-00003
    lunid: 334844304bb012480025385800000002
    ident: S3HDNX0KB01248
    rotationrate: 0
    fwsectors: 0
    fwheads: 0

Do namespaces work over M.2 / U.2 NVMe to USB enclosures?

Is there a way to write an NGUID to an NVMe drive?

I have 4 4TB NVMe drives which all have the same NGUID, and it’s causing VMware to see them as 1 device with 4 paths instead of 4 unique devices. If I could change each drive to have a unique NGUID, I think it would fix the problem.

Running into a problem here. Commands worked fine in FreeBSD but I can’t figure it out on Linux.

I can only see my boot drive here, not nvme1 or nvme2

root@bigbertha[/dev]# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     BTOC14120Y5U058A     INTEL SSDPEK1A058GA                      1          58.98  GB /  58.98  GB    512   B +  0 B   U5110550
root@bigbertha[/dev]#

They both exist in /dev/

root@bigbertha[/dev]# ls
autofs           input         nvme1   sdab   sdai   sde1  sdl1  sds1  sdz1  sg26  sg9              tty19  tty38  tty57    vcs2    vfio
block            ipmi0         nvme2   sdab1  sdai1  sde2  sdl2  sds2  sdz2  sg27  shm              tty2   tty39  tty58    vcs3    vga_arbiter

I was able to delete all the little 4GB namespaces I made before just fine:

root@bigbertha[~]# nvme delete-ns /dev/nvme1 --namespace-id=1
delete-ns: Success, deleted nsid:1
root@bigbertha[~]# nvme delete-ns /dev/nvme1 --namespace-id=2
delete-ns: Success, deleted nsid:2
root@bigbertha[~]# nvme delete-ns /dev/nvme1 --namespace-id=3
delete-ns: Success, deleted nsid:3
root@bigbertha[~]# nvme delete-ns /dev/nvme1 --namespace-id=4
delete-ns: Success, deleted nsid:4

Making a new one works fine, but I can’t attach it?

root@bigbertha[/dev]# nvme create-ns /dev/nvme1 --nsze=2457600000 --ncap=2457600000 -flbas 0 -dps 0 -nmic 0

create-ns: Success, created nsid:1
root@bigbertha[/dev]#
root@bigbertha[/dev]# nvme id-ns /dev/nvme1
NVME Identify Namespace -1:
nsze    : 0
ncap    : 0
nuse    : 0
nsfeat  : 0
nlbaf   : 3
flbas   : 0
mc      : 0x3
dpc     : 0x1f
dps     : 0
nmic    : 0
rescap  : 0
fpi     : 0
dlfeat  : 0
nawun   : 0
nawupf  : 0
nacwu   : 0
nabsn   : 0
nabo    : 0
nabspf  : 0
noiob   : 0
nvmcap  : 0
nsattr  : 0
nvmsetid: 0
anagrpid: 0
endgid  : 0
nguid   : 00000000000000000000000000000000
eui64   : 0000000000000000
lbaf  0 : ms:0   lbads:9  rp:0x1 (in use)
lbaf  1 : ms:8   lbads:9  rp:0x3
lbaf  2 : ms:0   lbads:12 rp:0
lbaf  3 : ms:8   lbads:12 rp:0x2
root@bigbertha[/dev]#
root@bigbertha[/dev]# nvme attach-ns /dev/nvme1 --namespace-id=1
warning: empty controller-id list will result in no actual change in namespace attachment
NVMe status: CONTROLLER_LIST_INVALID: The controller list provided is invalid(0x211c)

Help I need an adult.
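For what it’s worth, the “empty controller-id list” warning plus the CONTROLLER_LIST_INVALID status suggests this drive wants the controller ID passed explicitly. A hedged sketch – the 0x41 value below is hypothetical; read the real one from the cntlid field of id-ctrl:

```shell
# Find the controller ID (cntlid) of the controller to attach the namespace to:
# nvme id-ctrl /dev/nvme1 | grep cntlid        # e.g. "cntlid : 0x41"
CNTLID=0x41   # hypothetical value; substitute your drive's actual cntlid

# Then pass it explicitly instead of an empty controller list:
echo "nvme attach-ns /dev/nvme1 --namespace-id=1 --controllers=$CNTLID"
```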

root[~]# nvme id-ctrl /dev/nvme4
mn : INTEL SSDPEL1K100GA
nn : 1

Intel P4801X OPTANE M.2 100GB
I read this as it only supporting one NS, right?
Yes, I confirmed it myself: https://www.intel.com/content/www/us/en/support/articles/000038017/memory-and-storage/data-center-ssds.html


Yeah, unfortunately that’s correct. To my knowledge, only the P5800X Optanes support multiple namespaces.

Yep, 128 namespaces for P5800X

smartctl
=== START OF INFORMATION SECTION ===
Model Number:                       Dell Ent NVMe P5800x WI U.2 400GB
Serial Number:                      PHAL1354001B400BGN
Firmware Version:                   1.0.0
PCI Vendor ID:                      0x8086
PCI Vendor Subsystem ID:            0x1028
IEEE OUI Identifier:                0x5cd2e4
Total NVM Capacity:                 400,088,457,216 [400 GB]
Unallocated NVM Capacity:           0
Controller ID:                      0
NVMe Version:                       1.3
Number of Namespaces:               128

Is it possible that the creation of NVMe namespaces demands a certain granularity for the namespace sizes?

I was playing around with my 800G P5800X and tried to split it into two equally sized namespaces, but couldn’t quite manage it.

The 800G P5800X reports exactly tnvmcap : 800.166.076.416 bytes, which translates into 1.562.824.368 512-byte sectors. Half of that would be 781.412.184 512-byte sectors, but if I try to set the namespace to that, this is what I get:

# nvme create-ns /dev/nvme2 --nsze=781412184 --ncap=781412184 -flbas 0 -dps 0 -nmic 0
create-ns: Success, created nsid:1
# nvme id-ctrl /dev/nvme2 |grep mcap
tnvmcap   : 800.166.076.416
unvmcap   : 399.431.958.528

This leaves exactly 400.505.700.352 bytes allocated to the first namespace, with the remainder shown in unvmcap left for the second. Playing around with the sector count reveals that only steps of exactly 1.073.741.824 bytes (1 GiB) seem to be possible, resulting in a different sector count than the one given during creation of the namespace. It seems to jump to the next best value. Is that expected behavior?
This seems odd given that the total number of sectors is not divisible by 1 GiB.
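The reported numbers are consistent with a 1 GiB allocation granularity, assuming the controller rounds the requested size up to the next granule and rounds the leftover capacity down to whole granules (namespace granularity is an optional NVMe feature; whether this particular drive advertises it via Identify is an assumption here):

```python
GIB = 1 << 30
SECTOR = 512

requested_sectors = 781_412_184          # half of tnvmcap, as requested above
requested_bytes = requested_sectors * SECTOR
# round the request up to the next 1 GiB granule:
allocated_bytes = -(-requested_bytes // GIB) * GIB
print(allocated_bytes)   # 400505700352 -> matches the first namespace's size

# the leftover capacity, reported rounded down to whole GiB:
leftover = (800_166_076_416 - allocated_bytes) // GIB * GIB
print(leftover)          # 399431958528 -> matches unvmcap above
```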

I bought a 960GB Micron 7450 M.2, to play with namespaces a bit. But now that I’ve played with it a bit, I’m wondering what to actually use namespaces for in the long run.

Out of curiosity, I tried it in an RTL9210B bridge, and it only seems to expose the first namespace. It’s a bummer it doesn’t expose each namespace as a LUN.

One interesting thing to note about the 7450 Pro is that it seems to have a ROM containing an NVMe EFI driver, so it might conceivably work with UEFI machines that predate NVMe boot support. But to really test that, I’d have to temporarily reassemble a Haswell board I have lying around.

My main server machine is Ryzen 3700X on an X570D4U-2L2T, running Proxmox. I have one mirrored ZFS pool of 4TB SATA SSDs (slow 870 QVO), and one mirrored ZFS pool of 8TB SATA hard drives (WD80EFZZ). I’m actually not even close to max usage on either pool.

Before I go moving most stuff from the QVO to the 7450, I need to decide how (or even whether at all) to divide it into namespaces. So what kinds of cool things can you use namespaces for?

I checked my INTEL HBRPEKNX0202A (Intel H10 32GB Optane, 512GB flash). It sadly only has one namespace. It seems like it is not possible to differentiate between the Optane and the flash part.