After no less than 8 hours of effort, I finally got my Intel Optane P5800X to use a 4 KB sector size. The method I used was the one provided by user aBav.Normie-Pleb, shown in the image below: the ASUS BIOS secure erase (the topmost 4 KB option, with 0 metadata). The only problem is that in doing so I lost about 60 GB of capacity, and I don't know whether that comes from a reduced max LBA or from something else.
In an effort to restore the full capacity, I have spent a few hours in the Intel command line application (on Windows). It is now apparent that I am only able to delete data; anything else gives me errors. My intent is to delete everything (as shown below) and end up with a 4096-byte sector size. I am fairly inept at the command line, but I can follow written instructions.
My intent is to recover the entire 800 GB and then format with LBAFormat 1 (4 KB), so that I can reinstall Windows 10. There's delete. There's sanitize. And there's format. I don't think I can proceed without deleting first. But I'm up shit creek trying to work out what to do and in what order.
Below I have provided images to give context and potentially useful information for someone in the know. If anyone could comment or provide precise instructions for a series of commands, it would be more than appreciated; it would be deemed a godly act.
With love,
Drew
Edit: I just want the 60 GB of capacity lost to the ASUS erase back!
My original intent was to first set MaximumLBA=native (before a format), but I was met with the error "Unsupported CNS value for NVMe Identify. The MSFT NVMe driver only supports Identify Namespace and Identify Controller." However, this may be an extraneous error caused by the driver being unable to handle a drive that still has a partition on it.
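For reference, this is roughly the sequence I was trying to run in the Intel MAS CLI, as best I can read the user guide (the drive index 0 and the LBAFormat index 1 are assumptions; confirm both from the show output before running anything destructive):

    intelmas show -intelssd
    intelmas show -a -intelssd 0
    intelmas set -intelssd 0 MaximumLBA=native
    intelmas start -intelssd 0 -nvmeformat LBAFormat=1 SecureEraseSetting=0 ProtectionInformation=0 MetadataSettings=0

The first two list the drives and dump all properties (including the current MaximumLBA and the supported sector formats), the third should restore the full advertised capacity, and the last does the low-level format to 4K with no metadata. It's the MaximumLBA=native step that throws the MSFT driver error above.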
But why would you want to change to 4 KB sector sizes on Optane? It doesn't function like NAND and doesn't have the same minimum page size. 512 B is arguably better suited, since the technology is bit-addressable and doesn't need to conform to the program/erase cycles of whole pages the way NAND does.
I don't play games anymore, but I remember there were three things I hated waiting on to load:
Updates (network I/O-bound) and installation (storage I/O-bound)
Starting the game executable (storage I/O-bound)
Loading a stage/map/cut scene (storage I/O-bound)
Game file I/O can be sequential, random, or a mix. But in most cases, you're not going to be doing I/O below 4 KiB. Solidigm's research[1] demonstrates the mixed nature, with some games having "read activity … overwhelmingly sequential in nature" and others having "a mix of sequential and random." In their key takeaways, they found that "of the 781MB of data transferred during the level load, fully 87% was moved in the form of sequential read operations" and that "within those sequential reads, a variety of transfer sizes were represented, with very small (4 KB) and very large (2 MB) being the most common by operation count." [2]
You can also take a look at what kind of files are in your game’s installation directory. When an overwhelming majority of files appear to be already compressed content (i.e., they are incompressible), odds are that the game will load the whole thing into memory sequentially, and you would be better served by plain NAND rather than Optane. There are two possibilities I can think of which may result in false positives: the files are actually encrypted (and therefore, also incompressible); or the files are compressed, but in chunks which are randomly accessible.
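A quick-and-dirty way to check, assuming you have 7-Zip installed (the game path is just a placeholder): compress a sample of the asset files at a fast setting and compare sizes.

    7z a -mx=1 sample.7z "C:\Games\YourGame\data\*"

If sample.7z comes out barely smaller than the input, the content is effectively incompressible (already compressed or encrypted); if it shrinks substantially, the game is probably shipping loosely packed data.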
Some files may be randomly accessed on initial load and then sequentially accessed when loading additional content within the already running game, in which case you have two competing metrics for "fastest load time." Which one do you want to optimize?
This is half incorrect. The concept of emulation applies to NAND only because its underlying media isn't byte-addressable and is operated on in large chunks. When the SSD's logical sector size is not the physical sector size, we call that emulation. This is also true of hard disk drives (HDDs), which require some minimum block of data to ECC-encode. For Optane, both sector sizes are emulated in the sense that its media is byte-addressable at the lowest level. I'm not even sure whether it goes through any ECC layer, but if it does, it's probably not working on anything larger than 512-byte chunks.
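If you want to see what the drive itself advertises, booting a Linux live image with nvme-cli shows the supported LBA formats along with the vendor's own performance ranking (the device name here is just an example):

    nvme id-ns /dev/nvme0n1 -H

The -H (human-readable) output lists each LBA Format with its data size, metadata size, and a Relative Performance hint, and marks which one is currently in use.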
So, for me it is mainly number three (loading into new battlegrounds faster) that I most keenly want, and have. Number two is also insanely good, but I care less about it. The thing is, Optane has made it so fucking ridiculous. I'm always first. Always.
An Alterac Valley battleground load that takes someone with a top-tier new desktop maybe 8-9 seconds (as few as 5 or 6 with the most expensive NAND) takes me about 2.5 or 3.
I have been asked how I end up as leader every single time, and how I can already be prepared when others are only just loading in.
It's insane. With that said, I want to maximize this overwhelming lead; after all, I spent 900 bucks on the drive. … but it's for that reason that I'm mostly interested in recovering the lost 60 GB.
P.S. I have just used the Intel MAS CLI to delete the drive successfully and then secure erase it… having done that, it's back to 512 (so I'll have a comparison). But I didn't get the 60 GB back, and I was originally unable to set max LBA to native (though I think I've figured out how now). I'm not a programming genius, but even after setting the max LBA (or at least what I think is the max LBA) I did not regain the lost capacity.
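For what it's worth, the property dump I've been checking after each attempt (again assuming the drive index is 0) is:

    intelmas show -a -intelssd 0

If the reported MaximumLBA doesn't change after the set MaximumLBA=native step, then whatever ate the 60 GB presumably isn't an LBA-range thing, though I may well be reading the output wrong.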