/sigh My post was carefully worded. Read it again. A programmer is not a hardware engineer.
There are different engineers for OS-level structures (file systems and related LVM software) than for application-layer software (defrag), than for disk controller driver software (Intel), than for the hardware controller PCB itself (WD/Seagate).
The existence and relative popularity/usefulness of defrag software prove that OS-level engineers typically do not care about how files actually get written to storage media.
Writing a single file as one sequential block potentially requires effort from every engineer type listed above. That effort is non-trivial, because the many layers involved must all somehow do it AND remain backwards compatible, so it takes substantial incentives to even attempt such a feat.
Except swap files are neither preallocated nor contiguous, or only partly so. They can grow dynamically, which fragments them without any regard for their contiguity on the storage media. In addition, preallocated files are not guaranteed to be contiguous in the first place, because OS-level engineers can't be bothered to care. Their solution? Just move it to a different partition if you actually care about that.
I just checked: my swap file is in 957 fragments, and that is after making sure it was a static 2048 MB allocation right after a fresh install, precisely to prevent it from becoming even more fragmented. The reason is of course that even "static" swap files are actually dynamic in the sense that they get re-allocated at every boot. And since allocation cannot guarantee sequential or contiguous placement, even when starting from a large group of empty sectors, this just naturally happens as the disk fills up and the OS is rebooted constantly.
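If you want to check a file on your own machine, here is a rough Python sketch (Windows/NTFS only, and only a sketch) that counts a file's extents using the documented FSCTL_GET_RETRIEVAL_POINTERS control code, which is roughly what defrag tools report as fragments. The function name and the path at the bottom are placeholders of mine, and the pagefile itself is usually locked while Windows is running, so try it on some other large file:

import ctypes
from ctypes import wintypes

FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073
FILE_READ_ATTRIBUTES = 0x0080
FILE_SHARE_ALL = 0x0007            # share read | write | delete
OPEN_EXISTING = 3
ERROR_MORE_DATA = 234

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                 wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                 wintypes.HANDLE]
kernel32.DeviceIoControl.argtypes = [wintypes.HANDLE, wintypes.DWORD,
                                     wintypes.LPVOID, wintypes.DWORD,
                                     wintypes.LPVOID, wintypes.DWORD,
                                     ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def count_extents(path):
    # Open only for attribute access so we do not disturb the file's contents.
    handle = kernel32.CreateFileW(path, FILE_READ_ATTRIBUTES, FILE_SHARE_ALL,
                                  None, OPEN_EXISTING, 0, None)
    if handle == wintypes.HANDLE(-1).value:      # INVALID_HANDLE_VALUE
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        start_vcn = ctypes.c_longlong(0)         # STARTING_VCN_INPUT_BUFFER
        out = ctypes.create_string_buffer(64 * 1024)
        returned = wintypes.DWORD(0)
        extents = 0
        while True:
            ok = kernel32.DeviceIoControl(
                handle, FSCTL_GET_RETRIEVAL_POINTERS,
                ctypes.byref(start_vcn), ctypes.sizeof(start_vcn),
                out, ctypes.sizeof(out), ctypes.byref(returned), None)
            err = ctypes.get_last_error()
            # RETRIEVAL_POINTERS_BUFFER layout: DWORD ExtentCount, padding,
            # LARGE_INTEGER StartingVcn, then ExtentCount (NextVcn, Lcn) pairs.
            batch = ctypes.cast(out, ctypes.POINTER(wintypes.DWORD))[0]
            extents += batch
            if ok:                               # that was the final batch
                return extents
            if err != ERROR_MORE_DATA:           # tiny MFT-resident files land here
                raise ctypes.WinError(err)
            # Output buffer was full: resume from the last NextVcn we received.
            start_vcn = ctypes.c_longlong.from_buffer_copy(out, 16 + (batch - 1) * 16)
    finally:
        kernel32.CloseHandle(handle)

print(count_extents(r"C:\some\large\file.bin"))

Adjacent extents can still happen to sit next to each other on the volume, so this can over-count slightly compared to a real defrag report, but it is close enough to see whether a "static" file actually stayed in one piece.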
It is the answer to this question:
so where should the OP go next to learn more about how exactly computers and technology work?
Ultimately it is his choice, and he seems to have benefited from the discussion, so it is worthwhile.
Anyway, my answer is to think of things in terms of stacks, and different engineers designing different layers that have to interact in a compatible way, necessarily black-boxing every layer from every other layer.
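To make that concrete, here is a toy sketch (the class names are mine and nothing here maps onto a real OS) of three layers where each one only ever sees the narrow interface of the layer directly below it:

class BlockDevice:                        # the drive PCB's side of the wall
    def read_block(self, lba: int) -> bytes:
        return b"\x00" * 512              # could be platters, NAND, anything

class FileSystem:                         # the OS-level engineer's layer
    def __init__(self, device: BlockDevice):
        self.device = device              # knows read_block() and nothing else
    def read_file(self, name: str) -> bytes:
        return self.device.read_block(0)  # pretend lookup; the mapping is hidden

class Application:                        # the app developer's layer
    def __init__(self, fs: FileSystem):
        self.fs = fs                      # knows read_file() and nothing else
    def show(self, name: str) -> None:
        print(name, len(self.fs.read_file(name)), "bytes")

Application(FileSystem(BlockDevice())).show("cat.jpeg")

Swap BlockDevice for something that talks to NAND or a network and neither FileSystem nor Application changes a single line; that is the black-boxing.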
I do acknowledge that sometimes you can glean information through multiple layers, like defrag software getting a picture of a file system that it assumes has some passing relationship to sector allocation on the actual disk. There is no way to know that, because there are too many layers between a file system and bits on a disk. It is an assumption, not a guarantee; it can be checked experimentally and should be treated as such. My USB flash drive does not have platters, but my OS thinks it does. Check for yourself:
wmic diskdrive get interfaceType,model,totalHeads /format:list
Your answer seems to point more towards focusing on APIs/addressing schemes, in the belief that they relate to the inner workings of the media. (?)
My core disagreement is that not focusing explicitly on the numerous layers involved in the overall design puts too much weight on the internal workings of whatever structure, on whatever layer, is being discussed. Frankly, that is less important than understanding the entire system overall.
The second disagreement is more specific: layers normally obfuscate internal workings, and that obfuscation is deeply magnified when crossing from the software controller layer in the OS to the PCB controller on a physical disk. My SSD does not have platters! Layers lie to each other, every day, all day long. These deliberate lies, and their necessity in the name of compatibility, are a core takeaway from understanding computers, first and foremost, as stacks. We should not pretend to know, and cannot reasonably expect to know, what lies beyond a layer from the perspective of any other layer. We can make reasonable assumptions in light of how the technology was engineered and run experiments to falsify our conclusions, but, from the perspective of another layer, we cannot know.
This is why computers work as well as they do. As long as a layer gets its request fulfilled by the layer below it, everything just works. An OS does not care about disks and fragmentation and contiguity because it does not have to, and therefore should not. Specific applications can be created that care, but the OS doesn't have to; that's the point.

An OS, from the perspective of an application requesting a file, exists only to fulfill requests to and from file systems. As long as the LVM, given that request, returns the file, what does it matter to the application whether that file is in a directory on local media (HDD or SSD), on USB flash media, or on a network share speaking a compatible protocol, on a GPT disk or an exFAT or ZFS file system? Myprogram.exe does not have to care where cat.jpeg actually is in order to display it, just like OS-level software does not have to, and should not, take storage media characteristics into account. It cannot reasonably expect the storage media to always be the same, work the same, or benefit from the same optimizations, and trying to account for them would add non-trivial complexity to the existing engineering design because of the numerous layers involved.
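Here is a minimal sketch of that application's-eye view (the paths are placeholders): the request is identical no matter where the bytes physically live, and the program never finds out.

from pathlib import Path

# Placeholder locations: a local volume, a USB stick, an SMB network share.
for path in (r"C:\pictures\cat.jpeg",
             r"E:\cat.jpeg",
             r"\\nas\share\cat.jpeg"):
    p = Path(path)
    if p.exists():
        # The exact same call regardless of what the path sits on; the layers
        # below (file system, volume manager, driver, controller) sort it out.
        print(path, len(p.read_bytes()), "bytes")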
Edits 1 & 2: their vs there. Grammar so hard :(