SSD Capacity Misrepresentation

I understand using GB because it's standard scientific notation, but mislabeling GiB as GB in the OS is shoddy.

Now how do we make Linux conform to the standards of everything else?

OSes have been using those labels since BEFORE the IEEE or JEDEC (probably both, or the ISO) decided that we shouldn't treat 2^x and 10^x notation as interchangeable.

That ~61 GB is just as bad; there are no varying degrees of incorrect. In the current system it gets reported as ~59 GB.
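
To make the mismatch concrete, here's a minimal Python sketch (the 64 GB drive is just an assumed example, not a figure from this thread) of why a capacity sold in decimal GB looks smaller once the OS divides by 1024^3 but still prints "GB":

```python
GB = 10**9    # what the drive label means (decimal)
GiB = 2**30   # what most OSes actually divide by (binary)

def os_reported(advertised_gb: float) -> float:
    """Bytes on the label divided by 2^30, i.e. the number the OS shows as 'GB'."""
    return advertised_gb * GB / GiB

print(os_reported(1))    # ~0.93  -> 1 billion bytes is only ~0.93 GiB
print(os_reported(64))   # ~59.6  -> a "64 GB" SSD shows up as ~59.6 "GB"
```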

Why can't an HDD/SSD manufacturer start to produce drives that are actually multiples of GiB and market it as a feature? It would make them stand above everyone else and bring them publicity. People with OCD would also thank them.

It starts with the fact that drives were able to hold 1 billion bytes before they were able to hold 1 GiB, and selling a drive that shows up as roughly 954 MiB (0.93 GiB) isn't as impressive as selling one labeled 1 GB.

Then, four years later, the standards mandated that GB != GiB, and the marketing department won.

Meanwhile, computers have been programmed to label things GB while counting in 2^x instead of 10^x, and will probably continue to do so.
This goes beyond HDD manufacturers; you would have to change the way we think about things.

Take networking, for example: we say Gigabit Ethernet, and there giga really is the metric prefix, meaning 1000 Mbit.

But what is 1000 Mbit? Well, it's 1,000,000 Kbit, which is 1,000,000,000 bits, which works out to 125,000,000 bytes.

So while we say gigabit speeds, we really mean 125 megabytes per second.

We think of networking in bits, mainly because it's how the card measures it all.

So 10 Gbit Ethernet is really 1.25 GB/s.
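
A quick Python sanity check of that arithmetic (raw line rate only, ignoring framing and protocol overhead):

```python
BITS_PER_BYTE = 8

def gbit_to_mb_per_s(gbit: float) -> float:
    """Convert a decimal Gbit/s line rate to decimal MB/s."""
    bits_per_s = gbit * 10**9
    return bits_per_s / BITS_PER_BYTE / 10**6

print(gbit_to_mb_per_s(1))    # 125.0   MB/s for Gigabit Ethernet
print(gbit_to_mb_per_s(10))   # 1250.0  MB/s, i.e. 1.25 GB/s for 10 GbE
```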

The whole thing is screwy and while @Eden is correct in his original statement, it wasn't like that before 1998.

Megabits are another thing entirely. They are a measure of transmission speed, not storage capacity, and there's absolutely no reason to measure those two things in the same units. Bits are what hardware transmits at the lowest level, and at that point we don't care at all that bytes have 8 bits and whether a KB is 1024 or 1000 bytes. At transmission level what matters is how many times per second we can send and receive a high or low voltage level.

You're quite right.

In networking we say gigabit but we mean Gb/s, which is correct, as it uses the metric prefix giga, meaning 10^9.

The Windows OS uses giga to mean 1024^3 (2^30), which is wrong.

It was only wrong after 1998.

@DeusQain
1024^n has never been equal to 1000^n.
The prefix giga (10^9) was formally introduced at the CGPM in 1960.

I bring up bits and make my connection to transmission speeds because HDDs and SSDs are still storing information in bits, the little pieces.
We have to keep remembering that the computer operates in a binary world, so 8 bits = 1 byte is and will remain important.

Here's some food for thought. Currently we use 4096 bytes as the block size on a given storage medium, which is composed of 8 sectors of 512 bytes, making a 4096-byte block.
The square root of 4096 is 64, which has a square root of 8. So 8 x 8 bits = 8 bytes, x 64 = 512 bytes, x 8 = 4096 bytes. It all boils down to the little pieces.
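
Spelled out in Python (square-root detour aside, the factors multiply out the same way; note that a 4096-byte block is 32,768 bits, not 4096):

```python
BITS_PER_BYTE = 8

eight_bytes = 8 * 8 // BITS_PER_BYTE   # 8 x 8 bits = 64 bits = 8 bytes
sector = eight_bytes * 64              # 8 bytes x 64 = 512-byte sector
block = sector * 8                     # 512 bytes x 8 = 4096-byte block

print(block, "bytes =", block * BITS_PER_BYTE, "bits")   # 4096 bytes = 32768 bits
```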

Gibibyte didn't exist until much much later.

Hi micro,

You've got it the wrong way around: hard-drive manufacturers are using kilo, mega, giga and tera, which are straight powers of 1000.
Computer operating systems are counting in kibi, mebi, gibi and tebi, which are powers of 1024, even though some systems will sometimes use the wrong postfix and, for example, say that a file is 1 GB when it is clearly 1 GiB.
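
A minimal sketch of that distinction (the helper names are mine, not from any actual OS): the same byte count formatted the drive-maker way with decimal prefixes, and the way an OS that labels honestly would, with binary prefixes.

```python
def si_gb(n_bytes: int) -> str:    # drive-maker convention: powers of 1000
    return f"{n_bytes / 10**9:.2f} GB"

def iec_gib(n_bytes: int) -> str:  # binary convention, labelled correctly
    return f"{n_bytes / 2**30:.2f} GiB"

size = 500 * 10**9                 # a drive sold as "500 GB"
print(si_gb(size))                 # 500.00 GB
print(iec_gib(size))               # 465.66 GiB  <- what the OS should call it
```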

Fucking hell, even Seagate can't decide how they want to use it.

Before we had GB drives, megabyte was used to mean what we would now call a mebibyte.

http://knowledge.seagate.com/articles/en_US/FAQ/172191en?language=en_US <--- these guys write "MB (binary)" while at the same time explaining why GB = 1000 MB, then use "MB (binary)" on the same page to mean something different. <--- WTF is this shit?

@DeusQain, I have far more experience of digital electronics than you think. Bytes are usually, but not necessarily, 8-bit; however, for the purposes of this conversation the number of bits in a byte is irrelevant.

A byte has been unambiguously defined since 1956; it is therefore perfectly reasonable, after CGPM 1960, to write gigabyte meaning 10^9 bytes.

What you refer to is the elevation of IEEE 1541-2002 to a full-use standard by the IEEE Standards Association, which actually occurred in 2005. This defined the symbol B as byte, officially displacing the bel, the measure of sound intensity. But this has effectively become a semantic argument...

The point is that Microsoft, among others, misuses the postfix GB in the OS; that's the source of confusion. If they had used GiB, everything would be fine.

Anyway, my question remains unanswered: why do Microsoft count in GiB and not GB?

Bottom line? Microsoft's UI dev is not the same as the storage engine dev.
