Can we hack 32-bit CPUs?

Is it possible to get 32-bit to count past 2030 or whatever the end of life is? Would backdating the clock to an earlier date work?

Sounds to me like planned obsolescence.

Is it? I thought 32-bit can only count to a specific amount before becoming a brick, which was sometime around 2030.

You’re talking about 32-bit integer overflow, I guess. If so, then yes, there are ways around that. Numerical/scientific libraries like NumPy use those kinds of tricks frequently in order to work with values too large (or small) to represent accurately otherwise.

4 Likes

Unix time uses a signed 32-bit integer, and one second after 2038-01-19 03:14:07 UTC it will overflow and wrap around to December 13, 1901. The only clear hack to get a 32-bit Unix time to go past 2038 would be to use an unsigned integer, though that would mean it couldn’t be used to represent times before the Unix epoch of 1 January 1970 00:00:00 UTC.
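A quick sketch of where those dates come from, using Python just to do the arithmetic:

```python
# Where the 2038 wraparound dates come from: a signed 32-bit time_t
# tops out at 2**31 - 1 seconds after the Unix epoch, and the next
# second wraps it around to -2**31.
from datetime import datetime, timezone, timedelta

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

last_moment = epoch + timedelta(seconds=2**31 - 1)
wrapped = epoch + timedelta(seconds=-2**31)

print(last_moment)  # 2038-01-19 03:14:07+00:00
print(wrapped)      # 1901-12-13 20:45:52+00:00
```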

3 Likes

Are you referring perhaps to the Unix Epoch?
Where the time counter runs from 0 (1970) and runs out around 2,147,483,647 seconds (2038), at which point it just dies?

Fun fact: Apple phones are Unix-like, and they suffered from the same bug. Apparently if you set the date to before the start of Unix time, or to after the point where the 32-bit counter expires, I think the phones just died.

Since then, I think they have fixed them in hardware to act differently in the case of an illegal time, rather than bricking.

[edit, it seems Epoch was the Start of what I was calling unix time]

2 Likes

You can use software tricks to do higher-bit operations than the CPU natively supports; they will just be a lot slower than native ones.
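A minimal sketch of the idea in Python, modelling what a compiler emits for wide arithmetic on a 32-bit target (the function name is made up for illustration):

```python
# Emulating a 64-bit add with only 32-bit operations: add the low
# words first, then propagate the carry into the high words.
MASK32 = 0xFFFFFFFF

def add64(a_lo, a_hi, b_lo, b_hi):
    """Add two 64-bit values given as (lo, hi) pairs of 32-bit words."""
    lo_sum = a_lo + b_lo
    lo = lo_sum & MASK32
    carry = lo_sum >> 32                  # 1 if the low words overflowed
    hi = (a_hi + b_hi + carry) & MASK32
    return lo, hi

# 0x00000000_FFFFFFFF + 1: low word wraps to 0, carry bumps the high word
print(add64(0xFFFFFFFF, 0x0, 0x1, 0x0))  # (0, 1)
```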

The year 2038 overflow is not an issue on 32-bit Linux as of kernel 5.6, since it was switched over to the 64-bit representation.
https://lkml.org/lkml/2020/1/29/355?anz=web

1 Like

That got fixed on 32-bit Linux systems a while ago. They switched to a 64-bit time value. However, since system calls have to remain backward compatible, they sometimes still have to return a 32-bit time. I don’t actually know what they decided to do about that. I believe opinions ranged from always returning the maximum 32-bit time, to returning an error code, to truncating it and returning the lower 32 bits as junk and trusting user code to figure it out.

1 Like

The SNES couldn’t multiply, but through software wizardry you could.
I’m sure the same can be said for this.

1 Like

This is less of a CPU problem and more of an “existing software” problem.

Software using time_t numbers needs to have been updated long ago. Various systems have to deal with dates 20-30+ years in the future on a regular basis (e.g., your bank for your mortgage: they would have had to be dealing with post-2038 dates 12 years ago in order to handle 30-year loans), and as noted above, Linux switched to a 64-bit time_t years back.

Hopefully 32-bit processors aren’t already doing anything critically important with time that relies on the actual date.

3 Likes

There are a few different ways to multiply :smiley:

repeated addition
or shifts and adds :slight_smile:

e.g., 5x4 =
start with 4, then add another 4 to the total 5-1 times

OR
5 binary shifted 2 bits left
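Those two approaches sketched in Python (the helper name is made up for illustration):

```python
# Repeated addition: start with 4 and add another 4 to the total
# (5 - 1) more times.
def mul_repeated_add(a, times):
    total = a
    for _ in range(times - 1):
        total += a
    return total

print(mul_repeated_add(4, 5))  # 20

# Shift: multiplying by 4 is a left shift by 2 bits, since 4 = 2**2.
print(5 << 2)                  # 20
```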

bit-shifts are easier to set up if you know what the multiplier will be ahead of time. I remember figuring out (or learning from somewhere) years ago, when dealing with 320x200 VGA pixel coordinates, that to multiply by 320 to work out a memory address in the VGA address space, it was faster to do

X shift left 8
plus
X shift left 6

than X * 320

in x86 assembly :slight_smile:

at least on the 286 or 8086. Pretty sure modern CPUs can do a multiply (or several of them) in one clock cycle now :D. Pretty sure on the 286 or 8086 a multiply was something like 17 clocks, whilst shifts and adds were 1-2 clocks each or something.
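The same trick in Python form, just to show the identity (the original would of course have been x86 shifts and adds):

```python
# x * 320 as shifts and adds: 320 = 256 + 64 = 2**8 + 2**6,
# so x * 320 == (x << 8) + (x << 6).
def mul320(x):
    return (x << 8) + (x << 6)

for y in range(200):            # every row of a 320x200 mode 13h screen
    assert mul320(y) == y * 320

print(mul320(199))  # 63680, byte offset of the last row
```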

Actually, having said that and thinking about it now, for that purpose it would probably (maybe?) have been even faster to just use a 320-entry lookup table :smiley: One memory access and bam!

Sometimes you think you’re being clever and then (later)… you realise you’re not :smiley:

2 Likes

I expect all 32-bit systems to be trashed, or reused as retro gaming systems, or turned into emulation-station boxes.
The XP source code being released should have been all the ammo needed to upgrade all the ATMs and POS machines.

2 Likes

I think we’ll have 64-bit everything (at least as far as new devices go) within the next couple of years.

At least for anything that deals with time (i.e., isn’t a dumb embedded controller for something like the Apple Pencil).

I mean, speaking of the Apple Pencil, even that thing had a 32-bit ARM processor in it 3 years ago. 64-bit processing will already fit in things that size, no doubt.

It’s just a case of how long it takes for legacy old hardware to die.

1 Like

Don’t forget there are still Windows 3.x systems out there, some even used at airports…

2 Likes

We can simply change 1970 to whatever year you want; that’s really not a CPU issue, that’s software.
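A sketch of what re-basing the epoch would buy you (hypothetical: real systems kept the 1970 epoch and widened time_t instead):

```python
# A signed 32-bit seconds counter always covers about 68 years past
# whatever epoch the software picks, so moving the epoch moves the
# overflow date by the same amount.
from datetime import datetime, timezone, timedelta

def last_representable(epoch):
    return epoch + timedelta(seconds=2**31 - 1)

unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
y2k_epoch = datetime(2000, 1, 1, tzinfo=timezone.utc)

print(last_representable(unix_epoch).year)  # 2038
print(last_representable(y2k_epoch).year)   # 2068
```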

1 Like

For desktops, probably, but there are some special architectures out there that are still 32-bit, and they might remain so for good reasons.

E.g., there’s still a 32-bit SPARC implementation that’s actively being developed (LEON5 was released in 2019, according to the Wikipedia SPARC page), and, since they initially started the project, it’s likely to be in use by the ESA.

In case anyone has some mistaken impressions about 32-bit systems I thought I’d mention that the natural bit size of a CPU has nothing to do with the size of numbers it can compute with.

Remember digital calculators from 1980? Some of them used 4-bit CPUs. And yet they could calculate with numbers far wider than 4 bits. This is because they compute 4 bits at a time; handling larger numbers only needs repeating the operation. Just like the math you may have learned in grade school in base 10: do one digit at a time, carry the 1, do the next digit, etc.

So 32-bit CPUs can easily handle 64-bit or 128-bit time values, just like they can handle multiplying 2048-bit numbers for cryptography.
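The grade-school "one digit at a time, carry the rest" method, sketched here in base 2**32 to show how a 32-bit CPU can multiply arbitrarily wide numbers (the function name is made up for illustration):

```python
# Schoolbook multiplication in base 2**32: numbers are stored as lists
# of 32-bit words, least-significant word first. Each word-by-word
# product plus carry fits in 64 bits, which a 32-bit CPU gets from a
# widening multiply instruction.
BASE = 2**32

def wide_mul(a_words, b_words):
    result = [0] * (len(a_words) + len(b_words))
    for i, a in enumerate(a_words):
        carry = 0
        for j, b in enumerate(b_words):
            total = result[i + j] + a * b + carry
            result[i + j] = total % BASE   # the word that fits
            carry = total // BASE          # carry into the next column
        result[i + len(b_words)] = carry
    return result

# (2**32 - 1) squared = 0xFFFFFFFE_00000001
print(wide_mul([0xFFFFFFFF], [0xFFFFFFFF]))  # [1, 4294967294]
```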

6 Likes

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.