This is a question I have been wondering about for a while; in fact, I found the answer indirectly once but then forgot it.
Basically, my question is: why do 32-bit CPUs, or CPUs in general (it does not really matter with 64-bit CPUs), store time in one 32-bit integer instead of splitting it across multiple registers? Is it a matter of performance, or what? We would not have the Year 2038 problem with time if it were split that way.
This has been really bugging me, even though I will probably never need the knowledge.
My best guess is that it comes from a combination of things: needing the value in one place so bit rot / errors are less likely, and performance, which also makes sense, but mostly on really old hardware where speed was at a premium and so was storage space. Also, time is kept as a running count of seconds since a fixed epoch, hence the nightmare of writing your own time converter, on top of places constantly changing their time zones, how they track time, etc. I am by no means a time authority; this is just my best guess from limited knowledge of the subject.
Well, time is kept in one 32-bit integer, and all I was asking is why that is not split across multiple registers or instructions, which would eliminate the Year 2038 problem.
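To make the problem concrete, here is a minimal sketch (plain C; it assumes a host where time_t itself is 64-bit, so ctime can still format the wrapped value) of what that single signed 32-bit counter does in 2038:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Classic 32-bit time_t: a signed count of seconds since 1970-01-01 UTC. */
    int32_t last = INT32_MAX;             /* 2038-01-19 03:14:07 UTC */
    time_t wide = (time_t)last;           /* widen so ctime() can format it */
    printf("last 32-bit second: %s", ctime(&wide));

    /* One tick later the counter wraps to INT32_MIN, i.e. back to 1901;
       the wrap is done in unsigned arithmetic to avoid signed-overflow UB. */
    int32_t wrapped = (int32_t)((uint32_t)last + 1u);
    wide = (time_t)wrapped;
    printf("one second later:   %s", ctime(&wide));
    return 0;
}
```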
To answer your general question:

1. General-purpose CPU registers are a very scarce resource, but using them in instructions (both as data sources and as data outputs) costs essentially nothing, so an instruction that writes data to four registers is not a performance problem per se; the real cost is moving that data between registers and memory. You also generally want to reuse the data already sitting in registers as much as possible (compilers do this by themselves), treating them as a form of cache. Reading a time value into four registers might mean that some variable kept in one of them has to be spilled to memory and read back again when it is next used.

2. Even as recently as a decade ago, CPU time was also a limited resource. One instruction instead of four usually means roughly four times faster computation.

3. As far as I know, x86 CPUs do not have instructions that handle date/time per se (what they have is a secondary chip like the HPET/RTC that is accessible via I/O and interrupts). The only "time"-specific instruction would be RDTSC, but it is a CPU cycle counter, and a 64-bit value at that; a sketch of reading it follows below.
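To make point 3 concrete, here is a minimal sketch of reading that cycle counter through the compiler intrinsic (assuming an x86 target and GCC/Clang, where __rdtsc() lives in <x86intrin.h>); note the result is a raw 64-bit cycle count, not a date:

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang; MSVC has it in <intrin.h> */

int main(void) {
    uint64_t start = __rdtsc();   /* 64-bit cycle counter, not wall-clock time */

    volatile uint64_t sink = 0;   /* some work, so the delta is non-trivial */
    for (int i = 0; i < 1000000; i++)
        sink += (uint64_t)i;

    uint64_t end = __rdtsc();
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}
```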
That last point takes us back to points 1 and 2, as it shows that the 32-bit datetime/timestamp choice is really a software-internal choice of data precision (in the OS/API/language/library/framework). In other software it is a legacy issue (e.g. Java's "int" is 32-bit even on a 64-bit architecture, and "long" is always used for timestamps unless you are a bad programmer).
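To illustrate that it really is a software-level choice: a 64-bit seconds counter compiles and runs fine even for a 32-bit target, because the compiler simply splits the arithmetic across a register pair. A minimal sketch (my_time64_t is a made-up name, purely for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* A 64-bit timestamp type works even where registers are 32 bits wide:
   the compiler lowers the arithmetic to an add/add-with-carry pair. */
typedef int64_t my_time64_t;   /* hypothetical name, just for illustration */

static my_time64_t tick(my_time64_t now) {
    return now + 1;            /* one add + one adc on a 32-bit x86 target */
}

int main(void) {
    my_time64_t t = INT32_MAX; /* the 2038 boundary is a non-event here */
    printf("%lld -> %lld\n", (long long)t, (long long)tick(t));
    return 0;
}
```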
When I was 14 years old I was quite comfortable with most of the important general x86 instructions (on the 8088 CPU, to be precise). At that time (and at that age), using assembler to program things felt more "magical" than C or Pascal, and you were actually able to "take over" the OS (MS-DOS ;) ) without any fuss at all.
Then I moved more and more into general-purpose programming, never actually going back to low-level work (but that inside look into the CPU stays with you; more or less everything is arguably a consequence of it).
"Java enterprise developer" is my shortest description of my whole professional career.
That said, I must add a small comment on my statement "the problem is that moving that data to and from registers is generally kind of the most costly operation": taken out of context, it might be misleading. What I meant is that an instruction that has all of its operands in registers does not need to depend on any of the CPU's optimizations (which were, and still are, evolving). The majority of those optimizations revolve around trying to provide a constant stream of data (read from memory) to the executing stream of instructions, and all of them can be disrupted at any moment by the simple fact of executing an instruction (e.g. a conditional jump or a memory fence).
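A small sketch of what "keeping operands in registers" buys you (the comments describe what a typical optimizing compiler does at -O2, which is an assumption about the compiler, not a guarantee):

```c
/* The accumulator 's' can live in a register for the whole loop, so each
   iteration costs one memory read (a[i]) plus register-only arithmetic.
   If the loop needed more live values than there are registers, the
   compiler would have to spill some of them to memory and reload them. */
long sum(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```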