On Mon, 4 Jan 2021, Peter Jeremy wrote:
> Alternatively, my understanding is that the Unix epoch changed on
> several occasions in the early days. Presumably the knowledge of how to
> achieve this hasn't been lost. (Though actually performing an epoch
> rollover may be more difficult today).
My understanding is that it's been 1st Jan 1970 since at least Ed6, if not Ed5.
It's been that way since the 4th edition.
In the 3rd edition it was the number of 60 Hz ticks since 1972, along with this note: "This guarantees a crisis every 2.26 years."
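As a sanity check of that note, the arithmetic works out roughly like this (a sketch assuming an unsigned 32-bit counter of 60 Hz ticks; not the original code):

    #include <stdio.h>

    int main(void)
    {
        /* An unsigned 32-bit counter of 60 Hz ticks covers 2^32 / 60 seconds. */
        double seconds = 4294967296.0 / 60.0;
        double years = seconds / (365.25 * 86400.0);
        printf("rollover every %.2f years\n", years);   /* ~2.27 */
        return 0;
    }

Close enough to the 2.26 in the note, give or take the choice of year length.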
Rebasing the epoch would be... tricky... lots of math is done assuming an origin of 1970, and not all of it is obvious even to concerted analysis.
Less ugly would be to declare time_t to be unsigned instead of signed... It would break less code... Making time_t 64 bits also breaks code, even if you declare you don't care about binary compat, since many older apps know time_t is 32 bits.
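To put numbers on the trade-off (just an illustration; the actual width and signedness of time_t are platform-specific):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Signed 32-bit seconds run out in January 2038. */
        printf("signed 32-bit max:   %ld\n", (long)INT32_MAX);
        /* Unsigned 32-bit seconds last until 2106, but break any code
           that relies on negative values or pre-1970 dates. */
        printf("unsigned 32-bit max: %lu\n", (unsigned long)UINT32_MAX);
        /* 64-bit seconds are effectively unbounded, but change struct
           layouts and binary formats that assumed 4 bytes. */
        printf("signed 64-bit max:   %lld\n", (long long)INT64_MAX);
        return 0;
    }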
Lots of older code also knew that pointers were 32 bits and could fit into an int, that the signal bitmask fit into 32 bits, etc. I feel like we have these crises every few years and we work around them. The issues here are perfectly familiar.
True. The issues were understood for years before compilers started warning about them on a wide-scale basis. There are currently no warnings for many of the common time_t type-handling mistakes, since they aren't considered errors in other contexts. That makes the problem less visible.
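A typical example of the kind of mistake that sails through at default warning levels (illustrative; whether any given compiler flags it depends on the flags used):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* On a platform where time_t is 64 bits, this silently truncates;
           the narrowing conversion is usually only flagged with extra
           warning flags such as -Wconversion, not by default. */
        int when = time(NULL);
        printf("%d\n", when);
        return 0;
    }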
A saving grace is that the timestamp fields in Unixy filesystems are almost invariably 64 bits and have been for a few decades now. Unlike y2k, the persistence issue is largely fixed except for ersatz binary formats, and most decently-maintained software hides the width of time behind a typedef. As for Ted's vignette about hand-coded systems in PDP-11 assembler running under emulation, I think the issue here is somewhat different: in this case, by and large, the software doesn't need rewriting, but rather recompilation on a new hardware and/or OS platform, possibly with some modifications to fix assumptions about type width. That's qualitatively different from rewriting from scratch in a different language on a radically different platform. Note I'm talking about Unix and Linux here, as opposed to other systems with similar epoch issues.
A larger problem, though, is where time_t is 64 bits, but the on-disk format is 32 bits... And oftentimes recompiling old software on new systems with different-sized types can be a crap shoot. For software that's well understood, sure, an analysis can be undertaken, and it will likely work. But older code that uses tricks to compute different kinds of dates is likely more prone to overflow even after the recompile...
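Something like this is the shape of the problem (a made-up on-disk record, not any particular filesystem or file format):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical on-disk record: the timestamp field was fixed at
       32 bits when the format was defined, no matter how wide time_t
       is on the machine reading or writing it. */
    struct disk_record {
        uint32_t mtime;   /* seconds since the epoch, 32 bits on disk */
    };

    int main(void)
    {
        struct disk_record rec;
        time_t now = time(NULL);     /* may well be 64 bits in memory... */
        rec.mtime = (uint32_t)now;   /* ...but gets truncated on the way to disk */
        printf("in memory: %lld  on disk: %u\n", (long long)now, rec.mtime);
        return 0;
    }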
Certainly there will be some breakage in 2038. But I suspect that we'll pull a y2k and the critical stuff will be mostly fixed by the time the epoch rolls over. The long tail will be annoying, as it was with y2k, but not necessarily critical.
I suspect that many of the issues can be fixed between now and then, but without some effort, they will persist... Though it doesn't take too many errors in a critical system for there to be a catastrophic failure. Without the kind of publicity y2k got, the outcome is unclear.
I'll note with pride that my state replaced its unemployment system today with a new one. The old one was only 44 years old, and not even the oldest in the nation... The long hand of the past appears in unexpected locations that are resource-constrained.
Warner