It was sort of a soft changeover. All they did on January 1, 1983, was
disable link 0 on the IMPs. This essentially blocked NCP from working
anymore.
You could run (and many of us did run) IP on the Arpanet for years before
that. In fact, in the weeks before, we were getting ready for the change
and had our TCP/IP version of the system up.
It crashed.
We rebooted it and set about debugging it.
It crashed again. Immediately my phone rang. It was Louis Mamakos, then
of the University of Maryland. He had tried to ping our system. I
called out the information to Mike Muuss who was across the desk from me.
We set out to find the bug. It turned out that the ICMP echo handling in the
4.1c BSD (I believe this was the version) that we'd cribbed our PDP-11/70 TCP
from had a bug: it used the incoming message to make the outgoing one,
AND it freed it, eventually causing a double free and a crash. It was at
this point we realized that BSD didn't have a ping client program. Mike
set out to write one. It oddly became the one piece of software he was
most known for over the years.
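For anyone who hasn't chased this class of bug, here is a minimal standalone
sketch of the pattern (assumed, simplified structure and names; not the
actual 4.1c code):

    /*
     * Sketch of the double-free pattern described above (illustrative,
     * not the 4.1c BSD source): the echo handler reuses the incoming
     * buffer for the reply and frees it, then the caller frees the
     * same buffer again.
     */
    #include <stdlib.h>
    #include <string.h>

    struct msg { char data[64]; };

    static void icmp_echo_reply(struct msg *m)
    {
        memcpy(m->data, "reply", 6);  /* turn request into reply in place */
        /* ... transmit the reply ... */
        free(m);                      /* handler releases the buffer */
    }

    static void icmp_input(struct msg *m)
    {
        icmp_echo_reply(m);
        free(m);                      /* BUG: second free of the same buffer */
    }

    int main(void)
    {
        icmp_input(malloc(sizeof(struct msg)));  /* corrupts the heap or crashes */
        return 0;
    }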
The previous changeover was to long leaders on the Arpanet (Jan 1, 1981?).
This changed the IMP addressing from eight bits (6 bits for IMP number, 2
for the host on the IMP) to 16 bits for IMP number and 8 for the host on the
IMP (sketched below). While long leaders had been available on a per-port
basis for years earlier, they didn't force the change until this date.
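Roughly, the two address layouts look like this (my reconstruction for
illustration; field names and order are not the actual 1822 leader format):

    /* Old short-leader address: 8 bits total. */
    struct short_leader_addr {
        unsigned host : 2;   /* host on IMP */
        unsigned imp  : 6;   /* IMP number */
    };

    /* New long-leader address: 24 bits total. */
    struct long_leader_addr {
        unsigned host : 8;   /* host on IMP */
        unsigned imp  : 16;  /* IMP number */
    };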
The one casualty we had was a PDP-11/40 playing terminal server, running
software out of the University of Illinois called "ANTS." Amusingly, the
normal DEC purple and red logo panels at the top of the rack were
silkscreened with orange ANTS logos (with little ants crawling on them).
ANTS wasn't maintained anymore and was stuck in short-leader mode. We used
that occasion to replace the machine with a UNIX system (we grabbed one of
our ubiquitous PDP-11/34s that were kicking around). I kept the racks and
the ANTS logo for the BRL Gateways. I turned in the non-MM PDP-11/40. A
year later I got a call from the military supply people.
ME: Yes?
GUY: I need you to identify $65,000 of computer equipment you turned in.
ME: What $65,000 machine?
GUY: One PDP-11/40 and accessories.
ME: That computer is 12 years old... and I kept all the racks and
peripherals. Do you know how much a 12-year-old computer is worth?
The other major cutover was in October of 1984. This was the
Arpanet-Milnet split. I had begged the powers that be NOT to do the
changeover on Jan 1st, as it always meant I had to be working the days
leading up to it. (Oct 1 was the beginning of the "fiscal" year.) Our
site had problems. I made a quick call to Mike Brescia of BBN. This was
pre-EGP, and things were primarily static routed in those days. He'd
forgotten that we had routers at BRL now on the MILNET (all the others were
on the ARPANET) and the ARPANET-MILNET "mail bridge" routers had been
configured for gateways on the MILNET side.
> From: Dave Horsfall <dave(a)horsfall.org>
> the ARPAnet got converted from NCP to TCP/IP in 1983; ... have a more
> precise date?
No, it was Jan 1.
It wasn't so much a 'conversion', as that was the date on which, except for a
few sites which got special _temporary_ dispensation to finish their
preparations, support for NCP was turned off for most ARPANET hosts. Prior to
that date, most hosts on the ARPANET had been running both, and after that,
only TCP worked. (Non-ARPANET hosts on the then-nascent Internet had always
only been using TCP before that date, of course.)
Noel
Some interesting historical stuff today...
We lost Rear Admiral Dr. Grace Hopper USN (Retd) in 1992; there's not much
more that can be said about her, but I will mention that she received
(posthumously) the Presidential Medal of Freedom in 2016.
As every Unix geek knows, today is the anniversary of The Epoch[tm] back
in 1970, and at least one nosey web site thinks that that is my birthdate
too...
And according to my notes, the ARPAnet got converted from NCP to TCP/IP in
1983; do any greybeards here have a more precise date?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On Fri, Dec 29, 2017 at 04:04:01AM -0700, Kevin Bowling wrote:
> Alpha generally maintained integer/ALU and clockspeed leadership for
> most of the '90s
> http://www.cs.columbia.edu/~sedwards/classes/2012/3827-spring/advanced-arch…
Wow, that first graph is the most misleading graph on CPU performance
I've ever seen. Ever.
So from 1993 to 2000 the only CPUs released were Alphas?
That era was when I was busy measuring performance across cpus and
operating systems and I don't ever remember any processor being a
factor of 2 better than its peers. And maybe I missed it, I only
owned a couple of alpha systems, but I never saw an Alpha that was
a game changer. Alpha was cool but it was too little, too late to
save DEC.
In that time period, even more so now, you had to be 2x better to get
a customer to switch to your platform.
2x cheaper
2x faster
2x more reliable
Do one of those and people would consider switching platforms. Less than
that was really tough and it was always, so far as I remember, less than
that. SMP might be an exception but we went through that whole learning
process of "well, we advertised symmetric but when we said that what we
really meant was you should lock your processes down to a processor
because caches turn out to matter". So in theory, N processors were N
times faster than 1 but in practice not so much.
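(As an aside, the "lock your processes down" workaround is still with us;
here is a minimal sketch of doing it on modern Linux, purely illustrative
and obviously not what we ran back then:)

    /* Pin the calling process to CPU 0 so its working set stays in
     * one processor's caches (Linux-specific sketch). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);           /* allow CPU 0 only */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        /* ... cache-hot work now stays on one processor ... */
        return 0;
    }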
I was very involved in performance work and cpu architecture and I'd love
to be able to claim that we had a 2x faster CPU than someone else but we
didn't, not at Sun and not at SGI.
It sort of makes sense that there weren't huge gaps; everyone was more or
less using the same sized transistors, the same DRAM, the same caches.
There were variations, Intel had/has the biggest and most advanced
foundries but IBM would push the state of the art, etc. But I don't
remember anyone ever coming out with a chip that was 2x faster. I
suspect you can find one where chip A is introduced at the end of chip
B's lifespan and A == 2*B, but wait a few months and B gets replaced
and A == .9*C.
Can anyone point to a chip introduction that was 2x faster than its
current peers? Am I just not remembering one, or is that not a thing?
--lm
A bit off the PDPs, but a minor correction to the mail below.
The commercial version of UNIX on Alpha was maybe first called
Digital UNIX OSF/1, but quickly changed to Digital UNIX, at least with
v3 and v4.0 (A-G). From there we had a 'break', which only in part
was due to the takeover by Compaq, and we had Tru64 UNIX V5.1A and V5.1B.
V5.1B saw updates till B-6.
As for the Digital C compiler, I'm still using DTCCMPLR650, installed as
Compaq C Version 6.5 for Compaq Tru64 UNIX Systems.
When I get some old source (some even developed on SCO UNIX 3.2V4.2) I
like to run it through all the compilers and OSes I have handy. With the
Compaq C compiler and HP-UX ANSI C I mostly get pages of warnings and a
few errors. By the time I've 'corrected' what I think is relevant, some
nasty coredumps tend to disappear :-)
Compile for a better 2018,
uncle rubl
>Date: Fri, 29 Dec 2017 21:30:11 -0500.
>From: Paul Winalski <paul.winalski(a)gmail.com>
>To: Ron Natalie <ron(a)ronnatalie.com>
>Cc: TUHS main list <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Why did PDPs become so popular?
>
>On 12/29/17, Ron Natalie <ron(a)ronnatalie.com> wrote:
> The Alpha was hot
> stuff for about nine months. Ran OSF/1 formerly DigitalUnix formerly
> OSF/1.
>Digital UNIX for the VAX was indeed derived from OSF/1. The port to
>Alpha was called Tru64 UNIX.
>Tru64 UNIX was initially a pure 64-bit system, with no provision for
>building or running 32-bit program images. This turned out to be a
>mistake. DEC found out that a lot of ISVs had code that implicitly
>"knew" that sizeof() a pointer was the same as sizeof(int) was the
>same as 4 bytes. Tru64 was forced to implement a 32-bit compatibility
>mode.
>There was also a problem with the C compiler initially developed at
>DECwest in Seattle. It supported ONLY ANSI standard C and issued
>fatal errors for violations/extensions of the standard. We (DEC
>mainstream compiler group) called it the Rush Limbaugh
>compiler--extremely conservative, and you can't argue with it.
Warning: off-topic info
> I was told once that McIlroy and Morris invented macro instructions
> for assembly language. And certainly they wrote the definitive
> paper on macros, with, as I recall, something like 10 decisions you
> needed to make about a macro processor and you could generate 1024
> different macro systems that way. I wonder if that ever got
> published
The suggestion that I invented macros can also be found on-line, but
it's not true. I learned of macros in 1957 or before. GE had added
a basic macro capability to an assembler; I don't know whether they
invented the idea or borrowed it. In 1959 George Mealy suggested
that Bell Labs install a similar facility in SAP (SHARE assembly
program). Doug Eastwood wrote the definition part and I handled
expansions.
Vic Vyssotsky later asked whether a macro could define a macro--a
neat idea that was obviously useful. When we went to demonstrate
it, we were chagrined that it didn't work: definition happening
during expansion resulted in colliding calls to a low-level
string-handling subroutine that was not re-entrant. Once that
was fixed, Steve Johnson (as a high-school intern!) observed
that it allowed the macro table to serve as an addressable
memory, for which the store and load operations were MOP
(macro define) and MAC (macro call).
Probably before Steve's bright insight, Eastwood had folded
the separate macro table into the opcode table, and I had
implemented conditional assembly, iteration over a list, and
automatic generation of symbols. These features yielded
a clean Turing-complete language-extension mechanism. I
believe we were the first to achieve this power via macros.
However, with GPM, Christopher Strachey showed you don't need
conditionals; the ability to generate new macro names is
enough. It's conceivable, but unlikely, that this trick
could be done with earlier macro mechanisms.
As for publication, our macroprocessor inspired my CACM
paper, "Macro instruction extensions of compiler languages",
but the note that Steve remembers circulated only in the
Labs. A nontrivial example of our original macros in
action--a Turing machine simulator that ran completely within
the assembler--was reproduced in Peter Wegner's programming
book, so confusingly described that I am glad not to have
been acknowledged as the original author.
Doug
> From: Paul Winalski
> Lack of marketing skill eventually caught up to DEC by the late 1980s
> and was a principal reason for its downfall.
I got the impression that fundamentally, DEC's engineering 'corporate culture'
was the biggest problem; it wasn't suited to the commodity world of computing,
and it couldn't change fast enough. (DEC had always provided very well built
gear, lots of engineering documentation, etc, etc.)
I dunno, maybe my perception is wrong? There's a book about DEC's failure:
Edgar H. Schein, "DEC is Dead, Long Live DEC", Berrett-Koehler, San
Francisco, 2003
which probably has some good thoughts. Also:
Clayton M. Christensen, "The Innovator's Dilemma: When New Technologies
Cause Great Firms to Fail", Harvard Business School, Boston, 1997
briefly mentions DEC.
Noel
'FUNCTION: To save the programmer effort by automatically incorporating library subroutines into the source program.' In COBOL, whole 'functions' (subroutines) and even code snippets are 'copied' into the main source file by the COPY statement. That's different from preprocessor macros, -definitions, -literals and, since ANSI C, function prototypes.
Dear all,
I am starting a new “history” section of the “weird machines and security” publication PoC||GTFO (https://www.alchemistowl.org/pocorgtfo for my mirror, https://www.nostarch.com/gtfo for a printed compilation of the first 15 issues).
Ideally we would like some articles about the history of security exploits where the historical importance is emphasised: we always get authors willing to tell us about the latest and greatest web exploit but they often lack any historical perspective about what has been done before.
As PoC||GTFO has a strong emphasis on weird machines and generally forgotten hardware and software, I thought that the contributors to TUHS would be ideally placed to write something about their preferred security exploits of the past. I have fond memories of taking over a machine via an NFS /home filesystem exported to the wide world, of someone trying to hack into my MasPar via the DEC Ultrix which controlled it, etc., but I am really rather interested in other perspectives.
I hope a few of you will want to contribute something to the collection, there is still space for the January 2018 edition if anyone is so inclined.
Cheers,
Arrigo
I apologise if this is too far from the main topic, but I wanted to check
an urban legend.
There is a story - as I have heard it told - that PDPs established their
place (and popularity) in the marketplace by pointedly *not* advertising
themselves as "computers", but instead as "programmed data processors".
This was because - so the story goes - everyone in corporations of the
time simply *knew* that "computers" came only from IBM, lived in big
datacentres, had million-dollar price-tags, and required extensive project
management to purchase; whereas nobody cared enough about a thing called a
"programmed data processor" to bother bikeshedding the
few-tens-or-hundreds-of-thousands-of-dollars purchase proposal to an
inevitable death. Thus, they flitted under the purchasing radar, and sold
like hotcakes.
I wonder: does this story have substance, please?
Aside from anything else: I draw parallels to the adoption of Linux by Wall
St, and the subsequent adoption of virtualisation / AWS by business - now
reflected in companies explaining to ISO27001 auditors that "well, we don't
actually possess any physical servers..."
- alec
--
http://dropsafe.crypticide.com/aboutalecm
I think that steep educational discounts and equipment grants from Digital to major colleges also had a major impact, as did the existence of DECUS, which made a lot of software readily available.
Best regards,
David Ritchie
Charles Babbage KH FRS was born on this day in 1791; pretty much the
father of the computer; a model of his "difference engine", built to the
standards of the day, worked, complete with its printer. What is not so
well known about him was that he dabbled in cryptography, and broke the
Vigenère cipher; unfortunately for him it was classified as a military
secret, so one Friedrich Kasiski got the credit instead.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Somewhat tangentially related to the other thread...
So, back in the days when I was using Consensys SVR4.2, I found a bug in
tar. It goes like this:
If there is a symbolic link in the tree that you are tarring, the first
one works fine. If a second symbolic link has a shorter destination name
than the first, the extra characters from the first symbolic link's
destination are appended to the end of the second. So for example:
a -> abcdef
b -> zyx
Tar will actually tar these symbolic links as:
a -> abcdef
b -> zyxdef
Being that I happen to have found, on the Internet, the sources to AT&T
SVR4.2, and some other System V variants, I went and investigated.
The relevant piece from AT&T SVR4.2 tar.c (broken):
Jan 25 1993 ./ATT/ATT-SYSVr4/cmd/tar/tar.c
case S_IFLNK:
<SNIP>
i = readlink(shortname, filetmp, NAMSIZ - 1);
if (i < 0) {
vperror(0, "can't read symbolic link %s",
longname);
return;
}
From USL-4.2 (apparently fixed):
Jan 22 1994 ./USL/usl-4.2-source/src/i386/cmd/tar/tar.c
case S_IFLNK:
<SNIP>
i = readlink(shortname, filetmp, NAMSIZ - 1);
if (i < 0) {
vperror(0, ":462:Cannot read symbolic link %s",
longname);
return;
}
else
filetmp[i] = '\0'; /* since readlink does not
supply a '\0' */
From Solaris 2.5:
case S_IFLNK:
<SNIP>
i = readlink(shortname, filetmp, NAMSIZ - 1);
if (i < 0) {
vperror(0, gettext(
"can't read symbolic link %s"), longname);
if (aclp != NULL)
free(aclp);
return;
} else {
filetmp[i] = 0;
}
BSD 4.4:
case S_IFLNK:
<SNIP>
i = readlink(shortname, dblock.dbuf.linkname, NAMSIZ - 1);
if (i < 0) {
fprintf(stderr,
"tar: can't read symbolic link %s: %s\n",
longname, strerror(errno));
return;
}
dblock.dbuf.linkname[i] = '\0';
Anyone know when/why/how tar diverged between the two branches? Note the
readlink directly into dblock.dbuf, while the SVR4 variants use a temp
string and then sprintf it in later.
From: http://www.catb.org/esr/faqs/usl-bugs.txt
17. tar(1) fails to restore adjacent symbolic links properly
Arthur Krewatt <...!rutgers!mcdhup!kilowatt!krewat> reports:
SVR4 tar has another strange bug. Seems if when restoring files, you
restore one file that is a link, say "a ->/a/b/c/d/e" and there is another
link just after it called "b ->/a/b/c" tar will restore it as "b ->/a/b/c/d/e"
This just seems to be a lack of the NULL at the end of the string, like
someone did a memmov or memcpy(dest,src,strlen(src)); where it should be
strlen(src)+1 to include the NULL.
Esix cannot reproduce this under 4.0.4 or 4.0.4.1, they think it's fixed.
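The cause is easy to demonstrate outside of tar. Here is a minimal
standalone sketch (the link names are taken from the example above;
everything else is illustrative): readlink(2) does not NUL-terminate its
buffer, so reusing one buffer for successive links leaks the tail of a
longer target into a shorter one.

    #include <stdio.h>
    #include <unistd.h>

    #define NAMSIZ 100

    int main(void)
    {
        static char filetmp[NAMSIZ];
        ssize_t i;

        symlink("abcdef", "a");      /* a -> abcdef */
        symlink("zyx", "b");         /* b -> zyx */

        i = readlink("a", filetmp, NAMSIZ - 1);
        printf("a -> %.*s\n", (int)i, filetmp);  /* prints "abcdef" */

        i = readlink("b", filetmp, NAMSIZ - 1);
        printf("broken: %s\n", filetmp);         /* "zyxdef": stale tail */
        filetmp[i] = '\0';                       /* the missing fix */
        printf("fixed:  %s\n", filetmp);         /* "zyx" */

        unlink("a");
        unlink("b");
        return 0;
    }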
Hello Team TUHS:
I am having a problem with my PDP-11 SVR1 running under a recent SIMH build. My problem occurs on both Mac OS X and FreeBSD.
First, I created a six-disk (RP06) and eight-port TTY (DZ) kernel, with swap placed on drive 1. The system behaves beautifully as FSCK reports clean. Eight users can login with no problem.
Second, I reverted to a pristine PDP-11 SVR1 with one drive (RP06) and no DZ and booted the default kernel (gdtm) and I see the same problem described below.
Third, when using the tape driver instead of /dev/null I get the same results.
Next, here is the issue:
cd /
find . -print | cpio -ocvB > /dev/null
It runs for a short while and then shitz a core:
I am using /dev/null to take the tape driver out of the equation.
Here is the backtrace for cpio:
$c
__strout(053522,043,0,053012)
__doprnt(046652,0177606,053012)
_fprintf(053012,046652,053522)
~main(02,0177636)
Now, interestingly, I run into a similar issue when using tar:
cd /usr
tar -cvf /dev/null .
Again, this will run for a while, then drops a core. Here is the backtrace for tar:
$c
__strout(043123,02,0,045506)
__doprnt(043123,0167472,045506)
_fprintf(045506,043123,0170600)
~putfile(0170600,0170641)
~putfile(0171654,0171704)
~putfile(0172730,0172745)
~putfile(0174004,0174016)
~putfile(0175060,0175066)
~putfile(0176134,0176136)
~putfile(0177672,0177672)
~dorep(0177632)
~main(04,0177630)
This is really bugging me, since my SVR1 is otherwise working flawlessly. I was able to remake the entire system and custom kernels that boot with no problem.
Also, I configured my main port to run inside AWS Lightsail, and now I have access to SVR1 from anywhere in the world!
I was also wondering if doing a cpio or tar on the entire system was overflowing some link tables, and maybe this is expected behavior given the minimal resources of the PDP-11?
Thank you for any help.
Would you expect tar or cpio to dump core if you attempted to copy large filesystems (or the entire system) on a PDP-11?
Note: All of my testing has been in single user mode.
Truly,
Bill Corcoran
> From: Dave Horsfall
> Colossus, arguably the world's first electronic computer
Well, it all depends on your definition of "computer"... :-)
I myself prefer to reserve that term for things that are programmable,
and Turing-complete. (YMMV... :-)
So I call things like Colossus and ABC "digital electronic calculating
devices" (or something similar).
None of which is to take anything away from the incredible job done by Flowers
and his cohorts. It was an amazing device, and had a major effect on history.
Noel
Tommy Flowers MBE was born on this day in 1905; an electrical and
mechanical engineer, he designed Colossus, arguably the world's first
electronic computer, which was used to break the German "Lorenz"
high-level cipher (not Enigma, as some think).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
Before we had to phase out our FTP server, I kept on it several
versions of UNIX clones and related OSes that were mostly not on TUHS.
This post is to remember some of these systems. Many of these were
open sourced or widely available at some time and were related to UNIX
one way or another. I may not have the latest versions or all the
versions, but at least I do keep something.
The following ones have been open sourced at some point in time (I
believe):
ChorusOS
Coherent
EXOS
L4
Lunix
MaRTE-OS
Mach
OpenBLT
OpenSolaris
Sprite (I also own the original distribution CD)
Trix
UniFlex
agnix
amoeba
bkunix
bsd386
hurd
iunix
jaluna-c5
lsx-unix
mini-unix
minix
omu
opensolaris
starunix
thix
from tliquest.net/local/unix: uzi and various other bits
tme
tropix
tunix
unix-v8
unix-wega
uzi
xhomer
xinu
xv6
yoctix
Plus several archaic CDs with early versions of Linux and
Open/Free/NetBSD (Walnut Creek, InfoMagic, etc. CD-ROMs),
and even the Beowulf/Extreme Linux CDs (plus I must keep around the
mirror we hosted for a long time of the Beowulf site). The hobbyist CDs
for OpenSolaris 8 (and I believe 9) with sources. Oh, and
MOSIX/OPENMOSIX.
In addition, I have many other sources whose Copyright status I'm not
aware of, but which are interesting for archival purposes.
Regarding QNX, yes, it was open sourced (at least for hobbyist use, I
have to check the license) for several distributions. I ported some
bioinformatics software and kept for some time a distribution
repository, and I'm pretty certain I must have the sources as well as
the virtual machines. I'll try to verify the licenses to see if it can
be redistributed, although I doubt they can. Oh, and I also own the
mentioned famous 3.5" diskette. I think I digitized it long ago. Would
have to check.
Off the Net, it has been possible, at one time or another, to recover
executables and, sometimes, even sources, of many systems. Archive.org
has -I believe- a copy of a once famous repo of abandonware with
binaries of SCO, System V, AIX, etc...
I know that AIX, ATT systemV vI, II, III and IV, Solaris V6, Tru64,
OSF-1, Dynix, Ultrix 11, BSDI, Ultrix-32 etc... have been out there at
some time or another in source code format, and binaries of IRIX, Lisa,
QNX, A/UX, xenix...
Some years ago, I had more free time and could test many systems on
emulators, and built images and accompanying scripts ready to run. I
also made some tools to be able to transfer data in and out of old
unix versions (so I could edit the software off the virtual machine
while back-porting a screen editor to V6, v5, etc... with only vt100
support).
Not UNIX-related, I also keep copies of many other ancient operating
systems and software and hardware emulators.
Well, as I said at the beginning, everything that I had I should still
keep, as long as the hard disks continue spinning. If there is any
interest in adding any of these to TUHS, I can try to find a way to
send it all.
If I find time to browse through everything, I would like to upload all
the source code to GitHub (at least anything that's redistributable).
If I find the time...
But, Warren, if you are interested in anything, let me know and I'll
find a way to give you access.
j
--
Scientific Computing Service
Solving all your computer needs for Scientific
Research.
http://bioportal.cnb.csic.es
I blundered today into the GECOS field in /etc/passwd:
https://en.wikipedia.org/wiki/Gecos_field
"Some early Unix systems at Bell Labs used GECOS machines for print
spooling and various other services,[3] so this field was added to
carry information on a user's GECOS identity."
I had forgotten about this field and I don't recall it being
previously described as related to GECOS (I likely didn't take note at
the time I first encountered it).
Aside from the influence of Multics and other things on UNIX design
are there other tangible[1] manifestations of non-UNIX operating
system things like the GECOS field that were carried forward intact in
later UNIX implementations?
[1] things can be pointed at, rather than design ideas
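(For concreteness, the GECOS field is the fifth colon-separated field of an
/etc/passwd entry. A made-up example line; the name, room, and extension
are illustrative:

    ken:x:52:10:Ken Thompson,Room 2C-519,x1234:/usr/ken:/bin/sh

Fields are login name, password, UID, GID, GECOS, home directory, shell.)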
> From: Doug McIlroy
> In fact, I don't recall being aware of the existence of the ITS
> movement at the time Unix arose.
For background, here are a few selected timeline items:
?/1967 ITS design starts
7/67 ITS first becomes operational
12/67 Multics single process boot on 645
10/68 Initial Multics milestone (8 users)
01/69 Limited Initial Multics milestone (12 users)
04/69 Bell Labs drops out of Multics
(I included the latter since I'm assuming the "time Unix arose" is after the
Bell Labs departure.)
Note: I'm not saying or implying anything about the 3 systems with this; this
is purely data.
Noel
> > are there other tangible[1] manifestations of non-UNIX operating
> > system things like the GECOS field that were carried forward intact in
> > later UNIX implementations?
>
> Job control was inspired by ITS job control capabilities. Control-Z
> does pretty much the same thing in both operating systems.
I don't think "carried forward" describes job control, which I
believe was never in any Research system. "Borrowed" would be
more accurate. In fact, I don't recall being aware of the
existence of the ITS movement at the time Unix arose.
Doug
> > In fact, I don't recall being aware of the existence of the ITS
> > movement at the time Unix arose.
> Was there ever a movement? It was just four machines at MIT.
When AT&T sued BSD Inc over (among other things) the phone number
1-800-ITS-UNIX, the joke was that MIT should have sued them too.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
Ian Zimmerman:
> How do other train systems handle [DST], e.g. the European intercity
> system?
Dave Horsfall:
UTC? It's not hard to get used to it.
=====
You misunderstand the problem.
Suppose I'm planning to board a train at 0300 on the morning
Daylight Time ends.
Now suppose the train actually departs an hour early, at 0200,
because it originated before the time change and some nerd who
never rides trains declared that it shall not wait the extra
hour until scheduled departure time.
Nerds may be happy, but the paying passengers won't be. Telling
passengers to set their watches to UTC just won't happen. (Some
of us nerds actually had our watches on GMT for a few months
back in the years that gave the `Nixon table' in old ctime.c
its (informal) name, but gave up because it was just too damn
much work to stay in sync with the real world.)
Once upon a time, before railways had radio communications and
proper track-occupancy signalling, the consequences were more
serious than that: if you run an hour ahead of schedule, you
risk colliding with another train somewhere. That is why it
was the railways that first accepted and promoted standard time
zones.
Nowadays it's not about scheduling and safety, just about
having an acceptable user interface.
In a similar vein, I know of at least one case in which Amtrak
changed the official departure time of a train (that requires
advance reservations and often runs full) from 0000 to 2359,
because people would get confused about which day midnight
falls on and show up a day late. (Actually the Amtrak
timetable read 1200 midnight and 11:59 PM, but 24-hour time
is one of the changes I agree people should just accept
and use.)
Norman Wilson
Toronto ON
My question about SOL got me thinking a bit. It would be nice to have a
section in TUHS for any early clones that could be collected. The two that
I can think of that probably should be there are (others feel free to point
out ones that we should try to find):
1.) Idris, which was fairly true to V6 (enough that the one time I tested it,
things pretty much just worked). It was notable for being first,
although the C compiler and 'anat' (the assembler) were a tad
different. It was the system that got Bill Plauger in trouble at USENIX at
UDEL when he was booed for a 'marketing' talk.
2.) CRDS (pronounced "Cruds" by those of us that used it at the time) -
Charles River Data Systems. It was a UNIX-like system, although I do not
think it really attempted to hold to the V7 API much more than in intent.
If my memory serves me, one of its unique features was the use of Reed &
Kanodia synchronization in its kernel [REED79], of which I was always a
fan. The system was slow as sin but it ran on a 68000. [The CRDS system, a
Fortune box and our VAX-11/750 running 4.1BSD were the systems Masscomp used
to bootstrap.]
Clem
[REED79] D.P. Reed and R.K. Kanodia, "Synchronization with Eventcounts and
Sequencers", CACM 22(2), February 1979.