I just noticed this:
Sep 2018: The Multics History Project Archives were donated to the Living
Computer Museum in Seattle. This included 11 boxes of tapes, and 58 boxes of
Multics and CTSS documentation and listings. What will happen to these items
is unknown.
https://multicians.org/multics-news.html#events
That last sentence is ironic; I _assume_ it was written before the recent news.
I wonder what will happen to all such material at the LCM. Anyone know?
Noel
> From: Larry McVoy
{Moving this to COFF, as it's not UNIX-related. I'll have another reply there
as well, about the social media point.}
> The amazing thing, to me, is I was a CS student in very early 1980's
> and I had no idea of the history behind the arpanet.
I don't think that was that uncommon; at MIT (slightly earlier, I think -
'74-'77 for me) the undergrads weren't learning anything about networking
there either, then.
I think the reason is that there wasn't much to teach - in part because we
did not then know much about networking, and in part because it was not yet
crystal clear how important it would become (more below on that).
There was research going on in the area, but even at MIT one doesn't teach
(or didn't then; I don't know about now) on-going research subjects to
undergrads. MIT _did_ have, even then, a formal UROP ('undergrad research
opportunities') program, which allowed undergrads to be part of research
groups - a sheer genius idea - which in some fast-moving fields, like CS, was
an inestimable benefit to more forward undergrads in those fields.
I joined the CSR group at LCS in '77 because I had some operating system
ideas I wanted to work on; I had no idea at that point that they were doing
anything with networks. They took me on as the result of the sheerest chance;
they had just gotten some money from DARPA to build a LAN, and the interface
was going to be built for a UNIBUS PDP-11, and they needed diagnostics, etc
written; and they were all Multicians. I, by chance, knew PDP-11 assembler -
which none of them did - the MIT CS introductory course at that point taught
it. So the deal was that I'd help them with that, and I could use the machine
to explore my OS ideas in return.
Which never really happened; it fairly quickly became clear to me that data
networking was going to have an enormous impact on the world, and at that
point it was also technically interesting, so I quickly got sucked into that
stuff. (I actually have a written document hiding in a file drawer somewhere
from 1978 or so, which makes it plain that I'm not suffering from 20-20
hindsight here, in talking about foreseeing the impact; I should dig it up.)
The future impact actually wasn't hard to foresee: looking at what printed
books had done to the world, and then telegraphs/telephones, and what
computers had already started to do at that point, it was clear that
combining them all was going to have an incredible impact (and we're still
adapting to it).
Learning about networking at the time was tricky. The ARPANET - well, NCP and
below - was pretty well documented in a couple of AFIPS papers (linked to at
the bottom here:
https://gunkies.org/wiki/ARPANET
which I have a very vague memory of having photocopied at the time out of the bound
AFIPS proceedings in the LCS library). The applications were only documented
in the RFC's.
(Speaking of which, at that level, the difference between the ARPANET and the
Internet was not very significant - it was only the internals, invisible to
the people who did 'application' protocols, that were completely different.
HTTP would probably run just fine on top of NCP, for instance.)
Anything past that - the start of the internet work - I picked up by i)
direct osmosis from other people in CSR who were starting to think about
networks - principally Dave Clark and Dave Reed - and then ii) from documents
prepared as part of the TCP/IP effort, which were distributed electronically.
Which is an interesting point; the ARPANET was a key tool in the internet
work. The most important aspect was email; non-stop discussion between the
widely separated groups who were part of the project. It also made document
distribution really easy (which had also been true of the latter stages of
the ARPANET project, with the RFC's). And of course it was also a long-haul
network that we used to tie the small internets at all the various sites
(BBN, SRI, ISI - and eventually MIT) into the larger Internet.
I hate to think about trying to do all that work on internets, and the
Internet, without the ARPANET there, as a tool.
Noel
Kevin Bowling wrote in
<CAK7dMtAH0km=RLqY0Wtuw6R7jXyWg=xQ+SPWcQA-PLLaTZii0w(a)mail.gmail.com>:
|On Wed, Aug 14, 2024 at 11:59 AM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
|> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com> \
|> wrote:
|>> ...
|>> The issue came when people started using the mail system as a programmat\
|>> ic messaging scheme (i.e., fork: some_program | mail user) and other \
|>> programs started to parse the output.
|>> ...
|> Mail as IPC...that's what I'm reading from that anyway...now that's \
|> an interesting concept. Did that idea ever grow any significant \
|> legs? I can't tell if the general concept is clever or systems abuse, \
|> in those days it seems like it could've gone either way.
|
|I like Clem's answer on mail IPC/RPC.
|
|To add I have heard some stories of NNTP being used once upon a time
|at some service providers the way ansible/mcollective/salt might be
|used to orchestrate UNIX host configurations and application
|deployments. The concept of Control messages is somewhat critical to
|operations, so it's not totally crazy, but isolating article flows
|would give me some heartburn if the thing has privileged system
|access.. would probably want it on a totally distinct
|instance+port+configuration.
|
|Email and Usenet both have some nice properties of implementing a
|"Message Queue" like handling offline hosts when they come back. But
|the complexity of mail and nntp implementations lean more towards
|system abuse IMO.
The IETF is going for SML (structured email)
https://datatracker.ietf.org/group/sml/about/
which aims at machine-interpretable email messages (and message parts).
|> I guess it sorta did survive in the form of automated systems today \
|> expecting specially formatted emails to trigger "stuff" to happen.
--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
Matt - I'm going to BCC: TUHS and move this to COFF - since while UNIX was
certainly in the mix in all this, it was hardly the first or the only place it
happened.
On Wed, Aug 14, 2024 at 2:59 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com>
> wrote:
>
> >
> > ...
> > The issue came when people started using the mail system as a
> programmatic messaging scheme (i.e., fork: some_program | mail user) and
> other programs started to parse the output.
> > ...
>
> Mail as IPC...that's what I'm reading from that anyway...now that's an
> interesting concept.
The history is kind of funny. The ARPANET gives us FTP as a way to
exchange files. So people figure out how to hack the mailer to call FTP
to send a file remotely and submit a cron/batch job, a.k.a. RJE. This is
encouraged by DARPA, because part of the justification of the ARPANET was
to be able to share resources, and this enables the supercomputers of the
day to provide cycles to DARPA folks who might not have access to larger
systems. Also, remember, mailers were local to systems at that point.
So someone gets the bright idea to hook the mailer into this system --
copy the "mail file" and set up a remote job to mail it locally. Let's
just say this proves to be a cool idea, and intersystem email begins in
the >>ARPANET<< community.
So the idea of taking it to the next level was not that far off. The
mailer transports started to offer (limited) features to access services.
By the time of Kurt's "delivermail", a feature had been added - intended, I
think, for system logs - that allowed specific programs to be called. In
fact, it may have been invented elsewhere, but before Eric would formalize
"vacation", Jim Kleckner and I hacked together an awk script to do that
function on the UCB CAD 4.1 systems. I showed it to Sam and a few other
people, and I know it went from Cory to Evans fairly quickly. Vacation(1)
was written shortly thereafter to be a bit more flexible than our original
script.
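Something in the spirit of that first hack might have looked like the sketch
below. This is a reconstruction, not the original script: the .forward path,
user name, and reply text are invented, and it is written for a modern awk
rather than the one we had then.

    #!/bin/sh
    # autoreply - read one incoming message on stdin, pull out the sender,
    # and mail back a canned note.  A sketch only: no loop suppression and
    # no checking of the sender address.
    #
    # Wired in via ~/.forward (path and user name are hypothetical):
    #   \clemc, "|/usr/users/clemc/bin/autoreply"
    awk '
        /^From:/ && sender == "" {
            sender = $2
            # handle the "Some Name <user@host>" form
            if (match($0, /<[^>]*>/))
                sender = substr($0, RSTART + 1, RLENGTH - 2)
        }
        END {
            if (sender != "")
                system("echo \"I am away until Monday.\" | mail " sender)
        }
    '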
> Did that idea ever grow any significant legs?
I guess the word here is significant. It certainly was used where it made
sense. In the CAD group, we had simulations that might run for a few
days. We used to call the mailer every so often to send status and
sometimes do something like a checkpoint. It led to Sam writing syslogd,
particularly after Joy created UNIX domain sockets. But I can say we used
it in a number of places in systems-oriented or long-running code before
syslogd, as a scheme to log errors and deal with other events.
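A minimal sketch of that pattern (the program name, recipient, and the
one-hour interval are placeholders, not the actual CAD setup):

    #!/bin/sh
    # Run a long simulation in the background and mail a status report
    # every hour - mail as the logging mechanism, pre-syslogd.
    ./bigsim > sim.out 2>&1 &
    pid=$!

    while kill -0 $pid 2>/dev/null
    do
        ( echo "bigsim (pid $pid) status at `date`"
          tail -5 sim.out
        ) | mail clemc
        sleep 3600
    done

    echo "bigsim (pid $pid) finished at `date`" | mail clemc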
> I can't tell if the general concept is clever or systems abuse, in those
> days it seems like it could've gone either way.
>
> I guess it sorta did survive in the form of automated systems today
> expecting specially formatted emails to trigger "stuff" to happen.
Exactly.
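The modern version is the same shape. A hedged sketch of a mail-triggered
job runner - the subject-line convention, paths, and job names are all
invented for illustration:

    #!/bin/sh
    # mailhook - fed one message on stdin from a ~/.forward entry such as
    #   \builder, "|/home/builder/bin/mailhook"
    # If the Subject: line says "RUN <jobname>", run that job and mail the
    # output back.  A sketch: no authentication or input checking, which
    # is exactly where "clever" can slide into "systems abuse".

    msg=/tmp/mailhook.$$
    cat > $msg

    job=`sed -n 's/^Subject: RUN \([A-Za-z0-9_]*\).*/\1/p' $msg | head -1`
    sender=`sed -n 's/^From: .*<\(.*\)>.*/\1/p' $msg | head -1`

    if [ -n "$job" ] && [ -x "$HOME/jobs/$job" ]
    then
        "$HOME/jobs/$job" > /tmp/mailhook.out.$$ 2>&1
        mail ${sender:-builder} < /tmp/mailhook.out.$$
    fi

    rm -f $msg /tmp/mailhook.out.$$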
Greg, this needs to move to COFF, so I'm BCCing TUHS in my reply. (My error
in the original message was that I should have BCC'd everyone but COFF, so
replies were directed there. Mea culpa).
However, since I have seen different people on all these lists bemoan the
loss of the LCM+L, I hope that by the broader announcement, a number of you
will consider the $36/yr membership to help Stephen and his team keep these
systems running and at least the "labs" part of the old LCM+L mission alive.
On Thu, Aug 1, 2024 at 9:56 PM Gregg Levine <gregg.drwho8(a)gmail.com> wrote:
> Hello!
> Pardon me for asking Clem, but would you mind naming the survivors?
The details are still coming out from Stephen and friends -- I would
recommend listening to his presentation and then maybe joining the List
Server at SDF by sending a (plain text) email to majordomo(a)sdf.org with
the subject and body containing the line: subscribe museum-l
> I have an idea what these Toads are, and of course what Multics happened to
> be, but that's it.
>
LCM+L owned a real Honeywell 6180 front panel. The folks in their lab
interfaced it to a small single-board computer (I think it was a Raspberry Pi
3 or 4, but it could be something like a BeagleBone; I never knew). It was
running Multics Release 12.8 on a SimH-derived Honeywell 6180 simulator [I'm
not sure if those changes ever made it back to OpenSIMH - I have not tried it
myself]. This system seems to have been moved to SDF's new site.
Also, a number of the MIT Multics tapes had been donated to the LCM+L.
These have survived, and the SDF has them. I'll not repeat Stephen's
report here, but he describes what they have and are doing.
Miss Piggy is the PDP-11/70 that Microsoft purchased and used for their
original SW development. It has been running a flavor of Unix Seventh Edition
- I do not know what type of updates were added, but I expect the DEC V7M
changes and the V7 addendum to be there. You can log in and try it yourself
with "ssh menu(a)sdf.org" and picking Miss Piggy in the UNIX submenu. Miss Piggy
used to live and be on display at the LCM+L, but Stephen and the SDF were
involved in its admin/operation. Stephen says in his presentation that they
are trying to get Miss Piggy back up and running [my >>guess<< is that the
"Miss Piggy" instance on the SDF menu is currently running on an OpenSIMH
instance while the real hardware is being set up at the new location].
In the early 1980s, as DEC started to de-commit from the 36-bit line after
they introduced the 32-bit VAX systems, a number of PDP-10 clones appeared
on the market. For instance, the Systems Concepts SC-40 was what CompuServe
primarily switched to. Similarly, many ex-Stanford AI types forked off to
create the XKL TOAD systems, a KL10 clone. SDF and LCM+L owned several of
these two styles of systems, which were on display and available for login.
Since Twenex.org is live (and has been) and Stephen shows a picture of the
SC40, I am (again) >>guessing<< that these have all been moved to the new
location for SDF.
Stephen mentioned in his presentation that they have the LCM+L's VAX 7000
but do not yet have 3-phase power in their computer room. He suggested
that it is one of the most popular machines in the SDF menu, and they
intend to make it live shortly.
It is unclear what became of some of the other items. It was pointed out
that a CDC 6500 is extremely expensive to operate from a power
standpoint, so they offer an NOS login using the DTCyber simulator. He
never mentioned what became of the former Purdue machine that the LCM owned
and had restored.
I am interested in knowing what happened to the two PDP-7s. I know that at
least one was privately owned, but was being restored and displayed at the
LCM+L. It was on one of these systems that Unix V0 was resurrected and run
for the UNIX 50th Anniversary Party that the LCM+L hosted. The LCM+L had
some interesting peripherals. For instance, the console for Miss Piggy was
a somewhat rare ASR37 [which is Upper/Lower case and the "native" terminal
for Research Unix]. I hope they have it also. The LCM+L had a number of
different types of tape transports for recovering old data. Stephen
mentioned that they have some of these but did not elaborate.
Clem
I think this is off topic for TUHS and more appropriate for COFF.
Gregg Levine wrote:
> Pardon me for asking Clem, but would you mind naming the survivors? I
> have an idea what these Toads are, and of course what Multics happened
> to be, but that's it.
We don't know exactly yet, but according to the video, there's a VAX
7000 and a DEC-2020. The TOAD computers are XKL's PDP-10 remake;
there's also another one called SC-40. Stephen also mentions Multics
tapes were rescued.
Maybe the best way to see what is there right now is to dial into
"ssh menu(a)sdf.org"
[-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-]
-+- SDF Vintage Systems REMOTE ACCESS -+-
[-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-]
[a] multics         Multics MR12.8         Honeywell 6180
[b] toad-2          TOPS-20 7(110131)-1    XKL TOAD-2
[c] twenex          TOPS-20 7(63327)-6     XKL TOAD-2
[d] sc40            TOPS-20 7(21733)       SC Group SC40
[e] lc              ITS ver 1648           PDP-10 KS10
[f] ka1050          TOPS-10 6.03a          sim KA10 1050
[g] kl2065          TOPS-10 7.04           sim KL10 2065
[h] rosenkrantz     OpenVMS 7.3            VAX 7000-640
[i] tss8            TSS/8                  PDP-8/e
[j] ibm4361         VM/SP5                 Hercules 4361
[k] ibm7094         CTSS                   i7094
[l] cdc6500         NOS 1.3                DTCyber CDC-6500
[z] bitzone         NetBSD BBS             AMD64
[1] Proceed to the UNIX Systems sub-menu
[2] Information about Vintage Systems at SDF.ORG
And the Unix section:
[a] misspiggy       UNIX v7                PDP-11/70
[c] lcm3b2          UNIX SVR3.2.3          AT&T 3B2/1000-70
[d] guildenstern    BSD 4.3                simh MicroVAX 3900
[e] snake           BSD 2.11               PDP-11/84
[f] hkypux          HP/UX 10.20            HP9000/715
[g] truly           TRU64 5.0              DEC Alpha 500au
[h] three           SunOS 4.1.1            Sun-3/160
[i] indy            IRIX 6.5               SGI Indy R5000
[j] ultra           Ultrix 4.5             simh MicroVAX 3900
Please excuse the wide distribution, but I suspect this will have general
interest in all of these communities due to the loss of the LCM+Labs.
The good folks from SDF.org are trying to create the Interim Computer
Museum:
https://icm.museum/join.html
As Lars pointed out in an earlier message to COFF, there is a 1-hour
presentation on the plans for the ICM.
https://toobnix.org/w/ozjGgBQ28iYsLTNbrczPVo
FYI: The yearly (Bootstrap) subscription is $36
They need the money to try to keep some of these systems online and
available. The good news is that it looks like many of the assets, such as
Miss Piggy, the Multics work, the Toads, and others, from the old LCM are
going to be headed to a new home.