Does anyone know if there is a book or in-depth article about the hardware and software of the Sprint/Spartan system, named Safeguard?
There is very little about it available online (see http://www.nuclearabms.info/Computers.html) but it was apparently an amazing programming effort running on UNIVAC.
Arrigo
moving to COFF
On Wed, Jan 22, 2020 at 1:06 PM Pete Wright <pete(a)nomadlogic.org> wrote:
> I also seem to remember him telling me about working on the patriot
> missile system, although i am not certain if i am remembering correctly
> that this was something he did at apollo or at another company in the
> boston area.
>
The Patriot was/is Raytheon in Andover, MA not Apollo (Chelmsford - two
towns west). Cannot speak for today, but when it was developed the source
code was in Ada. I knew the Chief Scientist/PI for the original Patriot
system (who died of a massive stroke a few years back -- my wife used to
take care of his now 30-40 yo kids when they were small and she was a tad
younger).
During the first Gulf War, he basically did not sleep the whole first
month. As I understand it, Raytheon normally took 3-6 months per SW
release. During the war, they put out an update every couple of days and
Willman once said they were working non-stop on the codebase, dealing with
issues they had never seen or simulated. I gather it was quite
exciting ... sigh. We got him to give a couple of talks at some local
IEEE functions describing the SW engineering process they had used.
Willman was one of the people that got me to respect Ada and the job his
folks had to do. He once told me that, at some point, Raytheon had a
contract supporting the Polaris System for the US Navy. The Navy had long
ago lost the source. They had disassembled and were patching what they
had. Yeech!!!! He also once made another comment to me (in the late
1980s IIRC) that the DoD wanted Ada because they wanted the source to be part
of the specifications, and wanted a language that was more explicit, one that
they could use for those specs. I have no idea how much that has proven
to be true.
On 1/17/20, Rob Pike <robpike(a)gmail.com> wrote:
> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> together.
>
> But history had other ideas.
>
[moving to COFF since this isn't specific to historic Unix]
For applications (or groups of related applications) that are already
distributed across multiple machines I'd say "cluster as a single
system" definitely makes sense, but I still stand by what I said
earlier about it not being relevant for things like workstations, or
for server applications that are run on a single machine. I think
clustering should be an optional subsystem, rather than something that
is deeply integrated into the core of an OS. With an OS that has
enough extensibility, it should be possible to have an optional
clustering subsystem without making it feel like an afterthought.
That is what I am planning to do in UX/RT, the OS that I am writing.
The "core supervisor" (seL4 microkernel + process server + low-level
system library) will lack any built-in network support and will just
have support for local file servers using microkernel IPC. The network
stack, 9P client filesystem, 9P server, and clustering subsystem will
all be separate regular processes. The 9P server will use regular
read()/write()-like APIs rather than any special hooks (there will be
read()/write()-like APIs that expose the message registers and shared
buffer to make this more efficient), and similarly the 9P client
filesystem will use the normal API for local filesystem servers (which
will also use read()/write() to send messages). The clustering
subsystem will work by intercepting process API calls and forwarding
them to either the local process server or to a remote instance as
appropriate. Since UX/RT will go even further than Plan 9 with its
file-oriented architecture and make process APIs file-oriented, this
will be transparent to applications. Basically, the way that the
file-oriented process API will work is that every process will have a
special "process server connection" file descriptor that carries all
process server API calls over a minimalist form of RPC, and it will be
possible to redirect this to an intermediary at process startup (of
course, this redirection will be inherited by child processes
automatically).
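As a purely hypothetical sketch (none of these names, types, or message formats exist in UX/RT; they are invented here only to illustrate the interception idea), the clustering intermediary's job reduces to relaying each call on the process-server fd to the right place:

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical wire format for the minimalist process-server RPC;
 * invented purely for illustration. */
struct proc_rpc {
    int  opcode;       /* which process-server API call */
    int  target_node;  /* 0 = this node, otherwise a cluster node id */
    char payload[64];
};

/* Pure routing decision: local process server or a remote instance. */
int route_fd(const struct proc_rpc *req, int local_fd, int remote_fd)
{
    return req->target_node == 0 ? local_fd : remote_fd;
}

/* The intermediary inherits the client's "process server connection"
 * fd (redirected at process startup) and relays request/reply pairs. */
void intercept_loop(int client_fd, int local_fd, int remote_fd)
{
    struct proc_rpc msg;
    while (read(client_fd, &msg, sizeof msg) == (ssize_t)sizeof msg) {
        int dest = route_fd(&msg, local_fd, remote_fd);
        write(dest, &msg, sizeof msg);        /* forward the call */
        if (read(dest, &msg, sizeof msg) != (ssize_t)sizeof msg)
            break;
        write(client_fd, &msg, sizeof msg);   /* relay the reply  */
    }
}
```

Since the client only ever sees read()/write() on one fd, nothing in the application needs to know whether the other end is the local process server or a forwarder.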
Originally, I meant to reply to the Linux-origins thread by pointing to
AST's take on the matter but I failed to find it. So, instead, here is
something to warm the cockles of troff users:
From https://www.cs.vu.nl/~ast/home/faq.html
Q: What typesetting system do you use?
A: All my typesetting is done using troff. I don't have any need to see
what the output will look like. I am quite convinced that troff will
follow my instructions dutifully. If I give it the macro to insert a
second-level heading, it will do that in the correct font and size, with
the correct spacing, adding extra space to align facing pages down to
the pixel if need be. Why should I worry about that? WYSIWYG is a step
backwards. Human labor is used to do that which the computer can do
better. Also, using troff means that the text is in ASCII, and I have a
bunch of shell scripts that operate on files (whole chapters) to do
things like produce a histogram by year of all the references. That
would be much harder and slower if the text were kept in some
manufacturer's proprietary format.
Q: What's wrong with LaTeX?
A: Nothing, but real authors use troff.
N.
From the museum pages via the KG-84 picture to wiki. Reading a bit on
crypto devices, stumbling over M-209 and
"US researcher Dennis Ritchie has described a 1970s collaboration with
James Reeds and Robert Morris on a ciphertext-only attack on the M-209
that could solve messages of at least 2000–2500 letters.[3] Ritchie
relates that, after discussions with the NSA, the authors decided not
to publish it, as they were told the principle was applicable to
machines then still in use by foreign governments.[3]"
https://en.wikipedia.org/wiki/M-209
The paper
https://cryptome.org/2015/12/ReedsTheHagelinCipherBellLabs1978.pdf
ends with
"The program takes about two minutes to produce a solution on a DEC PDP-11/70."
No info on the program coding.
More info around the story from Ritchie himself:
https://www.bell-labs.com/usr/dmr/www/crypt.html
A post in a Facebook IBM retirees group about an IBM PC museum (in
Germany?) made me follow up with a reference to Glenn's museum. Glenn
showed me around the museum last time I saw him at Centaur, but I did
not know until today about the Web presence at http://www.glennsmuseum.com/.
Rise of the Centaur (https://www.imdb.com/title/tt5690958/) includes
Glenn discussing some of the museum.
Charlie
--
voice: +1.512.784.7526 e-mail: sauer(a)technologists.com
fax: +1.512.346.5240 Web: https://technologists.com/sauer/
Facebook/Google/Skype/Twitter: CharlesHSauer
-TUHS, +COFF, in line with Warren's wishes.
On Sun, Jan 12, 2020 at 7:36 PM Bakul Shah <bakul(a)bitblocks.com> wrote:
> There is similar code in FreeBSD kernel. Embedding head and next ptrs
> reduces
> memory allocation and improves cache locality somewhat. Since C doesn't
> have
> generics, they try to gain the same functionality with macros. See
>
> https://github.com/freebsd/freebsd/blob/master/sys/sys/queue.h
>
> Not that this is the same as what Linux does (which I haven't dug into) but
> I suspect they may have had similar motivation.
>
I was actually going to say, "blame Berkeley." As I understand it, this
code originated in BSD, and the Linux implementation is at least inspired
by the BSD code. There was code for singly and doubly linked lists, queues,
FIFOs, etc.
I can actually understand the motivation: lists, etc, are all over the
place in a number of kernels. The code to remove an element from a list is
trivial, but also tedious and repetitive: if it can be wrapped up into a
macro, why not do so? It's one less thing to mess up.
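For anyone who hasn't looked at the style, a minimal sketch using the TAILQ macros from <sys/queue.h> (glibc ships a copy too; the struct names here are just illustrative):

```c
#include <stdlib.h>
#include <sys/queue.h>

/* The prev/next pointers are embedded in the element itself, so
 * linking a node into a list costs no extra allocation and keeps
 * the node and its links in the same cache line. */
struct node {
    int value;
    TAILQ_ENTRY(node) link;
};

TAILQ_HEAD(node_list, node);

/* Traverse and sum the list, just to show TAILQ_FOREACH. */
int list_sum(struct node_list *list)
{
    int total = 0;
    struct node *n;
    TAILQ_FOREACH(n, list, link)
        total += n->value;
    return total;
}
```

Insertion and removal are each a single macro call (TAILQ_INSERT_TAIL(&list, n, link), TAILQ_REMOVE(&list, n, link)), which is exactly the tedious pointer surgery being wrapped up.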
I agree it's gone off the rails, however.
- Dan C.
All, can we move this not-really-Unix discussion to COFF?
Thanks, Warren
P.S A bit more self-regulation too, please. You shouldn't need me to point
out when the topic has drifted so far :-)
Noel Chiappa writes:
> The code is still extant, in 'SYSTEM; TV >'. It only worked (I think)
> from Knight TV keyboards
(This isn't TUHS material, but I can't resist. CC to COFF.)
There is also a Chaosnet service to call for the elevator or open the
door, so it can be used remotely. The ITS program ESCE uses this
service. I suppose there must have been something for the Lisp machines
too, maybe even a Super-Hyper-Cokebottle keyboard command.
Steffen - I think your reply just proved my point. I am asking people
what they really value/what they really need. Clearly, you and I value
different things. And that is fine. You can disagree with my choices, but
accept it is what *I value*, and I understand *you value something
different*.
FWIW: I will not identify who sent it me because he sent it to me
privately, but someone else commented to me on my post that I had nailed
it. He observed "I see this sort of not respecting other people's choices
as a general trend these days. We want tolerance for our *views* but at the
same time, we are becoming more intolerant of *other* people's views!"
For a smile, this was yesterday's comic and expresses the issue well:
https://www.comicskingdom.com/brilliant-mind-of-edison-lee/2020-01-08
Answering, but CCing COFF if folks want to continue. This is less about
UNIX and more about how we all got to where we are.
On Wed, Jan 8, 2020 at 11:24 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
> Clem, this seems like an unusual position for you to take. vim is
> backwards
> compatible with vi (and also ed), so it added to an existing ecosystem.
>
No, not really unusual when you think about it. vim is backward compatible
except when it's not (as Bakul points out) - which is my complaint. It's
*almost* compatible and those small differences are really annoying when
you expect one thing and get something else (*i.e.* the least astonishment
principle).
The key point here is that for *some people*, those few differences are not an
issue and do not astonish them. But for *some of the rest of us*
(probably people like me that have used the program since PDP-11 days) that
only really care about the original parts, the new stuff is of little value
and so the small differences are astonishing. Which comes back to the
question of good and best. It all depends on what you value/where you
put the high order bit. I'm not willing to "pay" for it, as it gives
me little value.
Doug started this thread with his observation that ex/vi was huge compared
to other editors. * i.e.* value: small simple easy to understand (Rob's old
"*cat -v considered harmful*" argument if you will). The BSD argument had
always been: "the new stuff is handy." The emacs crew tends to take a
similar stand. I probably don't go quite as far as Rob, but I certainly
lean in that direction. I generally would rather something small and new
that solves a different (set of) problem(s), then adding yet another wart
on to an older program, *particularly when you change the base
functionality *- which is my vi *vs. *vim complaint*.* [i.e. 'partial
credit' does not cut it].
To me, another good example is 'more', 'less' and 'pg'. Eric Schienbrood
wrote the original more(ucb) to try to duplicate the ITS functionality (he
wrote it for the PDP-11/70 in Cory Hall BTW - Ernie did not exist and
4.1BSD was a few years in the future - so small and simple, with huge
value). It went out in the BSD tapes; people loved it and were happy. It
solved a problem as we had it. Life was good. Frankly, other than NIH,
I'm not sure why the folks at AT&T decided to create pg a few years later
since more was already in the wild, but at least it was a different program
(Mary Ann's story of vi *vs*. se is probably in the same vein). But
because of that behavior, if someone like me came to an AT&T-based system
with only pg installed, those of us that liked/were used to more(ucb)
could install it and life was good. Note pg was/is different in
functionality, it's similar, but not finger compatible.
But other folks seem to have thought neither was 'good enough' -- thus
later less(gnu) was created adding a ton of new functionality to Eric's
program. The facts are clear: some (nay, many) people >>love<< that new
functionality, like paging backward. I >>personally<< rarely care for/need
it; Eric's program was (is) good enough for me. Like Doug's observation
of ed *vs.* ex/vi; less is huge compared to the original more (or pg for
that matter). But if you value the new features, I suspect you might
think that's not an issue. Thanks to Moore's law, the size in this case
probably does not matter too much (other than introducing new bugs). At
least, when folks did Gnu's less, the basic more(ucb) behavior was
left alone, and if you set PAGER=more, less(gnu) pretty much works as I
expect it to. So I now don't bring Eric's program with me, the same way
Bakul describes installing nvi on new systems (an activity I also do).
Back to vi *vs.* nvi *vs.* vim *et al.* Frankly, in my own case, I do
>>occasionally<< use split screens, but I can get most of the same
from having a window manager, different iterm2 windows, and cut/paste. So
even that extension to nvi is of limited value to me. vim just keeps
adding more and more cruft and it's even bigger. I personally don't care
for the new functionality, and the size of it all is worrisome. What am I
buying? That said, if the new features do not hurt me, then I don't really
care. I might even use some of the new functionality - hey I run mac OS
not v7 or BSD 4.x for my day to day work and I do use the mac window
manager, the browser *et al*, but as I type this message I have 6 other
iterm2 windows open with work I am doing in other areas.
Let me take a look at this issue in a different way. I have long been a
'car guy' and, like many such, in my youth spent time and money
playing/racing, etc. I've always thought electric was a great idea, but there
has been nothing for me. Note: As many of you know, my work in computers has
been in HPC, and I've been lucky to spend a lot of time with my customers,
in the auto and aerospace industry (*i.e.* the current Audi A6 was designed
on one of my supercomputer systems). The key point is I have tended to
follow technology in their area and tend to be "in tune" with a lot of
developments. The result: except for my wife's minivan (which she preferred
in the years when our kids were small), I've always been a
die-hard German-engineered/performance car person. But when Elon announced
the Model 3 (like 1/2 the techie world), I put down a deposit and waited.
Well, while I was waiting, my techie daughter (who also loves cars) got a
chance to drive one. She predicted I would hate it!!! So when my ticket
finally came up, I went to drive them. She was right!!! With the Model 3,
you get a cool car, but it's about the size of a Corolla. Coming from
German cars for the last 35 years, the concept of spending $60K US in
practice for a Corolla just did not do it for me. I ended up ordering the
current Unixmobile, my beloved Tesla Model S/P100D.
The truth is, I paid a lot of money for it but I *value *what I got for my
money. A number of people don't think it's worth it. I get that, but I'm
still happy with what I have. Will there someday be a $20K electric car
like my Model S? While I think electric cars will get there (I point to
the same price curve on technology such as microwave ovens from the 1970s to
today), I actually doubt that there will be a $20K electric vehicle
like my Model S.
The reason is that this car had to be expensive for
technology-based reasons, so Tesla had to add a lot of 'luxury' features
like other cars in its class, other sports cars, Mercedes, *et al*. As
they removed them (*i.e.* the Model 3) you still get a cool car, but it's
not at all the same as the Model S. So the point is, if I wanted an
electric car, I had to choose between a performance/luxury *vs*.
size/functionality. I realized I valued the former (and still do), but I
understand not everyone does or will.
Coming back to our topic, I really don't think this is a 'get off my lawn'
issue as much as asking someone what they really value/what they really
need. If you place a high value on something, you will argue that it's
best; if it has little value, you will not.
below... -- warning veering a little from pure UNIX history, but trying to
clarify what I can and then moving to COFF for follow up.
On Wed, Jan 8, 2020 at 12:23 AM Brian Walden <tuhs(a)cuzuco.com> wrote:
> ....
>
> - CMU's ALGOL68S from 1978 list all these ways --
> co comment
> comment comment
> pr pragmat
> pragmat pragmat
> # (comment symbol) comment
> :: (pragmat symbol) pragmat
> (its for UNIX v6 or v7 so not surprising # is a comment)
> http://www.softwarepreservation.org/projects/ALGOL/manual/a68s.txt/view
Be careful of overthinking here. The comment in that note says it was
for *PDP-11s* and lists V6 and V7 as *a possible target*, but it did not
say it was. Also, the Speech and Vision PDP-11/40e based systems ran a
very hacked V6 (with a special C compiler that supported CMU's csv/cret
instructions in the microcode), which would have been the target systems.
[1]
To my knowledge/memory, the CMU Algol68 compiler never ran anywhere but
Hydra (and also used custom microcode). IIRC there was some talk of moving
it to *OS (Star OS for CM*). I've sent a note to dvk to see if he remembers
it otherwise. I also asked Liebensperger what he remembers; he was hacking on
*OS in those days. Again, IIRC Prof. Peter Hibbard was the mastermind
behind the CMU Algol68 system. He was a Brit from Cambridge (and taught
the parallel computing course which I took from him at the time).
FWIW: I also don't think the CMU Algol68 compiler was ever completely
self-hosting, and like BLISS, required the PDP-10 to support it. As to why
it was not moved to the Vax, I was leaving/had left by that time, but I
suspect the students involved graduated, and by then the Perqs had become
the hot machine for language types, and Ada would start being what the gvt
would give research $s to.
>
>
> ...
>
> But look! The very first line of that file! It is a single # sitting all
> by itself. Why? you ask. Well this is a hold over from when the C
> preprocessor was new. C orginally did not have it and was added later.
> PL/I had a %INCLUDE so Ritchie eventaully made a #include -- but pre 7th
> Edition the C preprocessor would not be inkoved unless the very first
> character of the C source file was an #
>
That was true of V7 and Typesetter C too. It was a separate program (
/lib/cpp) that the cc command called if needed.
> Since v7 the preprocessor always run on it. The first C preprocessor was
> Ritchie's work with no nested includes and no macros. v7's was by John
> Reiser which added those parts.
>
Right, this is what I was referring to last night in reference to Sean's
comments. As I said, the /bin/cc command was a shell script and it peeked
at the first character to see if it was #. I still find myself starting
all C programs with a # on a line by itself ;-)
Note that the Ritchie cpp was influenced by Brian's Ratfor work, so using #
is not surprising.
This leads to a question/thought for this group, although I think needs to
move to COFF (which I have CC'ed for follow up).
I have often contended that one of the reasons why C, Fortran, and PL/1
were so popular as commercial production languages was that they could
be preprocessed. For a commercial shop where lots of different targets are
possible, that was hugely important. Pascal, for instance, has semantics
that make writing a preprocessor like cpp or Ratfor difficult (which was
one of the things Brian talks about in his "*Why Pascal is not my favorite
Programming Language <http://www.lysator.liu.se/c/bwk-on-pascal.html>*"
paper). [2]
So, if you went to commercial ISVs and looked at what they wrote in, it
was usually some sort of preprocessed language. Some used Ratfor, like a
number of commercial HPC apps vendors; Tektronix wrote PLOT10 in MORTRAN.
I believe it was Morgan-Stanley that had a front-end for PL/1, whose name I
cannot recall. But you get the point ... if you had to target different
runtime environments, it was best for your base code to not be specific.
However ... as C became the system programming language, the preprocessor
was important. In fact, it even gave birth to other tools like autoconf
to help control it. Simply, the idiom:
#ifdef SYSTEMX
#define SOME_VAR (1)
... do something specific
#endif /* SYSTEMX */
While loathsome to read, it actually worked well in practice.
The fact is I hate the preprocessor in many ways, but love it for
the freedom it actually gave us to move code. Having programmed since
the 1960s, I remember how hard it was to move things, even if the language
was the same.
Today, modern languages try to forego the preprocessor. C++'s solution is
to throw the kitchen sink into the language and have 'frameworks', none of
which work together. Java and its family try to control it with the
JVM. Go is a little too new to see if it's going to work (I don't see a lot
of production ISV code in it yet).
Note: A difference between then and now is 1) we have fewer target
architectures, 2) we have fewer target operating environments, and 3) ISVs
don't like multiple different versions of their SW; they much prefer very
few for maintenance reasons, so they like #1 and #2 [i.e. Cole's law of
economics in operation here].
So ... my question, particularly for those like Doug who have programmed
at least as long as I have: what do you think? You lived the same
time I did and know the difficulties we faced. Is the loss of a
preprocessor good or bad?
Clem
[1] Historical footnote about CMU. I was the person that brought V7 into
CMU, and I never updated the Speech or Vision systems, and I don't think
anyone did after I left. We ran a CMU V7 variant mostly on the 11/34s (and
later on a couple of 11/44s I believe) that had started to pop up.
Although later if it was a DEC system, CS was moving to Vaxen when they
could get the $s (but the Alto's and Perq's had become popular with the CMU
SPICE proposal). Departments like bio-engineering and mech EE ran the
cheaper systems on-site and then networked over to the Computer Center's Vaxen
and PDP-20s when they needed address space.
[2] Note: Knuth wrote "Web" to handle a number of the issues, Kernighan
talks about - but he had to use an extended Pascal superset and his program
was notable for not being portable (he wrote for it for the PDP-10
Pascal). [BTW: Ward Cunningham, TW Cook and I once counted over 8
different 'Tek Pascal' variants and 14 different 'HP Basics'].
A recent thread makes me wonder which languages people would like to
learn? (I confess to trying, as Dave does, but time prevents anything
more than learning syntax and writing toy programmes. One must write
something substantial -- not synonymous with large -- to really learn a
language.)
Erlang, Smalltalk, Prolog, Haskell, and Scheme come to mind...
N.
On Wed, Dec 18, 2019 at 1:52 PM Paul McJones <paul(a)mcjones.org> wrote:
> Computer History Museum curator Dag Spicer passed along a question from
> former CHM curator Alex Bochannek that I thought someone on this list might
> be able to answer. The paper "The M4 Macro Processor” by Kernighan and
> Ritchie says:
>
> > The M4 macro processor is an extension of a macro processor called M3
> which was written by D. M. Ritchie for the AP-3 minicomputer; M3 was in
> turn based on a macro processor implemented for [B. W. Kernighan and P. J.
> Plauger, Software Tools, Addison-Wesley, Inc., 1976].
>
> Alex and Dag would like to learn more about this AP-3 minicomputer — can
> anyone help?
[I recommend that follow-ups go to coff, which is Cc'ed here]
I took a short stab at this, but can find little beyond references in the
aforementioned M4 paper.
I did, however, run across this:
https://www.cia.gov/library/readingroom/document/cia-rdp78b04770a0001001100…
This appears to be a declassified letter written to the US Air Force at
Bowling Green Air Force Base in regards to spare parts for the AP-3
computer, dated October 19, 1966. The list of parts seems reasonable for a
minicomputer, and it further seems reasonable to believe that this may be
related to the same type of computer referenced in the M4 paper. However,
details of the sending party have been redacted, and there is nothing
pointing to the identity of the manufacturer.
Sadly, that's all that seems available. I wonder if, perhaps, Doug McIlroy
(Cc'ed directly to float this to the top of his stack) can shed more light
on the topic?
- Dan C.
'Interesting overview, but I have my doubts about its accuracy. Lisp
seems to have been too popular in the mid-1980s, and at the same time
he claims that Ada was the most popular language. Both seem highly
unlikely to me. '
Fully agree. I never saw a Lisp or Ada job offer!
In the mid/late 1980s, Pascal and C were popular languages, whereas there
still were lots of COBOL/Fortran, mostly COBOL, job offers.
We lost Konrad Zuse, inventor of the Z3 (arguably the world's first
programmable computer), on this day in 1995; the Z3 itself was lost in the
1943 Berlin bombing.
How things could be different...
-- Dave
Ken Iverson was given unto us in 1920; a pioneer in computer science, he gave
us APL (actually not a bad language; it was, err, *concise* and grew on you
after a while[*]) which was used to develop the microcode for the /360 series.
[*]
My brother (a car freak) knew that I was into computers, and wanted me to work
out the final drive ratios of various gearbox and diff combinations etc. I
blew him away the next day with pages of output (he thought it would take me
ages) generated with a one-line APL\360 program :-)
-- Dave
Around 1997 I and others had a problem with SCO UNIX 3.2V.4.2 on
'faster' Pentium CPUs. Faster defined as probably 200MHz or more.
There were at least two patches as far as I remember and maybe SLS
uod464a.
I didn't look at that time but now I'm wondering if other Unixes had
similar problems. Either commercial versions or free ones.
Anyone here who encountered such problems on other Unixes?
One patch had
"This is due to executing an invalid instruction in kernel mode (trap
6 is for an invalid instruction; a user process which does this will
simply die with a core dump). If your particular problem is a double
panic and it doesn't leave a system memory dump in whatever device
you've chosen for dumps (usually /dev/swap), apply the following
patch.
This is due to a problem in the kernel's querytlb() routine, which may
allow the Pentium to execute a 386-specific instruction which is not
supported on the Pentium. The cure involves patching a kernel module
using _fst. (see part 1 on where to find /etc/_fst). Go into the
/etc/conf/pack.d/kernel directory. We're going to work on locore.o, so
make a backup and then run _fst -w locore.o - The conversation between
you and _fst goes like this (the * is a prompt from _fst; don't type
it or any of _fst's responses):"
https://scofaq.aplawrence.com/FAQ_scotec3ktrap6.html
A second one was
"> Follow the additional instructions below ONLY if you now get
> a k_trap type 0 panic after following the instructions in
> IT os/2366. To correct a k_trap 0, do the following:
>
> # cd /etc/conf/pack.d/pit
> # cp Driver.o Driver.orig
> # _fst -w Driver.o
> * spinwait+2D?w F989 FEE2
> * $q
> # cd /etc/conf/cf.d
> # ./link_unix -y
>
> Reboot your system. The above patch corrects a problem with
> a software delay loop that was optimized out by the compiler
> and which can cause panics on faster processors."
Cheers,
uncle rubl
We gained Rear Admiral Grace Hopper on this day in 1906; known as "Amazing
Grace", she was a remarkable woman, both in computers and the Navy. She coined
the term "debugging" when she extracted a moth from a set of relay contacts from
a computer (the Harvard Mk I) and wrote "computer debugged" in the log, taping
the deceased Lepidoptera in there as well. She was convinced that computers
could be programmed in an English-like language and developed Flow-Matic, which
in turn became, err, COBOL... She was posthumously awarded the Presidential
Medal of Freedom in 2016 by Barack Obama.
-- Dave
On Tue, 10 Dec 2019, G. Branden Robinson wrote:
>> Who'ld've thought that two computer greats would share the same
>> birthday?
>
> Anyone who thinks there are at least 23 greats would bet that way. ;-)
Yeah, I know; I'd temporarily forgotten the Birthday Paradox :-(
-- Dave
> On Dec 9, 2019, at 5:30 PM, Doug McIlroy <doug(a)cs.dartmouth.edu> wrote:
>
> Moo and hunt-the-wumpus got quite a lot of play
> both in the lab and at home. Wump was an instant
> hit with my son who was 4 or 5 years old at the
> time.
>
> Amusingly, I speculated on how to generate degree-3
> graphs for wump, but obviously not very deeply. It
> was only much later that I realized the graph
> always had the same topology--a dodecahedron.
You know, maybe we’ve been looking at this wrong the whole time (I blame Yob).
Maybe the caves aren’t the vertices of a dodecahedron. Maybe they’re the faces of an icosahedron.
Adam
Bit hard to classify this one; separate posts since COFF was created?
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron), was
born on this day in 1815; arguably the world's first computer programmer and a
highly independent woman, she saw the potential in Charles Babbage's
new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer, he
not only had a hand in developing Unix but also gave us the ROFF series.
Who'ld've thought that two computer greats would share the same birthday?
-- Dave
> From: Lars Brinkhoff
>> PARC's MAXC appears in the mid-1970s.
> Maybe this is a good time to ask if anyone knows whether any of those
> diverse systems has software preserved? Specifically, the
> implementation of the NCP and 1822 Host-to-IMP protocols?
Both MAXC's were PDP-10 re-implementations, and ran TENEX. So the basic
system is still around, not sure if they had any interesting local hacks
(well, probably PUP support; MIT tried to put it in MIT-XX, so it may
still exist on that machine's backup tapes).
> From: Rob Gingell gingell at computer.org
> A collection of maps of the ARPAnet over time is available from the
> Computer History Museum
Interesting; I also have a large collection of maps:
http://mercury.lcs.mit.edu/~jnc/tech/arpanet.html#Maps
all with hi-res versions (click on the thumbnails). Some of the ones
at the CHM I don't have, but the coverage time-wise is similar.
I also have a modest collection of hosts.txt files, ranging from
Jul-77 to Apr-94.
> I've forgotten at this point whether the assignments were documented in
> RFCs or other assigned numbers documents from the Network Information Center
The first 'Assigned numbers' RFC was #739, from November 1977. It never
contained host addresses. There were a very few early RFCs which contained
host addresses (#226, #229, #236), but pretty quickly host addresses were done
via the hosts.txt file, distributed from the NIC. (Given the churn rate, using
RFC's didn't make sense.) Early RFCs about this are #606 and #627. There were
a bunch of RFC's that reported on 'host status' (e.g. how their software was
doing), but their goal was different.
PS: A number of people are leaving out the definite article before 'ARPANET';
this seems to be popular these days (especially in the UK it seems, not sure
why), but it is incorrect.
Also, the correct spelling is all capitals (check e.g. through old RFCs).
Until of course the AP gets their hands on it (I'm breathlessly awaiting
their announcement that the U.S. President's reference is to be referred
to as the 'white house').
Noel