I don't believe this was sent here yet. BASIC is much maligned, but was
important nonetheless.
- Dan C.
---------- Forwarded message ---------
From: Tony Patti via Internet-history <internet-history(a)elists.isoc.org>
Date: Sun, Nov 17, 2024, 3:50 PM
Subject: [ih] NYT: Thomas E. Kurtz, a Creator of BASIC Computer Language,
Dies at 96
To: <internet-history(a)elists.isoc.org>
https://www.nytimes.com/2024/11/16/technology/thomas-kurtz-dead.html
(published yesterday November 16, 2024)
"At Dartmouth, long before the days of laptops and smartphones,
he worked to give more students access to computers.
That work helped propel generations into a new world."
Me too, I owe it all to BASIC.
Because 5 decades earlier, via an ASR 33 Teletype and acoustic coupler at
110 baud
to a remote HP 2100, BASIC was my introduction to computers and programming.
Tony Patti
(ARPAnet NIC IDENT "TP4")
--
Internet-history mailing list
Internet-history(a)elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history
(Moving this thread over to COF, since we've gotten pretty far afield
from the TUHS list's charter.)
On Sat, Nov 09, 2024 at 04:23:34PM -0600, G. Branden Robinson wrote:
> > The Linux Foundation does not exclusively own the copyright on the
> > Linux kernel. The copyright is jointly owned by all of the
> > contributors of the Linux kernel. This makes it quite unlike the FSF
> > projects, where contributions to FSF project require a copyright
> > assignment[1].
>
> That's a myth. It is the FSF's stated _preference_, but it is a
> negotiable point. For example, Thomas Dickey negotiated reversion of
> copyright to himself when becoming ncurses maintainer 26 years ago.[A]
On the web site I quoted, the fact that there is an option not to do
the copyright assignment was apparently conveniently omitted. And in
the early 1990's, I *personally* tried negotiating to not do the
copyright assignment directly with the FSF, and I was told to go to
heck. Given that I *had* taken a legal class from the MIT Sloan
School (Legal Issues for the I/T Manager), I knew what the word
"indemnify" meant, and there was no way in the world I was going to
sign the damned FSF copyright legal paperwork, and I told the FSF so.
The only other alternative was my not contributing to the GNU Project.
The FSF may have since relaxed their position in the past 30 years, but
it's not something that they've really admitted (again, see the FSF
web page I referenced). My theory is that the only reason *why* they
relaxed their position was that it would have made GNU even more
irrelevant if they hadn't (e.g., people don't have to contribute to
GCC if it's friendlier and easier to contribute to Clang.)
> But is it true for less prestigious projects or individual contributors
> with no clout to speak of?
Well, apparently in the early 1990's I didn't have any clout in the
eyes of the FSF. :-)
Probably for the best, all things considered.
> With respect to the Linux kernel in particular, it seems the GPL _in
> practice_ imposes no obligations. That was my point. Little
> enforcement is visible. As far as "public shaming" goes, I've seen it
> from the FSF and the Software Freedom Conservancy, not from the LF.
>
> Give me examples of the LF leaning on infringers and getting results!
> I want them!
OK, I see where you are coming from here. And I think the main issue
is that the goals of the Linux community are quite different from that
of the FSF. (And note that I say the Linux community, since these
attitudes predate the founding of the Linux Foundation by **years** and
existed across many developers, some of whom, like me, weren't yet
hired by a Linux corporation; I was at MIT, and my day job was TL for
MIT Kerberos and IPSEC working group chair for the IETF as well as
serving on MIT Network Operations.)
The Linux attitude was a focus on the social contract between
*developers*. If you improve the Linux kernel, we expect that you
contribute those changes back. So what we care about is the company
that has 9,000 out-of-tree patches, representing hundreds of
engineer-years of SWE investment. And this is where, in practice, the
GPL social contract becomes self-enforcing: it is in the interest of a
company that wants to keep up with upstream to get its changes merged
back upstream.
The FSF and Richard Stallman have a much bigger focus on the ability
of users to get the sources for GPL'ed code, make changes, and then
install the changed sources on their hardware. That's a fine
goal, and I respect that some people might have that as a very strong
policy preference and something that they care about. It's just that
it's a very different goal than what most Linux kernel developers care
about. (And again, this wasn't shaped by my employers; I and many of
the people I know had these preferences long before the Linux
companies formed and started hiring us.)
So take, for example, the hypothetical someone who makes a tiny change
to the Linux kernel to create a crappy AI gadget in a square orange
box. Call it, for the sake of argument, the "Squirrel S1". :-)
As far as the Linux kernel community is concerned, the Squirrel S1 is
irrelevant. It has no interesting technology that we would care
about, and while it might be sad that a user might not be able to
change the software in the S1, either because the manufacturer didn't
meet their GPL obligations, or the hardware was locked down and the
GPL2 doesn't have an anti-Tivo clause in it, in my opinion, the
enforcement is self-executing. If you're a user, and can't make
changes, and you want to, then don't fork over $199 for the Squirrel
S1!
From the FSF's Free Software perspective, they obviously have a very
different goal. They believe all users should have the ability to
access the source code and modify software on a Squirrel S1, whether
they want to do it or not, and regardless of whether that might cause
the device to become more expensive. They believe this is a core user
freedom, one that should never be abrogated.
I respect those people who have those feelings. But obviously people
in the BSD camp don't share those priorities --- and in the Linux
kernel community, while we believe the GPL2 is a great way of
expressing the social expectations between developers, we don't
necessarily share the same attitudes as Mr. Stallman.
Could someone who has some copyright ownership try to bring some
vexatious lawsuits in order to (legally) extort money out of companies
who are infringing the GPL? Sure; although I'll note that for the
targets of those lawsuits, I'm not so sure that they would see much
difference between a Patrick McHardy and the SFC. And at least
personally, the amount of help that I would give a Patrick McHardy and
an SFC lawsuit is pretty much the same: zero, and my personal opinion
is that they are not really helpful, since my goal is to have more
companies being *happy* to contribute to Linux; not to feel coerced
and forced to contribute by sullenly dropping a bunch of code to
comply with the GPL and then walking away.
> > So why do companies join the Linux Foundation? Well, there are a
> > number of benefits, but one very important one is that it provides a
> > way for companies to directly collaborate with funded programs to make
> > improvements to Linux without worrying about anti-trust concerns.
>
> Are these concerns anything more than notional?
Well, I was at the IBM Linux Technology Center when we were first
working on standardizing ISO/IEC 23360-2:2006. This was well after
the FTC consent decree was dissolved in 1996, and while a Republican
(George W. Bush) was president --- and I can tell you that it *was*
something that my employer at the time very much cared about. We got
very clear instructions about what we could and couldn't do when we
participated with OSDL and Linux Foundation work groups, and we had
mandatory training regarding how to not get in trouble with anti-trust
enforcers.
> But I do sympathize with WG14 and the Austin Group; following recent
> developments with C23 and POSIX 2024, it seems that ISO is bent
> on giving them a hard time. Maybe ISO/IEC, or certain players within,
> are trying to shed some mass, and/or don't see C and Unix as worth
> standardizing anymore. Old and busted. What's the new hotness?
ISO/IEC participation has always been heavyweight, and companies are
quite strategic about understanding the cost-benefit tradeoffs of
participating in ISO. This has been true for years; and once various
European government customers stopped requiring ISO standardization,
IBM and HP pretty quickly stopped funding the standards tech writer
and those of us who were the US National Body representatives to
ISO/IEC for 23360.
(And not just the US; the various companies working on the ISO/IEC
23360 effort had carefully made sure to have their employees in
other countries' national bodies, to make sure the fix was in. This
was not too different from what Microsoft was accused of doing while
standardizing ISO/IEC 29500, although not to the same scale; there
were many more countries' national bodies involved with ISO/IEC 29500.
So when you say "ISO" is giving the Austin Group a hard time, I'd ask
the question, "who on ISO"? And what company do they work for; or if
they are an independent contractor, which company might be paying them
at the time; and what the agenda of those company(s) might be?)
Am I super cynical about ISO/IEC standards? Perhaps. :-)
- Ted
P.S. Obviously, not *everyone* in the Linux ecosystem feels this way.
For example, there are many people in Debian who are much more aligned
with the FSF. After all, they are one of the few distros that will
use the GNU/Linux terminology demanded by Stallman.
But I have talked to many Linux kernel developers over the past 30+
years, and I think I have a pretty good sense of what the bulk of the
"old-timers" priorities have been. After all, if we had been much
more aligned with the FSF's philosophies, perhaps we would have worked
on GNU/HURD instead. :-)
Moving over to COFF from TUHS. The following is from Larry McVoy:
> I don't consider myself to be that good of a programmer, I can point to
> dozens of people my age that can run circles around me and I'm sure there
> are many more. But apparently the bar is pretty low these days and I
> agree, that's sad.
It's hard not to feel like the bar is lower. I feel like since Steve
Grandi retired at NOIRLab, Josh Hoblitt and I are the only people left who
actually understand how IP networks work. And I'm not great, never was,
but I know a lot more than...everyone else.
And kids these days, well, I'm not very fluent in TypeScript and I really
don't understand why every damn thing needs to be asynchronous especially
if you're just awaiting its completion anyway. But, hey, it ain't that
hard to do.
But again, there's a part of me that wonders how relevant the skills I miss
*are* anymore. I'm a software developer now, but I always thought of
myself as basically a sysadmin. It's just that we had automated away all
of what I started out doing (which was, what, 35-ish years ago?) by 20
years ago, and staying ahead of the automation has made me, of necessity, a
software developer now.
But I was also thinking of Larry saying he wouldn't last a week in today's
workplace, and I'm not sure that's true.
I mean, there's a lot of stuff that you once COULD say that would these
days get you a quick trip through HR and your crap in a box and a walk to
the curb...but I am a pretty foul-mouthed individual, and I have said nasty
things about people's code, and, indeed, the people who are repeat
offenders with respect to said code, and nevertheless I have had
surprisingly few issues with HR these last couple decades. So in some
sense it really DOES matter WHAT it is that's offensive that you're saying,
and I am living-and-still-employed proof.
If you generally treat people with respect until they prove they don't
deserve it, and you base your calumny on the bad technical decisions they
make and not their inherent characteristics, then it really ain't that hard
to get along in a woke workplace. And I say this as an abrasive coworker,
who happens to be a cis het white dude from a fairly-mainstream Christian
background and the usual set of academic credentials.
Let's face it: to do a good job as a software developer or generally an IT
person, you do not need a penis. You do not need to worship the way most
people at your workplace do. You do not need a college degree, let alone
in CS. You do not need to be sexually attracted to the opposite sex. You
do not need to have the same gender now that you were assigned at birth.
You do not need two (or given the current state of the art, ANY) working
eyes. Or hands. You do not need to be under 40. You do not need to be
able to walk. You do not need pale skin. And anyone who's saying shit
about someone else based on THAT sort of thing *should* be shown the curb,
and quickly. And the fact that many employers are willing to do this now
is, in my opinion, a really good thing.
On the other hand, if someone reliably makes terrible technical decisions,
well, yeah, you should spend a little time understanding whether there is a
structural incentive to steer them that way and try to help them if they're
trainable, but sometimes there isn't and they're not. And those people,
it's OK to say they've got bad taste and their implementations of their
poor taste are worse. And at least in my little corner of the world, which
is quasi-academic and scientific, there's a lot of that. Just because
you're really really good at astronomy doesn't mean you're good at writing
intelligible, testable, maintainable programs. Some very smart people have
written really awful code that solved their immediate problems, but that's
no way to start a library used by thousands of astronomers. But whether or
not they're competent software engineers ain't got shit to do with what
they have in their pants or what color their skin is.
And it's not always even obvious bigotry. I don't want to work with toxic
geniuses anymore. Even if the only awful things they do and say are to
people that they regard as intellectually inferior and are not based on
bullshit as above...look, I'd much rather work with someone who writes
just-OK code and is pleasant than someone who writes brilliant code and
who's always a quarter-second from going off on someone not quite as smart
as they are. Cleverness is vastly overrated. I'd rather have someone with
whom I don't dread interacting writing the stuff I have to interface with,
even if it means the code runs 25% slower. Machine cycles are dirt cheap
now. The number of places where you SHOULD have to put up with toxicity
because you get more efficient code and it actually matters has been pretty
tiny my entire adult lifetime, and has been shrinking over that lifetime as
well. And from a maintainability standpoint...if I encounter someone
else's just-OK code, well, I can probably figure out what it's doing and
why it's there way, way more easily than someone's code that used to be
blazing fast, is now broken, and it turns out that's because it encodes
assumptions about the runtime environment that were true five years ago and
are no longer correct.
That said, it's (again, in my not-necessarily-representative experience)
not usually the nonspecific toxic genius people who get in trouble with
HR. The ones who do, well, much, MUCH, too often, are the people
complaining about wokeness in the workplace who just want to be able to say
bad things about their coworkers based on their race or gender (or...)
rather than the quality of their work, and I'm totally happy to be in the
"That's not OK" camp, and I applaud it when HR repeats that and walks them
out the door.
Adam
At 2024-10-02T16:42:59-0400, Dan Cross wrote:
> On Wed, Oct 2, 2024 at 2:27 AM <arnold(a)skeeve.com> wrote:
> > Also true. In the late 80s I was a sysadmin at Emory U. We had a Vax
> > connected to BITNET with funky hardware and UREP, the Unix RSCS
> > Emulation Program, from the University of Pennsylvania. Every time I
> > had to dive into that code, I felt like I needed a shower
> > afterwards. :-)
>
> Uh oh, lest the UPenn alumni among us get angry (high, Ron!) I feel I
> must point out that UREP wasn't from the University of Pennsylvania,
> but rather, from The Pennsylvania State University (yes, "The" is part
> of the name). UPenn (upenn.edu) is an Ivy in Philly; Penn State
> (psu.edu) is a state school in University Park, which is next to State
> College (really, that's the name of the town) with satellite campuses
> scattered around the state.
There's another method of distinguishing UPenn from Penn State. Permit
me to share my favorite joke on the subject, from ten years ago.
"STATE COLLEGE, Pa. -- Construction workers tore down Penn State's
iconic Joe Paterno statue on campus two years ago -- but this town might
not be without one for much longer.
Two alumni already have received the OK from the borough to install a
projected $300,000 life-sized bronze sculpture downtown, about two miles
from the original site." -- ESPN ([1])
"The key difference is that the new statue will look the other way."
-- Chris Lawrence
Regards,
Branden
[1] https://www.espn.com/college-football/story/_/id/10828351/joe-paterno-honor…
On Tue, Oct 1, 2024 at 9:13 AM <arnold(a)skeeve.com> wrote:
> This goes back to the evolution thing. At the time, C was a huge
> step up from FORTRAN and assembly.
>
Certainly it's a step up (and a BIG step up) from assembly. But I'd say C
is a step sidewise from Fortran. An awful lot of HPTC programming involves
throwing multidimensional arrays around and C is not suitable for that.
-Paul W.
On Tue, Oct 1, 2024 at 10:07 AM <arnold(a)skeeve.com> wrote:
[regarding writing an Ada compiler as a class project]
> Did you do generics? That and the run time, which had some real-time
> bits to it (*IIRC*, it's been a long time), as well as the cross
> object code type checking, would have been real bears.
>
> Like many things, the first 90% is easy, the second 90% is hard. :-)
>
> I was in DEC's compiler group when they were implementing Ada for VAX/VMS.
It gets very tricky when routine libraries are involved. Just figuring
out the compilation order can be a real bear (part of this is the cross
object code type checking you mention).
From my viewpoint Ada suffered two problems. First, it was such a large
language and very tricky to implement--even more so than PL/I. Second, it
had US Government cooties.
-Paul W.
[-->COFF]
On 2024-10-01 10:56, Dan Cross wrote (in part):
> I've found a grounding in mathematics useful for programming, but
> beyond some knowledge of the physical constraints that the universe
> places on us and a very healthy appreciation for the scientific
> method, I'm having a hard time understanding how the hard sciences
> would help out too much. Electrical engineering seems like it would be
> more useful, than, say, chemistry or geology.
I see this as related to the old question about whether it is easier to
teach domain experts to program or teach programmers about the domain.
(I worked for a company that wrote/sold scientific libraries for
embedded systems.) We had a mixture but the former was often easier.
S.
>
> I talk to a lot of academics, and I think they see the situation
> differently than is presented here. In a nutshell, the way a lot of
> them look at it, the amount of computer science in the world increases
> constantly while the amount of time they have to teach that to
> undergraduates remains fixed. As a result, they have to pick and
> choose what they teach very, very carefully, balancing a number of
> criteria as they do so. What this translates to in the real world
> isn't that the bar is lowered, but that the bar is different.
>
> - Dan C.
Taken to COFF...
Hi Arnold,
> In main(), I *think* I'm assigning to the global clientSet so that
> I can use it later. But because of the 'err' and the :=, I've
> actually created a local variable that shadows the global one, and in
> otherfunc(), the global clientSet is still nil. Kaboom!
>
> The correct way to write the code is:
>
> var err error
> clientSet, err = cluster.MakeClient() // or whatever
I think this is a common problem when learning Go, like assigning
getchar()'s value to a char in C. It was back in ’14 anyway, when I saw
https://www.qureet.com/blog/golang-beartrap/ which shows an ‘err’ at
an outer scope left unwritten by the ‘:=’, with the new, shadowing
‘err’ going unchecked.
The author mentions ‘go vet’ highlights these cases with -shadow, which
is off by default.
https://pkg.go.dev/github.com/golangci/govet#hdr-Shadowed_variables
suggests that's still the case.
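Arnold's trap can be made concrete with a minimal, runnable sketch (the
package-level clientSet and the makeClient helper are invented stand-ins
for his cluster.MakeClient example):

```go
package main

import "fmt"

// clientSet stands in for the package-level variable we intend to set.
var clientSet string

// makeClient is a hypothetical stand-in for cluster.MakeClient().
func makeClient() (string, error) {
	return "real client", nil
}

func setup() error {
	// BUG: ':=' declares a NEW local clientSet that shadows the
	// package-level one; the global is never assigned.
	clientSet, err := makeClient()
	if err != nil {
		return err
	}
	_ = clientSet // only the local is set
	return nil
}

func main() {
	setup()
	// Still "" -- the package-level variable was never touched.
	fmt.Printf("global clientSet = %q\n", clientSet)
}
```

Running it prints `global clientSet = ""`: the ‘:=’ silently created a
shadowing local, which is exactly what the shadow analyzer flags.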
--
Cheers, Ralph.
[moving to COFF as this has drifted away from Unix]
On Sat, Sep 28, 2024 at 2:06 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
>
> If you really want a language with no guard rails, try programming in
BLISS.
Regarding C and C++ having dangerous language features--of course they do.
Every higher-level language I've ever seen has its set of toxic language
features that should be avoided if you want reliability and maintainability
for your programs. And a set of things to avoid if you want portability.
Regarding managed dynamic memory allocation schemes that use garbage
collection vs. malloc()/free(), there are some applications where they are
not suitable. I'm thinking about real-time programs. You can't have your
missile defense software pause to do garbage collection when you're trying
to shoot down an incoming ballistic missile.
-Paul W.
Poul-Henning also suggests this link as well ...
Warren
----- Forwarded message from Poul-Henning Kamp -----
There is also 3B stuff in various other subdirectories on that site,
for instance: https://www.telecomarchive.com/six-digit.html
----- End forwarded message -----
Moving to COFF ...
From: "Rich Salz" <rich.salz(a)gmail.com>
To: "TUHS main list" <tuhs(a)tuhs.org>
Cc: "Douglas McIlroy" <douglas.mcilroy(a)dartmouth.edu>
Sent: Monday, September 30, 2024 4:03:15 PM
Subject: [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
On Mon, Sep 30, 2024 at 3:12 PM Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
no one ever told them that even the eldest C can be used in a safe
way;
Perhaps we have different meanings of the word safe.
void foo(char *p) { /* interesting stuff here */ ; free(p); }

void bar() {
    char *p = malloc(20);
    foo(p);
    printf("foo is %s\n", p);
    foo(p);
}
Why should I have to think about this code when the language already knows what is wrong?
No one would make the claim that programming in machine "language" is safe.
No one would make the claim that programming in assembly "language" is safe.
I've always viewed C as a portable assembler. I think the real issue has nothing to do with the "safety" of C, but rather the "safety" of your-choice-of-C-libraries-and-methods.
My $.02
Jim
FWIW, I just saw this in code generated by bison:
(yyvsp[-4].string_val), (yyvsp[-2].string_val), (yyvsp[0].string_val)
(IIUC) referencing the addresses under the top of the stack when passing
$2, $4, $6 into a function from an action (skipping a couple of
tokens). So the sign just depends on which way the stack is growing.
As for range checking of pointers into a malloc'd block of memory, the
pointer could have just 2 things: the address at the start of the block
and the pointer itself, some moving address in the block; and then
before the start of the block malloc could stash the address of the end
of the block (where it could be referenced by all pointers into the
block). So instead of a triple word, the pointer is a double word, and
the malloc'd block has an extra word before it. This must have been
done before by someone, somewhere.
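A toy sketch of that layout in C (the names and details here are my own
guesses at the scheme, not a hardened allocator): one extra word before
the block records the address one past its end, so any pointer derived
from the base can find the limit.

```c
#include <stdlib.h>

/* Toy bounds-checked allocator: stash the end address in a header
 * word immediately before the returned block. */
void *bc_malloc(size_t n) {
    char **hdr = malloc(sizeof(char *) + n);
    if (hdr == NULL)
        return NULL;
    char *block = (char *)(hdr + 1);
    hdr[0] = block + n;          /* end address lives before the block */
    return block;
}

/* The "double word" pointer: the block's base plus a moving cursor. */
struct bc_ptr {
    char *base;   /* start of the malloc'd block */
    char *cur;    /* current position within it */
};

/* Check cur against [base, end), reading the stashed end address. */
int bc_in_bounds(const struct bc_ptr *p) {
    char *end = ((char **)p->base)[-1];
    return p->cur >= p->base && p->cur < end;
}
```

Every pointer into the block can reach the limit through the base, at
the cost of one hidden word per allocation rather than a word per
pointer.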
I don't think of pointers and arrays in C as the same thing, but rather
array references as an alternate syntax for pointer arithmetic (or vice
versa).
- Aron
Moved to Coff, because it's about programming style, not history.
> Perhaps I'm missing something? Clever arithmetic in the index
> calculation aside, this is semantically different than using an actual
> negative integer to index into an array? Moreover, if the intent is to
> start the sequence with 0, why set `fib(0)` to 1? How is this
> substantially different from the usual way of writing this:
I said the Fibonacci example was silly. Maybe you'll be more convinced by
the binomial-coefficient program below.
The array of interest is fib. base is simply scaffolding and doesn't appear
in the working code. You won't find the ith Fibonacci in base[i]; it's in
fib(i). But fib(-1) exists. What's important is that the C convention of
array indexes beginning at 0 has been circumvented.
I could be accused of subterfuge in depending on the semantics of static
storage to initialize fib(-1) to zero. Subterfuge or not, it's customary C
usage. The binomial-coefficient program relies on "out-of-bounds" zeros
abutting two sides of a triangle.
int base[N][N+2];
#define binom(n,i) base[n][(i)+1]

void fill() {
    int n, i;
    binom(0,0) = 1;
    for (n = 1; n < N; n++)
        for (i = 0; i <= n; i++)
            binom(n,i) = binom(n-1,i) + binom(n,i-1);
}
I think the offset algorithm above looks better than the more typical one
below.
The two programs are almost identical in length.
int binom[N][N+1];

void fill() {
    int n, i;
    for (n = 0; n < N; n++) {
        binom[n][0] = 1;
        for (i = 1; i < n; i++)
            binom[n][i] = binom[n-1][i] + binom[n][i-1];
        binom[n][n] = 1;
    }
}
Doug
A summary of a couple of longer posts.
Ralph Corderoy and I used different C syntax to access an MxN array A,
whose subscripts begin at M0 and N0 in the respective dimensions.
Here's a somewhat simplified version of both. In our examples, M0 and
N0 were negative.
Mine:
int base[M][N];
#define A(i,j) base[i-M0][j-N0]
Ralph's:
int base[M][N];
int (*A)[N] = (int(*)[N])&base[-M0][-N0];
In my scheme element references must be written A(i,j). Ralph retains C
array syntax, A[i][j].
Doug
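For the curious, a small illustrative program showing that both
spellings address the same storage (M, N, and the origins are made-up
values; the pointer is named Ap here only to avoid clashing with the
macro):

```c
#define M 4
#define N 4
#define M0 (-2)   /* first row subscript */
#define N0 (-2)   /* first column subscript */

int base[M][N];

/* Doug's macro scheme: element (i,j) lives in base[i-M0][j-N0]. */
#define A(i,j) base[(i)-M0][(j)-N0]

/* Ralph's pointer scheme: point a row pointer at the (0,0) origin.
 * Strictly, indexing outside a row is undefined behavior in ISO C,
 * but that is the whole point of the trick, and it works on common
 * row-major implementations. */
int (*Ap)[N] = (int (*)[N])&base[-M0][-N0];
```

With these definitions, A(-2,-2) and Ap[-2][-2] both name base[0][0],
and A(1,1) and Ap[1][1] both name base[3][3].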
Mea culpa.
I thought I could offer a simple example, but my binomial-coefficient
program is wrong, and loses its force when corrected. For a convincing
example, see the program in
https://digitalcommons.dartmouth.edu/cs_tr/385/
Unfortunately you have to read a couple of pages of explanation to see what
this program is up to. It's a fun problem, though.
Doug
Moved to COFF.
On 2024-09-20 11:07, Dave Horsfall wrote (in part):
>
> Giggle... In a device driver I wrote for V6, I used the expression
>
> "0123"[n]
>
> and the two programmers whom I thought were better than me had to ask me
> what it did...
>
> -- Dave, brought up on PDP-11 Unix[*]
>
> [*]
> I still remember the days of BOS/PICK/etc, and I staked my career on Unix.
Working on embedded systems, we often used constructs such as a[-4] to
either read or modify stuff on the stack (for that particular
compiler+processor only).
S.
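For anyone who, like Dave's two colleagues, has to ask: a string
literal is an array, so it can be subscripted directly. A minimal
sketch:

```c
/* "0123"[n] indexes the string literal itself, yielding the character
 * '0' + n for n in 0..3 -- a compact digit-to-character lookup table. */
char digit(int n) {
    return "0123"[n];
}
```

The same idiom turns up as a translation table, e.g. mapping a 2-bit
field to a printable character without a switch.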
> I you have historical resources on Plan 9 or Inferno, or are reminded of
> any interesting tidbits, you can also share them here, as this list is
> already recognized by historians as a legitimate source.
Can someone tell me where the original "here" of the quoted message is?
Thanks,
Doug
Hi all, Edouard asked me to pass this e-mail on to both TUHS and COFF lists.
Cheers, Warren
----- Forwarded message from Edouard Klein <edouardklein(a)gmail.com> -----
Subject: History tract during the next IWMP9 in Paris next May
Date: Thu, 29 Aug 2024 22:46:30 +0200 (1 week, 4 days, 19 hours ago)
Dear Unix history enthusiasts,
The 11th International Workshop on Plan 9 will be held in Paris on May
22-24 2025.
One of the focus areas this year will be Plan 9's history and its
influence on later computer science and industry trends.
The history team at the CNAM (where the conference will be held) has
agreed to help us prepare for the event and stands ready to record oral
histories, or any other format that would make the participants happy.
They had organized in 2017 a "colloque" at which Clem spoke (and I
listened somewhere in the audience) on UNIX:
https://technique-societe.cnam.fr/colloque-international-unix-en-europe-ent…
I will keep the list posted as our efforts pan out, but I thought I'd
get the word out as soon as possible.
If you have historical resources on Plan 9 or Inferno, or are reminded of
any interesting tidbits, you can also share them here, as this list is
already recognized by historians as a legitimate source.
The program committee members, many (if not all) of whom roam this very
list, would welcome any proposal or contributions in this area :)
The CfP is at:
http://iwp9.org/
Looking forward to reading what you care to share, or to seeing you in
person in Paris,
Cheers,
Edouard.
----- End forwarded message -----
I just noticed this:
Sep 2018: The Multics History Project Archives were donated to the Living
Computer Museum in Seattle. This included 11 boxes of tapes, and 58 boxes of
Multics and CTSS documentation and listings. What will happen to these items
is unknown.
https://multicians.org/multics-news.html#events
That last sentence is ironic; I _assume_ it was written before the recent news.
I wonder what will happen to all such material at the LCM. Anyone know?
Noel
That's where it all began for Unix in Oz (the Dept of Power Systems paid
for the Ed5 tape, as I recall). I'm told that the campus has changed so
much it's now unrecognisable...
https://www.openday.unsw.edu.au/planner
-- Dave
I'm cleaning out my desk as retirement looms (a few more months) and
found my Sun coffee mug!
https://udel.edu/~mm/sun/
The back of the mug says such good things, yet...
Mike Markowski
> From: Larry McVoy
{Moving this to COFF, as it's not UNIX-related. I'll have another reply there
as well, about the social media point.}
> The amazing thing, to me, is I was a CS student in very early 1980's
> and I had no idea of the history behind the arpanet.
I don't think that was that uncommon; at MIT (slightly earlier, I think -
-'74-'77 for me) the undergrads weren't learning anything about networking
there either, then.
I think the reason is that there wasn't much to teach - in part because we
did not then know much about networking, and in part because it was not yet
crystal clear how important it would become (more below on that).
There was research going on in the area, but even at MIT one doesn't teach
(or didn't then; I don't know about now) on-going research subjects to
undergrads. MIT _did_ have, even then, a formal UROP ('undergrad research
opportunities') program, which allowed undergrads to be part of research
groups - a sheer genius idea - which in some fast-moving fields, like CS, was
an inestimable benefit to more forward undergrads in those fields.
I joined the CSR group at LCS in '77 because I had some operating system
ideas I wanted to work on; I had no idea at that point that they were doing
anything with networks. They took me on as the result of the sheerest chance;
they had just gotten some money from DARPA to build a LAN, and the interface
was going to be built for a UNIBUS PDP-11, and they needed diagnostics, etc
written; and they were all Multicians. I, by chance, knew PDP-11 assembler -
which none of them did - the MIT CS introductory course at that point taught
it. So the deal was that I'd help them with that, and I could use the machine
to explore my OS ideas in return.
Which never really happened; it fairly quickly became clear to me that data
networking was going to have an enormous impact on the world, and at that
point it was also technically interesting, so I quickly got sucked into that
stuff. (I actually have a written document hiding in a file drawer somewhere
from 1978 or so, which makes it plain that I'm not suffering 20-20
hindsight here, in talking about foreseeing the impact; I should dig it up.)
The future impact actually wasn't hard to foresee: looking at what printed
books had done to the world, and then telegraphs/telephones, and what
computers had already started to do at that point, it was clear that
combining them all was going to have an incredible impact (and we're still
adapting to it).
Learning about networking at the time was tricky. The ARPANET - well, NCP and
below - was pretty well documented in a couple of AFIPS papers (linked to at
the bottom here:
https://gunkies.org/wiki/ARPANET
which I have a very vague memory I photocopied at the time out of the bound
AFIPS proceedings in the LCS library). The applications were only documented
in the RFC's.
(Speaking of which, at that level, the difference between the ARPANET and the
Internet was not very significant - it was only the internals, invisible to
the people who did 'application' protocols, that were completely different.
HTTP would probably run just fine on top of NCP, for instance.)
Anything past that - the start of the internet work - I picked up i) by
direct osmosis from other people in CSR who were starting to think about
networks - principally Dave Clark and Dave Reed - and then ii) from documents
prepared as part of the TCP/IP effort, which were distributed electronically.
Which is an interesting point; the ARPANET was a key tool in the internet
work. The most important aspect was email; non-stop discussion between the
widely separated groups who were part of the project. It also made document
distribution really easy (which had also been true of the latter stages of
the ARPANET project, with the RFC's). And of course it was also a long-haul
network that we used to tie the small internets at all the various sites
(BBN, SRI, ISI - and eventually MIT) into the larger Internet.
I hate to think about trying to do all that work on internets, and the
Internet, without the ARPANET there as a tool.
Noel
Kevin Bowling wrote in
<CAK7dMtAH0km=RLqY0Wtuw6R7jXyWg=xQ+SPWcQA-PLLaTZii0w(a)mail.gmail.com>:
|On Wed, Aug 14, 2024 at 11:59 AM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
|> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com> \
|> wrote:
|>> ...
|>> The issue came when people started using the mail system as a
|>> programmatic messaging scheme (i.e., fork: some_program | mail user)
|>> and other programs started to parse the output.
|>> ...
|> Mail as IPC...that's what I'm reading from that anyway...now that's
|> an interesting concept. Did that idea ever grow any significant
|> legs? I can't tell if the general concept is clever or systems abuse,
|> in those days it seems like it could've gone either way.
|
|I like Clem's answer on mail IPC/RPC.
|
|To add I have heard some stories of NNTP being used once upon a time
|at some service providers the way ansible/mcollective/salt might be
|used to orchestrate UNIX host configurations and application
|deployments. The concept of Control messages is somewhat critical to
|operations, so it's not totally crazy, but isolating article flows
|would give me some heartburn if the thing has privileged system
|access.. would probably want it on a totally distinct
|instance+port+configuration.
|
|Email and Usenet both have some nice properties of implementing a
|"Message Queue" like handling offline hosts when they come back. But
|the complexity of mail and nntp implementations lean more towards
|system abuse IMO.
The IETF is working on SML (structured email)
https://datatracker.ietf.org/group/sml/about/
which aims at machine-interpretable email messages (and message parts).
|> I guess it sorta did survive in the form of automated systems today
|> expecting specially formatted emails to trigger "stuff" to happen.
--steffen

Der Kragenbaer, The moon bear,
der holt sich munter he cheerfully and one by one
einen nach dem anderen runter wa.ks himself off
(By Robert Gernhardt)
Matt - I'm going to BCC: TUHS and move this to COFF - since while UNIX was
certainly in the mix in all this, it was hardly the first or the only place
it happened.
On Wed, Aug 14, 2024 at 2:59 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com>
> wrote:
>
> >
> > ...
> > The issue came when people started using the mail system as a
> programmatic messaging scheme (i.e., fork: some_program | mail user) and
> other programs started to parse the output.
> > ...
>
> Mail as IPC...that's what I'm reading from that anyway...now that's an
> interesting concept.
The history is kind of funny. The ARPANET gives us FTP as a way to
exchange files. So, people figure out how to hack the mailer to call FTP
to send a file remotely and set up a cron/batch submission, a.k.a.
RJE. This was encouraged by DARPA because part of the justification of the
ARPANET was being able to share resources, and this enabled the
supercomputers of the day to provide cycles to DARPA folks who might not
have access to larger systems. Also, remember, mailers were local to
systems at that point.
So someone gets the bright idea to hook the mailer into this system --
copy the "mail file" and set up a remote job to mail it locally. Let's
just say this proves to be a cool idea, and the idea of intersystem email
begins in the >>ARPANET<< community.
So the idea of taking it to the next level was not that far off. The
mailer transports started to offer (limited) features to access services.
By the time of Eric's "delivermail", he had added a feature - thinking of
system logs - that allowed specific programs to be called. In fact, it may
have been invented elsewhere, but before Eric would formalize "vacation",
Jim Kleckner and I hacked together an "awk" script to do that function on
the UCB CAD 4.1 systems. I showed it to Sam and a few other people, and I
know it went from Cory to Evans fairly quickly. Vacation(1) was written
shortly thereafter to be a bit more flexible than our original script.
> Did that idea ever grow any significant legs?
I guess the word here is significant. It certainly was used where it made
sense. In the CAD group, we had simulations that might run for a few
days. We used to call the mailer every so often to send status and
sometimes do something like a checkpoint. It led to Sam writing syslogd,
particularly after Joy created UNIX domain sockets. But I can say we used
it in a number of places in systems-oriented or long-running code before
syslogd, as a scheme to log errors and deal with stuff.
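The pattern Clem describes - a long-running job periodically mailing status
to its owner - can be sketched in a few lines of shell. This is a
hypothetical illustration, not the CAD group's actual script; the address,
file names, and loop count are invented:

```shell
#!/bin/sh
# Sketch of "mail as logging": a long-running job appends checkpoints
# to a status file and mails the latest line to its owner each time.
USER_ADDR=owner@example.com   # hypothetical address
LOG=/tmp/sim.status
: > "$LOG"                    # start with an empty status file

i=0
while [ "$i" -lt 3 ]; do
    # ... one chunk of the long-running simulation would run here ...
    echo "checkpoint $i at $(date)" >> "$LOG"
    # Mail the latest status line; ignore failure if no mailer is present.
    tail -1 "$LOG" | mail -s "sim status" "$USER_ADDR" 2>/dev/null || true
    i=$((i + 1))
done
```

Before syslogd existed, this was about the only portable way for an
unattended job to get a message off the machine it was running on.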
> I can't tell if the general concept is clever or systems abuse, in those
> days it seems like it could've gone either way.
>
> I guess it sorta did survive in the form of automated systems today
> expecting specially formatted emails to trigger "stuff" to happen.
Exactly.
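The surviving form segaloco mentions - automated systems acting on specially
formatted emails - can also be sketched in shell. The message, the
"DO-BUILD" trigger keyword, and the file paths below are all invented for
illustration:

```shell
#!/bin/sh
# Sketch of an automated system scanning an incoming message for a
# specially formatted subject line that triggers "stuff" to happen.
MSG=/tmp/incoming.eml

# Fabricate a sample incoming message for the demonstration.
cat > "$MSG" <<'EOF'
From: robot@example.com
Subject: DO-BUILD release-1.2

body text
EOF

# Act only when the magic subject keyword is present.
if grep -q '^Subject: DO-BUILD' "$MSG"; then
    target=$(sed -n 's/^Subject: DO-BUILD //p' "$MSG")
    echo "triggered build for $target" > /tmp/trigger.out
fi
```

A real deployment would of course need to authenticate the sender before
acting - which is exactly why the line between "clever" and "systems abuse"
was always thin.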
Greg, this needs to move to COFF, so I'm BCCing TUHS in my reply. (My error
in the original message was that I should have BCC'd everyone but COFF, so
replies were directed there. Mea culpa.)
However, since I have seen different people on all these lists bemoan the
loss of the LCM+L, I hope that with the broader announcement, a number of
you will consider the $36/yr membership to help Stephen and his team keep
these systems running and at least the "labs" part of the old LCM+L
mission alive.
On Thu, Aug 1, 2024 at 9:56 PM Gregg Levine <gregg.drwho8(a)gmail.com> wrote:
> Hello!
> Pardon me for asking Clem, but would you mind naming the survivors?
The details are still coming out from Stephen and friends -- I would
recommend listening to his presentation and then maybe joining the List
Server at SDF by sending a (plain text) email to majordomo(a)sdf.org with
the subject and body containing the line: subscribe museum-l
> I have an idea what these Toads are, and of course what Multics happened
> to be, but that's it.
>
LCM+L owned a real Honeywell 6180 front panel. The folks in their lab
interfaced it to a microcontroller (I think it was a Raspberry Pi 3 or 4,
but it could have been something like a BeagleBone; I never knew). It was
running Multics Release 12.8 on a SimH-derived Honeywell 6180 simulator
[I'm not sure if those changes ever made it back to OpenSIMH - I have not
personally tried it myself]. This system seems to have been moved to SDF's
new site.
Also, a number of the MIT Multics tapes had been donated to the LCM+L.
These have survived, and the SDF has them. I'll not repeat Stephen's
report here, but he describes what they have and are doing.
Miss Piggy is the PDP-11/70 that Microsoft purchased and used for their
original SW development. It has been running a flavor of Unix Seventh
Edition - I do not know what type of updates were added, but I expect the
DEC v7m and the V7 addendum to be there. You can log in and try it yourself
by ssh'ing to menu(a)sdf.org and picking Miss Piggy in the UNIX submenu.
Miss Piggy
used to live and be on display at the LCM+L, but Stephen and the SDF were
involved in its admin/operation. Stephen says in his presentation that they
are trying to get Miss Piggy back up and running [my >>guess<< is that the
"Miss Piggy" instance on the SDF menu is currently running on an OpenSIMH
instance while the real hardware is being set up at the new location].
In the early 1980s, as DEC started to de-commit to the 36-bit line after
they introduced the 32-bit Vax systems, a number of PDP-10 clones appeared
on the market. For instance, the System Concepts SC-40 was what
CompuServe primarily switched to. Similarly, a number of ex-Stanford AI
types created the XKL TOAD-1, a KL10 clone. SDF and LCM+L owned several of
these two styles of systems, which were on display and available for login.
Since Twenex.org is live (and has been) and Stephen shows a picture of the
SC40, I am (again) >>guessing<< that these have all been moved to the new
location for SDF.
Stephen mentioned in his presentation that they have the LCM+L's VAX 7000
but do not yet have the 3-phase power in their computer room. He suggested
that it is one of the most popular machines in the SDF menu, and they
intend to make it live shortly.
It is unclear what became of some of the other items. It was pointed out
that a CDC 6500 is extremely expensive to operate from a power
standpoint, so they offer an NOS login using the DtCyber simulator. He
never mentioned what became of the former Purdue machine that the LCM owned
and had restored.
I am interested in knowing what happened to the two PDP-7s. I know that at
least one was privately owned but was being restored and displayed at the
LCM+L. It was on one of these systems that Unix V0 was resurrected and run
for the UNIX 50th Anniversary Party that the LCM+L hosted. The LCM+L had
some interesting peripherals. For instance, the console for Miss Piggy was
a somewhat rare ASR 37 [which is upper/lower case and the "native" terminal
for Research Unix]. I hope they have it also. The LCM+L had a number of
different types of tape transports for recovering old data. Stephen
mentioned that they have some of these but did not elaborate.
Clem