This stuff is extremely poorly preserved. No time like the present to
fix that. I was reading Tom's blog
https://akapugs.blog/2018/05/12/370unixpart3/ and have been aware of
Amdahl UTS and a couple of the other ports for a while.
I've got an HP 88780 quad density 9-track and access to a SCSI IBM
3490. Can fit them in air cargo and bring a laptop with a SCSI card.
Tell me where to go.
Regards,
Kevin
Tangential, but interesting:
https://blog.edx.org/congratulations-edx-prize-2019-winners/
Where would you expect a MOOC about C to originate? Not, it
turns out, in a computer-science department. Professor
Bonfert-Taylor is a mathematician in the school of
engineering at Dartmouth.
Doug
On Tue, 19 Nov 2019, Tony Finch wrote:
> Amusingly POSIX says the C standard takes precedence wrt the details of
> gets() (and other library functions) and C18 abolished gets(). I'm
> slightly surprised that the POSIX committee didn't see that coming and
> include the change in the 2018 edition...
Didn't know that gets() had finally been abolished; it's possibly the most
unsafe function (OK, macro) on the planet. I've long been tempted to
remove gets() and see what breaks...
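For illustration, a minimal sketch of why gets() cannot be used safely
and what the bounded replacement looks like; the buffer size here is
arbitrary:

    #include <stdio.h>

    int main(void)
    {
        char buf[64];   /* arbitrary size, for illustration */

        /*
         * gets(buf) has no way to learn how big buf is, so any input
         * line longer than 63 characters overruns it -- which is why
         * the standard finally dropped it.
         */

        /* fgets() is told the size and stops at it. */
        if (fgets(buf, sizeof buf, stdin) != NULL)
            fputs(buf, stdout);
        return 0;
    }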
-- Dave
Bakul Shah:
Unfortunately strcpy & other buffer overflow friendly
functions are still present in the C standard (I am looking at
n2434.pdf, draft of Sept 25, 2019). Is C really not fixable?
====
If you mean `can C be made proof against careless programmers,'
no. You could try but the result wouldn't be C. And Flon's
Dictum applies anyway, as always.
It's perfectly possible to program in C without overflowing
fixed buffers, just as it's perfectly possible to program in
C without dereferencing a NULL or garbage pointer. I don't
claim to be perfect, but before the rtm worm rubbed my nose
in such problems, I was often sloppy about them, and afterward
I was very much aware of them and paid attention.
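A small sketch of the sort of attention that takes -- the buffer name
and size below are invented:

    #include <stdio.h>

    static char name[32];   /* invented fixed-size buffer */

    static void set_name(const char *src)
    {
        /*
         * strcpy(name, src) would trust src to fit: the classic
         * overflow.  snprintf() respects the destination size and
         * always NUL-terminates, truncating src if it is too long.
         */
        snprintf(name, sizeof name, "%s", src);
    }

    int main(void)
    {
        set_name("a string rather longer than thirty-two characters, say");
        puts(name);     /* truncated, but safely bounded */
        return 0;
    }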
That's all I ask: we need to pay attention. It's not about
tools, it's about brains and craftsmanship and caring more
about quality than about feature count or shiny surfaces
or pushing the product out the door.
Which is a good bit of what was attractive about UNIX in
the first place--that both its ideas and its implementation
were straightforward and comprehensible and made with some
care. (Never mind that it wasn't perfect either.)
Too bad software in general and UNIX descendants in particular
seem to have left all that behind.
Norman Wilson
Toronto ON
PS: if you find this depressing, cheer yourself up by watching
the LCM video showing off UNICS on the PDP-7. I just did, and
it did.
I hadn't seen this yet - apologies if it's not news:
https://livingcomputers.org/Blog/Restoring-UNIX-v0-on-a-PDP-7-A-look-behind…
Quoting:
"I recently sat down with Fred Yearian, a former Boeing engineer, and
Jeff Kaylin, an engineer at Living Computers, to talk about their
restoration work on Yearian’s PDP-7 at Living Computers: Museum +
Labs."
[...]
Up until the discovery of Yearian’s machine, LCM+L’s PDP-7 was
believed to be the only operational PDP-7 left in the world. Chuckling
to himself, Yearian recalls hearing this history presented during his
first visit to LCM+L.
“I walked in the computer museum, and someone said ‘Oh, this is the
only [PDP-7] that’s still working’.
And I said, ‘Well actually, I got one in my basement!’”
[end quote]
Fun story - and worthy work. Nicely done.
--
Royce
Hi.
Doug McIlroy is probably the best person to answer this.
Looking at the V3 and V4 manuals, there is a reference to the m6 macro
processor. The man page thereof refers to
A. D. Hall, The M6 Macroprocessor, Bell Telephone Laboratories, 1969
1. Is this memo available, even in hardcopy that could be scanned?
2. What's the history of m6, was it written in assembler? C?
3. When and why was it replaced with m4 (written by DMR IIRC)?
More generally, what's the history of m6 prior to Unix?
IIRC, the macro processor in Software Tools was inspired by m4,
and in particular its immediate evaluation of its arguments during
definition.
I guess I'll also ask: how widespread was the use of macro processors
in high-level languages? They were big for assembler, and PL/I had
a macro language, but I don't know of any other contemporary languages
that had them. Were the general-purpose macro processors used a lot?
E.g. with Fortran or Cobol or ...
I'm just curious. :-)
Thanks,
Arnold
I think I recall an explicit statement somewhere from an
interview with Robert that the worm was inspired partly
by Shockwave Rider.
I confess my immediate reaction to the worm was uncontrollable
laughter. I was out of town when it happened, so I first
heard it from a newspaper article (and wasn't caught up in
fighting it or I'd have laughed a lot less, of course); and
it seemed to me hilarious when I read that Robert was behind
it. He had interned with 1127 for a few summers while I was
there, so I knew him as very bright but often a bit careless
about details; that seemed an exact match for the worm.
My longer-term reaction was to completely drop my sloppy
old habit (common in those days not just in my code but in
that of many others) of ignoring possible buffer overflows.
I find it mind-boggling that people still make that mistake;
it has been literal decades since the lesson was rubbed in
our community's collective noses. I am very disappointed
that programming education seems not to care enough about
this sort of thing, even today.
Norman Wilson
Toronto ON
> That was the trouble; had he bothered to test it on a private network (as
> if a true professional would even consider carrying out such an act)[*] he
> would've noticed that his probability calculations were arse-backwards
Morris's failure to foresee the results of even slow exponential
growth is matched by the failure of the critique above to realize
that Morris wouldn't have seen the trouble in a small network test.
The worm assured that no more than one copy (and occasionally one clone)
would run on a machine at a time. This limits the number of attacks
that any one machine experiences at a time to roughly the
number of machines in the network. For a small network, this will
not be a major load.
The worm became a denial-of-service attack only because a huge
number of machines were involved.
I do not remember whether the worm left tracks to prevent its
being run more than once on a machine, though I rather think
it did. This would mean that a small network test would not
only behave innocuously; it would terminate almost instantly.
Doug
FYI.
----- Forwarded message from Linus Torvalds <torvalds@linux-foundation.org> -----
Date: Wed, 13 Nov 2019 12:37:50 -0800
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Larry McVoy <lm@mcvoy.com>
Subject: Re: enum style?
On Wed, Nov 13, 2019 at 10:28 AM Larry McVoy <lm@mcvoy.com> wrote:
>
> and asked what was the point of the #defines. I couldn't answer, the only
> thing I can think of is so you can say
>
> int flags = MS_RDONLY;
>
> Which is cool, but why bother with the enum?
For the kernel we actually have this special "type-safe enum" checker
thing, which warns about assigning one enum type to another.
It's not really C, but it's close. It's the same tool we use for some
other kernel-specific type checking (user pointers vs kernel pointers
etc): 'sparse'.
http://man7.org/linux/man-pages/man1/sparse.1.html
and in particular the "-Wenum-mismatch" flag to enable that warning
when you assign an enum to another enum.
It's quite useful for verifying that you pass the right kind of enum
to functions etc - which is a really easy mistake to make in C, since
they all just devolve into 'int' when they are used.
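A made-up example of the kind of slip meant here: standard C allows it,
because both enums devolve into int, and a checker like sparse with
-Wenum-mismatch is what flags it.

    /* Both enum types and the function are invented for illustration. */
    enum fs_flags  { FS_RDONLY = 1, FS_NOSUID = 2 };
    enum net_flags { NET_UP = 1, NET_LOOPBACK = 2 };

    static int mount_fs(enum fs_flags flags)
    {
        return (int)flags;
    }

    int main(void)
    {
        /* Wrong enum type: standard C allows it, since enums just
         * devolve into int; sparse's -Wenum-mismatch warns here. */
        return mount_fs(NET_LOOPBACK);
    }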
However, we don't use it for the MS_xyz flags: those are just plain
#define's in the kernel. But maybe somebody at some point wanted to do
something similar for the ones you point at?
The only other reason I can think of is that somebody really wanted to
use an enum for namespace reasons, and then noticed that other people
had used a #define and used "#ifdef XYZ" to check whether it was
available, and then instead of changing the enums to #defines, they
just added the self-defines.
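That combination would look something like the sketch below; the
constant names are borrowed from the MS_xyz discussion, and the values
and layout are invented. The enum supplies the identifiers, and the
self-#defines keep an "#ifdef MS_RDONLY" check working:

    /* Sketch of the "enum plus self-#define" pattern; values invented. */
    enum {
        MS_RDONLY = 1,
    #define MS_RDONLY MS_RDONLY
        MS_NOSUID = 2,
    #define MS_NOSUID MS_NOSUID
    };

    #ifdef MS_RDONLY        /* still true, because the macro exists too */
    int flags = MS_RDONLY;
    #endif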
In the kernel we do that "use #define for discoverability" quite a lot,
particularly with architecture-specific helper functions. So you might
have
static inline some_arch_fn(..) ...
#define some_arch_fn some_arch_fn
in an architecture file, and then in generic code we have
#ifndef some_arch_fn
static inline some_arch_fn(..) /* generic implementation goes here */
#endif
as a way to avoid extra configuration variables for the "do I have a
special version of X?" question.
There's no way to ask "is the C symbol X available in this scope", so
using the pre-processor for that is as close as you can get.
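Spelled out as a complete example with an invented helper name
(arch_cpu_relax); in real code the two halves would live in an
architecture header and a generic header:

    /* "Architecture" half: define the helper and the marker macro. */
    static inline void arch_cpu_relax(void) { /* arch-specific magic */ }
    #define arch_cpu_relax arch_cpu_relax

    /* "Generic" half: compiled only when no marker macro exists. */
    #ifndef arch_cpu_relax
    static inline void arch_cpu_relax(void) { /* portable fallback */ }
    #endif

    int main(void)
    {
        arch_cpu_relax();   /* resolves to the architecture version */
        return 0;
    }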
Linus
----- End forwarded message -----
--
---
Larry McVoy                lm at mcvoy.com                http://www.mcvoy.com/lm
Most of this post is off topic; the conclusion is not.
On the afternoon of Martin Luther King Day, 1990, AT&T's
backbone network slowed to a crawl. The cause: a patch intended
to save time when a switch that had taken itself off line (a
rare, but routine and almost imperceptible event) rejoined the
network. The patch was flawed; a lock should have been taken
one instruction sooner.
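The real switch code isn't public, so the sketch below is entirely
invented, but it shows the class of bug: the state is examined before
the lock is taken, so two parties can act on the same stale view.

    #include <pthread.h>

    /* All names are invented; this is not the actual switch code. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int online;

    static void rejoin_buggy(void)
    {
        if (!online) {                  /* checked before locking...   */
            pthread_mutex_lock(&lock);  /* ...one instruction too late */
            online = 1;
            /* announce "I'm back" to the neighbors */
            pthread_mutex_unlock(&lock);
        }
    }

    static void rejoin_fixed(void)
    {
        pthread_mutex_lock(&lock);      /* lock taken one step sooner  */
        if (!online) {
            online = 1;
            /* announce "I'm back" to the neighbors */
        }
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        rejoin_buggy();                 /* harmless when single-threaded */
        rejoin_fixed();
        return 0;
    }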
Bell Labs had tested the daylights out of the patch by
subjecting a real switch in the lab to torturously heavy, but
necessarily artificial loads. It may also have been tested on
a switch in the wild before the patch was deployed throughout
the network, but that would not have helped.
The trouble was that a certain sequence of events happening
within milliseconds on calls both ways between two heavily
loaded switches could evoke a ping-pong of the switches leaving
and rejoining the network.
The phenomenon was contagious because of the enhanced odds of a
third switch experiencing the bad sequence with a switch that
was repeatedly taking itself off line. The basic problem (and
a fortiori the contagion) had not been seen in the lab because
the lab had only one of the multimillion-dollar switches to
play with.
The meltdown was embarrassing, to say the least. Yet nobody
ever accused AT&T of idiocy for not first testing on a private
network this feature that was inadvertently "designed to
compromise" switches.
Doug