Hi.
Doug McIlroy is probably the best person to answer this.
Looking at the V3 and V4 manuals, there is a reference to the m6 macro
processor. The man page thereof refers to
A. D. Hall, The M6 Macroprocessor, Bell Telephone Laboratories, 1969
1. Is this memo available, even in hardcopy that could be scanned?
2. What's the history of m6, was it written in assembler? C?
3. When and why was it replaced with m4 (written by DMR IIRC)?
More generally, what's the history of m6 prior to Unix?
IIRC, the macro processor in Software Tools was inspired by m4,
and in particular its immediate evaluation of its arguments during
definition.
I guess I'll also ask, how widespread was the use of macro processors
in high level languages? They were big for assembler, and PL/1 had
a macro language, but I don't know of any other contemporary languages
that had them. Were the general purpose macro processors used a lot?
E.g. with Fortran or Cobol or ...
I'm just curious. :-)
Thanks,
Arnold
I think I recall an explicit statement somewhere from an
interview with Robert that the worm was inspired partly
by Shockwave Rider.
I confess my immediate reaction to the worm was uncontrollable
laughter. I was out of town when it happened, so I first
heard it from a newspaper article (and wasn't caught up in
fighting it or I'd have laughed a lot less, of course); and
it seemed to me hilarious when I read that Robert was behind
it. He had interned with 1127 for a few summers while I was
there, so I knew him as very bright but often a bit careless
about details; that seemed an exact match for the worm.
My longer-term reaction was to completely drop my sloppy
old habit (common in those days not just in my code but in
that of many others) of ignoring possible buffer overflows.
I find it mind-boggling that people still make that mistake;
it has been literal decades since the lesson was rubbed in
our community's collective noses. I am very disappointed
that programming education seems not to care enough about
this sort of thing, even today.
Norman Wilson
Toronto ON
> That was the trouble; had he bothered to test it on a private network (as
> if a true professional would even consider carrying out such an act)[*] he
> would've noticed that his probability calculations were arse-backwards
Morris's failure to foresee the results of even slow exponential
growth is matched by the failure of the critique above to realize
that Morris wouldn't have seen the trouble in a small network test.
The worm ensured that no more than one copy (and occasionally one clone)
would run on a machine at a time. This limits the number of attacks
that any one machine experiences at a time to roughly the
number of machines in the network. For a small network, this will
not be a major load.
The worm became a denial-of-service attack only because a huge
number of machines were involved.
I do not remember whether the worm left tracks to prevent its
being run more than once on a machine, though I rather think
it did. This would mean that a small network test would not
only behave innocuously; it would terminate almost instantly.
Doug
FYI.
----- Forwarded message from Linus Torvalds <torvalds(a)linux-foundation.org> -----
Date: Wed, 13 Nov 2019 12:37:50 -0800
From: Linus Torvalds <torvalds(a)linux-foundation.org>
To: Larry McVoy <lm(a)mcvoy.com>
Subject: Re: enum style?
On Wed, Nov 13, 2019 at 10:28 AM Larry McVoy <lm(a)mcvoy.com> wrote:
>
> and asked what was the point of the #defines. I couldn't answer, the only
> thing I can think of is so you can say
>
> int flags = MS_RDONLY;
>
> Which is cool, but why bother with the enum?
For the kernel we actually have this special "type-safe enum" checker
thing, which warns about assigning one enum type to another.
It's not really C, but it's close. It's the same tool we use for some
other kernel-specific type checking (user pointers vs kernel pointers
etc): 'sparse'.
http://man7.org/linux/man-pages/man1/sparse.1.html
and in particular the "-Wenum-mismatch" flag to enable that warning
when you assign an enum to another enum.
It's quite useful for verifying that you pass the right kind of enum
to functions etc - which is a really easy mistake to make in C, since
they all just devolve into 'int' when they are used.
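A minimal sketch of the kind of mismatch it catches (the enum and
function names here are made up for illustration, not real kernel
types):

    /* Both enums silently decay to int in plain C, so the bogus
     * call below compiles without complaint; an enum-aware checker
     * can flag it. */
    enum write_mode { WRITE_SYNC, WRITE_ASYNC };
    enum read_mode  { READ_AHEAD, READ_NONE };

    static void submit_write(enum write_mode mode)
    {
            (void)mode;                     /* placeholder body */
    }

    int main(void)
    {
            submit_write(WRITE_SYNC);       /* intended */
            submit_write(READ_NONE);        /* wrong enum type, still just an int */
            return 0;
    }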
However, we don't use it for the MS_xyz flag: those are just plain
#define's in the kernel. But maybe somebody at some point wanted to do
something similar for the ones you point at?
The only other reason I can think of is that somebody really wanted to
use an enum for namespace reasons, and then noticed that other people
had used a #define and used "#ifdef XYZ" to check whether it was
available, and then instead of changing the enums to #defines, they
just added the self-defines.
In the kernel we do that "use #define for discoverability" quite a lot,
particularly with architecture-specific helper functions. So you might
have
static inline some_arch_fn(..) ...
#define some_arch_fn some_arch_fn
in an architecture file, and then in generic code we have
#ifndef some_arch_fn
static inline some_arch_fn(..) /* generic implementation goes here */
#endif
as a way to avoid extra configuration variables for the "do I have a
special version of X?" question.
There's no way to ask "is the C symbol X available in this scope", so
using the pre-processor for that is as close as you can get.
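Spelled out a bit more concretely (the file names and the function
below are invented, purely to show the shape of the pattern):

    /* hypothetical arch header, e.g. arch/foo/include/asm/widget.h */
    static inline int arch_widget_setup(void)
    {
            return 1;                       /* arch-specific version */
    }
    #define arch_widget_setup arch_widget_setup  /* advertise that it exists */

    /* generic header: used only when no arch override was seen */
    #ifndef arch_widget_setup
    static inline int arch_widget_setup(void)
    {
            return 0;                       /* generic fallback */
    }
    #endif

Generic callers just call arch_widget_setup() and get whichever
definition the preprocessor left standing.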
Linus
----- End forwarded message -----
--
---
Larry McVoy                  lm at mcvoy.com             http://www.mcvoy.com/lm
Most of this post is off topic; the conclusion is not.
On the afternoon of Martin Luther King Day, 1990, AT&T's
backbone network slowed to a crawl. The cause: a patch intended
to save time when a switch that had taken itself off line (a
rare, but routine and almost imperceptible event) rejoined the
network. The patch was flawed; a lock should have been taken
one instruction sooner.
Bell Labs had tested the daylights out of the patch by
subjecting a real switch in the lab to torturously heavy, but
necessarily artificial loads. It may also have been tested on
a switch in the wild before the patch was deployed throughout
the network, but that would not have helped.
The trouble was that a certain sequence of events happening
within milliseconds on calls both ways between two heavily
loaded switches could evoke a ping-pong of the switches leaving
and rejoining the network.
The phenomenon was contagious because of the enhanced odds of a
third switch experiencing the bad sequence with a switch that
was repeatedly taking itself off line. The basic problem (and
a fortiori the contagion) had not been seen in the lab because
the lab had only one of the multimillion-dollar switches to
play with.
The meltdown was embarrassing, to say the least. Yet nobody
ever accused AT&T of idiocy for not first testing on a private
network this feature that was inadvertently "designed to
compromise" switches.
Doug
M6 originated as a porting tool for the Fortran source code
for Stan Brown's Altran language for algebraic computation. M6
itself was originally written in highly portable Fortran.
Arnold asked, "How widespread was the use of macro processors
in high level languages? They were big for assembler, and
PL/1 had a macro language, but I don't know of any other
contemporary languages that had them."
Understanding "contemporary" to mean pre-C, I agree. Cpp,
a particularly trivial macroprocessor, has been heavily used
ever since--even for other languages, e.g. Haskell.
The rumor that Bob Morris invented macros is off the
mark. Macros were in regular use by the time he joined Bell
Labs. He did conceive an interesting "form-letter generator",
called "form", and an accompanying editor "fed". A sort of
cross between macros and Vannevar Bush's hypothetical memex
repository, these were among the earliest Unix programs and
appeared in the manual from v1 through v6.
Off-topic warning: pre-Unix stories follow.
Contrary to an assertion on cat-v.org, I did not invent macros
either. In 1959 Doug Eastwood and I, at the suggestion of
George Mealy, created the macro facility for SAP (SHARE assembly
program) for the IBM 704. However, the idea was in the air at
the time. In particular, we knew that GE already had macros,
though we knew no details about their syntax or semantics.
There were various attempts in the 1960s to build languages by
macro extension. The approach turned out to entail considerable
social cost: communication barriers arise when everyone
can easily create his own dialect. A case in point: I once
had a bright young mathematician summer employee who wrote
wonderfully concise code by heaping up macro definitions. The
result was inscrutable.
Macros caught on in a big way in the ESS labs at Indian Hill.
With a macro-defined switching language, code builds were
slow. One manager there boasted that his lab made more
thoroughgoing use of computers than other departments and
cited enormous consumption of machine time as evidence.
Steve Johnson recalls correctly that there was a set of macros
that turned the assembler into a Lisp compiler. I wrote it
and used it for a theorem-proving project spurred by Martin
Davis. (The project was blown away when Robinson published
the resolution principle.) The compiler did some cute local
optimization, taking account of facts such as Bob Morris's
astute observation that the 704 instruction TNZ (transfer on
nonzero) sets the accumulator to zero.
Doug
Dave Horsfall:
And for those who slagged me off for calling him an idiot, try this quick
quiz: on a scale from utter moron to sheer genius, what do you call
someone who deliberately releases untested software designed to compromise
machines that are not under his administrative control in order to make
some sort of a point?
=====
I'd call that careless and irresponsible. Calling it stupid or
idiotic is, well, a stupid, idiotic simplification that succeeds
in being nasty without showing any understanding of the real problem.
Carelessness and irresponsibility are characteristic of people
in their late teens and early 20s, i.e. Robert's age at the time.
Many of us are overly impressed with our own brilliance at that
age, and even when we take some care (as I think Robert did) we
don't always take enough (as he certainly didn't).
Anyone who claims not to have been at least a bit irresponsible
and careless when young is, in my opinion, not being honest. Some
of my former colleagues at Bell Labs weren't always as careful and
responsible as they should have been, even to the point of causing harm
to others. But to their credit, when they screwed up that way they
owned up to having done so, tried to make amends, and tried to do
better in future, just as Robert did. It was just Robert's bad
luck that he screwed up in such a public way and did harm to so
many people.
I save my scorn for those who are long past that age and still
behave irresponsibly and harmfully, like certain high politicians
and certain high-tech executives.
Probably future discussion of this should move to COFF unless it
relates directly to the culture and doings in 1127 or other historic
UNIX places.
Norman Wilson
Toronto ON
Sent to me by someone not on this list; I have no idea whether it's been
mentioned here before.
-- Dave
---------- Forwarded message ----------
To: Dave Horsfall <dave(a)horsfall.org>
Subject: Unix Programmer's Manual, 3rd edition (1973)
Hi Dave,
Some nostalgic soul has shared a PDF on the interwebz:
> MIT CSAIL (@MIT_CSAIL) tweeted at 3:12 am on Mon, Nov 04, 2019:
> #otd in 1971 Bell Labs released the first Unix Programmers Manual.
>
> Download the free PDF here: https://t.co/BYh3dAhaJU
I wonder what became of the first and second editions?
> From: Nemo Nusquam
> One comment .. stated that (s)he worked at The Bell and they wrote it
> "unix" (lower-case) to distinguish it from MULTICS. Anyone care to
> comment on this?
All the original Multics hardcopy documentation I have (both from GE and MIT,
as well as later material from Honeywell) spells it 'Multics'. Conversely, an
original V6 UPM spells it 'UNIX'; I think it switched to 'Unix' around the
time of V7. (I don't know about _really_ early, like on the PDP-7.)
The bit about case to differentiate _might_ be right.
Noel