So in most technical circles, and indeed in the research communities surrounding
UNIX, the name of the system was just that, UNIX, often prefixed with a
descriptor of the stream in question, be it Research, USG, or BSD/Berkeley;
but in any case the name UNIX itself was descriptive of the operating system
for many of its acolytes and disciples.
However, in AT&T literature and media, adding "System" to the end of the
formal name seems to have become standard, de facto if not de jure. This can be seen for
instance in manual edits in the early 80s with references to just "UNIX" being
replaced with variations on "The UNIX System", sometimes haphazardly as if done
via a search and replace with little review. This too is evident in some
informative films published by AT&T, available on YouTube today as
"The UNIX Operating System" and "UNIX: Making Computers Easier to Use"[1][2].
Discrepancies in the titles of the videos notwithstanding, throughout them it seems
there are several instances where audio of an interviewee saying
"The UNIX System" was edited over what I presume were instances of them simply
saying UNIX.
I'm curious if anyone has the scoop on whether this was an attempt to echo the
"One Bell System" and related terminology, marketing tag lines like
"The System is the Solution", and/or the naming of the revisions themselves as
"System <xyz>". On the other hand, could it have simply been for clarity, with
the uninitiated not being able to glean from the product name anything about it,
making the case for adding "System" in formal descriptions to give them a little
bit of a hint.
Bell Labs folks especially: was there ever some grand "thou shalt call it
'The UNIX System' in all PR" directive, or was it just something that organically
happened over time as bureaucratic powers that be got their hands on a part of the
steering wheel?
- Matt G.
[1] - https://www.youtube.com/watch?v=tc4ROCJYbm0
[2] - https://www.youtube.com/watch?v=XvDZLjaCJuw
For the non-TUHS folks who don't know me, I worked in
Center 1127 (the Bell Labs Computing Science Research
Center) 1984-1990, and had some hand in 9th and 10th
Edition Manuals and what passed for the V8-V10
`distributions.'
To answer Branden's points:
A. I do know what version of troff was used to typeset
the 8th through 10th Edition manuals. It was the version
we were using in 1127 at the time, which was indeed
Kernighan's. The macro packages probably matter more
than the particular troff edition.
For the 10th Edition (which files I have at hand), there
was an individual mkfile (mk(1)) for each paper, so
in principle there was no fixed formatting package,
but in practice everything appears to have used troff -mpm,
with various preprocessors according to the paper: prefer,
tbl, pic, ideal, and in some cases additional macros and even
odds and ends of sed and awk.
If you wanted to re-render things from scratch you'd
want all the tools. But if you have the real troff
sources you'll have all the mkfiles--things were stored
one paper per directory.
-mpm (mpm(6) in 10/e vol 1) was a largely ms-compatible
package with special expertise in page layout.
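Roughly, each such mkfile amounted to little more than a rule running a
preprocessor pipeline into troff -mpm; something along these lines, where
the file names (and the exact preprocessor order) are invented for
illustration rather than copied from any real 10/e directory:

    # hypothetical recipe body for one Volume 2 paper; names are invented
    prefer paper.t | tbl | pic | troff -mpm >paper.out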
B. There was no such thing as a `release' after V7.
In fall 1984 we made a single V8 snapshot. Making
that involved a lot of fiddly work, because we didn't
normally try to build systems from scratch; when we
brought in a new computer we cloned it from an existing
one. So there was lots of fiddly work to make sure
every program in /bin and /usr/bin on the tape compiled
correctly from the source code that would be on the tape
when the cc and as and ld and libraries on the tape were
used.
We sent V8 tapes to about a dozen external places, few
of which did anything with it (many probably never even
installed it). Which makes sense, by then we really
weren't a central source for Unix even within AT&T, let
alone to the world. Neither did we want the support
burden that would have carried--the group's charter was
research, after all, not software support. So the 9th
and 10th editions existed as manuals, but not as releases.
We did occasionally make one-off snapshots for other parts
of AT&T, and maybe for a university or two. (I definitely
remember taking a snapshot to help the official AT&T System N
Unix people set up a Research system at one point, and have
a vague memory that I may have carried a tape to a university
under a special one-off license letter.)
On the other hand, troff wasn't a rapid moving target, and
unlike the stars of the modern software world, we tried not
to break things unless there was a real reason to do so.
So I suspect the troff from any system of that era would
render the Volume 2 papers properly, and am all but certain
the 10th-edition-era troff would do so even for older manuals.
C. Just to be clear, the official 10th Edition manuals
published by Saunders College Publishing were made from
camera-ready copy prepared by us in 1127 (Doug McIlroy
did all the final work, I think) and printed on our
phototypesetter. We didn't ship them troff source, nor
even Postscript. We did everything including the tables
of contents and indexes and page numbering.
D. troff is indeed not TeX, and some of us think of that
as a feature, not a bug.
I think the odds are fairly good (but not 100%) that
groff would do a reasonable job of rendering the papers;
as I said, the hard part is the macro packages. I'm
not sure -mpm ever made it out of Research.
And there are probably copyright issues not just with
the software but with the papers themselves. The published
manuals bear a copyright notice, after all.
Norman Wilson
Toronto ON
(A much nicer place than suburban NJ, which is why
I left the Labs when I did)
> My understanding is that Unix V8-V10 were not full distributions but
patches.
"Patch" connotes individually distributed small fixes, not complete
working systems. I don't believe Branden meant that V8 was only a patch on
V7, but that's the natural interpretation of the statement.
V8-V10 were snapshots, yes, possibly not perfectly in sync with the
printed editions. But this was typical of Research editions, and
especially of Volume 2, which was originally called something like
"Documents for Use with Unix".
Doug
[looping in TUHS so my historical mistakes can be corrected]
Hi Alex,
At 2025-02-13T00:59:33+0100, Alejandro Colomar wrote:
> Just wondering... why not build a new PDF from source, instead of
> scanning the book?
A. I don't think we know for sure which version of troff was used to
format the V10 manual. _Probably_ Kernighan's research version,
which was similar to a contemporaneous DWB troff...but what
"contemporaneous" means in the 1989-1990 period is a little fuzzy.
Also, Kernighan may not have a complete source history of his
version of troff, it is presumably still encumbered by AT&T
copyrights, and he's been using groff for at least his last two
books (his Unix memoir and the 2nd edition of the AWK book).
B. It is hard to recreate a Research Unix V10 installation. My
understanding is that Unix V8-V10 were not full distributions but
patches. And because troff was commercial/proprietary software at
that (the aforementioned DWB troff), I don't know if Kernighan's
"Research troff" escaped Bell Labs or how consistently it could be
expected to be present on a system. Presumably any of a variety of
DWB releases would have "worked fine". How much they would have
varied in extremely fiddly details of typesetting is an open
question. I can say with some confidence that the mm package saw
fairly significant development. Of troff itself (and the
preprocessors one bumps into in the Volume 2 white papers) I'm much
more in the dark.
C. Getting a scan out there tells us at least what one software
configuration, deemed acceptable by the producers of the book,
generated, even if it's impossible to identify details of that software
configuration. That in turn helps us to judge the results of
_known_ software configurations--groff, and other troffs too.
D. troff is not TeX. Nothing like trip.tex has ever existed. A golden
platonic ideal of formatter behavior does not exist except in the
collective, sometimes contentious minds of its users.
> Doesn't groff(1) handle the Unix sources?
Assuming the full source of a document is available, and no part of its
toolchain requires software that is unavailable (like Van Wyk's "ideal"
preprocessor), then if groff cannot satisfactorily render a document
produced by the Bell Labs CSRC, I'd consider that presumptively a
bug in groff. It's a rebuttable presumption--if one document in one
place relied upon a _bug_ in AT&T troff to produce correct rendering, I
think my inclination would be to annotate the problem somewhere in
groff's documentation and leave it unresolved.
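Concretely, the kind of check I have in mind is simply to run a surviving
paper source through groff's preprocessors and compare the result with the
scan; something like the following, where the file name is an invented
example and -ms merely stands in for the Research -mpm macros, which groff
does not ship:

    # -R, -t, -p, and -e run refer, tbl, pic, and eqn respectively
    groff -R -t -p -e -ms -Tpdf paper.ms > paper.pdf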
For a case where groff formats a classic Unix document "better" (in
the sense of not unintentionally omitting a formatted equation) than
AT&T troff, see the following.
https://github.com/g-branden-robinson/retypesetting-mathematics
> I expect the answer is not licenses (because I expect redistributing
> the scanned original will be as bad as generating an apocryphal PDF in
> terms of licensing).
I've opined before that the various aspects of Unix "IP" ownership
appear to be so complicated and mired in the details of decades-old
contracts in firms that have changed ownership structures multiple
times, that legally valid answers to questions like this may not exist.
Not until a firm that thinks it holds the rights decides it's worth the
money to pay a bunch of archivists and copyright attorneys to go on a
snipe hunt.
And that decision won't be made unless said firm thinks the probability
is high that they can recover damages from infringers in excess of their
costs. Otherwise the decision simply sets fire to a pile of money.
...which isn't impossible. Billionaires do it every day.
> I sometimes wondered if I should run the Linux man-pages build system
> on the sources of Unix manual pages to generate an apocryphal PDF book
> of Volume 1 of the different Unix systems. I never ended up doing so
> for fear of AT&T lawyers (or whoever owns the rights to their manuals
> today), but I find it would be useful.
It's the kind of thing I've thought about doing. :)
If you do, I very much want to know if groff appears to misbehave.
Regards,
Branden
Dave Horsfall:
Silent, like the "p" in swimming? :-)
===
Not at all the same. Unix smelled much better
than its competitors in the 1970s and 1980s.
Norman Wilson
Toronto ON
John P. Linderman (not the JPL in Altadena):
On a faux-cultural note, Arthur C Clark wrote the "Nine Billion Names of
God" in the 50s.
===
Ken wishes he'd spelled it Clarke, as the author did.
Norman Wilson
Toronto ON
... it is possible to kill a person if you know his true name.
I'm trying to find the origin of that phrase (which I've likely mangled);
V5 or thereabouts?
-- Dave
All-in-one vs pipelined sorts brought to mind NSA's undeservedly obscure
dataflow language, POGOL, https://doi.org/10.1145/512927.512948 (POPL
1973). In POGOL one wrote programs as collections of routines that
communicated via named files, which the compiler did its best to optimize
away. Often this amounted to loop jamming or to the distributive law for
map over function composition. POGOL could, however, handle general
dataflow programming including feedback loops.
One can imagine a program for pulling the POGOL trick on a shell pipeline.
That could accomplish--at negligible cost--the conversion of a cheap demo
into a genuine candidate for intensive production use.
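As a toy illustration (the file name and pattern are invented), here is a
cheap-demo pipeline and the single-pass program a POGOL-style optimizer
might mechanically rewrite it into:

    # cheap demo: count occurrences of the second field of matching lines
    grep error log | awk '{ print $2 }' | sort | uniq -c

    # the same computation jammed into one pass over the data, modulo
    # output order, which here depends on awk's unordered array traversal
    awk '/error/ { n[$2]++ } END { for (k in n) print n[k], k }' log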
This consideration spurs another thought. Despite Unix's claim to build
tools to make tools, only a relatively narrow scope of higher-order tools
that take programs as data ever arose. After the bootstrapping B, there
were a number of compilers, most notably C, plus f77, bc, ratfor, and
struct. A slight variant on the idea of compiling was the suite of troff
preprocessors.
The shell also manipulates programs by composing them into larger programs.
Aside from such examples, only one other category of higher-order Unix
program comes to mind: Peter Weinberger's lcomp for instrumenting C
programs with instruction counts.
An offshoot of Unix was Gerard Holzmann's set of tools for extracting
model-checker models from C programs. These saw use at Indian Hill and most
notably at JPL, but never appeared among mainstream Unix offerings. Similar
tools exist in-house at Microsoft and elsewhere. But generally speaking we
have very few kinds of programs that manipulate programs.
What are the prospects for computer science advancing to a stage where
higher-level programs become commonplace? What might be in one's standard
vocabulary of functions that operate on programs?
Doug
I chanced upon a brochure describing the Perkin-Elmer Series 3200
(previously Interdata, later Concurrent Computer Corporation) Sort/Merge
II utility [1]. It is instructive to compare its design against that of
the contemporary Unix sort(1) program [2].
- Sort/Merge II appears to be marketed as a separate product (P/N
S90-408), whereas sort(1) was, and is, an integral part of Unix, used
throughout the system.
- Sort/Merge II provides interactive and batch command input modes;
sort(1) relies on the shell to support both usages.
- Sort/Merge II appears to be able to also sort binary files; sort(1)
can only handle text.
- Sort/Merge II can recover from run-time errors by interactively
prompting for user corrections and additional files. In Unix this is
delegated to shell scripts.
- Sort/Merge II has built-in support for tape handling and blocking;
sort(1) relies on pipes from/to dd(1) for this (see the sketch after
this list).
- Sort/Merge II supports user-coded decision subroutines written in
FORTRAN, COBOL, or CAL. Sort(1) doesn't have such support to this day.
One could construct a synthetic key with awk(1) if needed.
- Sort/Merge II can automatically "allocate" its temporary file. For
sort(1) file allocation is handled by the Unix kernel.
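As a sketch of the delegation mentioned above, with tape handling done by
dd(1) and a synthetic key built by awk(1), the Unix composition might read
as follows; the device names, block sizes, and key fields are all invented
for the example:

    # unblock a tape, prepend a synthetic key (field 2, then field 1),
    # sort on it, strip the key again, and reblock onto another tape
    dd if=/dev/rmt0 ibs=800 cbs=80 conv=unblock |
    awk '{ print $2 "\t" $1 "\t" $0 }' |
    sort |
    cut -f3- |
    dd of=/dev/rmt1 obs=800 cbs=80 conv=block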
To me this list is a real-life demonstration of the difference between
the then-prevalent approach of thoughtlessly agglomerating features into
a monolith and Unix's careful separation of concerns and modularization
via small tools. The same contrast appears in a more contrived setting
in J. Bentley's CACM Programming Pearls column where
Doug McIlroy critiques a unique word counting literate program written
by Don Knuth [3]. (I slightly suspect that the initial program
specification was a trap set up for Knuth.)
I also think that the design of Perkin-Elmer's Sort/Merge II shows the
influence of salespeople forcing developers to tack on whatever features
were required by important customers. Maybe the clean design of Unix
owes a lot to AT&T's operation under the 1956 consent decree that
prevented it from entering the computer market. This may have shielded
the system's design from unhealthy market pressures during its critical
gestation years.
[1]
https://bitsavers.computerhistory.org/pdf/interdata/32bit/brochures/Sort_Me…
[2] https://s3.amazonaws.com/plan9-bell-labs/7thEdMan/v7vol1.pdf#page=166
[3] https://doi.org/10.1145/5948.315654
Diomidis - https://www.spinellis.gr