For the non-TUHS folks who don't know me, I worked in
Center 1127 (the Bell Labs Computing Science Research
Center) 1984-1990, and had some hand in 9th and 10th
Edition Manuals and what passed for the V8-V10
`distributions.'
To answer Branden's points:
A. I do know what version of troff was used to typeset
the 8th through 10th Edition manuals. It was the version
we were using in 1127 at the time, which was indeed
Kernighan's. The macro packages probably matter more
than the particular troff edition.
For the 10th Edition (which files I have at hand), there
was an individual mkfile (mk(1)) for each paper, so
in principle there was no fixed formatting package,
but in practice everything appears to have used troff -mpm,
with various preprocessors according to the paper: prefer,
tbl, pic, ideal, and in some cases additional macros and even
odds and ends of sed and awk.
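For flavor, a hypothetical mkfile recipe in the spirit of the real ones
(the names and the preprocessor ordering here are illustrative guesses,
not copied from an actual 10/e mkfile):

	paper.out:	paper.t
		prefer paper.t | pic | ideal | tbl | troff -mpm >paper.out

The real recipes varied per paper, as I said.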
If you wanted to re-render things from scratch you'd
want all the tools. But if you have the real troff
sources you'll have all the mkfiles--things were stored
one paper per directory.
-mpm (mpm(6) in 10/e vol 1) was a largely ms-compatible
package with special expertise in page layout.
B. There was no such thing as a `release' after V7.
In fall 1984 we made a single V8 snapshot. Making
that involved a lot of fiddly work, because we didn't
normally try to build systems from scratch; when we
brought in a new computer we cloned it from an existing
one. Much of that fiddly work was making sure
every program in /bin and /usr/bin on the tape compiled
correctly from the source code that would be on the tape
when the cc and as and ld and libraries on the tape were
used.
We sent V8 tapes to about a dozen external places, few
of which did anything with it (many probably never even
installed it). Which makes sense: by then we really
weren't a central source for Unix even within AT&T, let
alone to the world. Neither did we want the support
burden that would have carried--the group's charter was
research, after all, not software support. So the 9th
and 10th editions existed as manuals, but not as releases.
We did occasionally make one-off snapshots for other parts
of AT&T, and maybe for a university or two. (I definitely
remember taking a snapshot to help the official AT&T System N
Unix people set up a Research system at one point, and have
a vague memory that I may have carried a tape to a university
under a special one-off license letter.)
On the other hand, troff wasn't a rapidly moving target, and
unlike the stars of the modern software world, we tried not
to break things unless there was a real reason to do so.
So I suspect the troff from any system of that era would
render the Volume 2 papers properly, and am all but certain
the 10th-edition-era troff would do so even for older manuals.
C. Just to be clear, the official 10th Edition manuals
published by Saunders College Publishing were made from
camera-ready copy prepared by us in 1127 (Doug McIlroy
did all the final work, I think) and printed on our
phototypesetter. We didn't ship them troff source, nor
even PostScript. We did everything, including the tables
of contents and indexes and page numbering.
D. troff is indeed not TeX, and some of us think of that
as a feature, not a bug.
I think the odds are fairly good (but not 100%) that
groff would do a reasonable job of rendering the papers;
as I said, the hard part is the macro packages. I'm
not sure -mpm ever made it out of Research.
And there are probably copyright issues not just with
the software but with the papers themselves. The published
manuals bear a copyright notice, after all.
Norman Wilson
Toronto ON
(A much nicer place than suburban NJ, which is why
I left the Labs when I did)
Although I edited the v7 through v10 manuals, I have no recollection of
why "system" crept into the title between v7 and v8. Resistance to
trademark edicts did grow. In v10, the cover and the man pages proclaimed
"Unix". However, the fossilized spelling, "UNIX", still appeared in the
introduction to Volume 1 and scattered throughout Volume 2.
Doug
So in most technical circles, and indeed in the research communities
surrounding UNIX, the name of the system was just that: UNIX, often
prefixed with a descriptor of the stream in question (Research, USG,
BSD/Berkeley). In any case, the name UNIX itself was descriptive of the
operating system for many of its acolytes and disciples.
However, in AT&T literature and media, addition of "System" to the end of the
formal name seemed to become de facto if not de jure. This can be seen for
instance in manual edits in the early 80s with references to just "UNIX" being
replaced with variations on "The UNIX System", sometimes haphazardly as if done
via a search and replace with little review. This too is evident in some
informative films published by AT&T, available on YouTube today as
"The UNIX Operating System" and "UNIX: Making Computers Easier to Use"[1][2].
Discrepancies in the titles of the videos notwithstanding, throughout
them there seem to be several instances where audio of an interviewee
saying "The UNIX System" was edited over what I presume were instances
of them simply saying UNIX.
I'm curious if anyone has the scoop on whether this was an attempt to echo the
"One Bell System" and related terminology, marketing tag lines like
"The System is the Solution", and/or the naming of the revisions themselves as
"System <xyz>". On the other hand, could it have simply been for clarity, with
the uninitiated not being able to glean from the product name anything about it,
making the case for adding "System" in formal descriptions to give them a little
bit of a hint.
Bell Labs folks especially: was there ever some grand "thou shalt call
it 'The UNIX System' in all PR" directive, or was it just something that
organically happened over time as the bureaucratic powers that be got
their hands on a part of the steering wheel?
- Matt G.
[1] - https://www.youtube.com/watch?v=tc4ROCJYbm0
[2] - https://www.youtube.com/watch?v=XvDZLjaCJuw
> My understanding is that Unix V8-V10 were not full distributions but
> patches.
"Patch" connotes individually distributed small fixes, not complete
working systems. I don't believe Brendan meant that v8 was only a patch on
v7, but that's the natural interpretation of the statement.
V8-v10 were snapshots, yes, possibly not perfectly in sync with the
printed editions. But this was typical of Research editions, and especially
of Volujme 2,
which was originally called something like "Documents for Use with Unix".
Doug
[looping in TUHS so my historical mistakes can be corrected]
Hi Alex,
At 2025-02-13T00:59:33+0100, Alejandro Colomar wrote:
> Just wondering... why not build a new PDF from source, instead of
> scanning the book?
A. I don't think we know for sure which version of troff was used to
format the V10 manual. _Probably_ Kernighan's research version,
which was similar to a contemporaneous DWB troff...but what
"contemporaneous" means in the 1989-1990 period is a little fuzzy.
Also, Kernighan may not have a complete source history of his
version of troff; it is presumably still encumbered by AT&T
copyrights, and he's been using groff for at least his last two
books (his Unix memoir and the 2nd edition of the AWK book).
B. It is hard to recreate a Research Unix V10 installation. My
understanding is that Unix V8-V10 were not full distributions but
patches. And because troff was commercial/proprietary software at
that (the aforementioned DWB troff), I don't know if Kernighan's
"Research troff" escaped Bell Labs or how consistently it could be
expected to be present on a system. Presumably any of a variety of
DWB releases would have "worked fine". How much they would have
varied in extremely fiddly details of typesetting is an open
question. I can say with some confidence that the mm package saw
fairly significant development. Of troff itself (and the
preprocessors one bumps into in the Volume 2 white papers) I'm much
more in the dark.
C. Getting a scan out there at least tells us what one software
configuration, deemed acceptable by the producers of the book,
generated, even if it's impossible to identify details of that software
configuration. That in turn helps us to judge the results of
_known_ software configurations--groff, and other troffs too.
D. troff is not TeX. Nothing like trip.tex has ever existed. A golden
platonic ideal of formatter behavior does not exist except in the
collective, sometimes contentious minds of its users.
> Doesn't groff(1) handle the Unix sources?
Assuming the full source of a document is available, and no part of its
toolchain requires software that is unavailable (like Van Wyk's "ideal"
preprocessor), then if groff cannot satisfactorily render a document
produced by the Bell Labs CSRC, I'd consider that presumptively a
bug in groff. It's a rebuttable presumption--if one document in one
place relied upon a _bug_ in AT&T troff to produce correct rendering, I
think my inclination would be to annotate the problem somewhere in
groff's documentation and leave it unresolved.
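For concreteness: the sort of invocation I'd expect to exercise such a
document with, assuming it uses the ms macros and the usual
preprocessors, is something like

    groff -p -t -e -ms -Tps doc.ms > doc.ps

where -p, -t, and -e run pic, tbl, and eqn respectively.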
For a case where groff formats a classic Unix document "better" (in
the sense of not unintentionally omitting a formatted equation) than
AT&T troff, see the following.
https://github.com/g-branden-robinson/retypesetting-mathematics
> I expect the answer is not licenses (because I expect redistributing
> the scanned original will be as bad as generating an apocryphal PDF in
> terms of licensing).
I've opined before that the various aspects of Unix "IP" ownership
appear to be so complicated and mired in the details of decades-old
contracts in firms that have changed ownership structures multiple
times, that legally valid answers to questions like this may not exist.
Not until a firm that thinks it holds the rights decides it's worth the
money to pay a bunch of archivists and copyright attorneys to go on a
snipe hunt.
And that decision won't be made unless said firm thinks the probability
is high that they can recover damages from infringers in excess of their
costs. Otherwise the decision simply sets fire to a pile of money.
...which isn't impossible. Billionaires do it every day.
> I sometimes wondered if I should run the Linux man-pages build system
> on the sources of Unix manual pages to generate an apocryphal PDF book
> of Volume 1 of the different Unix systems. I never ended up doing so
> for fear of AT&T lawyers (or whoever owns the rights to their manuals
> today), but I find it would be useful.
It's the kind of thing I've thought about doing. :)
If you do, I very much want to know if groff appears to misbehave.
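A minimal sketch of the idea, assuming groff and a directory of V7-style
man page sources (groff's -C requests AT&T compatibility mode, which
pages that old often need):

    groff -C -man -Tps man1/*.1 > vol1.ps

The tedious part would be eyeballing the output for misbehavior.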
Regards,
Branden
Dave Horsfall:
Silent, like the "p" in swimming? :-)
===
Not at all the same. Unix smelled much better
than its competitors in the 1970s and 1980s.
Norman Wilson
Toronto ON
John P. Linderman (not the JPL in Altadena):
On a faux-cultural note, Arthur C Clark wrote the "Nine Billion Names of
God" in the 50s.
===
Ken wishes he'd spelled it Clarke, as the author did.
Norman Wilson
Toronto ON
... it is possible to kill a person if you know his true name.
I'm trying to find the origin of that phrase (which I've likely mangled);
V5 or thereabouts?
-- Dave
All-in-one vs pipelined sorts brought to mind NSA's undeservedly obscure
dataflow language, POGOL, https://doi.org/10.1145/512927.512948 (POPL
1973). In POGOL one wrote programs as collections of routines that
communicated via named files, which the compiler did its best to optimize
away. Often this amounted to loop jamming or to the distributive law for
map over function composition. POGOL could, however, handle general
dataflow programming including feedback loops.
One can imagine a program for pulling the POGOL trick on a shell pipeline.
That could accomplish--at negligible cost--the conversion of a cheap demo
into a genuine candidate for intensive production use.
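By way of a toy illustration (mine, not POGOL's): the two-pass pipeline

    tr A-Z a-z <doc | grep -c unix

could mechanically be jammed into a single loop over the data,
something like

    awk '{ if (tolower($0) ~ /unix/) n++ } END { print n+0 }' doc

with the intermediate stream optimized away.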
This consideration spurs another thought. Despite Unix's claim to build
tools to make tools, only a relatively narrow scope of higher-order tools
that take programs as data ever arose. After the bootstrapping B, there
were a number of compilers, most notably C, plus f77, bc, ratfor, and
struct. A slight variant on the idea of compiling was the suite of troff
preprocessors.
The shell also manipulates programs by composing them into larger programs.
Aside from such examples, only one other category of higher-order Unix
program comes to mind: Peter Weinberger's lcomp for instrumenting C
programs with instruction counts.
An offshoot of Unix was Gerard Holzmann's set of tools for extracting
model-checker models from C programs. These saw use at Indian Hill and most
notably at JPL, but never appeared among mainstream Unix offerings. Similar
tools exist in-house at Microsoft and elsewhere. But generally speaking we
have very few kinds of programs that manipulate programs.
What are the prospects for computer science advancing to a stage where
higher-level programs become commonplace? What might be in one's standard
vocabulary of functions that operate on programs?
Doug
I chanced upon a brochure describing the Perkin-Elmer Series 3200 /
(previously Interdata, later Concurrent Computer Corporation) Sort/Merge
II utility [1]. It is instructive to compare its design against that of
the contemporary Unix sort(1) program [2].
- Sort/Merge II appears to be marketed as a separate product (P/N
  S90-408), whereas sort(1) was/is an integral part of Unix, used
  throughout the system.
- Sort/Merge II provides interactive and batch command input modes;
sort(1) relies on the shell to support both usages.
- Sort/Merge II appears to be able to also sort binary files; sort(1)
can only handle text.
- Sort/Merge II can recover from run-time errors by interactively
prompting for user corrections and additional files. In Unix this is
delegated to shell scripts.
- Sort/Merge II has built-in support for tape handling and blocking;
  sort(1) relies on pipes from/to dd(1) for this (see the sketch after
  this list).
- Sort/Merge II supports user-coded decision subroutines written in
FORTRAN, COBOL, or CAL. Sort(1) doesn't have such support to this day.
One could construct a synthetic key with awk(1) if needed.
- Sort/Merge II can automatically "allocate" its temporary file. For
sort(1) file allocation is handled by the Unix kernel.
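For the dd(1) and awk(1) points above, the Unix-style composition might
look like the following sketch (the device names and key positions are
invented for illustration):

    # tape blocking handled outside sort(1)
    dd if=/dev/rmt0 bs=10240 | sort | dd of=/dev/rmt1 bs=10240

    # a synthetic sort key: prepend it, sort on it, strip it
    awk '{ print substr($0, 5, 8) "\t" $0 }' data | sort | cut -f2- >sorted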
To me this list is a real-life demonstration of the differences between
the approach, prevalent at the time, of thoughtlessly agglomerating
features into a monolith, and Unix's careful separation of concerns and
modularization via small tools. The same contrast appears in a more
contrived setting in J. Bentley's CACM Programming Pearls column where
Doug McIlroy critiques a unique word counting literate program written
by Don Knuth [3]. (I slightly suspect that the initial program
specification was a trap set up for Knuth.)
I also think that the design of Perkin-Elmer's Sort/Merge II shows the
influence of salespeople forcing developers to tack on whatever features
were required by important customers. Maybe the clean design of Unix
owes a lot to AT&T's operation under the 1956 consent decree that
prevented it from entering the computer market. This may have shielded
the system's design from unhealthy market pressures during its critical
gestation years.
[1]
https://bitsavers.computerhistory.org/pdf/interdata/32bit/brochures/Sort_Me…
[2] https://s3.amazonaws.com/plan9-bell-labs/7thEdMan/v7vol1.pdf#page=166
[3] https://doi.org/10.1145/5948.315654
Diomidis - https://www.spinellis.gr
> To me this list is a real-life demonstration of the differences between
> the approach, prevalent at the time, of thoughtlessly agglomerating
> features into a monolith, and Unix's careful separation of concerns and
> modularization via small tools. The same contrast appears in a more
> contrived setting in J. Bentley's CACM Programming Pearls column where
> Doug McIlroy critiques a unique word counting literate program written
> by Don Knuth [3]. (I slightly suspect that the initial program
> specification was a trap set up for Knuth.)
It wasn't a setup. Although Jon's introduction seems to imply that he had
invited both Don and me to participate, I actually was moved to write the
critique when I proofread the 2-author column, as I did for many of Jon's
Programming Pearls. That led to the 3-author arrangement. Knuth and
I are still friends; he even reprinted the critique. It is also memorably
depicted at https://comic.browserling.com/tag/douglas-mcilroy.
Doug
> I often repeat a throwaway sentence that UUCP was Lesk,
> building a bug fix distribution mechanism.
> Am I completely wrong? I am sure Mike said this to me mid 80s.
That was an important motivating factor, but Mike also had an
unerring anticipatory sense of public "need". Thus his programs
spread like wildfire despite their bugs. UUCP itself is the premier
example. Its popularity impelled its inclusion in v7 despite its
woeful disregard for security.
> Does anyone have [Robert Morris's UUCP CSTR]? Doug?
Not I.
Doug
Robert's uucp was in use in the Research world when I arrived
in late summer of 1984. It had an interesting and sensible
structure; in particular uucico was split into two programs,
ci and co.
One of the first things I was asked to do when I got there was
to get Honey Danber working as a replacement. I don't remember
why that was preferred; possibly just because Robert was a
summer student, not a full-fledged member of the lab, and we
didn't want something as important to us as uucp to rely on
orphaned code.
Honey Danber was in place by the time we made the V8 tape,
toward the end of 1984.
Norman Wilson
Toronto ON
The sound situation in the UNIX world to me has always felt particularly
fragmentary, with OSS offering some glimmer of hope but faltering under the long
shadow of ALSA, with a hodgepodge of PCM and other low-level interfaces
littered about other offerings.
Given AT&T's involvement with the development of just about everything
"sound over wires" for decades by the time UNIX comes along, one would suspect
AT&T would be quite invested in standardizing interfaces for computers
interacting with audio signals on copper wire. Indeed much of the ESS R&D was
taking in analog telephone signals, digitizing them, and then acting on those
digitized results before converting back to analog to send to the other end.
Where this has me curious is if there were any efforts in Bell Labs, prior to
other industry players having their hands on the steering wheel, to establish an
abstract UNIX interface pattern for interacting with streams of converted audio
signal. Of course modern formats didn't exist, but the general idea of PCM was
well established, concepts like sampling rates, bit depths, etc. could be used
in calculations to interpret and manipulate digitized audio streams.
Any recollections? Was the landscape of signal processing solutions just so
particular that trying to create a centralized interface didn't make sense at
the time? Or was it a simple matter of priorities, with things like language
development and system design taking center stage, leaving a dearth of resources
to direct towards these sorts of matters? Was there ever a chance of seeing,
say, the 5ESS handling of PCM, extended out to non-switching applications, or
was that stuff firmly siloed over in the switching groups, having no influence
on signal processing outside?
- Matt G.
I mentioned a few weeks ago that I was writing this invited paper for an
upcoming 50-year anniversary of the first issue of IEEE Transactions on
Software Engineering.
The paper has now been accepted for publication and here's a preprint
version of it:
https://www.mrochkind.com/mrochkind/docs/SCCSretro2.pdf
Marc
> Was the landscape of signal processing solutions just so
> particular that trying to create a centralized interface didn't make
> sense at the time? Or was it a simple matter of priorities, with
> things like language development and system design taking center
> stage, leaving a dearth of resources to direct towards these sorts of
> matters? Was there ever a chance of seeing, say, the 5ESS handling of
> PCM, extended out to non-switching applications,
In the early days of Unix there were intimate ties between CS Research and
Visual and Acoustic Research. V&A were Bell Labs' pioneer minicomputer
users because they needed interactive access to graphics and audio, which
would have been prohibitively expensive on the Labs' pre-timesharing
mainframes. Also they generally had EE backgrounds, so were comfortable
working hands-on with hardware, whereas CS had been largely spun off from
the math department.
Ed David, who led Bell Labs into Multics, without which Unix might not have
happened, had transferred from V&A to CS. So had Vic Vyssotsky and Elliot
Pinson (Dennis's department head and coauthor with me of the introduction
to the 1978 BSTJ Unix issue). John Kelly, a brilliant transferee who died
all too young pre-Unix, had collaborated with Vic on BLODI, the first
dataflow language, which took digital signal processing off breadboards and
into computers. One central member of the Unix lab, Lee McMahon, never left
V&A.
The PDP-7 of Unix v0 was a hand-me-down from Pinson's time in V&A. And the
PDP-11 of v1 was supported by a year-end fund surplus from there.
People came from V&A to CS because their interests had drifted from signal
processing to computing per se. With hindsight, one can see that CS
recruiting--even when it drew on engineering or physics
talent--concentrated on similarly motivated people. There was dabbling in
acoustics, such as my "speak" text-to-speech program. And there were
workers dedicated to a few specialties, such as Henry Baird in optical
character recognition. But unlike text processing, say, these fields never
reached a critical mass of support that might have stimulated a wider array
of I/O drivers or full toolkits to use them.
Meanwhile, in V&A Research linguists adopted Unix, but most others
continued to roll their own one-off platforms. It's interesting to
speculate whether the lack of audio interfaces in Unix was a cause or a
result of this do-it-yourself impulse.
Doug
In the lost-in-time department, my group at Digital's Cambridge Research
Lab in 1993 did an audio interface patterned after the X Window System.
Paper in the Summer USENIX:
https://www.usenix.org/legacy/publications/library/proceedings/cinci93/gett…
For extra fun, the lab director of CRL at the time was Vic Vyssotsky.
But there must have been some Bell work, because around 1983 (?) when I
was doing Etherphone at PARC I visited John DeTreville at Holmdel. He was
building a voice-over-Ethernet system as well.
-Larry
> On Jan 6, 2025, at 4:51 PM, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> segaloco via TUHS wrote in
> <BWYwXjScYdFHM1NV0KEtgvazEfJM1PX7WaZ8lygZ45Bw2pEQG6JQr5OCtX-KMwEwr_k2zLD\
> GXac7wymRCtifnU9VKnlsrJCrKFqGZSgM6-0=(a)protonmail.com>:
> |The sound situation in the UNIX world to me has always felt particularly
> |fragmentary, with OSS offering some glimmer of hope but faltering under \
> |the long
> |shadow of ALSA, with a hodgepodge of PCM and other low-level interfaces
> |littered about other offerings.
>
> Oh, but *how* great it was when FreeBSD came on over with those
> "virtual sound devices", in 4.7 or 4.9 i think it was. Ie instead
> of one blocking device, one could open dev.1 and dev.2 and it was
> multiplexed in the kernel. It did some format conversion in the
> kernel alongside this.
>
> It was *fantastic*!, and i had a recording program sitting on
> a Cyrix 166+ and it took me ~1.5 percent of (single) CPU to record
> our then still great Hessenradio HR3 for long hours (Clubnight
> with worldwide known DJs, Chill with great sets in the Sunday
> mornings), and oh yes HR2 with the wonderful Mr. Paul Bartholomäi
> in "Notenschlüssel" (classical music), and the fantastic "Voyager"
> hour with Robert Lug on Sunday evening. It cannot be any better.
> I could code and compile and there was no stuttering alongside.
> 1.5 percent of CPU, honestly!
>
> I say this because FreeBSD has replaced that very code last year,
> if i recall correctly. It now all scales dynamically, if i read
> the patches that flew by right. (So it may be even better as of
> now, but by then, over twenty years ago, it blew my mind. And the
> solution was so simple, you know. The number of concurrent
> devices was a compile time constant if i recall correctly, four by
> default.)
>
> I also say this because today i am lucky i can use ALSA on Linux,
> and apulse for the firefox i have to use (and do use, too
> .. i also browse the internet in such a monster, and at least in
> parts still like that). I always hated those server solutions,
> where those masses of audio data flow through several context
> switches. What for? I never understood. Someone convinced me to
> try that pulseaudio server, but i think it was about 20 percent of
> CPU for a simple stream, with a terrible GUI, and that on
> a i5-8250U CPU @ 1.60GHz with up to 3.4 Ghz (four core; the four
> HT are always disabled). 20 percent!!
>
> ...
> |Any recollections?[.]
>
> Sorry, the above is totally apart, but for me the above is still
> such a tremendous thing that someone did; and for free. Whoever
> it was (i actually never tried to check it, now that i track their
> git for so many years), *thank you*!
> (And that includes the simple usual format conversions in between
> those 22050/44100 etc etc. Just like that -- open a device and
> read it, no thousands of callbacks, nothing. And 1.5 percent CPU.
> Maybe it is not good/exact enough for studio level audio editing.
> But i still have lots of those recordings, except that the "Balkan
> piss box" chill somehow disappeared. (Sorry Pedja, shall you read
> this.))
>
> --steffen
> |
> |Der Kragenbaer, The moon bear,
> |der holt sich munter he cheerfully and one by one
> |einen nach dem anderen runter wa.ks himself off
> |(By Robert Gernhardt)
> |
> |In Fall and Winter, feel "The Dropbear Bard"s pint(er).
> |
> |The banded bear
> |without a care,
> |Banged on himself for e'er and e'er
> |
> |Farewell, dear collar bear
Do I remember correctly that the early Linux exec was a userland exec?
I think this came up before, I just can't recall.
I just learned about:
https://github.com/hardenedlinux/userland-exec
Hi.
The paper on compressing the dictionary was interesting. In the day
of 20 meg disks, compressing a ~ 2.5 meg file down to ~ .5 meg is
a big savings.
Was the compressed dictionary put into use? I could imagine that
spell(1) at least would have needed some library routines to return
a stream of words from it.
Just wondering. Thanks,
Arnold
I am curious about the apposite Bergson quote (intelligence...tools) found
at the beginning of the Foreword mentioned in the subject. Specifically, I am
wondering if the quote was originally discovered in _Creative Evolution_
as a part of a course or through private reading.
I am interested in the diffusion of Continental ideas concerning technology
in English speaking countries during the 20th Century.
Dr Rick Hayes
durtal(a)sdf.org SDF Public
Access UNIX System - https://sdf.org
All, Yufeng Gao has done more amazing work at extracting binaries,
source code and text documents from the DECtapes that Dennis Ritchie
provided for the Unix Archive:
https://www.tuhs.org/Archive/Applications/Dennis_Tapes/
His latest e-mail is below. I've temporarily placed his attachments here:
https://minnie.tuhs.org/wktcloud/index.php/s/aWkck2Ljay6c5sB
He needs some help with formatting old *roff documents. If someone could offer
him help, that would be great. His e-mail address is yufeng.gao AT uq.edu.au
Cheers, Warren
----- Forwarded message from Yufeng Gao -----
Date: Tue, 31 Dec 2024
Subject: RE: UNIX DECtapes from dmr
Hi Warren,
Happy New Year! Here's another update. I found more UNIX bins on
another tape ('ken-sky'). They appear to be between V3 and V4. I have
attached them as "ken_sky_bins.tar". I have also attached an updated
tarball of the V2/V3 bins recovered from the 'e-pi' tape (with a few
names corrected), see "identified_v2v3_bins_r2.tar".
So far, the rough timeline of UNIX binaries (RTM hereinafter refers to
the exact version of the OS described by the preserved manuals) is as
follows:
Sys: V1 RTM <= unix-study-src < s1/s2 < V2 RTM < V3 RTM < nsys < V4 RTM
Bin: V1 RTM < s1/s2 < epi-V2 < epi-V3 < ken-sky-bins < V4 RTM
There is a possibility that the V2 bins from the 'e-pi' tape belong to
V2 RTM, as they're all PDP-11/20 bins with V2 headers. In contrast,
most of the bins from the s1/s2 tapes are V1 bins. Some of them are
identical to those from the 's2' tape, and if the timestamps from the
's2' tape can be trusted, they're from May/June 1972.
The V3 bins from the 'e-pi' tape are most likely from late 1972 or
early 1973, but no later than Feb 1973, as they've been overwritten by
files from Feb 1973. This suggests they're from a V3 beta, supported by
the fact that some features described in the V3 manual are missing. The
files were laid out in perfect alphabetical order on the tape.
The bins from the 'ken-sky' tape fall somewhere between V3 RTM and V4
RTM. The directory structure and other elements match the V3 manual, as
do the syscalls (e.g., the arguments for kill(2) differ between V3 and
V4, and these bins use the V3 arguments). The features, however, are
closer to V4. For example, nm(1) had already been rewritten in C and
matches the V4 manual's description. The assembler also matches the V4
manual in terms of the number of temp files, and the C compiler refers
to the assembler as 'nas.' The assembler is located physically between
files starting with "n" and "o," and the files around it follow a weak
alphabetical order, so it is logical to assume that it was named "nas".
It is a bit difficult to version these binaries, especially without any
timestamps. The lines between versions for early UNIX are blurry, and
modern software versioning terms like "beta" and "RTM" don't really
apply well. If these binaries are to be preserved (which I hope they
will be, even though the kernels are long gone), I'd put the V2 bins
from 'e-pi' under V2, the V3 bins from 'e-pi' under V3, and the bins
from 'ken-sky' under V4 (I'd argue that nsys also falls under V4, as
the biggest change between V3 and V4 was the kernel being rewritten in C).
There are other overwritten files on the tapes, and I will address them
later. There are quite a few patents, papers, and memos in *roff
format, but I'm not entirely sure what to do with them. Among those, I
have picked out some V4 distribution documents and attached them as a
ZIP folder :-). If you know of ways to generate PDFs from these ancient
*roff files accurately, please lend a hand - I'm struggling to get
accurate results from groff.
Sincerely,
Yufeng
----- End forwarded message -----