Does anybody out there have a copy of the old AT&T Toolchest "dmd-pgmg"
package?
This apparently includes a SysV port of Sam for the 5620/630 as well
as other programs for the AT&T windowing terminals.
I've been pondering this question for a while and thought it might have come up on the TUHS list - but I don't know what search terms to use on the list archive.
Perhaps someone can suggest some to me.
As a starting point, below is what John Lions wrote on a similar topic in 1978. Conspicuously, “Security” is missing, though “Reliability & Maintenance” would encompass the idea.
With hindsight, I'd suggest (Research) Unix took a very strong stance on "Technical Debt" - it was small, clean & efficient, even elegant. And 'shipped' with zero known bugs.
It didn’t just bring the Unix kernel to many architectures, the same tools were applied to create what we now call “Open Source” in User land:
- Multi-platform / portable
- the very act of porting software to diverse architectures uncovered new classes of bugs and implicit assumptions. Big- & Little-endian were irrelevant or unknown Before Unix.
- full source
- compatibility layers via
- written in common, well-known, well-supported languages [ solving the maintenance & update problem ]
- standard, portable “toolchains”
- shell, make, compiler, library tools for system linker, documentation & doc reading tools
- distribution systems including test builds, issue / fault reporting & tracking
An emergent property is "Good Security", both by Design and by (mostly) error-free implementations.
In the Epoch Before Unix (which started when exactly?), there was a lot of Shared Software, but very little that could be mechanically ported to another architecture.
Tools like QED and ROFF were reimplemented on multiple platforms, not ‘ported’ in current lingo.
There are still large, complex FORTRAN libraries shared as source.
There's an important distinction between "Open" and "Free": cost & availability.
We've gone on to have broadband near universally available, with easy-to-use Internet collaboration tools - e.g. "git", "mercurial" and "Subversion", as well as CVS.
The Unix-created Open Source concept broke Vendor Lock-in & erased most “Silos”.
The BSD TCP/IP stack, and Berkeley sockets library, were sponsored by DARPA, and made freely available to vendors as source code.
Similarly, important tools for SMTP and DNS were freely available as Source Code, both speeding the implementation of Internet services and providing “out of the box” protocol / function compatibility.
The best tools, or even just adequate, became only a download & install away for all coding shops, showing up a lot of poor code developed by in-house “experts” and radically trimming many project schedules.
While the Unix "Software Tools" approach - mediated by the STDOUT / STDIN interface, not APIs - was new & radical, and for many classes of problems provided a definitive solution,
I’d not include it in a list of “Open Source” features.
It assumes a “command line” and process pipelines, which aren’t relevant to very large post-Unix program classes: Graphical Apps and Web / Internet services.
regards
steve jenkin
==============
Lions, J., "An operating system case study" ACM SIGOPS Operating Systems Review, July 1978, ACM SIGOPS Oper. Syst. Rev. 12(3): 46-53 (1978)
2. Some Comments on UNIX
------------------------
There is no space here to describe the technical features of UNIX in detail (see Ritchie and Thompson, 1974 ; also Kernighan and Plauger, 1976),
nor to document its performance characteristics, which we have found to be very satisfactory.
The following general comments do bear upon the present discussion:
(a) Cost.
UNIX is distributed for "academic and educational purposes" to educational institutions by the Western Electric Company for only a nominal fee,
and may be implemented effectively on hardware configurations costing less than $50,000.
(b) Reliability and Maintenance.
Since no support of any kind is provided by Western Electric,
each installation is potentially on its own for software maintenance.
UNIX would not have prospered if it were not almost completely error-free and easy to use.
There are few disappointments and no unpleasant surprises.
(c) Conciseness.
The PDP-11 architecture places a strong limitation on the size of the resident operating system nucleus.
As Ritchie and Thompson (1974) observe,
"the size constraint has encouraged not only economy but a certain elegance of design".
The nucleus provides support services and basic management of processes, files and other resources.
Many important system functions are carried out by utility programs.
Perhaps the most important of these is the command language interpreter, known as the "shell".
(Modification of this program could alter, even drastically, the interface between the system and the user.)
(d) Source Code.
UNIX is written almost entirely in a high level language called "C" which is derived from BCPL and which is well matched to the PDP-11.
It provides record and pointer types,
has well developed control structures,
and is consistent with modern ideas on structured programming.
(For the curious, the paper by Kernighan (1975) indirectly indicates the flavour of "C"
and exemplifies one type of utility program contained in UNIX.)
Something less than 10,000 lines of code are needed to describe the resident nucleus.
pg 47
(e) Amenability.
Changes can be made to UNIX with little difficulty.
A new system can be instituted by recompiling one or more files (at an average of 20 to 30 seconds per file),
relinking the file containing the nucleus (another 30 seconds or so),
and rebooting using the new file.
In simple cases the whole process need take no more than a few minutes.
(f) Intrinsic Interest.
UNIX contains a number of features which make it interesting in its own right:
the run-time support for the general tree structured file system is particularly efficient;
the use of a reserved set of file names smooths the concepts of device independence;
multiple processes (three or four per user is average) are used in a way which in most systems is regarded as totally extravagant
(this leads to considerable simplification of the system/user interface);
and the interactive intent of the system has resulted in an unusually rich set of text editing and formatting programs.
(g) Limitations.
There are few limitations which are of concern to us.
The PDP-11 architecture limits program size, and this for example frustrated an initial attempt to transfer Pascal P onto the 11/40.
Perhaps the greatest weakness of UNIX as it is presently distributed (and this is not fundamental!)
is in the area where other systems usually claim to be strong:
support for "bread and butter" items such as Fortran and Basic.
(h) Documentation.
The entire official UNIX documentation, including tutorial material, runs to less than 500 pages.
By some standards this is incredibly meagre,
but it does mean that a student can carry his own copy in his brief case.
Features of the documentation include:
- an unconventional arrangement of material (unsettling at first, but really very convenient);
- a terse, enigmatic style, with much information conveyed by innuendo;
- a permuted KWIC index.
Most importantly perhaps UNIX encourages the programmer to document his work.
There is a very full set of programs for editing and formatting text.
The extent to which this has been developed can be gauged from the paper by Kernighan and Cherry (1975).
==============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
>> a paper appeared (in CACM?) that repeated Dennis's exercise.
> Maybe this one?
> B.P. Miller, L. Fredriksen, and B. So, "An Empirical Study of the Reliability
> of UNIX Utilities", Communications of the ACM 33, 12 (December 1990).
> http://www.paradyn.org/papers/fuzz.pdf
Probably. I had forgotten that the later effort was considerably more
elaborate than Dennis's. It created multiple random inputs that might
stumble on other things besides buffer overflow. I see a Unix parable
in the remarkable efficacy of Dennis's single-shot test.
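The spirit of that experiment is easy to re-create today; a minimal sketch (not the Miller et al. harness - the file name and byte count here are arbitrary):

```shell
# Feed random bytes to a utility and check that it survives; a crash on
# such input is exactly what the fuzz paper was hunting for.
head -c 4096 /dev/urandom > /tmp/fuzz.in
sort /tmp/fuzz.in > /dev/null 2>&1
echo "exit status: $?"    # a value >= 128 would indicate death by signal
```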
Doug
I added i386 binary compiled from 4.4BSD-Alpha source.
http://www.netside.co.jp/~mochid/comp/bsd44-build/
Booting with bochs works rather well. qemu-system-i386 also boots, and
the NIC (NE2000 ne0) works well, but the kernel prints many "ISA strayintr" messages.
I got much useful information from the two sites below:
"Fun with virtualization" https://virtuallyfun.com/ 386bsd bochs qemu
"Computer History Wiki!" https://gunkies.org/wiki/Main_Page
Installing 386BSD on BOCHS
At first, I tried to compile for i386 using the 4.4BSD final (1995) source,
patching many, many pieces from 386BSD, NetBSD, and elsewhere..
but then I felt "Well, we have BSD/OS 2.0, NetBSD 1.0, and FreeBSD 2.0,
which are full of good improvements.."
So I changed target, and remembered Pace Willisson's memo in 4.4BSD
(and in 4.4BSD-Lite2 also), sys/i386/i386/README:
"4.4BSD-alpha 80386/80486 Status", June 20, 1992,
which says "can be compiled into a fairly usable system".
Yeah, the needed changes were not so small, though.
-mochid
Hi All.
Thanks to Jeremy C. Reed, whose email to the maintainer got the PCC Revived
website and CVS back up.
Thanks to everyone who let me know that it's back up. :-)
My github mirror is https://github.com/arnoldrobbins/pcc-revived and there
are links there to the website etc.
My repo has a branch 'ubuntu18' with diffs for running PCC on Ubuntu,
if that interests anyone.
Enjoy,
Arnold
Hi.
I'm hoping some of the BSD people here may know.
I've been keeping a git mirror of the PCC Revived project, but in the
past month or so it's gone dark. The website is no longer there, the
CVS repos don't answer, and an email to the mailing list went unanswered.
Does anyone know anything about it? Did it move to somewhere else?
I use pcc for testing, it's much faster than GCC and clang.
And in general, I think it's a cool thing. :-)
Thanks,
Arnold
Brian's tribute to the brilliant regex mechanism that awk borrowed
from egrep spurred memories.
For more than forty years I claimed credit for stimulating Ken to
liberate grep from ed. Then, thanks to TUHS, I learned that I had
merely caused Ken to spring from the closet a program he had already
made for his own use.
There's a related story for egrep. Al Aho made a deterministic
regular-expression recognizer as a faster replacement for the
non-deterministic recognizer in grep. He also extended the domain of
patterns to full regular expressions, including alternation; thus the
"e" in egrep.
About the same time, I built on Norm Shryer's personal calendar
utility. I wanted to generalize Norm's strict syntax for dates to
cover most any (American) representation of dates, and to warn about
tomorrow's calendar as well as today's--where "tomorrow" could extend
across a weekend or holiday.
Egrep was just the tool I needed for picking the dates out of a
free-form calendar file. I wrote a little program that built an egrep
pattern based on today's date. The following mouthful for Saturday,
August 20 covers Sunday and Monday, too. (Note that, in egrep, newline
is a synonym for |, the alternation operator.)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*20)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*21)([^0123456789]|$)
(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*22)([^0123456789]|$)
It worked like a charm, except that it took a good part of a minute to
handle even a tiny calendar file. The reason: the state count of the
deterministic automaton was exponentially larger than the regular
expression; and egrep had to build the automaton before it
could run it. Al was mortified that an early serious use of egrep
should be such a turkey.
But Al was undaunted. He replaced the automaton construction with an
equivalent lazy algorithm that constructed a state only when the
recognizer was about to visit it. This made egrep into the brilliant
tool that Brian praised.
What I don't know is whether the calendar program stimulated the idea
of lazy implementation, or whether Al, like Ken before him with grep,
already had the idea up his sleeve.
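For anyone who wants to try it, the August 20 pattern above still works with grep -E (the modern spelling of egrep); a quick test against a made-up calendar file:

```shell
# Doug's pattern for "Aug 20" in its various spellings.
pat='(^|[ (,;])(([Aa]ug[^ ]* *|(08|8)/)0*20)([^0123456789]|$)'
# Only the first line matches; 8/21 belongs to the next day's pattern.
printf 'Lunch Aug 20 noon\nDentist 8/21 9am\n' | grep -E "$pat"
```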
Doug
https://www.youtube.com/watch?v=GNyQxXw_oMQ
Not quite 30 minutes long. Mostly about the history of awk but some
other stuff, including a nice plug for TUHS at the end.
Arnold
Hello everyone! I’ve been digging into text editor history, and I found:
“This provided another huge step forward in usability and allowed us to
maintain our modeless approach to screen editing, which was, we feel,
superior to the Vi approach.” from https://www.coulouris.net/cs_history/em_story/
This makes me want to know em’s history outside the usual precursor-to-vi narrative. Does anyone know much about the timeline of em from 1971 (QMC Unix installation) to 1976 (Intro to W M Joy @ UCB)? And does anyone know of developments to it after 1976-04-29? That’s the last date within text in the https://www.coulouris.net/cs_history/em_story/emsource/ files. (Also grumble grumble broken touch feature detection in that shar, which indicates last mod of 1996-02-18).
Anyone other than Coulouris used em in the last 45 years?
--
Joseph Holsten
http://josephholsten.com
mailto:joseph@josephholsten.com
tel:+1-360-927-7234
> Message: 4
> Date: Wed, 10 Aug 2022 12:29:24 +0200
> From: Holger Veit <hveit01(a)web.de>
> Subject: [TUHS] PCS Munix kernel source
>
> Hi all,
>
> I have uploaded the kernel source of 32 bit PCS MUNIX 1.2 to
> https://github.com/hveit01/pcs-munix.
Thank you for sharing this work, most impressive!
> MUNIX was an AT&T SVR3.x implementation ...
Are you sure? Could it perhaps be SVR2? (I don’t see any STREAMS stuff that one would expect for R3).
> The interesting feature of this kernel is the integration of the
> Newcastle Connection network
One of my interests is Unix (packet) networking 1975-1985 and that includes Newcastle Connection. I’ve so far not dived deep into this, but your work may be the trigger for some further investigation.
My understanding so far (from reading the paper a few years ago) is that Newcastle Connection works at the level of libc, substituting system calls like open() and exec() with library routines that scan the path, and if it is a network path invokes user mode routines that use remote procedure calls to give the illusion of a networked kernel. I’ve briefly looked at the Git repo, but I do not see that structure in the code. Could you elaborate a bit more on how Newcastle Connection operates in this kernel? Happy to communicate off-list if it goes in too much detail.
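To make my mental model concrete, here is an illustrative C sketch of that library-level interposition - not the actual Newcastle Connection code; nc_open, remote_open, and the details of parsing the "/../host/..." super-root naming are my inventions based on the paper's description:

```c
/* Sketch only: a library-layer open() that recognizes the Newcastle
 * "super-root" naming ("/../host/path") and diverts such calls to an
 * RPC layer, while plain paths fall through to the real system call. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical placeholder; the real system would RPC to 'host'. */
static int remote_open(const char *host, const char *path, int flags)
{
    (void)flags;
    printf("would RPC open to %s: %s\n", host, path);
    return -1;   /* no remote descriptor in this sketch */
}

int nc_open(const char *path, int flags)
{
    if (strncmp(path, "/../", 4) == 0) {    /* network super-root path? */
        char host[64];
        const char *rest = strchr(path + 4, '/');
        size_t n = rest ? (size_t)(rest - (path + 4)) : strlen(path + 4);
        if (n >= sizeof host)
            n = sizeof host - 1;
        memcpy(host, path + 4, n);          /* split out the host name */
        host[n] = '\0';
        return remote_open(host, rest ? rest : "/", flags);
    }
    return open(path, flags);               /* local file: real syscall */
}
```

An exec() wrapper would follow the same pattern, which is what makes the approach attractive: the kernel stays unmodified and the illusion of one networked Unix lives entirely in the library.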
I note that the repo Readme says that the kernel only does some basic IP networking as a carrier, but I also see some files in the tree that seem to implement a form of tcp (and that seem unrelated to the early Unix tcp/ip’s that I have seen so far). Or am I reading too much into these files?
===
Re-reading the Newcastle Connection paper also brought up some citations of Bell Labs work that seems to have been lost. There is a reference to "RIDE", which appears to be a system similar to Newcastle Connection. The RIDE paper is from 1979 and it mentions that RIDE is a Datakit re-implementation of an earlier system that ran on Spider. Any recollections about these things among the TUHS readership?
The other citation is for J. C. Kaufeld and D. L. Russell, "Distributed UNIX System", in Workshop on Fundamental Issues in Distributed Computing, ACM SIGOPS and SIGPLAN (15-17 Dec. 1980). It seems contemporaneous with the Luderer/Marshall/Chu work on S/F-Unix. I could not find this paper so far. Here, too, any recollections about this distributed Unix among the TUHS readership?
> And I've received the documents! This is a pastebin with the rough contents of the documentation package.
>
> https://pastebin.com/jAqqBXA4 <https://pastebin.com/jAqqBXA4>
>
> Now for some analysis:
I’m interested in the journey of SysV IPC. So far I have established that these originated in CBUnix, with a lot of thinking on how to optimize these around the time that Unix 3.0/4.0/5.0 happened. They did not appear in Unix 3.0 / SysIII, and from the Unix 4.0 documentation I gather that it was not included there either.
This would make Unix 5.0 / SysV R1 the first release with what is now known as SysV IPC. The PDP11 version of R1 has the CBUnix version of shared memory, as the VAX version did not make sense in the limited address space of the PDP11.
From the pastebin summary, it would seem that IPC is not in this documentation either? That would be surprising, and reopens the possibility that IPC was part of Unix 4.0.
Paul
> I've always believed that pic was so well designed
> because it took a day to get the print out (back then),
I'm afraid this belief is urban legend. Credit for pic is due 100% to
Kernighan, not to the contemporary pace of computing practice.
Even in the 1950s, we had one-hour turnaround at Bell Labs. And the
leap from batch processing had happened well before pic. Turnaround on
modest Unix source files and tests has changed little in the past
fifty years.
Doug
Thread fork as we're drifting from documentation research specifically.
One matter that keeps coming to mind for me is the formal history of the runlevel-based init system. It isn't in Research obviously, nor was it in PWB. The first time it shows up in the wild is System III, but this version is slightly different from what was in CB-UNIX at the time, which is what eventually wound up in System V.
The pressing question is whether the version in System III represents an earlier borrowing from CB or if perhaps the runlevel init started in USG, got bumped over to CB and improved, then that improved version came back to supported UNIX in 5.0.
As for the notable differences:
SysIII init allows for runlevels 1-9. It will call /etc/rc with the current state, the prior state, and the number of times the current state has been entered. If the script is called for instance due to a powerfailure, then the current state is passed suffixed with an 'x'. The inittab entries are in a different format:
state:id:flags:process
Where state is the runlevel, id is a two-character identifier, flags can be either 'c' (like respawn) or 'o' (like off I think). No flag then indicates to run once. Flags 't' or 'k' will terminate or kill a process before it is started again if a given runlevel is entered and it is already running.
This of course is in contrast to SysV init which instead offers runlevels 0-6 as well as a, b, and c. Init itself can be called with runlevels S|s or Q|q additionally and these act as calls to enter single user mode or rerun the current init state if I'm understanding correctly. Neither S nor Q options appear to be valid for the inittab runlevel. Init tab entries here are:
id:rstate:action:process
Where id and rstate are essentially just the same fields from SysIII swapped. Action replaces the flags field with the more well known respawn, wait, once, initdefault, etc. behaviors.
All in all, different enough that inittabs between the two wouldn't be compatible. SysV also includes the telinit command which appears to be able to handle those a, b, and c runlevels.
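To make the format difference concrete, here is the same getty entry (device and speed names invented for illustration) written both ways:

```
# SysIII inittab - state:id:flags:process
2:co:c:/etc/getty console b9600

# SysV inittab - id:rstate:action:process
co:2:respawn:/etc/getty console b9600
```

Neither file would parse under the other init, since the first two fields are swapped and the flag/action vocabularies differ.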
Anywho, that's my understanding of the init changes, with the pertinent question remaining whether the SysIII-style init ultimately started from the same place as SysV, or if the general design idea was there between USG and CB, and they got to a similar answer from different directions. Colon-delimited /etc files aren't uncommon, so while unlikely, it could be entirely possible the two inittab formats arose relatively independently, but the truth remains obscure in my mind at least. I don't really blame Research for not picking up this init system, it seems like there were a few parallel streams of development around the turn of the 80s, and the easier answer was probably to just stay the course. I seem to recall reading in a thread somewhere Rob Pike discussing the resistance in the Research group regarding sucking up each and every little feature USG tried to promulgate as standard, and this init system got specific mention.
- Matt G.
And I've received the documents! This is a pastebin with the rough contents of the documentation package.
https://pastebin.com/jAqqBXA4
Now for some analysis:
The User's Manual is branded System V but also displays a Western Electric Bell logo. I've seen Release 5.0 manuals displaying the Bell logo and System V manuals without, but never a System V with. That implies the publication of the manual had to change a few times, one to switch from internal Release 5.0 to commercial System V and another time to remove the Bell logo due to divestiture. I would have to wonder if similar transition can be seen with different revisions of these documents?
The Release Description manual has a list of System V relevant documents and they all appear to be accounted for here, so this should represent the wealth of documentation available to a user of System V Gold in 1983.
Most documents are traceable to documents in the Unix 4.0 collection. I've suffixed various documents here with the coordinate to the same in the 4.0 collection. Changes of note:
- The System V documentation includes instructions for 3B20S machines as well as the instructions for DEC equipment. PDP-11 and VAX guidance have been combined into a single document.
- The System V documentation adds documents concerning an "Auto Call" feature. Didn't see this anywhere in 4.0, so should be new circa System V.
- This documentation refers to the last version as System III rather than making any mention of 4.0. Given that the specific documents mentioning this are System V-branded, and there are comparable documents that are Release 5.0 branded, this implies there may be a document floating around out there somewhere equivalent to the Release Description manual but that actually covers the transition from 4.0 to 5.0.
- The documentation package drops the updated CACM paper, likely because it's available all sorts of other places.
- The summary and documentation roadmap documents appear to have been synthesized and combined into the Release Description.
- Snyder and Mashey's shell tutorial was either dropped or combined with Bourne's shell introduction
- No evidence of an MM foldout like was distributed with 4.0 (and before, there are sources around implying these foldouts started with the PWB group, may have been printed as early as 1977)
- Either the original EQN paper is dropped or relevant bits mashed together with the user's guide
- EFL documentation seems to be dropped, or is merged into one of the other Fortran documents somewhere down in there. The processor is still in the man pages though.
- ADB documentation seems to be dropped, likewise still in the manuals, listed as DEC only. Since System V seems to treat DEC as PDP-11+VAX, does this imply there was a VAX ADB? My understanding is SDB started on 32V and was *the* debugger for VAX.
- Unix Virtual Protocol papers are dropped, they were marked as 3.0 only in the 4.0 manuals anyhow, so probably not relevant.
- The Standalone I/O Library and SASH (Shell) paper is dropped
- Neither the internals nor the security papers seem to have made it, so no Unix Implementation, I/O Implementation, PDP and Portable C Compiler Tours, Assembler Manual, PDP-11/23 and 11/34, or Password Security papers.
These will likely be a slower burn than the 4.0 documents since I purchased them myself and am not in a hurry to get them shipped back to someone. That said, if there's anything in the above pastebin that particularly piques any interest, I can try to move those to the top of the stack and get scans done sooner rather than later. I'll also be doing some analysis between these and the 4.0 docs to try and better determine authorship of various documents, my hope is to have a pretty clear picture of whos work went into each manual by the time I'm done with it all.
- Matt G.
> From: Rob Pike
> I still marvel at the productivity and precision of his generation
We noticed the same thing happening in the IETF, as the number of people
working on networking went up. The explanation is really quite simple, once
you think about it a bit.
If you have a very small group, it is quite possible to have a very high
level. (Not if it's selected randomly, of course; there has to be some
sorting function.) However, as the group gets much larger, it is
_necessarily_ much more 'average' in the skill/etc level of its members.
This rule applies to any group - which includes the members of TUHS,
of course.
Noel
Hi all,
I have uploaded the kernel source of 32 bit PCS MUNIX 1.2 to
https://github.com/hveit01/pcs-munix.
MUNIX was an AT&T SVR3.x implementation for the German PCS Cadmus
workstations in the 80s. They were based on Motorola 68020 CPUs on a DEC QBUS.
The interesting feature of this kernel is the integration of the
Newcastle Connection network
(https://en.wikipedia.org/wiki/Newcastle_Connection), for which I found
no further references beyond a tech report:
https://assets.cs.ncl.ac.uk/TRs/175.pdf.
The kernel source was reverse engineered and verified (see the readme in
the distribution for how this was done) from the binary tape at
ftp.informatik.uni-stuttgart.de/pub/cm/pcs/sw/IS0371P.tap (Computer
museum of the University of Stuttgart), and to my knowledge reveals the
Newcastle connection code for the first time in a commercial Unix.
The Github package includes the kernel sources, I/O drivers, several
standard libraries, the disassembled boot ROM and for reference, two of
my tools, a partial syscall emulator pcsrun which allowed me to run the
C compiler and other native binaries outside the PCS hardware/Unix
environment, and a disassembler pcsdis for the specific COFF dialect
(note that IDA will produce garbage without a specific patch).
Regards
Holger
I've been looking into the history of the nl command lately, which has gotten me curious as to what facilities folks have used at various points in UNIX history for line numbering.
The earliest version of nl I've found is in System III, and it does not derive from Research, PWB, or CB. Neither does it come from BSD, although BSD has the num command which, according to the source commentary, aims to replicate the '#' behavior of ex.
Were there any other facilities for printing back arbitrary lines from a file with line numbers?
Also, would anyone happen to know if the above appearance of nl might have been from the USG line, given none of the others feature it? Nor does it seem to be in V8-V10. nl has managed to find its way into the POSIX standard, so it definitely has some staying power wherever it came from.
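For context, the present-day nl (whatever its origin) distinguishes itself from a plain numbering filter mainly through its body-numbering styles - by default it skips empty lines, while -ba numbers everything:

```shell
# nl's default body style (-bt) numbers only non-empty lines;
# -ba numbers every line, blanks included.
printf 'alpha\n\nbeta\n' | nl
printf 'alpha\n\nbeta\n' | nl -ba
```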
- Matt G.
Good morning everyone. Wanted to pose the question since folks here would probably be more likely to know than anyone.
What are the chances that there are surviving tapes of some of the UNIX versions that weren't so well publicized. The versions that come to mind are the CB and USG lines especially, with PWB 2.0 and TS 4.0 getting honorable mention. If folks will recall, we did luck out in that Arnold, a member of this mailing list, did have a documentation trove from TS 4.0, but no binary or source code assets. This had me curious on what trying to unearth these would even look like.
Has anyone tried to dive deep on this sort of stuff before? Would it look more like trying to find old Bell facilities that might have a tape bumping around in a box in a basement somewhere, or is it more likely that if anything survived it would have been due to being nabbed by an employee or contractor before disposal? Or even just in general, what would folks say is the likelihood that there is a recoverable tape of any of this material just waiting to see the light of day? The closest we have on CB is a paper scan of the kernel sources, and I don't know that any assets from USG-proper have ever percolated up, closest thing to any of that would be the kernel routine description bumping around on the archive somewhere. PWB 2.0 is mentioned in several places, but no empirical evidence has surfaced as far as I know, and with 4.0 of course we have the documents Arnold thankfully preserved, but that's it.
Thanks in advance for any insight or thoughts. My concern is that there is a rapidly closing window on ever being able to properly preserve these parts of the UNIX story, although recognition must be paid to all of the hard work folks have done here thus far to keep this valuable part of computing history in the collective consciousness and accessible to researchers and programmers for years and years to come.
- Matt G.
P.S. Even more honorable mention is the Bell Interdata 8/32 work. I've read several places that never saw outside distribution, but I would have to wonder if any of that work survived beyond the visible portability changes in V7.
I didn't expect to have more documents to share this soon, but I've just secured a trove of early System V/5.0 documents, as listed:
System V User's Manual
System V Administrator's Manual
System V Error Message Manual
System V Transition Aids
System V Release Description
User's Guide
Operator's Guide
Administrator's Guide
Programming Guide
Graphics Guide
Support Tools Guide
Document Processing Guide
The System V-prefixed ones are very specifically labeled System V, although I know at least of the User's and Administrator's Manuals with "Release 5.0" branding out in the wild as well. I've got two of the User's Manuals exhibiting this difference. I believe I've seen a scan of the Admin's Manual with 5.0 as well, but I would have to go searching for it, it's on bitsavers perhaps? In any case, this is the documentation series for the initial releases of System V, the ones with "UNIX System" in big letters with grid patterns fading out into the background. I don't know if the second set is considered part of the Release 5.0 or System V version of the document package, or if they made that distinction, but as of present I can positively identify the first 5 as being specifically for the System V version of this release. What is particularly curious is there are documents displaying "System V" but with a Western Electric logo on the front. I've seen a scan of a System V gold User's Manual with the logo removed and a disclaimer on the front page explaining that they can't use the Bell logo anymore due to the divestiture, likewise on bitsavers I'm pretty sure, so this may establish that there were at least three revisions: Release 5.0, System V pre-divestiture, and System V post-divestiture.
Now for a little plug, just because she's been so incredibly helpful, I bought these from Leslie (last name unknown) known as "oldmaddogshop" on eBay. We got chatting for a little while and her husband was a computing professor at the University of Portland for some time as it sounds, and they're currently starting to go through the decades of literature and hardware he's picked up over the years for sale on eBay and perhaps other avenues. She very specifically mentioned a PDP-8 that he happens to have that he's hoping they can coordinate to donate to a museum or some other way to get it into a relatively publicly accessible space rather than winding up in the closet of a private collector. I told her I'd drop a brief mention in letting folks know about the documents in case they'd want the option of perusing some of what they're going to be offloading. She made mention of a stack of USENIX manuals as well, I have a smattering of 4.2 and 4.3 manuals already, so someone may be lucky enough to snag those soon enough. Up currently are an early SVID and some OSF/Motif stuff, but she said they've got plenty of boxes of books to go through.
Anywho, once I receive these documents, I plan on starting the scanning process much like with the UNIX/TS 4.0 stuff, and will be in touch with Warren concerning hosting and a release as time goes on. One bit of input if anyone knows, does the above list represent (aside from Release 5.0 variants) the complete documentation package for System V gold? I can't say I've come across any other titles, and most certainly haven't seen PDFs of anything that isn't included here, but I see plenty of titles I've never seen scanned. If nothing else, I'm hoping that "Release Description" document may have a brief flyover of the published materials, akin to the list of books at the beginning of the SVR4 manuals or the documentation roadmaps of earlier UNIX/TS and PWB releases.
- Matt G.
> Any ideas on why businesses didn’t pick up the H11 in 1980?
> [priced too high for hobbyists]
>
> Wikipedia says:
>
> 1978: H11 US$1295 (kit) or US$1595 fully assembled ("4kword base system”)
> display advert <http://www.decodesystems.com/heathkit-h11-ad-1.gif> $1295 kit + postage/freight, bare system, 8KB (4kword), 6 Q-bus slots free. ROM ?
>
> 1981: IBM 5150(PC) US$1,565 for "16 KB RAM, Color Graphics Adapter, and no disk drives.”
> ( I only saw 5150’s with 2x 5.25” 360KB floppies included - otherwise, can’t run programs & store files)
Note that those are nominal prices. In terms of purchasing power, USD 1595 in 1978 equated to about USD 2200 in 1981 (https://www.in2013dollars.com/us/inflation/1978?endYear=1981&amount=1595).
Otherwise I agree with your observation on packaged, off-the-shelf software being the main driver. In small businesses before the IBM PC, VisiCalc drove Apple II uptake; WordStar, CBASIC-2 and dBase drove CP/M uptake.
Would LSI-11 hardware with LSX, ed and nroff have been competitive in small business? The experiences of John Walker (of AutoCAD fame) suggest not:
https://www.fourmilab.ch/documents/marinchip/
> While looking for something else, I found this:
>
> VAX-UNIX Networking Support Project Implementation Description
> Robert F. Gurwitz; January, 1981
> https://www.rfc-editor.org/ien/ien168.txt
>
> in a somewhat obscure location. I have no idea if it's already widely known
> or not, but here it is anyway.
Hi Noel,
Thank you for highlighting this document. I had seen it before, and the implementation (as found on the tapes from CSRG and now on TUHS) follows the plan outlined in IEN168 quite closely. The first snapshot of the code is just a few months after this document.
In a way it is modeled after the UoI Arpanet Unix implementation (and thank you again for finding that source!), with a separate (kernel) process for network activity. In my experiments I have found that it is not all that easy to get smooth network data flow, as this network process is difficult to schedule just right. I now better understand why Joy moved to "software interrupts" to get better scheduling of kernel network operations.
Wbr,
Paul
While looking for something else, I found this:
VAX-UNIX Networking Support Project Implementation Description
Robert F. Gurwitz; January, 1981
https://www.rfc-editor.org/ien/ien168.txt
in a somewhat obscure location. I have no idea if it's already widely known
or not, but here it is anyway.
Noel
> Also, another problem with trying to 'push' LSX into previously
> un-handled operating regions (e.g. large disks, but there are likely
> others) is that there are probably things that are un-tested in that
> previously unused operating mode, and there may be un-found bugs that
> you trip across.
'Speak of the devil, and hear the sound of his wings.'
>> From: Gavin Tersteeg
>> Interestingly enough, existing large V6 RK05 images can be mounted,
>> read from, and written to. The only limitations on these pre existing
>> images is that if enough files are deleted, the system will randomly
>> crash.
> I had a look at the source (in sys4.c, nami.c, iget.c, rdwri.c, and
> alloc.c), but I couldn't quickly find the cause; it isn't obvious.
I don't know if the following is _the_ cause of the crashes, but another
problem (another aspect of the '100 free inodes cache' thing) swam up out of
my brain. If you look at V6's alloc$ifree(), it says:
if(fp->s_ninode >= 100)
return;
fp->s_inode[fp->s_ninode++] = ino;
LSX's is missing the first two lines. So, if you try and free more than 100
inodes on LSX, the next line will march out of the s_inode array and smash
other fields in the in-core copy of the super-block.
Like I said, this is not certain to be the cause of those crashes; and it's
not really a 'bug' (as in the opening observation) - but the general sense of
that observation is right on target. LSX is really designed to operate only
on disks with fewer than 100 inodes, and trying to run it elsewhere is going to
run into issues.
How many similar limitations exist in other areas I don't know.
> From: Heinz Lycklama <heinz(a)osta.com>
> Remember that the LSX and Mini-UNIX systems were developed for two
> different purposes.
Oh, that's understood - but this just re-states my observation, that LSX was
designed to operate in a certain environment, and trying to run it elsewhere
is just asking for problems.
Noel
{I was going to reply to an earlier message, but my CFS left me with
insufficient energy; I'll try and catch up on the points I was going to make
here.}
> From: Gavin Tersteeg
> This leaves me with about 1.9kb of space left in the kernel for
> additional drivers
I'm curious how much memory you have in your target system; it must not be a
lot, if you're targeting LSX.
I ask because LSX has been somewhat 'lobotomized' (I don't mean that in a
negative way; it's just recognition that LSX has had a lot of corners
trimmed, to squeeze it down as much as possible), and some of those trims
are behind some of the issues you're having (below).
At the time the LSI-11 came out, semiconductor DRAM was just getting started,
so an LSI-11 with 8KB onboard and a 32KB DRAM card (or four 8KB MMV11 core
memory cards :-), to produce the 40KB target for LSX systems, was then a
reasonable configuration. These days, one has to really search to find
anything smaller than 64KB...
It might be easier to just run MINI-UNIX (which is much closer to V6, and
thus a known quantity), than add a lot of things back in to LSX to produce
what will effectively be MINI-UNIX; even if you have to buy a bit more QBUS
memory for the machine.
> the LSX "mkfs" was hardcoded to create filesystems with 6 blocks of
> inodes. This maxed the number of files on a disk to 96, but even with
> the maximum circumvented LSX would only tolerate a maximum of 101 files.
And here you're seeing the 'lobotomizing' of LSX come into play. That '101'
made me suspicious, as the base V6 'caches' 100 free inodes in the
super-block; once those are used, it scans the ilist on disk to refill it.
The code in alloc$ialloc in LSX is hard to understand (there are a lot of
#ifdef's), and it's very different from the V6 code, but I'm pretty sure it
doesn't refill the 'cache' after it uses the cached 100 free inodes. So, you
can have as many free inodes on a disk as you want, but LSX will never use
more than the first 100.
(Note that the comment in the LSX source "up to 100 spare I nodes in the
super block. When this runs out, a linear search through the I list is
instituted to pick up 100 more." is inaccurate; it probably wasn't updated
after the code was changed. ISTR this is true of a lot of the comments.)
Use MINI-UNIX.
> A fresh filesystem that was mkfs'd on stock V6 can be mounted on LSX,
> but any attempt to create files on it will fail.
The V6 'mkfs' does not fill the free inode cache in the super-block. So, it's
empty when you start out. The LSX ialloc() says:
if(fp->s_ninode > 0) {
...
}
u.u_error = ENOSPC;
return(NULL);
which would produce what you're seeing.
Also, another problem with trying to 'push' LSX into previously un-handled
operating regions (e.g. large disks, but there are likely others) is that
there are probably things that are un-tested in that previously unused
operating mode, and there may be un-found bugs that you trip across.
Use MINI-UNIX.
> Interestingly enough, existing large V6 RK05 images can be mounted,
> read from, and written to. The only limitations on these pre existing
> images is that if enough files are deleted, the system will randomly crash.
I had a look at the source (in sys4.c, nami.c, iget.c, rdwri.c, and alloc.c),
but I couldn't quickly find the cause; it isn't obvious. (When unlinking a
file, the blocks in the file have to be freed - that's inode 'ip' - and the
directory - inode 'pp' - has to be updated; so it's pretty complicated.)
Use MINI-UNIX.
> The information there about continuous files ... will be extremely
> helpful if I ever try to make those work in the future.
My recollection is that the LSX kernel doesn't have code to create contiguous
files; the LSX page at the CHWiki says "the paper describing LSX indicates
there were two separate programs, one to allocate space for such files, and
one to move a file into such an area, but they do not seem to be extant". If
you find them, could you let me know? Thanks.
Noel
The MERT (Multi-Environment Real-Time) system was developed
at Bell Telephone Laboratories in Murray Hill, NJ by myself and
Doug Bayer in the mid 1970's on a DEC PDP 11/45 computer.
MERT was picked up by the UNIX Support Group (USG) in 1977 and
has been distributed and supported throughout the Bell System.
The MERT Manual consists of both the MERT Programmer's
Manual and the UNIX Programmer's Manual. You can find
all of this documentation at:
1. https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/
The hosting of this manual online was made possible by Clem Cole's
painstaking efforts to scan in and organize the hundreds of pages
in the hard copy MERT Manual. Clem had previously scanned in
my Technical Memoranda documenting my work at Bell Labs in
the 1970's on MERT, LSX, Mini-UNIX and the Mini-Computer
Satellite Processor System:
2. https://www.tuhs.org/Archive/Documentation/TechReports/Heinz_Tech_Memos/
The monthly UNIX Technology Advisor newsletter published
in 1989 and 1990 contains articles written by some of the leading
open systems industry pioneers. The first issue is available online here:
3. https://www.tuhs.org/Archive/Documentation/Unix_Advisor/
I want to thank Warren Toomey for providing and maintaining the
TUHS.org <https://www.tuhs.org/> platform for hosting this historical
information on UNIX systems for the community.
Heinz Lycklama