I can answer some of the below, as I was looking into that a few years ago.
> 81. Q: What was the first Unix network?
> A: spider
> You thought it was Datakit, didn't you? But Sandy Fraser had an earlier
> project.
>
> When did Alexander G Fraser's spider cell network happen? For that matter,
> when did Datakit happen? I can't find references to either start date on
> line (nor anything on spider except for references to it in Dr Fraser's
> bio). I can find references to Datakit in 1978 or so.
Spider was designed between 1969 and 1974 - the final lab report (#23) dates from December 1974. It was based around a serial loop running at T1 signalling speed (~1.5 Mbit/s). Here is a video recorded by Dr. Fraser about it: https://www.youtube.com/watch?v=ojRtJ1U6Qzw (the first half is about Spider, the second half about Datakit).
It connected to its hosts via a (discrete TTL-based) microcontroller or “TIU” and seems to have been connected almost immediately to Unix systems: the oldest driver I have been able to locate is in the V4 tree (https://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys/dmr/tdir/tiu.c). It used a DMA-based parallel interface into the PDP11. As such, it seems to have been much faster than the typical Datakit connection later - but I know too little about Datakit to be sure.
There is an interesting visit report from 1975 that discusses some of the stuff that was done with Spider here: https://stacks.stanford.edu/file/druid:rq704hx4375/rq704hx4375.pdf
Beyond those experiments I think Spider usage was limited to file serving (‘nfs’ and ‘ufs’) and printing (‘npr’). It would seem logical that it was used for remote login, but I have not found any traces of such usage. Same for email usage.
From what little I know, I think that Datakit became operational in a test network in 1979 and as a product in 1982.
> I thought the answer was "ARPANET" since we had an NCP on 4th edition Unix
> in late 1974 or early 1975 from the University of Illinois dating from that
> time (the code in TUHS appears to be based on V6 + a number of patches).
“Network Unix” (https://www.rfc-editor.org/rfc/rfc681.html) was written by Steve Holmgren, Gary Grossman and Steve Bunch in the last 3 months of 1974. To the best of my knowledge they used V5 and migrated to V6 as it came along. I think they were getting regular update tapes, and they implemented their system as a device driver (plus userland support) to be able to keep up with the steady flow of updates. Greg Chesson was also involved with this Arpanet Unix.
As far as I can tell, Arpanet Unix saw fairly wide deployment within the Arpanet research community, including as a front-end processor for other systems.
A few years back I asked on this list why “Network Unix” was not more enthusiastically received by the core Unix development team and (conceptually) integrated into the main code base. I understood the replies to be that (i) people were very satisfied with Spider; and (ii) being part of Bell, they wanted a networking system that was more compatible with the Bell network, i.e. Datakit.
==
In my opinion both “Spider Unix” and “Arpanet Unix” threw a very long conceptual shadow. From Spider onwards, the Research systems viewed the network as a device (Spider) that could be multiplexed (V8 streams) or even mounted (Plan 9). The Arpa lineage saw the network as a long-distance bidirectional pipe, with the actual I/O device hidden from view; this view persists all the way to 4.2BSD and beyond.
I often wonder if it was (is?) possible to come up with a design with the conceptual clarity of Plan9, but organised around the “network as a pipe” view instead.
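To make the contrast concrete, here is a rough sketch of the two idioms in C. It is not code from any of these systems: the /net paths and the dial protocol in the first function are from my memory of how Plan 9 does it and may be off in the details, and the second is just the familiar BSD socket call sequence.

    /* "Network as a file" (the Spider/streams/Plan 9 lineage): a connection
     * is set up by reading and writing files under /net, and the result is
     * an ordinary file descriptor with a name in the file system.
     * Paths and the ctl message format are from memory, illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int dial_as_file(const char *ctlmsg)       /* e.g. "connect 192.0.2.1!80" */
    {
        char dir[16], path[64];
        int n, ctl = open("/net/tcp/clone", O_RDWR);  /* new conversation     */

        if (ctl < 0)
            return -1;
        n = read(ctl, dir, sizeof dir - 1);           /* its directory number */
        if (n <= 0) {
            close(ctl);
            return -1;
        }
        dir[n] = '\0';
        write(ctl, ctlmsg, strlen(ctlmsg));           /* ask for a connection */
        snprintf(path, sizeof path, "/net/tcp/%s/data", dir);
        return open(path, O_RDWR);                    /* from here on: a file */
    }

    /* "Network as a pipe" (the Arpanet Unix / 4.2BSD lineage): the device is
     * hidden; the program gets back an anonymous bidirectional descriptor.   */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int dial_as_pipe(const char *addr, int port)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        inet_pton(AF_INET, addr, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
            close(fd);
            return -1;
        }
        return fd;                                    /* also just read/write */
    }

In the first idiom the connection has a name in the file system and can be poked at with ordinary tools (or imported from another machine); in the second it is an anonymous descriptor reached only through dedicated system calls.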
> Because we can't ask Greg, sadly, I think Holmgren is the last one around that would know definitively, and I've personally lost track of him.
Steve Holmgren and the Arpanet Unix team are still around (at least they were 3 years ago). I just remembered that I put some of my notes & findings in a draft wiki that I wanted to develop for TUHS - but I never finished it:
http://chiselapp.com/user/pnr/repository/TUHS_wiki/wiki?name=early_networki…
The recent find of CSRG reports 3 and 4 may be the incentive I needed to complete my notes about 4.1a, 4.1c and 4.2BSD. However, I'm still looking for the actual source tape to 4.1a - the closest I have is its derivative in 2.9BSD (https://minnie.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/net)
Apologies that this isn't specifically a Unix question, but I was
wondering if anyone had insight into running domain/OS and its
relationship to Plan 9 (assuming there is any).
One of my early mentors was a former product person at Apollo in Mass.
and was nice enough to tell me all sorts of war stories about working
there. I had known about Plan9 at the time, and from what he described
to me about domain/OS it sounded like there was a lot of overlap between
the two, at least from a high-level design perspective. I've always been
keen to understand whether domain/OS grew out of former Bell Labs folks,
or how it got started.
As an aside, he gifted me a whole bunch of marketing collateral from
Apollo (from before the HP acquisition) that I'd be happy to share if
there is any historical value in it. At the time I was a video/special
effects engineer and was amazed at how beneficial something like
domain/OS or Plan9 would have been for us; it felt like we were
basically trying to accomplish a lot of the same goals by duct-taping
a bunch of Irix and Linux systems together.
Cheers,
-pete
--
Pete Wright
pete(a)nomadlogic.org
@nomadlogicLA
My memory failed me: the part numbers were Z8001/Z8002 for the original and Z8003/Z8004 for the revised chips (segmented/unsegmented).
Hence it is unlikely that the Onyx had any form of demand paging (other than extending the stack in PDP11-like fashion).
——
A somewhat comparable machine to the Onyx was the Zilog S8000. It ran “Zeus”, which was also a Unix version:
https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/zilog/s8000/
Instead of the MMU described below, it used the Zilog segmented MMU chips, three of them. These could be used to give a plain 16-bit address space divided into 3 segments, or could be used with the segmented addresses of the Z8001. The approach used by Onyx seems much cleaner to me, and reminiscent of the MMU on a DG Eclipse.
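As an aside, my reading of the MMC paragraph quoted further down (16 map sets, each holding an instruction map and a data map of 32 page registers, 2K byte pages, 20-bit physical addresses) is that the translation would look roughly like the sketch below. This is only my reconstruction; the register layout and field names are invented, not taken from any Onyx documentation.

    /* Sketch of the Onyx C8002 MMC translation as described in the quote
     * below: 16 map sets, each with an instruction map and a data map of
     * 32 page registers; 2K byte pages; 20-bit physical addresses.
     * Field names and layout are guesses, not Onyx documentation.        */
    #include <stdint.h>

    #define NMAPSETS 16                 /* e.g. one map set per process    */
    #define NPAGES   32                 /* page registers per map          */
    #define PGSHIFT  11                 /* 2K byte pages -> 11-bit offset  */
    #define PGMASK   ((1u << PGSHIFT) - 1)

    struct pagereg {
        uint16_t frame;                 /* physical frame, 9 bits (20-11)  */
        uint8_t  valid;                 /* validity / protection           */
    };

    /* map[set][space][page]: space 0 = instruction fetch, 1 = data access */
    static struct pagereg map[NMAPSETS][2][NPAGES];

    /* Translate a 16-bit logical address into a 20-bit physical address;
     * returns (uint32_t)-1 where the real hardware would raise a fault.   */
    uint32_t mmc_translate(unsigned set, int is_data, uint16_t vaddr)
    {
        unsigned page = vaddr >> PGSHIFT;          /* top 5 bits: 0..31    */
        struct pagereg *pr = &map[set % NMAPSETS][is_data ? 1 : 0][page];

        if (!pr->valid)
            return (uint32_t)-1;
        return ((uint32_t)pr->frame << PGSHIFT) | (vaddr & PGMASK);
    }

Whether the real MMC kept those map sets in I/O space, as Derek guesses below, I don't know.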
I think the original chips were the Z8000 (unsegmented) and the Z8001 (segmented). These could not abort/restart instructions and were replaced by the Z8002 (unsegmented) and Z8003 (segmented). On these chips one could effectively assert reset during a fault, and this would leave the registers in a state where a software routine could roll back the faulted instruction.
If the sources to the Onyx Unix survived, it would be interesting to see if it used this capability of the Z8002 and implemented a form of demand paging.
Last but not least, the Xenix overview I linked earlier (http://seefigure1.com/images/xenix/xenix-timeline.jpg) shows Xenix ports to 4 other Z8000 machines: Paradyne, Compucorp, Bleasedale and Kontron; maybe none of these ever got to production.
> Message: 7
> Date: Tue, 21 Jan 2020 21:32:51 +0000
> From: Derek Fawcus <dfawcus+lists-tuhs(a)employees.org>
> To: The Unix Heritage Society mailing list <tuhs(a)tuhs.org>
> Subject: [TUHS] Onyx (was Re: Unix on Zilog Z8000?)
> Message-ID: <20200121213251.GA25322(a)clarinet.employees.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Tue, Jan 21, 2020 at 01:28:14PM -0500, Clem Cole wrote:
>> The Onyx box predated all the 68K and later Intel or other systems.
>
> That was a fun bit of grubbing around courtesy of a bitsavers mirror
> (https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/)
>
> It seems they started with a board based upon the non-segmented Z8002
> and only later switched to using the segmented Z8001. In the initial
> board, they created their own MMU:
>
> Page 6 of: https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/Onyx_C…
>
> Memory Management Controller:
>
> The Memory Management Controller (MMC) enables the C8002 to perform
> address translation, memory block protection, and separation of
> instruction and data spaces. Sixteen independent map sets are
> implemented, with each map set consisting of an instruction map and
> a data map. Within each map there are 32 page registers. Each page
> register relocates and validates a 2K byte page. The MMC generates
> a 20 bit address allowing the C8002 to access up to one Mbyte of
> physical memory.
>
> So I'd guess the MMC was actually programmed through I/O instructions
> to I/O space, and hence preserved the necessary protection domains.
>
> Cute. I've had a background interest in the Z8000 (triggered by reading
> a Z80000 datasheet around 87/88), and always thought about using
> the segmented rather than unsegmented device.
>
> The following has a bit more info about the version of System III
> ported to their boxes:
>
> https://www.mirrorservice.org/sites/www.bitsavers.org/pdf/onyx/c8002/UNIX_3…
>
> DF
[Resending as this got squashed a few days ago. Jon, sorry for the
duplicate. Again.]
On Sun, Jan 12, 2020 at 4:38 PM Jon Steinhart <jon(a)fourwinds.com> wrote:
> [snip]
> So I think that the point that you're trying to make, correct me if I'm wrong,
> is that if lists just knew how long they were you could just ask and that it
> would be more efficient.
>
What I understood was that, by translating into a lowest-common-denominator
format like text, one loses much of the semantic information implicit in a
richer representation. In particular, much of the internal knowledge (like
type information...) is lost during translation and presentation. Put
another way, with text as usually used by the standard suite of Unix tools,
type information is implicit, rather than explicit. I took this to be less
an issue of efficiency and more of expressiveness.
It is, perhaps, important to remember that Unix works so well because of
heavy use of convention: to take Doug's example, the total number of
commands might be easy to find with `wc` because one assumes each command
is presented on a separate line, with no gaudy header or footer information
or extraneous explanatory text.
This sort of convention, where each logical "record" is a line by itself,
is pervasive on Unix systems, but is not guaranteed. In some sense, those
representations are fragile: a change in output might break something else
downstream in the pipeline, whereas a representation that captures more
semantic meaning is more robust in the face of change but, as in Doug's
example, often harder to use. The Lisp Machine had all sorts of cool
information in the image and a good Lisp hacker familiar with the machine's
structures could write programs to extract and present that information.
But doing so wasn't trivial in the way that '| wc -l' in response to a
casual query is.
> While that may be true, it sort of assumes that this is something so common
> that the extra overhead for line counting should be part of every list. And it
> doesn't address the issue that while maybe you want a line count I may want a
> character count or a count of all lines that begin with the letter A. Limiting
> this example to just line numbers ignores the fact that different people might
> want different information that can't all be predicted in advance and built
> into every program.
>
This I think illustrates an important point: Unix conventions worked well
enough in practice that many interesting tasks were not just tractable, but
easy and in some cases trivial. Combining programs was easy via pipelines.
Harder stuff involving more elaborate data formats was possible, but, well,
harder and required more involved programming. By contrast, the Lisp
machine could do the hard stuff, but the simple stuff also required
non-trivial programming.
The SQL database point was similarly interesting: having written programs
to talk to relational databases, yes, one can do powerful things: but the
amount of programming required is significant at a minimum and often
substantial.
> It also seems to me that the root problem here is that the data in the
> original example was in an emacs-specific format instead of the default
> UNIX text file format.
>
> The beauty of UNIX is that with a common file format one can create tools
> that process data in different ways that then operate on all data. Yes,
> it's not as efficient as creating a custom tool for a particular purpose,
> but is much better for casual use. One can always create a special purpose
> tool if a particular use becomes so prevalent that the extra efficiency is
> worthwhile. If you're not familiar with it, find a copy of the
> Communications of the ACM issue where Knuth presented a clever search
> algorithm (if I remember correctly) and McIlroy did a critique. One of the
> things that Doug pointed out was that while Don's code was more efficient,
> by creating a new pile of special-purpose code he introduced bugs.
>
The flip side is that one often loses information in the conversion to
text: yes, there are structured data formats with text serializations that
can preserve the lost information, but consuming and processing those with
the standard Unix tools can be messy. Seemingly trivial changes in text,
like reversing the order of two fields, can break programs that consume
that data. Data must be suitable for pipelining (e.g., perhaps free-form
text must be free of newlines or something). These are all limitations.
Where I think the argument went awry is in not recognizing that very often
those problems, while real, are at least tractable.
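(As a toy illustration of the field-order fragility mentioned above - the two-column "count name" format here is made up - consider a consumer written against one ordering of the fields:)

    /* Contrived consumer that expects "count name" lines, e.g. "42 foo".
     * If an upstream tool starts emitting "name count" instead, the scanf
     * pattern quietly stops matching and the total silently becomes zero. */
    #include <stdio.h>

    int main(void)
    {
        char name[64];
        long count, total = 0;

        while (scanf("%ld %63s", &count, name) == 2)
            total += count;

        printf("total: %ld\n", total);
        return 0;
    }

If the producer swaps the fields, this reports a total of zero rather than failing loudly; a more self-describing format would have made the mismatch explicit, at the cost Doug's example points out.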
> Many people have claimed, incorrectly in my opinion, that this model fails
> in the modern era because it only works on text data. They change the
> subject when I point out that ImageMagick works on binary data. And, there
> are now stream processing utilities for JSON data and such that show that
> the UNIX model still works IF you understand it and know how to use it.
>
Certainly. I think you hit the nail on the head with the proviso that one
must _understand_ the Unix model and how to use it. If one does so, it's
very powerful indeed, and it really is applicable more often than not. But
it is not a panacea (not that anyone suggested it is). As an example, how
do I apply an unmodified `grep` to arbitrary JSON data (which may span more
than one line)? Perhaps there is a way (I can imagine a 'record2line'
program that consumes a single JSON object and emits it as a syntactically
valid one-liner...) but I can also imagine all sorts of ways that might go
wrong.
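For what it's worth, a bare-bones version of that hypothetical 'record2line' is easy enough to sketch. The one below just tracks string/escape state and brace/bracket depth and folds newlines that occur inside a value; it does no real JSON validation, so treat it as an illustration rather than a tool:

    /* Sketch of the hypothetical "record2line": copy JSON text, folding the
     * newlines that occur inside an object or array so that each top-level
     * value comes out on a single line. No validation, no cleverness.      */
    #include <stdio.h>

    int main(void)
    {
        int c, depth = 0, instr = 0, esc = 0;

        while ((c = getchar()) != EOF) {
            if (instr) {                            /* inside a "..." string */
                if (esc)            esc = 0;
                else if (c == '\\') esc = 1;
                else if (c == '"')  instr = 0;
            } else {
                if (c == '"')                       instr = 1;
                else if (c == '{' || c == '[')      depth++;
                else if (c == '}' || c == ']')      depth--;
                else if (c == '\n') {               /* fold or drop newlines */
                    if (depth > 0)
                        putchar(' ');
                    continue;
                }
            }
            putchar(c);
            if (depth == 0 && (c == '}' || c == ']'))
                putchar('\n');                      /* one record per line   */
        }
        return 0;
    }

With something like that in front, `record2line < objects.json | grep pattern` behaves like the usual line-oriented tools - which is exactly the kind of convention-restoring shim the Unix model keeps asking for.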
- Dan C.
[I originally asked the following on Twitter which was probably not the smartest idea]
I was recently wondering about the origins of Linux, i.e. Linus Torvalds doing his MSc and deciding to write Linux (the kernel) for the i386 because Minix did not support the i386 properly. While this is perfectly understandable, I was trying to understand why, as he was in academia, he did not decide to write a “free X” for a different X. The example I picked was Plan 9, simply because I always liked it, but X could be any number of other operating systems which he would have been exposed to in academia. This all started in my mind because I was thinking about my friends who were CompSci university students with me at the time: they were into all sorts of esoteric stuff like Miranda-based operating systems, building a complete interface builder for X11 on SunOS including sparkly mouse pointers, etc. (I guess you could define it as “the usual frivolous MSc projects”), and I was comparing their choices with Linus’.
The answers I got varied from “the world needed a free Unix and BSD was embroiled in the AT&T lawsuit at the time” to “Plan 9 also had a restrictive license” (to the latter my response was that “so did Unix and that’s why Linus built Linux!”), but I don’t feel any of the answers addressed my underlying question: what was it about his exposure to other operating systems that made Unix the choice?
Personally I feel that, given the current architecture of the world, we’d be better off if we had a distributed OS now instead of Linux, so I am sad that "Linux is not Plan 9", which is what prompted the question.
Obviously I am most grateful for being able to boot the Mathematics department’s MS-DOS i486 machines with Linux 0.12 floppy disks and not having to code Fortran 77 in Notepad, and for eventually taking over the department with Linux-based X terminals connected to the departmental servers (Sun, DEC Alpha, IBM RS/6000s). Before Linux they had been running eXceed on Windows 3.11! In this respect Linux definitely filled a huge gap.
Arrigo
Hi,
Have you ever used shell level, $SHLVL, in your weekly-to-daily use of Unix?
I had largely dismissed it until a recent conversation in a newsgroup.
I learned that shelling out of programs also increments the shell level,
e.g. :shell or :!/bin/sh in vim.
Someone also mentioned quickly starting a new sub-shell from the current
shell for quick transient tasks, e.g. dc / bc, mount / cp / unmount,
{,r,s}cp, etc., in an existing terminal window to avoid cluttering that
first terminal's history with the transient commands.
That got me to wondering if there were other uses for shell level
($SHLVL). Hence my question.
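(To put the mechanism in concrete terms: $SHLVL is just an environment
variable that shells like bash and zsh increment on startup, so anything -
a prompt, a script, or a little helper like the hypothetical C sketch
below, which is not an existing tool - can inspect it to notice nesting.)

    /* Hypothetical helper, not an existing tool: report how deeply nested
     * the current shell is, based on the $SHLVL environment variable that
     * shells such as bash and zsh increment when they start up.           */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *s = getenv("SHLVL");
        int lvl = s ? atoi(s) : 0;

        if (lvl > 1)
            printf("nested shell (SHLVL=%d) -- did you mean to exit?\n", lvl);
        else
            printf("top-level shell (SHLVL=%d)\n", lvl);
        return 0;
    }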
This is more about using (contemporary) shells on Unix, than it is about
Unix history. But I suspect that TUHS is one of the best places to find
the most people that are likely to know about shell level. Feel free to
reply to COFF if it would be better there.
--
Grant. . . .
unix || die
I thought Benno Rice’s argument a bit disorganized and ultimately unconvincing, but I think the underlying point - that we should from time to time step back a bit and review fundamentals - has some merit. Unfortunately he does not distinguish much between a poor concept and a poor implementation.
For example, what does “everything is a file” mean in Unix?
- Devices and files are accessed through the same small API?
- All I/O is through unstructured byte streams?
- I/O is accessed via a single unified name space? etc.
Once that is clear, how can the concept then best be applied to USB devices?
Or: is there a fundamental difference between windows-style completion ports and completion signals?
Many of the underlying questions have been considered in the past, with carefully laid out arguments in various papers. In my view it is worthwhile to go back to these papers and see how the arguments pro and contra various approaches were weighed then and considering if the same still holds true today.
Interestingly, several points that Benno touches upon in his talk were also the topic of debate when Unix was transitioning to a 32-bit address space and incorporating networking in the early 80’s, as the TR/4 and TR/3 papers show. Of course, the system that CSRG delivered is different from the ambitions expressed in these papers, and for sure opinions on the best choices differed as much back then as they will now - and that makes for an interesting discussion.
Rich was kind enough to look through the Joyce papers to see if it contained "CSRG Tech Report 4: Proposals for Unix on the VAX”. It did.
As list regulars will know I’ve been looking for that paper for years as it documents the early ideas for networking and IPC in what was to become 4.2BSD.
It is an intriguing paper that discusses a network API that is imo fairly different from what ended up being in 4.1a and 4.2BSD. It confirms Kirk McKusick’s recollection that the select statement was modelled after the Ada select statement. It also confirms Clem Cole’s recollection that the initial ideas for 4.2BSD were significantly influenced by the ideas of Richard Rashid (Aleph/Accent/Mach).
Besides IPC and networking, it also discusses file systems and a wide array of potential improvements in various other areas.
> If you search for "Jolitz"
Oh, I meant in the DDJ search box, not a general Web search.
> One of the items listed in WP, "Copyright, Copyleft, and Competitive
> Advantage" (Apr/1991) wasn't in the search results .. Since it's not in
> the 'releases' page, it might not really be part of the series?
Also, the last article in the series ("The Final Step") says the series was 17
articles long, not the 18 you get if you include "Copyright".
Noel
>Date: Tue, 07 Jan 2020 14:57:40 -0500.
>From: Doug McIlroy <>
>To: tuhs(a)tuhs.org, thomas.paulsen(a)firemail.de
>Subject: Re: [TUHS] screen editors
>Message-ID: <202001071957.007JveQu169574(a)coolidge.cs.dartmouth.edu>
>Content-Type: text/plain; charset=us-ascii
.. snip ..
>% wc -c /bin/vi bin/sam bin/samterm
>1706152 /bin/vi
> 112208 bin/sam
> 153624 bin/samterm
>These numbers are from Red Hat Linux.
>The 6:1 discrepancy is understated because
>vi is stripped and the sam files are not.
>All are 64-bit, dynamically linked.
That's a really big vi in RHL. Looking at a few (commercial) Unixes I get:
SCO UNIX 3.2V4.2 132898 Aug 22 1996 /usr/bin/vi
- /usr/bin/vi: iAPX 386 executable
Tru64 V5.1B-5 331552 Aug 21 2010 /usr/bin/vi
- /usr/bin/vi: COFF format alpha dynamically linked, demand paged
sticky executable or object module stripped - version 3.13-14
HP-UX 11.31 748996 Aug 28 2009 /bin/vi
-- /bin/vi: ELF-32 executable object file - IA64
I'm trying to grab some stuff from bitsavers.org. It seems to be failing to
look up name records. I'd send mail directly to Al, but the only address I
have for him is at bitsavers.org :(
Anybody have a better contact or good back-channel to Al?
Warner
I would imagine that the userland changes made their way into 386 Mach, although I can't recall seeing anything about 386 commits in userland until much later.
Maybe one day more of that Mt Xinu stuff will surface, although I'm still amazed I got the kernel to build.
Internet legend is that the rift was massive.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Larry McVoy <lm(a)mcvoy.com>
Sent: Sunday, January 19, 2020, 12:26 a.m.
To: Greg 'groggy' Lehey
Cc: UNIX Heritage Society
Subject: Re: [TUHS] Early Linux and BSD (was: On the origins of Linux - "an academic question")
On Sat, Jan 18, 2020 at 03:19:13PM +1100, Greg 'groggy' Lehey wrote:
> On Friday, 17 January 2020 at 22:50:51 -0500, Theodore Y. Ts'o wrote:
> >
> > In the super-early days (late 1991, early 1992), those of us who
> > worked on it just wanted a "something Unix-like" that we could run at
> > home (my first computer was a 40 MHz 386 with 16 MB of memory). This
> > was before the AT&T/BSD Lawsuit (which was in 1992) and while Jolitz
> > may have been demonstrating 386BSD in private, I was certainly never
> > aware of it
>
> At the start of this time, Bill was working for BSDI, who were
> preparing a commercial product that (in March 1992) became BSD/386.
Wikipedia says he was working on 386BSD as early as 1989 and that
clicks with me (Jolitz worked for me around 1992 or 3). I don't
remember him mentioning working at BSDI, are you sure about that
part? Those guys did not like each other at all.
Ted Ts'o mentioned Bruce Evans in a reply to "On the origins of
Linux". I'm really sorry to have to announce that he died last month.
His family is holding a "small farewell gathering" in Sydney in late
February. To quote his sister Julie Saravanos:
We would be pleased if you, or any other BSD/computer friend, came
There's no date yet, and I don't think it's appropriate to broadcast
details. If anybody is interested, please contact Warren or me.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
> but…damn, even ex/vi 3.x is huge
It was so excessive right from the start that I refused to use it.
Sam was the first screen editor that I deemed worthwhile, and I
still use it today.
Doug
If all those CVS repositories, GeoCities sites and Yahoo groups are any indicator, it's going to be up to people to put the past onto plastic and get it out there.
If anything, right now it's the utzoo archives, along with people posting source and patches to Usenet, that have survived...
Not to mention all those shovelware CD-ROMs from the 90s that ironically preserved so much early free software and other gems of the pre-Linux/NT world.
GitHub will eventually be shuttered like anything else and all that will remain is dead links. It really needs to be distributed by nature, but then you have people using GitHub as cloud storage of all things.
I don't think the CSRG CD's were hot sellers, and I couldn't imagine getting utzoo or TUHS pressed... Although maybe it's something to look at.
It might be interesting.
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> on behalf of Lars Brinkhoff <lars(a)nocrew.org>
Sent: Friday, January 17, 2020, 2:47 p.m.
To: Warren Toomey
Cc: tuhs(a)tuhs.org
Subject: Re: [TUHS] History of TUHS
Warren Toomey wrote:
> Heh, I hadn't thought that TUHS itself should now be considered
> historical
I often imagine future historians 100 years from now poring over
mailing list archives and bitrotted GitHub repositories, including those
that contain historical research. Metahistory maybe?
Hello people in the future! How's the singularity treating you?
Sorry about the climate.
Is there a history of TUHS page I've missed?
When was it formed? Was it an outgrowth of PUPS? etc.
Again, I'm working on a talk and would like to include some of this
information and it made me think that the history of the historians should
be documented too.
Warner
TL; DR. I'm trying to find the best possible home for some dead trees.
I have about a foot-high stack of manila folders containing "early Unix papers". They have been boxed up for a few decades, but appear to be in perfect condition. I inherited this collection from Jim Joyce, who taught the first Unix course at UC Berkeley and went on to run a series of ventures in Unix-related bookselling, instruction, publishing, etc.
I don't think the collection has much financial value, but I suspect that some of the papers may have historical significance. Indeed, some of them may not be available in any other form, so they definitely should be scanned in and republished.
I also have a variety of newer materials, including full sets of BSD manuals, SunExpert and Unix Review issues, along with a lot of books and course handouts and maybe a SUGtape or two. I'd like to donate these materials to an institution that will take care of them, make them available to interested parties, etc. Here are some suggested recipients:
- The Computer History Museum (Mountain View, CA, USA)
- The Internet Archive (San Francisco, CA, USA)
- The Living Computers Museum (Seattle, WA, USA)
- The UC Berkeley Library (Berkeley, CA, USA)
- The Unix Heritage Society (Australia?)
- The USENIX Association (Berkeley, CA, USA)
According to Warren Toomey, TUHS probably isn't the best possibility. The Good News about most of the others is that I can get materials to them in the back of my car. However, I may be overlooking some better possibility, so I am following Warren's suggestion and asking here. I'm open to any suggestions that have a convincing rationale.
Now, open for suggestions (ducks)...
-r
I just found out about TUHS today; I plan to skim the archives RSN to get some context. Meanwhile, this note is a somewhat long-winded introduction, followed by a (non-monetary) sales pitch. I think some of the introduction may be interesting and/or relevant to the pitch, but YMMV...
Introduction
In 1970, I was introduced to programming by a cabal of social science professors at SF State College. They had set up a lab space with a few IBM 2741 (I/O Selectric) terminals, connected by dedicated lines to Stanford's Wylbur system. I managed to wangle a spot as a student assistant and never looked back. I also played a tiny bit with a PDP-12 in a bio lab and ran one (1) program on SFSC's "production system", an IBM 1620 Mark II (yep; it's a computer...).
While a student, I actually got paid to work with a CDC 3150, a DEC PDP-15, and (once) on an IBM 360/30. After that, I had some Real Jobs: assembler on a Varian 620i and a PDP-11, COBOL on an IBM mainframe, Fortran on assorted CDC and assorted DEC machines, etc.
By the early 80's, my personal computers were a pair of aging LSI-11's, running RT-11. At work (Naval Research Lab, in DC), I was mostly using TOPS-10 and VAX/VMS. I wanted to upgrade my home system and knew that I wanted all the cool stuff: a bit-mapped screen, multiprocessing, virtual memory, etc.
There was no way I could afford to buy this sort of setup from DEC, but my friend Jim Joyce had been telling me about Unix for a few years, so I attended the Boston USENIX in 1982 (sharing a cheap hotel room with Dick Karpinski :-) and wandered around looking at the workstation offerings. I made a bet on Sun (buying stock would have been far more lucrative, but also more risky and less fun) and ended up buying Sun #285 from John Gage.
At one point, John was wandering around Sun, asking for a slogan that Sun could use on a conference button to indicate how they differed from the competition. I suggested "The Joy of Unix", which he immediately adopted. This decision wasn't totally appreciated by some USENIX attendees from Murray Hill, who printed up (using troff, one presumes) and wore individualized paper badges proclaiming themselves as "The <whatever> of Unix". Imitation is the sincerest form of flattery... (bows)
IIRC, I received my Sun-1 late in a week (of course :-), but managed to set it up with fairly little pain. I got some help from someone named Bill, who happened to be in the office on the weekend ... seemed quite competent ... I ran for a position on the Sun User Group board, saying that I would try to protect the interests of the "smaller" users. I think I was able to do some good in that position, not least because I was able to get John Gilmore and the Sun lawyers to agree on a legal notice, edit some SUGtapes, etc.
Later on, I morphed this effort into Prime Time Freeware, which produced book/CD collections of what is now called Open Source software. Back when there were trade magazines, I also wrote a few hundred articles for Unix Review, SunExpert, etc. Of course, I continue to play (happily) with computers...
Perkify
If you waded through all of that introduction, you'll have figured out that I'm a big fan of making libre software more available, usable, etc. This actually leads into Perkify, one of my current projects. Perkify is (at heart) a blind-friendly virtual machine, based on Ubuntu, Vagrant, and VirtualBox. As you might expect, it has a strong emphasis on text-based programs, which Unix (and Linux) have in large quantities.
However, Perkify's charter has expanded quite a bit. At some point, I realized that (within limits) there was very little point to worrying about how big the Vagrant "box" became. After all, a couple of dozen GB of storage is no longer an issue, and having a big VM on the disk (or even running) doesn't slow anything down. So, the current distro weighs in at about 10 GB and 4,000 or so APT packages (mostly brought in as dependencies or recommendations). Think of it as "a well-equipped workshop, just down the hall". For details, see:
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Intro/main.toml
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Index/main.toml
Sales Pitch
I note that assorted folks on this list are trying to dig up copies of Ken's Space Travel program. Amusingly, I was making the same search just the other day. However, finding software that can be made to run on Ubuntu is only part of the challenge I face; I also need to come up with APT (or whatever) packages that Just Work when I add them to the distribution.
So, here's the pitch. Help me (and others) to create packages for use in Perkify and other Debian-derived distros. The result will be software that has reliable repos, distribution, etc. It may also help the code to live on after you and I are no longer able (or simply interested enough) to keep it going.
-r