Hi,
Everyone on the list is well aware that running V7 in a modern simulator
like SIMH is not a period-realistic environment, and some of the
"problems" facing the novice enthusiast are considerably different from
those of the era (my terminal is orders of magnitude faster and my
"tape" is a file on a disk). However, many of the challenges facing
someone in 1980 remain for the enthusiast, such as how to run various
commands successfully and how devices interoperate with unix. Of course,
we do have resources and some overlapping experience to draw on -
duckduckgo (googleish), tuhs member experience, and exposure to modern
derivatives like linux, macos, bsd, etc. We also have documentation of
the system in the form of the Programmer's Guide - as pdfs and to some
degree as man pages on the system (haven't found volume 2 documentation
on the instance).
My question for you citizens of that long-ago era :), is this - what was
it like to sit down and learn unix V7 on a PDP? Not from a hardware or
ergonomics perspective, but from a human information processing
perspective. What resources did you consult in your early days and what
did the workflow look like, in practical terms?
As an example - today, when I want to know how to accomplish a task in
modern unix, I:
1. Search my own experience and knowledge. If I know the answer, duh, I
know it.
2. Decide if I have enough information about the task to guess at the
requisite commands. If I do, then man command is my friend. If not,
I try man -k task or apropos task where task is a one word summary
of what I'm trying to accomplish.
3. If that fails, then I search for the task online and try what other
folks have done in similar circumstances.
4. If that fails, then I look for an OS specific help list
(linux-mint-help, freebsd forums, etc), do another search there, and
post a question.
5. If that fails, or takes a while, and I know someone with enough
knowledge to help, I ask them.
6. I find and scan relevant literature or books on the subject for
something related.
Repeat as needed.
Programming requires some additional steps:
1. look at source files including headers and code.
2. look at library dependencies
3. ask on dev lists
but otherwise, is similar.
In V7, it's trickier because apropos doesn't exist (nor its functional
equivalent, man -k), and books are hard to find (most deal with System V
or BSD). I do find the command 'find /usr/man -print | grep task' useful
for finding man pages, but it's not as general as apropos.
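The file-name search can be pushed a little further by grepping the page
contents instead, which gets closer to apropos. A sketch (the name
'whatabout' is my own invention, and it uses a shell function for
convenience - the V7 Bourne shell has no functions, so there it would be
a standalone script):

```shell
# whatabout: a poor man's apropos - list the man page files under a
# manual tree that mention a keyword anywhere in their text.
# Usage: whatabout keyword [mandir]   (mandir defaults to /usr/man)
whatabout() {
    keyword=$1
    mandir=${2-/usr/man}
    # -l prints only the names of matching files, one per matching page
    grep -l "$keyword" "$mandir"/man?/* 2>/dev/null
}
```

It is noisier than a real apropos, which matches only the one-line NAME
summaries, but it narrows things down quickly.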
So, what was the process of learning unix like in the V7 days? What were
your go-to resources? More than just man and the sources? Any particular
notes, articles, posts, or books that were really helpful? (I found the
article "The Unix Programming Environment" by Kernighan and Mashey - the
article, not the later Kernighan and Pike book of the same name - to be
enlightening: https://www.computer.org/csdl/mags/co/1981/04/01667315.pdf)
Regards,
Will
Reading AUUGN vol. 1, number 4, p. 15: in a letter dated April 5, 1979,
Alistair Kilgour (Glasgow) writes to Ian Johnstone (New South Wales)
about a Unix meeting in the U.K. at the University of Kent at Canterbury
(150 attended the meeting), with Ken Thompson and Brian Kernighan...
Two paragraphs that I found interesting and fun:
Most U.K. users were astonished to hear that one thing which has
_not_ changed in Version 7 is the default for "delete character" and
"delete line" in the teletype handler - we thought we'd seen the last of
# and @! What was very clear was that version 7 is a "snapshot" of a
still developing system, and indeed neither speaker seemed quite sure of
when the snapshot was taken or exactly what it contained. The general
feeling among users at the meeting was that the new tools provided with
version 7 were too good to resist (though many had doubts about the new
Shell). We were however relieved by the assurance that there would
_never_ be a version 8!
...
Finally a quotation, attributed to Steve Johnstone, with which
Brian Kernighan introduced his excellent sales campaign for Unix on the
first day of the conference: " Using TSO is like kicking a dead whale
along the beach". Unix rules.
...
I knew it, it's not just me - those pesky # and @ characters were and
still are really annoying! Oh, and never say never. Unix does rule :).
Will
--
GPG Fingerprint: 68F4 B3BD 1730 555A 4462 7D45 3EAA 5B6D A982 BAAF
I’m trying to understand the origins of void pointers in C. I think they first appeared formally in the C89 spec, but may have existed in earlier compilers.
Of course, as late as V7 there wasn’t much enforcement of types and hence no need for the void* concept and automatic casting. I suppose ‘lint’ would have flagged it though.
In the 4BSD era there was caddr_t, which I think was used for pretty much the same purposes. Did ‘lint’ in the 4BSD era complain about omitted casts to and fro’ caddr_t?
Background to my question is research into the evolution of the socket API in 4.1x BSD and the persistence of ‘struct sockaddr *’ in actual code, even though the design papers show an intent for ‘caddr_t’ (presumably with ‘void*’ semantics, but I’m not sure).
Paul
> From: Will Senn
> what was it like to sit down and learn unix V7 on a PDP? ... What
> resources did you consult in your early days
Well, I started by reading through the UPM (the 8-section thing, with commands
in I, system calls in II, etc). I also read a lot of Unix documentation which
came as larger documents (e.g. the Unix Intro, C Tutorial and spec, etc.).
I should point out that back then, this was a feasible task. Most man pages
were really _a_ page, and often a short one. By the end of my time on the PWB1
system, there were about 300 commands in /bin (which includes sections II, VI
and VIII), but a good chunk (I'd say probably 50 or more) were ones we'd
written. So there were not that many to start with (section II was maybe 3/4"
of paper), and you could read the UPM in a couple of hours. (I read through it
more than once; you'd get more retained, mentally, on each pass.)
There were no Unix people at all in the group at MIT which I joined, so I
couldn't ask around; there were a bunch in another group on the floor below,
although I didn't use them much - mostly it was RTFM.
Mailing lists? Books? Fuhgeddaboutit!
My next step in learning the kernel was to start reading the sources. (I
didn't have access to Lions.) I did a 'cref' of the entire system, and
transferred the results to a large piece of paper, so I could see who was
calling who in the kernel.
> What were your goto resources? More than just man and the sources?
That's all there was!
I should point out that reading the sources to command 'x' taught you more
than just how 'x' worked - you saw how people interacted with the kernel, what
it could do, etc, etc.
Noel
> I'd been moving in this direction for a while
Now that I think about it, I may have subconsciously absorbed this from Ken's
and Dennis' code in the V6 kernel. Go take a look at it: more than one level
of indent is quite rare in anything there (including tty.c, which has some
pretty complex stuff in it).
I don't know if this was subconscious but deliberate, or conscious, or just
something they did for other reasons (e.g. typing long lines took too long on
their TTY's :-), but it's very noticeable, and consistent.
It's interesting that both seem to have had the same style; tty.c is in dmr/, so
presumably Dennis', but the stuff in ken/ is the same way.
Oh, one other trick for simplifying code structure (which I noticed looking
through the V6 code a bit - this was something they _didn't_ do in one place,
which I would have done): if you have
    if (<condition>) {
            <some complex piece of code>
    }
    <implicit return>
}

invert it:

    if (!<condition>)
            return;
    <some complex piece of code>
}
That gets that whole complex piece of code down one level of nesting.
Noel
I sometimes put the following at the beginning of shell scripts:
> /tmp/foo
2>/tmp/foo_err
Drives some folks up the wall because they don’t get it.
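The trick: a redirection with no command still makes the shell open the
file, so each line just creates (or truncates) its file and does nothing
else - the script starts every run with empty output files. A sketch of
the effect (the echo line and its appended message are my own additions,
purely for illustration):

```shell
# A redirection with no command still opens its file for writing
# (create-or-truncate), so these two lines empty the files and do
# nothing else.
> /tmp/foo
2> /tmp/foo_err
# The rest of the script can then append to known-empty files:
echo "starting run" >> /tmp/foo
```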
David
> On Nov 8, 2017, at 3:21 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> From: Dave Horsfall <dave(a)horsfall.org>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] pre-more pager?
>
> On Wed, 8 Nov 2017, Arthur Krewat wrote:
>
>> head -20 < filename
>
> Or if you really want to confuse the newbies:
>
> < filename head -20
I am trying to find a paper. It was written at Bell Labs,
I thought by Bill Cheswick (though I cannot find it in his name),
entitled something like:
"A hacker caught, and examined"
A description of how a hacker got into Bell Labs, and was quarantined on
a single workstation whilst the staff watched what they did.
Does this ring any bells? Anyone have a link?
I know about The Cuckoo's Egg, but this was a paper, in troff and -ms macros
as I remember, not a book.
Thanks,
-Steve
On 10 November 2017 at 10:50, Nemo <cym224(a)gmail.com> wrote:
> On 10 November 2017 at 04:56, Alec Muffett <alec.muffett(a)gmail.com> wrote:
>> http://www.cheswick.com/ches/papers/berferd.pdf ?
>
> Wonderful! I first read this as an appendix in his book and now I
> know a second edition of the book is out.
>
> N.
Egg on my face (and keyboard): (1) I failed to send this to the list;
and (2) I already have both editions.
Apologies, all, especially to Alec.
N.
I happened to come across a 1975 report from the University of Warwick
which includes a section on the state of computer networking.
(https://stacks.stanford.edu/file/druid:rq704hx4375/rq704hx4375.pdf)
It contains a section that appears to be a summary of a chat with Sandy
Fraser about Spider (pp. 66-69). It has some information on Spider network
software and Unix that is new to me, and I find amazing. I had not expected
some of this stuff to exist in 1975.
Below are some of the noteworthy paragraphs:
[quote]
Spider is an experimental packet switched data communications system that
provides 64 full-duplex asynchronous channels to each connected terminal
(= host computer). The maximum data-rate per channel is 500K bits/sec. Error
control by packet retransmission is automatic and transparent to users.
Terminals are connected to a central switching computer by carrier transmission
loops operating at 1.544 Mb/s, each of which serves several terminals. The
interface between the transmission loop and a terminal contains a stored program
microcomputer. Cooperation between microcomputers and the central switching
computer effects traffic control, error control, and various other functions
related to network operation and maintenance.
The current system contains just one loop with the switching computer (TEMPO I),
four PDP-11/45 computers, two Honeywell 516 computers, two DDP 224 computers,
and one each of Honeywell 6070, PDP-8 and PDP-11/20. In fact many of these are
connected in turn to other items of digital equipment.
Spider has been running since 1972 and recent work has shifted away from the
design and construction of the network itself to developing user services to
support other research activities at Bell Labs. A major example of this has
been to construct a free-standing file store (extracted in fact from UNIX [88])
and connect it to the network. This is available as a service to any user of
the network.
[...]
The ring is used in different ways by the various computers connected to it.
The filestore has already been mentioned. Two computers use this for conventional
back-up, and access it on a file-by-file basis.
Two other machines - dedicated to laboratory experiments - access it on a
block-within-file basis. To help with program development for these dedicated
machines, the UNIX system (available on yet more computers) is used during
"unavailable" periods for editing and program preparation. The user then leaves
his programs in the filestore ready to load when he next gains access to the
dedicated machine.
Two other "dedicated" machines provide the user interface of UNIX, but lack
peripherals and a UNIX kernel! In place of both is a small amount of software
that transmits all calls on the UNIX system to a full UNIX system elsewhere!
The ring system with its filestore also provides a convenient buffering service.
Finally Fraser tells of the time where one of the PDP-11 machines was delivered
sans discs. A small alteration to a UNIX system diverted all disc transfer
requests to the filestore, where a suitable amount of disc was made available.
The system ran a full swapping version of UNIX at about a quarter of the
normal speed.
[/quote]
> From: Jon Forrest
> In the early days of Unix I was told that it wasn't practical to write a
> pager because such a thing would have to run in raw mode in order to
> process single letter commands, such as the space character for going on
> to the next page. Since raw mode introduced a significant amount of
> overhead on already overtaxed machines, it was considered an anti-social
> thing to do.
Something sounds odd here.
One could have written a pager which used 'Return' for each new page, run
it in cooked mode, and it would not have used any fewer cycles (in fact,
more, IIRC the cooked/raw differences in handling in the TTY driver).
But that's all second-order effects anyway. If one were using a serial line
hooked up to a DZ (and those were common - DH's were _much_ more expensive, so
poor places like my lab at MIT used DZ's), then _every character printed_
caused an interrupt. So the overhead from printing each screen-ful of text was
orders of magnitude greater than the overhead of the user's input to get the
next screen.
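To make that first point concrete, here is a sketch of a Return-per-page
pager that works entirely in cooked mode (the name, the default page size,
and the use of a shell function are my own choices, not anything from a
historical pager; on V7 itself it would be a standalone script, since that
Bourne shell lacks functions):

```shell
# pg_ret: print a screenful of stdin, then wait for the user to press
# Return before printing the next one. Ordinary line-at-a-time (cooked)
# tty input is all that 'read' needs - no raw mode anywhere.
pg_ret() {
    page=${1-23}
    n=0
    while IFS= read -r line
    do
        printf '%s\n' "$line"
        n=`expr $n + 1`
        if [ "$n" -ge "$page" ]
        then
            n=0
            read junk < /dev/tty    # blocks until Return is typed
        fi
    done
}
```

Run as, say, 'pg_ret 23 < /usr/man/man1/sh.1'. Of course, every character
of the output still costs an interrupt on a DZ, which is the larger point
above.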
> IIRC later versions of Unix added the ability to respond to a specific
> list of single characters without going into raw mode. Of course, that
> didn't help when full-screen editors like vi and the Rand editor came
> out.
Overhead was definitely an issue with EMACS on Multics, where waking up a
process on each character of input was significant. I think Bernie's Multics
EMACS document discusses this. I'm pretty sure they used the Telnet RCTE
option to try and minimize the overhead.
Noel