> From: Will Senn
> anything similar to modern behavior when handling the delete/backspace
> key where the character is deleted from the input and rubbed out? The
> default, like in v6/v7 for erase and kill is # and @. I can live with
> this, if I can't get it to do the rubout, because at least you can see
> the # in the input
I use ASCII 'backspace' (^H) on my V6, and it 'sort of' works; it doesn't
erase the deleted character on the screen, but if one then types corrected
characters, they overlay the deleted ones, leaving the corrected input. That
should work on everything later than V6.
The MIT PWB1 tty handler (link in a prior message) not only supported a 'kill
line' (we generally used '^U') which actually visibly deleted the old line
contents (on screen terminals, of course; on printing terminals you're
stuck), it also had support for '^R' (re-type line) and some other stuff.
Noel
Did svr2 have anything similar to modern behavior when handling the
delete/backspace key, where the character is deleted from the input and
rubbed out? The defaults for erase and kill, as in v6/v7, are # and @. I
can live with this if I can't get it to do the rubout, because at least
you can see the # in the input, but if I can figure out how to get it to
rub out the last character, I'd map erase to DEL, which I believe to be
^U (but since it's invisible, it's confusing when it doesn't rub out).
Will
All,
Are there any bootable media for SVR2 systems available online? Or are
they all under IP lock and key? If so, what's the closest available
system to get a feel for that variety of OS?
Happy holidays, folks.
Will
Hi all, I received an e-mail looking for the ksh-88 source code. A quick
search for it on-line doesn't reveal it. Does anybody have a copy?
Cheers, Warren
Original e-mail:
I recently built a PiDP11 and have been enjoying going back in time
to 2.11BSD. I was at UC Davis in the early 1980's and we had
a few PDP-11/70's running 2.8/2.9 BSD. Back then we reached out to
David Korn and he sent us the source for KSH -- this would have been
in 1985ish if I remember, and we compiled it for 2.9 & 4.1BSD, Xenix,
and some other variants that used K&R C. It may have been what was
later called ksh88. I wish I still had the files from then..
I was wondering if you might know if there's an older version like this
or one that's been ported for 2.11BSD?
Many thanks,
Joe
Hey Warren,
First and foremost: thank you so much for maintaining this mailing list, and for including me on the subscribers list. I find myself intrigued by some of the topics that carry over to the “COFF” mailing list. Could you include me on that mailing list as well?
Peace.
Thomas Paulsen:
bash is clearly more advanced. ksh is retro computing.
====
Shell wars are, in the end, no more interesting than editor wars.
I use bash on Linux systems because it's the least-poorly
supported of the Bourne-family shells, besides which bash
is there by default. Ksh isn't.
I use ksh on OpenBSD systems because it's the least-poorly
supported of the Bourne-family shells, besides which ksh
is there by default. Bash isn't.
I don't actually care for most of the extra crap in either
of those shells. I don't want my shell to do line editing
or auto-completion, and I find the csh-derived history
mechanisms more annoying than useful so I turn them off
too. To my mind, the Research 10/e sh had it about right,
including the simple way functions were exported and the
whatis built-in that told you whether something was a
variable or a shell function or an external executable,
and printed the first two in forms easily edited on the
screen and re-used.
Terminal programs that don't let you easily edit input
or output from the screen and re-send it, and programs
that abet them by spouting gratuitous ANSI control
sequences: now THAT's what I call retro-computing.
Probably further discussion of any of this belongs in
COFF.
Norman Wilson
Toronto ON
John Cowan:
Unfortunately, approximately nobody except you has access to
[the 10/e sh] man page. Can you post or email it?
===
I am happy to remind you that you're a few years out of date:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V10/man/man1/sh.1
Norman Wilson
Toronto ON
So a number of Unix luminaries were photographed as part of the "Faces of
Open Source" project. I have to admit, the photos themselves are quite
good: https://www.facesofopensource.com/collect/
It seems that the photographer is now selling NFTs based on those photos,
which is...a thing.
- Dan C.
> From: Paul Ruizendaal
> Does anyone remember, was this a real life bug back in 6th edition
The 'V6' at MIT (actually, PWB1) never had an issue, but then again,
its TTY driver (here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/mit/dmr/tty.c
if anyone wants to see it) was heavily re-written. But from the below,
it's almost certainly nothing to do with the TTY code...
> From: Dave Plonka
> one experiment we did was to redirect the bas(1)ic program's output
> to a file and what we found was that (a) characters would still
> sometimes be lost
Good test.
If you all want to chase this down (I can lend V6 expertise, if needed), I'd
say the first step is to work out whether it's the application, or the
system, losing the characters. To do that, I'd put a little bit of code in
write() to store a copy of data sent through that in a circular buffer, along
with tagging it with the writing process, etc.
Once you figure out where it's getting lost, then you can move on to
how/why.
> From: Clem Cole
> First Sixth Edition does not have support for either the 11/23
Yeah, but it's super-trivial to add /23 support to V6:
http://gunkies.org/wiki/Running_UNIX_V6_on_an_-11/23
Changes are needed in only a few places (no LKS register, no switch register,
and support for more than 256KB of main memory - and one can get by without
that last one), and it's hard to see how any of them could cause this problem.
> One other thought, I'm pretty sure that Noel's V6+ system from MIT can
> support a 23
No, we never ran that on a /23 BITD (no need, no mass storage); and I have
yet to bring the V6+ system up (although I have all the bits, and intend,
at some point, to get its TCP/IP running). I've been using stock (well,
hacked a bit, in a number of ways - e.g. 8-bit serial line output) V6.
Noel
I am making some slow progress on the topic of John Reiser’s 32V with virtual memory.
Two more names popped up of folks who worked with his virtual memory code base at Bell Labs / USG in the early 80’s: Robert (Bob) Baron and Jim McCormick. Bob Baron later worked on Mach at CMU.
If anybody on this list has contact suggestions for these two folks, please send a private message.
Paul
> While doing some end of year retrocomputing revisiting, I thought some
> of you might enjoy this - there is hope for the next generation(s)! ;)
> https://www.youtube.com/watch?v=_Zyng5Ob-e8
Thanks for that video link!
I noticed the bit at the end about V6 and the occasional dropped character and that this was not a serial line issue. I have the same issue in my V6 port to the TI-990 and always assumed that it was a bug I introduced myself when hacking the tty driver.
Does anyone remember, was this a real-life bug in 6th edition back in the 1970’s? Maybe only showing at higher baud rates?
Paul
> there was a commercial package called Spag which claimed to un-spaghettify your code, which i always wanted but could never afford.
You needed struct(1) in v7. It did precisely that, converting Fortran
to Ratfor. Amazingly (to me, anyway) it embodied a theorem: a Fortran
program has a canonical form. People found the converted code to be
easier to understand--even when they had written the original code
themselves.
Doug
hi,
having supported Pafec and then in a different job flow3d, i was most interested in anything that could make large fortran packages more manageable.
there was a commercial package called Spag which claimed to un-spaghettify your code, which i always wanted but could never afford.
the best i managed was sed and awk scripts to split huge fortran files into one file per function and build a makefile. this at least made rebuilds quicker.
i do not miss maintaining fortran code hacked by dozens of people over many decades.
-Steve
Hi folks!
While doing some end of year retrocomputing revisiting, I thought some
of you might enjoy this - there is hope for the next generation(s)! ;)
https://www.youtube.com/watch?v=_Zyng5Ob-e8
In this video I share my personal pick for "best" demo at VCF
Midwest: Gavin's PDP 11/23 running UNIX Version 6! We write and run a
simple BASIC program in Ken Thompson's bas(1), finding some quirks
with this (currently) entirely floppy-based system, possibly having to
do with a glitch in disk I/O. (We discovered bas(1) uses a temporary
file as backing store.)
Filmed at the Vintage Computer Festival Midwest: VCF Midwest 16,
September 11, 2021
http://vcfmw.org/
Here's the source code to the simple program we wrote; you can also
run it on modern machines if you install a Research UNIX version using
SimH (pdp-11 simulator).
5 goto 30
10 for col = 1 arg(1)
12 prompt " "
14 next
20 print "Welcome to VCF Midwest!"
25 return
30 for x = 0 55
40 10(x)
50 next
60 for x = _56 _1
70 10(_x)
80 next
--
dave(a)plonka.us http://www.cs.wisc.edu/~plonka/
Hi TUHS folks!
After having reincarnated ratfor, I am wondering about Stuart Feldman's
efl (extended fortran language). It was a real compiler that let you
define structs, and generated more or less readable Fortran code.
I have the impression that it was pretty cool, but that it just didn't
catch on. So:
- Did anyone here ever use it personally?
- Is my impression that it didn't catch on correct? Or am I ignorant?
Thoughts etc. welcome. :-)
Thanks,
Arnold
Spurred on by Bryan, I thought I should properly introduce myself:
I am a fairly young Unix devotee, having gotten my start with System V on a Wang word processing system (believe it or not, they made one!), at my mother’s office, in the late 1980s. My first personal system, which ran SLS Linux, came about in 1992.
I am a member of the Vintage Computing Federation, and have given talks and made exhibits on Unix history at VCF’s museum, in Wall, New Jersey. I have also had the pleasure of showing Brian Kernighan and Ken Thompson, two of my computing heroes, my exhibit on the origins of BSD Unix on the Intel 386. I learned C from Brian’s book, as probably did many others here.
I have spent my entire professional career supporting Unix, in some form or another. I started with SunOS at the National Institutes of Health, in Bethesda, Maryland, and moved on to Solaris, HP-UX, SCO, and finally Linux. I worked for AT&T, in Virginia, in the early 2000s, but there were few vestiges of Unix present, other than some 3b1 and 3b2 monitors and keyboards.
I currently work for Red Hat, in Tyson’s Corner, Virginia, as a principal sales engineer, where I spend most of my time teaching and presenting at conferences, both in person and virtual.
Thank you to everyone here who created the tools that have enabled my career and love of computing!
- Alexander Jacocks
Hello!
I have just joined this mailing list recently, and figured I would give
an introduction to myself.
My first encounter with Unix took place in 2006 when I started my
undergraduate studies in Computer Science. The main servers all ran
Solaris, and we accessed them via thin clients. Eventually I wanted a
similar feeling operating system for my personal computer, so that I
could do my assignments without having to always log into the school
servers, and so I came across Linux. I hopped around for a while, but
eventually settled with Slackware for my personal computers. Nowadays I
run a mixture of Linux and BSD operating systems for various purposes.
Unfortunately my day job has me writing desktop software for Windows (no
fun there :(), so I'm thankful to have found a group of people with
similar computing interests as myself, and I look forward to chatting
with you all!
Regards,
Bryan St. Amour
OK, this is my last _civil_ request to stop email-bombing both lists with
traffic. In the future, I will say publicly _exactly_ what I think - and if
screens still had phosphor, it would probably peel it off.
I can see that there are cases when one might validly want to post to both
lists - e.g. when starting a new discusson. However, one of the two should
_always_ be BCC'd, so that simple use of reply won't generate a copy to
both. I would suggest that one might say something like 'this discussion is
probably best continued on the <foo> list' - which could be seeded by BCCing
the _other_.
Thank you.
Noel
http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fission.pdf
Duncan Mak wrote
> Haskell's powerful higher-level functions
> make middling fragments of code very clear, but can compress large
> code to opacity. Jeremy Gibbons, a high priest of functional
> programming, even wrote a paper about deconstructing such wonders for
> improved readability.
>
I went looking for this paper by Jeremy Gibbons here:
https://dblp.org/pid/53/1090.html but didn't find anything resembling it.
What's the name of the paper?
All, I got this e-mail forwarded on from John Fox via Eric S. Raymond.
Cheers, Warren
Hi Eric, I think you might find this interesting.
I have a 2001 copy of your book. I dog-eared page 9 twenty years ago
because of this section:
It spread very rapidly within AT&T, in spite of the lack of any
formal support program for it. By 1980 it had spread to a large
number of university and research computing sites, and thousands
of hackers considered it home.
Regarding the "spread", I believe one of the contributing factors
was AT&T's decision to give the source code away to universities.
And in doing so, it unwittingly provided the fertile soil for open
source development.
I happen to know the man who made that decision. He was my
father-in-law. He died Tuesday. He had no idea what UNIX was, and
had no idea what his decision helped to create. Funny when things we
do have such a major impact without us even knowing. That was
certainly true in this case.
Anyway, I thought you'd be interested to know. His name is John
(Jack) H. Bolt. He was 95.
PS, before making the decision, he called Ken Olson at DEC to see if
he'd be interested in buying it, lock, stock, and barrel. Jack's
opening offer was $250k. Olson wasn't interested. And on that,
Jack's decision was made.
John Fox
>> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
> You seem to have a gift for notation. That's rare. Curious what you think of APL?
I take credit as a go-between, not as an inventor. Ken Knowlton
introduced the notation ABC in BEFLIX, a pixel-based animation
language. Ken didn't need an operator because identifiers were single
letters. I showed Ken's scheme to Bud Lawson, the originator of PL/I's
pointer facility. Bud liked it and came up with the vivid -> notation
to accommodate longer identifiers.
If I had a real gift of notation I would have come up with the pipe
symbol. In my original notation ls|wc was written ls>wc>. Ken Thompson
invented | a couple of months later. That was so influential that
recently, in a paper that had nothing to do with Unix, I saw |
referred to as the "pipe character"!
APL is a fascinating invention, but can be so compact as to be
inscrutable. (I confess not to have practiced APL enough to become
fluent.) In the same vein, Haskell's powerful higher-level functions
make middling fragments of code very clear, but can compress large
code to opacity. Jeremy Gibbons, a high priest of functional
programming, even wrote a paper about deconstructing such wonders for
improved readability.
Human impatience balks at tarrying over a saying that puts so much in
a small space. Yet it helps once you learn it. Try reading transcripts
of medieval Arabic algebra carried out in words rather than symbols.
Iverson's hardware descriptions in APL are another case where
symbology pays off.
Doug
Hi All.
Mainly for fun (sic), I decided to revive the Ratfor (Rational
Fortran) preprocessor. Please see:
https://github.com/arnoldrobbins/ratfor
I started with the V6 code, then added the V7, V8 and V10 versions
on top of it. Each one has its own branch so that you can look
at the original code, if you wish. The man page and the paper from
the V7 manual are also included.
Starting with the Tenth Edition version, I set about to modernize
the code and get it to compile and run on a modern-day system.
(ANSI style declarations and function headers, modern include files,
use of getopt, and most importantly, correct use of Yacc yyval and
yylval variables.)
You will need Berkeley Yacc installed as byacc in order to build it.
I have only touch-tested it, but so far it seems OK. 'make' runs in
about 2 seconds. On my Ubuntu Linux systems, it compiles with
no warnings.
I hope to eventually add a test suite also, if I can steal some time.
Before anyone asks, no, I don't think anybody today has any real use
for it. This was simply "for fun", and because Ratfor has a soft
spot in my heart. "Software Tools" was, for me, the most influential
programming book that I ever read. I don't think there's a better
book to convey the "zen" of Unix.
Thanks,
Arnold
I believe that the PDP-11 ISA was defined at a time when DEC was still using
random logic rather than a control store (which came pretty soon
thereafter). Given a random logic design it's efficient to organize the ISA
encoding to maximize its regularity. Probably also of some benefit to
compilers in a memory-constrained environment?
I'm not sure at what point in time we can say "lots of processors" had moved
to a control store based implementation. Certainly the IBM System/360 was
there in the mid-60's. HP was there by the late 60's.
-----Original Message-----
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> On Behalf Of Larry McVoy
Sent: Monday, November 29, 2021 10:18 PM
To: Clem Cole <clemc(a)ccc.com>
Cc: TUHS main list <tuhs(a)minnie.tuhs.org>; Eugene Miya <eugene(a)soe.ucsc.edu>
Subject: Re: [TUHS] A New History of Modern Computing - my thoughts
On Sun, Nov 28, 2021 at 05:12:44PM -0800, Larry McVoy wrote:
> I remember Ken Witte (my TA for the PDP-11 class) trying to get me to
> see how easy it was to read the octal. If I remember correctly (and I
> probably don't, this was ~40 years ago), the instructions were divided
> into fields, so instruction, operand, operand and it was all regular,
> so you could see that this was some form of an add or whatever, it got
> the values from these registers and put it in that register.
I've looked it up and it is pretty much as Ken described. The weird thing
is that there is no need to do it like the PDP-11 did it, you could use
random numbers for each instruction and lots of processors did pretty much
that. The PDP-11 didn't, it was very uniform to the point that Ken's
ability to read octal made perfect sense. I was never that good but a
little google and reading and I can see how he got there.
...
--lm
For DEC memos on designing the PDP-11, see bitsavers:
http://www.bitsavers.org/pdf/dec/pdp11/memos/
(thank you Bitsavers! I love that archive)
Ad van de Goor (author of a few of the memos) was my MSc thesis professor. I recall him saying in the early 80’s that, in his view, the PDP-11 should have been an 18-bit machine; he reasoned that even in the late 60’s it was obvious that 16 bits of address space was not enough for the lifespan of the design.
---
For those who want to experiment with FPGAs and ancient ISAs, here is my plain Verilog code for the TI 9995 chip, which has an instruction set that is highly reminiscent of the PDP-11:
https://gitlab.com/pnru/cortex/-/tree/master
The actual CPU code (TMS9995.v) is under 1,000 lines.
Paul