>> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
> You seem to have a gift for notation. That's rare. Curious what you think of APL?
I take credit as a go-between, not as an inventor. Ken Knowlton
introduced the notation ABC in BEFLIX, a pixel-based animation
language. Ken didn't need an operator because identifiers were single
letters. I showed Ken's scheme to Bud Lawson, the originator of PL/I's
pointer facility. Bud liked it and came up with the vivid -> notation
to accommodate longer identifiers.
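For anyone who doesn't have the C form in mind, here is a minimal sketch
(plain modern C; the struct names are just illustrative) of what the
arrow notation abbreviates:

    #include <stdio.h>

    struct c { int value; };
    struct b { struct c *C; };
    struct a { struct b *B; };

    int main(void)
    {
        struct c cc = { 42 };
        struct b bb = { &cc };
        struct a aa = { &bb };
        struct a *A = &aa;

        /* The chained arrow form ... */
        printf("%d\n", A->B->C->value);
        /* ... is exactly equivalent to spelling out each dereference: */
        printf("%d\n", (*(*(*A).B).C).value);
        return 0;
    }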
If I had a real gift of notation I would have come up with the pipe
symbol. In my original notation ls|wc was written ls>wc>. Ken Thompson
invented | a couple of months later. That was so influential that
recently, in a paper that had nothing to do with Unix, I saw |
referred to as the "pipe character"!
APL is a fascinating invention, but can be so compact as to be
inscrutable. (I confess not to have practiced APL enough to become
fluent.) In the same vein, Haskell's powerful higher-order functions
make middling fragments of code very clear, but can compress large
code to opacity. Jeremy Gibbons, a high priest of functional
programming, even wrote a paper about deconstructing such wonders for
improved readability.
Human impatience balks at tarrying over a saying that puts so much in
a small space. Yet it helps once you learn it. Try reading transcripts
of medieval Arabic algebra carried out in words rather than symbols.
Iverson's hardware descriptions in APL are another case where
symbology pays off.
Doug
Hi All.
Mainly for fun (sic), I decided to revive the Ratfor (Rational
Fortran) preprocessor. Please see:
https://github.com/arnoldrobbins/ratfor
I started with the V6 code, then added the V7, V8 and V10 versions
on top of it. Each one has its own branch so that you can look
at the original code, if you wish. The man page and the paper from
the V7 manual are also included.
Starting with the Tenth Edition version, I set about to modernize
the code and get it to compile and run on a modern-day system.
(ANSI style declarations and function headers, modern include files,
use of getopt, and most importantly, correct use of Yacc yyval and
yylval variables.)
You will need Berkeley Yacc installed as byacc in order to build it.
I have only touch-tested it, but so far it seems OK. 'make' finishes in
about two seconds. On my Ubuntu Linux systems, it compiles with
no warnings.
I hope to eventually add a test suite also, if I can steal some time.
Before anyone asks, no, I don't think anybody today has any real use
for it. This was simply "for fun", and because Ratfor has a soft
spot in my heart. "Software Tools" was, for me, the most influential
programming book that I ever read. I don't think there's a better
book to convey the "zen" of Unix.
Thanks,
Arnold
I believe that the PDP-11 ISA was defined at a time when DEC was still using
random logic rather than a control store (which came pretty soon
thereafter). Given a random logic design it's efficient to organize the ISA
encoding to maximize its regularity. Probably also of some benefit to
compilers in a memory-constrained environment?
I'm not sure at what point in time we can say "lots of processors" had moved
to a control store based implementation. Certainly the IBM System/360 was
there in the mid-60's. HP was there by the late 60's.
-----Original Message-----
From: TUHS <tuhs-bounces(a)minnie.tuhs.org> On Behalf Of Larry McVoy
Sent: Monday, November 29, 2021 10:18 PM
To: Clem Cole <clemc(a)ccc.com>
Cc: TUHS main list <tuhs(a)minnie.tuhs.org>; Eugene Miya <eugene(a)soe.ucsc.edu>
Subject: Re: [TUHS] A New History of Modern Computing - my thoughts
On Sun, Nov 28, 2021 at 05:12:44PM -0800, Larry McVoy wrote:
> I remember Ken Witte (my TA for the PDP-11 class) trying to get me to
> see how easy it was to read the octal. If I remember correctly (and I
> probably don't, this was ~40 years ago), the instructions were divided
> into fields, so instruction, operand, operand and it was all regular,
> so you could see that this was some form of an add or whatever, it got
> the values from these registers and put it in that register.
I've looked it up and it is pretty much as Ken described. The weird thing
is that there is no need to do it the way the PDP-11 did it: you could
assign essentially random numbers to each instruction, and lots of
processors did pretty much that. The PDP-11 didn't; its encoding is so
uniform that Ken's ability to read octal makes perfect sense. I was never
that good, but after a little googling and reading I can see how he got
there.
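To make the regularity concrete, here is a small illustrative C sketch
(the decode2op helper and its output format are made up for illustration,
not taken from any DEC document) that pulls apart the double-operand
format, where the octal digits line up almost field by field:

    #include <stdio.h>

    /* Decode the PDP-11 double-operand format (MOV, CMP, ADD, ...):
     *   bits 15-12  opcode
     *   bits 11-9   source addressing mode
     *   bits  8-6   source register
     *   bits  5-3   destination addressing mode
     *   bits  2-0   destination register
     * Written out in octal, the fields are visible almost digit by
     * digit, which is what made reading dumps by eye feasible. */
    void decode2op(unsigned short w)
    {
        printf("%06o: op=%o src=%o(R%o) dst=%o(R%o)\n",
               w,
               (w >> 12) & 017,   /* opcode               */
               (w >> 9) & 07,     /* source mode          */
               (w >> 6) & 07,     /* source register      */
               (w >> 3) & 07,     /* destination mode     */
               w & 07);           /* destination register */
    }

    int main(void)
    {
        decode2op(010203);   /* MOV R2,R3 */
        decode2op(062701);   /* ADD (PC)+,R1  i.e. ADD #imm,R1 */
        return 0;
    }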
...
--lm
For DEC memos on designing the PDP-11, see bitsavers:
http://www.bitsavers.org/pdf/dec/pdp11/memos/
(thank you Bitsavers! I love that archive)
Ad van de Goor (author of a few of the memos) was my MSc thesis professor. I recall him saying in the early 80's that in his view the PDP-11 should have been an 18-bit machine; he reasoned that even in the late 60's it was obvious that 16 bits of address space was not enough for the lifespan of the design.
---
For those who want to experiment with FPGAs and ancient ISAs, here is my plain Verilog code for the TI 9995 chip, which has an instruction set that is highly reminiscent of the PDP-11:
https://gitlab.com/pnru/cortex/-/tree/master
The actual CPU code (TMS99095.v) is less than 1000 lines of code.
Paul
Eugene Miya stopped by last week and accidentally left his copy of the
book here so I decided to read it before he came back to pick it up.
My overall impression is that while it contained a lot of information,
it wasn't presented in a manner that I found interesting. I don't know
the intended target audience, but it's not me.
A good part of it is that my interest is in the evolution of technology.
I think that a more accurate title for the book would be "A New History
of the Business of Modern Computing". The book was thorough in covering
the number of each type of machine sold and how much money was made, but
that's only of passing interest to me. Were it me I would have just
summarized all that in a table and used the space to tell some engaging
anecdotes.
There were a number of things that I felt the book glossed over or missed
completely.
One is that I didn't think that they gave sufficient credit to the symbiosis
between C and the PDP-11 instruction set and the degree to which the PDP-11
was enormously influential.
Another is that I felt that the book didn't give computer graphics adequate
treatment. I realize that it was primarily in the workstation market segment
which was not as large as some of the other segments, but in my opinion the
development of the technology was hugely important as it eventually became
commodified and highly profitable.
Probably due to my personal involvement I felt that the book missed some
important steps along the path toward open source. In particular, it used
the IPO of Red Hat as the seminal moment while not even mentioning the role
of Cygnus. My opinion is that Cygnus was a huge icebreaker in the adoption
of open source by the business world, and that the Red Hat IPO was just the
culmination.
I also didn't feel that there was any message or takeaways for readers. I
didn't get any "based on all this I should go and do that" sort of feeling.
If the purpose of the book was to present a dry history then it pretty much
did its job. Obviously the authors had to pick and choose what to write
about and I would have made some different choices. But, not my book.
Jon
> The ++ operator appears to have been.
One would expect that most people on this list would have read "The
Development of the C Language", by Dennis Ritchie, which makes perfectly clear
(at 'More History') that the PDP-11 had nothing to do with it:
Thompson went a step further by inventing the ++ and -- operators, which
increment or decrement; their prefix or postfix position determines whether
the alteration occurs before or after noting the value of the operand. They
were not in the earliest versions of B, but appeared along the way. People
often guess that they were created to use the auto-increment and
auto-decrement address modes provided by the DEC PDP-11 on which C and Unix
first became popular. This is historically impossible, since there was no
PDP-11 when B was developed.
https://www.bell-labs.com/usr/dmr/www/chist.html
thereby alleviating the need for Ken to chime in (although they do allow a
very efficient implementation of it).
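As a sketch of that parenthetical: the copy routine below is ordinary C
(the function name is just illustrative), and the assembly in the comment
is typical of what PDP-11 C compilers emitted, not a quote from any
particular one.

    /* Classic string copy; each *p++ / *q++ can become a single
     * PDP-11 autoincrement operand. */
    void copy(char *dst, const char *src)
    {
        while ((*dst++ = *src++) != '\0')
            ;
    }

    /* A PDP-11 compiler could emit the loop as something like:
     *
     *   loop:  movb  (r1)+,(r0)+   ; copy a byte, bump both pointers
     *          bne   loop          ; condition codes set by movb
     *
     * One instruction does the load, the store, and both increments. */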
Too much to hope for, I guess.
Noel
> From: "Charles H. Sauer"k <sauer(a)technologists.com>
> I haven't done anything with 9 ktrack tapes for a long time ...
> I don't recall problems reading any of them. ...
> IMNSHO, it all depends on the brand/formulation of the tape. I've been
> going through old audio tapes and digitizing them
The vintage computer community has considerable experience with old tapes; in
fact Chuck Guzis has a business reading them (which often includes converting
old file formats to something modern software can grok).
We originally depended heavily on the work of the vintage audio community, who
pioneered working with old tapes, including the discovery of 'baking' them to
improve their mechanical playability. ("the binder used to adhere the magnetic
material to the backing ... becomes unstable" - playing such a tape will
transfer much of the magnetic material to the head, destroying the tape's
contents.)
It's amazing how bad a tape can be, and still be readable. I had a couple of
dump tapes of the CSR PWB1 machine at MIT, which I had thoughtlessly stored in
my (at one period damp) basement, and they were covered in mold - and not just
on the edges! Chuck had to build a special fixture to clean off the mold, but
we read most of the first tape. (I had thoughtfully made a second copy, which
read perfectly.)
Then I had to work out what the format was - it turned out that even though
the machine had a V6 filesystem, my tape was a 'dd' of a BSD4.1c filesystem
(for reasons I eventually worked out, but won't bore you all with). Dave
Bridgham managed to mount that under Linux, and transform it into a TAR
file. That was the source of many old treasures, including the V6 NCP UNIX.
Noel
> What, if any, features does PL/I have that are not realized in a modern language?
Here are a few dredged from the mental cranny where they have
mouldered for 50+ years.
1. Assignment by name. If A and B are structs (not official PL/I
terminology), then A = B, BY NAME copies similarly named fields of B to
corresponding fields in A.
2. Both binary and decimal data with arithmetic rounded to any
specified precision.
3. Bit strings of arbitrary length, with bitwise Boolean operations
plus substr and catenation.
4. A named array is normally passed by reference, as in F(A). But if
the argument is not a bare name, as in F((A)), it is passed by value.
5. IO by name. On input this behaves like assignment from a constant,
with appropriate type conversion.
6. A SORT statement.
7. Astonishingly complete set of implicit data conversions. E.g. if X
is floating-point and S is a string, the assignment X = S works when S
= "2" and raises an exception (not PL/I terminology) when S = "A".
My 1967 contribution to ACM collected algorithms exploited 3 and 4. I
don't know another language in which that algorithm is as succinct.
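For contrast with item 7, here is a minimal C sketch of what has to be
written explicitly to get the checked string-to-number conversion that
PL/I supplies implicitly; the string_to_float helper is just for
illustration, and the exit() is only a stand-in for raising the
exception/condition.

    #include <stdio.h>
    #include <stdlib.h>

    /* In PL/I, X = S converts the string implicitly and signals a
     * condition if S is not numeric.  In C the same effect has to be
     * spelled out by hand: */
    double string_to_float(const char *s)
    {
        char *end;
        double x = strtod(s, &end);
        if (end == s || *end != '\0') {
            fprintf(stderr, "conversion error: \"%s\"\n", s);
            exit(1);        /* stand-in for raising the condition */
        }
        return x;
    }

    int main(void)
    {
        printf("%g\n", string_to_float("2"));   /* works: prints 2    */
        printf("%g\n", string_to_float("A"));   /* "raises" and exits */
        return 0;
    }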
Doug
DEC's VAX/VMS group got a customer bug report that was accompanied by
a 9-track tape containing the programs and data necessary to reproduce
the problem. When the engineer mounted the tape, it contained
completely different data. He tried a different tape drive and this
time he got the expected data. It turned out that the customer had
reused the tape and recorded the reproducer at 1600 bpi on top of
previous data recorded at 800 bpi. If the tape was mounted such that
the drive didn't see the PE burst, it could still read the
NRZI-encoded 800 bpi data.
-Paul W.