Folks, remember: the VAX was not designed with UNIX in mind. It had two
primary influences: assembly programmers (Cutler et al.) and FORTRAN
compiler writers. The truth is, the VAX was incredibly successful at both
UNIX sites and sites running its intended OS (VMS), even if a number of its
instructions were ignored by the C compiler writers. The fact that C did
not map to it as well as it would to other architectures later is not
surprising given the design constraints: C and UNIX were not part of the
design. But it was good enough (actually pretty darned good for the time)
and was very, very successful. I certainly stopped running a PDP-11 when
Vaxen were generally available, and I did not stop running Vaxen until the
68000-based workstations came along.
From my own experience: when Dave Patterson was writing the RISC papers in
the early 1980s, a number of us ex-industry grad-student types at UCB were
taking his architecture course, having just come off some very successful
systems such as the VAX, DG Eagle, Pr1me 750, etc. [I'll leave the names of
said parties off to protect the innocent.] But what I will say is that the
four of us used to sit in the back of his class and chuckle. We used to
remind Dave that a lot of the choices made on those machines were not made
for "CS"-style reasons. IMO, Dave really did not "get it" -- all of those
system designers did make architectural choices, but the drivers were the
code base from the customer sites, not how well spell or grep worked. And
those commercial systems generally did map well to what the designers
considered, and >>why<< those engineers considered what they did [years
later, HBS professor Clay Christensen's book explained why].
I've said this in other forums, but I contend that when we used pure CS to
design the world's greatest pure computer architecture (Alpha), we
ultimately failed in the market. As a computer architecture it was
extremely successful, and many of us miss it. Hey, I now work for a company
with one of the worst instruction sets/ISAs from a Computer Science
standpoint, INTEL*64, and like the VAX, it's easy to throw darts at the
architecture from a purity standpoint. Alpha was great: C and other
languages mapped to it well, and the designers followed all of the CS
knowledge of the time. But as a >>system<< it could not compete with the
disruption caused by the 386 and, later, its child, INTEL*64. And like the
Vaxen, INTEL*64 is ugly, but it continues to win because of the economics.
At Intel we look at very specific codes and how they map; the choices of
what new things to add, and how the system morphs, are directly driven by
what we see from customers and, in the case of scientific codes, by how
well the FORTRAN compiler can exploit them -- because it is the same places
(the national labs and very large end users like weather, automotive,
oil/gas, or life sciences) that have the same Fortran code that still needs
to run ;-) This is just what DEC did years ago with the VAX (and Alpha).
As an interesting footnote, the DNA from the old DEC Fortran compiler lives
on in "ifort" (and icc). Some of the same folks are still working on the
code generator, although they are leaving us fairly rapidly as they
approach and pass their 70s. But that's a different story ;-)
So the question is not whether a particular calling sequence or set of
instructions is good; you need to look at the entire economics of the
system. Which, to me, raises the question of whether the smartphone/tablet
and ARM will be the disruptor to INTEL*64 -- time will tell.
Clem
On Sun, Jan 3, 2016 at 7:42 PM, <scj(a)yaccman.com> wrote:
Well, I certainly said this on several occasions, and
the fact that it is
recorded more or less exactly as I remember saying it suggests that I may
have even written it somewhere, but if so, I don't recall where...
As part of the PCC work, I wrote a technical report on how to design a C
calling sequence, but that was before the VAX. Early calling sequences
had both a stack pointer and a frame pointer, but for most machines it
was possible to get by with just one, so calling sequences got better as
time went on. RISC machines, with many more registers than the PDP-11,
also led to more efficient calls by putting some arguments in
registers. Later standardizations like varargs were painful on some
architectures (especially those which had different registers for pointers
and integers).
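[As an aside, for anyone who hasn't written one: here is a minimal sketch
of the varargs pattern in question, in its modern <stdarg.h> form. The
function name and example values are mine, purely illustrative. Roughly,
on a machine with one linear argument area va_arg can just walk memory; on
a machine with separate pointer and integer registers the compiler has to
spill the register arguments to a save area first, which is where much of
the pain came from.]

    #include <stdarg.h>
    #include <stdio.h>

    /* Illustrative variadic function: sums `count' ints. */
    static int sum_ints(int count, ...)
    {
        va_list ap;
        int total = 0;

        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum_ints(3, 1, 2, 3));   /* prints 6 */
        return 0;
    }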
The CALLS instruction was indeed a pig -- a space-time tradeoff in the
wrong direction! For languages like FORTRAN it might have been justified,
but for C it was awful. It is my memory too that CALLS was abandoned,
perhaps first at UCB. But I actually had little hands-on experience with
the VAX C compiler...
Steve
I just re-found a quote about Unix processes that
I'd "lost". It's by
Steve Johnson:
Dennis Ritchie encouraged modularity by telling all and sundry that
function calls were really, really cheap in C. Everybody started
writing small functions and modularizing. Years later we found out
that function calls were still expensive on the PDP-11, and VAX code
was often spending 50% of its time in the CALLS instruction. Dennis
had lied to us! But it was too late; we were all hooked...
http://www.catb.org/esr/writings/taoup/html/modularitychapter.html
Steve, can you recollect when you said this, was it just a quote for
Eric's book or did it come from elsewhere?
Does anybody have a measure of the expense of function calls under Unix
on either platform?
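[Not a period measurement, but for anyone curious, here is a rough sketch
of how one might measure the call-plus-return cost on a current system.
The function, the iteration count, and the GCC/Clang noinline attribute
are all my own arbitrary choices, and the number it prints includes the
loop overhead.]

    #include <stdio.h>
    #include <time.h>

    /* Keep the compiler from inlining, so a real call happens. */
    __attribute__((noinline)) static int add1(int x) { return x + 1; }

    int main(void)
    {
        enum { N = 100000000 };
        volatile int acc = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)
            acc = add1(acc);
        clock_t t1 = clock();
        printf("%.1f ns per call+return (loop overhead included)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC / N * 1e9);
        return 0;
    }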
Cheers, Warren