> From: Warren Toomey
> I just re-found a quote about Unix processes
> ..
> Years later we found out that function calls were still expensive
> on the PDP-11
> ..
> Does anybody have a measure of the expense of function calls under Unix
> on either platform?
Procedure calls were not cheap on the PDP-11 with the V6/V7 C compiler (which
admittedly was not the most efficient with small routines, since it always
saved all three non-temporary registers, whether the called routine used them
or not).
This overhead stood out all the more because the code the compiler otherwise
produced, with the optimizer turned on and with the programmer careful about
allocating 'register' variables, was pretty good.
On most PDP-11's, the speed was basically linearly related to the number of
memory references (both instruction fetch, and data), since most -11 CPU's
were memory-bound for most instructions. So for that compiler, a subroutine
call had a fair amount of overhead:
         inst  data
  call     4     1
           2     0    if any automatic variables
           1     1    minimum per single-word argument
  csv      9     5
  cret     9     5
(In V7, someone managed to bum one cycle out of csv, taking it down to 8+5.)
So assume a pair of arguments which were not register variables (i.e.
automatics, or in structures pointed to by register variables), and some
automatics in the called routine, and that's 4+2 for the arguments, plus 6+1,
a subtotal of 10+3; add in csv and cret, that's 28+13 = 41 memory cycles.
On a typical machine like an 11/40 or 11/23, with roughly one million memory
cycles per second of throughput, that meant about 40 usec (on a 1 MIP machine)
to do a procedure call, purely in overhead (counting putting the args on the
stack as overhead).
We found that, even with the limited memory on the -11, it made sense to run
the time/space tradeoff the other way for short things like queue
insertion/removal, and do them as macros.
A routine had to be pretty lengthy before it was worth paying the overhead, in
order to amortize the calling cost across a fair amount of work (although of
course, getting access to another three register variables could make the
compiled output for the routine somewhat shorter).
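For illustration, here is a modern-C sketch of that macro-vs-function
tradeoff for queue insertion (my own example, not the actual Unix source),
using a circular doubly-linked queue:

```c
/* A circular doubly-linked queue node, of the kind used for buffer
 * and process queues. */
struct qnode {
    struct qnode *next;
    struct qnode *prev;
};

/* Function version: every use pays the call/csv/cret overhead. */
void q_insert(struct qnode *n, struct qnode *after)
{
    n->next = after->next;
    n->prev = after;
    after->next->prev = n;
    after->next = n;
}

/* Macro version: the same four stores expanded in line at each call
 * site -- bigger code, but no calling overhead at all. */
#define Q_INSERT(n, after) do {        \
    (n)->next = (after)->next;         \
    (n)->prev = (after);               \
    (after)->next->prev = (n);         \
    (after)->next = (n);               \
} while (0)
```

Four stores is well under the 41-cycle calling overhead, which is why the
macro wins for short operations like this despite the code-space cost.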
Noel
Folks remember, VAX was not designed with UNIX in mind. It had two primary
influences, assembly programmers (Cutler et al) and FORTRAN compiler
writers. The truth is, the Vax was incredibly successful at both UNIX and
its intended OS (VMS) sites, even if a number of the instructions it has
were ignored by the C compiler writers. The fact that C did not map to it
as well as it would to other architectures later is not surprising given
the design constraints - C and UNIX were not part of the design. But it was
good enough (actually pretty darned good for the time) and was very, very
successful - I certainly stopped running a PDP11 when Vaxen were generally
available, and I would not stop running Vaxen until the 68000-based
workstations came along.
From my own experience, when Dave (Patterson) was writing the RISC papers
in the early 1980s, a number of us ex-industry grad-student types @ UCB
were taking his architecture course, having just come off some very
successful systems such as the Vax, DG Eagle, Pr1me 750, etc. [I'll leave the
names of said parties off to protect the innocent]. But what I will say is
that the four of us used to sit in the back of his classes and chuckle. We
used to remind Dave that a lot of the choices made on those machines were
not for "CS"-style reasons. IMO: Dave really did not "get it" -- all of
those system designers did make architectural choices, but the drivers were
the code base from the customer sites, not how well spell or grep
worked. And those commercial systems generally did map well to what
the designers considered and >>why<< those engineers considered what they
did [years later HBS professor Clay Christensen's book explained why].
I've said this in other forums, but I contend that when we used pure CS to
design the world's greatest pure computer architecture (Alpha) we ultimately
failed in the market. The computer architecture was extremely successful
and many of us miss it. Hey, I now work for a company with one of the worst
instruction sets/ISAs from a Computer Science standpoint - INTEL*64 (C),
and like the Vax, it's easy to throw darts at the architecture from a
purity standpoint. Alpha was great, C and other languages map to it well,
and the designers followed all of the CS knowledge of the time. But as a
>>system<< it could not compete with the disruption caused by the 386 and
later its child, INTEL*64. And like Vaxen, INTEL*64 is ugly, but it
continues to win because of the economics.
At Intel we look at very specific codes and how they map; the choices of
what new things to add, and how the system morphs, are directly driven by
what we see from customers and, in the case of scientific codes, by how well
the FORTRAN compiler can exploit it -- because it is the same places (the
national labs and very large end users like weather, automotive, oil/gas or
life sciences) that have the same Fortran code that still needs to run ;-)
This is just what DEC did years ago with the VAX (and Alpha).
As an interesting footnote, the DNA from the old DEC Fortran compiler lives
on in "ifort" (and icc). Some of the same folks are still working on the
code generator, although they are leaving us fairly rapidly as they
approach and pass their 70s. But that's a different story ;-)
So the question is not whether a particular calling sequence or set of
instructions is good; you need to look at the entire economics of the
system - which to me raises the question of whether the smartphone/tablet
and ARM will be the disruptor to INTEL*64. Time will tell.
Clem
On Sun, Jan 3, 2016 at 7:42 PM, <scj(a)yaccman.com> wrote:
> Well, I certainly said this on several occasions, and the fact that it is
> recorded more or less exactly as I remember saying it suggests that I may
> have even written it somewhere, but if so, I don't recall where...
>
> As part of the PCC work, I wrote a technical report on how to design a C
> calling sequence, but that was before the VAX. Early calling sequences
> had both a stack pointer and a frame pointer, but for most machines it
> was possible to get by with just one, so calling sequences got better as
> time went on. Also, RISC machines with many more registers than the
> PDP-11 also led to more efficient calls by putting some arguments in
> registers. Later standardizations like varargs were painful on some
> architectures (especially those which had different registers for pointers
> and integers).
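As an illustration of the varargs point, here is a minimal example using the
later-standardized <stdarg.h> interface (not PCC's original varargs.h):

```c
#include <stdarg.h>

/* Sum `count` int arguments.  The C source is portable; the pain on
 * machines with separate pointer and integer registers was hidden
 * inside the implementation of va_arg, which has to know where each
 * kind of argument was actually passed. */
int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    while (count-- > 0)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
```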
>
> The CALLS instruction was indeed a pig -- a space-time tradeoff in the
> wrong direction! For languages like FORTRAN it might have been justified,
> but for C it was awful. It is my memory too that CALLS was abandoned,
> perhaps first at UCB. But I actually had little hands-on experience with
> the VAX C compiler...
>
> Steve
>
>
>
>
> > I just re-found a quote about Unix processes that I'd "lost". It's by
> > Steve Johnson:
> >
> > Dennis Ritchie encouraged modularity by telling all and sundry that
> > function calls were really, really cheap in C. Everybody started
> > writing small functions and modularizing. Years later we found out
> > that function calls were still expensive on the PDP-11, and VAX code
> > was often spending 50% of its time in the CALLS instruction. Dennis
> > had lied to us! But it was too late; we were all hooked...
> > http://www.catb.org/esr/writings/taoup/html/modularitychapter.html
> >
> > Steve, can you recollect when you said this, was it just a quote for
> > Eric's book or did it come from elsewhere?
> >
> > Does anybody have a measure of the expense of function calls under Unix
> > on either platform?
> >
> > Cheers, Warren
> >
>
>
>
On 2016-01-04 00:53, Tim Bradshaw <tfb(a)tfeb.org> wrote:
>> >On 3 Jan 2016, at 23:35, Warren Toomey<wkt(a)tuhs.org> wrote:
>> >
>> >Does anybody have a measure of the expense of function calls under Unix
>> >on either platform?
>> >
> I don't have the reference to hand, but one of the things Lisp implementations (probably Franz Lisp in particular) did on the VAX was not to use CALLS: they could do this because they didn't need to interoperate with C except at known points (where they would use the C calling conventions). This change made a really significant difference to function call performance and meant that on call-heavy code Lisp was often very competitive with C.
>
> I can look up the reference (or, really, ask someone who remembers).
>
> The VAX architecture and its performance horrors must have killed DEC, I guess.
I don't know if that is a really honest description of the VAX in
general, nor of DEC. DEC thrived in the age of the VAX.
However, the CALLS/CALLG and RET instructions were really horrid for
performance. Any clever programmer started using JSB and RSB instead, as
they give you the plain, straightforward call and return semantics
without all the extra stuff that the CALL instructions do.
But, for assembler programmers, the architecture was nice. For
compilers, it's more difficult to do things optimally, and of course it
took quite a while before hardware designers had the tools, skill and
knowledge to implement complex instruction sets fast in hardware.
But nowadays that is definitely not a problem, and it was more or less
already solved by the time of the NVAX chip, which was actually
really fast compared to a lot of contemporary hardware when it came out.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Clem cole <clemc(a)ccc.com> writes on Thu, 31 Dec 2015 23:04:04 -0500
about SPICE:
>> ...
>> Anyway SPICE1 was actually started in the late 1960's by dop [Don
>> Pederson]. Ellis Cohen wrote SPICE2 for the CDC 6400 in the mid 70's,
>> added some new device models, and created a really novel bit of
>> self-modifying Fortran that compiled the inner loop.
>>
>> You are correct it was really the first widely available FOSS code -
>> an idea that you correctly note dop created.
>> ...
SPICE wasn't the only such package, or even the earliest! Still, I'll
be grateful to list readers for pointers off-list (or on) to early
publications about SPICE that I can add to the bibliography archives.
The EISPACK system, which predated LINPACK, and both of which led to
the current LAPACK, and descendants like CLAPACK and ScaLAPACK, has an
older vintage. It began with Algol routines published in the
German/English journal Numerische Mathematik
http://www.math.utah.edu/pub/tex/bib/nummath.bib
http://www.math.utah.edu/pub/tex/bib/nummath2000.bib
http://www.math.utah.edu/pub/tex/bib/nummath2010.bib
[change .bib to .html for a similar view with live hyperlinks]
The first such routine may have been that in entry Martin:1965:SDPa in
nummath.bib, which appeared in Num. Math. 7(5) 355--361 (October
1965), doi:10.1007/BF01436248. That journal did not then record
"received" dates, so the best that I can do for now is to claim
"October 1965" as the start of published code for free and open source
software in the area of numerical analysis.
Publication of related algorithms continued for 6 years, and then they
were collected in the famous HACLA (Handbook for Automatic
Computation: Linear Algebra) volume in 1971 (ISBN 0-387-05414-6).
Because Algol was little used in the USA, a project was begun in that
country to translate the Algol code to Fortran. That project was
called NATS, which originally stood for the groups at (read their
names vertically)
Northwestern University
Argonne National Laboratory
Texas, University of (at Austin)
Stanford
but as more groups joined in the effort, and EISPACK begat LINPACK,
NATS was changed to mean National Activity to Test Software.
The EISPACK book appeared in two editions in 1976 (ISBN 0-387-06710-8)
and 1977 (0-387-08254-9), volumes 6 and 51, respectively of Springer's
Lecture Notes in Computer Science (now around 9000 published volumes).
The LINPACK book appeared in 1979 (ISBN 0-89871-172-X).
The LAPACK book has three editions, in 1992 (ISBN 0-89871-294-7), 1995
(ISBN 0-89871-345-5), and 1999 (ISBN 0-89871-447-8). In between them,
the ScaLAPACK book appeared in 1997 (ISBN 0-89871-400-1).
There were several other packages described in the 1984 book
Sources and Development of Mathematical Software
ISBN 0-13-823501-5
(entry Cowell:1984:SDM), including FUNPACK, MINPACK, IMSL, SLATEC,
Boeing, AT&T PORT, and NAG. Some are free, and others are commercial.
The Algol code from Numerische Mathematik, like the ACM Collected
Algorithms, the Applied Statistics algorithms, and the Transactions on
Mathematical Software algorithms, was intended to be freely available
to anyone for any purpose, and no license of any kind was claimed for
it. That tradition continues with all of its descendants in the *PACK
family.
I have old archives of source code for EISPACK and LINPACK, but
comment documentation in EISPACK does not include revision dates, just
references to page numbers in the HACLA volume from 1971, and rarely,
to journal articles from 1968, 1970 and 1973. My filesystem dates,
alas, only reflect the copying from distribution tape to disk, and my
oldest file date for EISPACK is 20-Apr-1981.
The LINPACK comments appear to be almost entirely without dates: I found
only one:
snrm2.for:11:C C.L.LAWSON, 1978 JAN 08
The bibliography on the GNU Project at
http://www.math.utah.edu/pub/tex/bib/gnu.bib
records most of the books mentioned above, and it also contains as its
first entry, Galler:1960:LEC, a letter published in the April 1960
issue of Communications of the ACM from Bernie Galler, with this
field:
remark = "From the letter: ``\ldots{} it is clear that what is
being charged for is the development of the program,
and while I am particularly unhappy that it comes from
a university, I believe it is damaging to the whole
profession. There isn't a 704 installation that hasn't
directly benefited from the free exchange of programs
made possible by the distribution facilities of
SHARE. If we start to sell our programs, this will set
very undesirable precedents.''",
That is so far the earliest reference that I have found for the notion
that software should be free, long before Richard Stallman, Eric
Raymond, Linus Torvalds, and others became such well-known proponents
of that idea, and we had large and profitable companies like Red Hat
and SUSE devoted to supporting, for a fee, such software.
I was a graduate student in quantum chemistry at the Quantum Theory
Project (QTP) at the University of Florida in Gainesville in the late
1960s and early 1970s, and we had a general practice of sharing of
code among various university research groups, most notably through
the Quantum Chemistry Program Exchange (QCPE) hosted at the University
of Indiana in Bloomington, IN.
A search through my bibliography archives found my earliest recording,
a 6-Apr-1971 publication (by me), with mention of QCPE. Library
searches found a catalog entry for QCPE Catalog volume 19 (1987), so
perhaps volume 1 appeared in 1968. But no --- I just found in its
home institution's library catalog
http://www.iucat.iu.edu/?utf8=%E2%9C%93&search_field=all_fields&q=QCPE&high…
an entry dated 1963, with details
Publishing history: 1 (Apr. 1963)- Ceased with 71 (Nov. 1980).
Other widely-distributed programs of that time included Enrico
Clementi's IBM Research group's IBMOL (about 1965), and others named
MOLECULE (pre-1975), POLYATOM (1963), and Gaussian (1970).
The POLYATOM year appears to be the earliest of those: see the paper
by Michael Barnett at
http://dx.doi.org/10.1103/RevModPhys.35.571
It appears in a July 1963 journal issue, again without a "received"
date. It begins:
A system of programs is being written by Dr. Malcolm
C. Harrison, Dr. Jules W. Moskowitz, Dr. Brian T. Sutcliffe,
D. E. Ellis, and R. S. Pitzer, to perform nonempirical
calculations for small molecules.
I have met, or been in the same group as (Don Ellis), most of those,
and it is worth noting their affiliations to emphasize the broad
character of that work:
Malcolm Courant Institute of Mathematical Sciences, New York University, NY
Jules New York University, NY
Brian York University, York, UK
Don University of Florida (later, Northwestern University)
Russ Harvard, Cambridge, MA (later, Ohio State University)
Michael MIT, Cambridge, MA and various UK sites in academia and industry
(see https://en.wikipedia.org/wiki/Michael_P._Barnett)
On the subject of the Gaussian program, developed at Carnegie-Mellon
University, see the two sites
https://en.wikipedia.org/wiki/Gaussian_%28software%29
http://www.bannedbygaussian.org/
The second decries the loss of openness of Gaussian, which remains a
widely-used commercial product.
There is also a book on the subject of mathematics whose use is
encumbered by patents and copyrights:
Ben Klemens
Ma+h you can't use: patents, copyright, and software
ISBN 0-8157-4942-2
(entry Klemens:2006:MYC in http://www.math.utah.edu/pub/tex/bib/master.bib)
----------------------------------------
P.S. A final sad personal note on computing history:
When our DEC-20/60 (Arpanet node UTAH-SCIENCE, later science.utah.edu
and still later, math.utah.edu) was retired on 31-Oct-1990 (its
predecessor, a DEC-20/40, began operating in March 1978) we were faced
with several cabinets full of 9-track tapes (about 25MB each), several
RP06 (200MB) removable disks (for a picture and description, see
http://www.columbia.edu/cu/computinghistory/rp06.html
) and the contents of three washing-machine sized RP07 (600MB) disks,
and were moving to a new machine room in an adjacent building.
We were able to copy over the RP0[67] disk contents, and I still have
them online on my desktop, but the tapes were financially infeasible
for us to copy to disk on the new VAX 8600 server, and we were leaving
9-track tape technology behind. There were probably 500 to 1000 of
those tapes, and all that we could do was fill a dumpster with them,
because we had no place to store the physical volumes at the new site,
and no money for their bits. I have deeply regretted that loss of 25
years of my, and our, early computing history ever since.
Computers were for far too long crippled by too little memory and too
little permanent storage, and only post-2000 has that situation been
alleviated with radical reductions in storage costs per byte of data.
My new desktop 8TB drive is 3.6 million times cheaper per byte than an
RP06 drive was. Had we been able to foresee that dramatic growth in
capacity, we could have archived those tapes in an off-campus
warehouse for later (attempted) data retrieval.
------------------------------------------------------------------------
P.P.S. Besides VAX VMS, our migration path from TOPS-20 was primarily
to Unix, first on the Wollongong distribution of BSD (3.x, I think)
running on VAX 750 machines in the early 1980s, then on Sun 3
MC68000-based workstations in 1988 that ultimately evolved to an
eclectic mixture of CPUs and vendors. My software test lab now has
about 70 flavors of Unix on assorted physical and virtual machines,
with ARM, MIPS, PowerPC, SPARC, x86, and x86-64 processors. Our last
DEC Alpha CPU died with its power supply 16 months ago, and a
colleague still has a runnable MC68020 box (an old NeXT desktop).
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Jacob Goense <dugo(a)xs4all.nl>
> Mills's 1983 RFC889[2] calls the original PING Packet InterNet Groper.
I have a strong suspicion that Packet-etc is a 'backronym' from Dave Mills.
Note that the use of the term "echo" for a packet returned dates back quite a
while before that, see e.g. IEN 104, "Minutes of the Fault Isolation
Meeting", from March 1979:
"ability to echo packets off any gateway"
When ICMP was split from GGP (see IEN-109, RFC-777), the functionality
migrated from GGP to ICMP, and was generalized so that all hosts provided the
capability, not just routers.
Noel
Personally, I lean away from listing the nine billion debunked
names of cron. It's like adding a disclaimer to cat(1) to
explain that cat just copies data to standard output, it doesn't
transform it or compute how long it would take to send the data
over UUCP.
But it probably shows that I have been trying to write a couple
of manual pages lately (for some personal stuff, plus some docs
for work that are not technically manual pages but deserve the
same sort of conciseness).
Maybe Wikipedia-page format should admit an optional BUGS section.
Norman Wilson
Toronto ON
PS: seriously, though I wouldn't bother including the debunking
text myself, save perhaps on the Talk page to encourage editors
to delete any future attempts to revive the un-names, I have no
problem with Grog doing it. More power to him if he has the
energy!
Hello all!
While I was reading the article "A Research UNIX Reader: Annotated Excerpts
from the Programmer's Manual" from Douglas McIlroy, I learnt of a set of
utilities for designing electronic circuits. Here is a brief quote from that
article:
"CDL (v7 pages 60-63)
Although most users do not encounter the UNIX Circuit Design System, it has long
stood as an important application in the lab. Originated by Sandy Fraser and
extended by Steve Bourne, Joe Condon, and Andrew Hume, UCDS handles circuits
expressed in a common design language, cdl. It includes programs to create
descriptions using interactive graphics, to lay out boards automatically, to
check circuits for consistency, to guide wire-wrap machines, to specify
combinational circuits and optimize them for programmed logic arrays (Chesson
and Thompson). Without UCDS, significant inventions like Datakit, the 5620 Blit
terminal, or the Belle chess machine would never have been built. UCDS appeared
in only one manual, v7."
I looked it up in the 7th Edition manual and found no references to
this system. I also searched a v7 system image downloaded from TUHS and got no
results. However, I found some references to this system in USENET archives. In
particular, two newsgroups, net.draw and later net.ucds, were dedicated to it.
Apparently two of the binaries of the system were called "draw" and "wrap". I
also found a manual of a similar system, which I suppose is the UCDS
descendant, in the 1st Edition of Plan 9. This is the link to the document:
http://doc.cat-v.org/plan_9/1st_edition/cda/
However, that edition of Plan 9 has not been publicly released, and I could
not find it in following editions. But since v7 Unix is available, I hope it
may be possible to get hold of an older release at least.
Does anyone have any information?
Thank you in advance!
--- Michele
I was going through the old AUUG newsletters at
http://www.tuhs.org/Archive/Documentation/AUUGN/
looking for wiki material. They are a mine of information!
I've sent an e-mail off to the UKUUG folk to see if they have any
on-line newsletters. Does anybody know what happened to EUUG, especially
if any of their newsletters have been digitised?
And Usenix ;login, are any of their old newsletters available?
If not, who can I lobby to get this done? There are only 3 1/2 years
left before the 50th anniversary!
Cheers, Warren
> From: Wolfgang Helbig
> The HALT instruction of the real PDP11 only stops the CPU
I have this bit set that on at least some models of the real machine, when
the CPU is halted, it does not do DMA grants? If so, on such machines, the
trick of depositing in the device registers directly would not work; the
device could not do the bus cycles to do the transfer to memory. Anyone know
for sure which models do service DMA requests while halted?
Noel
Something of a tangent:
In my early days with UNIX, one of the systems I helped look
after was an 11/45. Normally we booted it from an SMD disk
with a third-party RP-compatible controller, for which we
had a boot ROM. Occasionally, however, we wanted to boot it
from RK05, usually to run diagnostics, occasionally for some
emergency reason (like the root file system being utterly
scrambled, or the time we ran that system, with UNIX, on a
single RK05 pack, for several days so our secretaries could
keep doing their troff work while the people who had broken
our air-conditioning system got it fixed--all other systems
in our small machine room had to stay shut down).
There was no boot ROM for the RK05, but it didn't matter:
one just did the following from the front-panel switches:
1. Halt/Enable to Halt
2. System reset (also sends a reset to the UNIBUS)
3. Load address 777404
4. Deposit 5.
(watch lights blink for a second or so)
5. Load address 0
6. Halt/Enable to Enable
7. Continue
777404 is the RK11's command register. 5 is a read command.
Resetting the system reset the RK11, which cleared all the
registers; in particular the word count, bus address, and
disk address registers. So when the 5 was deposited (including
the bit 01, the GO bit), the RK11 would read from address 0 on
the disk to address 0 in physical memory, then increment the
word-count register, and keep doing so until the word count
was zero after the increment. Or, in higher-level terms, read
the first 65536 words of the disk into the first 65536 words
of memory.
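The word-count behavior can be sketched in C (a model of the register, not
driver code; it assumes the register holds a 16-bit two's-complement count,
as described above):

```c
#include <stdint.h>

/* Model of the RK11 transfer loop: the controller moves one word,
 * increments the 16-bit word-count register, and stops when the
 * register reaches zero.  Software normally loads the register with
 * the negative of the desired count; left at its reset value of
 * zero, the register must wrap all the way around, so 65536 words
 * are transferred. */
unsigned long rk11_words_moved(uint16_t word_count)
{
    unsigned long moved = 0;

    do {
        moved++;        /* one word from disk to memory */
        word_count++;   /* wraps modulo 65536 */
    } while (word_count != 0);
    return moved;
}
```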
Then execution began at address 0, running whatever code was at the
beginning of memory (read from the beginning of the disk).
Only the first 256 words (512 bytes) of the disk were really
needed, of course, but it was harmless, faster, and easier to
remember if one just left the word-count at its initial zero,
so that is what we did.
The boot ROM for the SMD disk had a certain charm as well.
It was a quad-high UNIBUS card with a 16x16 array of diodes,
representing 16 words of memory. I forget whether one inserted
or removed a diode to make a bit one rather than zero.
It's too bad people don't get to do this sort of low-level stuff
these days; it gives one rather a good feel for what a bootstrap
does when one issues the command(s) oneself, or physically
programs the boot ROM.
Norman Wilson
Toronto ON