Is there a history of TUHS page I've missed?
When was it formed? Was it an outgrowth of PUPS? etc.
Again, I'm working on a talk and would like to include some of this
information and it made me think that the history of the historians should
be documented too.
Warner
TL;DR: I'm trying to find the best possible home for some dead trees.
I have about a foot-high stack of manila folders containing "early Unix papers". They have been boxed up for a few decades, but appear to be in perfect condition. I inherited this collection from Jim Joyce, who taught the first Unix course at UC Berkeley and went on to run a series of ventures in Unix-related bookselling, instruction, publishing, etc.
I don't think the collection has much financial value, but I suspect that some of the papers may have historical significance. Indeed, some of them may not be available in any other form, so they definitely should be scanned and republished.
I also have a variety of newer materials, including full sets of BSD manuals, SunExpert and Unix Review issues, along with a lot of books and course handouts and maybe a SUGtape or two. I'd like to donate these materials to an institution that will take care of them, make them available to interested parties, etc. Here are some suggested recipients:
- The Computer History Museum (Mountain View, CA, USA)
- The Internet Archive (San Francisco, CA, USA)
- The Living Computers Museum (Seattle, WA, USA)
- The UC Berkeley Library (Berkeley, CA, USA)
- The Unix Heritage Association (Australia?)
- The USENIX Association (Berkeley, CA, USA)
According to Warren Toomey, TUHS probably isn't the best possibility. The Good News about most of the others is that I can get materials to them in the back of my car. However, I may be overlooking some better possibility, so I am following Warren's suggestion and asking here. I'm open to any suggestions that have a convincing rationale.
Now, open for suggestions (ducks)...
-r
I just found out about TUHS today; I plan to skim the archives RSN to get some context. Meanwhile, this note is a somewhat long-winded introduction, followed by a (non-monetary) sales pitch. I think some of the introduction may be interesting and/or relevant to the pitch, but YMMV...
Introduction
In 1970, I was introduced to programming by a cabal of social science professors at SF State College. They had set up a lab space with a few IBM 2741 (I/O Selectric) terminals, connected by dedicated lines to Stanford's Wylbur system. I managed to wangle a spot as a student assistant and never looked back. I also played a tiny bit with a PDP-12 in a bio lab and ran one (1) program on SFSC's "production system", an IBM 1620 Mark II (yep; it's a computer...).
While a student, I actually got paid to work with a CDC 3150, a DEC PDP-15, and (once) on an IBM 360/30. After that, I had some Real Jobs: assembler on a Varian 620i and a PDP-11, COBOL on an IBM mainframe, Fortran on assorted CDC and assorted DEC machines, etc.
By the late 80's, my personal computers were a pair of aging LSI-11's, running RT-11. At work (Naval Research Lab, in DC), I was mostly using TOPS-10 and Vax/VMS. I wanted to upgrade my home system and knew that I wanted all the cool stuff: a bit-mapped screen, multiprocessing, virtual memory, etc.
There was no way I could afford to buy this sort of setup from DEC, but my friend Jim Joyce had been telling me about Unix for a few years, so I attended the Boston USENIX in 1982 (sharing a cheap hotel room with Dick Karpinski :-) and wandered around looking at the workstation offerings. I made a bet on Sun (buying stock would have been far more lucrative, but also more risky and less fun) and ended up buying Sun #285 from John Gage.
At one point, John was wandering around Sun, asking for a slogan that Sun could use on a conference button to indicate how they differed from the competition. I suggested "The Joy of Unix", which he immediately adopted. This decision wasn't totally appreciated by some USENIX attendees from Murray Hill, who printed up (using troff, one presumes) and wore individualized paper badges proclaiming themselves as "The <whatever> of Unix". Imitation is the sincerest form of flattery... (bows)
IIRC, I received my Sun-1 late in a week (of course :-), but managed to set it up with fairly little pain. I got some help on the weekend from someone named Bill, who happened to be in the office on the weekend ... seemed quite competent ... I ran for a position on the Sun User Group board, saying that I would try to protect the interests of the "smaller" users. I think I was able to do some good in that position, not least because I was able to get John Gilmore and the Sun lawyers to agree on a legal notice, edit some SUGtapes, etc.
Later on, I morphed this effort into Prime Time Freeware, which produced book/CD collections of what is now called Open Source software. Back when there were trade magazines, I also wrote a few hundred articles for Unix Review, SunExpert, etc. Of course, I continue to play (happily) with computers...
Perkify
If you waded through all of that introduction, you'll have figured out that I'm a big fan of making libre software more available, usable, etc. This actually leads into Perkify, one of my current projects. Perkify is (at heart) a blind-friendly virtual machine, based on Ubuntu, Vagrant, and VirtualBox. As you might expect, it has a strong emphasis on text-based programs, which Unix (and Linux) have in large quantities.
However, Perkify's charter has expanded quite a bit. At some point, I realized that (within limits) there was very little point to worrying about how big the Vagrant "box" became. After all, a couple of dozen GB of storage is no longer an issue, and having a big VM on the disk (or even running) doesn't slow anything down. So, the current distro weighs in at about 10 GB and 4,000 or so APT packages (mostly brought in as dependencies or recommendations). Think of it as "a well-equipped workshop, just down the hall". For details, see:
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Intro/main.toml
- http://pa.cfcl.com/item?key=Areas/Content/Overviews/Perkify_Index/main.toml
Sales Pitch
I note that assorted folks on this list are trying to dig up copies of Ken's Space Travel program. Amusingly, I was making the same search just the other day. However, finding software that can be made to run on Ubuntu is only part of the challenge I face; I also need to come up with APT (or whatever) packages that Just Work when I add them to the distribution.
So, here's the pitch. Help me (and others) to create packages for use in Perkify and other Debian-derived distros. The result will be software that has reliable repos, distribution, etc. It may also help the code to live on after you and I are no longer able (or simply interested enough) to keep it going.
-r
Greetings,
I've so far been unable to locate a copy of munix. This is John Hawley's
dual PDP-11/50 version of Unix, which he wrote for his PhD thesis in June 1975 at
the Naval Postgraduate School in Monterey, CA.
I don't suppose that any known copies of this exist? To date, my searches
have turned up goose-eggs.
Hawley's paper can be found here: https://calhoun.nps.edu/handle/10945/20959
Warner
P.S. I'm doing another early history talk at FOSDEM in a couple of weeks.
So if you're in the audience, no spoilers please :)
Hello,
https://www.bell-labs.com/usr/dmr/www/spacetravel.html says:
> Later we fixed Space Travel so it would run under (PDP-7) Unix instead
> of standalone, and did also a very faithful copy of the Spacewar game
I have a file with ".TITLE PDP-9/GRAPHIC II VERSION OF SPACEWAR". (I
hope it will go public soon.) It seems to be a standalone program, and
it's written in something close to MACRO-9 syntax. I'm guessing the
Bell Labs version would have been written using the Unix assembler.
Best regards,
Lars Brinkhoff
The Executable and Linkable Format (ELF) is the modern standard for
object files in Unix and Unix-like OSes (e.g., Linux), and even for
OpenVMS. Linux, AIX, and probably other implementations of ELF have a
feature in the runtime loader called symbol preemption. When loading
a shared library, the runtime loader examines the library's symbol
table. If there is a global symbol with default visibility, and a
value for that symbol has already been loaded, all references to the
symbol in the library being loaded are rebound to the existing
definition. The existing value thus preempts the definition in the
library.
I'm curious about the history of symbol preemption. It does not exist
in other implementations of shared libraries, such as IBM OS/370 and
its descendants, OpenVMS, and Microsoft Windows NT. ELF apparently
was designed in the mid-1990s. I have found a copy of the System V
Application Binary Interface from April 2001 that describes symbol
preemption in the section on the ELF symbol table.
When was symbol preemption during shared-object loading first
implemented in Unix? Are there versions of Unix that don't do symbol
preemption?
-Paul W.
Random832 <random832 at fastmail.com> writes:
>markus schnalke <meillo at marmaro.de> writes:
>> [2015-11-09 08:58] Doug McIlroy <doug at cs.dartmouth.edu>
>>> things like "cut" and "paste", whose exact provenance
>>> I can't recall.
>>
>> Thanks for reminding me that I wanted to share my portrait of
>> cut(1) with you. (I sent some questions to this list, a few
>> months ago, remember?) Now, here it is:
>>
>> http://marmaro.de/docs/freiesmagazin/cut/cut.en.pdf
>
>Did you happen to find out what GWRL stands for, in the the comments at
>the top of early versions of cut.c and paste.c?
>
>/* cut : cut and paste columns of a table (projection of a relation) (GWRL) */
>/* Release 1.5; handles single backspaces as produced by nroff */
>/* paste: concatenate corresponding lines of each file in parallel. Release 1.4 (GWRL) */
>/* (-s option: serial concatenation like old (127's) paste command */
>
>For that matter, what's the "old (127's) paste command" it refers to?
I know this thread is almost 5 years old; I came across it while searching for
something else. But as no one could answer these questions back then, I can.
GWRL stands for Gottfried W. R. Luderer, the author of cut(1) and paste(1),
probably around 1978. Those came either from PWB or USG, as he worked with,
or for, Berkley Tague. Thus they made their way into AT&T commercial UNIX,
first into System III and then into System V, and that's why they are missing
from early BSD releases: they didn't get into Research UNIX until the
8th Edition. Also, "127" was the internal department number for the Computer
Science Research group at Bell Labs, where UNIX originated.
Dr. Luderer co-authored this paper in the original 1978 BSTJ issue on UNIX --
https://www.tuhs.org/Archive/Documentation/Papers/BSTJ/bstj57-6-2201.pdf
I knew Dr. Luderer, and he was even kind enough to arrange for me to stay with his
relatives for a few days in Braunschweig, West Germany (the correct country name at
the time) on my first trip to Europe many decades ago. But I hadn't had contact with
him, nor even thought of him, until I saw his initials. I also briefly worked for Berk
when he was the department head for 45263 at Whippany Bell Labs, before moving to
Murray Hill.
And doing a quick search for him, it looks like he wrote an autobiography, which I
am now going to have to purchase:
http://www.lulu.com/shop/gottfried-luderer/go-west-young-german/paperback/p…
-Brian
Hi All:
I'm looking for the source code to the Maitre d' load balancer. It is
used to run jobs on lightly used machines. It was developed by Brian
Bershad at Berkeley's Computer Systems Support Group. I have the
technical report for it (dated 17-Dec-1985), but haven't run across the
tarball.
thanks
-ron
All, I've had a few subscribers argue that the type checking
thread was still Unix-related, so feel free to keep posting
here in TUHS. But if it does drift away to non-Unix areas,
please pass it over to COFF.
Thanks & apologies for being too trigger-happy!
Cheers, Warren
>> After scrolling through the command list, I wondered how
>> long it was and asked to have it counted. Easy, I thought,
>> just pass it to a wc-like program. But "just pass it" and
>> "wc-like" were not givens as they are in Unix culture.
>> It took several minutes for the gurus to do it--without
>> leaving emacs, if I remember right.
> This is kind of illustrative of the '60s acid trip that
> perpetuates in programming "Everything's a string maaaaan".
> The output is seen as truth because the representation is
> for some reason too hard to get at or too hard to cascade
> through the system.
How did strings get into the discussion? Warner showed how
emacs could be expected to do the job--and more efficiently
than the Unix way, at that: (list-length (command-list-fn)).
The surprise was that this wasn't readily available.
Back then, in fact, you couldn't ask sh for its command
list. help|wc couldn't be done because help wasn't there.
Emacs had a different problem. It had a universal internal
interface--lists rather than strings--yet did not have
a way to cause this particular list to "cascade through
the system". (print(command-list-fn)) was provided, while
(command-list-fn) was hidden.
Doug
Mention of elevators at Tech Square reminds me of visiting there
to see the Lisp machine. I was struck by cultural differences.
At the time we were using Jerqs, where multiple windows ran
like multiple time-sharing sessions. To me that behavior was a
no-brainer. Surprisingly, Lisp-machine windows didn't work that
way; only the user-selected active window got processor time.
The biggest difference was emacs, which no one used at Bell
Labs. Emacs, of course was native to the Lisp machine and
provided a powerful and smoothly extensible environment. For
example, its reflective ability made it easy to display a
list of its commands. "Call elevator" stood out among mundane
programming actions like cut, paste and run.
After scrolling through the command list, I wondered how long
it was and asked to have it counted. Easy, I thought, just
pass it to a wc-like program. But "just pass it" and "wc-like"
were not givens as they are in Unix culture. It took several
minutes for the gurus to do it--without leaving emacs, if I
remember right.
Doug
This question comes from a colleague, who works on compilers.
Given the definition `int x;` (without an initializer) in a source file the
corresponding object contains `x` in a "common" section. What this means is
that, at link time, if some object file explicitly allocates an 'x' (e.g.,
by specifying an initializer, so that 'x' appears in the data section for
that object file), use that; otherwise, allocate space for it at link time,
possibly in the BSS. If several source files contain such a declaration,
the linker allocates exactly one 'x' (or whatever identifier) as
appropriate. We've verified that this behavior was present as early as 6th
edition.
The question is, what is the origin of this concept and nomenclature?
FORTRAN, of course, has "common blocks": was that an inspiration for the
name? Where did the idea for the implicit behavior come from (FORTRAN
common blocks are explicit)?
My colleague was particularly surprised that this seemed required: even at
this early stage, the `extern` keyword was present, so why bother with this
behavior? Why not, instead, make it a link-time error? Please note that if
two source files have initializers for these variables, then one gets a
multiple-definition link error. The 1989 ANSI standard made this an error
(or at least undefined behavior) but the functionality persists; GCC is
changing its default to prohibit it (my colleague works on clang).
Doug? Ken? Steve?
- Dan C.
Jon Steinhart:
One amusing thing that Steve told me, which I think I can share, is why the
symmetry of case-esac and if-fi was broken with do-done: it was because
the od command existed, so do-od wouldn't work!
=====
As I heard the story in the UNIX room decades ago (and at least five
years after the event), Steve tried and tried to convince Ken to
rename od so that he could have the symmetry he wanted. Ken was
unmoved.
Norman Wilson
Toronto ON
> From: Clem Cole
> when she found out the elevators were hacked and controlled by the
> student's different computers, she stopped using them and would take
> the stairs
It wasn't quite as major as this makes it sound! A couple of inconspicuous
wires were run from the 'TV 11' on the MIT-AI KA10 machine (the -11 that ran
the Knight displays) into the elevator controller, and run onto the terminals
where the wires from the 'down' call buttons on the 8th and 9th floors went.
So it wasn't anything major, and there was really no need for her to take the
stairs (especially 8 flights up :-).
The code is still extant, in 'SYSTEM; TV >'. It only worked (I think) from
Knight TV keyboards; typing 'ESC E' called the elevator to the floor
that keyboard was on (there's a table, 'ELETAB', which gives the physical
floor for each keyboard).
The machine could also open the locked 9th floor door to the machine room
(with an 'ESC D'), and there were some other less major things, e.g. print screen
hardcopy. I'm not sure what the hardware in the TV-11 was (this was all run
out of the 'keyboard multiplexor'); it may have been something the AI Lab
built from scratch.
Noel
> When Bernie Greenberg did EMACS for Multics, he had a similar issue. I
> recall reading a document with an extensive discussion of how they dealt
> with this ... If anyone's really interested in this, and can't find it
> themselves, I can try looking for it.
I got a request for this; a Web search turned up:
https://www.multicians.org/mepap.html
which covers the points I mentioned (and more besides, such as why LISP was
chosen). I don't think this is the thing I remembered (which was, IIRC, an
informal note), but it does seem to be a later version of that.
Noel
> From: Otto Moerbeek <otto(a)drijf.net>
> I believe it was not only vi itself that was causing the load, it was
> also running many terminals in raw mode that killed performance.
I'm not familiar with the tty driver in late versions of Unix like 4.1 (sic),
but I'm very familiar with the one in V6, and it's not the raw mode _itself_
that caused the load (the code paths in the kernel for cooked and raw aren't
that different), but rather the need to wake up and run a process on every
character that was the real load.
When Bernie Greenberg did EMACS for Multics, he had a similar issue. I recall
reading a document with an extensive discussion of how they dealt with this,
especially when using the system over the ARPANET. IIRC, normal printing
characters were echoed without waking up the process, even when working
remotely over the network. If anyone's really interested in this, and can't find it themselves,
I can try looking for it.
Noel
> From: Clem Cole <clemc(a)ccc.com>
> So, unless anyone else can illuminate, I'm not sure where the first cpp
> that some of us using v6 had originated.
I recall a prior extensive discussion about 'cpp'. I looked, and found it
(March 30, 2017) but it was a private discussion, not on TUHS (although you
were part of it :-). Here are clips of what I wrote (I don't want to re-post
what others wrote), which tell most of the story:
There were a series of changes to C before V7 came out, resulting in the
so-called 'phototypesetter C compiler' (previously discussed on TUHS), and they
included the preprocessor. There's that series of short notes describing
changes to C (and the compiler), and they include mention of the preprocessor.
[Available here: http://gunkies.org/wiki/Typesetter_C for those who want to see
them.]
The MIT 'V6' Unix (which was, AFAICT, an augmented version of an early version
of PWB Unix) had that C compiler; and if you look at the PWB1 tree online, it
does have the C with 'cpp':
http://minnie.tuhs.org/cgi-bin/utree.pl?file=PWB1/sys/c/c
I did a diff of that 'cpp' with the MIT one, and they are basically identical.
----
I went looking for the C manual in the V6 distro, to see if it mentioned the
pre-processor. And it does:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/doc/c/c5
(Section 12, "Compiler control lines", about half way down.) So, I'm like,
'WTF? I just looked at cc.c and no mention of cpp!'
So I looked a little harder, and if you look at the cc.c in the distro (URL
above), you see this:
insym(&defloc, "define");
insym(&incloc, "include");
insym(&eifloc, "endif");
insym(&ifdloc, "ifdef");
insym(&ifnloc, "ifndef");
insym(&unxloc, "unix");
The pre-processor is integrated into 'cc' in the initial V6. So we do have a very
early version of it, after all...
----
So, 'cc' in V5 also included pre-processor support (just #define and #include,
though):
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s1/cc.c
Although we don't have the source to 'cc' to show it, V4 also appears to have
had it, per the man page:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/man/man1/cc.1
"If the -p flag is used, only the macro prepass is run on all files whose name
ends in .c"; and the V4 system source:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V4/nsys
also has .h files.
No sign of it in the man page for cc.1 in V3, though.
This all makes sense. .h files aren't any use without #include, and without
#include, you have to have the structure definition, etc. in multiple source
files. So #include would have gotten added very early on.
In V3, the system was apparently still in assembler, so no need.
-----
Also, there's an error in:
https://ewe2.ninja/computers/cno/
when it says "V6 was a very different beast for programming to V7. No c
preprocessor. The practical upshot of this is no #includes." that's
clearly incorrect (see above). Also, if you look at Lions (which is pure
early V6), in the source section, all the .c files have #include's.
Noel
Do we really need another boring old editor war? The topic
is not specific to UNIX in the least; nor, alas, is it historic.
Norman Wilson
Toronto ON
(typing this in qed)
Date: Wed, 8 Jan 2020 17:40:10 -0800
> From: Bakul Shah <bakul(a)bitblocks.com>
> To: Larry McVoy <lm(a)mcvoy.com>
> Cc: Warner Losh <imp(a)bsdimp.com>, The Eunuchs Hysterical Society
> <tuhs(a)tuhs.org>
> Subject: Re: [TUHS] screen editors
>
> On Jan 8, 2020, at 5:28 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> >
> > On Wed, Jan 08, 2020 at 05:08:59PM -0700, Warner Losh wrote:
> >> On Wed, Jan 8, 2020, 4:22 PM Dave Horsfall <dave(a)horsfall.org> wrote:
> >>
> >>> On Wed, 8 Jan 2020, Chet Ramey wrote:
> >>>
> >>>>> That's a real big vi in RHL.
> >>>>
> >>>> It's vim.
> >>>
> >>> It's also VIM on the Mac.
> >>>
> >>
> >> Nvi is also interesting and 1/10th the size of vim. It's also the
> >> FreeBSD default for vi.
> >
> > I was gonna stay out of this thread (it has the feel of old folks
> > somehow) but 2 comments:
> >
> > Keith did nvi (I can't remember why? licensing or something) and he did
> > a pretty faithful bug for bug compatible job. I've always wondered why.
> > I like Keith but it seemed like a waste. There were other people taking
> > vi forward, elvis, xvi (I hacked the crap out of that one, made it mmap
> > the file and had a whole string library that treated \n like NULL) and
> > I think vim was coming along. So doing a compat vi felt like a step
> > backward for me.
> >
> > For all the vim haters, come on. Vim is awesome, it gave me the one
> > thing that I wanted from emacs, multiple windows. I use that all the
> > time. It's got piles of stuff that I don't use, probably should, but
> > it is every bit as good of a vi as the original and then it added more.
> > I'm super grateful that vim came along.
>
> The first thing I do on a new machine is to install nvi. Very grateful to
> Keith Bostic for implementing it. I do use multiple windows — only
> horizontal splits but that is good enough for me as all my terminal
> windows are 80 chars wide. Not a vim hater but never saw the need.
>
Not sure if you’re saying horizontal splits are all you need, or all you’re
aware of, but nvi “:E somefile” will split to a top/bottom arrangement and
“:vsplit somefile” will do a left/right arrangement, as well as being able
to “:fg”, “:bg” screens. I too am a (NetBSD) nvi appreciator.
-bch
Working on a new project that's unfortunately going to require some changes
to the Linux kernel. I've lived a lot of my life in the embedded world and haven't
touched a *NIX kernel since 4.3BSD. I'm writing a travelogue as I find my way
around the code. I wasn't planning another book, but this might end up being
one. Anyway, a few questions...
Was looking at the filesystem super_block structure. A large number of the
members of the structure (but not all) begin with an s_ prefix, and some of
the member names are around 20 characters long. I recall that using
prefixes was necessary before structures and unions had their own independent
member namespaces. But I also seem to recall that that was fixed before long
identifier names happened. Does anybody remember the ordering of these two
events?
Also, anybody know where the term superblock originated? With what filesystem?
Jon
below... -- warning: veering a little from pure UNIX history, but trying to
clarify what I can and then moving to COFF for follow-up.
On Wed, Jan 8, 2020 at 12:23 AM Brian Walden <tuhs(a)cuzuco.com> wrote:
> ....
>
> - CMU's ALGOL68S from 1978 list all these ways --
> co comment
> comment comment
> pr pragmat
> pragmat pragmat
> # (comment symbol) comment
> :: (pragmat symbol) pragmat
> (its for UNIX v6 or v7 so not surprising # is a comment)
> http://www.softwarepreservation.org/projects/ALGOL/manual/a68s.txt/view
Be careful of overthinking here. The comment in that note says it was
for *PDP-11s* and lists V6 and V7 as *possible targets*, but it did not
say it was. Also, the Speech and Vision PDP-11/40e-based systems ran a
very hacked V6 (with a special C compiler that supported CMU's csv/cret
instructions in the microcode), which would have been the target systems.
[1]
To my knowledge/memory, the CMU Algol68 compiler never ran anywhere but
Hydra (and it also used custom microcode). IIRC there was some talk of moving
it to *OS (Star OS for CM*). I've sent a note to dvk to see if he remembers
it otherwise. I also asked Liebensperger what he remembers; he was hacking on
*OS in those days. Again, IIRC, Prof. Peter Hibbard was the mastermind
behind the CMU Algol68 system. He was a Brit from Cambridge (and taught
the parallel computing course which I took from him at the time).
FWIW: I also don't think the CMU Algol68 compiler was ever completely
self-hosting, and like BLISS, required the PDP-10 to support it. As to why
it was not moved to the Vax, I was leaving/had left by that time, but I
suspect the students involved had graduated, and by then the Perqs had become
the hot machine for language types and Ada was starting to be what the
government would give research $s to.
>
>
> ...
>
> But look! The very first line of that file! It is a single # sitting all
> by itself. Why? you ask. Well, this is a holdover from when the C
> preprocessor was new. C originally did not have one; it was added later.
> PL/I had a %INCLUDE, so Ritchie eventually made a #include -- but pre-7th
> Edition, the C preprocessor would not be invoked unless the very first
> character of the C source file was a #
>
That was true of V7 and Typesetter C too. It was a separate program (
/lib/cpp) that the cc command called if needed.
> Since V7, the preprocessor is always run. The first C preprocessor was
> Ritchie's work, with no nested includes and no macros. V7's was by John
> Reiser, who added those parts.
>
Right, this is what I was referring to last night in reference to Sean's
comments. As I said, the /bin/cc command was a shell script and it peeked
at the first character to see if it was #. I still find myself starting
all C programs with a # on a line by itself ;-)
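A rough sketch (not the historical PWB script, whose text I don't have) of what that first-character peek could look like:

```shell
cat > cc-peek.sh <<'EOF'
#!/bin/sh
# Hypothetical sketch of a cc driver that, like the pre-V7 cc, decides
# whether to run the preprocessor by peeking at the file's first byte.
case "$(head -c 1 "$1")" in
"#") echo "preprocess then compile $1" ;;
*)   echo "compile $1 directly" ;;
esac
EOF
chmod +x cc-peek.sh
printf '#include <stdio.h>\nint main(){return 0;}\n' > peek_a.c
printf 'int main(){return 0;}\n' > peek_b.c
./cc-peek.sh peek_a.c    # preprocess then compile peek_a.c
./cc-peek.sh peek_b.c    # compile peek_b.c directly
```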
Note that the Ritchie cpp was influenced by Brian's Ratfor work, so using #
is not surprising.
This leads to a question/thought for this group, although I think needs to
move to COFF (which I have CC'ed for follow up).
I have often contended that one of the reasons why C, Fortran, and PL/1
were so popular as commercial production languages was because they could
be preprocessed. For a commercial shop where lots of different targets are
possible, that was hugely important. Pascal, for instance, has semantics
that make writing a preprocessor like cpp or Ratfor difficult (which was
one of the things Brian talks about in his "*Why Pascal is not my favorite
Programming Language <http://www.lysator.liu.se/c/bwk-on-pascal.html>*"
paper). [2]
So, if you went to commercial ISVs and looked at what they wrote in, it
was usually some sort of preprocessed language. Some used Ratfor, like a
number of commercial HPC apps vendors; Tektronix wrote PLOT10 in MORTRAN.
I believe it was Morgan Stanley that had a front-end for PL/1, whose name
I cannot recall. But you get the point: if you had to target different
runtime environments, it was best for your base code not to be specific.
However ... as C became the system programming language, the preprocessor
was important. In fact, it even gave birth to other tools, like autoconf,
to help control them. Simply, the idiom:
#ifdef SYSTEMX
#define SOME_VAR (1)
... do something specific
#endif /* SYSTEMX */
While loathsome to read, it actually worked well in practice.
The fact is, I hate the preprocessor in many ways, but love it for the
freedom it actually gave us to move code. Having programmed since
the 1960s, I remember how hard it was to move things, even if the language
was the same.
Today, modern languages try to forgo the preprocessor. C++'s solution is
to throw the kitchen sink into the language and have 'frameworks', none of
which work together. Java and its family try to control it with the
JVM. Go is a little too new to see if it's going to work (I don't see a lot
of production ISV code in it yet).
Note: A difference between then and now is that 1) we have fewer target
architectures, 2) we have fewer target operating environments, and 3) ISVs
don't like multiple different versions of their SW; they much prefer very
few, for maintenance reasons, so they like #1 and #2 [i.e. Cole's law of
economics in operation here].
So ... my question, particularly for those like Doug who have programmed
at least as long as I have: what do you think? You lived the same
time I did and know the difficulties we faced. Is the loss of a
preprocessor good or bad?
Clem
[1] Historical footnote about CMU. I was the person who brought V7 into
CMU, and I never updated the Speech or Vision systems; I don't think
anyone did after I left. We ran a CMU V7 variant mostly on the 11/34s (and
later on a couple of 11/44s, I believe) that had started to pop up.
Although later, if it was a DEC system, CS was moving to Vaxen when they
could get the $s (but the Altos and Perqs had become popular with the CMU
SPICE proposal). Departments like bio-engineering and mech E ran the
cheaper systems on-site and then networked over to the Computer Center's
Vaxen and PDP-20s when they needed address space.
[2] Note: Knuth wrote "Web" to handle a number of the issues Kernighan
talks about -- but he had to use an extended Pascal superset, and his program
was notable for not being portable (he wrote it for the PDP-10
Pascal). [BTW: Ward Cunningham, TW Cook and I once counted over 8
different 'Tek Pascal' variants and 14 different 'HP Basics'.]