[moving to COFF as this has drifted away from Unix]
On Sat, Sep 28, 2024 at 2:06 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> I have a somewhat different view. I have a son who is learning to program
> and he asked me about C. I said "C is like driving a sports car on a
> twisty mountain road that has cliffs and no guard rails. If you want to
> check your phone while you are driving, it's not for you. It requires
> your full, focussed attention. So that sounds bad, right? Well, if
> you are someone who enjoys driving a sports car, and are good at it,
> perhaps C is for you."
>
> If you really want a language with no guard rails, try programming in
BLISS.
Regarding C and C++ having dangerous language features--of course they do.
Every higher-level language I've ever seen has its set of toxic language
features that should be avoided if you want reliability and maintainability
for your programs. And a set of things to avoid if you want portability.
Regarding managed dynamic memory allocation schemes that use garbage
collection vs. malloc()/free(), there are some applications where they are
not suitable. I'm thinking about real-time programs. You can't have your
missile defense software pause to do garbage collection when you're trying
to shoot down an incoming ballistic missile.
-Paul W.
Poul-Henning also suggests this link ...
Warren
----- Forwarded message from Poul-Henning Kamp -----
There is also 3B stuff in various other subdirectories on that site,
for instance: https://www.telecomarchive.com/six-digit.html
----- End forwarded message -----
Moving to COFF ...
From: "Rich Salz" <rich.salz(a)gmail.com>
To: "TUHS main list" <tuhs(a)tuhs.org>
Cc: "Douglas McIlroy" <douglas.mcilroy(a)dartmouth.edu>
Sent: Monday, September 30, 2024 4:03:15 PM
Subject: [TUHS] Re: Minimum Array Sizes in 16 bit C (was Maximum)
On Mon, Sep 30, 2024 at 3:12 PM Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
noone ever told them that even the eldest C can be used in a safe
way;
Perhaps we have different meanings of the word safe.
void foo(char *p) { /* interesting stuff here */ ; free(p); }

void bar() {
    char *p = malloc(20);
    foo(p);                     /* p is freed here */
    printf("foo is %s\n", p);   /* use after free */
    foo(p);                     /* double free */
}
Why should I have to think about this code when the language already knows what is wrong?
No one would make the claim that programming in machine "language" is safe.
No one would make the claim that programming in assembly "language" is safe.
I've always viewed C as a portable assembler. I think the real issue has nothing to do with the "safety" of C, but rather the "safety" of your-choice-of-C-libraries-and-methods.
My $.02
Jim
FWIW, I just saw this in code generated by bison:
(yyvsp[-4].string_val), (yyvsp[-2].string_val), (yyvsp[0].string_val)
(IIUC) referencing the addresses under the top of the stack when passing
$2, $4, $6 into a function from an action (skipping a couple of
tokens). So the sign just depends on which way the stack is growing.
As for range checking of pointers into a malloc'd block of memory, the
pointer could hold just two things: the address of the start of the block
and the pointer itself, some moving address in the block; and then
before the start of the block malloc could stash the address of the end
of the block (where it could be referenced by all pointers into the
block). So instead of a triple word, the pointer is a double word, and
the malloc'd block has an extra word before it. This must have been
done before by someone, somewhere.
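That layout is easy to sketch in C (all names here are invented for illustration; this is not a real allocator's API):

```c
#include <assert.h>
#include <stdlib.h>

/* The two-word "checked pointer": the block's start plus a moving
   address within it.  The word just before the block stores the
   address one past its end, shared by every pointer into the block. */
typedef struct {
    char *start;   /* first byte of the user block */
    char *cur;     /* current position within the block */
} cptr;

cptr checked_malloc(size_t n) {
    /* one extra word before the block to stash the end address */
    char **raw = malloc(sizeof(char *) + n);
    raw[0] = (char *)(raw + 1) + n;            /* end of user block */
    cptr p = { (char *)(raw + 1), (char *)(raw + 1) };
    return p;
}

char *checked_deref(cptr p) {
    char *end = ((char **)p.start)[-1];        /* fetch stashed end */
    assert(p.cur >= p.start && p.cur < end);   /* range check */
    return p.cur;
}
```

Every dereference costs one extra load and two compares; the per-block overhead is a single word.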
I don't think of pointers and arrays in C as the same thing, but rather
array references as an alternate syntax for pointer arithmetic (or vice
versa).
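A small illustration of that view, which also shows why "negative" subscripts are unremarkable when taken from a pointer into the middle of an array:

```c
#include <assert.h>

/* In C, a[i] is defined as *(a + i): subscripting is pointer
   arithmetic in disguise.  A pointer into the middle of an array
   can therefore be indexed with a negative subscript, as long as
   the result stays inside the underlying object. */
void demo(void) {
    int a[5] = {10, 11, 12, 13, 14};
    int *mid = &a[2];            /* points at the 12 */

    assert(a[3] == *(a + 3));    /* the definition itself */
    assert(3[a] == a[3]);        /* ...and its odd commutative twin */
    assert(mid[-2] == 10);       /* legitimate negative index */
    assert(mid[2] == 14);
}
```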
- Aron
Moved to Coff, because it's about programming style, not history.
> Perhaps I'm missing something? Clever arithmetic in the index
> calculation aside, this is semantically different than using an actual
> negative integer to index into an array? Moreover, if the intent is to
> start the sequence with 0, why set `fib(0)` to 1? How is this
> substantially different from the usual way of writing this:
I said the Fibonacci example was silly. Maybe you'll be more convinced by
the binomial-coefficient program below.
The array of interest is fib. base is simply scaffolding and doesn't appear
in the working code. You won't find the ith Fibonacci in base[i]; it's in
fib(i). But fib(-1) exists. What's important is that the C convention of
array indexes beginning at 0 has been circumvented.
I could be accused of subterfuge in depending on the semantics of static
storage to initialize fib(-1) to zero. Subterfuge or not, it's customary C
usage. The binomial-coefficient program relies on "out-of-bounds" zeros
abutting two sides of a triangle.
int base[N][N+2];
#define binom(n,i) base[n][(i)+1]
void fill() {
    int n, i;
    binom(0,0) = 1;
    for(n=1; n<N; n++)
        for(i=0; i<=n; i++)
            binom(n,i) = binom(n-1,i) + binom(n,i-1);
}
I think the offset algorithm above looks better than the more typical one
below.
The two programs happen to have identical character counts.
int binom[N][N+1];
void fill() {
    int n, i;
    for(n=0; n<N; n++) {
        binom[n][0] = 1;
        for(i=1; i<n; i++)
            binom[n][i] = binom[n-1][i] + binom[n][i-1];
        binom[n][n] = 1;
    }
}
Doug
A summary of a couple of longer posts.
Ralph Corderoy and I used different C syntax to access an MxN array A
whose subscripts begin at M0 and N0 in the respective dimensions. Here's a
somewhat simplified version of both. In our examples, M0 and N0 were
negative.
Mine:
int base[M][N];
#define A(i,j) base[i-M0][j-N0]
Ralph's:
int base[M][N];
int (*A)[N] = (int(*)[N])&base[-M0][-N0];
In my scheme element references must be written A(i,j). Ralph retains C
array syntax, A[i][j].
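For concreteness, both schemes can be put in one compilable fragment (the sizes and offsets here are arbitrary; Ralph's shifted pointer relies on rows being contiguous, which is conventional C though outside the letter of the standard):

```c
#define M 5
#define N 5
#define M0 (-2)   /* desired first row index */
#define N0 (-2)   /* desired first column index */

int base[M][N];

/* Doug's scheme: the macro hides the offset; references are A(i,j). */
#define A(i,j) base[(i)-M0][(j)-N0]

/* Ralph's scheme: a shifted pointer retains C array syntax, AP[i][j]. */
int (*AP)[N] = (int (*)[N])&base[-M0][-N0];
```

With these definitions, A(-2,-2) and AP[-2][-2] both name base[0][0], and A(2,2) and AP[2][2] both name base[M-1][N-1].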
Doug
Mea culpa.
I thought I could offer a simple example, but my binomial-coefficient
program is wrong, and loses its force when corrected. For a convincing
example, see the program in
https://digitalcommons.dartmouth.edu/cs_tr/385/
Unfortunately you have to read a couple of pages of explanation to see what
this program is up to. It's a fun problem, though.
Doug
Moved to COFF.
On 2024-09-20 11:07, Dave Horsfall wrote (in part):
>
> Giggle... In a device driver I wrote for V6, I used the expression
>
> "0123"[n]
>
> and the two programmers whom I thought were better than me had to ask me
> what it did...
>
> -- Dave, brought up on PDP-11 Unix[*]
>
> [*]
> I still remember the days of BOS/PICK/etc, and I staked my career on Unix.
Working on embedded systems, we often used constructs such as a[-4] to
either read or modify stuff on the stack (for that particular
compiler+processor only).
S.
> If you have historical resources on Plan 9 or Inferno, or are reminded of
> any interesting tidbits, you can also share them here, as this list is
> already recognized by historians as a legitimate source.
Can someone tell me where the original "here" of the quoted message is?
Thanks,
Doug
Hi all, Edouard asked me to pass this e-mail on to both TUHS and COFF lists.
Cheers, Warren
----- Forwarded message from Edouard Klein <edouardklein(a)gmail.com> -----
Subject: History track during the next IWMP9 in Paris next May
Date: Thu, 29 Aug 2024 22:46:30 +0200 (1 week, 4 days, 19 hours ago)
Dear Unix history enthusiasts,
The 11th International Workshop on Plan 9 will be held in Paris on May
22-24 2025.
One of the focus areas this year will be Plan 9's history and its
influence on later computer science and industry trends.
The history team at the CNAM (where the conference will be held) has
agreed to help us prepare for the event and stands ready to record oral
histories, or any other format that would make the participants happy.
In 2017 they organized a "colloque" at which Clem spoke (and I
listened somewhere in the audience) on UNIX:
https://technique-societe.cnam.fr/colloque-international-unix-en-europe-ent…
I will keep the list posted as our efforts pan out, but I thought I'd
get the word out as soon as possible.
If you have historical resources on Plan 9 or Inferno, or are reminded of
any interesting tidbits, you can also share them here, as this list is
already recognized by historians as a legitimate source.
The program committee members, many (if not all) of whom roam this very
list, would welcome any proposal or contributions in this area :)
The CfP is at:
http://iwp9.org/
Looking forward to reading what you care to share, or to seeing you in
person in Paris,
Cheers,
Edouard.
----- End forwarded message -----
I just noticed this:
Sep 2018: The Multics History Project Archives were donated to the Living
Computer Museum in Seattle. This included 11 boxes of tapes, and 58 boxes of
Multics and CTSS documentation and listings. What will happen to these items
is unknown.
https://multicians.org/multics-news.html#events
That last sentence is ironic; I _assume_ it was written before the recent news.
I wonder what will happen to all such material at the LCM. Anyone know?
Noel
That's where it all began for Unix in Oz (the Dept of Power Systems paid
for the Ed5 tape, as I recall). I'm told that the campus has changed so
much it's now unrecognisable...
https://www.openday.unsw.edu.au/planner
-- Dave
I'm cleaning out my desk as retirement looms (a few more months) and
found my Sun coffee mug!
https://udel.edu/~mm/sun/
The back of the mug says such good things, yet...
Mike Markowski
> From: Larry McVoy
{Moving this to COFF, as it's not UNIX-related. I'll have another reply there
as well, about the social media point.}
> The amazing thing, to me, is I was a CS student in very early 1980's
> and I had no idea of the history behind the arpanet.
I don't think that was that uncommon; at MIT (slightly earlier, I think
'74-'77 for me) the undergrads weren't learning anything about networking
there either, then.
I think the reason is that there wasn't much to teach - in part because we
did not then know much about networking, and in part because it was not yet
crystal clear how important it would become (more below on that).
There was research going on in the area, but even at MIT one doesn't teach
(or didn't then; I don't know about now) on-going research subjects to
undergrads. MIT _did_ have, even then, a formal UROP ('undergrad research
opportunities') program, which allowed undergrads to be part of research
groups - a sheer genius idea - which in some fast-moving fields, like CS, was
an inestimable benefit to more forward undergrads in those fields.
I joined the CSR group at LCS in '77 because I had some operating system
ideas I wanted to work on; I had no idea at that point that they were doing
anything with networks. They took me on as the result of the sheerest chance;
they had just gotten some money from DARPA to build a LAN, and the interface
was going to be built for a UNIBUS PDP-11, and they needed diagnostics, etc
written; and they were all Multicians. I, by chance, knew PDP-11 assembler -
which none of them did - the MIT CS introductory course at that point taught
it. So the deal was that I'd help them with that, and I could use the machine
to explore my OS ideas in return.
Which never really happened; it soon became clear to me that data
networking was going to have an enormous impact on the world, and at that
point it was also technically interesting, so I quickly got sucked into that
stuff. (I actually have a written document hiding in a file drawer somewhere
from 1978 or so, which makes it plain that I'm not suffering 20-20
hindsight here, in talking about foreseeing the impact; I should dig it up.)
The future impact actually wasn't hard to foresee: looking at what printed
books had done to the world, and then telegraphs/telephones, and what
computers had already started to do at that point, it was clear that
combining them all was going to have an incredible impact (and we're still
adapting to it).
Learning about networking at the time was tricky. The ARPANET - well, NCP and
below - was pretty well documented in a couple of AFIPS papers (linked to at
the bottom here:
https://gunkies.org/wiki/ARPANET
which I have a very vague memory I photocopied at the time out of the bound
AFIPS proceedings in the LCS library). The applications were only documented
in the RFC's.
(Speaking of which, at that level, the difference between the ARPANET and the
Internet was not very significant - it was only the internals, invisible to
the people who did 'application' protocols, that were completely different.
HTTP would probably run just fine on top of NCP, for instance.)
Anything past that, the start of the internet work, that, I picked up by i)
direct osmosis from other people in CSR who were starting to think about
networks - principally Dave Clark and Dave Reed - and then ii) from documents
prepared as part of the TCP/IP effort, which were distributed electronically.
Which is an interesting point; the ARPANET was a key tool in the internet
work. The most important aspect was email; non-stop discussion between the
widely separated groups who were part of the project. It also made document
distribution really easy (which had also been true of the latter stages of
the ARPANET project, with the RFC's). And of course it was also a long-haul
network that we used to tie the small internets at all the various sites
(BBN, SRI, ISI - and eventually MIT) into the larger Internet.
I hate to think about trying to do all that work on internets, and the
Internet, without the ARPANET there, as a tool.
Noel
Kevin Bowling wrote in
<CAK7dMtAH0km=RLqY0Wtuw6R7jXyWg=xQ+SPWcQA-PLLaTZii0w(a)mail.gmail.com>:
|On Wed, Aug 14, 2024 at 11:59 AM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
|> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com> \
|> wrote:
|>> ...
|>> The issue came when people started using the mail system as a programmat\
|>> ic messaging scheme (i.e., fork: some_program | mail user) and other \
|>> programs started to parse the output.
|>> ...
|> Mail as IPC...that's what I'm reading from that anyway...now that's \
|> an interesting concept. Did that idea ever grow any significant \
|> legs? I can't tell if the general concept is clever or systems abuse, \
|> in those days it seems like it could've gone either way.
|
|I like Clem's answer on mail IPC/RPC.
|
|To add I have heard some stories of NNTP being used once upon a time
|at some service providers the way ansible/mcollective/salt might be
|used to orchestrate UNIX host configurations and application
|deployments. The concept of Control messages is somewhat critical to
|operations, so it's not totally crazy, but isolating article flows
|would give me some heartburn if the thing has privileged system
|access.. would probably want it on a totally distinct
|instance+port+configuration.
|
|Email and Usenet both have some nice properties of implementing a
|"Message Queue" like handling offline hosts when they come back. But
|the complexity of mail and nntp implementations lean more towards
|system abuse IMO.
The IETF will go for SML (structured email)
https://datatracker.ietf.org/group/sml/about/
which then goes for machine interpretable email message( part)s.
|> I guess it sorta did survive in the form of automated systems today \
|> expecting specially formatted emails to trigger "stuff" to happen.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Matt - I'm going to BCC: TUHS and move this to COFF - since while UNIX was
certainly in the mix in all this, it was hardly the first or the only place it
happened.
On Wed, Aug 14, 2024 at 2:59 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> On Wednesday, August 14th, 2024 at 9:45 AM, Clem Cole <clemc(a)ccc.com>
> wrote:
>
> >
> > ...
> > The issue came when people started using the mail system as a
> programmatic messaging scheme (i.e., fork: some_program | mail user) and
> other programs started to parse the output.
> > ...
>
> Mail as IPC...that's what I'm reading from that anyway...now that's an
> interesting concept.
It's kind of funny, the history. ARPANET gives us FTP as a way to
exchange files. So, people figure out how to hack the mailer to call FTP
to send a file remotely and set up a cron/batch submission, a.k.a.
RJE. This is encouraged by DARPA because part of the justification of the
ARPANET was to be able to share resources, and this enables supercomputers
of the day to be able to provide cycles to DARPA folks who might not have
access to larger systems. Also, remember, mailers were local to systems
at that point.
So someone gets the bright idea to hook the mailer into this system --
copy the "mail file" and set up a remote job to mail it locally. Let's
just say this proves to be a cool idea, and the idea of intersystem email
begins in the >>ARPANET<< community.
So the idea of taking it to the next level was not that far off. The
mailer transports started to offer (limited) features to access services.
By the time of Kurt's "delivermail," he had added a feature, I think for
system logs, that allowed specific programs to be called. In fact, it may
have been invented elsewhere but before Eric would formalize "vacation" -
Jim Kleckner and I hacked together a "awk" script to do that function on
the UCB CAD 4.1 systems. I showed it to Sam and a few other people, and I
know it went from Cory to Evans fairly quickly. Vacation(1) was written
shortly thereafter to be a bit more flexible than our original script.
> Did that idea ever grow any significant legs?
I guess the word here is significant. It certainly was used where it made
sense. In the CAD group, we had simulations that might run for a few
days. We used to call the mailer every so often to send status and
sometimes do something like a checkpoint. It led to Sam writing syslogd,
particularly after Joy created UNIX domain sockets. But I can say we used
it a number of places in systems oriented or long running code before
syslogd as a scheme to log errors, deal with stuff.
> I can't tell if the general concept is clever or systems abuse, in those
> days it seems like it could've gone either way.
>
> I guess it sorta did survive in the form of automated systems today
> expecting specially formatted emails to trigger "stuff" to happen.
Exactly.
Greg, this needs to move to COFF, so I'm BCCing TUHS in my reply. (My error
in the original message was that I should have BCC'd everyone but COFF, so
replies were directed there. Mea culpa).
However, since I have seen different people on all these lists bemoan the
loss of the LCM+L, I hope that by the broader announcement, a number of you
will consider the $36/yr membership to help Stephen and his team keep
these systems running and at least the "labs" part of the old LCM+L
mission alive.
On Thu, Aug 1, 2024 at 9:56 PM Gregg Levine <gregg.drwho8(a)gmail.com> wrote:
> Hello!
> Pardon me for asking Clem, but would you mind naming the survivors?
The details are still coming out from Stephen and friends -- I would
recommend listening to his presentation and then maybe joining the List
Server at SDF by sending a (plain text) email to majordomo(a)sdf.org with
the subject and body containing the line: subscribe museum-l
I have an idea what these Toads are, and of course what Multics happened to
> be, but that's it.
>
LCM+L owned a real Honeywell 6180 front panel. The folks in their lab
interfaced it to a microcontroller (I think it was a Raspberry Pi 3 or 4, but it
could be something like a BeagleBone, I never knew). It was running
Multics Release 12.8 on a SimH-derived Honeywell 6180 [I'm not sure if
those changes ever made it back to OpenSIMH - I have not personally tried
it myself]. This system seems to have been moved to SDF's new site.
Also, a number of the MIT Multics tapes had been donated to the LCM+L.
These have survived, and the SDF has them. I'll not repeat Stephen's
report here, but he describes what they have and are doing.
Miss Piggy is the PDP-11/70 that Microsoft purchased and used for their
original SW development. It has been running a flavor of Unix Seventh Edition
- I do not know what type of updates were added, but I expect the DEC v7m
and the V7 addendum to be there. You can log in and try it yourself with
"ssh menu(a)sdf.org" and picking Miss Piggy in the UNIX submenu. Miss Piggy
used to live and be on display at the LCM+L, but Stephen and the SDF were
involved in its admin/operation. Stephen says in his presentation that they
are trying to get Miss Piggy back up and running [my >>guess<< is that the
"Miss Piggy" instance on the SDF menu is currently running on an OpenSIMH
instance while the real hardware is being set up at the new location].
In the early 1980s, as DEC started to de-commit to the 36-bit line after
they introduced the 32-bit Vax systems, a number of PDP-10 clones appeared
on the market. For instance, the System Concepts SC-40 was what
Comp-U-Serve primarily switched to. Similarly, many ex-Stanford AI types
forked to create the XKL TOAD systems, a KL10 clone. SDF and LCM+L owned
several of these two styles of systems and were on display and available
for login. Since Twenex.org is live (and has been) and Stephen shows a
picture of the SC40, again, I am (again) >>guessing<< that these have all
been moved to the new location for SDF.
Stephen mentioned in his presentation that they have the LCM-L's Vax7000
but do not yet have the 3-phase power in their computer room. He suggested
that it is one of the most popular machines in the SDF menu, and they
intend to make it live shortly.
It is unclear what became of some of the other items. It was pointed out
that running a CDC6500 is extremely expensive to operate from a power
standpoint, so they offer an NOS login using the DTCyber simulator. He
never mentioned what became of the former Purdue machine that the LCM owned
and had restored.
I am interested in knowing what happened to the two PDP-7s. I know that at
least one was privately owned, but was being restored and displayed at the
LCM+L. It was on one of these systems that Unix V0 was resurrected and run
for the UNIX 50th Anniversary Party that the LCM+L hosted. The LCM+L had
some interesting peripherals. For instance, the console for Miss Piggy was
a somewhat rare ASR37 [which is Upper/Lower case and the "native" terminal
for Research Unix]. I hope they have it also. The LCM+L had a number of
different types of tape transports for recovering old data. Stephen
mentioned that they have some of these but did not elaborate.
Clem
I think this is off topic for TUHS and more appropriate for COFF.
Gregg Levine wrote:
> Pardon me for asking Clem, but would you mind naming the survivors? I
> have an idea what these Toads are, and of course what Multics happened
> to be, but that's it.
We don't know exactly yet, but according to the video, there's a VAX
7000 and a DEC-2020. The TOAD computers are XKL's PDP-10 remake;
there's also another one called SC-40. Stephen also mentions Multics
tapes were rescued.
Maybe the best way to see what is there right now, is to dial into
"ssh menu(a)sdf.org"
[-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-]
-+- SDF Vintage Systems REMOTE ACCESS -+-
[-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-]
[a] multics Multics MR12.8 Honeywell 6180
[b] toad-2 TOPS-20 7(110131)-1 XKL TOAD-2
[c] twenex TOPS-20 7(63327)-6 XKL TOAD-2
[d] sc40 TOPS-20 7(21733) SC Group SC40
[e] lc ITS ver 1648 PDP-10 KS10
[f] ka1050 TOPS-10 6.03a sim KA10 1050
[g] kl2065 TOPS-10 7.04 sim KL10 2065
[h] rosenkrantz OpenVMS 7.3 VAX 7000-640
[i] tss8 TSS/8 PDP-8/e
[j] ibm4361 VM/SP5 Hercules 4361
[k] ibm7094 CTSS i7094
[l] cdc6500 NOS 1.3 DTCyber CDC-6500
[z] bitzone NetBSD BBS AMD64
[1] Proceed to the UNIX Systems sub-menu
[2] Information about Vintage Systems at SDF.ORG
And the Unix section:
[a] misspiggy UNIX v7 PDP-11/70
[c] lcm3b2 UNIX SVR3.2.3 AT&T 3B2/1000-70
[d] guildenstern BSD 4.3 simh MicroVAX 3900
[e] snake BSD 2.11 PDP-11/84
[f] hkypux HP/UX 10.20 HP9000/715
[g] truly TRU64 5.0 DEC Alpha 500au
[h] three SunOS 4.1.1 Sun-3/160
[i] indy IRIX 6.5 SGI Indy R5000
[j] ultra Ultrix 4.5 simh MicroVAX 3900
Please excuse the wide distribution, but I suspect this will have general
interest in all of these communities due to the loss of the LCM+Labs.
The good folks from SDF.org are trying to create the Interim Computer
Museum:
https://icm.museum/join.html
As Lars pointed out in an earlier message to COFF there is a 1hr
presentation on the plans for the ICM.
https://toobnix.org/w/ozjGgBQ28iYsLTNbrczPVo
FYI: The yearly (Bootstrap) subscription is $36
They need to money to try to keep some of these systems online and
available. The good news is that it looks like many of the assets, such as
Miss Piggy, the Multics work, the Toads, and others, from the old LCM are
going to be headed to a new home.
Hi all,
Some time ago I dived into ed and tried programming with it a bit. It
was an interesting experience, but I feel like the scrolling
visual terminal can't properly emulate the paper terminal. You can't
rip out a printout and put it next to you, scribble on it, etc.
I'd like to try replicating the experience more closely but I'm not
interested in acquiring collector's items or complex mechanical
hardware. There don't seem to be contemporary equivalents of the TI
Silent 700, so I've been looking at standalone printing devices to
combine with a keyboard. But the best I can find is line printing,
which is unsuitable for input.
Any suggestions?
Sijmen
> Yeah, I wasn't specific enough.
> The ownership of the model 67 changed to the State of NJ, but it was
> operated and present at Princeton, until replaced by a 370/158, which in
> turn changed owners back to Princeton in 75.
>
> What OS did you use on the 67?
On the /67 I used TSS with a free account they gave me for being in a
local computer club. On the /91 I mostly used the free stuff but one
summer in the early 70s I had a job speeding up an Ecom professor's
Fortran model. Compiling it with Fortran H rather than G, and adjusting
an assembler routine that managed an external file not to open and close
the file on every call helped a lot.
Paul Hilfinger had a long career at UC Berkeley and is easy to find if you
want to ask him if he has any of his old papers.
R's,
John
>
> On Wed, Jul 17, 2024 at 6:58 PM John Levine <johnl(a)taugh.com> wrote:
>
>> It appears that Tom Lyon <pugs78(a)gmail.com> said:
>>> -=-=-=-=-=-
>>>
>>> Jonathan - awesome!
>>> Some Princeton timing: the 360/67 arrived in 1967, but was replaced in the
>>> summer of 1969 by the 360/91.
>>
>> No, the /67 and /91 were there at the same time. I used them both in high
>> school.
>> I graduated in 1971 so that must have been 1969 to 71, and when I left I'm
>> pretty
>> sure both were still there.
>>
>> R's,
>> John
>>
>>
>>> BWK must've got started on the 7094 that preceded the 67, but since it was
>>> FORTRAN the port wasn't hard.
>>> Now I wonder what Paul Hilfinger did and whether it was still FORTRAN.
>>>
>>> I graduated in 1978, ROFF usage was still going strong!
>>>
>>> On Wed, Jul 17, 2024 at 5:42 PM Jonathan Gray <jsg(a)jsg.id.au> wrote:
>>>
>>>> On Wed, Jul 17, 2024 at 09:45:57PM +0000, segaloco via TUHS wrote:
>>>>> On Wednesday, July 17th, 2024 at 1:51 PM, segaloco <
>>>> segaloco(a)protonmail.com> wrote:
>>>>>
>>>>>> Just sharing a copy of the Roff Manual that I had forgotten I
>> scanned a little while back:
>>>>>>
>>>>>> https://archive.org/details/roff_manual
In 1959, when Doug Eastwood and I, at the suggestion of George Mealy, set
out to add macro capability to SAP (Share assembly program), the word
"macro"--short for "macroinstruction"--was in the air, though none of us
had ever seen a macroprocessor. We were particularly aware that GE had a
macro-capable assembler. I still don't know where or when the term was
coined. Does anybody know?
We never considered anything but recursive expansion, where macro
definitions can contain macro calls; thus the TX-0 model comes as quite a
surprise. We kept a modest stack of the state of each active macro
expansion. We certainly did not foresee that within a few years some
applications would need a 70-level stack!
General stack-based programming was not common practice (and the term
"stack" did not yet exist). This caused disaster the first time we wrote a
macro that generated a macro definition, because a data-packing subroutine
with remembered state, which was used during both definition and expansion,
was not reentrant. To overcome the bug we had in effect to introduce
another small stack to keep the two uses out of each other's way. Luckily
there were no more collisions between expansion and definition. Moreover,
this stack needed to hold only one suspended state because expansion could
trigger definition but not vice versa.
Interestingly, the problem in the previous paragraph is still with us 65
years later in many programming languages. To handle it gracefully, one
needs coroutines or higher-order functions.
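The hazard can be sketched in C: a routine with remembered static state cannot serve two nesting levels at once, while passing the state explicitly (the moral equivalent of that extra little stack) keeps definition and expansion out of each other's way. This is a modern illustration, not the 1959 SAP code:

```c
/* A data-packing routine with remembered state, sketched two ways. */

/* Non-reentrant: one hidden buffer shared by every caller, so a
   nested use (expansion triggering definition) clobbers the outer. */
static int shared_buf[16];
static int shared_n = 0;
int pack_static(int word) {
    shared_buf[shared_n++] = word;
    return shared_n;            /* words accumulated so far */
}

/* Reentrant: each nesting level owns its own state, so an inner
   use cannot disturb an outer one. */
typedef struct { int buf[16]; int n; } packer;
int pack(packer *p, int word) {
    p->buf[p->n++] = word;
    return p->n;
}
```

With pack_static, an inner sequence of calls silently appends to the outer one's buffer; with pack, the caller's packer plays the role of the stacked state.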
Doug
I was idly leafing through Padlipsky's _Elements Of Network Style_ the
other day, and on page 72, he was imagining a future in which a cigar-box
sized PDP-10 would be exchanging data with a breadbox-sized S/370.
And here we are, only 40 years later, and 3 of my PDP-10s and my S/370 are
all running on the same cigarette-pack sized machine, which cost something
like $75 ($25 in 1984 dollars).
Adam
> the DEC PDP-1 MACRO assembler manual says that a macro call
> is expanded by copying the *sequence of 'storage words' and
> advancing the current location (.) for each word copied*
> I am quite surprised.
I am, too. It seems that expansion is not recursive. And that it can only
allocate storage word by word, not in larger blocks.
Doug
> Well, doesn't it depend on whether VAX MACRO kept the macros as
> high-level entities when translating them, or if it processed macros in
> the familiar way into instructions that sat at the same level as
> hand-written ‘assembler’. I don't think this thread has made that clear
> so far.
The Multics case that I cited was definitely in the latter category.
There was no "translator". Effectively there were just two different
macro packages applied to the same source file.
In more detail, there were very similar assemblers for the original
IBM machines and the new GE machines. Since they didn't have
"include" facilities, there were actually two source files that differed
only in their macro definitions. The act of translation was to supply
the latter set of definitions--a notably larger set than the former
(which may well have been empty).
Doug