I just noticed this:
Sep 2018: The Multics History Project Archives were donated to the Living
Computer Museum in Seattle. This included 11 boxes of tapes, and 58 boxes of
Multics and CTSS documentation and listings. What will happen to these items
is unknown.
https://multicians.org/multics-news.html#events
That last sentence is ironic; I _assume_ it was written before the recent news.
I wonder what will happen to all such material at the LCM. Anyone know?
Noel
Hi all,
Some time ago I dived into ed and tried programming with it a bit. It
was an interesting experience, but I feel like a scrolling visual
terminal can't properly emulate a paper terminal. You can't rip out a
printout and put it next to you, scribble on it, etc.
I'd like to try replicating the experience more closely, but I'm not
interested in acquiring collector's items or complex mechanical
hardware. There don't seem to be contemporary equivalents of the TI
Silent 700, so I've been looking at standalone printing devices to
combine with a keyboard. But the best I can find is line printing,
which is unsuitable for input.
Any suggestions?
Sijmen
> Yeah, I wasn't specific enough.
> The ownership of the model 67 changed to the State of NJ, but it was
> operated and present at Princeton, until replaced by a 370/158, which in
> turn changed owners back to Princeton in 75.
>
> What OS did you use on the 67?
On the /67 I used TSS with a free account they gave me for being in a
local computer club. On the /91 I mostly used the free stuff, but one
summer in the early 70s I had a job speeding up an Econ professor's
Fortran model. Compiling it with Fortran H rather than G, and changing
an assembler routine that managed an external file so that it didn't
open and close the file on every call, helped a lot.
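(A sketch in C of that second fix, with invented names; the original
was an assembler routine, but the idea is just to open the file once
and keep it open across calls:)

    #include <stdio.h>
    #include <stdlib.h>

    /* Keep the file open between calls instead of opening and
       closing it on every invocation, which was the bottleneck. */
    static FILE *ext_file;

    void record_write(const void *buf, size_t n)
    {
        if (ext_file == NULL) {              /* open once, lazily */
            ext_file = fopen("extfile.dat", "ab");
            if (ext_file == NULL)
                abort();
        }
        fwrite(buf, 1, n, ext_file);         /* no fclose() per call */
    }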
Paul Hilfinger had a long career at UC Berkeley and is easy to find if you
want to ask him if he has any of his old papers.
R's,
John
>
> On Wed, Jul 17, 2024 at 6:58 PM John Levine <johnl(a)taugh.com> wrote:
>
>> It appears that Tom Lyon <pugs78(a)gmail.com> said:
>>> Jonathan - awesome!
>>> Some Princeton timing: the 360/67 arrived in 1967, but was replaced in the
>>> summer of 1969 by the 360/91.
>>
>> No, the /67 and /91 were there at the same time. I used them both in high
>> school.
>> I graduated in 1971 so that must have been 1969 to 71, and when I left
>> I'm pretty sure both were still there.
>>
>> R's,
>> John
>>
>>
>>> BWK must've got started on the 7094 that preceded the 67, but since it was
>>> FORTRAN the port wasn't hard.
>>> Now I wonder what Paul Hilfinger did and whether it was still FORTRAN.
>>>
>>> I graduated in 1978, ROFF usage was still going strong!
>>>
>>> On Wed, Jul 17, 2024 at 5:42 PM Jonathan Gray <jsg(a)jsg.id.au> wrote:
>>>
>>>> On Wed, Jul 17, 2024 at 09:45:57PM +0000, segaloco via TUHS wrote:
>>>>> On Wednesday, July 17th, 2024 at 1:51 PM, segaloco <
>>>> segaloco(a)protonmail.com> wrote:
>>>>>
>>>>>> Just sharing a copy of the Roff Manual that I had forgotten I
>>>>>> scanned a little while back:
>>>>>>
>>>>>> https://archive.org/details/roff_manual
In 1959, when Doug Eastwood and I, at the suggestion of George Mealy, set
out to add macro capability to SAP (Share assembly program), the word
"macro"--short for "macroinstruction"--was in the air, though none of us
had ever seen a macroprocessor. We were particularly aware that GE had a
macro-capable assembler. I still don't know where or when the term was
coined. Does anybody know?
We never considered anything but recursive expansion, where macro
definitions can contain macro calls; thus the TX-0 model comes as quite a
surprise. We kept a modest stack of the state of each active macro
expansion. We certainly did not foresee that within a few years some
applications would need a 70-level stack!
General stack-based programming was not common practice (and the term
"stack" did not yet exist). This caused disaster the first time we wrote a
macro that generated a macro definition, because a data-packing subroutine
with remembered state, which was used during both definition and expansion,
was not reentrant. To overcome the bug we had in effect to introduce
another small stack to keep the two uses out of each other's way. Luckily
there were no more collisions between expansion and definition. Moreover,
this stack needed to hold only one suspended state because expansion could
trigger definition but not vice versa.
Interestingly, the problem in the previous paragraph is still with us 65
years later in many programming languages. To handle it gracefully, one
needs coroutines or higher-order functions.
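(A minimal sketch in C of the collision and the one-slot fix; the
36-bit word matches the 704, but every name here is invented:)

    #include <stdio.h>

    /* A data-packing subroutine with remembered state, like the one
       that caused the disaster: it packs 6-bit characters into 36-bit
       words, keeping partial-word state between calls, so it is not
       reentrant. */
    struct packer { unsigned long word; int bits; };

    static struct packer pk;         /* one global state for both uses */

    void pack6(int c)
    {
        pk.word = (pk.word << 6) | (c & 077);
        if ((pk.bits += 6) == 36) {  /* a full word: emit and reset */
            printf("%012lo\n", pk.word);
            pk.word = 0;
            pk.bits = 0;
        }
    }

    /* The fix, in effect a one-slot stack: expansion can trigger
       definition but not vice versa, so one suspended state suffices. */
    static struct packer saved;

    void begin_definition(void) { saved = pk; pk.word = 0; pk.bits = 0; }
    void end_definition(void)   { pk = saved; }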
Doug
I was idly leafing through Padlipsky's _Elements Of Network Style_ the
other day, and on page 72, he was imagining a future in which a
cigar-box-sized PDP-10 would be exchanging data with a breadbox-sized S/370.
And here we are, only 40 years later, and 3 of my PDP-10s and my S/370 are
all running on the same cigarette-pack sized machine, which cost something
like $75 ($25 in 1984 dollars).
Adam
> the DEC PDP-1 MACRO assembler manual says that a macro call
> is expanded by copying the *sequence of 'storage words' and
> advancing the current location (.) for each word copied*
> I am quite surprised.
I am, too. It seems that expansion is not recursive. And that it can only
allocate storage word by word, not in larger blocks.
Doug
> Well, doesn't it depend on whether VAX MACRO kept the macros as
> high-level entities when translating them, or if it processed macros in
> the familiar way into instructions that sat at the same level as
> hand-written ‘assembler’. I don't think this thread has made that clear
> so far.
The Multics case that I cited was definitely in the latter category.
There was no "translator". Effectively there were just two different
macro packages applied to the same source file.
In more detail, there were very similar assemblers for the original
IBM machines and the new GE machines. Since they didn't have
"include" facilities, there were actually two source files that differed
only in their macro definitions. The act of translation was to supply
the latter set of definitions--a notably larger set than the former
(which may well have been empty).
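(A loose analogy in C-preprocessor terms, with every name invented:
one common source file, two macro packages that differ only in their
definitions, and no translator program anywhere; the act of
translation is just the choice of package supplied at build time:)

    /* prog.c -- the single common source file */
    #include <stdio.h>
    #include MACHINE_DEFS   /* e.g. cc -DMACHINE_DEFS='"ge_defs.h"' prog.c */

    int main(void)
    {
        EMIT_GREETING();    /* expands differently under each package */
        return 0;
    }

    /* The two packages, each a one-line header in this toy:

       ibm_defs.h:  #define EMIT_GREETING() puts("IBM definitions")
       ge_defs.h:   #define EMIT_GREETING() puts("GE definitions")   */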
Doug
On Wed, Jul 10, 2024 at 9:54 PM John R Levine <johnl(a)taugh.com> wrote:
> On Wed, 10 Jul 2024, Dan Cross wrote:
> > It's not clear to me why you suggest with such evident authority that
> > Knuth was referring only to serialized instruction emulation and not
> > something like JIT'ed code; true, he doesn't specify one way or the
> > other, but I find it specious to conclude that that implies the
> > technique wasn't already in use, or at least known.
>
> The code on pages 205 to 211 shows an instruction by instruction
> interpreter. I assume Knuth knew about JIT compiling since Lisp systems
> had been doing it since the 1960s, but that's not what this section of the
> book is about.
Sure. But we're trying to date the topic here; my point is that
JITing was well known, and simulation was similarly well known; we
know when work on those books started; it doesn't seem that odd to me
that combining the two would be known around that time as well.
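(For concreteness, a minimal C sketch of the strict
instruction-by-instruction emulation being contrasted with batch or
JIT translation; the toy machine here is invented. The point is that
the decode cost is paid on every executed instruction, which is
exactly what translating to native code avoids:)

    #include <stdint.h>

    enum { OP_ADD, OP_SUB, OP_JMP, OP_HALT };

    /* Interpret a toy 16-bit machine: 4-bit opcode, 6-bit register
       fields.  reg[] must have 64 entries. */
    void interpret(uint16_t *mem, uint16_t *reg)
    {
        uint16_t pc = 0;
        for (;;) {
            uint16_t inst = mem[pc++];          /* fetch */
            uint16_t op  = inst >> 12;          /* decode, every time */
            uint16_t dst = (inst >> 6) & 077;
            uint16_t src = inst & 077;
            switch (op) {                       /* dispatch */
            case OP_ADD:  reg[dst] += reg[src]; break;
            case OP_SUB:  reg[dst] -= reg[src]; break;
            case OP_JMP:  pc = inst & 07777;    break;
            case OP_HALT: return;
            }
        }
    }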
> One of the later volumes of TAOCP was supposed to be about
> compiling, but it seems unlikely he'll have time to write it.
Yes; volumes 5, 6 and 7 are to cover parsing, languages, and compilers
(more or less respectively). Sadly, I suspect you are right that it's
unlikely he will have time to write them.
> >> We've been discussing batch or JIT translation of code which gives
> >> much better performance without a lot of hardware help.
> >
> > JIT'd performance of binary transliteration is certainly going to be
> > _better_ than strict emulation, but it is unlikely to be _as good_ as
> > native code.
>
> Well, sure, except in odd cases like the Vax compiler and reoptimizer
> someone mentioned a few messages back.
I think the point about the VAX compiler is that it's an actual
compiler and that the VAX MACRO-32 _language_ is treated as a "high
level" programming language, rather than as a macro assembly language.
That's not doing binary->binary translation, that's doing
source->binary compilation. It's just that, in this case, the source
language happens to look like assembler for an obsolete computer.
- Dan C.
On Sat, Jul 13, 2024 at 1:35 PM John R Levine <johnl(a)taugh.com> wrote:
> On Sat, 13 Jul 2024, Dan Cross wrote:
> > Honeywell was doing it with their "Liberator" software on the
> > Honeywell 200 computer in, at least, 1966:
> > https://bitsavers.org/pdf/honeywell/series200/charlie_gibbs/012_Series_200_…
> > (See the section on, "Conversion Compatibility."). Given that that
> > document was published in February of 1966, it stands to reason work
> > started on that earlier, in at least 1965 if not before ...
>
> Good thought. Now that you mention it, I recall that there were a lot of
> Autocoder to X translators, where X was anything from another machine
> to Cobol. Of course I can't find any of them now but they must have been
> around the same time.
>
> R's,
> John
>
> PS: For you young folks, Autocoder was the IBM 1401 assembler. There were
> other Autocoders but that was by far the most popular because the 1401 was
> the most popular computer of the late 1950s.
Oops, it appears that I forgot to Cc: COFF in my earlier reply to
John. Mea culpa.
For context, here's my complete earlier message; the TL;DR is that
Honeywell was doing binary translation from the 1401 to the H-200
sometime in 1965 or earlier; possibly as early as 1963, according to
some sources.
-->BEGIN<--
Honeywell was doing it with their "Liberator" software on the
Honeywell 200 computer in, at least, 1966:
https://bitsavers.org/pdf/honeywell/series200/charlie_gibbs/012_Series_200_…
(See the section on "Conversion Compatibility.") Given that that
document was published in February of 1966, it stands to reason that
work on it started earlier, in at least 1965 if not before (how much
earlier is unclear). According to Wikipedia, the machine was
introduced in late 1963; it's unclear whether the Liberator software
was released at the same time, however. Ease of translation of IBM
1401 instructions appears to have been a design goal. At least some
sources suggest that Liberator shipped with the H-200 in 1963
(https://ibm-1401.info/1401-Competition.html#UsingLib).
It seemed like what Doug was describing earlier was still
source->binary translation, using some clever macro packages.
-->END<--
- Dan C.
(Let me try sending this again, now that I'm a member of the list.)
Another example of operator-typing in BLISS, of more use in a kernel
than floating point, is in the relational operators: for example, GTR
(greater-than) for signed comparison, GTRU for unsigned comparison, and
GTRA for address comparison (where the number of bits in an address is
less than the number of bits in a machine word), and likewise for the
other five relations.
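(A small C illustration of why the distinction matters: the same bit
patterns order differently under signed and unsigned comparison:)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t a = 0xFFFFFFFF, b = 1;

        /* GTR-style signed comparison: 0xFFFFFFFF reads as -1 */
        printf("signed:   %d\n", (int32_t)a > (int32_t)b);  /* 0 */

        /* GTRU-style unsigned comparison: 4294967295 > 1 */
        printf("unsigned: %d\n", a > b);                    /* 1 */
        return 0;
    }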
On 7/9/24 13:18, Paul Winalski wrote:
> expression-1<offset-expr, size-expr, padding-expr>
> [...] padding-expr controls the value used to pad the high order
> bits: if even, zero-padded, if odd, one-padded.
>
> I always wondered how this would work on the IBM S/360/370
> architecture. It is big-endian and bit 0 of a machine word is the
> most significant bit, not the least significant as in DEC's architectures.
Offset and Size taken as numbers of bits in a value (machine word), not
bit numbers, works just fine for any architecture. The PDP-10 and other
DEC architectures before the PDP-11 were word-addressed with bit 0 at
the high-order end.
The optional 3rd parameter is actually 0 for unsigned (zero) extension
and 1 for signed (highest order bit in the extracted field) extension.
I don't think signed extension is widely used, but it depends on the
data structure you're using.
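(A sketch of those semantics in C, assuming 0 < size < 32; since
offset and size count bits within the value rather than naming bit
positions, the same definition works regardless of how the
architecture numbers its bits:)

    #include <stdint.h>

    /* BLISS-style field extraction e<offset, size, ext>: shift the
       field down, mask it, then zero- or sign-extend per ext. */
    uint32_t extract(uint32_t e, int offset, int size, int ext)
    {
        uint32_t mask = (1u << size) - 1;
        uint32_t field = (e >> offset) & mask;

        if (ext && (field & (1u << (size - 1))))  /* ext=1: extend */
            field |= ~mask;                       /* the field's top bit */
        return field;                             /* ext=0: zero-extend */
    }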
When verifying that, I found something I did not remember, that in
BLISS-16 and -32 (and I would guess also -64), but not -36 (the
word-addressed PDP-10), one could declare 8-bit signed and unsigned data:
OWN
    X: BYTE SIGNED,
    Y: BYTE;
So the concepts of 'type' in BLISS, at least regarding data size and
representation, can get a little complicated (not to be confused with
COMPLEX :-) ).
--------
An aside re: bit twiddling from CMU and hardware description languages:
Note that the ISP/ISPL/ISPS machine description language(s) from books
by Gordon Bell et al. used the following syntax for a bit or a bit field
of a register:
REG<BIT_NR>
REG<HIGH_BIT_NR:LOW_BIT_NR>
REG<BIT_NR,BIT_NR,...>
(',...' is meta syntax.) Sign extension was handled by a unary operator
because the data were all bit vectors, instead of values as in BLISS, so
the width (in bits) of an expression was known. The DECSIM logic
simulator inherited this syntax. Brackets were used for memory
addresses, so you might have M[0]<0:2> for the first 3 bits of the first
word in memory. I still find it the clearest syntax, but then it is
what I used for many years. (Sorry, VHDL and Verilog, although you won
due to the idea back in the day that internally-developed VLSI CAD
software was to be kept internal.)
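(A sketch in C of that selection rule, for a width-bit register with
bit 0 at the high-order end, as on the PDP-10; names invented, and
field sizes assumed less than 64 bits:)

    #include <stdint.h>

    /* ISP-style REG<high:low> where bit 0 is the most significant
       bit of a width-bit register. */
    uint64_t isp_field(uint64_t reg, int width, int high, int low)
    {
        int size  = low - high + 1;     /* <0:2> selects 3 bits */
        int shift = width - 1 - low;    /* distance of <low> from LSB */
        return (reg >> shift) & ((1ull << size) - 1);
    }

    /* M[0]<0:2> on a 36-bit word: isp_field(m0, 36, 0, 2) */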
- Aron