=> coff since it's non-Unix
On 2020-Jan-22 13:42:44 -0500, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>Pretty interesting machine, if you study its instruction set, BTW; with no
>stack, subroutines are 'interesting'.
"no stack" was fairly standard amongst early computers. Note the the IBM
S/360 doesn't have a stack..
The usual approach to subroutines was to use some boilerplate as part of the
"call" or function prologue that stashed a return address in a known
location (storing it in the word before the function entry or patching the
"return" branch were common aproaches). Of course this made recursion
"hard" (re-entrancy typically wasn't an issue) and Fortran and Cobol (at
least of that vintage) normally don't support recursion for that reason.
Moving to COFF
On Tue, Jan 21, 2020 at 12:53 PM Jon Forrest <nobozo(a)gmail.com> wrote:
> As I remember the Z8000 was going to be the great white hope that
> would continue Zilog's success with the Z80 into modern times.
> But, it obviously didn't happen.
A really good question. I will offer my opinion as someone who lived
through the times.
The two contemporary chips of that time were the Intel 8086 and Z8000.
Certainly, between those two, the Zilog chip was a better chip from a
software standpoint. The funny part was that Moto had been pushing the
6809 against those two. The story of the IBM/PC and Moto is infamous.
Remember, the 68K was a skunkworks project and not something they were
talking about.
Why IBM picked the 8086/8088 over the Z8000 I do not know. I'm >>guessing<< total
system cost and maybe vendor preference. The team that developed the PC
had been using the 8085 for another project, so the jump of vendors to
Zilog would have been more difficult (Moto and IBM corporate had been tight
for years, MECL was designed by Moto for IBM for the System 360). I do
know they picked Intel over the 6809, even though they had the X-series
device in Yorktown (just like we had it at Tektronix) and had wanted to use
what would become the 68000.
In the end, other than Forest's scheme, none of them could do VM without a
lot of help. If I had not known about the X-series chip (or had been given
a couple of them), I think Roger and I would have used the Z8000 for
Magnolia. But I know Roger and I liked it better; as did most of our
peeps in Tek Labs at the time. IIRC our thinking was that the Z8000 had an
"almost" good enough instruction set, but since many of the processor's
addressing modes were missing on some/most of the instructions, it made
writing a compiler more difficult (Bill Wulf used to describe this as an
'irregular' instruction set). And even though the address space was large,
you still had to contend with a non-linear segmented address scheme.
So I think once the 68000 came on the scene for real, it had the advantage
of the best instruction set (all instructions worked as expected and were
symmetric) and looked pretty darned near like a PDP-11. The large linear
address was a huge win and even though it was built as a 16-bit chip
internally (i.e. 16-bit barrel shifter and needing 2 ticks for all 32-bit
ops), all the registers were defined as 32-bits wide. I think we saw it
as becoming a full 32-bit device the soonest and with the least issues.
This isn't precisely Unix-related, but I'm wondering about the Third
Ronnie's SDI's embedded systems. Is there anyone alive who knows just
what they were? I'm also wondering, since the "Star Wars" program
seemed to go off the boil at the end of the "Cold War", and the
embedded systems were made with the US taxpayer's dollar, whether or
not they are now public domain - since iirc, US federal law mandates
that anything made with the taxpayer's dollar is owned by the taxpayer
and is thus in the public domain. I'm wondering about starting a
Freedom of Information request to find all of that out, but I don't
quite know how to go about it. (FWVLIW, I'm a fan of outer space
exploration (and commercial use) and a trove of realtime, embedded
source code dealing with satellites would be a treasure indeed. It'd
raise the bar and lower the cost of entry into that market.)
Also, more Unixy, what status at the time were the POSIX realtime
standards, and what precise relation did they have to Unix?
Does anyone know if there is a book or in-depth article about the Sprint/Spartan system, named Safeguard, hardware and software?
There is very little about it available online (see http://www.nuclearabms.info/Computers.html) but it was apparently an amazing programming effort running on UNIVAC.
moving to COFF
On Wed, Jan 22, 2020 at 1:06 PM Pete Wright <pete(a)nomadlogic.org> wrote:
> I also seem to remember him telling me about working on the patriot
> missile system, although i am not certain if i am remembering correctly
> that this was something he did at apollo or at another company in the
> boston area.
The Patriot was/is Raytheon in Andover, MA not Apollo (Chelmsford - two
towns west). Cannot speak for today, but when it was developed the source
code was in Ada. I knew the Chief Scientist/PI for the original Patriot
system, who died of a massive stroke a few years back -- my wife used to
take care of his now 30-40 yo kids when they were small.
During the first Gulf War, he basically did not sleep the whole first
month. As I understand it, Raytheon normally took 3-6 months per SW
release. During the war, they put out an update every couple of days and
Willman once said they were working non-stop on the codebase, dealing with
issues they had never seen or simulated. I gather it was quite
exciting ... sigh. We got him to give a couple of talks at some local
IEEE functions describing the SW engineering process they had used.
Willman was one of the people that got me to respect Ada and the job his
folks had to do. He once told me that, at some point, Raytheon had a
contract supporting the Polaris System for the US Navy. The Navy had long
ago lost the source. They had disassembled and were patching what they
had. Yeech!!!! He also once made another comment to me (in the late
1980s IIRC) that the DoD wanted Ada because they wanted the source to be part
of the specifications, and wanted a language explicit enough that
they could use it for those specs. I have no idea how much that has proven
to be true.
On 1/17/20, Rob Pike <robpike(a)gmail.com> wrote:
> I am convinced that large-scale modern compute centers would be run very
> differently, with fewer or at least lesser problems, if they were treated
> as a single system rather than as a bunch of single-user computers ssh'ed
> into. But history had other ideas.
[moving to COFF since this isn't specific to historic Unix]
For applications (or groups of related applications) that are already
distributed across multiple machines I'd say "cluster as a single
system" definitely makes sense, but I still stand by what I said
earlier about it not being relevant for things like workstations, or
for server applications that are run on a single machine. I think
clustering should be an optional subsystem, rather than something that
is deeply integrated into the core of an OS. With an OS that has
enough extensibility, it should be possible to have an optional
clustering subsystem without making it feel like an afterthought.
That is what I am planning to do in UX/RT, the OS that I am writing.
The "core supervisor" (seL4 microkernel + process server + low-level
system library) will lack any built-in network support and will just
have support for local file servers using microkernel IPC. The network
stack, 9P client filesystem, 9P server, and clustering subsystem will
all be separate regular processes. The 9P server will use regular
read()/write()-like APIs rather than any special hooks (there will be
read()/write()-like APIs that expose the message registers and shared
buffer to make this more efficient), and similarly the 9P client
filesystem will use the normal API for local filesystem servers (which
will also use read()/write() to send messages). The clustering
subsystem will work by intercepting process API calls and forwarding
them to either the local process server or to a remote instance as
appropriate. Since UX/RT will go even further than Plan 9 with its
file-oriented architecture and make process APIs file-oriented, this
will be transparent to applications. Basically, the way that the
file-oriented process API will work is that every process will have a
special "process server connection" file descriptor that carries all
process server API calls over a minimalist form of RPC, and it will be
possible to redirect this to an intermediary at process startup (of
course, this redirection will be inherited by child processes).
Originally, I meant to reply to the Linux-origins thread by pointing to
AST's take on the matter but I failed to find it. So, instead, here is
something to warm the cockles of troff users' hearts:
Q: What typesetting system do you use?
A: All my typesetting is done using troff. I don't have any need to see
what the output will look like. I am quite convinced that troff will
follow my instructions dutifully. If I give it the macro to insert a
second-level heading, it will do that in the correct font and size, with
the correct spacing, adding extra space to align facing pages down to
the pixel if need be. Why should I worry about that? WYSIWYG is a step
backwards. Human labor is used to do that which the computer can do
better. Also, using troff means that the text is in ASCII, and I have a
bunch of shell scripts that operate on files (whole chapters) to do
things like produce a histogram by year of all the references. That
would be much harder and slower if the text were kept in some
manufacturer's proprietary format.
Q: What's wrong with LaTeX?
A: Nothing, but real authors use troff.
From the museum pages via the KG-84 picture to wiki. Reading a bit on
crypto devices, stumbling over M-209 and
"US researcher Dennis Ritchie has described a 1970s collaboration with
James Reeds and Robert Morris on a ciphertext-only attack on the M-209
that could solve messages of at least 2000–2500 letters. Ritchie
relates that, after discussions with the NSA, the authors decided not
to publish it, as they were told the principle was applicable to
machines then still in use by foreign governments."
"The program takes about two minutes to produce a solution on a DEC PDP-11/70."
No info on the program coding.
More info around the story from Ritchie himself.