> From: scj(a)yaccman.com
> I think one fatal mistake that A68 made
One of many, apparently, given Hoare's incredible classic "The Emperor's Old
Clothes":
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
(which should be required reading for every CS student).
Noel
Steve is almost right ... mixing a few memories ... see below.
On Thu, Jun 30, 2016 at 1:17 PM, <scj(a)yaccman.com> wrote:
> My memory was that the 68000 gave the 8086 a pretty good run for its
> money,
Indeed - most of the UNIX workstations folks picked it because of the
linear addressing.
> but when Moto came out with a memory management chip, it had some
> severe flaws that made paging and fault recovery impossible, while the
> equivalent features available on the 8086 line were tolerable.
Different issues...
When the 68000 came out there was a base/limit register chip available,
whose number I forget (Moto offered it to Apple at no additional cost if
they would use it in the Mac, but sadly they did not). This chip was similar
to the 11/70 MMU, as that's what Les and Nick were used to using (they used
an 11/70 running Unix V6 as the development box, and had been since before
what would become the 68000 -- another set of great stories from Les, Nick
and Tom Gunter).
The problem with running a 68000 with VM was not the MMU, it was the
microcode. Nick did not store all of the information needed by the
microcode to recover from a faulted instruction, so if an instruction could
not complete, it could not be restarted without data loss.
> There were
> some bizarre attempts to page with the 68000 (I remember one product that
> had two 68000 chips, one of which was solely to sit on the shoulder of the
> other and remember enough information to respond to faults!).
This was referred to as Forest Baskett mode -- he did an early paper that
described it. I just did a quick look but did not see a copy on my shelf
of Moto stuff. At least two commercial systems were built this way --
Apollo and Masscomp.
The two processors are called the "executor" and the "fixer." The trick is
that when the MMU detects that a fault will occur, the executor is fed "wait
state" cycles telling it that the required memory location is just taking
longer to read or write. The fixer is then given the faulting address
and handles the fault. When the page is finally filled in, on the
Masscomp system the cache is then loaded and the executor is allowed to
complete the memory cycle.
When Nick fixed the microcode for the processor, the updated chip was
rebranded as the 68010. In the case of the Masscomp MC-500 CPU board, we
popped the new chip in as the executor and changed the PALs so the fault
was allowed to occur (creating the MPU board). This allowed the executor
to go do other work while the fixer was dealing with the fault. We
picked up a small amount of performance, but in fact it was not much. I
still have a system on my home network BTW (although I have not turned it
on in a while -- it was working the last time I tried it).
Note that the 68010 still needed an external MMU. Apollo and Masscomp built
their own, although fairly soon after the '10 Moto created a chip that
replaced the base/limit register scheme with one that handled two-level
page tables.
In Masscomp's case, when we did the 5000 series, which was based on the
68020, we used Moto's MMU for the low end (300 series) and our custom MMU
on the larger systems (700 series).
> By the time
> Moto fixed it, the 8086 had taken the field...
Well, sort of. The 68K definitely won the UNIX wars, at least until the
386 brought linear addressing to the Intel line. There were some
alternatives like the Z8000, NS32032, and AT&T's 32100, which was used in
the 3B2/3B5 et al., but the 68K had the lion's share.
Clem
> I'm curious if the name "TPC" was an allusion to the apocryphal telephone
> company of the same name in the 1967 movie, "The President's Analyst"?
Good spotting. Ken T confirms it was from the flick.
doug
> From: Dave Horsfall <dave(a)horsfall.org>
>
> On Wed, 29 Jun 2016, scj(a)yaccman.com wrote:
>
>> Pascal had P-code, and gave C a real run, especially as a teaching
>> language.
>
> Something I picked up at Uni was that Pascal was never designed for
> production use; instead, you debugged your algorithm in it, then ported it
> to your language of choice.
I was an active member of the UCSD Pascal project from 77 to 80, and then was with SofTech MicroSystems for a couple years after that.
An unwritten legacy of the Project was that, according to Professor Ken Bowles, IBM wanted to use UCSD Pascal as the OS for their new x86-based personal computer. The license was never worked out, as the University of California got overly involved in it. As a result, IBM went with their second choice, some small Redmond-based company no one had ever heard of. So it was intended for production use and, at least in IBM's view, good enough for it.
I also knew of UCSD Pascal programs written to do things such as dentist office billing and scheduling and other major ‘real world’ tasks. So it wasn’t just an academic project.
I still have UCSD Pascal capable of running in a simulator, though I’ve not run it in a while. And I have all the source for the OS and interpreter for the Version I.5 and II.0 systems. Being a code pig just means that I need a lot of disk space.
David
Hi.
Can anyone give a definitive date for when Bill Joy's csh first got out
of Berkeley? I suspect it's in the 1976 - 1977 time frame, but I don't
know for sure.
Thanks!
Arnold
The requested URL /pub/Wish/wish_internals.pdf was not found on this
server.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Jun 26, 2016, at 5:59 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> I detested the CSH syntax. In order to beat back the CSH proponents at BRL, I added job control to the SV (and later SVR2) Bourne shell. Then they beat on me for not having command-line editing (a la TCSH), so I added that. This shell went out as /bin/sh in the Doug Gwyn SV-on-BSD release, so every once in a while over the years I trip across a “Ron shell” -- usually from people who were running Mach-derived things that used my shell as /bin/sh.
When porting BSD to new hardware at Celerity (later Floating Point, now part of Sun, oops, Oracle) I got hold of the code that Doug was working on and made jsh (the job-control sh) my shell of choice. Now that Bash does all of those things and almost everything emacs can do, Bash is my shell.
As far as customizing goes, I’ve got a .cshrc that does nothing more than redirect to a launch of bash if available, and /bin/sh if nothing else. And my scripts for logging in are so long and convoluted, due to many years of various hardware and software idiosyncrasies (DG/UX anyone, anyone?), that I’m sure most of it is now useless. And I don’t change it for fear of breaking something.
David
I asked Jeff Korn (David Korn's son), who in turn asked David Korn who
confirmed that 'read -u' comes from ksh and that 'u' stands for 'unit'.
- Dan C.
Yes, indeed. He says:
*I added -u when I added co-processes in the mid '80s. The u stands for
unit. It was common to talk about file descriptor units at that time.*
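For the curious, a small demonstration of the builtin under discussion (this is my own example, run under bash, where `-u N` reads from file descriptor -- "unit" -- N instead of standard input; the option originated in ksh):

```shell
# Open file descriptor 3 ("unit 3") on a here-document,
# then read lines from that unit rather than from stdin.
exec 3<<'EOF'
first line
second line
EOF
read -u 3 a          # reads "first line" into $a
read -u 3 b          # reads "second line" into $b
exec 3<&-            # close the unit when done
echo "$a / $b"       # prints: first line / second line
```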
On Tue, May 31, 2016 at 6:06 AM, Dan Cross <crossd(a)gmail.com> wrote:
> Hey, did your dad do `read -u`?
>
> ---------- Forwarded message ----------
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Date: Tue, May 31, 2016 at 3:27 AM
> Subject: [TUHS] etymology of read -u
> To: tuhs(a)minnie.tuhs.org
>
>
> What's the mnemonic significance, if any, of the u in
> the bash builtin read -u for reading from a specified
> file descriptor? Evidently both f and d had already been
> taken, in analogy to usage in some other commands.
>
> The best I can think of is u as in "tape unit", which
> was common usage back in the days of READ INPUT TAPE 5.
> That would make it the work of an old timer, maybe Dave Korn?