> From: scj(a)yaccman.com
> I think one fatal mistake that A68 made
One of many, apparently, given Hoare's incredible classic "The Emperor's Old
Clothes":
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
(which should be required reading for every CS student).
Noel
Steve is almost right... mixing a few memories... see below.
On Thu, Jun 30, 2016 at 1:17 PM, <scj(a)yaccman.com> wrote:
> My memory was that the 68000 gave the 8086 a pretty good run for its
> money,
Indeed - most of the UNIX workstation folks picked it because of the
linear addressing.
> but when Moto came out with a memory management chip it had some
> severe flaws that made paging and fault recovery impossible, while the
> equivalent features available on the 8086 line were tolerable.
Different issues...
When the 68000 came out there was a base/limit register chip available,
whose number I forget (Moto offered it to Apple at no additional cost if
they would use it in the Mac, but sadly they did not). This chip was
similar to the 11/70 MMU, as that's what Les and Nick were used to using
(they used an 11/70 running Unix V6 as the development box, and had been
doing so before they built what would become the 68000 -- another set of
great stories from Les, Nick and Tom Gunter).
The problem with running a 68000 with VM was not the MMU, it was the
microcode. Nick did not store all of the information needed by the
microcode to recover from a faulted instruction, so if an instruction
could not complete, it could not be restarted without data loss.
> There were some bizarre attempts to page with the 68000 (I remember
> one product that had two 68000 chips, one of which was solely to sit
> on the shoulder of the other and remember enough information to
> respond to faults!).
This was referred to as Forest Baskett mode -- he did an early paper that
described it. I just did a quick look but did not see a copy on my shelf
of Moto stuff. At least two commercial systems were built this way --
Apollo and Masscomp.
The two processors are called the "executor" and the "fixer." The trick is
that when the MMU detects that a fault will occur, the executor is sent
"wait state" cycles telling it that the required memory location is just
taking longer to read or write. The fixer is then given the faulting
address and handles the fault. When the page is finally filled, on the
Masscomp system the cache is then loaded and the executor is allowed to
complete the memory cycle.
When Nick fixed the microcode for the processor, the updated chip was
rebranded as the 68010. In the case of the Masscomp MC-500 CPU board, we
popped the new chip in as the executor and changed the PALs so the fault
was allowed to occur (creating the MPU board). This allowed the executor
to go do other work while the fixer was dealing with the fault. We
picked up a small amount of performance, but in fact it was not much.
still have a system on my home network BTW (although I have not turned it
on in a while -- it was working last time I tried it).
Note that the 68010 still needed an external MMU. Apollo and Masscomp built
their own, although fairly soon after they did the '10, Moto created a chip
to replace the base/limit register scheme with one that handled two-level
page tables.
In Masscomp's case, when we did the 5000 series, which was based on the
68020, we used Moto's MMU for the low end (300 series) and our custom MMU
on the larger systems (700 series).
> By the time Moto fixed it, the 8086 had taken the field...
Well, sort of. The 68K definitely won the UNIX wars, at least until the
386 and linear addressing showed up in the Intel line. There were some
alternatives like the Z8000 and NS32032, and AT&T did the 32100, which was
used in the 3B2/3B5 et al., but the 68K had the lion's share.
Clem
> I'm curious if the name "TPC" was an allusion to the apocryphal telephone
> company of the same name in the 1967 movie, "The President's Analyst"?
Good spotting. Ken T confirms it was from the flick.
doug
> From: Dave Horsfall <dave(a)horsfall.org>
>
> On Wed, 29 Jun 2016, scj(a)yaccman.com wrote:
>
>> Pascal had P-code, and gave C a real run, especially as a teaching
>> language.
>
> Something I picked up at Uni was that Pascal was never designed for
> production use; instead, you debugged your algorithm in it, then ported
> it to your language of choice.
I was an active member of the UCSD Pascal project from '77 to '80, and then was with SofTech MicroSystems for a couple of years after that.
An unwritten legacy of the Project was that, according to Professor Ken Bowles, IBM wanted to use UCSD Pascal as the OS for their new x86-based personal computer. The license was never worked out because the University of California got overly involved in it. As a result, IBM went with their second choice, some small Redmond-based company no one had ever heard of. So it was intended for production use and, at least in IBM's view, good enough for it.
I also knew of UCSD Pascal programs written to do things such as dentist office billing and scheduling and other major ‘real world’ tasks. So it wasn’t just an academic project.
I still have UCSD Pascal capable of running in a simulator, though I’ve not run it in a while. And I have all the source for the OS and interpreter for the Version I.5 and II.0 systems. Being a code pig just means that I need a lot of disk space.
David
Hi.
Can anyone give a definitive date for when Bill Joy's csh first got out
of Berkeley? I suspect it's in the 1976 - 1977 time frame, but I don't
know for sure.
Thanks!
Arnold
The requested URL /pub/Wish/wish_internals.pdf was not found on this
server.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Jun 26, 2016, at 5:59 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> I detested the csh syntax. In order to beat back the csh proponents at BRL, I added job control to the SV (and later SVR2) Bourne shell. Then they beat on me for not having command-line editing (a la tcsh), so I added that. This shell went out as /bin/sh in the Doug Gwyn SV-on-BSD release, so every once in a while over the years I trip across a “Ron shell”, usually from people who were running Mach-derived things that ran my shell as /bin/sh.
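For anyone who never used jsh or its descendants, job control in a
Bourne-style shell looks roughly like this transcript (a sketch; job
numbers, PIDs, and message formats vary by shell and system):

    $ sleep 100 &            # start a background job
    [1] 1234
    $ jobs                   # list current jobs
    [1]+  Running                 sleep 100 &
    $ fg %1                  # bring job 1 into the foreground
    sleep 100
    ^Z                       # suspend it with ctrl-Z
    [1]+  Stopped                 sleep 100
    $ bg %1                  # let it carry on in the background
    [1]+ sleep 100 &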
When porting BSD to new hardware at Celerity (later Floating Point, now part of Sun, oops, Oracle), I got hold of the code that Doug was working on and made jsh (the job-control sh) my shell of choice. Now that Bash does all of those things and almost everything emacs can do, Bash is my shell.
As far as customizing goes, I've got a .cshrc that does nothing more than redirect to a launch of bash if available, and /bin/sh if nothing else. And my login scripts are so long and convoluted, due to many years of various hardware and software idiosyncrasies (DG/UX anyone, anyone?), that I'm sure most of them are now useless. And I don't change them for fear of breaking something.
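A minimal sketch of such a redirecting .cshrc (hypothetical paths and
logic, not the actual file) might look like:

    # Hypothetical ~/.cshrc: hand interactive sessions off to bash if
    # one can be found, otherwise fall back to plain /bin/sh.
    if ( $?prompt ) then
        if ( -x /usr/local/bin/bash ) then
            exec /usr/local/bin/bash -l
        else if ( -x /bin/bash ) then
            exec /bin/bash -l
        else
            exec /bin/sh
        endif
    endif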
David
I asked Jeff Korn (David Korn's son), who in turn asked David Korn, who
confirmed that 'read -u' comes from ksh and that the 'u' stands for 'unit'.
- Dan C.
Yes, indeed. He says:

    I added -u when I added co-processes in the mid '80s. The u stands
    for unit. It was common to talk about file descriptor units at
    that time.
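To make the semantics concrete, here is a small bash sketch (file and
variable names are arbitrary examples): read -u N reads a line from
file descriptor N, and the same flag is the natural way to read from a
coprocess:

    #!/bin/bash
    # read -u N reads from file descriptor ("unit") N instead of stdin.
    exec 3< /etc/passwd          # open a file on descriptor 3
    read -u 3 first              # read its first line from fd 3
    echo "first line: $first"
    exec 3<&-                    # close fd 3

    # With a coprocess, read -u is how you collect the reply.
    coproc NUM { cat -n; }       # fds appear in ${NUM[0]} and ${NUM[1]}
    echo hello >&"${NUM[1]}"     # write a line to the coprocess
    read -u "${NUM[0]}" reply    # read its numbered reply back via -u
    echo "$reply"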
On Tue, May 31, 2016 at 6:06 AM, Dan Cross <crossd(a)gmail.com> wrote:
> Hey, did your dad do `read -u`?
>
> ---------- Forwarded message ----------
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Date: Tue, May 31, 2016 at 3:27 AM
> Subject: [TUHS] etymology of read -u
> To: tuhs(a)minnie.tuhs.org
>
>
> What's the mnemonic significance, if any, of the u in
> the bash builtin read -u for reading from a specified
> file descriptor? Evidently both f and d had already been
> taken in analogy to usage in some other commands.
>
> The best I can think of is u as in "tape unit", which
> was common usage back in the days of READ INPUT TAPE 5.
> That would make it the work of an old timer, maybe Dave Korn?
> Now we are hoping to get the Living Computer Museum people to bring it up
on their real PDP-7.
Truly a fantastic prospect! The only Unix the museum has running is
on a 3B2--a curious byway perhaps, but of little historic interest.
The PDP-7 version would be a tremendous coup.
doug
On Wed, May 04, 2016 at 12:44:15AM +0300, Diomidis Spinellis wrote:
> This would have found any code from the PDP-7 Unix that appeared in the
> First Edition. (I was hoping that some PDP-7 instruction sequences might be
> the same in PDP-11.)
> Unsurprisingly, nothing came out.
No, the instruction set is completely different. The PDP-11 ISA is a paradise
compared to the spartan PDP-7 ISA.
Cheers, Warren
All, a status update on the PDP-7 Unix restoration project at
https://github.com/DoctorWkt/pdp7-unix
The system is pretty much complete now. We have as much of the original
code working as we can. We have rewritten things like the shell and some
other utilities (ls etc.). The ed editor and the native assembler both
work. We have also written a user-mode PDP-7 simulator to test things
and an assembler to make building things faster.
The system boots up under SimH with a filesystem and you can see what things
were like back in 1970.
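If you want to try it yourself, the general shape of a build-and-boot
session is sketched below; the build target and the name of the SimH
boot script are assumptions, so check the repository's README for the
real ones:

    # Hypothetical session; exact targets/file names per the repo's README.
    git clone https://github.com/DoctorWkt/pdp7-unix
    cd pdp7-unix
    make                     # build the tools and disk image (assumption)
    pdp7 boot.simh           # run SimH's PDP-7 simulator (script name assumed)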
One big missing utility is roff. As of today, I've written a compiler that
inputs a vaguely C-like language and outputs PDP-7 code. Using this, I've
compiled a minimalist roff which is enough to format man pages. This is
a separate project here: https://github.com/DoctorWkt/h-compiler
Now we are hoping to get the Living Computer Museum people to bring it up
on their real PDP-7. Unfortunately, it doesn't have a disk drive. The
expected solution is to build a disk simulator with an FPGA and SD card.
There is no time frame for this, but it is in the works.
Thanks go to Phil Budne and Robert Swierczek for all their hard work
in building and testing things, and also to Norman Wilson for supplying
scans of the original documents.
Cheers, Warren
Hello everyone!
I have been lurking on this list for a long time; this is my first post.
I read with a lot of interest an old Usenix paper by the late Richard
Stevens on a system called "Portals":
<https://www.usenix.org/legacy/publications/library/proceedings/neworl/steve…>
It explores a lot of ideas that found their way into Plan 9, like a
filesystem interface for sockets, etc. I wonder whether any of this
survived in any existing, so-called "modern" Unix. I have always felt
the need for something like this in Unix.
Cheers
--
Ramakrishnan
On 2016-04-02 04:00, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Saturday, 2 April 2016 at 1:06:58 +1100, Dave Horsfall wrote:
>> On Mon, 28 Mar 2016, scj(a)yaccman.com wrote:
>>
>>> ... and I once heard an old-timer growl at a young programmer "I've
>>> written boot loaders that were shorter than your variable names!"
>>
>> Ah, the 512-byte boot blocks... We got pretty inventive in those days
>> (and this was before secondary loaders!) with line editing etc.
>
> I was thinking more of the RIM loader on the PDP-8. 16 words or 24
> bytes.
Bah! The RK8E bootloader for OS/8: 2 words... :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Hi TUHSers,
For a long time now, I have had a theory that I've never seen
substantiated (or disproved) in print. After Steve Johnson's recollection
of how hard it was to type on the Teletype terminals, I'm going to throw
this thought out for consideration.
One of Unix's signature hallmarks is its terseness: short command names
like mv, ln, cp, cc, ed; short options (a dash and a single letter),
programs with just a few, if any, options at all, and short path names:
"usr" instead of "user", "src" instead of "source" and so on.
I have long theorized that the reason for the short names is that since
typing was so physically demanding, it was natural to make the command
names (and all the rest) be short and easier to type. I don't know if
this was a conscious decision, but I suspect it more likely to have been
an unconscious / natural one.
Today, I started wondering if this wasn't at least part of the reason
for commands being simple, with few if any options. After all, if I
have to type 'man foo' to remember how foo works, I don't want to wait
for 85 pages of printout (at 110 baud, i.e. ten characters per second,
that's hours of printing!) to finally see what option -z does after
wading through the descriptions of options -a through -y.
I certainly think there's some truth to this idea; longer command
names, and especially GNU-style long options, didn't appear until the
video terminal era, when terminals were faster (9600 or 19200 baud!) and
much less physically demanding to use. How much correlation there is,
I don't claim to know, but I think there's definitely some.
For the record, I did use the paper teletypes some, mainly at a university
where I took summer classes, connected to a Univac system. I remember
how hard it was to use them. You could almost set your watch by when
the system would crash around noon, as the load went up. :-) On Unix I
only used VDTs, except when I was at a console DECwriter.
Anyway, that's my thought. :-) Comments and/or insights, especially from
those who were there, would be welcome.
Thanks,
Arnold
The Unix History repository on GitHub [1] aims to provide the evolution
of Unix from the 1970s until today under Git revision control. Through
a few changes recently made [2] it's now possible for individual
contributors to have their GitHub profile linked to their early Unix
contributions. Ken Thompson graciously made this move last week
following a personal email invitation. I think it would be really cool
if more followed. This would send a powerful message of continuity and
tradition in computing to youngsters joining GitHub today.
What you need to do is the following.
- Create a GitHub profile (if you haven't already got one)
- Click on https://github.com/settings/emails
- Add the email address(es) associated with your early Unix commits
(e.g. foo(a)research.uucp or bar(a)ucbvax.berkeley.edu). You can easily find
an author's commits and email addresses recorded in the repository
through the web search form http://www.spinellis.gr/cgi-bin/namegrep.pl
or locally with git, as sketched after this list.
- GitHub will tell you that a verification email has been sent to your
(probably defunct) email address. Don't worry. Your account will be
linked to the address even without the verification step.
- Adding your photograph to your profile will increase the vividness of
GitHub's revision listings.
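For searching locally rather than through the web form, a plain git
sketch (substitute the name you are looking for; 'thompson' is just an
example) is:

    # Clone the history repository and list every author name/address
    # recorded in its commits, filtered by a name of interest.
    git clone https://github.com/dspinellis/unix-history-repo
    cd unix-history-repo
    git log --all --format='%an <%ae>' | sort -u | grep -i thompson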
If you're in contact with Unix contributors who are not on this list,
please forward them this message. Also, if your name isn't properly
associated with the repository's commits, drop me an email message (or a
GitHub pull request for the corresponding file [3]), and I'll add it.
[1] https://github.com/dspinellis/unix-history-repo
[2] The modifications involved the change of UUCP addresses to use the
.uucp pseudo-domain rather than a ! path and the listing of co-authors
within the commit message.
[3]
https://github.com/dspinellis/unix-history-make/tree/master/src/author-path
Diomidis - http://www.spinellis.gr
Just a friendly word from the guy who runs the TUHS list.
Historical details, with verifiable facts: OK.
Questions and replies about old systems: OK.
Semi-off-topic threads: mostly OK, they usually peter out.
Comments about systems (good or bad): fine.
Comments about individuals and their motivations/actions
(especially if the comments are pejorative): not good at all.
If you think a thread is going to devolve into a slanging match
between people, then a) don't fuel the flames by posting replies,
b) walk away and calm down, c) let me know.
We've had a few threads recently which are coming close to the
edge, and I hate acting as a censor/wet blanket, so please
avoid saying things that will raise other people's hackles.
Back to your regularly scheduled nostalgia....
Warren
Marc Rochkind:
BSD is the new kid on the block. I don't think it came along until 1977 or
so. Research UNIX I don't think picked up SCCS ever. SCCS first appeared in
the PWB releases, if you don't count the earlier version in SNOBOL4 for the
IBM mainframes.
=====
Correct. We never needed no stinkin' revision control in Research.
More fairly, early systems like SCCS were so cumbersome that a
community that was fairly small, in which everyone talked to
everyone, and in which there was no glaring need, wasn't willing
to adopt them.
I remember trying SCCS for a few small personal projects back in
1979 or so (well before I moved to New Jersey), finding it just
too clunky for the benefits it gave me, and giving up. Much later,
I found RCS just as messy. One thing that really bugged me was
those systems' inherent belief that you rarely want to keep a
checked-out copy of something except while you're working on it.
Another, harder to work around, is that in any nontrivial project
there are often stages when I want to make changes of scope broader
than a single file: factor common stuff out into a new file, merge
things into a single file, rename files, etc.
CVS was a big step forward, but not enough. Subversion was the first
revision-control system that didn't feel like a huge burden to me.
None of which is to say that SCCS and RCS were useless; they were
important pioneers, and for the big projects that originally
spawned them I'm sure they were indispensable. But I can't imagine
Ken or Dennis putting up with them for very long, and I'm glad I
never had to.
Norman Wilson
Toronto ON
> These are USED cards. That's OK. No duty!
Quite the opposite happened to me in Britain. I wanted to
import an early computer-generated film to show. When I
inquired whether there would be any customs implications,
I was asked whether the film was exposed or not. Britain
charged duty only on exposed film.
With apologies for straying ever farther from Unix,
Doug
> From: Dave Horsfall
> That makes sense, and someone forgot to document it...
Or perhaps it was added precisely to get rid of the window, and then someone
discovered that it could be used to freeze the system, so they decided they'd
better not document it?
If the system had MOS memory, and you had to power cycle the machine to get it
out of this state, there wouldn't be any evidence left of who did the deed
(unless the system was writing extensive audit trails to disk), so it would
be a great 'system assassin' (aka vandal) tool.
Noel
PS: I guess this is more PDP-11ish than UNIXish - apologies for the off-topic post!
On 21 March 2016 at 17:43, Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
> On Tuesday, 22 March 2016 at 1:11:07 +1100, Dave Horsfall wrote:
>>
>> Walking down the corridors of Comp Sci, a student in front of me
>> dropped his entire deck of approx 2000 cards, all over the floor...
>> I have no idea whether he got them sorted, but I sure as hell used
>> rubber bands after that!
>
> But that's what the sequence numbers in columns 73 to 80 are for!
I did that religiously, even with my small PL/C runs -- PL/C runs were
free. One day, they decided to extend the code area to the entire
card.... and so I learned another feature of the card punch.
N.
Thanks for some additional information.
On 2016-03-28 18:16, Milo Velimirović wrote:
>
>> On Mar 28, 2016, at 9:44 AM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>
>> On 2016-03-28 16:18, Noel Chiappa wrote:
>>> > From: Dave Horsfall <dave(a)horsfall.org>
>>>
>
> [ Wait & RK discussion snipped.]
>
>>
>>
>>> > I know that Kevin Dawson (I think) tried it on my /40 as well
>>>
>>> The 11/40 does not have the SPL instruction; see the '75-'76 PDP-11 Processor
>>> Handbook, pg. 4-5. (Again, sorry, just want to be accurate.)
>>
>> This is also a pretty important point. But one which also raises the question of how the splxxx() functions in Unix worked back then. Or did Unix not use this pattern and these functions back when the 11/40 was relevant?
>
> These functions existed in V6 and can be found in the file, m40.s, that was assembled with the rest of the kernel to generate a unix that would run on a /40 class machine.
Aha. Great. Thanks. Yes, BIS and BIC on the PSW obviously work, but
they would definitely not block interrupts for the next instruction. So
at least in that case, if an interrupt slipped in between lowering the
priority and the WAIT, the wakeup would already have happened, and the
WAIT could leave the kernel sitting around waiting for the next
interrupt. I don't really think DEC intended WAIT to be used in the way
Unix uses it, and it doesn't really have the properties that would be
ideal for Unix. This is also somewhat indicated by the fact that DEC did
not use WAIT this way themselves.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol