Greg Lehey:
And why? Yes, the 8088 was a reasonably fast processor, fast enough that
they could slow it down a little and use the same crystal to generate
the clock for both the CPU and the USART. But the
base system had only 16 kB of memory, a little more than half the
size of the 6th Edition kernel. Even without the issue of disks
(which could potentially have been worked around) it really wasn't big
enough for a multiprogramming OS.
=====
Those who remember the earliest UNIX (even if few of us have
used it) might disagree with that. Neither the PDP-7 nor the
PDP-11/20 on which UNIX was born had memory management: a
context switch was a swap. That would have been pretty slow
on floppies, so perhaps it wouldn't have been saleable, but
it was certainly possible.
In fact Heinz Lycklama revived the idea in the V6 era to
create LSX, a UNIX for the early LSI-11 which had no
memory management and a single ca. 300kiB floppy drive.
It had more memory than the 8088 system, though: 20kiW,
i.e. 40kiB. Even so, Lycklama did quite a bit of work to
squeeze the kernel down, reduce the number of processes
and context switches, and so on.
Here's a link to one of his papers on the system:
https://www.computer.org/csdl/proceedings/afips/1977/5085/00/50850237.pdf
I suspect it would have been possible to make a XENIX
that would have worked on that hardware. Whether it
would have worked well enough to sell is another question.
Norman Wilson
Toronto ON
All, I've been working with Peter Salus (author of A Quarter Century of Unix)
to get the book published as an e-book. However, the current publishers have
been very incommunicative.
Given that the potential readership may be small, Peter has suggested this:
> I think (a) just putting the bits somewhere where they could
> be sucked up would be fine; and (b) let folks make donations
> to TUHS as payment.
However, as with all the Unix stuff, I'm still concerned about copyright
issues. So this is what I'm going to do. You will find a collection of
bits at this URL: http://minnie.tuhs.org/Z3/QCU/qcu.epub
In 24 hours I'll remove the link. After that, you can "do a Lions" on
the bits. I did the scanning, OCR'ing and proofing, so if you spot any
mistakes, let me know.
I'm not really interested in any payment for either the book or TUHS
itself. However, if you do feel generous, my e-mail address is also
my PayPal account.
Cheers, Warren
Thanks, Warren, for the (brief) posting of the ePub file for Peter
Salus' fine book, A Quarter Century of Unix.
I have a printed copy of that book on my shelf. Here is a list of the
errata that I found when I read it in 2004; they may also be present in
the ePub version:
p. 23, line 7:
deveoloped -> developed
p. 111, line 5:
Dave Nowitz we'd do -> Dave Nowitz said we'd do
p. 142, line 7:
collaboaration -> collaboration
p. 144, line -4 (i.e., 4 from bottom):
reimplemeted -> reimplemented
p. 160, line 10:
the the only -> the only
p. 196, line 17:
develope JUNET -> develop JUNET
p. 221, running header:
Berkley -> Berkeley
p. 222, line 11:
Mellon Institue -> Mellon Institute
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Since a few people here are Bell Labs veterans, I'd like to ask if someone
can explain a bit about that place. Sometimes I hear about work done
there that I'd like to follow up on, but I have no idea where to start.
For starters, I assume that everybody had to write up periodic reports
on their work. Was that stuff archived and is it still accessible
someplace? What about software that got to the point that it actually
had users beyond the developers? I know that major commercial projects
like UNIX are tied up in licensing limbo, but does that apply to
absolutely everything made there?
There is the AT&T Archives and History Center in Warren, NJ. Is it worth
asking if they have old tech reports?
Steve Bourne tried hard to interest us in A68, and I personally liked some
features of it (especially the automatic type morphing of arguments into
the expected types). But the documentation was a huge barrier--all the
familiar ideas were given completely new (and unintuitive) names, making
it very difficult to get into.
I may be biased in my view, but I think one fatal mistake that A68 made
was that it had no scheme for porting the language to the plethora of
computers and systems around at that time. (The Bliss language from CMU
had a similar problem, requiring a bigger computer to compile for the
PDP-11). Pascal had P-code, and gave C a real run, especially as a
teaching language. C had PCC.
Nowadays, newer languages like Python just piggyback on C or C++...
On a recent visit to the Living Computer Museum in
Seattle I got to play with Unix on a 3B2--something
I never did at Bell Labs. Maybe next time I
go they'll offer a real nostalgia trip on
the PDP-7, thanks to Warren's efforts.
doug
Hello,
I want to complete my local ML archive (I deleted a few emails and I
wasn't subscribed before 2001 or so I think). After downloading the
archives and hitting them a few times to get somewhat importable mboxes,
I ended with 8699 emails in a maildir (in theory that should be a
superset of the 5027 emails in my regular TUHS maildir. I will merge
them next.). Two dozen mails are obviously defective (they can maybe be
repaired manually) and some more might be defective (that needs deeper
checking). So, does anybody have more ;)?
Regards
hmw
> AFAIK the later ESS switches include a 3B machine but it only handles
> some administrative functions, with most of the actual call
> processing being performed in dedicated hardware.
That is correct. The 3B2 was an administrative appendage.
Though Unix itself didn't get into switches, Unix people did
have a significant influence on the OS architecture for
ESS 5. Bob Morris, having observed some of the tribulations of
that project, suggested that CS Research build a demonstration
switch. Lee McMahon, Ken Thompson, and Joe Condon spearheaded
the effort and enlisted Gerard Holzmann's help in verification
(ironically, the only application of Gerard's methods to
software made in his own department). They called the system,
which was very different from Unix, TPC--The Phone Company. It
actually controlled many of our phones for some years. The
cleanliness of McMahon's architecture, which ran on a PDP-11,
caught the attention of Indian Hill and spurred a major
reworking of the ESS design.
Doug
All, I've been asked by Wendell to forward this query about C
interpreters to the mailing list for him.
----- Forwarded message from Wendell P <wendellp(a)operamail.com> -----
I have a project at softwarepreservation.org to collect work done,
mostly in the 1970s and 80s, on C interpreters.
http://www.softwarepreservation.org/projects/interactive_c
One thing I'm trying to track down is Cin, the C interpreter in UNIX
v10. I found the man page online and the tutorial in v2 of the Saunders
book, but that's it. Can anyone help me to find files or docs?
BTW, if you have anything related to the other commercial systems
listed, I'd like to hear. I've found that in nearly all cases, the
original developers did not keep the files or papers.
Cheers,
Wendell
----- End forwarded message -----
All, I was invited to give a talk at a symposium in Paris
on the early years of Unix. Slides and recording at:
http://minnie.tuhs.org/Z3/Hapop3/
Feel free to point out the inaccuracies :-)
For example, I thought Unix was used at some point
as the OS for some of the ESS switches in AT&T, but
now I think I was mistaken.
That's a temp URL, it will move somewhere else
eventually.
Cheers, Warren
On 2016-07-01 15:43, William Cheswick <ches(a)cheswick.com> wrote:
>
>>> ...why didn't they have a more capable kernel than MS-DOS?
>> I don't think they cared, or felt it was needed at the time (I disagreed then and still do).
>
> MS-DOS was a better choice at the time than Unix. It had to fit on floppies, and was very simple.
>
> “Unix is a system administrations nightmare” — dmr
>
> Actually, MS-DOS was a runtime system, not an operating system, despite the last two letters of its name.
> This is a term of art lost to antiquity.
Strangely enough, the definition I have of a runtime system is very
different from yours. Languages had/have runtime systems. Some
environments had runtime systems, but they have a somewhat different
scope than what MS-DOS is.
I'd call MS-DOS a program loader and a file system.
> Run time systems offered a minimum of features: a loader, a file system, a crappy, built-in shell,
> I/O for keyboards, tape, screens, crude memory management, etc. No multiuser, no network stacks, no separate processes (mostly). DEC had several (RT11, RSTS, RSX) and the line is perhaps a little fuzzy: they were getting operating-ish.
Uh? RSX and RSTS/E are full-fledged operating systems with multiuser
protection, time sharing, virtual memory, and all the bells and whistles you
could ever ask for... Including networking... DECnet was born on RSX.
And RSTS/E offered several runtime systems, it had an RT-11 runtime
system, an RSX runtime system, you also had a TECO runtime system, and
the BASIC+ runtime system, and you could have others. You could
definitely have had a Unix runtime system in RSTS/E as well, but I don't
know if anyone ever wrote one.
In RSX, compilers/languages have runtime systems, which you linked with
your object files for that language, in order to get a complete runnable
binary.
Johnny
Ori Idan <ori(a)helicontech.co.il> asks today:
>> Pascal compiler written in Pascal? How can I compile the compiler if I
>> don't yet have a Pascal compiler? :-)
You compile the code by hand into assembly language for the CDC
6400/6600 machines, and bootstrap that way: see
Urs Ammann
On Code Generation in a PASCAL Compiler
http://dx.doi.org/10.1002/spe.4380070311
Niklaus Wirth
The Design of a PASCAL Compiler
http://dx.doi.org/10.1002/spe.4380010403
It has been a long time since I read those articles in the journal
Software --- Practice and Experience, but my recollection is that they
wrote the compiler in a minimal subset of Pascal needed to do the job,
just to ease the hand-translation process.
On 2016-06-30 21:22, Clem Cole <clemc(a)ccc.com> wrote:
>
>> but when Moto came out with a memory management chip it had some
>> severe flaws that made paging and fault recovery impossible, while the
>> equivalent features available on the 8086 line were tolerable.
> Different issues...
>
> When the 68000 came out there was a base/limit register chip available,
> whose number I forget (Moto offered it to Apple at no additional cost if they
> would use it in the Mac, but sadly they did not). This chip was similar
> to the 11/70 MMU, as that's what Les and Nick were used to using (they used
> an 11/70 running Unix V6 as the development box, and had been since before
> what would become the 68000 -- another set of great stories from Les, Nick
> and Tom Gunter).
Clem, I think pretty much all you are writing is correct, except that I
don't get your reference to the PDP-11 MMU.
The MMU of the PDP-11 is not some base/limit register thing. It's a
paged memory, with a flat address space. Admittedly, you only have 8
pages, but I think it's just plain incorrect to call it something else.
(Even though no one I know of ever wrote a demand-paged memory system for
a PDP-11, there is no technical reason preventing you from doing it.
It's just that with 8 pages, and often more physical memory than virtual, it
didn't give much of any benefit.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Ronald Natalie <ron(a)ronnatalie.com>
>
>>
>> On the other hand, there was
>> no excuse for a Pascal compiler to be either large, buggy, or slow, even before Turbo Pascal.
>>
> I remember the Pascal compiler on my Apple II used to have to use some of the video memory while it was running.
UCSD Pascal, the base of Apple Pascal, would grab the video memory as space for the heap when compiling. When the Terak system was in use at UCSD, the video memory would display on the screen, so you could watch the heap grow down the screen while the stack crawled up when compiling. If the two ever met in the middle, you had a crash. Exciting times.
Terak systems were 11/03 based, IIRC. (http://www.threedee.com/jcm/terak/)
David
> On Jun 30, 2016, at 10:27 AM, schily(a)schily.net (Joerg Schilling)
> Marc Rochkind <rochkind(a)basepath.com> wrote:
>
>> Bill Cheswick: "What a different world it would be if IBM had selected the
>> M68000 and UCSD Pascal. Both seemed
>> to me to be better choices at the time."
>>
>> Not for those of us trying to write serious software. The IBM PC came out
>> in August, 1981, and I left Bell Labs to write software for it full time
>> about 5 months later. At the time, it seemed to me to represent the future,
>> and that turned out to be a correct guess.
>
> I worked on a "Microengine" in 1979.
>
> The Microengine was a micro PDP-11 with a modified microcode ROM that directly
> supported executing p-code.
>
> The machine was running a UCSD pascal based OS and was really fast and powerful.
>
> Jörg
Very likely one of the Western Digital products. They were the first to take UCSD Pascal and burn the p-code interpreter into ROM. Made for a blindingly fast system. I worked with the folks who did the port and made it all play together. Fun days.
I worked on the OS and various utility programs those days. Nothing to do with the interpreters.
When the 68000 came out SofTech did a port of the system to it. Worked very well; you could take code compiled on the 6502 system write it to a floppy, take the floppy to the 68k system and just execute the binary. It worked amazingly well.
David
> From: scj(a)yaccman.com
> I think one fatal mistake that A68 made
One of many, apparently, given Hoare's incredible classic "The Emperor's Old
Clothes":
http://zoo.cs.yale.edu/classes/cs422/2014/bib/hoare81emperor.pdf
(which should be required reading for every CS student).
Noel
Steve is almost right... mixing a few memories... see below.
On Thu, Jun 30, 2016 at 1:17 PM, <scj(a)yaccman.com> wrote:
> My memory was that the 68000 gave the 8086 a pretty good run for its
> money,
Indeed - most of the UNIX workstations folks picked it because of the
linear addressing.
> but when Moto came out with a memory management chip it had some
> severe flaws that made paging and fault recovery impossible, while the
> equivalent features available on the 8086 line were tolerable.
Different issues...
When the 68000 came out there was a base/limit register chip available,
whose number I forget (Moto offered it to Apple at no additional cost if they
would use it in the Mac, but sadly they did not). This chip was similar
to the 11/70 MMU, as that's what Les and Nick were used to using (they used
an 11/70 running Unix V6 as the development box, and had been since before
what would become the 68000 -- another set of great stories from Les, Nick
and Tom Gunter).
The problem with running a 68000 with VM was not the MMU, it was the
microcode. Nick did not store all of the information needed by the
microcode to recover from a faulted instruction, so if an instruction could
not complete, it could not be restarted without data loss.
> There were
>
> some bizarre attempts to page with the 68000 (I remember one product that
> had two 68000 chips, one of which was solely to sit on the shoulder of the
> other and remember enough information to respond to faults!).
This was referred to as Forest Baskett mode -- he did an early paper that
described it. I just did a quick look but did not see a copy on my shelf
of Moto stuff. At least two commercial systems were built this way --
Apollo and Masscomp.
The two processors are called the "executor" and the "fixer." The trick is
that when the MMU detects a fault will occur, the executor is sent "wait
state" cycles telling it that the required memory location is just taking
longer to read or write. The fixer is then given the faulting address
and handles the fault. When the page is finally filled in, on the
Masscomp system the cache is loaded and the executor is allowed to
complete the memory cycle.
When Nick fixed the microcode for the processor, the updated chip was
rebranded as the 68010. In the case of the Masscomp MC-500 CPU board, we
popped the new chip in as the executor and changed the PALs so the fault
was allowed to occur (creating the MPU board). This allowed the executor
to go do other work while the fixer was dealing with the fault. We
picked up a small amount of performance, but in fact it was not much. I
still have a system on my home network BTW (although I have not turned it
on in a while -- it was working last time I tried it).
Note the 68010 still needed an external MMU. Apollo and Masscomp built
their own, although fairly soon after they did the '10, Moto created a chip
to replace the base/limit register scheme with one that handled two-level
pages.
In Masscomp's case, when we did the 5000 series, which was based on the
68020, we used Moto's MMU for the low end (300 series) and our custom MMU
on the larger systems (700 series).
> By the time
>
> Moto fixed it, the 8086 had taken the field...
>
Well, sort of. The 68K definitely won the UNIX wars, at least until the
386 and linear addressing showed up in the Intel line. There were
some alternatives like the Z8000, NS32032, and AT&T's 32100, which was
used in the 3B2/3B5 et al., but the 68K had the lion's share.
Clem
> I'm curious if the name "TPC" was an allusion to the apocryphal telephone
> company of the same name in the 1967 movie, "The President's Analyst"?
Good spotting. Ken T confirms it was from the flick.
doug
> From: Dave Horsfall <dave(a)horsfall.org>
>
> On Wed, 29 Jun 2016, scj(a)yaccman.com wrote:
>
>> Pascal had P-code, and gave C a real run, especially as a teaching
>> language.
>
> Something I picked up at Uni was that Pascal was never designed for
> production use; instead, you debugged your algorithm in it, then ported it
> to your language of choice.
I was an active member of the UCSD Pascal project from 77 to 80, and then was with SofTech MicroSystems for a couple years after that.
An unwritten legacy of the Project was that, according to Professor Ken Bowles, IBM wanted to use UCSD Pascal as the OS for their new x86-based personal computer. The license was never worked out, as the University of California got overly involved in it. As a result, IBM went with their second choice, some small Redmond-based company no one had ever heard of. So it was intended and, at least IBM thought, good enough for production use.
I also knew of UCSD Pascal programs written to do things such as dentist office billing and scheduling and other major ‘real world’ tasks. So it wasn’t just an academic project.
I still have UCSD Pascal capable of running in a simulator, though I’ve not run it in a while. And I have all the source for the OS and interpreter for the Version I.5 and II.0 systems. Being a code pig just means that I need a lot of disk space.
David
Hi.
Can anyone give a definitive date for when Bill Joy's csh first got out
of Berkeley? I suspect it's in the 1976 - 1977 time frame, but I don't
know for sure.
Thanks!
Arnold
The requested URL /pub/Wish/wish_internals.pdf was not found on this
server.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> On Jun 26, 2016, at 5:59 PM, tuhs-request(a)minnie.tuhs.org wrote:
>
> I detested the CSH syntax. In order to beat back the CSH proponents at BRL, I added job control to the SV (and later SVR2) Bourne Shell. Then they beat on me for not having command-line editing (a la TCSH), so I added that. This shell went out as /bin/sh in the Doug Gwyn SV-on-BSD release, so every once in a while over the years I trip across a “Ron shell”, usually from people who were running Mach-derived things that ran my shell as /bin/sh.
When porting BSD to new hardware at Celerity (later Floating Point, now part of Sun, oops Oracle) I got ahold of the code that Doug was working on and made the jsh (Job control sh) my shell of choice. Now that Bash does all of those things and almost everything emacs can do, Bash is my shell.
As far as customizing goes, I've got a .cshrc that does nothing more than redirect to a launch of bash if available, and /bin/sh if nothing else. And my scripts for logging in are so long and convoluted, due to many years of various hardware and software idiosyncratic changes (DG/UX anyone, anyone?), that I'm sure most of it is now useless. And I don't change it for fear of breaking something.
David
I asked Jeff Korn (David Korn's son), who in turn asked David Korn who
confirmed that 'read -u' comes from ksh and that 'u' stands for 'unit'.
- Dan C.
Yes, indeed. He says:
*I added -u when I added co-processes in the mid '80s. The u stands for
unit. It was common to talk about file descriptor units at that time.*
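For anyone who hasn't run across the builtin in question, here's a minimal sketch of what it does (works in both ksh and bash; the temp file and its contents are purely illustrative):

```shell
#!/usr/bin/env bash
# read -u N reads a line from file descriptor ("unit") N instead of stdin.
tmp=$(mktemp)
printf 'hello from unit 3\n' > "$tmp"

exec 3< "$tmp"   # open the file on unit 3
read -u 3 line   # read one line from unit 3 into $line
exec 3<&-        # close unit 3

echo "$line"     # prints: hello from unit 3
rm -f "$tmp"
```

The point of taking a numbered unit rather than a redirection is that a coprocess's descriptors can be read without disturbing stdin, which fits Korn's account of why the flag appeared alongside coprocesses.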
On Tue, May 31, 2016 at 6:06 AM, Dan Cross <crossd(a)gmail.com> wrote:
> Hey, did your dad do `read -u`?
>
> ---------- Forwarded message ----------
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Date: Tue, May 31, 2016 at 3:27 AM
> Subject: [TUHS] etymology of read -u
> To: tuhs(a)minnie.tuhs.org
>
>
> What's the mnemonic significance, if any, of the u in
> the bash builtin read -u for reading from a specified
> file descriptor? Evidently both f and d had already been
> taken in analogy to usage in some other commands.
>
> The best I can think of is u as in "tape unit", which
> was common usage back in the days of READ INPUT TAPE 5.
> That would make it the work of an old timer, maybe Dave Korn?