Wow, this brings back memories. When I was a kid I remember visiting
a guy who had a barn full of computers in or around Princeton, N.J.
There was a Burroughs 500, a PB 250, and a PDP-8. The 500 was a vacuum
tube and nixie display machine. That sucker used a lot of neon, and I
seem to remember that it used about $100 worth of electricity in 1960s
dollars just to warm it up. I think that the PB 250 was one of the
first machines built using transistors. I assume that all of you know
what a PDP-8 is. I remember using SNAP (Simple Numeric Arithmetic
Processor) on the PDP-8 to crank out my math homework. Note that the PB
250 also had SNAP, but in that case it was the name of their assembler.
Some of the first serious programming that I did was later at BTL on
516-TSS using FSNAP (floating-point SNAP) written by Heinz. Maybe he
can fill us in on whether it was derived from SNAP.
Anyway, I could only visit the place occasionally because it was far
from home. Does anyone else out there know anything about it? It's a
vague memory brought back by the mention of the 250.
Jon
> From: Stuff Received
> I had always thought of a delay line as a precursor to a register (or
> stack) for storing intermediate results. Is this not an accurate way of
> thinking about it?
No, not at all.
First: delay lines were a memory _technology_ (one that was inherently
serial, not random-access). They preceded all others.
Second: registers used to have two aspects - one now gone (and maybe the
second too). The first was that the _technology_ used to implement them
(latches built out of tubes, then transistors) was faster than main memory -
a distinction now mostly gone, especially since caches blur the speed
distinction between today's main memory and registers. The second was that
registers, being few in number, could be addressed with only a few bits,
so naming one takes up only a small share of the bits in an instruction. (This one
still remains, although instructions are now so long it's probably less
important.)
Some delay-line machines had two different delay-line sizes (since a line's
length determines its average access time) - what one might consider 'registers' were
kept in the small ones, for fast access at all times, whereas main memory
used the longer ones.
Noel
> From: Bakul Shah
> one dealt with it by formatting the disk so that the logical blocks N &
> N+1 (from the OS PoV) were physically more than 1 sector apart. No
> clever coding needed!
An old hack. ('Nothing new', and all that.) DEC RX01/02 floppies used the
same thing, circa 1976.
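For anyone who hasn't seen the trick, here is a minimal sketch in C of how such an interleave table can be built. The sector count and skew below are made-up illustration values, not the actual RX01 parameters; the point is just that logical blocks N and N+1 land more than one physical sector apart, giving the CPU time to digest one block before the next one passes under the head.

    /* Illustration only: build a logical-to-physical sector map with a
     * given interleave.  NSECTORS and INTERLEAVE are assumed values. */
    #include <stdio.h>

    #define NSECTORS   26   /* sectors per track (illustrative)           */
    #define INTERLEAVE 2    /* physical distance between adjacent blocks  */

    int main(void)
    {
        int map[NSECTORS];          /* logical block -> physical sector */
        int used[NSECTORS] = {0};
        int pos = 0;

        for (int lb = 0; lb < NSECTORS; lb++) {
            while (used[pos])                 /* skip sectors already taken  */
                pos = (pos + 1) % NSECTORS;   /* when the skew wraps around  */
            map[lb] = pos;
            used[pos] = 1;
            pos = (pos + INTERLEAVE) % NSECTORS;
        }

        for (int lb = 0; lb < NSECTORS; lb++)
            printf("logical %2d -> physical %2d\n", lb, map[lb]);
        return 0;
    }

With these numbers the even sectors are handed out on the first pass around the track and the odd ones on the second, so consecutive logical blocks are never adjacent on disk.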
Noel
After my posting on Sat, 10 Dec 2022 17:42:14 -0700 about the recent
work on kermit 10.0, some readers asked why a serial line connection
and file transfer tool was still of interest, and a few others
responded with use cases.
Modern Kermit has for several years supported ssh connections, as well as
Unicode; here is its top-level command list:
% kermit
(~/) C-Kermit>? Command, one of the following:
add define hangup msleep resend telnet
answer delete HELP open return touch
apc dial if orientation rlogin trace
array directory increment output rmdir translate
ask disable input pause run transmit
askq do INTRO pdial screen type
assign echo kcd pipe script undeclare
associate edit learn print send undefine
back enable LICENSE pty server version
browse end lineout purge set void
bye evaluate log push shift wait
cd exit login pwd show where
change file logout quit space while
check finish lookup read ssh who
chmod for mail receive statistics write
clear ftp manual redial status xecho
close get message redirect stop xmessage
connect getc minput redo SUPPORT
convert getok mget reget suspend
copy goto mkdir remote switch
date grep mmove remove tail
decrement head msend rename take
or one of the tokens: ! # ( . ; : < @ ^ {
Here are the built-in help descriptions for connections and character-set translation:
(~/) C-Kermit>help ssh
Syntax: SSH [ options ] <hostname> [ command ]
Makes an SSH connection using the external ssh program via the SET SSH
COMMAND string, which is "ssh -e none" by default. Options for the
external ssh program may be included. If the hostname is followed by a
command, the command is executed on the host instead of an interactive
shell.
(~/) C-Kermit>help connect
Syntax: CONNECT (or C, or CQ) [ switches ]
Connect to a remote computer via the serial communications device given in
the most recent SET LINE command, or to the network host named in the most
recent SET HOST command. Type the escape character followed by C to get
back to the C-Kermit prompt, or followed by ? for a list of CONNECT-mode
escape commands.
Include the /QUIETLY switch to suppress the informational message that
tells you how to escape back, etc. CQ is a synonym for CONNECT /QUIETLY.
Other switches include:
/TRIGGER:string
One or more strings to look for that will cause automatic return to
command mode. To specify one string, just put it right after the
colon, e.g. "/TRIGGER:Goodbye". If the string contains any spaces, you
must enclose it in braces, e.g. "/TRIGGER:{READY TO SEND...}". To
specify more than one trigger, use the following format:
/TRIGGER:{{string1}{string2}...{stringn}}
Upon return from CONNECT mode, the variable \v(trigger) is set to the
trigger string, if any, that was actually encountered. This value, like
all other CONNECT switches applies only to the CONNECT command with which
it is given, and overrides (temporarily) any global SET TERMINAL TRIGGER
string that might be in effect.
Your escape character is Ctrl-\ (ASCII 28, FS)
(~/) C-Kermit>help translate
Syntax: CONVERT file1 cs1 cs2 [ file2 ]
Synonym: TRANSLATE
Converts file1 from the character set cs1 into the character set cs2
and stores the result in file2. The character sets can be any of
C-Kermit's file character sets. If file2 is omitted, the translation
is displayed on the screen. An appropriate intermediate character-set
is chosen automatically, if necessary. Synonym: XLATE. Example:
CONVERT lasagna.txt latin1 utf8 lasagna-utf8.txt
Multiple files can be translated if file2 is a directory or device name,
rather than a filename, or if file2 is omitted.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Clem Cole mentions kermit in connection with the question raised about
the uses of the cu utility.
As an FYI, Kermit's author, Frank da Cruz, is preparing a final
release, version 10.0, and I've been working with him on testing
builds in numerous environments. There are frequent updates during
this work, and the latest snapshots can be found at
https://kermitproject.org/ftp/kermit/pretest/
The x-YYYYMMDD.* bundles do not contain a leading directory, so be
careful to unpack them in an empty directory. The build relies on a
lengthy makefile with platform-specific target names, like irix65,
linux, and solaris11: the leading comments in the makefile provide
further guidance.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Good day all, this may be COFF instead, but I'm not joined over there yet, might need some Warren help/approval.
In any case, received that 3B20S 4.1 manual in good shape, unpacked it, and out fell a little tri-fold titled "The Office Automation System (OAS) Editor-Formatter (ef) Reference Card", emblazoned with the usual Bell Laboratories non-disclosure note about the Bell System, and a nice little picture of a terminal I can't identify, as well as the full manual for this OAS leaning against it: "The Office Automation System (OAS)", with a nice big Bell logo at the bottom of the spine.
The latter is likely a manual I spotted in a video once and couldn't make out the name/title of at the time; I thought I was seeing another long-lost UNIX manual. I've never heard of this before, and Google isn't turning up much, as Office Automation System appears to be a general industry term.
I seem to recall hearing about ef itself once or twice, some sort of pre-vi screen editor from Bell methinks? Not super familiar with it though, I just seem to recall reading about that before somewhere.
Anywho, dealing with a move in the near future that is hopefully into a home I own, so pretty distracted from that scanning I keep talking about, but hopefully when I'm settled in in a few months I can set up a proper scan bench in my new place and really go to town on things.
- Matt G.
Exciting development in the process of finding lost documentation, just sealed this one on eBay: https://www.ebay.com/itm/385266550881?mkcid=16&mkevt=1&mkrid=711-127632-235…
After the link is a (now closed) auction for a Western Electric 3B20S UNIX User's Manual Release 4.1, something I thought I'd never see and wasn't sure actually existed: print manuals for 4.x.
Once received I'll be curious to see what differences are obvious between this and the 3.0 manual, and this should be easy to scan given the comb binding. What a nice cover too! I always expected if a 4.x manual of some kind popped up it would feature the falling blocks motif of the two starter package sets of technical reports, but the picture of a 3B20S is nice. How auspicious given the recent discussion on the 3B series. I'm particularly curious to see what makes it specifically a 3B20S manual, if that's referring to it only having commands relevant to that one or omitting any commands/info specific to DEC machines.
Either way, exciting times, this is one of those things that I had originally set out to even verify existed when I first started really studying the history of UNIX documentation, so it's vindicating to have found something floating around out there in the wild. Between this and the 4.0 docs we now should have a much clearer picture of that gulf between III and V.
More to come once I receive it!
- Matt G.
I finally got myself a decent scanner, and have scanned my most prized
relic from my summer at Bell Labs - Draft 1 of Kernighan and Ritchie's "The
C Programming Language".
It's early enough that there is no table of contents or index; of
particular note is that "chapter 8" is a "C Reference Manual" by Dennis
dated May 1, 1977.
This dates from approx July 1977; it has my name on the cover and various
scribbles pointing out typos throughout.
Enjoy!
https://drive.google.com/drive/folders/1OvgKikM8vpZGxNzCjt4BM1ggBX0dlr-y?us…
p.s. I used a Fujitsu FI-8170 scanner, VueScan on Ubuntu, and pdftk-java
to merge front and back pages.
(Recently I mentioned to Doug McIlroy that I had infiltrated IBM East
Fishkill, reputedly one of the largest semiconductor fabs in the world,
with UNIX back in the 1980s. He suggested that I write it up and share it
here, so here it is.)
In 1986 I was working at IBM Research in Yorktown Heights, New York. I had
rejoined in 1984 after completing my PhD in computer science at CMU.
One day I got a phone call from Rick Dill. Rick, a distinguished physicist
who had, among other things, invented a technique that was key to
economically fabricating semiconductor lasers, had been my first boss at
IBM Research. While I’d been in Pittsburgh he had taken an assignment at
IBM’s big semiconductor fab up in East Fishkill. He was working to make
production processes there more efficient. He was about to initiate a
major project, with a large capital cost, that involved deploying a bunch
of computers and he wanted a certified computer scientist at the project
review. He invited me to drive up to Fishkill, about half an hour north of
the research lab, to attend a meeting. I agreed.
At the meeting I learned several things. First of all, the chipmaking
process involved many steps - perhaps fifty or sixty for each wafer full of
chips. The processing steps individually were expensive, and the amount
spent on each wafer was substantial. Because processing was imperfect, it
was imperative to check the results every few steps to make sure everything
was OK. Each wafer included a number of test articles, landing points for
test probes, scattered around the surface. Measurements of these test
articles were carried out on a special piece of equipment, I think bought
from Fairchild Semiconductor. It would take in a boat of wafers (identical
wafers were grouped together on special ceramic holders called boats for
automatic handling, and all processed identically), feed each wafer to
the test station, and probe each test article in turn. The result was
about a megabyte of data covering all of the wafers in the boat.
At this point the data had to be analyzed. The analysis program comprised
an interpreter called TAHOE along with a test program, one for each
different wafer being fabricated. The results indicated whether the wafers
in the boat were good, needed some rework, or had to be discarded.
These were the days before local area networking at IBM, so getting the
data from the test machine to the mainframe for analysis involved numerous
manual steps and took about six hours. To improve quality control, each
boat of wafers was only worked during a single eight-hour shift, so getting
the test results generally meant a 24-hour pause in the processing of the
boat, even though the analysis itself took only a couple of seconds on the
mainframe.
IBM had recently released a physically small mainframe based on customized
CPU chips from Motorola. This machine, the size of a large suitcase and
priced at about a million dollars, was suitable to locate next to each test
machine, thus eliminating the six hour wait to see results.
Because there were something like 50 of the big test machines at the
Fishkill site, the project represented a major capital expenditure. Getting
funding of this size approved would take six to twelve months, and this
meeting was the first step in seeking this approval.
At the end of the meeting I asked for a copy of the manual for the TAHOE
test language. Someone gave me a copy and I took it home over the weekend
and read through it.
The following Monday I called Rick up and told him that I thought I could
implement an interpreter for the TAHOE language in about a month of work.
That was a tiny enough investment that Rick simply wrote a letter to Ralph
Gomory, then head of IBM Research, to requisition me for a month. I told
the Fishkill folks that I needed a UNIX machine to do this work and they
procured an RT PC running AIX 1. AIX 1 was based on System V. The
critical thing to me was that it had lex, yacc, vi, and make.
They set me up in an empty lab room with the machine and a work table.
Relatively quickly I built a lexical analyzer for the language in lex and
got an approximation to the grammar for the TAHOE language working in
yacc. The rest was implementing the functions for each of the TAHOE
primitives.
I adopted rigorous test automation early, a practice people now call test
driven development. Each time I added a capability to the interpreter I
wrote a scrap of TAHOE code to test it along with a piece of reference
input. I created a test target in the testing Makefile that ran the
interpreter with the test program and the reference input. There were four
directories, one for test scripts, one for input data, one for expected
outputs, and one for actual outputs. There was a big makefile that had a
target for each test. Running all of the tests was simply a matter of
typing ‘make test’ in the root of the testing tree. Only if all of the
tests succeeded would I consider a build acceptable.
As I developed the interpreter I learned to build tests also for bugs as I
encountered them. This was because I discovered that I would occasionally
reintroduce bugs, so these tests, with the same structure (test scrap,
input data, reference output, make target) were very useful at catching
backsliding before it got away from me.
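To make the shape of that harness concrete, here is a rough sketch of what such a makefile might have looked like. The directory layout matches the description above, but the file names, the interpreter name, and the use of cmp are my own illustration, not the original Fishkill files.

    # Sketch only: one target per test; 'make test' runs them all.
    # (Recipe lines must begin with a tab.)
    INTERP = ./interp                # interpreter under test (name assumed)
    TESTS  = t001 t002 t003          # one name per TAHOE test scrap

    test: $(TESTS)

    $(TESTS):
    	$(INTERP) scripts/$@.tahoe < input/$@.dat > actual/$@.out
    	cmp expected/$@.out actual/$@.out

    .PHONY: test $(TESTS)

Because cmp exits non-zero on any difference, a single mismatched output stops make, which matches the rule that a build was acceptable only if every test passed.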
After a while I had implemented the entire TAHOE language. I named the
interpreter MONO after looking at the maps of the area near Lake Tahoe and
seeing Mono Lake, a small lake nearby.
[Image: map of Lake Tahoe and Mono Lake, with walking routes between them. Source: Google Maps]
At this point I asked my handler at Fishkill for a set of real input data,
a real test program, and a real set of output data. He got me the files
and I set to work.
The only tricky bit at this stage was the difference in floating point
between the RT PC machine, which used the recently adopted IEEE 754
floating point standard, and the idiosyncratic floating point implemented in
the System/370 mainframes of the time. The problem was that the LSB
rounding rules were different in the two machines, resulting in mismatches
in results. These mismatches were way below the resolution of the actual
data, but deciding how to handle this was tricky.
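One way to handle that kind of disagreement is to compare the two results against a relative tolerance instead of bit for bit; the sketch below is only an illustration of that general idea (the tolerance value and variable names are invented), not the check that was actually used.

    /* Illustration only: accept results that agree to within a relative
     * tolerance, since the two machines rounded the low-order bits
     * differently.  The tolerance here is an arbitrary example value. */
    #include <math.h>
    #include <stdio.h>

    static int close_enough(double a, double b, double rel_tol)
    {
        double diff  = fabs(a - b);
        double scale = fmax(fabs(a), fabs(b));

        if (scale == 0.0)           /* both values exactly zero */
            return 1;
        return diff <= rel_tol * scale;
    }

    int main(void)
    {
        double mainframe = 1.23456789012345;   /* S/370-style result    */
        double rtpc      = 1.23456789012348;   /* IEEE 754-style result */

        printf("%s\n",
               close_enough(mainframe, rtpc, 1e-12) ? "match" : "MISMATCH");
        return 0;
    }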
At this point I had an interpreter, MONO, for the TAHOE language that took
one specific TAHOE program, some real data, and produced output that
matched the TAHOE output. Almost done.
I asked my handler, a lovely guy whose name I am ashamed I do not remember,
to get me the regression test suite for TAHOE. He took me over and
introduced me to the woman who managed the team that was developing and
maintaining the TAHOE interpreter. The TAHOE interpreter had been under
development, I gathered, for about 25 years and was a large amount of
assembler code. I asked her for the regression test suite for the TAHOE
interpreter. She did not recognize the term, but I was not dismayed - IBM
had their own names for everything (disk was DASD and a boot program was
IPL) and I figured it would be Polka Dot or something equally evocative. I
described what my regression test suite did and her face lit up. “What a
great idea!” she exclaimed.
Anyway, at that point I handed the interpreter code over to the Fishkill
organization. C compilers were available for the PC by that time, so they
were able to deploy it on PC-AT machines that they located at each testing
machine. Since a PC-AT could be had for about $5,000 in those days the
savings from the original proposal was about $50 million and about a year
of elapsed time. The analysis of a boat’s worth of data on the PC-AT took
perhaps a minute or two, so quite a bit slower than on the mainframe, but
the elimination of the six-hour delay meant that a boat could progress
forward in its processing on the same day rather than a day later.
One of my final conversations with my Fishkill handler was about getting
them some UNIX training. In those days the only way to get UNIX training
was from AT&T. Doing business with AT&T at IBM in those days involved very
high-level approvals - I think it required either the CEO or a direct
report to the CEO. He showed me the form he needed to get approved in
order to take this course, priced at about $1,500 at the time. It required
twelve signatures. When I expressed horror he noted that I shouldn’t worry
because the first six were based in the building we were standing in.
That’s when I began to grasp how big IBM was in those days.
Anyway, about five years later I left IBM. Just before I resigned the
Fishkill folks invited me up to attend a celebratory dinner. Awards were
given to many people involved in the project, including me. I learned that
there was now a department of more than 30 people dedicated to maintaining
the program that had taken me a month to build. Rick Dill noted that one
of the side benefits of the approach that I had taken was the production of
a formal grammar for the TAHOE language.
At one point near the end of the project I had a long reflective
conversation with my Fishkill minder. He spun a metaphor about what I had
done with this project. Roughly speaking, he said, “We were a bunch of
guys cutting down trees by beating on them with stones. We heard that
there was this thing called an axe, and someone sent a guy we thought would
show us how to cut down trees with an axe. Imagine our surprise when he
whipped out a chainsaw.”
=====
nygeek.net
mindthegapdialogs.com/home <https://www.mindthegapdialogs.com/home>
All, thank you all for all the congratulations! I was going to pen an e-mail
to the list last night but, after a few celebratory glasses of wine, I demurred.
It still feels weird that Usenix chose me for the Flame award, given that such
greats as Doug, Margo, Radia and others have previously received the
same award. In reality, the award belongs to every TUHS member who has
contributed documents, source code, tape images, anecdotes, knowledge
and wisdom, and who has given their time and energy to help others
with problems. I've been a steward of a remarkable community over three
decades and I feel honoured and humbled to receive recognition for it.
Casey told me the names of the people who nominated me. Thank you for
putting my name forward. Getting the e-mail from Casey sure was a surprise :-)
https://www.tuhs.org/Images/flame.jpg
Many thanks for all your support over the years!
Warren
Hello all,
I'm giving a presentation on the AT&T 3B2 at a local makerspace next month, and while I've been preparing the talk I became curious about an aspect that I don't know has been discussed elsewhere.
I'm well aware that the 3B2 was something of a market failure with not much penetration into the wider commercial UNIX space, but I'm very curious to know more about what the reaction was at Bell Labs. When AT&T entered the computer hardware market after the 1984 breakup, I get the impression that there wasn't very much interest in any of it at Bell Labs; is that true?
Can anyone recall what the general mood was regarding the 3B2 (and the 7300 and the 6300, I suppose!)?
-Seth
--
Seth Morabito
Poulsbo, WA
web(a)loomcom.com
Around 1985 the computer division of Philips Electronics had a Motorola
68010-based server running MPX (Multi Processor Unix), based on System 5.3
with modifications. The 'Multi' part referred to the intelligent LAN and
WAN controllers, each with its own 68010 processor and memory. A separate
system image would be downloaded at server boot-time. Truly Multi-Processor
:-)
Here is an announcement of the latest (and probably last) model, from 1988:
https://techmonitor.ai/technology/philips_ready_with_68030_models_for_its_p…
--
The more I learn the better I understand I know nothing.
> Has anyone roughly calculated “man years” spent developing Unix to 1973 or 1974?
> Under 25 “man-years”? (person years now)
I cannot find the message at the moment (TUHS mail archive search is not working anymore?), but I recall that Doug McIlroy mentioned on this list that 1973 was a miracle year, in which Ken & Dennis wrote and debugged over 100,000 lines of code between them. In software, a “man year” is an elastic yardstick...
There is also this anecdote by Andy Hertzfeld:
===
QuickDraw, the amazing graphics package written entirely by Bill Atkinson, was at the heart of both Lisa and Macintosh. "How many man-years did it take to write QuickDraw?", the Byte magazine reporter asked Steve [Jobs].
Steve turned to look at Bill. "Bill, how long did you spend writing QuickDraw?"
"Well, I worked on it on and off for four years", Bill replied.
Steve paused for a beat and then turned back to the Byte reporter. "Twenty-four man-years. We invested twenty-four man-years in QuickDraw."
Obviously, Steve figured that one Atkinson year equaled six man years, which may have been a modest estimate.
===
There is also another anecdote involving Atkinson. At some point all Apple programmers had to file a weekly report with how many lines of code they wrote that week. After a productive week of refactoring and optimising, he filed a report saying “minus 2,000 lines”.
On DEC's Tru64 UNIX it was /mdec.
When making a system image with mkisofs, I'd follow it with:
disklabel -rw -f ${UTMP}/${NAME_ISO} /mdec/rzboot.cdfs /mdec/bootrz.cdfs
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
> From: Dave Horsfall
> MAINDEC was certainly on all of their standalone diagnostic media
Actually, it was the name for all their diagnostics (usually stand-alone),
dating back to the paper tape days - when that was the only form they were
distributed in. So it makes sense that 'mdec' is a short form of 'MAINDEC'.
Noel
I'm curious about the origin of the directory name /usr/mdec.
(I am reminded of it because I've noticed that it lives on in
at least one of the BSDs.)
I had a vague notion that it meant `DEC maintenance' but that
seems a bit clumsy to describe a place holding boot blocks.
A random web board suggests it meant `magnetic DECtape.'
That's certainly not true by the time I came along, when it
contained the master copy of the disk boot block(s).
But I suppose it could have meant that early on and
the name just carried forward.
A quick skim of the V1-V7 manuals doesn't explain the name.
Anyone have any clearer memories than I do? Doug or Ken or
anyone who was there when it was coined, do you still recall?
Norman Wilson
Toronto ON
> Date: Sat, 12 Nov 2022 17:56:24 -0800
> From: Larry McVoy <lm(a)mcvoy.com>
> Subject: [TUHS] Re: DG UNIX History
>
> It sounds like they could have supported mmap() easily. I'd love to see
> this kernel, it sounds to me like it was SunOS with nicely done SMP
> support. The guy that said he'd never seen anything like it before or
> since, just makes me want to see it more.
> I know someone who was friends with one of the kernel guys, haven't talked
> to her in years but I'll see if I can find anything.
Following on from the exchange on TUHS about DG-UX, it would seem to me that the unified cache was invented at least three times for Unix:
- John Reiser at AT&T
- At Sun
- At DG
As to the latter, I could find two leads that might help you find out more. It would seem that this unique Unix is specifically DG-UX version 4:
https://web.archive.org/web/20070930212358/http://www.accessmylibrary.com/c…
and
Michael H. Kelly and Andrew R. Huber, "Engineering a (Multiprocessor) Unix Kernel", Proceedings of the Autumn 1989 EUUG Conference, European Unix Systems User Group, Vienna, Austria, 1989, pp. 7-19.
The unified cache isn’t mentioned, but it would seem that the multiprocessor redesign might have included it. Maybe the author names are helpful. I could not find the paper online, but there was a web page suggesting that a paper copy still exists in a (university?) library in Sweden.
=====
Publication: DG Review
Publication Date: 01-NOV-88
Author: Huber, Andrew R.
DG-UX 4.00: DG's redesigned kernel lays the foundation for future UNIX systems. (includes related article on DG-UX 4.00's file system and an excerpt from Judith S. Hurwitz's 'Data General's UNIX strategy: an evaluation' report)
COPYRIGHT 1988 New Media Publications
DG/UX 4.00
Revision 4.00 of Data General's native UNIX operating system significantly enhances the product and adds unique capabilities not found in other UNIX implementations. This article reviews the goals of DG/UX 4.00 and discusses some of its features.
When DG released DG/UX 1.00 in March, 1985, it was based on AT&T's System V Release 2 and incorporated the Berkeley UNIX file system and networking.
As DG/UX grew, it continued to incorporate functions of the major standard UNIX systems, as illustrated in the following timeline:
* DG/UX 1.00, March 1985: Based on System V Release 2 and Berkeley 4.1.
  Included Berkeley 4.2 file system and TCP/IP (LAN).
* DG/UX 2.00, September 1985: Added Berkeley 4.2 system calls.
* DG/UX 3.00, April 1986: Added support for new DG hardware.
* DG/UX 3.10, March 1987: Added Sun Microsystems' Network File System (NFS). Added X Windows.
* DG/UX 4.00, August 1988: Re-designed and re-implemented kernel and file system.
I spotted this when glancing through a book catalogue; well, with a title
like "You Are Not Expected to Understand This", how could I miss it?
Subtitled "How 26 Lines of Code Changed the World", edited by Torie Bosch
and illustrated by Kelly Chudler (can't say that I've heard of them).
Summary:
``Programming is behind so much of life today, and this book draws together
a group of distinguished thinkers and technologists to reveal the
stories and people behind the computer coding that shapes our
world. From how university's [sic] databases were set up to
recognise only two genders to the first computer worm and the
first pop-up ad, the diverse topics reveal the consequences of
historical decisions and their long-lasting, profound implications.
Pb $34.99''
Lines of code, eh? :-)
Abbey's Bookshop: www.abbeys.com.au
Disclaimer: I have no connection with them, but I'll likely buy it.
-- Dave
Clem Cole:
Yep -- but not surprising. There were a bunch of folks at DG that had
worked on a single-level store system (Project Fountain-Head) that had
failed [some of that story is described in Kidder's book].
====
Are you sure? I thought Fountainhead was a Rand project.
Norman Wilson
Toronto ON
PS: if you don't get it, consider yourself fortunate.