> From: Tim Bradshaw
> There is a paper, by Flowers, in the Annals of the History of Computing
> which discusses a lot of this. I am not sure if it's available online
Yes:
http://www.ivorcatt.com/47c.htm
(It's also available behind the IEEE pay-wall.) That issue of the Annals (5/3)
has a couple of articles about Colossus; the Coombs one on the design is
available here:
http://www.ivorcatt.com/47d.htm
but doesn't have much on the reliability issues; the Chandler one on
maintenance has more, but alas is only available behind the paywall:
https://www.computer.org/csdl/mags/an/1983/03/man1983030260-abs.html
Brian Randell did a couple of Colossus articles (in "History of Computing in
the 20th Century" and "Origin of Digital Computers") for which he interviewed
Flowers and others; there may be details there too.
Noel
Tim Bradshaw <tfb(a)tfeb.org> commented on a paper by Tommy Flowers on
the design of the Colossus; here is the reference:
Thomas H. Flowers, The Design of Colossus, Annals of the
History of Computing 5(3) 239--253 July/August 1983
https://doi.org/10.1109/MAHC.1983.10079
Notice that it appeared in the Annals..., not the successor journal
IEEE Annals....
There is a one-column obituary of Tommy Flowers at
http://doi.ieeecomputersociety.org/10.1109/MC.1998.10137
Last night, I finished reading this recent book:
Thomas Haigh and Mark (Peter Mark) Priestley and Crispin Rope
ENIAC in action: making and remaking the modern computer
MIT Press 2016
ISBN 0-262-03398-4
https://doi.org/10.7551/mitpress/9780262033985.001.0001
It has extensive commentary about the ENIAC at the Moore School of
Electrical Engineering at the University of Pennsylvania in Philadelphia,
PA. Construction began in 1943, there was a major instruction set
redesign in 1948, and the machine was shut down permanently on
2 October 1955 at 23:45.
The book notes that poor reliability of vacuum tubes and thousands of
soldered connections was a huge problem, and in the early years, only
about 1 hour out of 24 was devoted to useful runs; the rest of the
time was used for debugging, problem setup (which required wiring
plugboards), testing, and troubleshooting. Even so, runs generally
had to be repeated to verify that the same answers could be obtained:
often, they differed.
The book also reports that reliability was helped by never turning off
power: tubes were more susceptible to failure when power was restored.
The book reports that reliability of the ENIAC improved significantly
when on 28 April 1948, Nick Metropolis (co-inventor of the famous
Monte Carlo method) had the clock rate reduced from 100kHz to 60kHz.
It was only several years later, with vacuum tube manufacturing
improvements, that the clock rate was eventually moved back to 100kHz.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Tim Bradshaw wrote:
"Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect in. So by the time of Whirlwind it was presumably well-understood that this was possible."
"Colossus: The Secrest of Bletchley Park's Codebreaking Computers"
by Copeland et al has little to say about reliability. But one
ex-operator remarks, "Often the machine broke down."
Whether it was the (significant) mechanical part or the electronics
that typically broke is unclear. Failures in a machine that's always
doing the same thing are easier to detect quickly than failures in
a machine that has a varied load. Also, the task at hand could fail
for many other reasons (e.g. mistranscribed messages) so there was
no presumption of correctness of results--that was determined by
reading the decrypted messages. So I think it's a stretch to
argue that reliability was known to be a manageable issue.
Doug
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Core memory wiped out competing technologies (Williams tube, mercury
> delay line, etc) almost instantly and ruled for over twenty years.
I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.
In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!
There's simply no question that computers would not have advanced (in
use, societal importance, technical depth, etc) at the speed they did
without core. It was one of the most consequential steps in the
development of computers to what they are today: up there with transistors,
ICs, DRAM and microprocessors.
> Yet late in his life Forrester told me that the Whirlwind-connected
> invention he was most proud of was marginal testing
Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.
In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.
Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?
Noel
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.
> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.
It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:
> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.
If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (to say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
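To put rough numbers on that extrapolation (assuming, purely for
illustration, one tube replacement per year in a continuously running
5-tube radio):

    1 failure / (5 tubes x 8,760 hours)  = 1 failure per ~43,800 tube-hours
    5,000 tubes:  43,800 / 5,000   = a failure roughly every 9 hours
    50,000 tubes: 43,800 / 50,000  = a failure roughly every 50 minutes

Without marginal testing and a disciplined maintenance protocol, those
failures land at random, mid-computation.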
On 2018-06-16 21:00, Clem Cole <clemc(a)ccc.com> wrote:
> below...
> On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>>
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I
>> have
>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>> but I don't recall exactly where I saw it.)
> ​I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.​
I think the paper you both are referring to is "What Have We Learned
from the PDP-11?", by Gordon Bell and Bill Strecker in 1977.
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learn…
There are some additional comments in
https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper…
> My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
> important.
Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was
very clearly designed as the system bus for all needs by DEC, and was
used just like that until the 11/70, which introduced a separate memory
bus. In all previous PDP-11s, both memory and peripherals were connected
on the Unibus.
Why it only has 18 bits, I don't know. It might be a reflection of
the fact that most things at DEC were either 12 or 18 bits at the time,
and 12 was obviously not going to cut it. But that is pure speculation
on my part.
But, if you read that paper again (the one from Bell), you'll see that
he was pretty much a source for the Unibus as well, and the whole idea
of having it for both memory and peripherals. But that does not tell us
anything about why it got 18 bits. It also, incidentally, has 18 data
bits, but that is mostly ignored by all systems. I believe the KS-10
made use of that, though. And maybe the PDP-15. And I suspect the same
would be true for the address bits. But neither system was likely
involved when the Unibus was created; they made fortuitous use of it
when they were designed.
> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years. Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful. Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that happening without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like: Someone like me saying: *"Henk, 18 bits
> is not going to cut it."* He might have replied something like: *"Bool
> sheet *[a dutchman's way of cursing in English], *we already gave you two
> more bit than you can address* (actually he'd then probably stop mid
> sentence and translate in his head from Dutch to English - which was always
> interesting when you argued with him).
Quite possible. :-)
> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not alone. Only someone like Gordon at
> the time could have overruled it, and I don't think the problems were
> foreseen as Noel notes.
Bell in retrospect thinks that they should have realized this problem,
but it would appear they really did not consider it at the time. Or
maybe they just didn't believe what they predicted.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Jay Forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.
Doug
> As best I now recall, the concept was that instead of the namespace having a
> root at the top, from which you had to allocate downward (and then recurse),
> it built _upward_ - if two previously un-connected chunks of graph wanted to
> unite in a single system, they allocated a new naming layer on top, in which
> each existing system appeared as a constituent.
The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDs had to agree across the union (as in NFS) or were mapped (as
in RFS).
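For illustration (the machine names here are invented, not taken from
the paper): if systems unix1 and unix2 are pasted together under a new
superdirectory, a process on unix1 can name a file on unix2 as

    /../unix2/usr/doug/paper

so the ".." step, which is normally a no-op at the root, instead climbs
into the shared naming layer that contains both systems.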
Doug
We lost the founder of IBM, Thomas J. Watson, on this day in 1956 (and I
have no idea whether or not he was buried 9-edge down).
Oh, and I cannot find any hard evidence that he said "Nobody ever got
fired for buying IBM"; can anyone help? I suspect that it was a media
beat-up from his PR department, i.e. "fake news"...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."