> From: Warren Toomey
> Computing history is maybe too generic.
There already is a "computer-history" list, hosted at the Postel Institute:
http://www.postel.org/computer-history/
Unlike its sibling "Internet-history" list, it didn't catch on, though. (No
traffic for some years now.)
Noel
> From: Tim Bradshaw
> There is a paper, by Flowers, in the Annals of the History of Computing
> which discusses a lot of this. I am not sure if it's available online
Yes:
http://www.ivorcatt.com/47c.htm
(It's also available behind the IEEE pay-wall.) That issue of the Annals (5/3)
has a couple of articles about Colossus; the Coombs one on the design is
available here:
http://www.ivorcatt.com/47d.htm
but doesn't have much on the reliability issues; the Chandler one on
maintenance has more, but alas is only available behind the paywall:
https://www.computer.org/csdl/mags/an/1983/03/man1983030260-abs.html
Brian Randell did a couple of Colossus articles (in "History of Computing in
the 20th Century" and "Origin of Digital Computers") for which he interviewed
Flowers and others; there may be details there too.
Noel
Tim Bradshaw <tfb(a)tfeb.org> commented on a paper by Tommy Flowers on
the design of the Colossus; here is the reference:
Thomas H. Flowers, The Design of Colossus, Annals of the
History of Computing 5(3) 239--253 July/August 1983
https://doi.org/10.1109/MAHC.1983.10079
Notice that it appeared in the Annals..., not the successor journal
IEEE Annals....
There is a one-column obituary of Tommy Flowers at
http://doi.ieeecomputersociety.org/10.1109/MC.1998.10137
Last night, I finished reading this recent book:
Thomas Haigh and Mark (Peter Mark) Priestley and Crispin Rope
ENIAC in action: making and remaking the modern computer
MIT Press 2016
ISBN 0-262-03398-4
https://doi.org/10.7551/mitpress/9780262033985.001.0001
It has extensive commentary about the ENIAC at the Moore School of
Electrical Engineering at the University of Pennsylvania in Philadelphia,
PA. Its construction began in 1943; it underwent a major instruction set
redesign in 1948, and it was shut down permanently on 2 October 1955 at 23:45.
The book notes that poor reliability of vacuum tubes and thousands of
soldered connections were a huge problem, and in the early years, only
about 1 hour out of 24 was devoted to useful runs; the rest of the
time was used for debugging, problem setup (which required wiring
plugboards), testing, and troubleshooting. Even so, runs generally
had to be repeated to verify that the same answers could be obtained:
often, they differed.
The book also reports that reliability was helped by never turning off
power: tubes were more susceptible to failure when power was restored.
The book reports that reliability of the ENIAC improved significantly
when, on 28 April 1948, Nick Metropolis (co-inventor of the famous
Monte Carlo method) had the clock rate reduced from 100 kHz to 60 kHz.
It was only several years later, with vacuum tube manufacturing
improvements, that the clock rate was eventually moved back to 100 kHz.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Tim Bradshaw wrote:
"Making tube (valve) machines reliable was something originally sorted out by Tommy Flowers, who understood, and convinced people with some difficulty I think, that you could make a machine with one or two thousand valves (1,600 for Mk 1, 2,400 for Mk 2) reliable enough to use, and then produced ten or eleven Colossi from 1943 on which were used to great effect in. So by the time of Whirlwind it was presumably well-understood that this was possible."
"Colossus: The Secrest of Bletchley Park's Codebreaking Computers"
by Copeland et al has little to say about reliability. But one
ex-operator remarks, "Often the machine broke down."
Whether it was the (significant) mechanical part or the electronics
that typically broke is unclear. Failures in a machine that's always
doing the same thing are easier to detect quickly than failures in
a machine that has a varied load. Also the task at hand could fail
for many other reasons (e.g. mistranscribed messages) so there was
no presumption of correctness of results--that was determined by
reading the decrypted messages. So I think it's a stretch to
argue that reliability was known to be a manageable issue.
Doug
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Core memory wiped out competing technologies (Williams tube, mercury
> delay line, etc) almost instantly and ruled for over twenty years.
I never lived through that era, but reading about it, I'm not sure people now
can really fathom just how big a step forward core was - how expensive, bulky,
flaky, low-capacity, etc, etc prior main memory technologies were.
In other words, there's a reason they were all dropped like hot potatoes in
favour of core - which, looked at from our DRAM-era perspective, seems
quaintly dinosaurian. Individual pieces of hardware you can actually _see_
with the naked eye, for _each_ bit? But that should give some idea of how much
worse everything before it was, that it killed them all off so quickly!
There's simply no question that without core, computers would not have
advanced (in use, societal importance, technical depth, etc) at the speed they
did. It was one of the most consequential steps in the
development of computers to what they are today: up there with transistors,
ICs, DRAM and microprocessors.
> Yet late in his life Forrester told me that the Whirlwind-connected
> invention he was most proud of was marginal testing
Given the above, I'm totally gobsmacked to hear that. Margin testing was
important, yes, but not even remotely on the same quantum level as core.
In trying to understand why he said that, I can only suppose that he felt that
core was 'the work of many hands', which it was (see e.g. "Memories That
Shaped an Industry", pg. 212, and the things referred to there), and so he
only deserved a share of the credit for it.
Is there any other explanation? Did he go into any depth as to _why_ he felt
that way?
Noel
> > Yet late in his life Forrester told me that the Whirlwind-connected
> > invention he was most proud of was marginal testing
> I'm totally gobsmacked to hear that. Margin testing was
> important, yes, but not even remotely on the same quantum
> level as core.
> In trying to understand why he said that, I can only suppose that he felt that
> core was 'the work of many hands'...and so he only deserved a share of the
> credit for it.
It is indeed a striking comment. Forrester clearly had grave concerns
about the reliability of such a huge aggregation of electronics. I think
jpl gets to the core of the matter, regardless of national security:
> Whirlwind ... was tube based, and I think there was a tradeoff of speed,
> as determined by power, and tube longevity. Given the purpose, early
> warning of air attack, speed was vital, but so, too, was keeping it alive.
> So a means of finding a "sweet spot" was really a matter of national
> security. I can understand Forrester's pride in that context.
If you extrapolate the rate of replacement of vacuum tubes in a 5-tube
radio to a 5000-tube computer (to say nothing of the 50,000-tube machines
for which Whirlwind served as a prototype), computing looks like a
crap shoot. In fact, thanks to the maintenance protocol, Whirlwind
computed reliably--a sine qua non for the nascent industry.
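As a rough illustration of that extrapolation (the per-tube failure rate
below is an assumed figure chosen for the arithmetic, not one taken from the
Whirlwind records): if each tube has a mean time between failures of m hours,
a machine of n tubes fails on average about every m/n hours:

    MTBF(machine) ~= MTBF(tube) / n
          e.g.  5,000 h / 5,000 tubes   ~  1 hour
                5,000 h / 50,000 tubes  ~  6 minutes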
On 2018-06-16 21:00, Clem Cole <clemc(a)ccc.com> wrote:
> below...
> On Sat, Jun 16, 2018 at 9:37 AM, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>>
>> Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
>> this vague memory of a quote from Gordon Bell admitting that was a mistake,
>> but I don't recall exactly where I saw it.)
> I think it was part of the same paper where he made the observation that
> the greatest mistake an architecture can have is too few address bits.
I think the paper you both are referring to is "What have we learned
from the PDP-11?", by Gordon Bell and Bill Strecker in 1977.
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learn…
There are some additional comments in
https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper…
> My understanding is that the problem was that UNIBUS was perceived as an
> I/O bus and as I was pointing out, the folks creating it/running the team
> did not value it, so in the name of 'cost', more bits was not considered
> important.
Hmm. I'm not aware of anyone perceiving the Unibus as an I/O bus. It was
very clearly designed as the system bus for all needs by DEC, and was
used just like that until the 11/70, which introduced a separate memory
bus. In all previous PDP-11s, both memory and peripherals were connected
on the Unibus.
Why it only got 18 bits, I don't know. It might be a reflection of the
fact that most things at DEC were either 12 or 18 bits at the time,
and 12 was obviously not going to cut it. But that is pure speculation
on my part.
But, if you read that paper again (the one from Bell), you'll see that
he was pretty much a source for the Unibus as well, and the whole idea
of having it for both memory and peripherals. But that does not tell us
anything about why it got 18 bits. It also, incidentally, has 18 data
bits, but those are mostly ignored by all systems. I believe the KS-10
made use of that, though. And maybe the PDP-15. And I suspect the same
would be true for the address bits. But neither system was probably
involved when the Unibus was created; they just made fortuitous use of it
when they were designed.
> I used to know and work with the late Henk Schalke, who ran Unibus (HW)
> engineering at DEC for many years. Henk was notoriously frugal (we might
> even say 'cheap'), so I can imagine that he did not want to spend on
> anything that he thought was wasteful. Just like I retold the
> Amdahl/Brooks story of the 8-bit byte and Amdahl thinking Brooks was nuts;
> I don't know for sure, but I can see that without someone really arguing
> with Henk as to why 18 bits was not 'good enough.' I can imagine the
> conversation going something like: Someone like me saying: *"Henk, 18 bits
> is not going to cut it."* He might have replied something like: *"Bool
> sheet *[a Dutchman's way of cursing in English], *we already gave you two
> more bits than you can address* (actually he'd then probably stop mid
> sentence and translate in his head from Dutch to English - which was always
> interesting when you argued with him).
Quite possible. :-)
> Note: I'm not blaming Henk, just stating that his thinking was very much
> that way, and I suspect he was not alone. Only someone like Gordon at
> the time could have overruled it, and I don't think the problems were
> foreseen as Noel notes.
Bell in retrospect thinks that they should have realized this problem,
but it would appear they really did not consider it at the time. Or
maybe they just didn't believe what they predicted.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> Jay Forrester first described an invention called core memory in a lab
> notebook 69 years ago today.
Core memory wiped out competing technologies (Williams tube, mercury
delay line, etc) almost instantly and ruled for over twenty years. Yet
late in his life Forrester told me that the Whirlwind-connected
invention he was most proud of was marginal testing: running the
voltage up and down once a day to cause shaky vacuum tubes to
fail during scheduled maintenance rather than randomly during
operation. And indeed Whirlwind racked up a notable record of
reliability.
Doug
> As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
The Newcastle Connection (aka Unix United) implemented this idea.
Name spaces could be pasted together simply by making .. at the
roots point "up" to a new superdirectory. I do not remember whether
UIDs had to agree across the union (as in NFS) or were mapped (as
in RFS).
Doug
We lost the founder of IBM, Thomas J. Watson, on this day in 1956 (and I
have no idea whether or not he was buried 9-edge down).
Oh, and I cannot find any hard evidence that he said "Nobody ever got
fired for buying IBM"; can anyone help? I suspect that it was a media
beat-up from his PR department i.e. "fake news"...
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On 2018-06-18 14:17, Noel Chiappa wrote:
> > The "separate" bus for the semiconductor memory is just a second Unibus
>
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.
Ah. You and Ron are right. I am confused.
So there were some previous PDP-11 models that did not have their memory
on the Unibus. The 11/45,50,55 accessed memory from the CPU not through
the Unibus, but through the fastbus, which was a pure memory bus, as far
as I understand. You (obviously) could also have memory on the Unibus,
but that would then be slower.
Ah, and there is a jumper to tell which addresses are served by the
fastbus, and the rest then go to the Unibus. Thanks, I had missed these
details before. (To be honest, I have never actually worked on any of
those machines.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> My experience is that more often than not, it's less a failure to see
> what a successful future might bring, and often one of well '*we don't
> need to do that now/costs too much/we don't have the time*.'
Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.
By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.
In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.
> I can imagine that he did not want to spend on anything that he thought
> was wasteful.
Understandable. But see above...
The art is in finding a path that leaves the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.
A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway, lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates an I/O device register is being looked
at. Without that, Q18 devices would have either i) had to incur the cost now
of more bus address line transceivers, or ii) stopped working when the bus was
upgraded to 22 address lines.
They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).
> Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
> thinking Brooks was nuts
Don't think I've heard that one?
>> the decision to remove the variable-length addresses from IPv3 and
>> substitute the 32-bit addresses of IPv4.
> I always wondered about the back story on that one.
My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.
(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)
> 32-bits seemed infinite in those days and nobody expected the network
> to scale to the size it is today and will grow to in the future
Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
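For anyone who hasn't looked at it in a while, the generality Noel describes
is visible in the ARP packet format itself: every packet carries the hardware
and protocol address spaces and their lengths, so one layout serves any
combination. A minimal sketch of the fixed header in C (following the usual
BSD-style field names; this is an illustration, not code from any particular
stack):

#include <stdint.h>

/* ARP fixed header, per RFC 826.  The explicit hardware/protocol type
   and length fields are what make the format independent of any one
   link layer or internetwork protocol. */
struct arp_header {
    uint16_t ar_hrd;   /* hardware address space, e.g. 1 = Ethernet    */
    uint16_t ar_pro;   /* protocol address space, e.g. 0x0800 = IP     */
    uint8_t  ar_hln;   /* length of a hardware address, in bytes       */
    uint8_t  ar_pln;   /* length of a protocol address, in bytes       */
    uint16_t ar_op;    /* 1 = request, 2 = reply                       */
    /* Followed on the wire by:
         sender hardware address (ar_hln bytes),
         sender protocol address (ar_pln bytes),
         target hardware address (ar_hln bytes),
         target protocol address (ar_pln bytes)
       -- variable-length, which is why they are not fixed fields here. */
};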
>> So, is poor vision common? All too common.
> But to be fair, you can also end up with being like DEC and often late
> to the market.
Gotta do a perfect job of balance on that knife edge - like an Olympic gymnast
on the beam...
This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.
So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.
> I think in both cases it would have allowed Alpha to be better
> accepted if DEC had shipped earlier with a few hacks, but then improved
> Tru64 as a better version was developed (*i.e.* replace the memory
> system, the I/O system, the TTY handler, the FS just to name a few that
> got rewritten from OSF/1 because folks thought they were 'weak').
But you can lose with that strategy too.
Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - a common outcome when
you have the experience/knowledge of the first pass.
Unfortunately, by that time it had the reputation as 'horribly slow and
inefficient', and in a lot of ways, never kicked that:
http://www.multicians.org/myths.html
Sigh, sometimes you can't win!
Noel
> From: "Theodore Y. Ts'o"
> To be fair, it's really easy to be wise after the fact.
Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".
> failed protocols and designs that collapsed of their own weight because
> architects added too much "maybe it will be useful in the future"
And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less have to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.
> Sometimes having architects being successful to add their "vision" to a
> product can be the worst thing that ever happened
A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').
It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.
> The problem is it's hard to figure out in advance which is poor vision
> versus brilliant engineering to cut down the design so that it is "as
> simple as possible", but nevertheless, "as complex as necessary".
Absolutely. But it can be done. Let's look (as an example) at that IPv3->IPv4
addressing decision.
One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) It was going to be a failure (in
which case, it probably didn't matter what was done), or ii) it was going to be
a success, in which case that 32-bit field was clearly going to be a crippling
problem.
With that in hand, there was no excuse for that decision.
I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.
Noel
> From: Tony Finch <dot(a)dotat.at>
> Was this written down anywhere?
Alas, no. It was a presentation at a group seminar, and used either hand-drawn
transparencies, or a white-board - don't recall exactly which. I later tried to
dig it up for use in Nimrod, but without success.
As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
Or something like that! :-)
The issue with 'top-down' is that you have to have some global 'authority' to
manage the top level - hand out chunks, etc, etc. (For a spectacular example
of where this can go, look at NSAPs.) And what do you do when you run out of
top-level space? (Although in the NSAP case, they had such a complex top
couple of layers, they probably would have avoided that issue. Instead, they
had the problem that their name-space was spectacularly ill-suited to path
selection [routing], since in very large networks, interface names
[addresses] must have a topological aspect if the path selection is to
scale. Although looking at the Internet nowadays, perhaps not!)
'Bottom-up' is not without problems of course (e.g. what if you want to add
another layer, e.g. to support potentially-nested virtual machines).
I'm not sure how well Dave understood the issue of path selection scaling at
the time he proposed it - it was very early on, '78 or so - since we didn't
understand path selection then as well as we do now. IIRC, I think he was
mostly interested in it as a way to avoid having to have an assignment
authority. The attraction for me was that it was easier to ensure that the
names had the needed topological aspect.
Noel
> From: Dave Horsfall
> idiots keep repeating those "quotes" ... in some sort of an effort to
> make the so-called "experts" look silly; a form of reverse
> jealousy/snobbery or something? It really pisses me off
You've just managed to hit one of my hot buttons.
I can't speak to the motivations of everyone who repeats these stories, but my
professional career has been littered with examples of poor vision from
technical colleagues (some of whom should have known better), against which I
(in my role as an architect, which is necessarily somewhere where long-range
thinking is - or should be - a requirement) have struggled again and again -
sometimes successfully, more often, not.
So I chose those two only because they are well-known examples - but, as you
correctly point out, they are poor examples, for a variety of reasons. But
they perfectly illustrate something I am _all_ too familiar with, and which
happens _a lot_. And the original situation I was describing (the MIT core
patent) really happened - see "Memories that Shaped an Industry", page 211.
Examples of poor vision are legion - and more importantly, often/usually seen
to be such _at the time_ by some people - who were not listened to.
Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
this vague memory of a quote from Gordon Bell admitting that was a mistake,
but I don't recall exactly where I saw it.) That very quickly became a major
limitation. I'm not sure why they did it (the number of contacts on a standard
DEC connector probably had something to do with it, connected to the fact that
the first PDP-11 had only 16-bit addressing anyway), but it should have been
obvious that it was not enough.
And a major one from the very start of my career: the decision to remove the
variable-length addresses from IPv3 and substitute the 32-bit addresses of
IPv4. Alas, I was too junior to be in the room that day (I was just down the
hall), but I've wished forever since that I was there to propose an alternate
path (the details of which I will skip) - that would have saved us all
countless billions (no, I am not exaggerating) spent on IPv6. Dave Reed was
pretty disgusted with that at the time, and history shows he was right.
Dave also tried to get a better checksum algorithm into TCP/UDP (I helped him
write the PDP-11 code to prove that it was plausible) but again, it got turned
down. (As did his wonderful idea for bottom-up address allocation, instead of
top-down. Oh well.) People have since discussed issues with the TCP/IP
checksum, but it's too late now to change it.
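For context, the checksum under discussion is the ones-complement sum of
16-bit words (written up later in RFC 1071). A minimal sketch of the standard
algorithm in C, not of any particular historical implementation:

#include <stddef.h>
#include <stdint.h>

/* Internet checksum: ones-complement sum of 16-bit words, with the
   carries folded back in.  One of its known weaknesses is that it is
   insensitive to reordering of 16-bit words. */
uint16_t in_cksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                    /* odd trailing byte, padded with zero */
        sum += (uint32_t)data[0] << 8;

    while (sum >> 16)                /* fold carries back into low 16 bits  */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}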
One place where I _did_ manage to win was in adding subnetting support to
hosts (in the Host Requirements WG); it was done the way I wanted, with the
result that when CIDR came along, even though it hadn't been foreseen at the
time we did subnetting, it required _no_ host changes of any kind. But
mostly I lost. :-(
So, is poor vision common? All too common.
Noel
On 2018-06-18 04:00, Ronald Natalie <ron(a)ronnatalie.com> wrote:
>> Well, it is an easy observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
Eh... Yes... But...
The "separate" bus for the semiconductor memory is just a second Unibus,
so the statement is still true. All (earlier) PDP-11 models had their
memory on the Unibus. Including the 11/45,50,55.
It's just that those models have two Unibuses.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
We lost the Father of Computing, Alan Turing, on this day when he took his
own life in 1954 (long story). Just imagine where computing would've been now...
Yes, there are various theories surrounding his death, such as a jealous
lover, the FBI knowing that he knew about Venona and could be compromised
as a result of his sexuality, etc. Unless they speak up (and they ain't),
we will never know.
Unix reference? Oh, that... No computable devices (read his paper), no
computers... And after dealing with a shitload of OSs in my career, I
daresay that there is no more computable OS than Unix. Sorry, penguins,
but you seem to require a fancy graphical interface these days, just like
Windoze.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Derek Fawcus <dfawcus+lists-tuhs(a)employees.org>
> my scan of it suggests that it was only the host part of the address which
> was extensible
Well, the division into 'net' and 'rest' does not appear to have been a hard
one at that point, as it later became.
> The other thing obviously missing in the IEN 28 version is the TTL
Yes... interesting!
> it has the DF flag in the TOS field, and an OP bit in the flags field
Yeah, small stuff like that got added/moved/removed around a lot.
> the CIDR vs A/B/C stuff didn't really change the rest.
It made packet processing in routers quite different; routing lookups, the
routing table, etc became much more complex (I remember that change)! Also in
hosts, which had not yet had their understanding of fields in the addresses
lobotomized away (RFC-1122, Section 3.3.1).
Yes, the impact on code _elsewhere_ in the stack was minimal, because the
overall packet format didn't change, and addresses were still 32 bits, but...
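To make the classful-vs-CIDR contrast concrete: before CIDR, a host or router
could derive the network mask from the address bits alone; with CIDR the mask
can only come from a longest-prefix-match lookup in the routing table. A
hedged sketch of the old classful rule in C (illustrative only, not anyone's
actual router code):

#include <stdint.h>

/* Classful derivation of the network mask from the address alone --
   roughly what pre-CIDR hosts and routers did before RFC 1122 3.3.1
   "lobotomized" that knowledge away. */
uint32_t classful_netmask(uint32_t addr)    /* addr in host byte order */
{
    if ((addr & 0x80000000) == 0)           /* class A: 0xxx...        */
        return 0xff000000;
    if ((addr & 0xc0000000) == 0x80000000)  /* class B: 10xx...        */
        return 0xffff0000;
    if ((addr & 0xe0000000) == 0xc0000000)  /* class C: 110x...        */
        return 0xffffff00;
    return 0xffffffff;                      /* class D/E: not unicast  */
}

/* With CIDR there is no such shortcut: the prefix length comes only
   from a longest-prefix-match lookup against the routing table. */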
> The other bit I find amusing are the various movements of the port
> numbers
Yeah, there was a lot of discussion about whether they were properly part of
the internetwork layer, or the transport. I'm not sure there's really a 'right'
answer; PUP:
http://gunkies.org/wiki/PARC_Universal_Packet
made them part of the internetwork header, and seemed to do OK.
I think we eventually decided that we didn't want to mandate a particular port
name size across all transports, and moved it out. This had the down-side that
there are some times when you _do_ want to have the port available to an
IP-only device, which is why ICMP messages return the first N bytes of the
data _after_ the IP header (since it's not clear where the port field[s] will
be).
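Concretely, an IP-only box that wants the ports out of an ICMP error has to
step over the quoted datagram's variable-length IP header first, which is
exactly why the quoted data is counted from the end of that header rather
than from a fixed offset. A rough sketch in C (assuming the error quotes at
least the IP header plus 8 bytes of data, as RFC 792 calls for; the function
name is mine):

#include <stdint.h>

/* Given a pointer to the original datagram quoted inside an ICMP error
   (IP header + first 8 bytes of its data), recover the source and
   destination ports for TCP or UDP.  The IP header length (IHL) is
   needed because options make the header variable-length. */
int ports_from_icmp_quote(const uint8_t *quoted,
                          uint16_t *sport, uint16_t *dport)
{
    unsigned ihl = (quoted[0] & 0x0f) * 4;   /* IP header length in bytes */
    uint8_t proto = quoted[9];               /* protocol field            */
    const uint8_t *th = quoted + ihl;        /* start of transport header */

    if (proto != 6 && proto != 17)           /* not TCP or UDP            */
        return -1;
    *sport = (uint16_t)(th[0] << 8 | th[1]); /* first 4 bytes of both TCP */
    *dport = (uint16_t)(th[2] << 8 | th[3]); /* and UDP are the two ports */
    return 0;
}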
But I found, working with PUP, there were some times when the defined ports
didn't always make sense with some protocols (although PUP didn't really have
a 'protocol' field per se); the interaction of 'PUP type' and 'socket' could
sometimes be confusing/problematic. So I personally felt that was at least as
good a reason to move them out. 'Ports' make no sense for routing protocols,
etc.
Overall, I think in the end, TCP/IP got that all right - the semantics of the
'protocol' field are clear and simple, and ports in the transport layer have
worked well; I can't think of any places (other than routers which want to
play games with connections) where not having ports in the internetwork layer
has been an issue.
Noel
> From: Derek Fawcus
> Are you able to point to any document which still describes that
> variable length scheme? I see that IEN 28 defines a variable length
> scheme (using version 2)
That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
here: IEN-21, Cerf, "TCP 3 Specification", Jan-78).
> and that IEN 41 defines a different variable length scheme, but is
> proposing to use version 4.
Right, that's a draft only (no code ever written for it), from just before the
meeting that substituted 32-bit addresses.
> (IEN 44 looks a lot like the current IPv4).
Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
stuff). :-)
Noel
> From: Johnny Billquist
> It's a separate image (/netnix) that gets loaded at boot time, but it's
> run in the context of the kernel.
ISTR reading that it runs in Supervisor mode (no doubt so it could use the
Supervisor mode virtual address space, and not have to go crazy with overlays
in the Kernel space).
Never looked at the code, though.
Noel
On 2018-06-17 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > It's a separate image (/netnix) that gets loaded at boot time, but it's
> > run in the context of the kernel.
>
> ISTR reading that it runs in Supervisor mode (no doubt so it could use the
> Supervisor mode virtual address space, and not have to go crazy with overlays
> in the Kernel space).
Yes. That rings a bell now that you mention it. Pretty sure you are correct.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Johnny Billquist
> incidentally have 18 data bits, but that is mostly ignored by all
> systems. I believe the KS-10 made use of that, though. And maybe the
> PDP-15.
The 18-bit data thing is a total kludge; they recycled the two bus parity
lines as data lines.
The first device that I know of that used it is the RK11-E:
http://gunkies.org/wiki/RK11_disk_controller#RK11-E
which is the same cards as the RK11-D, with a jumper set for 18-bit operation,
and a different clock crystal. The other UNIBUS interface that could do this
was the RH11 MASSBUS controller. Both were originally done for the PDP-15;
they were used with the UC15 Unichannel.
The KS10:
http://gunkies.org/wiki/KS10
wound up using the 18-bit RH11 hack, but that was many years later.
Noel
On 2018-06-16 04:00, Tom Ivar Helbekkmo <tih(a)hamartun.priv.no> wrote:
> Warner Losh <imp(a)bsdimp.com> writes:
>
>> It looks like retrobsd hasn't been active in the last couple of years
>> though. A cool accomplishment, but with some caveats. All the network
>> is in userland, not the kernel, for example.
> Isn't 2.11BSD networking technically in userland? I forget. Johnny?
No, networking in 2.11BSD is not in userland. But it's not a part of
/unix either. It's a separate image (/netnix) that gets loaded at boot
time, but it's run in the context of the kernel.
I'd have to go and check this if anyone wants details. It's been quite a
while since I was fooling around inside there. Or maybe someone else
remembers more details on how it integrates.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> The 8 pretty much had a base price in the $30k range in the mid to late
> 60s.
His statement was made in 1977 (ironically, the same year as the Apple
II).
(Not really that relevant, since he was apparently talking about 'smart
homes'; still, the history of DEC and personal computers is not a happy one;
perhaps that's why that quotation was taken up.)
> Later models used TTL and got down to a single 3U 'drawer'.
There was eventually a single-chip micro version, done in the mid-70's; it
was used in a number of DEC word-processing products.
Noel