On 2018-06-18 14:17, Noel Chiappa wrote:
> > The "separate" bus for the semiconductor memory is just a second Unibus
>
> Err, no. :-) There is a second UNIBUS, but... its source is a second port on
> the FASTBUS memory, the other port goes straight to the CPU. The other UNIBUS
> comes out of the CPU. It _is_ possible to join the two UNIBI together, but
> on machines which don't do that, the _only_ path from the CPU to the FASTBUS
> memory is via the FASTBUS.
Ah. You and Ron are right. I am confused.
So there were some previous PDP-11 models that did not have their memory
on the Unibus. The 11/45,50,55 accessed memory from the CPU not through
the Unibus, but through the fastbus, which was a pure memory bus, as far
as I understand. You (obviously) could also have memory on the Unibus,
but that would then be slower.
Ah, and there is a jumper to tell which addresses are served by the
fastbus, and the rest then go to the Unibus. Thanks, I had missed these
details before. (To be honest, I have never actually worked on any of
those machines.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole
> My experience is that more often than not, it's less a failure to see
> what a successful future might bring, and often one of well '*we don't
> need to do that now/costs too much/we don't have the time*.'
Right, which is why I later said a "successful architect has to pay _very_
close attention to both the 'here and now' (it has to be viable for
contemporary use, on contemporary hardware, with contemporary resources)".
They need to be sensitive to the (real and valid) concerns of the people who
are looking at today.
By the same token, though, the people you mention need to be sensitive to the
long-term picture. Too often they just blow it off, and focus _solely_ on
today. (I had a really bad experience with someone like that just before I
retired.) They need to understand, and accept, that that's just as serious an
error as an architect who doesn't care about 'workable today'.
In retrospect, I'm not sure you can fix people who are like that. I think the
only solution is to find an architect who _does_ respect the 'here and now',
and put them in control; that's the only way. IBM kinda-sorta did this with
the /360, and I think it showed/shows.
> I can imagine that he did not want to spend on anything that he thought
> was wasteful.
Understandable. But see above...
The art is in finding a path that leaves the future open (i.e. reduces future
costs, when you 'hit the wall'), without running up costs now.
A great example is the QBUS 22-bit expansion - and I don't know if this was
thought out beforehand, or if they just lucked out. (Given that the expanded
address pins were not specifically reserved for that, probably the latter.
Sigh - even with the experience of the UNIBUS, they didn't learn!)
Anyway... lots of Q18 devices (well, not DMA devices) work fine on Q22 because
of the BBS7 signal, which indicates that an I/O device register is being looked
at. Without that, Q18 devices would have had to either i) incur the cost now
of more bus address line transceivers, or ii) stop working when the bus was
upgraded to 22 address lines.
They managed to have their cake (fairly minimal costs now) and eat it too
(later expansion).
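(A rough sketch of why BBS7 saved the Q18 devices - in C, not in DEC's actual
gate-level logic, and the register offset below is made up: with BBS7 asserted
for any reference to the I/O page, a device only has to compare the low-order
13 bits of the bus address against its own register offset, so it never needs
to see, or buy transceivers for, address lines 18-21.)

    #include <stdbool.h>
    #include <stdint.h>

    #define MY_CSR_OFFSET 017564u   /* hypothetical device register offset */

    /* Device select logic: only the 13 bits within the 8 KB I/O page are
     * compared; BBS7 tells the device the I/O page is being addressed,
     * whether the bus has 18 or 22 address lines. */
    static bool device_selected(bool bbs7, uint32_t bus_addr)
    {
        return bbs7 && (bus_addr & 017777u) == MY_CSR_OFFSET;
    }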
> Just like I retold the Amdahl/Brooks story of the 8-bit byte and Amdahl
> thinking Brooks was nuts
Don't think I've heard that one?
>> the decision to remove the variable-length addresses from IPv3 and
>> substitute the 32-bit addresses of IPv4.
> I always wondered about the back story on that one.
My understanding is that the complexity of variable-length address support
(which impacted TCP, as well as IP) was impacting the speed/schedule for
getting stuff done. Remember, it was a very small effort, code-writing
resources were limited, etc.
(I heard, at the time, from someone who was there, that one implementer was
overheard complaining to Vint about the number of pointer registers available
at interrupt time in a certain operating system. I don't think it was _just_
that, but rather the larger picture of the overall complexity cost.)
> 32-bits seemed infinite in those days and nobody expected the network
> to scale to the size it is today and will grow to in the future
Yes, but like I said: they failed to ask themselves 'what are things going to
look like in 10 years if this thing is a success'? Heck, it didn't even last
10 years before they had to start kludging (adding A/B/C addresses)!
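(For anyone who hasn't had to deal with it, here is roughly what the A/B/C
kludge amounted to - carving the fixed 32-bit field into three fixed net/host
splits, selected by the leading bits. Just an illustrative sketch in C, not
anybody's actual code:)

    #include <stdint.h>
    #include <stdio.h>

    /* Classful decoding of a 32-bit address (host byte order). */
    static void classify(uint32_t a)
    {
        if ((a & 0x80000000u) == 0)                  /* 0...  : class A */
            printf("A: net %u, host %u\n", a >> 24, a & 0xFFFFFFu);
        else if ((a & 0xC0000000u) == 0x80000000u)   /* 10..  : class B */
            printf("B: net %u, host %u\n", a >> 16, a & 0xFFFFu);
        else if ((a & 0xE0000000u) == 0xC0000000u)   /* 110.  : class C */
            printf("C: net %u, host %u\n", a >> 8, a & 0xFFu);
        else
            printf("D/E (multicast/reserved)\n");
    }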
And ARP, well done as it is (its ability to handle just about any combo of
protocol and hardware addresses is because DCP and I saw eye-to-eye about
generality), is still a kludge. (Yes, yes, I know it's another binding layer,
and in some ways, another binding layer is never a bad thing, but...) The IP
architectural concept was to carry local hardware addresses in the low part of
the IP address. Once Ethernet came out, that was toast.
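(If it helps, this is all the generality amounts to: the lengths of both kinds
of address are carried in the packet itself, so nothing about them is baked
in. A from-memory sketch of the RFC-826 header, not copied from any particular
stack:)

    #include <stdint.h>

    struct arp_header {
        uint16_t ar_hrd;  /* hardware address space, e.g. 1 = Ethernet   */
        uint16_t ar_pro;  /* protocol address space, e.g. 0x0800 = IP    */
        uint8_t  ar_hln;  /* hardware address length, in bytes           */
        uint8_t  ar_pln;  /* protocol address length, in bytes           */
        uint16_t ar_op;   /* 1 = request, 2 = reply                      */
        /* Followed by: sender hardware address (ar_hln bytes), sender
         * protocol address (ar_pln bytes), target hardware address,
         * target protocol address - all variable length, which is why
         * they can't be declared as fixed fields here.                  */
    };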
>> So, is poor vision common? All too common.
> But to be fair, you can also end up with being like DEC and often late
> to the market.
Gotta do a perfect job of balancing on that knife edge - like an Olympic gymnast
on the beam...
This is particularly true with comm system architecture, which has about the
longest lifetime of _any_ system. If someone comes up with a new editor or OS
paradigm, people can convert to it if they want. But converting to a new
communication system - if you convert, you cut yourself off. A new one has to
be a _huge_ improvement over the older gear (as TCP/IP was) before conversion
makes sense.
So networking architects have to pay particularly strong attention to the long
term - or should, if they are to be any good.
> I think in both cases it would have allowed Alpha to be better
> accepted if DEC had shipped earlier with a few hacks, but then improved
> Tru64 as a better version was developed (*i.e.* replace the memory
> system, the I/O system, the TTY handler, the FS just to name a few that
> got rewritten from OSF/1 because folks thought they were 'weak').
But you can lose with that strategy too.
Multics had a lot of sub-systems re-written from the ground up over time, and
the new ones were always better (faster, more efficient) - a common result when
you have the experience/knowledge of the first pass.
Unfortunately, by that time it had the reputation as 'horribly slow and
inefficient', and in a lot of ways, never kicked that:
http://www.multicians.org/myths.html
Sigh, sometimes you can't win!
Noel
> From: "Theodore Y. Ts'o"
> To be fair, it's really easy to be wise after the fact.
Right, which is why I added the caveat "seen to be such _at the time_ by some
people - who were not listened to".
> failed protocols and designs that collapsed of their own weight because
> architects added too much "maybe it will be useful in the future"
And there are also designs which failed because their designers were too
un-ambitious! Converting to a new system has a cost, and if the _benefits_
(which more or less have to mean new capabilities) of the new thing don't
outweigh the costs of conversion, it too will be a failure.
> Sometimes having architects succeed in adding their "vision" to a
> product can be the worst thing that ever happened
A successful architect has to pay _very_ close attention to both the 'here and
now' (it has to be viable for contemporary use, on contemporary hardware, with
contemporary resources), and also the future (it has to have 'room to grow').
It's a fine edge to balance on - but for an architecture to be a great
success, it _has_ to be done.
> The problem is it's hard to figure out in advance which is poor vision
> versus brilliant engineering to cut down the design so that it is "as
> simple as possible", but nevertheless, "as complex as necessary".
Absolutely. But it can be done. Let's look (as an example) at that IPv3->IPv4
addressing decision.
One of two things was going to be true of the 'Internet' (that name didn't
exist then, but it's a convenient tag): i) It was going to be a failure (in
which case, it probably didn't matter what was done), or ii) it was going to be
a success, in which case that 32-bit field was clearly going to be a crippling
problem.
With that in hand, there was no excuse for that decision.
I understand why they ripped out variable-length addresses (I was just about
to start writing router code, and I know how hard it would have been), but in
light of the analysis immediately above, there was no excuse for looking
_only_ at the here and now, and not _also_ looking to the future.
Noel
> From: Tony Finch <dot(a)dotat.at>
> Was this written down anywhere?
Alas, no. It was a presentation at a group seminar, and used either hand-drawn
transparencies, or a white-board - don't recall exactly which. I later tried to
dig it up for use in Nimrod, but without success.
As best I now recall, the concept was that instead of the namespace having a
root at the top, from which you had to allocate downward (and then recurse),
it built _upward_ - if two previously un-connected chunks of graph wanted to
unite in a single system, they allocated a new naming layer on top, in which
each existing system appeared as a constituent.
Or something like that! :-)
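(A toy rendering of the idea - my own, not Dave's notation: two previously
independent naming systems unite by allocating a brand-new top layer, with each
old root becoming a constituent of it, rather than by asking some pre-existing
global authority for space. Error checking omitted.)

    #include <stdlib.h>

    struct name_node {
        const char        *label;
        struct name_node **children;
        int                n_children;
    };

    /* Unite two independent roots under a freshly-created top layer. */
    static struct name_node *unite(struct name_node *a, struct name_node *b,
                                   const char *top_label)
    {
        struct name_node *top = malloc(sizeof *top);
        top->label       = top_label;
        top->n_children  = 2;
        top->children    = malloc(2 * sizeof *top->children);
        top->children[0] = a;
        top->children[1] = b;
        return top;
    }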
The issue with 'top-down' is that you have to have some global 'authority' to
manage the top level - hand out chunks, etc, etc. (For a spectacular example
of where this can go, look at NSAP's.) And what do you do when you run out of
top-level space? (Although in the NSAP case, they had such a complex top
couple of layers, they probably would have avoided that issue. Instead, they
had the problem that their name-space was spectacularly ill-suited to path
selection [routing], since in very large networks, interface names
[addresses] must have a topological aspect if the path selection is to
scale. Although looking at the Internet nowadays, perhaps not!)
'Bottom-up' is not without problems of course (e.g. what if you want to add
another layer, e.g. to support potentially-nested virtual machines).
I'm not sure how well Dave understood the issue of path selection scaling at
the time he proposed it - it was very early on, '78 or so - since we didn't
understand path selection then as well as we do now. IIRC, I think he was
mostly interested in it as a way to avoid having to have an assignment
authority. The attraction for me was that it was easier to ensure that the
names had the needed topological aspect.
Noel
> From: Dave Horsfall
> idiots keep repeating those "quotes" ... in some sort of an effort to
> make the so-called "experts" look silly; a form of reverse
> jealousy/snobbery or something? It really pisses me off
You've just managed to hit one of my hot buttons.
I can't speak to the motivations of everyone who repeats these stories, but my
professional career has been littered with examples of poor vision from
technical colleagues (some of whom should have known better), against which I
(in my role as an architect, which is necessarily somewhere where long-range
thinking is - or should be - a requirement) have struggled again and again -
sometimes successfully, more often, not.
So I chose those two only because they are well-known examples - but, as you
correctly point out, they are poor examples, for a variety of reasons. But
they perfectly illustrate something I am _all_ too familiar with, and which
happens _a lot_. And the original situation I was describing (the MIT core
patent) really happened - see "Memories that Shaped an Industry", page 211.
Examples of poor vision are legion - and more importantly, often/usually seen
to be such _at the time_ by some people - who were not listened to.
Let's start with the UNIBUS. Why does it have only 18 address lines? (I have
this vague memory of a quote from Gordon Bell admitting that was a mistake,
but I don't recall exactly where I saw it.) That very quickly became a major
limitation. I'm not sure why they did it (the number of contacts on a standard
DEC connector probably had something to do with it, connected to the fact that
the first PDP-11 had only 16-bit addressing anyway), but it should have been
obvious that it was not enough.
And a major one from the very start of my career: the decision to remove the
variable-length addresses from IPv3 and substitute the 32-bit addresses of
IPv4. Alas, I was too junior to be in the room that day (I was just down the
hall), but I've wished forever since that I was there to propose an alternate
path (the details of which I will skip) - that would have saved us all
countless billions (no, I am not exaggerating) spent on IPv6. Dave Reed was
pretty disgusted with that at the time, and history shows he was right.
Dave also tried to get a better checksum algorithm into TCP/UDP (I helped him
write the PDP-11 code to prove that it was plausible) but again, it got turned
down. (As did his wonderful idea for bottom-up address allocation, instead of
top-down. Oh well.) People have since discussed issues with the TCP/IP
checksum, but it's too late now to change it.
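(For reference, the checksum that did go in - and stayed - is just the
one's-complement sum of 16-bit words, per RFC-1071. A minimal sketch below; I
won't try to reconstruct Dave's alternative from memory, so this only shows
what we were left with:)

    #include <stddef.h>
    #include <stdint.h>

    uint16_t in_cksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                 /* add up 16-bit words          */
            sum += (uint32_t)data[0] << 8 | data[1];
            data += 2;
            len  -= 2;
        }
        if (len == 1)                     /* pad a trailing odd byte      */
            sum += (uint32_t)data[0] << 8;
        while (sum >> 16)                 /* fold the carries back in     */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)~sum;            /* one's complement of the sum  */
    }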
One place where I _did_ manage to win was in adding subnetting support to
hosts (in the Host Requirements WG); it was done the way I wanted, with the
result that when CIDR came along, even though it hadn't been foreseen at the
time we did subnetting, it required _no_ host changes of any kind. But
mostly I lost. :-(
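(The subnet win, in a nutshell: the host just applies a configured mask, and
never infers anything from the address bits themselves - so a classless prefix
later worked with exactly the same code. A minimal sketch, not the RFC text:)

    #include <stdbool.h>
    #include <stdint.h>

    /* Is the destination on my directly-attached network, or does it have
     * to go to a gateway?  The mask is purely configuration; nothing is
     * derived from the address class, so CIDR prefixes just work.        */
    static bool on_local_net(uint32_t dst, uint32_t my_addr, uint32_t my_mask)
    {
        return (dst & my_mask) == (my_addr & my_mask);
    }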
So, is poor vision common? All too common.
Noel
On 2018-06-18 04:00, Ronald Natalie <ron(a)ronnatalie.com> wrote:
>> Well, it is an easily observable fact that before the PDP-11/70, all PDP-11 models had their memory on the Unibus. So it was not only "an ability at the lower end", b
> That’s not quite true. While the 18 bit addressed machines all had their memory directly accessible from the Unibus, the 45/50/55 had a separate bus for the (then new) semiconductor (bipolar or MOS) memory.
Eh... Yes... But...
The "separate" bus for the semiconductor memory is just a second Unibus,
so the statement is still true. All (earlier) PDP-11 models had their
memory on the Unibus. Including the 11/45,50,55.
It's just that those models have two Unibuses.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
We lost the Father of Computing, Alan Turing, on this day when he took his own
life in 1954 (long story). Just imagine where computing would've been now...
Yes, there are various theories surrounding his death, such as a jealous
lover, the FBI knowing that he knew about Venona and could be compromised
as a result of his sexuality, etc. Unless they speak up (and they ain't),
we will never know.
Unix reference? Oh, that... No computable devices (read his paper), no
computers... And after dealing with a shitload of OSs in my career, I
daresay that there is no more computable OS than Unix. Sorry, penguins,
but you seem to require a fancy graphical interface these days, just like
Windoze.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Derek Fawcus <dfawcus+lists-tuhs(a)employees.org>
> my scan of it suggests that it was only the host part of the address which
> was extensible
Well, the division into 'net' and 'rest' does not appear to have been a hard
one at that point, as it later became.
> The other thing obviously missing in the IEN 28 version is the TTL
Yes... interesting!
> it has the DF flag in the TOS field, and an OP bit in the flags field
Yeah, small stuff like that got added/moved/removed around a lot.
> the CIDR vs A/B/C stuff didn't really change the rest.
It made packet processing in routers quite different; routing lookups, the
routing table, etc became much more complex (I remember that change)! Also in
hosts, which had not yet had their understanding of fields in the addresses
lobotomized away (RFC-1122, Section 3.3.1).
Yes, the impact on code _elsewhere_ in the stack was minimal, because the
overall packet format didn't change, and addresses were still 32 bits, but...
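(To make 'much more complex' a bit more concrete: a classful lookup was
essentially an exact match on the class-derived network number, whereas CIDR
needs a longest-matching-prefix search. A toy linear-scan sketch - real
routers obviously use tries and the like:)

    #include <stdint.h>

    struct route {
        uint32_t prefix;    /* network prefix, host byte order      */
        uint32_t mask;      /* contiguous high-order bits set       */
        int      ifindex;   /* where a matching packet gets sent    */
    };

    /* Return the index of the longest matching prefix, or -1 if none. */
    static int lookup(const struct route *tab, int n, uint32_t dst)
    {
        int best = -1;
        uint32_t best_mask = 0;

        for (int i = 0; i < n; i++)
            if ((dst & tab[i].mask) == tab[i].prefix &&
                (best < 0 || tab[i].mask > best_mask)) {
                best = i;                 /* longer prefix wins          */
                best_mask = tab[i].mask;
            }
        return best;
    }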
> The other bit I find amusing are the various movements of the port
> numbers
Yeah, there was a lot of discussion about whether they were properly part of
the internetwork layer, or the transport. I'm not sure there's really a 'right'
answer; PUP:
http://gunkies.org/wiki/PARC_Universal_Packet
made them part of the internetwork header, and seemed to do OK.
I think we eventually decided that we didn't want to mandate a particular port
name size across all transports, and moved it out. This had the down-side that
there are some times when you _do_ want to have the port available to an
IP-only device, which is why ICMP messages return the first N bytes of the
data _after_ the IP header (since it's not clear where the port field[s] will
be).
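(The convention that rescues this in practice, for what it's worth: both TCP
and UDP happen to put the port pair in the first four bytes of their headers,
so anything that holds the IP header plus those leading bytes - e.g. the copy
carried in an ICMP error - can dig the ports out for those two transports. A
sketch, assuming the usual layouts:)

    #include <stdint.h>

    struct port_pair { uint16_t src, dst; };

    /* 'orig' points at the copy of the offending IP header carried in an
     * ICMP error message; the transport header follows it immediately.   */
    static struct port_pair ports_from_icmp_copy(const uint8_t *orig)
    {
        unsigned ihl = (orig[0] & 0x0F) * 4;     /* IP header length, bytes */
        const uint8_t *th = orig + ihl;          /* start of TCP/UDP header */
        struct port_pair p;

        p.src = (uint16_t)((th[0] << 8) | th[1]);  /* fields are big-endian */
        p.dst = (uint16_t)((th[2] << 8) | th[3]);
        return p;
    }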
But I found, working with PUP, that there were times when the defined ports
didn't make sense with some protocols (although PUP didn't really have
a 'protocol' field per se); the interaction of 'PUP type' and 'socket' could
sometimes be confusing/problematic. So I personally felt that was at least as
good a reason to move them out. 'Ports' make no sense for routing protocols,
etc.
Overall, I think in the end, TCP/IP got that all right - the semantics of the
'protocol' field are clear and simple, and ports in the transport layer have
worked well; I can't think of any places (other than routers which want to
play games with connections) where not having ports in the internetwork layer
has been an issue.
Noel
> From: Derek Fawcus
> Are you able to point to any document which still describes that
> variable length scheme? I see that IEN 28 defines a variable length
> scheme (using version 2)
That's the one; Version 2 of IP, but it was for Version 3 of TCP (described
here: IEN-21, Cerf, "TCP 3 Specification", Jan-78).
> and that IEN 41 defines a different variable length scheme, but is
> proposing to use version 4.
Right, that's a draft only (no code ever written for it), from just before the
meeting that substituted 32-bit addresses.
> (IEN 44 looks a lot like the current IPv4).
Because it _is_ the current IPv4 (well, modulo the class A/B/C addressing
stuff). :-)
Noel
> From: Johnny Billquist
> It's a separate image (/netnix) that gets loaded at boot time, but it's
> run in the context of the kernel.
ISTR reading that it runs in Supervisor mode (no doubt so it could use the
Supervisor mode virtual address space, and not have to go crazy with overlays
in the Kernel space).
Never looked at the code, though.
Noel