Larry - it's the compiler (code generator) folks that I really feel bad
for. They had to deal with the realities of the ISA in many ways more than
we do, and for them it's getting worse and worse. BTW - there was a
misstatement in a previous message. A current CISC system like Intel 64
is not implemented as RISC µcode, nor are the current more RISCy machines
like SPARC and Alpha, much less the StrongARM and its followers. What
they are internally are *data flow machines*, which is why you get a
mix of instruction reordering, scoreboarding, and all sorts of
complexities that blow our minds.
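To make that concrete: here is a small C11 litmus-test sketch of my own
(the names, the relaxed atomics, and the trial loop are all just
illustration, not anyone's production code). In any simple interleaving
of the two threads, at least one of r1/r2 must end up 1; on a real
out-of-order machine with store buffers, both can occasionally come
back 0 - exactly the behavior our PDP-11-shaped mental model says is
impossible.

    /* Store-buffer litmus test: can both loads see 0?           */
    /* A sequential-interleaving model says no; an out-of-order  */
    /* machine with store buffering says "sometimes yes" (the    */
    /* window is tiny with fresh threads; real litmus harnesses  */
    /* pin and spin, but the point stands).                      */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int x, y;
    int r1, r2;

    void *t1(void *arg) {
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        r1 = atomic_load_explicit(&y, memory_order_relaxed);
        return NULL;
    }

    void *t2(void *arg) {
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        r2 = atomic_load_explicit(&x, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < 1000000; i++) {
            atomic_store(&x, 0);
            atomic_store(&y, 0);
            pthread_t a, b;
            pthread_create(&a, NULL, t1, NULL);
            pthread_create(&b, NULL, t2, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            if (r1 == 0 && r2 == 0)
                printf("both loads saw 0 at trial %d\n", i);
        }
        return 0;
    }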
At least at the OS level we have been used to doing things in parallel,
with exceptions and interrupts occurring, and we have reasoned our way
through things. Butler Lampson and Leslie Lamport gave us a parallel
calculus to help verify things (although Butler once observed in an old
SOSP talk that the problem with parallelism is: what does 'single-step
the processor' even mean anymore?).
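And here is the flip side, again as a sketch of my own (hypothetical
producer/consumer names, C11 atomics): the kind of reasoning Lamport's
happens-before relation buys back. Pair a release store with an acquire
load and the simple sequential argument works again, even on an
out-of-order machine - the assert below can never fire.

    /* Message passing: release/acquire restores the ordering    */
    /* that a sequential mental model expects.                   */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <assert.h>

    int data;             /* plain payload, no atomics needed    */
    atomic_int ready;     /* synchronization flag                */

    void *producer(void *arg) {
        data = 42;                                       /* A */
        /* release: everything before this store (A) is visible
           to any thread that acquires the flag */
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    void *consumer(void *arg) {
        /* acquire: once we see ready == 1, the producer's writes
           before its release store are visible here too */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;   /* spin */
        assert(data == 42);   /* guaranteed; cannot fail */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }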
So the idea is that while the processor is not a PDP-10 or PDP-11, much
less a 360/91 or a CDC-6600, we build a model in our heads that
simplifies the machine(s) as much as possible. We make sure at least
that model is correct, and then build up more complexity from there.
To me, the problem is that we too often do a poor job of what should be
the simple stuff, and we continue to make it too complicated. Not to
pick on any one group/code base, but Jon's recent observation about the
Linux kernel FS interface is a prime example. It's not the processor
that was made complex; it's the SW wanting to be all things to all
people.
To me, what Unix started and succeeded at in its time, and what Plan9
clearly attempted in its time (but failed at commercially), was to mask,
if not toss out, as much of the complexity of the HW as possible and get
down to a couple of simple, common ideas that all programs could agree
on. It goes back to the idea of the bear of 'very little brain': try to
expose the simplest way to do computation.
Two of the best Unix talks/papers ever, Rob's "cat -v is a bad idea" and
Tom's "All Chips that Fit," have morphed into "I have 64 bits of address
space, I can link anything into my framework" and "what can I power and
cool in my current process technology" [a SoC is no different from the
board-level products that some of us lived through].
I recently read a suggestion that the best way to teach beginning
students to be "good programmers" was to "introduce them to as many
frameworks as possible and teach as little theory as they need." I
nearly lost my dinner. Is this what programming has come to?
Frameworks/Access Methods/Smart Objects .... To be fair, my own employer
is betting on DPC++ and believing OneAPI is the one ring to rule them
all.
There is a lot to be said for "small is beautiful." How did we get from
Sixth Edition UNIX with K&R1 to today? One transistor and one line of
code at a time.
On Fri, Sep 3, 2021 at 1:29 PM Larry McVoy <lm(a)mcvoy.com> wrote:
I am exactly as Adam described, still thinking like it
is a PDP-11.
Such an understandable machine. For me, out-of-order execution kind
of blew up my brain; that's when I stopped doing serious kernel work,
I just couldn't get to a mental model of how you reasoned about that.
Though I was talking to someone about it, maybe Clem, recently, and
came to the conclusion that it is fine; we already sort of had this
mess with pipelines. So maybe it is fine, but out-of-order bugs my
brain.
On Fri, Sep 03, 2021 at 10:10:57AM -0700, Adam Thornton wrote:
Much of the problem, I think, is that:
1) an idealized PDP-11 (I absolutely take Warner's point that that
idealization never really existed) is a sufficiently simple model that a
Bear Of Little Brain, such as myself, can reason about what's going to
happen in response to a particular sequence of instructions, and get
fairly proficient in instructing the machine to do so in a
non-geological timeframe.
2) a modern CPU? Let alone SoC? Fuggedaboutit unless you're way, way
smarter than I am. (I mean, I do realize that this particular venue has
a lot of those people in it...but, really, those are people with
extraordinary minds.)
There are enough people in the world capable of doing 1 and not 2 that we
can write software that usually mostly kinda works and often gets stuff
done before collapsing in a puddle of nasty-smelling goo. There aren't
many people at all capable of 2, and as the complexity of systems
increases, that number shrinks.
In short, this ends up being the same argument that comes around every so
often, "why are you people still pretending that the computer is a PDP-11
when it clearly isn't?" Because, as with the keys and the streetlight,
that's what we have available to us. Only a grossly oversimplified model
fits into our heads.
Adam
On Fri, Sep 3, 2021 at 8:57 AM Warner Losh <imp(a)bsdimp.com> wrote:
>
>
> On Wed, Sep 1, 2021 at 4:00 PM Dan Cross <crossd(a)gmail.com> wrote:
>
>> I'm curious about other peoples' thoughts on the talk and the overall
>> topic?
>>
>
> My comment is that the mental map that he presents has always been a
> lie. At least it's been a lie from a very early time.
>
> Even in Unibus/Qbus days, the add-in cards had some kind of processor
> on them from an early time. Several of the VAX boards had 68000 or
> similar CPUs that managed memory. Even the simpler MFM boards had
> buffer memory that needed to be managed before the DMA/PIO pulled it
> out of the card. There's always been an element of different address
> spaces with different degrees of visibility into those address spaces.
>
> What has changed is that all of these things are now on the SoC die,
> so you have good visibility (well, as good as the docs) into these
> things. The number of different things has increased, and the need
> for cross-domain knowledge has increased.
>
> The simplistic world view was even inaccurate at the start....
>
> Warner
>
--
---
Larry McVoy  lm at mcvoy.com  http://www.mcvoy.com/lm