When I took Computer Architecture, "reasoning" about out-of-order execution
involved 30-page worksheets on which we tracked the state of the Tomasulo
algorithm through each cycle. It was ludicrously slow work, and it wouldn't
be much fun even with a computerized tool to step through things for you.
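For anyone who never had the pleasure: the bookkeeping those worksheets drilled
can be sketched in a few lines of Python. This is a toy of my own invention, a
single-issue, one-ALU model with reservation stations and a common data bus,
not any real machine's design:

```python
# Toy Tomasulo bookkeeping: reservation stations plus a common data bus
# (CDB). Illustrative only -- real designs add load/store queues,
# multiple issue, and speculation.

class RS:
    def __init__(self, name):
        self.name = name
        self.busy = False
        self.op = None
        self.vj = self.vk = None    # operand values, once known
        self.qj = self.qk = None    # producing station, if not yet known

def issue(rs_pool, reg_status, regs, op, dst, src1, src2):
    """Place an instruction in a free station, recording for each operand
    either its value or the station that will produce it."""
    rs = next(r for r in rs_pool if not r.busy)
    rs.busy, rs.op = True, op
    for field, src in (("j", src1), ("k", src2)):
        if reg_status.get(src):                  # value still in flight
            setattr(rs, "q" + field, reg_status[src])
        else:
            setattr(rs, "v" + field, regs[src])
            setattr(rs, "q" + field, None)
    reg_status[dst] = rs.name                    # dst now owned by this RS
    rs.dst = dst
    return rs

def broadcast(rs_pool, reg_status, regs, rs, value):
    """Retire rs: put its result on the CDB so waiting stations and the
    register file pick it up."""
    for r in rs_pool:
        if r.busy and r.qj == rs.name:
            r.vj, r.qj = value, None
        if r.busy and r.qk == rs.name:
            r.vk, r.qk = value, None
    if reg_status.get(rs.dst) == rs.name:
        regs[rs.dst] = value
        del reg_status[rs.dst]
    rs.busy = False

regs = {"r1": 3, "r2": 4, "r3": 0, "r4": 0}
reg_status = {}
pool = [RS("add1"), RS("add2")]

a = issue(pool, reg_status, regs, "+", "r3", "r1", "r2")  # r3 = r1 + r2
b = issue(pool, reg_status, regs, "+", "r4", "r3", "r1")  # r4 = r3 + r1
print(b.qj)                    # "add1": the second add waits on the first
broadcast(pool, reg_status, regs, a, a.vj + a.vk)
broadcast(pool, reg_status, regs, b, b.vj + b.vk)
print(regs["r3"], regs["r4"])  # 7 10
```

Even at this scale you can see why the worksheets ran to 30 pages: every
issue and every broadcast touches state scattered across stations and the
register-status table.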
If you're talking about a modern Intel CPU, where your compiler emits CISC
instructions that are actually implemented as RISC-like micro-ops by the
decoder and microcode, which in turn get renamed and reordered internally by
the CPU... it's hard to fault programmers for thinking at the level of the
instruction set that's presented to them, even if it looks like a PDP-11.
The above should not be read as an endorsement of the CPU status quo, of course :)
john
------- Original Message -------
On Friday, September 3rd, 2021 at 10:28 AM, Larry McVoy <lm@mcvoy.com> wrote:
I am exactly as Adam described, still thinking like it
is a PDP-11.
Such an understandable machine. For me, out-of-order execution kind
of blew up my brain; that's when I stopped doing serious kernel work,
because I just couldn't form a mental model of how you reasoned about it.
Though I was talking to someone about it recently, maybe Clem, and
came to the conclusion that it is fine; we already sort of had this
mess with pipelines. So maybe it is fine, but out-of-order still bugs my
brain.
On Fri, Sep 03, 2021 at 10:10:57AM -0700, Adam Thornton wrote:
Much of the problem, I think, is that:
1. an idealized PDP-11 (I absolutely take Warner's point that that
idealization never really existed) is a sufficiently simple model that a
Bear Of Little Brain, such as myself, can reason about what's going to
happen in response to a particular sequence of instructions, and get fairly
proficient in instructing the machine to do so in a non-geological
timeframe.
2. a modern CPU? Let alone SoC? Fuggedaboutit unless you're way, way
smarter than I am. (I mean, I do realize that this particular venue has a
lot of those people in it...but, really, those are people with
extraordinary minds.)
There are enough people in the world capable of doing 1 and not 2 that we
can write software that usually mostly kinda works and often gets stuff
done before collapsing in a puddle of nasty-smelling goo. There aren't
many people at all capable of 2, and as the complexity of systems
increases, that number shrinks.
In short, this ends up being the same argument that comes around every so
often, "why are you people still pretending that the computer is a PDP-11
when it clearly isn't?" Because, as with the keys and the streetlight,
that's what we have available to us. Only a grossly oversimplified model
fits into our heads.
Adam
On Fri, Sep 3, 2021 at 8:57 AM Warner Losh imp@bsdimp.com wrote:
> On Wed, Sep 1, 2021 at 4:00 PM Dan Cross crossd@gmail.com wrote:
>
> > I'm curious about other peoples' thoughts on the talk and the overall
> > topic?
>
> My comment is that the mental map that he presents has always been a lie.
> At least it's been a lie from a very early time.
>
> Even in Unibus/Qbus days, the add-in cards had some kind of processor
> on them from an early time. Several of the VAX boards had 68000 or similar
> CPUs that managed memory. Even the simpler MFM boards had buffer
> memory that needed to be managed before the DMA/PIO pulled it out
> of the card. There's always been an element of different address spaces
> with different degrees of visibility into those address spaces.
>
> What has changed is that all of these things are now on the SoC die, so
> you have good visibility (well, as good as the docs) into these things.
> The number of different things has increased, and the need for cross-domain
> knowledge has increased.
>
> The simplistic world view was even inaccurate at the start....
>
> Warner
Larry McVoy            lm at mcvoy.com             http://www.mcvoy.com/lm