Bakul Shah writes:
I have mixed feelings about this. Unix didn't "throw away"
the mainframe world of computing. It simply created a new
ecosystem, more suited for the microprocessor age. For IBM it
was perhaps the classic Innovator's Dilemma. Similarly now we
have (mostly) the Linux ecosystem, while the actual hardware
has diverged a lot from the C memory model. There are
security issues. There is firmware running on these systems
about which the OS knows nothing. We have processors like
Esperanto Tech's chip with 1088 64-bit RISC-V cores, each
with its own vector/tensor unit, 160MB of on-chip SRAM, and
23.8B transistors, yet software can take only limited
advantage of it. We have highly performant GPUs, but
programming them is vendor-dependent and
a pain. If someone can see a clear path through all this,
and create a new software system, they will simply generate a
new ecosystem and not worry about 50 years worth of work.
You're kind of reminding me of the HEP (Heterogeneous
Element Processor) talk that I saw at, I think, USENIX in
Santa Monica. My opinion is that it was a "kitchen sink"
project: let's throw in a few of these and a few of those,
and so on. Also
analogous to what I saw in the housing market up here when
people started cashing in their California huts for Oregon
mansions - when we lived in California we could afford two
columns out front, but now we can afford 6 columns, 8
porticos, 6 dormers, 4 turrets, and so on. Just because you
can build it doesn't keep it from being an ugly mess.
So my question about many of these processors is: has anybody
given any thought to system architecture? Most likely all
of us have had to suffer with some piece of spiffy hardware
that was pretty much unprogrammable. Do the performance
numbers mean anything if they can't be achieved in an actual
system configuration?
Jon