Bakul Shah writes:
On Sep 17, 2021, at 11:56 AM, Jon Steinhart
<jon(a)fourwinds.com> wrote:
Bakul Shah writes:
IMHO the real issue is that the software folks are *not* providing
and *cannot* provide any sort of guidance for general purpose
computing, as the memory underlying modern programming languages
is so far removed from reality. The situation is sorta like what
happens with people with newly acquired incredible wealth but no
background in how to spend or manage it wisely (and I don't mean
*investing* to get more wealth). At least in that case there are
people who can help you, and there is a tremendous need for them. Here we can put
billions and billions of gates on a chip and even do wafer scale
integration but these gates are not fungible like money.
Are you sure about this? Or is it just that the hardware folks got to
work without ever asking software/systems people for input?
Let me ask you.
You have a budget of 24 Billion transistors (& a much more limited
power budget). How would you spend it to maximize general purpose
computing?
Note that Esperanto Tech. was founded by Dave Ditzel, not exactly a
h/w newbie. [EE/CS, worked with Dave Patterson, CTO @ Sun, Transmeta
founder etc. etc.]. Now it is entirely possible that Esperanto folks
are building such processors for special purpose applications like
machine learning, but the fact is we simply have incredible riches
of transistors that we don't know how to spend wisely. [My opinion, of course]
Tough question to answer. I would say that maybe we're getting to the
point where a dream of mine could happen, which is eliminating the
traditional multi-tasking operating system and just having one processor
per process. One could even call it "hardware containers". Gak! A big
unsolved problem is the incompatibility between the processes for this sort
of stuff and DRAM processes. I had hoped that some of the 3D stuff that I
saw Dave D. talk about a few years ago would have panned out by now.
So just because I've never heard anybody else say this, and because we
have named laws like Cole's law, I have Steinhart's law. It states that
it is a bad investment to give money to people who have had successful startup
companies. My opinion is that people are lucky to get it right once, and
very few get it right more than once, so I wouldn't bet on a company
founded by people who have already done successful companies. A corollary
is that to find a successful startup, look for someone who has been
perfecting an idea over a handful of failed startup companies. In many
respects, Apple fits that model; they let other people invent stuff and
then they "do it right".
The more interesting question that you raise is, why would you expect
better stuff because we can now do tens of billions of transistors?
From the software side, we have a mind-boggling amount of memory, at
least for those of us who began with 4K or less, and I can't see that
it's used wisely. I've had people tell me that I was wasting my time
being efficient because memory was cheaper than my time. Which I guess
was true until you multiplied it by the number of systems. As an EE/CS
guy, I don't really expect the HW people to be any different than the
SW people now that designing HW is just writing SW.
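A back-of-the-envelope sketch of that multiplication (all the numbers
below are made up purely for illustration, not anything I've measured):

    # Hypothetical figures: when does time spent saving memory pay off
    # across every system that ships?
    engineer_cost = 40 * 100.0        # 40 hours of tuning at $100/hour
    memory_saved_mb = 64              # memory saved on each system, in MB
    dram_cost_per_mb = 0.005          # rough DRAM cost per MB, in dollars
    units_shipped = 10_000_000        # systems that carry the waste

    fleet_savings = memory_saved_mb * dram_cost_per_mb * units_shipped
    print(f"engineer: ${engineer_cost:,.0f}  fleet: ${fleet_savings:,.0f}")
    # With these invented numbers the fleet saves about $3.2M against
    # $4,000 of labor, so "memory is cheaper than your time" stops being
    # true the moment you ship in volume.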
Jon