Hey Greg,
It's "tuhs" not "tubs" and it stands for The Unix Historical
Society so far
as I know. Lots of Bell Labs and other folks on the list, it's a fun walk
down memory lane, you should join.
What started the conversation was someone musing on whether Linux is the
end of OS development. Steve was talking about problems that SGI worked
on, so I wanted to point out that there might be something to mine there.
--lm
On Tue, Jun 03, 2014 at 08:42:56PM -0700, Greg Chesson wrote:
Hi,
I certainly know scj from Murray Hill. Not sure who is tubs.
Distributed and parallel hardware has evolved far beyond the ability
of most software. Everyone agrees, I think, and GPUs just make it more so.
On the other hand, CPU architecture remains classic von Neumann,
with registers, ALUs, and familiar memory hierarchies. Close to the
hardware, where instructions are visible, there is not much change.
And I think there are compelling technical reasons why that is so,
as a consequence of digital logic, transistor circuitry, and
electrical signalling. C/C++ remains a good fit at that level
because it is still a reflection of the hardware.
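
One way to see the fit: nearly every C construct maps onto a handful
of loads, stores, and ALU operations. A minimal sketch (the comments
are only a rough picture of what a compiler might emit; the exact
instructions vary by machine):

    #include <stdio.h>

    /* C's model is the von Neumann model: registers, a memory
     * you index into, and simple arithmetic on words. */
    static long dot(const long *a, const long *b, int n)
    {
        long sum = 0;                /* lives in a register */
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];      /* load, load, multiply, add */
        return sum;                  /* returned in a register */
    }

    int main(void)
    {
        long a[] = {1, 2, 3}, b[] = {4, 5, 6};
        printf("%ld\n", dot(a, b, 3));   /* prints 32 */
        return 0;
    }
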
I agree with Steve - and so would Ken, as I have heard him say
it several times going back many years - the kernel is basically
a multiplexor for memory, I/O, and time. What happens above that
in terms of data representation and anything like intelligence
is a higher-order function and demands higher-order programming
than what works for most kernel tasks. I use the term "higher-order"
in the same sense that Church's type theory (lambda calculus)
is higher-order than first-order predicate logic.
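
To make "higher-order" concrete: a first-order function computes on
data, while a higher-order function takes other functions as values.
C can express this, somewhat awkwardly, through function pointers.
A minimal sketch (the names here are mine, just for illustration):

    #include <stdio.h>

    static long square(long x) { return x * x; }

    /* map is higher-order: the function f is itself a parameter */
    static void map(long *v, int n, long (*f)(long))
    {
        for (int i = 0; i < n; i++)
            v[i] = f(v[i]);
    }

    int main(void)
    {
        long v[] = {1, 2, 3, 4};
        map(v, 4, square);
        for (int i = 0; i < 4; i++)
            printf("%ld\n", v[i]);   /* prints 1 4 9 16 */
        return 0;
    }
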
I've always liked what Knuth wrote long ago about the evolution
of programming languages. He observed that languages evolved
first from binary to assembly language, and then jumped to
Fortran, because programmers who found they were writing the
same kind of stuff over and over, sometimes in the same program,
looked for ways to express more with less. That still happens
today and will continue.
But wait, there's more.
Imagine what would be needed to implement a human brain model
with something like 10^11 neurons and 10^14 connections.
Out of reach today.
People are just now trying to emulate the 302 neurons in the nervous
system of Caenorhabditis elegans. C/C++ and von Neumann CPUs
are not a good match for such problems - and there are many
computations (and searches) of stupendously enormous scale that
would dwarf any supercomputer in the planning stages today.
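
The storage arithmetic alone makes the point. A back-of-envelope
sketch (assuming just one 32-bit weight per connection, which
understates a real model considerably):

    #include <stdio.h>

    int main(void)
    {
        const double synapses  = 1e14;   /* connections */
        const double bytes_per = 4.0;    /* one 32-bit weight, assumed */
        printf("%.0f TB just to store the weights\n",
               synapses * bytes_per / 1e12);
        return 0;
    }

    /* prints: 400 TB just to store the weights */
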
I hope I get to see some of what comes next, and maybe even
help carry forward a few building blocks.
What started this conversation?
Greg
On Tue, Jun 3, 2014 at 6:10 PM, Larry McVoy <lm@mcvoy.com> wrote:
> First, let me say how cool it is to be replying to the guy who did yacc.
> I've used it for decades, thank you for that.
>
> I was at SGI when they did the Origin servers. The architecture (as I
> remember it) was a board with 2 CPUs, some local memory, and a connection
> to a hypercube-style memory network. An interesting aside is that this is
> where I learned that infinitely large packets in a network are infinitely
> stupid, because you have to buffer a packet if the outgoing port is busy.
> I used to be a fan of big packets on Ethernet; that system taught me
> that Ethernet is just fine. The Origin "network" had ~32 byte packets.
> Buffering those was "easy".
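>
> A rough sketch of why (the port and depth numbers are made up; only
> the 9000-byte jumbo frame vs. ~32-byte packet contrast tracks the
> discussion):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         const long ports = 64;      /* router ports, assumed */
>         const long depth = 4;       /* packets buffered per busy port, assumed */
>         const long jumbo = 9000;    /* jumbo Ethernet frame, bytes */
>         const long tiny  = 32;      /* Origin-style packet, bytes */
>
>         printf("jumbo: %ld KB of buffer\n", ports * depth * jumbo / 1024);
>         printf("tiny:  %ld KB of buffer\n", ports * depth * tiny  / 1024);
>         return 0;
>     }
>
>     /* prints: jumbo: 2250 KB of buffer
>                tiny:  8 KB of buffer   */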
>
> The problem with the lots-of-cpus design is that memory latency goes up.
> It was OK for local memory; it was not OK for remote memory. Don't get
> me wrong, SGI did a really great job at it, but when you compare a bunch
> of networked boxes with the SMP model SGI had, SGI won on jobs that needed
> shared memory; the bunch of networked boxes kicked ass on everything else.
> Cross reference: Google, Intel's test harness, and a bunch of others.
> When you are benchmarking against a 10,000-node cluster and the cluster
> is winning, yeah, you need to rethink what you are doing.
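>
> The latency effect is easy to ballpark. A sketch with assumed numbers
> (not measurements from the Origin):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         const double local_ns  = 100.0;   /* local memory, assumed */
>         const double remote_ns = 600.0;   /* far remote node, assumed */
>
>         for (int pct = 0; pct <= 50; pct += 10) {
>             double f = pct / 100.0;
>             printf("%2d%% remote -> %.0f ns average\n",
>                    pct, (1.0 - f) * local_ns + f * remote_ns);
>         }
>         return 0;
>     }
>
> Even a modest fraction of remote references drags the average latency
> well above the local case, which is why jobs without shared-memory
> needs ran better on plain networked boxes.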
>
> I've copied Greg Chesson; you guys should know him, he's ex-Bell Labs,
> and he can correct my ramblings. I worked for him at SGI.
>
> --lm
>
> On Tue, Jun 03, 2014 at 11:48:25AM -0700, scj@yaccman.com wrote:
> > Well, I'm sure my biases are showing, but I see the kernel as a means for
> > supplying features for a model of computation, and the programming
> > language as the delivery vehicle for that model of computation. And in my
> > view, C/C++ is far more obsolete than Linux.
> >
> > Hardware has left software in the dust. It is quite feasible to produce a
> > chip with 1000 or 10,000 processors on it, each with a bit of memory and a
> > communication fabric. That's what tomorrow's technology is giving us.
> > Multicore and named threads are just not going to cut it when using such a
> > system. A central supplier of any service is a bottleneck. We've got to
> > write our software to act more like an ant farm than a military hierarchy.
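> >
> > A minimal sketch of the contrast in today's terms (pthreads; compile
> > with -pthread; the thread and iteration counts are made up): one
> > mutex-guarded counter plays the "military hierarchy", per-thread
> > counters merged at the end play the "ant farm":
> >
> >     #include <pthread.h>
> >     #include <stdio.h>
> >
> >     #define THREADS 8
> >     #define WORK    1000000L
> >
> >     static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
> >     static long central;            /* the bottleneck */
> >     static long local[THREADS];     /* the ant farm */
> >
> >     static void *central_worker(void *arg)
> >     {
> >         (void)arg;
> >         for (long i = 0; i < WORK; i++) {
> >             pthread_mutex_lock(&lock);   /* every ant waits in one line */
> >             central++;
> >             pthread_mutex_unlock(&lock);
> >         }
> >         return NULL;
> >     }
> >
> >     static void *antfarm_worker(void *arg)
> >     {
> >         long id = (long)arg;
> >         for (long i = 0; i < WORK; i++)
> >             local[id]++;                 /* no shared state, no waiting */
> >         return NULL;
> >     }
> >
> >     int main(void)
> >     {
> >         pthread_t t[THREADS];
> >         long total = 0;
> >
> >         for (long i = 0; i < THREADS; i++)
> >             pthread_create(&t[i], NULL, central_worker, NULL);
> >         for (long i = 0; i < THREADS; i++)
> >             pthread_join(t[i], NULL);
> >
> >         for (long i = 0; i < THREADS; i++)
> >             pthread_create(&t[i], NULL, antfarm_worker, (void *)i);
> >         for (long i = 0; i < THREADS; i++) {
> >             pthread_join(t[i], NULL);
> >             total += local[i];
> >         }
> >
> >         printf("central:  %ld\n", central);
> >         printf("ant farm: %ld\n", total);
> >         return 0;
> >     }
> >
> > Timing the two halves shows the centralized version slowing down as
> > threads are added while the ant farm scales; the same shape applies
> > to any central service.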
> >
> > Otherwise said, we have to learn to think different. Very different. And
> > the hardest part of that is letting go of the old ways of thinking.
> > Perhaps encroaching senility is a help in this...
> >
> > Steve
>
--
---
Larry McVoy                lm at mcvoy.com           http://www.mcvoy.com/lm