On May 8, 2018, at 7:53 AM, Clem Cole <clemc(a)ccc.com> wrote:
I'll take a slightly different road on this topic. I think we are dealing with the
paradox of progress. I don't blame Linux's developers any more than I blame the BSD
developers or anyone else. Really it was hardware progress/Moore's law that changed the
rules :-)
Agree about the paradox of progress. Success slowed down further
evolution.
While h/w speed and the number of gates increased tremendously,
the particular *direction* processor architecture took was
heavily influenced by the prevailing software. So I think it is
really software that is to blame :-) Intel did add call gates
for better containment but they didn't get used much.
I think Ted makes an excellent point, that *BSD and
Linux, by their nature of being 'open' and 'available', pushed the code
along, and that needs to continue to be the high-order bit. Open and freely available had
a huge positive effect, because many of these new features (like X-Windows, Networking)
were 'useful' and the cost to add them was accessible -- so people added them.
But .... slowly the system changed, and the expectations of the users with it.
The price of success. It is easier to bolt solutions to new
problems onto an existing platform than to refine the platform. Plan9
tried to refine it. That it failed has more to do with non-technical reasons.
Bakul's observation of little
>>practical<< progress is an interesting one. In many ways I agree, in
others I'm not so sure. I think Ted knows that my major gripe with some of the Linux
(and GNU-style) community has been a focus on reimplementing things that were already
there instead of using what could have been taken from somewhere else such as BSD, or
replacing subsystems just because they could without really adding anything (e.g. the
whole systemd argument).
The lack of progress I am talking about has to do with
the devil's bargain of trading simplicity for "efficiency".
The 16-bit CPU running at 5MHz, with 256KB of RAM and 5MB of disk,
of 1981 has been replaced with a 64-bit processor with 16 cores,
64GB of RAM, 10TB of disk space, an HDMI screen and a
1GB/s network port (and it is cheaper in terms of 1981 dollars).
And yet we continue making this bargain.
Pipes provided a perfect way to modularize programs. But we
use shared memory and threads instead, mainly for squeezing out more
performance. We have known about microkernels for over 30
years, CSP for over 40 years, capability systems for over
50 years. Yet these are largely absent. Even Go provides
unsafe shared-memory access in spite of providing channels.
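To make the contrast concrete, here is a small illustrative Go sketch
of the CSP style that channels enable: stages share no memory and
communicate only over channels, much like a shell pipeline. The
function names are made up for the example, not taken from any real
program:

    package main

    import "fmt"

    // generate sends the integers 1..n down a channel and closes it,
    // like the first command in a pipeline.
    func generate(n int) <-chan int {
            out := make(chan int)
            go func() {
                    defer close(out)
                    for i := 1; i <= n; i++ {
                            out <- i
                    }
            }()
            return out
    }

    // square reads values, squares them, and passes them on, like a
    // filter in the middle of a pipeline. No locks, no shared state.
    func square(in <-chan int) <-chan int {
            out := make(chan int)
            go func() {
                    defer close(out)
                    for v := range in {
                            out <- v * v
                    }
            }()
            return out
    }

    func main() {
            for v := range square(generate(5)) {
                    fmt.Println(v)
            }
    }

Nothing stops a Go program from bypassing this and mutating shared
structures from several goroutines, which is the "unsafe shared
memory access" I mean.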
But overall, as much as I respect and think Ken and
Dennis did amazing work at the time, I do tend to love when new ideas/things have been
done beyond the original ideas from Ken and Dennis. For instance, just as BSD can take
credit (or blame) for sockets and making UNIX really a 'networked OS', Sun really
gave us multiple file systems with the VFS, but I strongly credit Linux for really making
kernel modules a reality (yup Solaris had them, as did a few other systems - but it was
really Linux that made them widespread). I wish Linux had taken the idea of a modular
kernel a tad farther, and picked up things like the Locus vproc work, because I personally
think modularity under the covers is better than the containers mess we have today (and I
agree with Tanenbaum that uKernels make more sense to me in the long run - even if
they do cost something in speed).
Plan9 did a much cleaner job of adding filesystems and networking.
Maybe the 9p protocol doesn't scale so well, but that could've been
fixed once it was used in production systems.
A capability system would obviate the need for containers.
If all external communication a process did was via capabilities,
it wouldn't matter whether it was directly accessing a physical device
or a virtual device, a local server or a remote server, a real file or
a synthetic file. Microkernels and caps are a good match. But they
too haven't been used enough in practice and will need evolution.
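As a rough in-process analogy (a real capability is an unforgeable,
kernel- or language-enforced reference; a Go interface value is only a
stand-in, and the names below are invented for the example): the code
is handed a handle and has no ambient authority to name or open
anything else.

    package main

    import (
            "bytes"
            "fmt"
            "io"
    )

    // logTo can only use the writer it was handed; it has no ambient
    // authority to open paths, devices, or sockets on its own.
    func logTo(w io.Writer, msg string) error {
            _, err := io.WriteString(w, msg+"\n")
            return err
    }

    func main() {
            // The "capability" here is an in-memory buffer, but it
            // could just as well be an *os.File, a net.Conn, or a pipe
            // to a local or remote server; logTo cannot tell the
            // difference.
            var buf bytes.Buffer
            if err := logTo(&buf, "hello"); err != nil {
                    fmt.Println("write failed:", err)
            }
            fmt.Print(buf.String())
    }

The same code works whether the handle is backed by a real file, a
synthetic file, or a connection to a remote server, which is the
property that makes containers unnecessary in a capability world.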
it's also why I liked Plan 9 and have high hopes
that a new OS, maybe written in something like Rust, might finally appear. But I
don't want it to re-implement UNIX or Linux. Grab from them the subsystems that you
need, to avoid duplication, but don't re-invent.
Warren - at the risk of being political -- I think the paradox we have is larger than
just UNIX, although it is simple. We can wallow and say everything is bad, it was simpler
in 1959 -- which is exactly what some folks in my country seem to be doing in other areas.
I personally can say my world was simple in those days and I certainly have fond memories
[read Bill Bryson's
https://www.amazon.com/Life-Times-Thunderbolt-Kid-Memoir/dp/0767919378 which parrots many
of my own memories of those times], but UNIX, like the world, has grown up and changed
and is a lot more complicated. I like the progress we have now. I don't want the world
the way it was any more than I want to run Fifth Edition on my Mac for day-to-day work.
Yes, I would like us to look at the good from the past and see if we can have some of the
good things again; but if it means giving up what we gained too, then we have gone backwards.
The problem is how to decide what was good and what was bad. What is real progress and
what is just 'showing off your money', to use a Forrest Gump concept.
I suspect we will argue for a long time about what qualifies as progress from the core
idea of UNIX and what does not.
I think it is possible to gain equivalent feature comforts using
simpler systems. But my guess is that the "compatibility" requirement
is what will hold us back.