Hmm ... do I want to get in the middle of a fight with my friends and get
them all mad at me… but I guess I can’t just keep quiet on this debate.
Basically, I think Larry and Dan are having the Dan Aykroyd/Gilda Radner
“Shimmer”
Dessert Topping / Floor Wax <https://www.youtube.com/watch?v=wPO8PqHGWFU>
debate. That said, I do think systemd is a giant step backward for most
of the same reasons others have already said, so I’ll not pile on, and will
stick to the basic discussion of what it means to be Unix or not, and what
counts as an improvement or not.
Larry observes: “I maybe think the reason you think that things aren't
relevant anymore are because young people don't get Unix, they just pile on
to this framework and that framework, NONE OF WHICH THEY UNDERSTAND, they
just push more stuff onto the stack.”
Simply, I could not have said that better. But that is a symptom of the
problem. Frankly, with frameworks and the like, we have added so many
levels of abstraction that we have completely lost sight of the real issues,
and many folks really don’t “think like a programmer” any more. Remember,
UNIX was a system *by programmers for programmers.* Much work since then
has hidden a lot of what made it great, IMO, and I think that has caused the
most damage.
What makes a system a flavor of UNIX or not is almost personal, based on what
you value (or not). But the fact is, no matter what you call it, “UNIX” is
the core kernel with a small and simple set of operations, plus a set of
programs all based on the simple ideas, basic concepts, and mostly
solid mathematics/computer science that a small group of people in NJ came
up with when they suggested there might be a better way to put
things together than had been popular until then. Their
concepts and ideas collectively were different enough that they were
encouraged to write a paper about it. That paper was accepted and
published in CACM; it got a lot of people in the community interested in
those ideas, and the rest, as we say, is history.
But a huge difference between then and now is the *economics*, and thus the
equipment they used to explore those new ideas. Some solutions we take for
granted today would not have been practical, much less possible, in those
times.
The claim some people make that ‘Linux’ is not UNIX falls away just as
deftly as it did years ago, when other ‘progressive’ features were
added to 'Research UNIX' and we got systems like BSD 4.1, much less 4.2.
Remember, smart people in those days, from Rob’s “cat -v” paper to Henry
Spencer (“BSD is just like UNIX, only different,” a net.noise comment),
railed against the new BSD system as not being ‘UNIX’ either. For good or
for bad, many of the things we take for granted today as being required in
a UNIX system go back to UCB features (or features from AUUG, MIT, CMU, or
similar).
I’ve also stated many times that I do so miss the simplicity and cleanliness
of V6 and V7, but I would not want to use them for my daily work today. So
many of the very features that Henry and Rob pointed out back in the day
have somehow proven in fact to be ‘helpful,’ or at least comfortable. That
said, a few of us probably could live with something like ed(1),
particularly with a front end more like Plan 9’s sam(1), rather than
absolutely having to have vi/emacs.
Let me offer an example as a thought exercise. I was recently helping a
young hacker trying to get V5/V6/V7 going on a PDP-11 simulator so he could
play with it to learn more about the 11 and UNIX itself. I gave him some
help, and in my email I suggested, without thinking about it, that he try
something; I had forgotten that the program head(1) was a Bill Joy
program written in 1977. What UNIX box would not have it today? It is
screwed into the ROM of my fingers when I type. (Actually, the source to
head(1) is less than 200 lines, with a quarter of that being the BSD
copyright; I sent it to him and suggested he recompile it, partly as an
exercise to see how C has changed.)
But note that when wnj wrote head(1), Joy followed the famous ‘Unix
Philosophy’ of doing one (small) job well. That means he did not abuse an
old program like cat(1) by adding some new switch that told it to stop
outputting after n lines. Instead, Joy wrote a simple new tool.
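For the curious, the core of that “one small job” really is tiny. Here is a
minimal sketch of the idea in C (not Joy’s actual 1977 source, just an
illustration, with the function name my own): copy the first n lines of
input and then stop.

```c
#include <stdio.h>

/* A minimal sketch of the head(1) idea -- NOT Joy's actual code,
 * just an illustration of a tool that does one small job well:
 * copy the first n lines from `in` to `out`, then stop. */
void head_stream(FILE *in, FILE *out, int n)
{
    int c;
    while (n > 0 && (c = getc(in)) != EOF) {
        putc(c, out);
        if (c == '\n')
            n--;            /* one full line has been copied */
    }
}
```

Everything else in the real tool (option parsing, handling multiple files,
the copyright) is wrapping around a loop of roughly this shape.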
This is the problem Larry was pointing out: I think frameworks
and the like are just adding features on features on features to current
subsystems. Yeech!!!
To me, what made Unix ‘great’ was that this small (16-bit) system, with
48K-256K of memory, which could practically be owned by a department at
$150-250K, would let you do the same sorts of things that took a $1-4M
system (be it a PDP-10 or an IBM or the like).
Mashey gave his ACM lectures, “Small Is Beautiful,” sometime after
they completed PWB 1.0, and this was one of his points. Unix was a small
SW system on a small HW platform, but it was clean and did what you
(the programmer) needed without a lot of extra cruft. IIRC, for PWB 1.0
the recommended system was an 11/45. Not the smallest
(11/40), but hardly the 11/70 either. But as the 32-bit systems became
available, many of the different constraints of the PDP-11 were removed:
first data size and then text size. When the VAX came and 32 bits was
infinite (probably still is for text space), performance got better (and
cheaper), disks got bigger, *etc*.
And because it was less and less of a problem, programmers quickly got a
tad lazy, or at least stopped paying attention to things they were required
to consider in the past. To my thinking, ex (or for that matter emacs) was
the first of the UCB “explosions,” and maybe where UNIX began to be a tad
different and look less and less like what had come from the NJ avengers.
The resources of the new systems were such that you did not need a new set
of small programs; you added features (extensions) to the old, and that
was (and is) a trap we seem to follow today.
Other folks like to point out all the wonderful new functionality that the
Linux kernel (or FreeBSD or macOS, etc.) has brought to the UNIX
world beyond BSD’s sockets networking (loadable drivers and dynamic devices
are super, and I would not want to be without either). I like shared
libraries and better memory systems. Although, of course, on my
supercomputer those are two things we turn off (shared libraries and much of
the fancy memory stuff), because they get in the way of real programs ;-).
BTW: I also think sockets were (are) a terrible addition to UNIX, and we
did not really need them. We are stuck with them now, but I would rather
have had something more like ChaosNet and the original UofI Arpanet
code, or what UNET did, where much of the network stack was in user space
[which in fact was how the BBN interface originally worked]. In other
places, over time, we have added window managers, GUIs, *et al*. Hey, the
core C compiler’s code generator is remarkable compared to what it was for
the PDP-11, plus we have new languages, be they Rust, Go, or even C++ and
Java.
The key point is few people are really asking the question, *if I add this
new feature what does it cost, and what do we really get for it?*
The truth is, many of these features are comforts, and *I do like many of
them*; I use them and want many of them. I like MxN threading, but that
took kernel support, and what did we get beyond fork/exec that we really
did not have? (The answer is probably better performance due to tighter
control, but I wonder if we bought much more than that.) I’m typing on a
Mac, while most of my ‘work’ these days targets Linux. But what is common
is that I can mostly work in the manner I want. I have something that makes
some things nice (easier)… Both are ‘UNIX’ in some manner; the core of both
is the same set of core ideas I see in V5/V6/V7, and that’s good. *i.e.*
they both work as a floor wax….
But all of those new features have come at a cost, the biggest being
complexity/bloat. We have lost a lot in the design because today’s
programmers just don’t have to deal with the kinds of constraints that some
of us had to deal with years ago, and I think you hear a number of us
groaning when papers like this one come out because they have completely
missed the point and really don’t understand what it means to be UNIX.
I suspect if VM/TSO or VMS had become the core technology of the future,
the papers being written today would be just as damning. If you never
lived in the old world, I’m not sure you understand the improvement.
I’m going to toss down the gauntlet with an even more political
statement. When I read that paper, it reminded me of the current round of
anti-vaxxers. Those of us who remember polio and other ‘childhood
diseases’ have no desire to go back in time and relive them. I really don't
want to run VMS/RSX, or for that matter TSS/360, which was the first system
I ever really knew how to use well.
So in the same way, UNIX is the core technology we have today, and I’m
damned glad it ‘won,’ be it called Linux, FreeBSD, or macOS. It was not
perfect then, and it is hardly perfect today. But it was different from
what we had at that time, and so much better, *that today we now use those
ideas* from UNIX as our core ideas when we build systems. I just wish
people would understand what they have and see it positively instead
of trying to knock it down: ‘standing on the shoulders of giants’ and
presenting what they did as a help built on solid ideas, instead of
stepping on the giants’ toes, trying to make their new thing/feature look
more valuable by claiming it’s not made from the origin story.
Hey Chevy… can I have some “shimmer” for my pudding?