On Sun, Jun 5, 2022 at 8:35 PM Theodore Ts'o <tytso@mit.edu> wrote:
[snip]
> Part of this comes from the fact that the Linux kernel, C library,
> and core utilities are all shipped separately. The BSDs have often
> criticized this, claiming that shipping all of the OS in a single
> source control system makes it easier to roll out new features. There
> are no doubt upsides to having a single source tree; but one of the
> advantages of keeping things separate is that the definition of the
> kernel <-> userspace interface is much more explicit.
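To make that explicitness concrete: the kernel <-> userspace contract
is the system-call ABI itself, not any particular C library. Here's a
minimal sketch of my own (it assumes Linux and glibc's syscall(2)
wrapper; getpid(2) is just an arbitrary example), showing a program
crossing that boundary directly, without caring which libc release it
was built against:

    /* Minimal sketch: invoke getpid(2) via the raw system-call
     * interface instead of the libc wrapper.  The number SYS_getpid
     * and the calling convention are part of the kernel's stable ABI
     * --- exactly the explicit interface being described above. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>        /* SYS_getpid */
    #include <unistd.h>             /* syscall() */

    int main(void)
    {
            long pid = syscall(SYS_getpid);
            printf("pid via raw syscall: %ld\n", pid);
            return 0;
    }

Because the kernel treats that ABI as stable, the C library and the
core utilities can each rev on their own schedules, which is what
shipping the pieces separately makes explicit.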
Isn't that just an accident of history, though? The GNU stuff was
far enough along when Linux got going that Linus could crib most of
it by default to build a "complete" working system; he was just
providing the kernel, whereas Unix traditionally had been distributed
as a complete system, so the BSDs just kind of followed that model.
> That being said, I will note that this hasn't always been true. There
> was a brief period where an early Red Hat Enterprise Linux version
> suffered from the "legacy Unix value-add disease", where Red Hat had
> added some kernel changes that impacted kernel interfaces, which
> didn't make it upstream, or made it upstream with a changed
> interface, such that when users wanted to use a newer upstream
> kernel, which had newer features and newer device driver support, it
> wouldn't work with that version of RHEL. Red Hat was criticized
> *heavily* for that, both by the upstream development community and by
> its users, and since then it has stuck to an "upstream first" policy,
> especially where new system calls or some other kernel interface is
> concerned.
> One of the reasons why that early RHEL experience kept Red Hat in
> line was that none of the other Linux distributions had that property
> --- and because the core development in upstream hadn't slacked off,
> so there was a strong desire to upgrade to newer kernels on RHEL, and
> when that didn't work, not only did that make customers and
> developers upset, but it also made life difficult for Red Hat
> engineers, since they then needed to figure out how to forward port
> their "value add" changes onto the latest and greatest kernel
> release.
Sounds very familiar. I'll wager that just about any large organization
with a heavy investment in Linux has a similar problem on their hands.
I've _heard_ that Meta is different, but have no first-hand knowledge.
> An interesting question is: if CSRG had been actively pushing the
> state of the art forward, would that have provided sufficient
> centripetal force to keep HP/UX, SunOS, DG/UX, etc., from
> splintering? After all, it's natural to want to gain a competitive
> advantage over your competition by adding new features --- this is
> what I call the "legacy Unix value-add disease". But if you can't
> keep up with the upstream developments, that provides a strong
> disincentive against making permanent forks. For that matter, why was
> it that successive new releases of AT&T System V weren't able to play
> a similar role? Was it because the rate of change was too slow? Was
> it because applications weren't compatible anyway due to ISA
> differences? I don't know....
Was CSRG doing research, or producing a production system? It sure
seems like they were trying to thread a needle there that put them into
a weird position.
But more generally, it feels like this doesn't take into account the
context of the times. $n$ manufacturers back in the minicomputer days
were used to writing their own OS for each new machine; sure, they
adapted Unix, but the idea that they'd treat it differently from that
standpoint feels like something that didn't really occur to anyone
until the late 80s. By then, the stage was set for the arrival of
something like Linux.
> One other dynamic might be the whole "worse is better is worse"
> debate. As an example of this, Linux had PCMCIA support at least a
> year or two before NetBSD did, and in particular Linux had hot-add
> support, where you could insert a PCMCIA ethernet card into your
> laptop after the OS had booted and the card would work. However, if
> you ejected the card, there was roughly a 1 in 4 chance that your
> system would crash. NetBSD took a lot longer to get PCMCIA support
> --- but when it did, it had hot-add and hot-remove working perfectly,
> while Linux took a year or two more after that point before
> hot-remove was solidly reliable.
>
> So from a computer science point of view, one could argue that NetBSD
> was "better", and that Linux had a whole bunch of hacks --- some
> might even argue it was written by a bunch of hacks. :-) However,
> from the perspective of users who Just Wanted Their Laptop To Work,
> the fact that Linux had some kind of rough PCMCIA support first
> mattered a lot more than a "we will ship no code before its time"
> attitude. And some of those users would become developers, which
> would cause a positive feedback loop.
This I can totally buy, but my perception (as very much an outsider)
was that people ran --- and continue to run --- Linux because, simply,
they want to run Linux. They choose not to run a BSD or illumos or
whatever else because they want to run Linux instead.

There was a time in the early 90s when FreeBSD was objectively better
than Linux by almost all metrics, including faster networking. But
people still wanted to run Linux instead. Why? It just captured the
zeitgeist better: the barrier to entry was lower; you didn't have to
put up with the "old school Unix" mentality of many of the players (my
term for what you have referred to as "the Gods of BSD" and some of
the big egos of the USENIX crowd); and people got religious about the
license.

Many of the explanations we throw around here are interesting, but too
often they feel like justifications after the fact. So much of it was
simply preference.
- Dan C.