On Fri, Jun 03, 2022 at 06:05:43PM -0700, Larry McVoy wrote:
> So in all of this, the thing that keeps getting missed is Linux won.
> And it didn't win because of the lawsuit, it didn't get crushed by the
> GPL that all the BSD people hate so much, it won on merits. And it
> won because there was one Linux kernel. You could, and I have many,
> many times, just clone the latest kernel and compile and install on
> any distribution. It worked. The same was never true for all the BSD
> variants. Tell me about the times you cloned the OpenBSD kernel and
> it worked on FreeBSD. I'm sure there was maybe one point in time
> where that worked, but at all the other points in time it didn't.
So in essence what you're saying is that OpenBSD and FreeBSD weren't
ABI compatible, and that's your definition of when two OSes are
different. And so when Warner says that there are hundreds of forks
of the Linux kernel, to the extent that their ABIs are compatible, they
really aren't "different".
Part of this comes from the fact that the Linux kernel, C library,
and core utilities are all shipped separately. The BSDs have often
criticized this, claiming that shipping all of the OS in a single
source control system makes it easier to roll out new features. There
are no doubt upsides to having a single source tree; but one of the
advantages of keeping things separate is that the definition of the
kernel <-> userspace interface is much more explicit.
That being said, I will note that this hasn't always been true. There
was a brief period where an early Red Hat Enterprise Linux version
suffered from the "legacy Unix value-add disease", where Red Hat had
added some kernel changes that impacted kernel interfaces, which
didn't make it upstream, or made it upstream with a changed interface,
such that when users wanted to use a newer upstream kernel, which had
newer features and newer device driver support, it wouldn't work with
that version of RHEL. Red Hat was criticized *heavily* for that, both
by the upstream development community and by its users, and since then
it has stuck to an "upstream first" policy, especially where new
system calls or other kernel interfaces are concerned.
One of the reasons why that early RHEL experience kept Red Hat in line
was because none of the other Linux distributions had that property
--- and because core development upstream hadn't slacked off, so there
was a strong desire to upgrade to newer kernels on RHEL. When that
didn't work, not only did it make customers and developers upset, but
it also made life difficult for Red Hat engineers, since they then
needed to figure out how to forward port their "value add" changes
onto the latest and greatest kernel release.
An interesting question is if CSRG had been actively pushing the state
of the art forward, would that have provided sufficient centripetal
force to keep HP-UX, SunOS, DG/UX, etc., from splintering? After
all, it's natural to want to gain a competitive advantage over your
competitors by adding new features --- this is what I call the "legacy
Unix value-add disease". But if you can't keep up with the upstream
developments, that provides a strong disincentive against making
permanent forks. For that matter, why was it that successive new
releases of AT&T System V weren't able to play a similar role? Was it
because the rate of change was too slow? Was it because applications
weren't compatible anyway due to ISA differences? I don't know....
One other dynamic might be the whole "worse is better" (and "worse is
worse") debate. As an example of this, Linux had PCMCIA support at
least a year or two before NetBSD did, and in particular Linux had
hot-add support, where you could insert a PCMCIA ethernet card into
your laptop after the OS had booted and the card would work. However,
if you ejected the card, there was roughly a 1 in 4 chance that your
system would crash. NetBSD took a lot longer to get PCMCIA support
--- but when it did, it had hot-add and hot-remove working perfectly,
while Linux took a year or two more after that point before hot-remove
was solidly reliable.
So from a computer science point of view, one could argue that NetBSD
was "better", and that Linux had a whole bunch of hacks, and some
might even argue was written by a bunch of hacks. :-) However, from
the perspective of users who Just Wanted Their Laptop To Work, the
fact that Linux had some kind of rough PCMCIA support first mattered a
lot more than a "we will ship no code before its time" attitude. And
some of those users would become developers, which would cause a
positive feedback loop.
- Ted