On Sun, Jun 5, 2022 at 6:32 PM Theodore Ts'o <tytso@mit.edu> wrote:
> On Fri, Jun 03, 2022 at 06:05:43PM -0700, Larry McVoy wrote:
> >
> > So in all of this, the thing that keeps getting missed is Linux won.
> > And it didn't win because of the lawsuit, it didn't get crushed by the
> > GPL that all the BSD people hate so much, it won on merits. And it
> > won because there was one Linux kernel. You could, and I have many,
> > many times, just clone the latest kernel and compile and install on
> > any distribution. It worked. The same was never true for all the BSD
> > variants. Tell me about the times you cloned the OpenBSD kernel and
> > it worked on FreeBSD. I'm sure there was maybe one point in time
> > where that worked, but for all the other points in time it didn't.
> So in essence what you're saying is that OpenBSD and FreeBSD weren't
> ABI compatible, and that's your definition of when two OSes are
> different. And so when Warner says that there are hundreds of forks
> of the Linux kernel, to the extent that the ABIs are compatible, they
> really aren't "different".
Yes. The forks might not have been bad, and there's always some
logical fallacy presented to say they are somehow the "same" because
of some artificial criteria.

And I'm not convinced all the forks are bad, per se. Just that the narrative
that says there's only one is false and misleading in some ways, because
there was always a diversity of 'add in' patches for different distributions,
both commercial and hobbyist... Much, but by no means all, of that wound
up upstream, though not always the best of it; the reasons for rejection
could be arbitrary at times, or upstreaming was simply made too hard to be
worth the bother at others. There was something in the diversity, though,
that I'll readily admit was beneficial.
> Part of this comes from the fact that the Linux kernel, C library,
> and core utilities are all shipped separately. The BSDs have often
> criticized this, claiming that shipping all of the OS in a single
> source control system makes it easier to roll out new features. There
> are no doubt upsides to having a single source tree; but one of the
> advantages of keeping things separate is that the definition of the
> kernel <-> userspace interface is much more explicit.
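A minimal sketch of what that explicit boundary looks like in practice,
assuming a Linux system with glibc: the same getpid() can be reached either
through the C library wrapper or by invoking the raw system call number
directly, because the kernel's syscall interface is defined independently of
whichever C library and userland a distribution ships.

/*
 * Minimal sketch (assumes Linux + glibc): the kernel's system call
 * interface is stable on its own, so the same getpid() can be reached
 * through the C library wrapper or by raw syscall number, straight
 * across the kernel <-> userspace boundary.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>       /* getpid(): the C library wrapper */
#include <sys/syscall.h>  /* SYS_getpid: the raw syscall number */

int main(void)
{
        pid_t via_libc   = getpid();            /* through glibc */
        pid_t via_kernel = syscall(SYS_getpid); /* directly to the kernel */

        printf("libc wrapper: %ld, raw syscall: %ld\n",
               (long)via_libc, (long)via_kernel);
        return 0;
}

Either path returns the same pid; the distribution's libc can change
underneath without the kernel's side of the contract moving.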
> That being said, I will note that this hasn't always been true. There
> was a brief period where an early Red Hat Enterprise Linux version
> suffered from the "legacy Unix value-add disease", where Red Hat had
> added some kernel changes that impacted kernel interfaces, which
> didn't make it upstream, or made it upstream with a changed interface,
> such that when users wanted to use a newer upstream kernel, which had
> newer features, and newer device driver support, it wouldn't work with
> that version of RHEL. Red Hat was criticized *heavily* for that, both by
> the upstream development community and by its users, and since then it
> has stuck to an "upstream first" policy, especially where new system
> calls or other kernel interfaces are concerned.
I suffered through MontaVista Linux, which definitely wasn't ABI compatible.
And all of their board support packages were based on different versions
of Linux, making it a nightmare to support lots of architectures...
> One of the reasons why that early RHEL experience kept Red Hat in line
> was because none of the other Linux distributions had that property
> --- and because the core development upstream hadn't slacked off,
> so there was a strong desire to upgrade to newer kernels on RHEL, and
> when that didn't work, not only did that make customers and
> developers upset, but it also made life difficult for Red Hat
> engineers, since they now needed to figure out how to forward port their
> "value add" changes onto the latest and greatest kernel release.
> An interesting question is whether, if CSRG had been actively pushing the
> state of the art forward, that would have provided sufficient centripetal
> force to keep HP/UX, SunOS, DG/UX, etc., from splintering. After
> all, it's natural to want to get a competitive advantage over your
> competition by adding new features --- this is what I call the "Legacy
> Unix value-add disease". But if you can't keep up with the upstream
> developments, that provides a strong disincentive against making
> permanent forks. For that matter, why was it that successive new
> releases of AT&T System V weren't able to play a similar role? Was it
> because the rate of change was too slow? Was it because applications
> weren't compatible anyway due to ISA differences? I don't know....
CSRG's funding dried up when the DARPA work was over. And even before
it was over, CSRG was more an academic group than one with a desire
to impose its will on commercial groups it had no leverage over.
And AT&T had become all about monetization of Unix, which meant it imposed
new terms that were unfavorable, making it harder for old-time licensees to
justify pulling in the new code that would have kept the world from Balkanizing
as badly as it did. So there were complex issues at play here as well.
> One other dynamic might be the whole worse-is-better-is-worse debate.
> As an example of this, Linux had PCMCIA support at least a year or two
> before NetBSD did, and in particular Linux had hot-add support, where
> you could insert an ethernet PCMCIA card into your laptop after the OS had
> booted and the ethernet card would work. However, if you ejected the
> ethernet card, there was a roughly 1 in 4 chance that your system
> would crash. NetBSD took a lot longer to get PCMCIA support --- but
> when it did, it had hot-add and hot-remove working perfectly, while
> Linux took a year or two more after that point before hot-remove was
> solidly reliable.
Except FreeBSD's PAO project had PCMCIA support about two years
before NetBSD did, and hot plug worked on it too. So that's a bit of an
apples-to-oranges comparison. To be fair, the main FreeBSD project
was slow to take up changes from PAO, and that set back PC Card
and CardBus support by a number of years.
> So from a computer science point of view, one could argue that NetBSD
> was "better", and that Linux had a whole bunch of hacks, and some
> might even argue was written by a bunch of hacks. :-) However, from
> the perspective of the user, who Just Wanted Their Laptop To Work, the fact
> that Linux had some kind of rough PCMCIA support first mattered a lot
> more than a "we will ship no code before its time" attitude. And
> some of those users would become developers, which would cause a
> positive feedback loop.
At the time, though, FreeBSD ran the busiest FTP server on the
internet and could handle quite a bit more load than an equivalent Linux
box. And NetBSD was much more in the "no code before
its time" camp than FreeBSD, which tried to get things out faster
and often did a good job at that. Though FreeBSD did well with networking,
it didn't do so well with PC Card, so it's rather a mixed bag.
The only reason I keep replying to this thread is that the simple
narratives that people keep repeating often aren't so simple,
and the factors going into things tend to be much more complex
and nuanced.
Warner