AT&T wanted to monetise all the work done in that window. Personally,
like a lot of other ex-BSD-admin-in-a-uni people, I had strong
opinions, and I disliked the experience of SysV, which was a mixture
of unfamiliarity and bullshit process overlay.
I think when you put "but only for money" together with "this is not
how I think", "sorry, your bug is not going to be important to us",
and "that's not how AT&T thinks", it was profoundly de-motivating,
not something people could cohere around.
It is entirely tenable that, at the time, a SysV kernel was "best of
breed", but in the same period all the exciting work on the network
stack was happening in Van Jacobson's group, in BSD kernels.
It was only later, when he moved to Google, that the pace of kernel
network-stack work moved to Linux. Since then, BBR has been a
low-adoption path, painful in the extreme, driven in BSD by Netflix
engineers. We now have 4 or 5 competing models of TCP flow
management, and they don't play nice, so it's not only kernel
fragmentation, it's kernel-featureset-richness fragmentation: inside
a single kernel, you have too many tunables to think about.
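
To make that concrete, here's a rough Python sketch of how many of
those models a Linux box is actually carrying around. It just reads
the standard /proc/sys files; other kernels expose this differently,
so treat it as illustrative, not definitive:

    #!/usr/bin/env python3
    # List the TCP congestion-control modules a Linux kernel exposes,
    # and which one is the current default. Assumes the usual
    # /proc/sys/net/ipv4 layout on a Linux machine.
    from pathlib import Path

    PROC = Path("/proc/sys/net/ipv4")

    def read(name: str) -> str:
        return (PROC / name).read_text().strip()

    if __name__ == "__main__":
        available = read("tcp_available_congestion_control").split()
        current = read("tcp_congestion_control")
        print(f"{len(available)} congestion-control modules loaded: "
              f"{', '.join(available)}")
        print(f"current default: {current}")
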
Same with ZFS. Amazing system. At one point the meme "it needs >4GB
of memory" escaped, and people said "you can't run this on a rpi",
which is factually incorrect, just as "needs >4GB" is incorrect: it
works BETTER with a huge ARC, and de-dupe burns memory, but if you
don't run de-dupe, you can run ZFS fine on smaller-memory machines.
But the number of tunables... oh my gosh.
Kernels might be good: they're also horrendously complex now.
-G