On Sun, Jun 5, 2022 at 10:36 PM Larry McVoy <lm(a)mcvoy.com> wrote:
> On Sun, Jun 05, 2022 at 09:40:44PM -0400, Dan Cross wrote:
> > [snip]
> > But every distribution has its own installer, and they vary wildly.
>
> Indeed they do. I'd put RedHat as the best of the best, but truth be
> told, Debian is not bad; it's more basic, but it works.
I don't think I've installed RedHat in years, honestly. I believe
you that it's a smooth process.
> I disagree about the BSDs being similar to Linux: go partition a disk
> with FreeBSD and then compare that to Linux. It's night and day.
> The Linux stuff works and is obvious; the FreeBSD stuff only makes sense
> if you have been using it forever. It's awful if you are a newbie.
But define "Linux" here. Do you mean RedHat, specifically? Because with
Arch, you've got to manually run `fdisk` or `gdisk` or whatever, add
partitions in that tool, set their types by hand, and so on, then manually
create the filesystems and install and configure the boot loader. The steps
aren't necessarily hard, but they are tedious. The FreeBSD installer, on the
other hand, does pretty much all of that for you. My point is that YMMV
widely between Linux distributions, which span the extremes of "manually
partition the disk" and "this graphical wizard does all the nasty stuff
for you"; FreeBSD sits somewhere between those two.
> And I say that as a guy who went through Sun's stuff; it was similar to
> FreeBSD but a bit better. Linux really did just make stuff work.
Huh. I remember that before GPT you had to manually create MBR partitions
and, if you wanted more than 3 or 4 (or whatever the number was...), you
had to go and explicitly create an extended partition and then subdivide
that. With FreeBSD, you just created one MBR partition and the installer
let you create filesystems within it using their pseudo-graphical
installer (pseudo- in the sense that it was all text-based, but at least
it was menu-driven, if that was your bag). I found whatever Linux
distribution I was installing at the time a lot more complex than FreeBSD,
but I get that individuals differ here.
Then again, I care a _lot_ less about carefully dividing my disk up into
different filesystems these days. Back in the day, especially on multiuser
machines, you had to, either because of limitations in the filesystem code
(2GB FFS partitions!) or to keep random users from filling up / or /usr.
Most of the need for that kind of subdivision has gone away.
> Is it elegant like v7 was? Absolutely not. Does it handle a ton of stuff
> that nobody could imagine in the v7 days?

Oh, absolutely!

> Absolutely yes. Is it more complex than it should be? I dunno, it is more
> complex than I like, but I'm an old graybeard. I really wanted coarse-grained
> locking with what Clem and crew did with the cluster approach. The vproc
> stuff. I loved that, and I think that is the knee of the curve: scale up
> a bit on SMP, but then cluster to scale up more.
The world that might have been.
I was thinking more about the ABI compatibility stuff that Ted had mentioned,
and the more I think about it, the less I think the kernel ABI is all that
relevant. Yes, it's nice that you can take a binary compiled on one
distribution of Linux, drop it onto another distribution, and it will
theoretically run, because the system call numbers will mostly line up and
the convention for trapping into the kernel is basically stable. But that's
only part of the equation: for any non-trivial binary, you've also got to
make sure that a whole slew of shared libraries is installed, at the
correct versions (hence why my colleague can't use the Saleae controller
software on his Arch machine). To compensate, we've built up a huge amount
of complexity around containers and flatpaks and all of that stuff, which
sometimes doesn't work; not to mention moving across ISAs. A stable kernel
ABI may be necessary, but it's definitely not sufficient.
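You can see the userland half of the equation directly: even when the
syscall ABI holds, the dynamic linker still has to resolve every shared
library a binary was built against. A quick look (using /bin/ls as an
arbitrary example, and assuming a glibc-based system):

```shell
# ldd prints the shared libraries the dynamic linker would load for
# this binary; any "not found" line is precisely the cross-distribution
# failure mode above, no matter how stable the kernel ABI is.
ldd /bin/ls
```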
- Dan C.