On Mon, Nov 14, 2022 at 04:54:54PM -0700, Warner Losh wrote:
> The interesting thing, again, is that while all of these
> implementations seem to have been technologically 'better', only Mach
> lived on from the original developers.  And in the case of Mach, by
> the time it was mainstream (macOS) the original implementation had
> been replaced a few times, so while the concepts are there, I don't
> think much of the original CMU code is left in XNU/Darwin [or for
> that matter in the OSF flavors -- Tru64 rewrote it but it died, and
> the OSF/RI kernel never went anywhere either].
>
> Both FreeBSD and NetBSD rewrote the VM layer: FreeBSD incrementally,
> and NetBSD with the new UVM.  At least for FreeBSD, this is when the
> buffer cache was fully (or more fully?) unified, because it wasn't
> quite complete in 4.4BSD as shipped, IIRC (or maybe it was that it
> was really buggy; it's been so long ago now that I've forgotten).

FWIW, Linux also has a (mostly) unified page cache. Which is to say,
for all of the major file systems, there is only a single unified page
cache for file data.
There are some legacy file systems in Linux (e.g., befs, qnx4, qnx6,
minix, vfat, etc.) that still use the buffer cache, but the buffer
cache is implemented in terms of the page cache infrastructure, and
that's mainly because no one has really bothered to update those file
systems to use the unified page cache.  After all, if you're using
vfat to read and write a super-slow USB thumb drive, who cares if the
data gets copied from the USB thumb drive, to the buffer cache, and
then to the page cache? :-)
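
To make the layering concrete, here is a toy userspace sketch (every
name in it is invented; the real kernel code looks nothing like this)
in which a "buffer cache" block read is just a byte range within a
page-cache page, so there is only one copy of the data to keep
coherent:

/*
 * Toy model of a buffer cache layered on top of a page cache.
 * All identifiers (toy_page, page_cache_get, toy_bread) are made
 * up for illustration; this is not the Linux kernel API.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  4096
#define BLOCK_SIZE 1024                 /* four blocks per page */

struct toy_page {
        unsigned long index;            /* page offset within the "file" */
        char data[PAGE_SIZE];
        struct toy_page *next;
};

static struct toy_page *cache;          /* the single page cache */

/* Find the cached page at the given index, creating it if needed. */
static struct toy_page *page_cache_get(unsigned long index)
{
        struct toy_page *p;

        for (p = cache; p; p = p->next)
                if (p->index == index)
                        return p;
        p = calloc(1, sizeof(*p));
        p->index = index;
        p->next = cache;
        cache = p;
        return p;
}

/*
 * "Buffer cache" read: a block is just a slice of a page-cache
 * page, so a legacy block-at-a-time file system keeps working
 * without maintaining a second, separate cache of the same data.
 */
static char *toy_bread(unsigned long block)
{
        unsigned long index  = block / (PAGE_SIZE / BLOCK_SIZE);
        unsigned long offset = (block % (PAGE_SIZE / BLOCK_SIZE)) * BLOCK_SIZE;

        return page_cache_get(index)->data + offset;
}

int main(void)
{
        strcpy(toy_bread(5), "hello from block 5");
        /* Block 5 lives in page 1 (blocks 4-7): same bytes, no copy. */
        printf("%s\n", page_cache_get(1)->data + BLOCK_SIZE);
        return 0;
}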
> As I said, the lesson to TUHS -- as much as I'm a techie and I am
> interested in the 'proper' way of doing things ... "good enough" is
> often what rules.
>
> Good enough, and a little more polish to make it even better :)

Good enough, and backwards compatibility, sure.  But for the file
systems where performance is an issue, having a unified page cache is
the only way to go. :-)

> It's too bad none of the good memory implementations made it into
> >>systems<< that lasted.

At least in my book, it's not the implementations, but the ideas that
matter.  And sometimes implementations are constrained by hardware
limitations (example: clists); on less constrained hardware, you're
sometimes better off re-evaluating the ideas in use, and if that means
you need to reimplement the ideas that still make sense, there's
nothing wrong with that.
- Ted
P.S. I remember back in the day when I was engaging in a friendly
competition with a FreeBSD hacker on improving the serial driver for
the 8250 chip (with no FIFOs!).  I shared with him my idea of using a
pair of ring buffers that would get flipped back and forth between the
interrupt handler and the tty "bottom-half" (read: software interrupt)
handler, and I was told that clists were handed down from Olympus by
the AT&T/Unix Gods and he could never get that kind of change into the
FreeBSD tty layer.  Of course, I was free to make all of the radical
changes to Linux's tty layer --- and I did, all in the name of
maximizing the number of 115kbaud connections that could be handled on
a single 40 MHz 386 processor...
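
For the archives, here is roughly what that flip trick looks like, as
a minimal single-threaded userspace sketch.  All of the names are
invented, plain linear buffers stand in for the rings, and the
cli()/sti() comments mark the one spot where a real driver would have
to mask interrupts:

/*
 * Sketch of the flip-flop buffer idea.  The interrupt handler only
 * ever touches fill_buf; the bottom half swaps the two buffers and
 * then drains drain_buf at its leisure, so the two sides never
 * operate on the same buffer at the same time.
 */
#include <stdio.h>

#define BUF_SIZE 256

struct flip_buf {
        unsigned char data[BUF_SIZE];
        int len;
};

static struct flip_buf bufs[2];
static struct flip_buf *fill_buf  = &bufs[0];   /* owned by the IRQ side */
static struct flip_buf *drain_buf = &bufs[1];   /* owned by the bottom half */

/* "Top half": called once per received character, so it must be cheap. */
static void rx_interrupt(unsigned char c)
{
        if (fill_buf->len < BUF_SIZE)
                fill_buf->data[fill_buf->len++] = c;
        /* else: overrun; a real driver would count and report it */
}

/* "Bottom half": swap the buffers, then process without blocking the IRQ. */
static void rx_softirq(void)
{
        struct flip_buf *tmp;

        /* cli(): only the pointer swap needs interrupts masked */
        tmp = fill_buf;
        fill_buf = drain_buf;
        drain_buf = tmp;
        /* sti() */

        for (int i = 0; i < drain_buf->len; i++)
                putchar(drain_buf->data[i]);    /* stand-in for tty work */
        drain_buf->len = 0;
}

int main(void)
{
        const char *line = "ATDT5551212\r\n";

        for (const char *p = line; *p; p++)
                rx_interrupt((unsigned char)*p);
        rx_softirq();                           /* drains the swapped buffer */
        return 0;
}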