On 8/26/2019 8:30 PM, Larry McVoy wrote:
> I really don't understand the love for ZFS. I hired Bonwick and I
> hired Moore, I had high expectations but they were all dashed when I
> realized ZFS doesn't use the page cache. That's so crazy busted I lost
> all interest in ZFS. ZFS took us back to HP-UX mmap semantics.
At the risk of going off-topic:
From a system-administration standpoint, and a data-integrity
standpoint, ZFS was a huge step forward. In my humble opinion ;)
Besides the obvious (to me) benefits of adding mount points, adjusting
volume sizes, and all the other things that ZFS does, I have yet to find
any mainstream filesystem (if you can call ZFS "just" a filesystem) that
guarantees data integrity. I have an office server that contains a lot
of source code and archived data that I depend on religiously. I do
copious backups to LTO tapes as well as to an off-site Amazon EC2
instance.
Within the past few years, I had an issue with a Dell MD RAID array
where ZFS was complaining about checksum errors on a certain disk. Data
was being corrupted on the fly; it seemed that the writes were being
corrupted, not the reads. Thankfully, it was on a RAIDZ2 volume, where
ZFS could correct the corruption. The corruption in question was on
files dating back to the early '90s.
Stopping bit-rot in its tracks, ZFS has done me well.
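For anyone who hasn't looked at how it pulls that off: every block's
checksum is stored in the parent block pointer rather than alongside
the data, so a read can tell which copy in a mirror or RAIDZ stripe is
lying and rewrite it from a good one. Roughly like this toy sketch (an
illustration of the concept only, not ZFS code; the block size, copy
count, and checksum here are simplified stand-ins):

    /*
     * Toy illustration of a self-healing read: the block's checksum
     * lives apart from the data, so a read can detect silent
     * corruption and repair a bad copy from a redundant one.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLKSZ   4096
    #define NCOPIES 3   /* stand-in for mirror/RAIDZ2 redundancy */

    /* Simplified Fletcher-style sum; ZFS uses fletcher4 or sha256. */
    static uint64_t cksum(const uint8_t *buf, size_t len)
    {
        uint64_t a = 0, b = 0;
        for (size_t i = 0; i < len; i++) {
            a += buf[i];
            b += a;
        }
        return (b << 32) ^ a;
    }

    /*
     * Return 0 and a good block in 'out' if any copy matches the
     * expected checksum; rewrite bad copies from the good one.
     */
    int read_verified(uint8_t copies[NCOPIES][BLKSZ],
                      uint64_t expected, uint8_t out[BLKSZ])
    {
        int good = -1;

        for (int i = 0; i < NCOPIES; i++)
            if (cksum(copies[i], BLKSZ) == expected) {
                good = i;
                break;
            }

        if (good < 0)
            return -1;  /* every copy is corrupt: unrecoverable */

        for (int i = 0; i < NCOPIES; i++)
            if (i != good && cksum(copies[i], BLKSZ) != expected) {
                memcpy(copies[i], copies[good], BLKSZ); /* self-heal */
                fprintf(stderr, "repaired copy %d from %d\n", i, good);
            }

        memcpy(out, copies[good], BLKSZ);
        return 0;
    }

A RAID controller alone can't do this, because it has no idea what the
data was supposed to be; it can only tell you whether the parity is
internally consistent.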
As for what mmap() doesn't do right: I started using memory-mapped
files back in the early '80s on VMS on a VAX-11/780, when a colleague
and I were converting a database from TOPS-10 to VMS. Perhaps I am
misunderstanding your dislike for mmap(), but please, enlighten me. It
was my understanding at the time that it was akin to swapping/virtual
memory using an MMU. The difference was that instead of using the main
paging area, the kernel would page to and from an actual file. Why
would mmap() be a bad thing when it's hooked into the kernel, and
possibly the hardware, at such a low level?
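For concreteness, here is the model I have in mind, as a minimal
sketch in present-day POSIX C (not the VMS interface we used back
then):

    /*
     * Minimal mmap() demo: count newlines in a file without a single
     * read() call. Touching the mapping faults pages in from the
     * file, just as the pager would fault them in from swap -- the
     * backing store is simply a regular file instead of the paging
     * area.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
        if (st.st_size == 0) return 0;  /* nothing to map */

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE,
                       fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long nl = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n')
                nl++;           /* each access may page-fault */

        printf("%ld lines\n", nl);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }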
art k.