One thing to remember about Mach is that it really was a *research* project. Some of the
things that have been complained about, e.g. “pointless” or “needless” abstraction and
layering, were done specifically to examine the effects of having those layers of
abstraction. Does their presence enable different approaches to problems? Do they enable
new features altogether? What’s given up by having them? And so on.
Just as an example, a lot of the complexity in the Mach VM system comes from the idea that
it could provide a substrate for all sorts of different types of systems, and it could
have all sorts of different mechanisms underneath supporting it. This means that Mach’s
creators got to try things like dedicated network virtual memory, purpose-specific
pagers, compressing pagers, etc. You may not need as much flexibility in a non-research
system.
For another example, Mach did a lot of extra work around things like processor sets that
wouldn’t be needed on (say) a dual-CPU shared-cache uniform-memory system, but that turns
out to be important when dealing with things like systems with a hierarchy of CPUs, caches,
and memories. Did they know about all the possible needs for that before they started?
Having met some of them, the people who created and worked on Mach were passionate about
exploring the space of operating system architecture and worked to create a system that
would be a good vehicle for that. That wasn’t their only goal—they were also part of the
group creating what was at the time CMU’s next-generation academic computing
environment—but the sum of their goals generally led to a very pragmatic approach to
making things possible to try while also shipping.
-- Chris