On Wed, Sep 1, 2021 at 5:58 PM Dan Cross <crossd(a)gmail.com> wrote:
[snip]
First, thank you for all of the thoughtful responses, both on-list and off.

An interesting theme in many of the responses was essentially questioning
whether the underlying OS still matters, now that the focus of development
has shifted to higher levels. E.g., we now provision components of our
enormously large and complicated distributed applications with building
blocks like containers, not physical machines, let alone individual
processes. That is certainly a trend, but it strikes me that those
containers have to run somewhere, and at some point, we've still got
instructions executing on some CPU, modifying words of memory, registers,
etc.; presumably all of that runs under the control of an operating system.
It is a worthwhile question to ask whether that operating system still
matters at all: what we have works, and since it's so hidden behind layers
upon layers of abstraction, do we really care what it is? But I claim that
it does, perhaps more than most folks realize. Certainly, there are metrics
that people care about (tail latency, jitter, efficiency at the 90th, 95th,
99th percentile...) and OS effects can have outsized impacts there; Mothy's
talk alludes to this when he talks about all of the hidden processing
that's happening all over a modern computer; eventually some of that
trickles onto the cores that are running one's containerized Node
application or whatever (lookin' at you, SMM mode...). At the end of the
day, the code we care about still runs in some process under some OS on
some bit of physical hardware, regardless of all of the abstractions we've
placed on top of those things. What that system does, and the abstractions
that its interface provides to programs, still matters.
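
To make that concrete, here is a minimal sketch (plain C and POSIX
clock_gettime; a toy for illustration, not anyone's production tool) of the
sort of probe people use to see OS-induced jitter: spin on a core,
timestamp every pass through the loop, and look at the distribution of the
gaps.

/*
 * A toy jitter probe: spin, timestamp each pass through the loop, and
 * record the gaps.  Any gap much larger than the cost of the loop itself
 * is time taken from the "application" by the OS, an interrupt, SMM, or
 * some other hidden processing.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define SAMPLES 1000000

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

static int cmp(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static uint64_t gap[SAMPLES];
    uint64_t prev = now_ns();

    /* Tight loop: record the elapsed time between consecutive samples. */
    for (int i = 0; i < SAMPLES; i++) {
        uint64_t t = now_ns();
        gap[i] = t - prev;
        prev = t;
    }

    /* Sort the gaps and report a few percentiles plus the worst case. */
    qsort(gap, SAMPLES, sizeof gap[0], cmp);
    printf("p50 %llu ns  p90 %llu ns  p99 %llu ns  max %llu ns\n",
        (unsigned long long)gap[SAMPLES / 2],
        (unsigned long long)gap[SAMPLES * 90 / 100],
        (unsigned long long)gap[SAMPLES * 99 / 100],
        (unsigned long long)gap[SAMPLES - 1]);
    return 0;
}

Pin it to a core with whatever your system offers (taskset, cpusets, ...)
and the p99 and max columns are where all of that hidden processing shows
up, no matter how many layers of abstraction sit above it.
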
Perhaps another question worth asking is, does it make sense to look at
different models for those systems? My subjective impression is that, back
in the 60s and 70s, there was much greater variation in system
architectures than today. A common explanation for this is that we didn't
know how to build systems at the time, so folks threw a lot of stuff at the
wall to see what would stick. But we no longer do that...again, Mothy
alludes to this in his brief survey of OSDI papers: basically, new systems
aren't being presented. Rob Pike also lamented that state of affairs 20
years ago, so it's been going on for a while. Does that mean that we've
come up with a recipe for systems that work and work well, and therefore we
don't need to rethink those basic building blocks? Or does that mean that
we're so used to our systems working well enough that we've become myopic
about their architecture, and thus blind to their faults?
- Dan C.