From: Richard Salz
Having read this, and seen the subsequent discussion, I think both sides have
good points.
What I perceive to be happening is something I've described previously, but
never named: as a system scales up, it can become necessary to take one
subsystem which did two things, and split it up so there's a custom subsystem
for each.
I've seen this a lot in networking; I've been trying to remember some of the
examples I've seen, and here's the best one I can come up with at the moment:
having the routing directly track 'unique-ID network interface names' (i.e.
interface 'addresses' - think 48-bit IEEE interface IDs). In a small network,
this works fine for routing traffic, and as a side benefit, gives you
mobility. Doesn't scale, though - you have to build an 'interface ID to
location name mapping system', and use 'location names' (i.e. 'addresses') in
the routing.
So classic Unix fork() does two things: i) creates a new process, and ii)
replicates the environment/etc. of an existing process. (In early Unix, the
latter was pretty simple, but as the paper points out, it has now become
a) complex and b) expensive.)
I think the answer has to include decomposing the functionality of the old
fork() into several separate sub-primitives (albeit not all necessarily
directly accessible to the user): a new-process primitive, which can be
bundled with a number of different alternatives (e.g. i) exec(), ii)
environment replication, iii) address-space replication, etc.) - perhaps more
than one at once.
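For what it's worth, Linux's clone() already points in this direction: one
new-process primitive, plus flags that select which pieces of the parent's
state are shared rather than replicated. A minimal, Linux-specific sketch
(illustrative only, error handling trimmed):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #define STACK_SIZE (1024 * 1024)

    static int child(void *arg)
    {
        printf("child sees: %s\n", (char *)arg);
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);
        if (stack == NULL)
            return 1;

        /* Each flag opts one piece of the parent's state into sharing;
         * omitting it means that piece is replicated for the child.
         * CLONE_FILES shares the file-descriptor table; adding CLONE_VM
         * would share the address space too.  SIGCHLD makes the child
         * reapable with waitpid(), as with fork(). */
        pid_t pid = clone(child, stack + STACK_SIZE,  /* stack grows down */
                          CLONE_FILES | SIGCHLD, "hello");
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }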
So the shell would want a form of fork() which bundled in i) and ii), but
large applications might want something else. And there might be several
variants of ii): e.g. one might replicate only environment variables, another
might also replicate I/O channels, etc.
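And POSIX's posix_spawn() is roughly the shell-shaped bundle already: one
call that creates the process, hands it an environment, optionally adjusts
I/O channels, and exec()s. A small sketch (using /bin/ls merely as a
stand-in command):

    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };

        /* One call bundles: create the process, replicate the caller's
         * environment (passed explicitly as environ), and exec() the
         * program.  The two NULLs are the file-actions and attributes
         * hooks, the place to adjust I/O channels before the exec. */
        int err = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn: %s\n", strerror(err));
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }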
In a larger system, there's just no 'one size fits all' answer, I think.
Noel