Looking at other systems that were available roughly around the time of Unix (TENEX, Multics), it strikes me that Unix was a bit of an odd duck in the way it handled exec: destructively overlaying the memory of the user portion of a process with a new image. Am I wrong here?
That's a great paper and I've really enjoyed revisiting it over the years. But while it does a great job of explaining how the Unix mechanism worked, and touches on the "why", it doesn't contrast it with other schemes. I suppose my question could be rephrased as: if the early Unix implementers had had more resources to work with, would they have chosen a model more along the lines used by Multics and TWENEX, or would they have elected to do basically what they did? That's probably impossible to answer, but it gets at what they thought about how other systems operated.
Early versions of PDP-7 Unix used the same process model as Tenex: one process per terminal, which alternated between running the shell and a user program. So exec() loaded the user program on top of the shell. Indeed, this wasn't even a syscall: the shell itself wrote a tiny program loader into the top of memory that read in the new program (whose file was already open for reading) and jumped to it. Likewise, exit() was a specialized exec() that reloaded the shell. The Tenex and Multics shells had more memory to play with and didn't have to use these self-overlaying tricks[*]: they loaded your program into available memory and called it as a subroutine, which accounts for the name "shell".
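To make that concrete, here's a rough sketch in modern C of the one-process model. The prompt and paths are my own illustration, not anything from the PDP-7 code: the "shell" execs the program over itself, and a program's exit in this model would itself be an exec of the shell.

    /* One-process model: no fork(), no wait().  The shell's memory
     * is simply overlaid by the program it runs. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char cmd[128];

        printf("@ ");                  /* toy prompt */
        if (scanf("%127s", cmd) != 1)
            return 1;

        execl(cmd, cmd, (char *)0);    /* overlay the shell with the program */

        /* Reached only if the exec failed.  A real program in this
         * model would end with execl("/bin/sh", ...) as its exit(). */
        perror(cmd);
        return 1;
    }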
Presumably the virtual memory hardware could also be used to protect the shell from having its in-memory image trashed by a malicious or errant program.
[snip]
* It doesn't compose.
* It is insecure by default.
* It is slow: a process has roughly 25 properties in addition to its memory and hardware state, and for each one fork() must decide whether or not to copy it into the child. That remains true even with COW (which is itself a Good Thing and can and should be provided separately).
* It is incompatible with a single-address-space design.
In short, spawn() beats fork() like a drum, and fork() should be deprecated. To be sure, the paper comes out of Microsoft Research, but I find it pretty compelling anyway.
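For contrast, a spawn-style launch in portable C is a single call that creates the child already running the new image, with no intermediate copy of the parent. A minimal sketch using the standard posix_spawnp(), with error handling trimmed and "echo hello" as a stand-in command:

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "echo", "hello", NULL };

        /* One call: no forked copy of the parent is ever made. */
        int err = posix_spawnp(&pid, "echo", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawnp: error %d\n", err);
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }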
[*] My very favorite self-overlaying program was the PDP-8 bootstrap for the DF32 disk drive. You toggled in two instructions at locations 30 and 31, meaning "load disk registers and go" and "jump to self" respectively, hit the Clear key on the front panel (which cleared all registers), and started the machine at 30.
The first instruction told the disk to start reading sector 0 of the disk into location 0 in memory (because all the registers were 0, including the disk instruction register, where 0 = READ), and the second instruction kept the CPU busy-waiting. As the sector loaded, the two instructions were overwritten by "skip if disk ready" and "jump to previous address", which together waited until the whole sector had been loaded. Then the OS could be loaded using the primitive disk driver in block 0.
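Just for fun, here's a toy simulation in C of that self-overlaying trick. The opcodes are symbolic stand-ins of my own (not real PDP-8 encodings) and the sector length is made up; all it shows is the two toggled-in words being replaced out from under the spinning CPU:

    #include <stdio.h>

    #define SECTOR 32                 /* toy sector length, in words */
    enum op { FILLER, READ_AND_GO, JMP_SELF, SKIP_IF_READY, JMP_PREV, DONE };

    int main(void)
    {
        enum op sector0[SECTOR] = { FILLER };  /* "sector 0" on the disk */
        sector0[030] = SKIP_IF_READY; /* lands on top of READ_AND_GO */
        sector0[031] = JMP_PREV;      /* lands on top of JMP_SELF */
        sector0[032] = DONE;          /* stands in for the loaded driver */

        enum op mem[64] = { FILLER };
        mem[030] = READ_AND_GO;       /* the two words toggled in */
        mem[031] = JMP_SELF;

        int pc = 030;                 /* Clear, then start at location 30 */
        int next = -1;                /* next word the "disk" delivers */

        for (;;) {
            if (next >= 0 && next < SECTOR)   /* one word lands per cycle */
                mem[next] = sector0[next], next++;

            switch (mem[pc]) {
            case READ_AND_GO:   next = 0; pc++;                 break;
            case JMP_SELF:      /* spin until overwritten */     break;
            case SKIP_IF_READY: pc += (next >= SECTOR) ? 2 : 1; break;
            case JMP_PREV:      pc--;                           break;
            case FILLER:        pc++;                           break;
            case DONE:
                printf("sector 0 loaded; control reaches %04o\n", pc);
                return 0;
            }
        }
    }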
One wonders if the PDP-8 was one of Sergeant's inspirations?
- Dan C.