On Sun, Aug 1, 2021 at 5:55 PM Dan Cross <crossd@gmail.com> wrote:
> Looking at other systems that were available roughly around the time of
> Unix (TENEX, Multics), it strikes me that Unix was a bit of an odd duck
> with the way it handled exec in terms of destructively overlaying the
> memory of the user portion of a process with a new image; am I wrong here?
See dmr's paper at <https://www.bell-labs.com/usr/dmr/www/hist.html> for
details, but in short exec and its equivalents elsewhere have always
overlaid the running program with another program. Early versions of PDP-7
Unix used the same process model as Tenex: one process per terminal, which
alternated between running the shell and a user program. So exec() loaded
the user program on top of the shell. Indeed, this wasn't even a syscall;
the shell itself wrote a tiny program loader into the top of memory that
read in the new program from its already-opened file and jumped to it.
Likewise, exit() was a specialized exec() that reloaded the shell. The
Tenex and Multics shells had more memory to play with and didn't have to
use these self-overlaying tricks[*]: they loaded your program into
available memory and called it as a subroutine, which accounts for the name
"shell".
So it was the introduction of fork(), which came from the Berkeley Genie
OS, that made the current process control regime possible. In those days,
fork() wrote the current process out to the swapping disk and set up the
process table with a new entry. For efficiency, the in-memory version
became the child and the swapped-out version became the parent. Instantly
the shell was able to run background processes by just not waiting for
them, and pipelines (once the syntax was invented) could be handled with
N-1 processes in an N-stage pipeline. Huge new powers landed on the
user's head.
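Here's a sketch of that regime in modern POSIX C; run() and the background
flag are made-up names for illustration. Foreground commands are waited
for; background ones are simply not waited for, exactly as described above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void run(char *const argv[], int background)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {            /* child: overlay self with the program */
            execvp(argv[0], argv);
            perror("execvp");      /* reached only if exec failed */
            _exit(127);
        }
        if (!background)           /* foreground: wait; background: don't */
            waitpid(pid, NULL, 0);
    }

    int main(void)
    {
        char *date[] = { "date", NULL };
        run(date, 0);              /* like "date" */
        run(date, 1);              /* like "date &" */
        return 0;
    }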
Nowadays it's an open question whether fork() makes sense any more. "A
fork() in the road" [Baumann et al. 2019]
<https://www.microsoft.com/en-us/research/uploads/prod/2019/04/fork-hotos19.…>
is an interesting argument against fork():
* It doesn't compose.
* It is insecure by default.
* It is slow: a process has about 25 properties besides its memory and
hardware state, and each one must either be copied or deliberately not
copied. Even copy-on-write doesn't save it (COW is itself a Good Thing,
but it can and should be provided separately).
* It is incompatible with a single-address-space design.
In short, spawn() beats fork() like a drum, and fork() should be
deprecated. To be sure, the paper comes out of Microsoft Research, but I
find it pretty compelling anyway.
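For comparison, a sketch of the spawn() style using the standard POSIX
posix_spawn() call, which creates the child and loads the new image in one
step, with no intermediate copy of the parent:

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "date", NULL };
        int rc = posix_spawn(&pid, "/bin/date", NULL, NULL, argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawn: error %d\n", rc);
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }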
[*] My very favorite self-overlaying program was the PDP-8 bootstrap for
the DF32 disk drive. You toggled in two instructions at octal locations 30
and 31, meaning "load disk registers and go" and "jump to self"
respectively, hit the Clear key on the front panel (which cleared all
registers), and started the machine at 30.
The first instruction told the disk to start reading sector 0 into
location 0 in memory (because all the registers were 0, including the disk
instruction register, where 0 = READ), and the second instruction kept the
CPU busy-waiting. As the sector loaded, the two instructions were
overwritten by "skip if disk ready" and "jump to previous address", which
together waited until the whole sector had been loaded. Then the OS could
be loaded using the primitive disk driver in block 0.
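To make the self-overlay concrete, here is a toy C simulation of that
bootstrap; the octal instruction words are illustrative placeholders, not
authentic PDP-8 opcodes.

    #include <stdio.h>

    #define BOOT 030               /* bootstrap toggled in at octal 30 */

    static unsigned short mem[4096];

    int main(void)
    {
        /* Operator toggles in the two-word bootstrap. */
        mem[BOOT]     = 06603;     /* "load disk registers and go" (placeholder) */
        mem[BOOT + 1] = 05031;     /* "jump to self" (placeholder) */

        /* Sector 0 as it sits on disk: its words at offsets 030/031 hold
         * the second-stage pair described above. */
        unsigned short sector0[0200] = { 0 };
        sector0[030] = 06621;      /* "skip if disk ready" (placeholder) */
        sector0[031] = 05030;      /* "jump to previous address" (placeholder) */

        /* The disk transfer sweeps sector 0 into memory from location 0,
         * overwriting the bootstrap itself along the way... */
        for (int a = 0; a < 0200; a++)
            mem[a] = sector0[a];

        /* ...so the CPU, still spinning at 030/031, now executes the
         * "wait for the whole sector" pair instead. */
        printf("mem[030]=%04o mem[031]=%04o\n",
               (unsigned)mem[BOOT], (unsigned)mem[BOOT + 1]);
        return 0;
    }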