At Sat, 9 Apr 2022 08:10:21 -0400, tytso <tytso(a)mit.edu> wrote:
> Subject: Re: [TUHS] Interesting commentary on Unix from Multicians.
>
> To be fair, Multics had it a lot easier, because core memory meant
> that every single memory access could be considered "durable storage",
> so there was no real need for "fsync(2)" or "msync(2)".
As Dan mentioned, actual "core" memory was long gone by the time Multics
was in commercial use.  Core memory was only used in the
first-generation 6000 series (and presumably the GE-645 too),
i.e. before 1975.
One source says there was cache memory in the CPUs as early as 1974.
The later machines, such as the DPS-8/70M ("M" for Multics) definitely
came with 8KW cache memory that could be expanded up to 32KW.
These systems were also generally used in multi-processor
configurations, especially for Multics, with multiple CPUs and
potentially multiple memory units (and multiple IOMs (I/O
Multiplexers), later IMUs (Information Multiplexer Units)).
As for how this cache memory was used and how its consistency with main
memory and with secondary storage was maintained, I don't know myself,
but of course much documentation about Multics internals is available,
along with all the source code, and it is running on simulators today.
>> Multics dynamic linking makes any unix-ish implementation look most
>> ugly, incredibly insecure, and horribly inefficient.  Unix shared
>> libraries are a bad hack to simulate what Multics did natively and
>> naturally, with elegance, and with direct hardware security support.
>
> What if we compared Multics dynamic linking to Solaris Doors?
I don't think Doors is really comparable to Multics dynamic linking.
As I understand it, Doors is more like RPC than directly shared code.
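
For anyone who hasn't poked at them, here's a rough, untested sketch of
the Doors pattern that I think shows the RPC flavour (the rendezvous
path and the message convention are made up, and of course it would
only build on Solaris or illumos):

    /* Sketch only: a door server.  The client "calls a procedure" in
     * this process; arguments and results are copied across, which is
     * why it feels like RPC rather than shared code. */
    #include <door.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>        /* fattach() */
    #include <unistd.h>

    #define DOOR_PATH "/tmp/demo_door"   /* made-up rendezvous file */

    /* Runs in the *server's* process on every door_call(). */
    static void serve(void *cookie, char *argp, size_t arg_size,
                      door_desc_t *dp, uint_t n_desc)
    {
        char reply[] = "pong";
        door_return(reply, sizeof(reply), NULL, 0);  /* copy result back */
    }

    int main(void)
    {
        int fd = door_create(serve, NULL, 0);

        close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
        fattach(fd, DOOR_PATH);     /* publish the door in the namespace */
        printf("door server ready at %s\n", DOOR_PATH);

        /* A client elsewhere would do roughly:
         *     int d = open(DOOR_PATH, O_RDONLY);
         *     char buf[64];
         *     door_arg_t a = { "ping", 5, NULL, 0, buf, sizeof(buf) };
         *     door_call(d, &a);   -- control enters serve() over there
         * i.e. a call *into another process*, not shared code. */
        pause();
        return 0;
    }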
In Multics all code, even kernel code, is generally shared -- in broad
terms user processes just "gate" (i.e. jump) directly to code in a
lower/inner "ring" to run system calls (e.g. Ring 0).  Code doesn't even
have to exist when first called (similar to how methods don't have to
pre-exist in Smalltalk) -- the user gets a "linkage fault" and can write
and compile the missing code and then continue the program at the fault
point.
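
The nearest thing Unix gives you is lazy binding, which you can
caricature by hand with dlopen(3)/dlsym(3).  A minimal sketch --
"libhello.so" and its hello() function are entirely hypothetical here:

    /* Resolve a symbol on first call, a (weak) imitation of a Multics
     * linkage fault.  Build with -ldl on most systems. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*hello_fn)(void);

    static void call_hello(void)
    {
        static hello_fn fn;        /* NULL until the first call "faults" */

        if (fn == NULL) {          /* the moral equivalent of the fault */
            void *h = dlopen("libhello.so", RTLD_NOW);
            if (h == NULL) {
                /* A Multics user could write and compile the missing
                 * code right here and continue; Unix just gives up. */
                fprintf(stderr, "unresolved: %s\n", dlerror());
                exit(1);
            }
            fn = (hello_fn)dlsym(h, "hello");
            if (fn == NULL) {
                fprintf(stderr, "no such symbol: %s\n", dlerror());
                exit(1);
            }
        }
        fn();                      /* linked now; later calls are direct */
    }

    int main(void)
    {
        call_hello();
        call_hello();              /* second call takes the fast path */
        return 0;
    }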
Also there's no such thing as "relocation" of code in Multics --
segmented memory means all segments have their own zero address and so
all code is pure procedure and can be executed by many processes
simultaneously simply by mapping the shared code segments into each
process (as read-only, executable). Shared libraries of a sort (bound
segments) do exist, partly because of performance issues (dynamic link
faults are expensive in Multics, especially on hardware of the day), and
partly because of addressing limitations (only a quite limited number of
segments can be attached to any given process), and partly because
otherwise there would be effectively one global namespace for all
symbols exposed by each compilation unit in a user's $PATH.
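
The sharing half of that translates fairly directly into Unix terms:
map a file of position-independent code into several processes,
read-only and executable.  A hedged sketch (the file "code.bin" is
made up, and unlike a Multics segment the code must be PIC because the
mapping can land anywhere):

    /* Map a (hypothetical) file of position-independent machine code
     * so that every process mapping it shares the same physical pages,
     * roughly as processes share a Multics code segment. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("code.bin", O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

        /* MAP_SHARED + PROT_READ|PROT_EXEC: read-only, executable,
         * and shared -- no copy per process. */
        void *seg = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC,
                         MAP_SHARED, fd, 0);
        if (seg == MAP_FAILED) { perror("mmap"); return 1; }

        /* Unlike a Multics segment there is no per-segment address
         * zero: this lands at some arbitrary address, which is exactly
         * why Unix needs PIC and/or relocation and Multics didn't. */
        printf("code mapped at %p\n", seg);

        munmap(seg, st.st_size);
        close(fd);
        return 0;
    }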
One of the difficult things for Unix people to wrap their heads around
is the difference between a "process" in Unix and in Multics. One way
to think about it that has helped me figure it out is this: In Multics
you have a "shell" program which you interact with just like with "sh"
in Unix, but it's not a separate process in Multics -- just a unit of
code that effectively runs when no other program is running. So
generally a user only runs one process at a time, even though it may
transition through hundreds or thousands of programs during its
existence.
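
You can caricature that in Unix with a toy "shell" that never forks,
loading each command as a shared object into the login process itself.
Purely a sketch, with a made-up convention that each command exports a
main_entry() function:

    /* Toy one-process "shell": commands run *in this process*, the way
     * Multics commands run within the user's single process.
     * Build with -ldl on most systems. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char name[128], path[160];

        for (;;) {
            printf("multics-ish> ");
            fflush(stdout);
            if (scanf("%127s", name) != 1 || strcmp(name, "logout") == 0)
                break;

            snprintf(path, sizeof(path), "./%s.so", name);
            void *h = dlopen(path, RTLD_NOW);
            if (h == NULL) {
                fprintf(stderr, "%s: %s\n", name, dlerror());
                continue;
            }
            void (*cmd)(void) = (void (*)(void))dlsym(h, "main_entry");
            if (cmd != NULL)
                cmd();             /* no fork(), no exec() */
            else
                fprintf(stderr, "%s: no main_entry\n", name);
            dlclose(h);
        }
        return 0;
    }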
Multics did eventually have the concept of multi-threading and a form of
co-operative multi-tasking too, though I don't know as much about these.
I think the latter could effectively be like having multiple processes
in one login session in Unix, and it is said this facility was used to
implement network service daemons, for example.
Now I don't believe the Multics concept of "process" is strictly
necessary to support its concepts of Single Level Storage or even
Dynamic Linking. I believe those could be used equally well in a system
with more Unix-like processes, though some mix of concepts might be
required to truly benefit from all the features of Multics-style dynamic
linking.
One way I've been thinking of Multics lately is that it's more like
modern virtualization (e.g. Xen), but with added OS services such as a
filesystem and controlled access to shared "inner ring" code; and indeed
that's more or less how it was described in early Multics concept
papers, assuming you transmogrify the old terminology of the day into
modern lingo. Each user gets a computing environment that gives them
the impression they have full use of the machine, and it comes along
with pre-existing useful code and various useful services, including a
filesystem and the ability to communicate with other users and other
machines.
--
Greg A. Woods <gwoods(a)acm.org>
Kelowna, BC +1 250 762-7675 RoboHack <woods(a)robohack.ca>
Planix, Inc. <woods(a)planix.com> Avoncote Farms <woods(a)avoncote.ca>