I probably should add one more thing... RT11 and RSX, and in particular the DEC FTN compiler, supported thunks and overlays - i.e., larger programs - before UNIX did. IIRC, they needed them because by then the FTN subsystem required overlays just to run itself. So, assuming my memory is correct, this was the reason Fred rewrote the V7 code, and DEC pushed it out as part of the v7m release. Ken's original overlay support code was not sufficient for the DEC language tools.
I don't remember if Ultrix-11, or v7m for that matter, ultimately got the FTN compiler. That said, Paul W might remember, as he did the linker work for Ultrix, moving the VMS linker to Ultrix to support the 'Technical Languages Group' (TLG). Ultrix-32 certainly did get FTN and, I think, a couple of the other RSX and RSTS languages; but my memory is that Fred's work was to support overlays in V7 so that V7 could support those tools. This was all during the Unix wars inside of DEC. Fred was part of the 'Telephone Industries Group' (TIG) in MRK.
On Thu, Aug 3, 2023 at 6:54 PM Clem Cole <clemc(a)ccc.com> wrote:
below... [and I meant to answer the second ½ of your question before]
On Thu, Aug 3, 2023 at 6:09 PM Will Senn <will.senn(a)gmail.com> wrote:
Clem,
Oh, so... Without I/D, you're stuck with 64k max per process; with I/D, you can use 64k for I and 64k for D.
Exactly but ... more in a minute.
Was that it, or were there other tricks to get
even more allocated
(didn't the 11 max out at 256k)?
Different issues... the MMU on the 40 class and the 45/55 allows 256K [18 bits]; the MMU for the 70 class allows 4M [22 bits]. Unibus I/O controllers had 18 bits of address, and RH70 controllers could support 22 bits of extended address - see the processor and peripheral handbooks for details [or I can explain offline].
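[For the arithmetic: a 16-bit virtual address covers 2^16 = 64KB; an 18-bit physical address covers 2^18 = 256KB; and a 22-bit one covers 2^22 = 4MB.]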
What the PDP-11 handbooks call 'pages' are really variable-length segments, sized in 64-byte units (up to 8K each). So the MMU is set up to allow the processor to address 64K, or 64KI/64KD at a time if you have the I/D hardware, and the MMU registers determine which 'pages' are being addressed.
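To make that concrete, here is a rough sketch in C of the translation the MMU does on each reference - an illustration of the scheme only, not DEC's hardware; the par[] array stands in for the Page Address Registers:

    /* Sketch of PDP-11 MMU address translation (illustrative only).
     * va    - 16-bit virtual address
     * par[] - stand-in for the 8 Page Address Registers of the current
     *         mode/space; each holds a base expressed in 64-byte blocks */
    unsigned long
    translate(unsigned va, unsigned par[8])
    {
        unsigned apf   = (va >> 13) & 07;   /* which of the 8 'pages' */
        unsigned block = (va >> 6) & 0177;  /* 64-byte block within the page */
        unsigned disp  = va & 077;          /* byte within the block */

        /* physical address is 18 or 22 bits, depending on the CPU class */
        return (((unsigned long)par[apf] + block) << 6) | disp;
    }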
But you could overlay things ... [0405 files] with 'thunks'.
So, to allow a process (or the kernel) to have more than 64K, overlays can be loaded into memory; since the total physical memory space is 18 or 22 bits' worth, if the kernel supports overlays, processes can get bigger [which is part of your first question].
V7 was when 0405 [text-only] overlays were added. With DEC's release of v7m, Fred Cantor rewrote the overlay code and it became more general [and that version would go into 2.9BSD].
So the programmer needed to decide what to put into which overlay. For processes, the kernel can swap out segments and replace them as needed. The key is that the linker needs to generate near/far-style calls, and it can be a PITA. If you want to call a routine that is not currently mapped into memory, the 'thunk' needs to ask the OS to switch it in - see the sketch below. It took great thought about what was going to be stored where.
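Very loosely, a generated thunk boils down to something like this - a sketch only; the names and the overlay-switch call here are made up, not the actual 0405/a.out machinery:

    /* Loose sketch of a linker-generated 'far call' thunk (hypothetical). */
    extern void ov_switch(int);      /* made-up stand-in for whatever kernel
                                      * or runtime call remaps the overlay */
    static int cur_ov = -1;          /* overlay currently resident */

    int
    far_call(int ov, int (*fn)())
    {
        if (ov != cur_ov) {
            ov_switch(ov);           /* ask the OS to map overlay 'ov' */
            cur_ov = ov;
        }
        return (*fn)();              /* target routine is now addressable */
    }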
The kernel could be compiled either with, or without, separate I/D. The only reason not to is if you didn't have more than 64k - or were there other reasons?
Well, by V6, UNIX needed at least 64K of physical memory, and it was really slow with anything less than 256K. For the kernel, using I/D allowed the kernel to grow more easily. By the time people were trying to cram networking into it, running on anything less than an 11/44 was pretty hard.
That said, Able made an alternate MMU called the ENABLE that allowed 4M of memory on a Unibus system. It worked as a cache/bus repeater: you set the internal MMU to point to it and then used its MMU instead. Very cool, and a soft spot for me. I ran an 11/60 [which is 40 class] with 2M of memory in Teklabs with the first ENABLE board.
For whatever it's worth, even with 4M the kernel had started to become a problem for V7 on an 11/70. Data buffers eat a lot of memory.
So, besides the kernel, what apps tended to be split? If I remember correctly, vi was one, Pascal another?
Anything that started to get big ;-)
People ran out of 64K of data space and text space fairly fast. With the 32-bit Vax, the UNIX 'small is beautiful' thinking started to fall away. Rob has an excellent paper on this -> "cat -v considered harmful"; BSD UNIX and the Vaxen greatly fueled that trend. People added features and thought less about what functionality was really needed [so now we have Gnu - but I digress]. Werner and the BSD 2.9 folks are to be commended for what they did with so few resources. They moved things back from the Vax by using the overlays, but if you wanted any semblance of performance, you needed the overlays to stay resident, so you needed that full 4M of memory.
As for this specific question, the first two subsystems for the 11 that ran out of text space were indeed the vi and Pascal subsystems (Joy having had his hand in both, BTW). But they were hardly the only ones once the genie was out of the bottle. Data space quickly became the real issue; people really wanted larger heaps in particular. In fact, by the 1990s I knew of few programs that ran out of 32 bits' worth of text space, but many that started to run out of 32 bits of data space -> hence Alpha. BTW, DEC took a performance hit originally, and there was a huge discussion at the time about whether 64 bits was really needed.