On Tue, Apr 15, 2025 at 4:05 PM Folkert van Heusden <folkert(a)vanheusden.com>
wrote:
Most of them do, the ones for instruction-emulation
that is. MMU emulation
not completely
That is a little worrisome to me. But of course, which ones? While I knew,
and probably worked with, and thus probably have met whoever wrote the
diagnostics, I do not think I can come up with the name of the guilty
party here to ask.
As I said, as for where to look, the stack-growth logic is the real suspect,
because the system runs fine at first, and it's not until you start to put a
load on it and start to swap a little that you get some failures. So some
type of memory pressure is occurring. I would guess that a number of mallocs
have caused breaks to enlarge memory, but that happens a lot with so many
commands, so I reluctantly think there is something else wonky. Stack growth
is surprisingly rare due to how processes are created. IIRC, a stack on a
separate-I/D system often starts at a few KB, which is usually "enough."
Big compiles like the kernel, however, might push that limit, and then the
red/yellow-zone stuff starts to get exercised.
What I don't know is whether there are diagnostics for that logic. What I do
know is that the best description of what is >>supposed<< to happen is pages
278-279 of my 1981 processor handbook (a.k.a. EB-19402-20,
https://bitsavers.org/pdf/dec/pdp11/handbooks/EB-19402-20_PDP-11_Processor_…
).
I think there are two parts to this... You have the CPU itself generating
16-bit addresses (user mode) being mapped through your MMU. If the MMU has
a >>slightly<< wonky corner case, I wonder if that is tripping up the stack
fault logic somehow, so 2.11BSD is getting confused. Since this is
happening to >>user-space<< code (we never "grow" kernel stacks), it's not
fatal to the OS, but it will cause processes to go south.
I can not guarantee this is the issue, but it certainly would be a scenario
that would explain what you are seeing.
Clem