Great, now I'm having flashbacks to upgrading my 4-port serial card with
16450s and then 16550s in the early 90s.
On Fri, Jan 10, 2020 at 8:49 AM Clem Cole <clemc(a)ccc.com> wrote:
On Fri, Jan 10, 2020 at 10:00 AM Mary Ann Horton <mah(a)mhorton.net> wrote:
Yes, it was a real concern. Physical memory on the shared PDP-11 was
limited, and if everyone had a separate copy of vi running, the machine
would swap itself silly.
This only mattered if everyone had their own separate copy of vi
installed. The fix was to put vi in a single system directory, such as
/usr/ucb or /exptools. The instruction part of its memory would be
shared among all the users, resulting in much less swapping.
Actually it was much worse than that...
What Mary Ann points out was mostly true if your PDP-11 had DH11s
installed, which had deeper hardware buffering and 16-character DMA on
output. But these were expensive from DEC and also took up a 'full system
unit' in the CPU for 16 lines. Until Able (much later) released the
DMAX-11 (*a.k.a.* DH/DM), a DH11 clone on a single board, many
university sites did not use them, but used multiple DL-11/KL-11s instead.
If your system was configured with DL/KL11s or similar (CMU had its own,
called 'ASLIs' - asynchronous line interfaces), each character took one
interrupt for either input or output. Moreover, the UARTs that DEC
used, which were made by Western Digital, had 2 >>or fewer<< characters of
input buffering, so they could drop chars[1]. The ASLIs used a later chip
with a slightly better buffer IIRC, but I admit I've forgotten the details
(Jim Tetter probably remembers them).
So even a single line generated a huge interrupt load on a PDP-11. For
this reason, a lot of sites limited glass TTYs to speeds like 2400 or 4800
baud, not the full 9600.
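As a rough illustration of the arithmetic (assuming about 10 bit times per
character - start, 8 data, stop - and one interrupt per character in one
direction), a per-character interface like a DL-11 approaches a thousand
interrupts per second per line at 9600 baud. A quick C sketch:

    /* Back-of-the-envelope interrupt rates for per-character serial
     * interfaces.  Assumes 10 bit times per character and one interrupt
     * per character in one direction; purely illustrative. */
    #include <stdio.h>

    int main(void)
    {
        int bauds[] = { 1200, 2400, 4800, 9600 };
        int lines = 8;                   /* a modest terminal load */

        for (int i = 0; i < 4; i++) {
            int cps = bauds[i] / 10;     /* characters/second per line */
            printf("%5d baud: %4d ints/sec/line, %5d ints/sec for %d lines\n",
                   bauds[i], cps, cps * lines, lines);
        }
        return 0;
    }

At 9600 baud that is 960 interrupts per second per line; dropping to 2400
baud cuts it to 240, which is why those speed limits were common.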
DEC later released the DZ-11, which came in units of 8 ports per board.
Unfortunately, it was not DMA and the buffering was still pretty shallow.
Joy did a lot of work in the 4.1BSD DZ driver to cut down the
interrupts, because 9600 baud DZ lines could swamp a VAX, and when running
the BerkNet between systems (before UCB had Ethernet), 9600 baud serial
lines were standard.
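The general shape of that kind of fix - drain everything the hardware has
queued in one interrupt rather than taking an interrupt per character - is
sketched below. The register names and layout are hypothetical, not the
actual DZ-11 programming model or Joy's 4.1BSD driver:

    /* Hypothetical silo-draining receive interrupt.  Each read of rbuf
     * pops one queued entry (a character plus its line number and a
     * valid bit), so one interrupt can empty everything received since
     * the last service instead of costing one interrupt per character. */

    #define RBUF_VALID  0100000          /* "entry present" bit */

    struct serial_regs {
        volatile unsigned short csr;     /* control/status */
        volatile unsigned short rbuf;    /* reading pops the silo */
    };

    extern void tty_input(int line, int c);  /* hand char to tty layer */

    void
    serial_rint(struct serial_regs *sr)
    {
        unsigned short w;

        while ((w = sr->rbuf) & RBUF_VALID)
            tty_input((w >> 8) & 07, w & 0177);
    }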
[1] Two things:
A) The original WD UART chip had very limited buffering. The timing was
such that at high rates it could not be emptied in time to accept a second
character without the first being overwritten. This was a long-standing
issue for many UARTs well into the 1990s. The original chip NS built and
IBM used on the PC (the NS8250) was notorious for the same problem. By the
time of Motorola's 6881, it had 8 characters of buffering IIRC.
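From the driver's side the failure mode looks roughly like the sketch
below. The status bits and registers here are generic, made-up ones, not
the WD part's actual programming model: if the receive interrupt is not
serviced before the next character finishes assembling, the holding
register is overwritten and all the software sees afterward is an overrun
flag.

    /* Generic single-buffered UART receiver (hypothetical registers).
     * If a new character completes before the previous one was read,
     * the hardware overwrites it and sets an overrun bit; the lost
     * character cannot be recovered by software. */

    #define ST_DATA_READY  0x01   /* a received character is waiting */
    #define ST_OVERRUN     0x02   /* a new character arrived before the last was read */

    struct uart {
        volatile unsigned char status;
        volatile unsigned char data;
    };

    extern void tty_input_char(int c);
    extern void log_overrun(void);

    void
    uart_rx_interrupt(struct uart *u)
    {
        unsigned char st = u->status;

        if (st & ST_OVERRUN)
            log_overrun();            /* at least one character was lost */

        if (st & ST_DATA_READY)
            tty_input_char(u->data);  /* whatever survived */
    }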
B) As I understand the history, Gordon developed the original idea of the
UART at DEC for the PDP-1, but I'm not sure of the patent details. He does
not list the UART patent on his web site, although he does mention
inventing it. I have been under the impression CMU was somehow mixed up in
the patent and licensing of it, *i.e.* WD got the license from CMU to make
them, not DEC; which was part of why we had the ASLIs. Again, IIRC, we got
the UART chips from WD at cost and could make the ASLIs locally much
cheaper than DL-11s. >>I think<< the story was that one of Gordon's
students designed a chip, which WD fabbed and licensed. Before that DEC
had built UARTs on boards from transistors and later logic gates.