On Tue, May 29, 2018 at 9:10 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Wed, 30 May 2018, Johnny Billquist wrote:
>
>> Uh? Say what? What does XON/XOFF flow control have to do with 8bit data?
>
> Err, how do you send data that happens to be XON/XOFF? By futzing around
> with DLE (which I've never seen used)?
Right - you have to escape things and it is a real mess. I have seen some
of the SI/SO/DLE stuff in mechanical systems like ticker tape. I never saw
a real 8-bit interface try to do it and succeed -- it's messy, and I suspect
that when data overruns occur, all hell breaks loose. By the time of 8-bit
data at speeds like 9.6K, protocols such as UUCP or IP over serial did not
even bother. I suspect that in the old 5-bit Baudot days it was more popular
as a way to get a larger character set, but the speeds were much slower (and
the UART had not yet been invented by Gordon Bell).
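To make the escaping problem concrete, here is a minimal sketch of DLE
byte-stuffing in C (an illustration of the general technique, not any
particular historical protocol):

    #include <stdio.h>

    #define XON  0x11   /* ^Q */
    #define XOFF 0x13   /* ^S */
    #define DLE  0x10   /* data link escape */

    /* Prefix any byte the line would otherwise eat as flow control.
     * The receiver must undo this, and if an overrun drops a DLE,
     * the next byte is misinterpreted -- hence the mess. */
    static void send_escaped(unsigned char c, FILE *out)
    {
        if (c == XON || c == XOFF || c == DLE)
            putc(DLE, out);
        putc(c, out);
    }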
By the time of the PDP-11, hardware flow control was the norm in many (most)
interfaces. I never understood why DEC got ^S/^Q happy. You can really
see the issues with it on my PiDP-8 running DOS/8, because the flow control
is done by the OS and it is much too late to be able to react. So you get
screens full of data before the ^S is processed.
Once Gordon created the UART, Western Digital licensed it and made it into
a chip, which most folks used. DEC started to use the WD chipset in PDP-8
serial interfaces - it always struck me as strange that HW flow control was
not used more. The KL11/DL11 supported the wires, although the SW had to do
the work. As I said, McNamara supported it in the DH11; if I recall (I
don't have the prints any more), he put an AND gate on the interrupt
bit, so the OS does not get a transfer-complete interrupt unless the "I'm
ready" signal is asserted from the other side.
When I lectured on data comm to folks years ago, I liked to describe the ECMA
serial interface in this manner:
There are 6 signal wires and one reference (signal) ground:
1. Data XMT (output)
2. Data RCV (input)
3. I'm Alive (output)
4. He's Alive (input)
5. I'm Ready (output)
6. He's Ready (input)
Everything else is either extra protocol to solve some other problem, or is
for signal quality (i.e., shielding, line balancing, etc.). The names of
which signals match to which actual pins on the ECMA interface can be a
little different depending on the manufacturer of the Data Communication
Equipment (DCE, a.k.a. the modem) and the Data Terminal Equipment
(DTE, a.k.a. the host).
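For reference, those six signals map onto the usual RS-232/V.24 names and
DB25 pins, as seen from the DTE side (this is the standard convention,
written as a C header for compactness):

    enum rs232_pin {
        PIN_TXD = 2,   /* Data XMT   (output) */
        PIN_RXD = 3,   /* Data RCV   (input)  */
        PIN_RTS = 4,   /* I'm Ready  (output) */
        PIN_CTS = 5,   /* He's Ready (input)  */
        PIN_DSR = 6,   /* He's Alive (input)  */
        PIN_GND = 7,   /* reference (signal) ground */
        PIN_DTR = 20   /* I'm Alive  (output) */
    };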
BTW: ECMA and Western Electric (WE) did specify both signal names and the
connectors to use, and I think the European telcos did also (but I only ever
saw a German spec, and it was in German, which I do not read). DCEs are
supposed to be sockets (female, and originally DB25) and DTEs were supposed
to be plugs (male). If you actually follow the spec properly, you never have
issues. The problem was that a number of terminal manufacturers used the
wrong sex connector, as a lot of them never read the specs (the most famous
being the Lear Siegler ADM3, which was socketed like a DCE but pinned as a
DTE - probably because it was cheap, and it thus became very popular).
Also, where it got all weird and confusing was how the DCEs handled
answering the phone. And it was because of answering that we got all
the messy stuff like Ring, Data Set/Terminal Ready, Carrier, and the like.
The DCE manufacturers in different countries had answer protocols
different from the original Bell System's. IIRC the original WE103 needed
help from the host; support for auto-answer was a later feature of the WE212.
The different protocols are all laid out well in another late-60s/early-70s
data comm book by a UMass prof whose name I now forget (his book is
white with black letters, called 'Data Communications', as opposed to
McNamara's dark blue DEC Press book, 'Principles of Data Communications').
So ... coming back to Unix for this list ... AT&T owned Western Electric
(WE), which was the largest US manufacturer of DCEs [see the 1949 lawsuit/the
consent decree et al.]. In fact, at Bell Labs a lot of people using UNIX did
not have 'hardwired terminals' - they had a modem connection. So, as a
result, Unix as a system tends to support DTE/DCE in the manner WE intended.
It's not surprising that we see it in the solutions that are there.
Back in 1980 or 1981, when I first started hacking
on UNIX but still had some TOPS-10 DNA lingering in
my blood, I put in a really simple control-T
implementation. Control-T became a new signal-
generating character in the tty driver; it sent
signal 16. Unlike interrupt and quit, it did not
flush input or output buffers. Unlike any other
signal, SIG_DFL caused the signal to be silently
ignored. (I don't remember why I didn't just teach
/etc/init and login to set that signal to SIG_IGN
by default; maybe I looked and found too many other
programs that monkeyed with every signal, maybe I
just didn't think of it.)
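The idea survives in BSD-derived systems as SIGINFO, still bound to
control-T. A minimal modern sketch of a program reporting on itself when
poked (an illustration, not the code described here; SIGINFO does not
exist on Linux):

    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t info_wanted;

    static void on_info(int sig)
    {
        (void)sig;
        info_wanted = 1;    /* just note it; report from the main loop */
    }

    int main(void)
    {
    #ifdef SIGINFO
        signal(SIGINFO, on_info);   /* ^T on BSD-derived systems */
    #endif
        for (long i = 0; i < 2000000000L; i++) {  /* stand-in for real work */
            if (info_wanted) {
                info_wanted = 0;
                fprintf(stderr, "iteration %ld\n", i);
            }
        }
        return 0;
    }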
I then wrote a little program meant to be run in the
background from .profile, that dug around in /dev/kmem,
figured out the likely nearest-to-foreground process
associated with the same terminal, and printed a little
status info for that process.
It didn't take long for the remaining TOPS-10 DNA to
leach away, and besides it is much easier to run some
program in another window now that that is almost always
possible, so I don't miss it. But I like that idea
better than, in effect, hacking a mini-ps into the kernel,
even though the kernel doesn't have to do as much work
to get the data.
I also thought it made more sense to have a general
mechanism that could be used for other things. That
even happened once. The systems I ran were used, among
other things, for developing SMP, the symbolic-manipulation
interpreter worked on by Stephen Wolfram, Geoffrey Fox,
Chris Cole, and a host of graduate and undergraduate students.
(My memory of who deserves credit differs somewhat from
that of at least one person named.) SMP, by its nature,
sometimes had to spend a substantial time sitting and
computing. Someone (probably Wolfram, says my aging
memory) heard about the control-T stuff, asked me how
to use it, and added code to SMP so that during a long
computation control-T would tell you something about
what it was doing and how it was progressing.
Since the signal was, like interrupt and kill, sent
to the whole process group, there was no conflict if
you also had my little control-T monitor running in
the background.
I never tried to send my hacked-up UNIX to anyone else,
so if anyone else did the same sort of control-T hack,
they likely invented it independently.
Norman Wilson
Toronto ON
> From: Clem Cole
> Tops-20 or TENEX (aka Twin-Ex).
ISTR the nickname we used was 'TWENEX'?
BTW, the first 20 at MIT (MIT-XX) had a 'Dos Equis' label prominently stuck to
it... :-)
Noel
> From: Dave Horsfall
> I have a clear recollection that UNSW's driver (or was it Basser?) did
> not use interrupts .. but used the clock interrupt to empty the silos
> every so often. I'd check the source in the Unix Archive, but I don't
> remember which disk image it's in ... Can anyone confirm or deny this?
I found this one:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=AUSAM/sys/dmr/dz.c
which seems to be the one you're thinking of, or close to it.
It actually does use interrupts, on both sides - sort of. On the input side,
it uses the 'silo alarm', which interrupts when the input buffer has 16
characters in it. This has the same issue as the silo on the DH11 - if there
are fewer characters than that waiting, the host never gets an interrupt. Which
may be why it also does the timer-based input check?
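The input technique, in outline (my paraphrase in C with invented names,
not the actual dz.c code):

    #define DZ_RDONE 0x0080              /* "character available" bit (invented) */

    extern void tty_input(int c);        /* hypothetical upcall to the tty layer */

    /* Called from both the silo-alarm interrupt and a clock-driven poll.
     * The timer pass is what picks up the stragglers when fewer than 16
     * characters arrive and the alarm never fires. */
    static void dz_drain(volatile unsigned short *csr,
                         volatile unsigned short *rbuf)
    {
        while (*csr & DZ_RDONE) {
            unsigned short w = *rbuf;    /* reading pops the silo */
            tty_input(w & 0377);
        }
    }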
The output side is entirely interrupt driven; it _does_ reduce the number of
interrupts by checking _every_ output line (on every DZ11 in the machine) to
see if that line's ready for a character whenever it gets any output
interrupt, which seriously cuts the number of output interrupts - but
even then, if _one_ line is going flat out, that's still 1000 interrupts per
second.
Noel
Arthur Krewat:
On 5/29/2018 4:11 PM, Dan Cross wrote:
> "I don't always use computers, but when I do, I prefer PDP-10s."
>
>     - Dan C.
Write-in for President in 2020.
===
Only if he's Not Insane.
Norman Wilson
Toronto ON
Long ago, at Caltech, I ran a VAX which had a mix of DZ11s and
Able DH/DMs. The latter were indeed much more pleasant, both
because of their DMA output and their fuller modem control.
For the DZ11s we used a scheme that originated somewhere in
the USG-UNIX world: output was handled through a KMC11.
Output interrupts were disabled on the DZ; a program
running in the KMC fetched output data with DMA, then
spoon-fed it into the DZ, polling the status register
to see when it was OK to send more, then sending an
interrupt to the host when the entire data block had
been sent.
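In outline, the KMC side of the scheme looked something like this (rendered
as C for readability; the real thing was KMC code, and all names here are
invented):

    enum { DZ_TRDY = 0x8000 };          /* transmit-ready bit (invented) */

    extern void interrupt_host(void);   /* hypothetical: one host interrupt per block */

    /* Spoon-feed a DMA'd block into the DZ, polling instead of taking
     * a per-character interrupt; the host hears about it exactly once,
     * when the whole block has gone out. */
    static void kmc_feed(volatile unsigned short *dz_csr,
                         volatile unsigned short *dz_tdr,
                         const unsigned char *buf, int len)
    {
        for (int i = 0; i < len; i++) {
            while (!(*dz_csr & DZ_TRDY))
                ;                       /* busy-wait: cheap on the KMC */
            *dz_tdr = buf[i];
        }
        interrupt_host();
    }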
The KMC, for those fortunate enough not to have programmed
it, has a very simple processor sort of on the level of
microcode. It has a few specialized registers and a
simple instruction set which I don't think had all four
of the usual arithmetic ops. I had to debug the KMC
program so I had to learn about it.
When I moved to Bell Labs a few years later, we didn't
need the KMC to drive serial lines--Dennis's stream I/O
code was able to do that smoothly enough, even with DZ11s
on a VAX-11/750 (and without any assembly-language help
either!). But my experience with the KMC was useful
anyway: we used them as protocol-offload processors for
Datakit, and that code needed occasional debugging and
tweaking too. Bill Marshall had looked after that stuff
in the past, but was happy to have someone else who
wasn't scared of it.
Norman Wilson
Toronto ON
> From: Paul Winalski
> DZ11s ... the controller had no buffer
Huh? The DZ11 did have an input buffer. (See the 'terminals and communications
handbook', 1978-79 edition, page 2-238: "As each character is received ...
the data bits are placed ... in a .. 64-word deep first-in/first-out hardware
buffer, called a 'silo'.")
Or did you mean output:
> if you were doing timesharing it could bring the CPU to its knees in
> short order
The thing that killed an OS was the fact that output was programmed I/O, a
character at a time; using interrupt-driven operation, it took an interrupt
per character. So for a 9600 baud line, 9 bits/character (1 start + 7 data + 1
stop - depending on the line configuration), that's about 1000 characters per
second -> 1000 interrupts per second.
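To make the arithmetic explicit (a throwaway calculation using the 9-bit
frame above):

    #include <stdio.h>

    int main(void)
    {
        int baud = 9600;
        int bits_per_char = 9;  /* 1 start + 7 data + 1 stop */
        /* about 1066 characters -- and interrupts -- per second */
        printf("%d chars/sec\n", baud / bits_per_char);
        return 0;
    }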
The DH11 used DMA for output, and was much easier on the machine.
Noel
Lars Brinkhoff <lars(a)nocrew.org> reports on Mon, 28 May 2018 10:31:56 +0000:
>> But apparently the inspiration came from VMS:
>> http://web.archive.org/web/20170527120123/http://www.unixtop.org:80/about.s…
That link contains the statement
>> The first version of top was completed in the early part of 1984.
However, on TOPS-20, which was developed several years before VMS, but
still by the same corporation, we had the sysdpy utility, which
produced a display similar to top's.
From my source archives, I find in score/4-utilities/sysdpy.mac the
ending comments:
;462 - DON'T DO A RLJFN AFTER A CLOSF IN NEWDPY
;<4.UTILITIES>SYSDPY.MAC.58, 2-Jun-79 14:15:54, EDIT BY DBELL
;461 - START USING STANDARD TOPS-20 EDIT HISTORY CONVENTIONS, AND
; REMOVE OLD EDIT HISTORY.
...
;COPYRIGHT (C) 1976,1977,1978,1979 BY DIGITAL EQUIPMENT CORPORATION, MAYNARD, MASS.
I therefore expect that there was a 460-entry list of log messages that
predated 2-Jun-1979, and likely went back a few years. Two other
versions of sysdpy.mac in my archives have also dropped log messages
before 461.
Even before TOPS-20, on the CDC 6400 SCOPE operating system, there was
a similar tool (whose name I no longer recall) that gave a
continuously updated display of system-wide process activity. That was
available in at least late 1973.
I suspect that top-like displays were added to most other interactive
operating systems, as soon as screen terminals made updates convenient
without wasting console paper. One of the first questions likely to
be asked by interactive users is "what is my job doing?".
In a TOPS-20 terminal window, you could type Ctl-T to get a one-line
status report for the job that was currently running from your
terminal. For many users, that was preferable to sysdpy, and it was
heavily used.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> From: Lars Brinkhoff
> I'm surprised it appeared that late. Were there any other versions or
> similar Unix programs before that?
The MIT ~PWB1 system had a thing called 'dpy', I think written at MIT based on
'ps' (and no doubt inspired by ITS' PEEK), which had similar functionality.
Seems like it never escaped, though. Man page and source here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man1/dpy.1
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/dpy.c
The top of my hard-copy man page says 'November 1977', but I suspect it dates
back further than that.
Noel
On this day in 1936, notable mathematician Alan Turing submitted his
paper "On Computable Numbers", thereby laying the foundations for today's
computers.
Sigh; if only he hadn't eaten that apple... And we'll never know whether
it was murder or suicide.
-- Dave