I've assembled some notes from old manuals and other sources
on the formats used for on-disk file systems through the
Seventh Edition:
http://www.cita.utoronto.ca/~norman/old-unix/old-fs.html
Additional notes, comments on style, and whatnot are welcome.
(It may be sensible to send anything in the last two categories
directly to me, rather than to the whole list.)
----- Forwarded message from meljmel-unix(a)yahoo.com -----
Warren,
Thanks for your help. To my amazement in one day I received
8 requests for the documents you posted on the TUHS mailing
list for me. If you think it's appropriate you can post that
everything has been claimed. I will be mailing the Unix TMs
and other papers to Robert Swierczek <rmswierczek(a)gmail.com>
who said he will scan any one-of-a-kind items and make them
available to you and TUHS. The manuals/books will be going
to someone else who very much wanted them.
Mel
----- End forwarded message -----
On 31.05.18 04:00, Pete Turnbull <pete(a)dunnington.plus.com> wrote:
> On 30/05/2018 23:10, Johnny Billquist wrote:
>> On 30.05.18 02:19, Clem cole wrote:
>>> 1). If you grab ^s/^q from the alphabet it means you can send raw 8
>>> bit data because they are valid characters (yes you can use escape
>>> chars but it gets very bad)
>> But this has nothing to do with 8 bit data.
> [...]
>> What you are talking about is simply the problem when you have inband
>> signalling, and is a totally different problem than 8bit data.
> True, but what I believe Clem meant was "binary data". He did write
> "raw data".
Clem originally said:
"And the other issue was besides not enough buffering DZ11s and the TU58
assumes software (^S/^Q) for the flow control (DH11s with the DM11
option has hardware (RTS/CTS) to control the I/O rate. So this of
course made 8bit data diificult too."
And that is what I objected to. If we're talking about just getting data
through a communications channel cleanly, then obviously inband
signalling like XON/XOFF is a problem, but that was not what Clem claimed.
By the way, while I'm on this topic: the TU58 (aka DECtape II - curse
that name) initially did have overrun problems, and that was because the
protocol did not have proper handshaking. There is flow control when the
TU58 sends data to the host, but there is no handshaking, which
amplifies the problem with flow control on that device. The protocol the
TU58 uses is called RSP, for Radial Serial Protocol. DEC realized the
problem and modified the protocol; the result was called MRSP, or
Modified Radial Serial Protocol, which addressed the specific problem of
handshaking when sending data to the host.
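To make the distinction concrete, here is a C-flavored sketch of the two
schemes; the helper names and record handling are purely illustrative and
are not the actual RSP/MRSP message formats:

    /* With in-band flow control only: the device streams bytes and merely
     * listens for XOFF.  If the host notices too late, or the XOFF itself
     * is delayed, the bytes already in flight still arrive and can overrun
     * a small receive buffer.  (All helpers here are hypothetical.)
     */
    extern int  host_sent_xoff(void);
    extern void wait_for_xon(void);
    extern void wait_for_host_continue(void);  /* "send me the next record" */
    extern void uart_put(unsigned char c);

    void send_record_flow_control_only(const unsigned char *rec, int len)
    {
        for (int i = 0; i < len; i++) {
            if (host_sent_xoff())       /* may already be too late */
                wait_for_xon();
            uart_put(rec[i]);
        }
    }

    /* With per-record handshaking: the device does not start the next
     * record until the host explicitly asks for it, so the host's buffer
     * state, not timing luck, sets the pace.
     */
    void send_record_with_handshake(const unsigned char *rec, int len)
    {
        wait_for_host_continue();
        for (int i = 0; i < len; i++)
            uart_put(rec[i]);
    }
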
But in addition to the lack of handshaking, the TU58 is normally
expected to be connected to a DL11, and the DL11 is a truly stupid,
simple device that only has a one-character buffer. So that interface can
easily drop characters on reception, flow control or not. The TU58
is not really to blame, and if you have an updated TU58 which uses MRSP,
you are better off.
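For a feel of the timing budget with a one-character buffer, a quick
back-of-the-envelope calculation (assuming the usual 10 bit times per
character):

    #include <stdio.h>

    /* With a one-character receive buffer, the host must take its receive
     * interrupt and read the character before the next one finishes
     * arriving, i.e. within one character time, or data is lost no matter
     * what the flow control says.
     */
    int main(void)
    {
        int bauds[] = { 300, 1200, 9600, 38400 };
        for (int i = 0; i < 4; i++) {
            double char_time_ms = 10.0 * 1000.0 / bauds[i]; /* 10 bits/char */
            printf("%6d baud: RX must be serviced within %.2f ms\n",
                   bauds[i], char_time_ms);
        }
        return 0;
    }
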
>> RTS/CTS signalling is certainly a part of the standard, but it is not
>> meant for flow control. It is for half duplex and the way to signal when
>> you want to change the direction of communication.
>>
>> It is meant for*half duplex* communication. Not flow control.
> Actually, for both, explicitly in the standard. The standard (which I
> have in front of me, RS232C August 1969 reaffirmed June 1981) does
> indeed explain at length how to use it for half duplex turnaround but
> also indicates how to use it for flow control. It states the use on
> one-way channels and full-duplex channels to stop transmission in a
> paragraph before the paragraph discussing half duplex.
Using RTS/CTS for flow control did eventually make it into the
standard, but this only happened around 1990 with RS-232-E. See
https://groups.google.com/forum/#!original/comp.dcom.modems/iOZRZkTKc-o/Zcd…
for some background story on that.
And that obviously is way after DEC was making any of these serial
interfaces.
Wikipedia has a pretty decent explanation of how these signals were
defined back in the day:
"The RTS and CTS signals were originally defined for use with
half-duplex (one direction at a time) modems such as the Bell 202. These
modems disable their transmitters when not required and must transmit a
synchronization preamble to the receiver when they are re-enabled. The
DTE asserts RTS to indicate a desire to transmit to the DCE, and in
response the DCE asserts CTS to grant permission, once synchronization
with the DCE at the far end is achieved. Such modems are no longer in
common use. There is no corresponding signal that the DTE could use to
temporarily halt incoming data from the DCE. Thus RS-232's use of the
RTS and CTS signals, per the older versions of the standard, is asymmetric."
See Wikipedia itself if you want to get even more signals explained.
If you have a copy of the 1969 standards document, could you share it? I
tried to locate a copy just now, but failed. I know I've read it in the
past, but have no recollection of where I had it then, or where I got it
from.
>> The AT "standard" was never actually a standard. It's only a defacto
>> standard, but is just something Hayes did, as I'm sure you know.
> Except for ITU-T V.250, which admittedly is a Recommendation not a Standard.
This also becomes a question of whom you accept as an authority to
establish standards. One could certainly, in one sense, say that the Hayes
AT commands were a standard. But all of that also happened long after
DEC made these interfaces, and as such is also pretty irrelevant. The
link above to the 1990 Usenet post is from the standards
representative at Hayes, by the way. So they were certainly working
with the standards bodies to establish how to do things.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2018-05-29 04:00, Clem Cole <clemc(a)ccc.com> wrote:
>
> And the other issue was, besides not enough buffering, DZ11s and the TU58 assume software (^S/^Q) flow control (DH11s with the DM11 option have hardware (RTS/CTS) to control the I/O rate). So this of course made 8-bit data difficult too. And as Paul points out, with HW flow the priority could have been lower. But the HW folks assumed a so-called 3-wire interface because they thought they were saving money.
Uh? Say what? What does XON/XOFF flow control have to do with 8-bit data?
And no, DEC was not trying to save money. They were actually sticking to
the standard - something very few companies managed to do, especially
when it came to the RS-232 standard.
The "hardware flow control" is actually an abuse of the half duplex
signalling.
And it does not necessarily work as you might expect if modems are in
the middle.
You could get by with just three wires on serial ports, but DEC usually
did want people to have more wires connected, in order to detect when a
device was connected or not. But no, they did not abuse the half-duplex
control to do hardware flow control. The DZ11 also has partial modem
control. You cannot run it in half-duplex mode, and it does not
support the secondary channel stuff, but it does support more than just
the 3 wires.
> A few years later when people tried to put high speed modems like the Trailblazers on them, let’s say the Unix folks quickly abandoned any hope. Ken O’Humundro at Able Computer has quite a business in his ‘DHDM’ board that was cheaper than the DZ, more ports, full DMA and of course full hardware flow control.
Yes. It's essentially a DH11 with a partial DM11 on just one board,
while the original DH11/DM11 was a full 9-slot backplane. So it was a
really good alternative. The DH11 is much better than the DZ11, but did
in fact cost more. The Able board gave the DH11 capabilities at a much
lower cost, while taking much less space in the box. And it actually
performed better than the DH11, though it did not support the full
functionality of the DM11; what was lacking was the half-duplex
stuff, which no one really cared about anymore anyway.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On Tue, May 29, 2018 at 9:10 PM, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Wed, 30 May 2018, Johnny Billquist wrote:
>
> Uh? Say what? What does XON/XOFF flow control have to do with 8-bit data?
>>
>
> Err, how do you send data that happens to be XON/XOFF? By futzing around
> with DLE (which I've never seen used)?
Right - you have to escape things and it is a real mess. I have seen some
of the SI/SO/DLE stuff in mechanical systems like ticker tape. I never saw
a real 8-bit interface try to do it and succeed -- it's messy, and I suspect
that when data overruns occur all hell breaks loose. By the time of 8-bit
data and 9.6K speeds, protocols like UUCP or IP over serial did not even
bother. I suspect that in the old 5-bit Baudot days it was more popular as
a way to get a larger character set, but the speeds were much slower (and
the UART had not yet been invented by Gordon Bell).
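As a minimal sketch of what that escaping involves (the exact convention
here - DLE plus a modified byte - is illustrative rather than any
particular protocol's):

    #include <stddef.h>

    #define XON  0x11   /* ^Q */
    #define XOFF 0x13   /* ^S */
    #define DLE  0x10

    /* Byte-stuff a buffer so that XON, XOFF and DLE never appear in the
     * data stream; the receiver reverses the transform.  'out' must have
     * room for up to 2*len bytes.  Returns the stuffed length.
     */
    size_t dle_stuff(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            if (in[i] == XON || in[i] == XOFF || in[i] == DLE) {
                out[o++] = DLE;
                out[o++] = (unsigned char)(in[i] ^ 0x20); /* harmless stand-in */
            } else {
                out[o++] = in[i];
            }
        }
        return o;
    }
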
By the time of the PDP-11, hardware flow control was the norm in many (most)
interfaces. I never understood why DEC got ^S/^Q happy. You can really
see the issues with it on my PiDP-8 running DOS/8, because the flow control
is done by the OS and it is much too late to be able to react. So you get
screens full of data before the ^S is processed.
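Rough numbers for why OS-level ^S handling reacts so late (the reaction
delay and queue size below are assumptions, just to show the scale):

    #include <stdio.h>

    int main(void)
    {
        double cps     = 9600.0 / 10.0; /* ~960 characters/second at 9600 baud */
        double react_s = 0.5;           /* assumed delay before the OS acts on ^S */
        int    queued  = 2048;          /* assumed output already committed (bytes) */

        double overshoot = queued + react_s * cps;
        printf("roughly %.0f characters (about %.1f 24x80 screens) "
               "still arrive after the ^S\n",
               overshoot, overshoot / (24.0 * 80.0));
        return 0;
    }
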
Once Gordon created the UART, Western Digital licensed it and made it into
a chip which most folks used. DEC started to use the WD chipset in PDP-8
serial interfaces - it always struck me as strange that HW flow control was
not used more. The KL11/DL11 supported the wires, although the SW had to do
the work. As I said, MacNamara supported it in the DH11; if I recall (I
don't have the prints any more), he put an AND gate on the interrupt
bit, so the OS does not get a transfer-complete interrupt unless the "I'm
ready" signal is asserted from the other side.
When I lectured data com to folks years ago, I liked to express the ECMA
serial interface in this manner:
There are 6 signal wires and one reference (signal) ground:
1. Data XMT (output)
2. Data RCV (input)
3. I'm Alive (output)
4. He's Alive (input)
5. I'm Ready (output)
6. He's Ready (input)
Everything else is either extra protocol to solve some other problem, or is
for signal quality (i.e. shielding, line balancing, etc.). The names of
which signals match which actual pins on the ECMA interface can be a
little different depending on the manufacturer of the Data Communications
Equipment (DCE - a.k.a. the modem) and the Data Terminal Equipment
(DTE - a.k.a. the host).
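Written down as an enum, with the usual RS-232 circuit names; the mapping
and the DB25 pin numbers shown are the conventional DTE assignments as I
understand them, and as noted above they can differ between manufacturers:

    enum serial_signal {
        DATA_XMT,    /* 1. Data XMT   (output) - TxD, typically DB25 pin 2  */
        DATA_RCV,    /* 2. Data RCV   (input)  - RxD, typically DB25 pin 3  */
        IM_ALIVE,    /* 3. I'm Alive  (output) - DTR, typically DB25 pin 20 */
        HES_ALIVE,   /* 4. He's Alive (input)  - DSR, typically DB25 pin 6  */
        IM_READY,    /* 5. I'm Ready  (output) - RTS, typically DB25 pin 4  */
        HES_READY    /* 6. He's Ready (input)  - CTS, typically DB25 pin 5  */
    };
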
BTW: ECMA and WE did specify both signal names and the connectors to
use, and I think the European telcos did also (but I only ever saw a German
spec, and it was in German, which I do not read). DCEs are supposed to be
sockets (female, and originally DB25s) and DTEs were supposed to be plugs
(male). If you actually follow the spec properly, you never have issues.
The problem was that a number of terminal manufacturers used the wrong sex
connector, as a lot of them never read the specs (the most famous being the
Lear Siegler ADM3, which was socketed like a DCE but pinned as a DTE -
probably because it was cheap, and it thus became very popular).
Also, to confuse things, where it got all weird was how the DCEs handled
answering the phone. And it was because of answering that we got all
the messy stuff like Ring, Data Set/Terminal Ready, Carrier and the like.
The different DCE manufacturers in different countries had answer
protocols that differed from the original Bell System's. IIRC the original
WE103 needed help from the host; support for auto-answer was a later
feature of the WE212. The different protocols are all laid out well in
another late-60s/early-70s data com book from a UMass prof whose name I
now forget (his book is white with black letters, called 'Data
Communications', as opposed to MacNamara's dark blue DEC Press book
'Principles of Data Communications').
So ... coming back to Unix for this list ... AT&T owned Western Electric
(WE), which was the largest US manufacturer of DCEs [see the 1949
lawsuit/the consent decree et al.]. In fact, at Bell Labs a lot of people
using UNIX did not have 'hardwired terminals' - they had a modem
connection. So, as a result, Unix as a system tends to have support for
DTE/DCE in the manner WE intended it to be used. It's not surprising that
we see it in the solutions that are there.
Back in 1980 or 1981, when I first started hacking
on UNIX but still had some TOPS-10 DNA lingering in
my blood, I put in a really simple control-T
implementation. Control-T became a new signal-
generating character in the tty driver; it sent
signal 16. Unlike interrupt and quit, it did not
flush input or output buffers. Unlike any other
signal, SIG_DFL caused the signal to be silently
ignored. (I don't remember why I didn't just teach
/etc/init and login to set that signal to SIG_IGN
by default; maybe I looked and found too many other
programs that monkeyed with every signal, maybe I
just didn't think of it.)
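A minimal sketch of what the tty-driver side of that amounts to, in the
style of a V7-era driver (names like SIGSTATUS, tty_pgrp() and the exact
placement are illustrative, not the original change):

    #define CTRL(c)    ((c) & 037)
    #define SIGSTATUS  16           /* the new control-T signal */

    struct tty;                     /* kernel tty structure, abridged */
    extern short tty_pgrp(struct tty *tp);   /* stand-in for tp->t_pgrp   */
    extern void  signal(int pgrp, int sig);  /* kernel "signal the group" */

    /* Called from the input path for each received character. */
    int ttyin_special(int c, struct tty *tp)
    {
        if (c == CTRL('t')) {
            signal(tty_pgrp(tp), SIGSTATUS);
            /* unlike DEL/quit, no flushtty() - buffers stay intact */
            return 1;               /* character consumed */
        }
        return 0;                   /* normal input processing continues */
    }
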
I then wrote a little program meant to be run in the
background from .profile, that dug around in /dev/kmem,
figured out what was likely nearest-to-foreground process
associated with the same terminal, and printed a little
status info for that process.
It didn't take long for the remaining TOPS-10 DNA to
leach away, and besides it is much easier to run some
program in another window now that that is almost always
possible, so I don't miss it. But I like that idea
better than, in effect, hacking a mini-ps into the kernel,
even though the kernel doesn't have to do as much work
to get the data.
I also thought it made more sense to have a general
mechanism that could be used for other things. That
even happened once. The systems I ran were used, among
other things, for developing SMP, the symbolic-manipulation
interpreter worked on by Stephen Wolfram, Geoffrey Fox,
Chris Cole, and a host of graduate and undergraduate students.
(My memory of who deserves credit differs somewhat from
that of at least one person named.) SMP, by its nature,
sometimes had to spend a substantial time sitting and
computing. Someone (probably Wolfram, says my aging
memory) heard about the control-T stuff, asked me how
to use it, and added code to SMP so that during a long
computation control-T would tell you something about
what it was doing and how it was progressing.
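On the receiving end, a program only had to catch the signal and print
something. A small sketch of the idea (signal number 16, the counter, and
the busy loop are illustrative; on a modern system you would poke it with
kill -16 from another window):

    #include <signal.h>
    #include <stdio.h>

    #define SIGSTATUS 16            /* the locally added control-T signal */

    static volatile long steps_done;

    static void report(int sig)
    {
        (void)sig;
        /* printf in a handler is not async-signal-safe, but fine for a sketch */
        printf("still computing: %ld steps done\n", (long)steps_done);
        signal(SIGSTATUS, report);  /* old-style signals had to be re-armed */
    }

    int main(void)
    {
        signal(SIGSTATUS, report);
        for (steps_done = 0; steps_done < 100000000L; steps_done++)
            ;                       /* stand-in for a long computation */
        printf("done\n");
        return 0;
    }
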
Since the signal was, like interrupt and kill, sent
to the whole process group, there was no conflict if
you also had my little control-T monitor running in
the background.
I never tried to send my hacked-up UNIX to anyone else,
so if anyone else did the same sort of control-T hack,
they likely invented it independently.
Norman Wilson
Toronto ON
> From: Clem Cole
> Tops-20 or TENEX (aka Twin-Ex).
ISTR the nickname we used was 'TWENEX'?
BTW, the first 20 at MIT (MIT-XX) had a 'Dos Equis' label prominently stuck to
it... :-)
Noel
> From: Dave Horsfall
> I have a clear recollection that UNSW's driver (or was it Basser?) did
> not use interrupts .. but used the clock interrupt to empty the silos
> every so often. I'd check the source in the Unix Archive, but I don't
> remember which disk image it's in ... Can anyone confirm or deny this?
I found this one:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=AUSAM/sys/dmr/dz.c
which seems to be the one you're thinking of, or close to it.
It actually does use interrupts, on both sides - sort of. On the input side,
it uses the 'silo alarm', which interrupts when the input buffer has 16
characters in it. This has the same issue as the silo on the DH11 - if there
are fewer characters than that waiting, the host never gets an interrupt,
which may be why it does the timer-based input check as well.
The output side is entirely interrupt driven; it _does_ cut down on
interrupts by checking _every_ output line (on every DZ11 in the machine)
to see if that line is ready for a character whenever it gets any output
interrupt, which seriously reduces the number of output interrupts - but
even then, if _one_ line is going flat out, that's still about 1000
interrupts per second.
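Condensed into a sketch, the output-interrupt strategy looks roughly like
this (this is not the actual AUSAM dz.c; the helper names and register
layout are simplified stand-ins):

    #define NDZ    2                /* DZ11 boards in the machine (example) */
    #define NLINE  8                /* lines per DZ11 */

    extern int  line_tx_ready(int board, int line);     /* CSR check, simplified */
    extern int  next_output_char(int board, int line);  /* -1 if queue is empty  */
    extern void put_tx_char(int board, int line, int c);/* write the TDR         */

    /* On ANY transmit interrupt, sweep every line on every board and feed
     * a character to each one that is ready, instead of taking one
     * interrupt per character per line.
     */
    void dzxint(void)
    {
        for (int b = 0; b < NDZ; b++)
            for (int l = 0; l < NLINE; l++)
                if (line_tx_ready(b, l)) {
                    int c = next_output_char(b, l);
                    if (c >= 0)
                        put_tx_char(b, l, c);
                }
    }
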
Noel
Arthur Krewat:
On 5/29/2018 4:11 PM, Dan Cross wrote:
> "I don't always use computers, but when I do, I prefer PDP-10s."
>
>      - Dan C.
Write-in for President in 2020.
===
Only if he's Not Insane.
Norman Wilson
Toronto ON
Long ago, at Caltech, I ran a VAX which had a mix of DZ11s and
Able DH/DMs. The latter were indeed much more pleasant, both
because of their DMA output and their fuller modem control.
For the DZ11s we used a scheme that originated somewhere in
the USG-UNIX world: output was handled through a KMC11.
Output interrupts were disabled on the DZ; a program
running in the KMC fetched output data with DMA, then
spoon-fed it into the DZ, polling the status register
to see when it was OK to send more, then sending an
interrupt to the host when the entire data block had
been sent.
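The gist of that assist, written as C-flavored pseudocode (the real
program was in the KMC's own primitive instruction set, not C, and all
the names here are illustrative):

    extern int  dma_fetch_block(unsigned char *buf, int maxlen); /* from host memory */
    extern int  dz_tx_ready(int line);        /* poll the DZ11 status register */
    extern void dz_tx_put(int line, unsigned char c);
    extern void interrupt_host(void);         /* "block complete" doorbell */

    void kmc_output_pump(int line)
    {
        unsigned char buf[64];
        int n;

        while ((n = dma_fetch_block(buf, sizeof buf)) > 0) {
            for (int i = 0; i < n; i++) {
                while (!dz_tx_ready(line))
                    ;                          /* spoon-feed: busy-wait on the DZ */
                dz_tx_put(line, buf[i]);
            }
            interrupt_host();                  /* one host interrupt per block */
        }
    }
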
The KMC, for those fortunate enough not to have programmed
it, has a very simple processor sort of on the level of
microcode. It has a few specialized registers and a
simple instruction set which I don't think had all four
of the usual arithmetic ops. I had to debug the KMC
program so I had to learn about it.
When I moved to Bell Labs a few years later, we didn't
need the KMC to drive serial lines--Dennis's stream I/O
code was able to do that smoothly enough, even with DZ11s
on a VAX-11/750 (and without any assembly-language help
either!). But my experience with the KMC was useful
anyway: we used them as protocol-offload processors for
Datakit, and that code needed occasional debugging and
tweaking too. Bill Marshall had looked after that stuff
in the past, but was happy to have someone else who
wasn't scared of it.
Norman Wilson
Toronto ON