Larry McVoy:
If you have something like perl that needs a zillion sub pages, info
makes sense. For just a man page, info is horrible.
=====
This pokes me in one of my longest-standing complaints:
Manual entries, as printed by man and once upon a time in
the Programmers' Manual Volume 1, should be concise references.
They are not a place for tutorials or long-winded descriptions
or even long lists of hundreds of options (let alone descriptions
of why the developer thinks this is the neatest thing since
sliced bread and what bread he had in his sandwiches that day).
For many programs, one or two pages of concise reference is
all the documentation that's needed; no one needs a ten-page
tutorial on how to use cat or rm or ls, for example. But some
programs really do deserve a longer treatment, either a tutorial
or an extended reference with more detail or both. Those belong
in separate documents, and are why the Programmers' Manual had
a second volume.
Nowadays people think nothing of writing 68-page-long manual
entries (real example from something I'm working with right now)
that are long, chatty lists of options or configuration-file
directives with tutorial information interspersed. The result
makes the developer feel good--look at all the documentation
I've written!!--but it's useless for someone trying to figure
out how to write a configuration file for the first time, and
not so great even for someone trying to edit an existing one.
Even the Research system didn't always get this right; some
manual entries ran on and on and on when what was really
needed was a concise list of something and a longer accompanying
document. (The Tenth Edition manual was much better about
that, mostly because of all the work Doug put in. I doubt
there has ever been a better editor for technical text than
Doug.) But it's far worse now in most systems, because
there's rarely any editor at all; the manuals are just an
accreted clump.
And that's a shame, though I have no suggestions on how
to fix it.
Norman Wilson
Toronto ON
Clem Cole:
Exactly!!!! That's what Eric did when he wrote more(ucb) - he *added to
Unix*. The funny part was that USG thought more(ucb) was a good idea and
then wrote their own, pg(att), which was just as arrogant as the info
behavior from the Gnu folks!!!
======
And others wrote their own too, of course. The one I know
best is p(1), written by Rob Pike in the late 1970s at the
University of Toronto. I encountered it at Caltech on the
system Rob had set up before leaving for Bell Labs (and
which I cared for and hacked on for the next four years
before following him). By the time I reached BTL it was
a normal part of the Research system; I believe it's in
all of the Eighth, Ninth, and Tenth Edition manuals.
p is interesting because it's so much lighter-weight, and
because it has rather a different interior design:
Rather than doing termcappy things, p just prints 22 lines
(or the number specified in an option), then doesn't print
the newline after the 22nd line. Hit return and it will
print the next 22 lines, and so on. The resulting text just
flows up the glass-tty screen without any fuss, cbreak, or
anything. (I believe the first version predated V7 and
therefore cbreak.)
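A rough sketch of that loop, as I remember the idea (this is my own
reconstruction, not Rob's code; the /dev/tty handling and the sizes
are guesses):

	#include <stdio.h>

	#define PAGE 22

	int main(void)
	{
		FILE *tty = fopen("/dev/tty", "r");	/* commands come from the terminal */
		char cmd[64];
		int c, lines = 0;

		while ((c = getchar()) != EOF) {
			if (c == '\n' && tty != NULL && ++lines == PAGE) {
				/* hold back this newline; the echo of the
				   user's return supplies it, and the next
				   page just flows up the screen */
				fflush(stdout);
				if (fgets(cmd, sizeof cmd, tty) == NULL)
					break;
				lines = 0;
				continue;
			}
			putchar(c);
		}
		return 0;
	}

Run it as, say, man something | p; the real p of course also understood
- and !command at the pause, and took the page size as an option.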
Why 22 lines instead of 24, the common height of glass ttys
back then? Partly because that means you keep a line or two
of context when advancing pages, making reading simpler.
But also because in those days, a standard page destined for
a printer (e.g. from pr or nroff, and therefore from man) was
66 lines long. 22 evenly divides 66, so man something | p
never gives you a screen spanning pages.
p was able to back up: type - (and return) instead of just
return, and it reprints the previous 22-line page; -- (return)
the 22 lines before that; and so on. This was implemented
in an interesting and clever way: a wrapper around the standard
I/O library which kept a circular buffer of some fixed number
of characters (8KiB in early versions, I think), and a new
call that, in effect, backed up the file pointer by one character
and returned the character just backed over. That made it easy
to back over the previous N lines: just make that call until
you've seen N newlines, then discard the newline you've just
backed over, and you're at the beginning of the first line you want
to reprint.
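Something along these lines, I imagine (again a reconstruction; the
names, the 8 KiB figure, and the off-by-one details are my guesses,
not Rob's actual wrapper):

	#include <stdio.h>

	#define HISTSIZE 8192	/* remember roughly the last 8 KiB */

	static unsigned char hist[HISTSIZE];	/* circular history buffer */
	static long nread;	/* total characters ever read */
	static long pos;	/* current logical position, <= nread */

	/* Get the next character: replay from the history if we have
	   backed up, otherwise read fresh and remember it. */
	int hgetc(FILE *fp)
	{
		int c;

		if (pos < nread)
			return hist[pos++ % HISTSIZE];
		if ((c = getc(fp)) == EOF)
			return EOF;
		hist[nread++ % HISTSIZE] = c;
		pos = nread;
		return c;
	}

	/* Back up one character and return the character just backed
	   over, or EOF once the history has been exhausted. */
	int hbackc(void)
	{
		if (pos == 0 || nread - pos >= HISTSIZE)
			return EOF;
		return hist[--pos % HISTSIZE];
	}

	/* Back over the previous n lines, following the recipe above:
	   call hbackc() until n newlines have gone by, then give back
	   the last newline so the position is at the start of a line.
	   (Exactly how many newlines are needed depends on where the
	   pager left the position.) */
	void backlines(int n)
	{
		int c, nl = 0;

		while (nl < n && (c = hbackc()) != EOF)
			if (c == '\n')
				nl++;
		if (nl == n)
			pos++;
	}

The pleasant property is that nothing ever seeks, so it works the same
whether the input is a file or a pipe; the history buffer is all the
memory there is.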
As I vaguely recall, more was able to back up, but only when
reading from a real file, not a pipeline. p could do (limited
but sufficient) backup from a pipeline too.
As a creature of its pre-window-system era, p also let you type
!command when it paused.
I remember being quite interested in that wrapper as a
possible library for use in other things, though I never
found a use for it.
I also remember a wonderful Elements-of-Programming-Style
adventure with Rob's code. I discovered it had a bug under some
specific case when read returned less than a full bufferful.
I read the code carefully and couldn't see what was wrong.
So I wrote my own replacement for the problematic subroutine
from scratch, tested it carefully in corner cases, then with
listings of Rob's code and mine side-by-side walked through
each with the problem case and found the bug.
I still carry my own version of p (rewritten from scratch mainly
to make it more portable--Rob's code was old enough to be too
clever in some details) wherever I go; ironically, even back to
U of T where I have been on and off for the past 30 years.
more and less and pg and the like are certainly useful programs;
for various reasons they're not to my taste, but I don't scorn
them. But I can't help being particularly fond of p because it
taught me a few things about programming too.
Norman Wilson
Toronto ON
KatolaZ:
> We can discuss whether the split was necessary or "right" in the first
> instance, as we could discuss whether it was good or not for cat(1) to
> leave Murray Hill in 1979 with no options and come back from Berkeley
> with its source code doubled in size and 9 options in 1982.
We needn't discuss that (though of course there are opinions and
mine are the correct ones), but in the interest of historical accuracy,
I should point out that by 1979 (V7) cat had developed a single option, -u,
to turn off stdio buffering.
Sometime before 1984 or so, that option was removed, and cat was
simplified to just
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(1, buf, n);
(error checking elided for clarity)
which worked just fine for the rest of the life of the Research
system.
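For concreteness, a self-contained program along those lines might
look roughly like this (my own sketch of the idea, not the actual
Research source; the buffer size and error handling are arbitrary):

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>

	/* Copy one descriptor to standard output. */
	static int copy(int fd)
	{
		char buf[8192];
		ssize_t n;

		while ((n = read(fd, buf, sizeof buf)) > 0)
			if (write(1, buf, n) != n)
				return 1;
		return n < 0;	/* nonzero on read error */
	}

	int main(int argc, char *argv[])
	{
		int i, fd, status = 0;

		if (argc == 1)
			return copy(0);
		for (i = 1; i < argc; i++) {
			if ((fd = open(argv[i], O_RDONLY)) < 0) {
				perror(argv[i]);
				status = 1;
				continue;
			}
			status |= copy(fd);
			close(fd);
		}
		return status;
	}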
So it's true that BSD added needless (in my humble but correct
opinion) options, but not that it had none before they touched it.
Unless all those other programs were stuffed into cat in an earlier
Berkeley system, but I don't think they were.
Norman Wilson
Toronto ON
(Three cats, no options)
Arthur Krewat:
Which is better, creating a whole new binary to put in /usr/bin to do a
single task, or adding a flag to cat?
Which is better, a proliferation of binaries w/standalone source code,
or a single code tree that can handle slightly different tasks and save
space?
======
Which is simpler to write correctly, to debug, and to maintain:
a simple program that does a single task, or a huge single program
with lots of tasks mashed together?
Which is easier to understand and use, individual programs each
with a few options specialized to a particular task, or a monolith
with many more options, some of which apply only to one task or
another, others to all?
What are you trying to optimize for? The speed with which
programmers can churn out yet another featureful utility full
of bugs and corner cases, or the ease with which the end-user
can figure out what tool to use and how to use it?
Norman Wilson
Toronto ON
I fear we're drifting a bit here and the S/N ratio is dropping w.r.t.
the actual history of Unix. Please no more on the relative merits of
version control systems or alternative text processing systems.
So I'll try to distract you by saying this. I'm sitting on two artifacts
that have recently been given to me:
+ by two large organisations
+ of great significance to Unix history
+ who want me to keep "mum" about them
+ as they are going to make announcements about them soon *
and I am going slowly crazy as I wait for them to be officially released.
Now you have a new topic to talk about :-)
Cheers, Warren
* for some definition of "soon"
OK. I'm totally confused, and I see contradictory information around. So I
thought I'd ask here.
PWB was started to support Unix time sharing at Bell Labs in 1973 (around
V4 time).
PWB 1.0 was released just after V6 "based on" it. PWB 2.0 was released just
after V7, also "based on" it. Later, Unix TS 3.0 would become System III. We
know there was no System I or System II. But was there a Unix TS 1.0 and
2.0? And were they the same thing as PWB 1.0 and 2.0, or somehow just
closely related? And I've seen both Unix/TS and Unix TS. Is there a
preferred spelling?
Thanks for all your help with this topic and sorting things out. It's been
quite helpful for my talk in a few weeks.
Warner
P.S. Would it be inappropriate to solicit feedback on an early version of
my talk from this group? I'm sure they would be rather keener on catching
errors in my understanding of Unix history than just about any other
forum...
Hello TUHS on Tues.,
Warren Toomey suggested I let the group know about a utility that exists at least for iMacs and iOS devices.
It’s called “cathode” and you can find it on the Apple App Store. Please forgive me if this has already been mentioned.
This utility provides an xterm window that looks like the display of an old *tube. You can set the curvature of the glass, the glow, various scan techniques, 9600 speed, and so on.
It adds that extra dimension to give the look and feel of working on early UNIX with a tube.
I would love to see profiles created that match actual ttys. My favorite tube is the Wyse 50. Another one I remember is a Codex model with “slowopen” set in vi.
Remember how early UNIX terminals behaved with slowopen? The characters would overtype during insert mode in vi, but when you hit escape, the line you just clobbered would reappear, shifting the remaining text to the right as appropriate.
Cathode adds a little spice, albeit artificially, to the experience of early UNIX.
Truly,
Bill Corcoran
(*) For the uninitiated, we used to call the tty terminal a “tube.” For example, you might hear my boss say, “Corcoran, that cheese you hacked yesterday launched a runaway that’s now soaking the client’s CPU. Go jump on a free tube and fix it now!”
Noel Chiappa wrote:
> > From: Doug McIlroy
>
> > the absence of multidimensional arrays in C.
>
>?? From the 'C Reference Manual' (no date, but circa 'Typesetter C'), pg. 11:
>
> "If the unadorned declarator D would specify an n-dimensional array .. then
> the declarator D[in+1] yields an (n+1)-dimensional array"
>
>
>I'm not sure if I've _ever_ used one, but they are there.
Yes, C allows arrays of arrays, and I've used them aplenty.
However, an "n-dimensional array" has one favored dimension,
out of which you can slice an array of lower dimension. For
example, you can pass a row of a 2D array to a function of a
1D variable, but you can't pass a column. That asymmetry
underlies my assertion. In Python, by contrast, n-dimensional
arrays can be sliced on any dimension.
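A small illustration of the asymmetry (the function and array names
here are made up for the example):

	#include <stdio.h>

	/* A function of a 1-D argument. */
	double sum(double v[], int n)
	{
		double s = 0;
		for (int i = 0; i < n; i++)
			s += v[i];
		return s;
	}

	int main(void)
	{
		double a[3][4] = {
			{1, 2, 3, 4},
			{5, 6, 7, 8},
			{9, 10, 11, 12},
		};

		/* A row is a contiguous double[4]; it can be passed
		   directly. */
		printf("row 1 sum: %g\n", sum(a[1], 4));

		/* There is no expression that names column 2 as a
		   double[3]; its elements are not contiguous, so they
		   must be copied out (or walked with a stride) by hand. */
		double col[3];
		for (int i = 0; i < 3; i++)
			col[i] = a[i][2];
		printf("col 2 sum: %g\n", sum(col, 3));

		return 0;
	}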
Doug
> Excellent - thanks for the pointer. This shows nroff before troff.
> FWIW: I guess I was misinformed, but I had been under the impression
> that it was the other way around, i.e. nroff was done to be compliant with
> the new troff, replacing roff, although the suggestion here is that he
> wrote it to add macros to roff. I'll note that either way, the dates are all
> possible of course because the U/L case ASR 37 was introduced in 1968, so by
> the early 1970's they would have been around the Labs.
nroff was in v2; troff appeared in v4, which incidentally was
typeset in troff.
Because of Joe Ossanna's role in designing the model 37, we
had 37's in the Labs and even in our homes right from the
start of production. But when they went obsolete, they were
a chore to get rid of. As Labs property, they had to be
returned; and picking them up was nobody's priority.
Andy Hall had one on his back porch for a year.
Doug