> From: Douglas McIlroy
> The union table of man pages in "A Research Unix Reader", CSTR 139,
> marks with "L" local man pages that were not distributed outside of
> research. Does anyone have a copy of those pages? ... would love to get
> them all
The CB-UNIX manual has a bunch of xL "Local" pages, e.g. the Section 3 ones here:
https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/man/man3/
I know this is CB-UNIX, but perhaps they are the same?
Noel
The union table of man pages in "A Research Unix Reader", CSTR 139,
marks with "L" local man pages that were not distributed outside of
research. Does anyone have a copy of those pages? I'm particularly
interested in galloc(3), but would love to get them all; I
unwittingly trashed my copies when I retired.
Doug
> You didn't say, but I reckon this is a survey of man(7) macros that
> might be considered extensions?
My presentation was too cute for my own good. I pointed out the
consistency of xS/xE for various x. I apologized for EX/EE, which
varied from that form (as UR/UE did more recently), and I
questioned OQ/CQ, which utterly breaks it.
The intended point was that one should have a strong rationale
for deviating from established custom and thereby fostering
mental overload.
Doug
I omitted one crucial fact from my post about Joe Ossanna's influence
on the TTY 37. That happened not in connection with Unix, but with
Multics. When Unix came on the scene, model 37 was already in
production.
Doug
I haven't known when or how to bring up this project idea, but I figure I might as well start putting feelers out, since my Dragon Quest project is starting to slow down and I might focus back on UNIX manual work.
So something painfully missing from my library, and I'm sure from plenty of other folks', is a nice, modern paper UNIX manual that takes the past few decades into consideration. The GNU project, the BSDs, and others ship manpages of course, and there are the POSIX manpages, but I'm a sucker for a good print manual. Something I'm thinking of producing as a "deliverable" of sorts from my documentation research is a new-age UNIX manual, derived as closely as possible from the formal UNIX documentation lineages (so Research, SysV, and BSD pages), but:
1. Including subsequent POSIX requirements
2. Including an informational section in each page with a little history and some notes about current implementations, if applicable. This would include notes about "dead on the vine" stuff like things plucked from the CB-UNIX, MERT/PG, and PWB lines. The history part could even be a separate book; that way the manual itself could stay tight and focused. This would also be a good place for luminaries to provide reflections on their involvement in given pieces.
One of the main questions I have in mind is what the legal landscape of producing such a thing would entail. At the very least, to actually call it a UNIX Programmer's Manual, it would probably need to pass some sort of compliance check against the materials The Open Group publishes. That said, the ownership of the IP, as opposed to the trademarks, is a little less certain, so I would be a bit curious who all would be involved in getting copyright approval specifically to publish anything that happened in the commercial line after the early '80s, i.e. new text produced after 1982. I presume anything covered by the Caldera license could at least be published at cost, but not for a profit (which I'm not looking for anyway).
Additionally, if possible, I'd love to run down some authorship information and make sure folks who wrote stuff up over time are properly credited, if not on each page a la OWNER, then at least in an Acknowledgements section in the front.
As far as production goes, I personally would want to do a run with a couple of different cover styles, comb bound: maybe one echoing the original Bell Laboratories UNIX User's Manual cover, complete with the Bell logo, another using the original USENIX Beastie cover, and so on. That, however, brings even more copyrights into the picture to coordinate; especially with the way the Bell logo is currently owned, that could get complicated.
Anywho, does anyone know of any efforts like this? If I actually got such a project going in earnest, would folks find themselves interested in such a publication? In any case, I do intend to start on a typesetter-sources version of this project sometime in the next year or so, but ideally I would want it to blossom into something that could result in some physical media. This idea isn't even half-baked yet, by the way, so just know I don't have a roadmap in place; it's just something I see as a cool potential project over the coming years.
- Matt G.
I got to wondering, based on the sendmail discussions: how many shell
escapes have appeared over the years?
uucp
sendmail
xdvi: "The 'allowShell' option enables the shell escape in PostScript specials"
There must be a lot more of them, though.
When looking at old xmodem code I noticed that it calculated its CRC bit by bit, switching to byte-wise calculation with a lookup table in the late 80’s. It never seems to have used the byte-wise, “on-the-fly” algorithm. This seems to match a pattern: I often come across bit-wise and table implementations, but rarely, if ever, on-the-fly implementations. The on-the-fly algorithm has been known since at least 1983, following a paper by Perez: http://www.bitsavers.org/components/fairchild/_appNotes/Byte-wise_CRC_Jun83…
The paper was noticed; for example, it is on the citation list of RFC 1134, which describes the PPP protocol. Today, a Wikipedia page gives implementations for various polynomials: https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks#Paral…
Now, it would seem to me that on memory-constrained personal computers and PDP-11s the “on-the-fly” algorithm would have been a good choice, being just a few lines of code and maybe 30-50% slower than table lookup. The tables aren’t big, but a kilobyte is a lot when you only have 64.
Any suggestions as to why the on-the-fly algorithm did not catch on more in the 1980’s? Maybe it was simply less well known than I think?
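For reference, here is a minimal sketch in C of the bit-wise method described above, using the XMODEM flavour of CRC-16/CCITT (polynomial 0x1021, initial value 0, no reflection); the function name is illustrative and not taken from any of the code mentioned:

    #include <stddef.h>
    #include <stdint.h>

    /* Bit-wise CRC-16/CCITT as used by XMODEM-CRC: polynomial 0x1021,
     * initial value 0, no reflection. An outer loop walks the bytes,
     * an inner loop handles the 8 bits of each byte. The usual check
     * value for these parameters is 0x31C3 over the ASCII string
     * "123456789". */
    uint16_t crc16_bitwise(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        while (len--) {
            crc ^= (uint16_t)*buf++ << 8;       /* next byte into the top */
            for (int i = 0; i < 8; i++) {
                if (crc & 0x8000)
                    crc = (crc << 1) ^ 0x1021;  /* shift out a 1: XOR poly */
                else
                    crc <<= 1;                  /* shift out a 0 */
            }
        }
        return crc;
    }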
> From: Paul Ruizendaal
> Any suggestions as to why the on-the-fly algorithm did not catch on
> more in the 1980's? Maybe it was simply less well known than I think?
I can't answer that directly, but I will point you at IEN-56, "CRC Checksum
Calculation", by Dave Reed (11-Sep-78):
https://www.rfc-editor.org/ien/ien56.pdf
Dave wanted the INWG to use a more powerful CRC (in terms of detecting
errors) instead of the simple summation eventually adopted in TCP and IP. So he
produced code to implement a particular CRC, to show people that it would not
be particularly expensive (whether in time or in space, I alas don't recall
definitively; speed would have been an important consideration when
competing with the summation, though).
This would have been close to the leading edge of our knowledge at the time;
Dave liked playing around with math, and at about that time he did a very
fast DES implementation.
Noel
Howdy all, just passing along this auction I spotted on eBay: https://www.ebay.com/itm/256216372208
This is a subset of papers from the Documents for UNIX 4.0 set, compiled as a handy volume (and likely a forerunner to the packaged volumes released with System V). Missing is the Typing Documents with MM pamphlet, which is listed in the TOC of these issues. I've already got one of these or I'd scoop it up myself. Enjoy!
- Matt G.
P.S. I have seen this and its companion document (Programming Starter Package) with the AT&T deathstar in the upper right rather than Bell Labs and the Bell logo. What I don't know is whether it's an exact repackaging with just a change of logo, or whether those versions have revisions to any of the papers. Most notably, as far as version association goes, the Programming Starter Package includes the Documentation Roadmap, which, in my Bell Labs-branded copy, indicates it is for use with UNIX 4.0. I'd be curious whether the AT&T variant retains the UNIX 4.0 reference. I'll continue to keep my eye out for an AT&T-branded set; it might reveal some differences.
>> Any suggestions as to why the on-the-fly algorithm did not catch on more in the 1980’s? Maybe it was simply less well known than I think?
>
> Could it have been the per-cpu-second billing that was (fairly?) common at the time? I was only getting into Unix in the early 90s but saw the tail end of that.
Good point, but wasn’t per-cpu-second billing mostly used for big iron? For machines without memory constraints the table method makes the most sense, even if billing was not a factor.
>> Any suggestions as to why the on-the-fly algorithm did not catch on more in the 1980’s? Maybe it was simply less well known than I think?
>
> The CRC algorithm I'm familiar with shows up in Dragon Quest for the Famicom in 1986[1], written in 6502 assembly. Admittedly, though, I only recognized it due to the EOR with 0x1021 on lines 318-323. That in turn I only know from a quick and dirty CRC I threw into an XMODEM-CRC client [2] I did to accommodate a bug in the JH7100 RISC-V board's recovery ROM implementation. Not sure if this is along the lines of the approach you're talking about ...
>
> [1] - https://gitlab.com/segaloco/dq1disasm/-/blob/master/src/chr3/start_pw.s
> [2] - https://gitlab.com/segaloco/riscv-bits/-/blob/master/util/sxj.c
Both of these are what I call the bit-wise method: a loop iterating over bytes, with an inner loop iterating over bits. An example of the table method is here:
https://github.com/u-boot/u-boot/blob/master/lib/crc16-ccitt.c
and an example of the on-the-fly method is here:
https://github.com/tio/tio/blob/master/src/xymodem.c#L44-L54
Note how the latter also has only one loop, iterating over the bytes, but effectively calculates the table entry ‘on the fly’ for each byte. That is only a handful of instructions more than doing the table lookup. Maybe it is a “stuck in the middle” solution that was quickly forgotten.
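To make that concrete, here is a minimal sketch in C of the on-the-fly update for the same XMODEM CRC-16/CCITT parameters (polynomial 0x1021, initial value 0); the exact formulation and the function name are illustrative rather than copied from the code linked above:

    #include <stddef.h>
    #include <stdint.h>

    /* Byte-wise "on the fly" CRC-16/CCITT (XMODEM: polynomial 0x1021,
     * initial value 0). Instead of a 256-entry lookup table, the table
     * entry for each byte is synthesised with a few shifts and XORs;
     * the three shifted terms correspond to the x^12, x^5 and 1 terms
     * of the polynomial x^16 + x^12 + x^5 + 1. */
    uint16_t crc16_on_the_fly(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        while (len--) {
            uint8_t x = (uint8_t)(crc >> 8) ^ *buf++;  /* table index */
            x ^= x >> 4;
            crc = (uint16_t)((crc << 8) ^
                             ((uint16_t)x << 12) ^
                             ((uint16_t)x << 5) ^
                             x);
        }
        return crc;
    }

So there is one loop over the bytes and no table; for these parameters the result matches the bit-wise and table methods (e.g. 0x31C3 over the ASCII string "123456789").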