On 2017-05-20 04:00, Warner Losh <imp(a)bsdimp.com> wrote:
> https://ia601901.us.archive.org/10/items/bitsavers_decpdp11ulLTRIX112.0SPDS…
>
> Looks like it requires MMU, but not split I/D space as it lists the
> following as compatible: M11, 11/23+, 11/24, 11/34, 11/40 and 11/60. It
> does require 256kb of memory. See table 2, page 6 for details.
Uh...? Where do you see that there is any TCP/IP support in Ultrix-11?
If any was done by someone else, there is no saying that it would be
usable on a machine without split I/D. To be honest, I've never seen any
mention of TCP/IP on any machine without split I/D space. I guess it
could be done, but it would be a rather big headache...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
At Celerity we were porting Unix to a new NCR chipset for our washing machine sized Workstation.
We had a VAX 750 as the development box and we cross compiled to the NCR box. We contracted
out the 750 maintenance to a 3rd party and had no problems for a couple of years. Then one day I
came in to work to find the VAX happily consuming power and doing nothing. Unix wasn’t running and
nothing I could do would bring it back. After about 2 hours I got my boss and we contacted the maintenance
company. The guy they sent did much of what I’d done and then went around the back. He pushed on the
backplane of the machine and lo, it started working. He then removed the pressure and it failed again almost
immediately. Turns out the backplane had a broken trace in it. We had done no board swaps in many
months and the room had had no A/C faults of any kind.
The company got a new backplane and had it installed in 2 days. Being 3rd party we couldn’t get it
replaced any quicker. After that it worked like a champ.
Celerity eventually became part of Sun as Sun Supercomputer.
David
OK, I'll kick it off.
A beauty in V6 (and possibly V7) was discovered by the kiddies in Elec
Eng; by sending a signal with an appropriately-crafted negative value (as
determined from inspecting <user.h>) you could overwrite u.u_uid with
zero... Needless to say I scrambled to fix that one on my 11/40 network!
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: Dave Horsfall
> Err, isn't that the sticky bit, not the setuid bit?
Oh, right you are. I just looked in the code for ptrace(), and assumed that
was it.
The fix is _actually_ in sys1$exec() (in V6) and sys1$getxfile() (in PWB1 and
the MIT system):
	/*
	 * set SUID/SGID protections, if no tracing
	 */
	if ((u.u_procp->p_flag&STRC)==0) {
		if(ip->i_mode&ISUID)
			if(u.u_uid != 0) {
				u.u_uid = ip->i_uid;
				u.u_procp->p_uid = ip->i_uid;
			}
The thing is, this code is identical in V6, PWB1, and MIT system!?
So now I'm wondering - was this really the bug? Or was there some
bug in ptrace I don't see, which was the actual bug being
discussed here?
Because it sure looks like this would prevent the exploitation that I
described (start an SUID program under the debugger, then patch the code).
Or perhaps somehow this fix was broken by some other feature, and that
introduced the exploit?
Noel
> From: "Steve Johnson"
> a DEC repairperson showed up to do "preventive maintenance" and managed
> to clobber the nascent file system.
> Turns out DEC didn't have any permanent file systems on machines that
> small...
A related story (possibly a different version of this one) which I read (can't
remember where, now) was that he trashed the contents of the RS04 fixed-head
hard disk, because on DEC OS's, those were only used for swapping.
Noel
Some interesting comments:
"You all are missing the point as to what the cost of passing
arrays by value or what other languages do"
I don't think so. To me the issue is that the model of what it
means to compute has changed since the punch-card days. When you
submitted a card deck in the early days, you had to include both the
function definition and the data--the function was compiled, the data
was read, and, for the most part there were no significant side
effects (just a printout, and maybe some stuff on mag tape).
This was a model that had served mathematics well for centuries, and
it was very easy to understand. Functional programming people still
like it a lot...
However, with the introduction of permanent file systems, a new
paradigm came into being. Now, interactions with the computer looked
more like database transactions: Load your program, change a few
lines, put it back, and then call 'make'. Trying to describe this
with a purely functional model leads to absurdities like:
file_system = edit( file_system, file_selector,
editing_commands );
In fact, the editing commands can change files, create new ones, and
even delete files. There is no reasonable way to handle any
realistic file systems with this model (let alone the Internet!)
In C's early days, we were just getting into the new world. Call by
value for arrays would have been expensive or impossible on the
machine with just a few kilobytes of memory for program + data. So
we didn't do it.
Structures were initially handled like arrays, but the compiler chose
to make a local copy when passed a structure pointer. This copy was,
at one time, in static memory, which caused some problems. Later, it
went on the stack. It wasn't much used...
This changed when the Blit terminal project was in place. It was
just too attractive on a 68000 to write
	struct pt { int x; int y; };	/* when int was 16 bits */
and I made PCC pass small structures like this in registers, like
other arguments. I seem to remember a dramatic speedup (2X or so)
from doing this...
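To make the win concrete, here is a minimal sketch (in present-day C, not the PCC-era dialect, and purely illustrative) of the kind of small structure that became cheap to pass and return by value once the compiler could keep it in registers:

	#include <stdio.h>

	/* A small two-int structure, the sort of thing the Blit code
	 * wanted to sling around by value. */
	struct pt {
		int x;
		int y;
	};

	/* Takes and returns the structure by value; the caller's copy
	 * is never touched. */
	struct pt
	offset(struct pt p, int dx, int dy)
	{
		p.x += dx;
		p.y += dy;
		return p;
	}

	int
	main(void)
	{
		struct pt origin = { 0, 0 };
		struct pt moved = offset(origin, 3, 4);

		printf("origin = (%d,%d), moved = (%d,%d)\n",
		    origin.x, origin.y, moved.x, moved.y);
		return 0;
	}

With structures this small, a register-passing convention makes the call no more expensive than passing two ints.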
"(did) Dennis / Brian/ Ken regret this design choice?
Not that I recall. Of course, we all had bugs in this area. But I
think the lack of subscript range checking was a more serious problem
than using pointers in the first place. And, indeed, for a few of
the pioneers, BCPL had done exactly the same thing.
Steve
Bjarne agrees with you. He put the * (and the &) with the type name to emphasize it is part of the type.
This works fine as long as you only use one declaration per statement.
The problem with that is that * doesn't really bind to the type name. It binds to the variable.
char* cp1, cp2; // cp1 is pointer to char, cp2 is just a char.
I always found it confusing that the * is used to indicate a pointer here, whereas when you want to change an lvalue to a pointer, you use &.
But if we're going to gripe about the evolution of C, my biggest gripe is that when they fixed structs to be real types, they didn't also do so for arrays.
Arrays and their degeneration to pointers is one of the biggest annoyances in C.
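As a minimal sketch of what that degeneration means in practice (illustrative modern C, nothing specific to any historical compiler): an array name passed to a function silently becomes a pointer to its first element, and the size information is gone.

	#include <stdio.h>

	/* The "array" parameter is really a pointer: inside f(),
	 * sizeof(a) is sizeof(int *), not 10 * sizeof(int). */
	static void
	f(int a[10])
	{
		printf("inside f:  sizeof(a) = %zu\n", sizeof(a));
	}

	int
	main(void)
	{
		int a[10];

		printf("in main:   sizeof(a) = %zu\n", sizeof(a));
		f(a);	/* the name decays to &a[0] */
		return 0;
	}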
> Am I the only one here who thinks that e.g. a char pointer should be
> "char* cp1, cp2" instead of "char *cp1, *cp2"? I.e. the fundamental type is "char*", not "char", and to this day I still write:
> Fortran, for the record, passes nearly everything by reference
Sort of. The Fortran 77 standard imposes restrictions that appear to
be intended to allow the implementation to pass by value-and-result
(i.e. values are copied in, and copied back at return). In particular
it disallows aliasing that would allow you to distinguish between
the two methods:
If a subprogram reference causes a dummy argument in the referenced
subprogram to become associated with another dummy argument in the
referenced subprogram, neither dummy argument may become defined
during execution of that subprogram.
http://www.fortran.com/F77_std/rjcnf-15.html#sh-15.9.3.6
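To see why that restriction matters, here is a small sketch (in C, just for illustration; the function names are made up) of the one case where true pass-by-reference and copy-in/copy-out give different answers - exactly the aliasing the standard rules out:

	#include <stdio.h>

	/* True pass-by-reference: both parameters may refer to the
	 * same object. */
	static void
	bump_ref(int *a, int *b)
	{
		*a += 1;
		*b += 1;
	}

	/* Hand-simulated copy-in/copy-out ("value-and-result"): work
	 * on local copies, write them back at return.  If a and b
	 * alias, the last write-back wins. */
	static void
	bump_copy(int *a, int *b)
	{
		int ta = *a, tb = *b;	/* copy in */
		ta += 1;
		tb += 1;
		*a = ta;		/* copy out */
		*b = tb;
	}

	int
	main(void)
	{
		int x;

		x = 0; bump_ref(&x, &x);  printf("by reference:     %d\n", x);	/* 2 */
		x = 0; bump_copy(&x, &x); printf("copy-in/copy-out: %d\n", x);	/* 1 */
		return 0;
	}

By forbidding the aliased call, Fortran 77 leaves the implementation free to use either convention.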
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> From: Random832
> Ah. There's the other piece. You start the SUID program under the
> debugger, and ... it simply starts it non-suid. *However*, in the
> presence of shared text ... you can make changes to the text image
> ... which will be reused the *next* time it is started *without* the
> debugger.
So I actually tried to do this (on a V6 system running on an emulator), after
whipping up a tiny test program (which prints "1", and the real and current
UIDs): the plan was to patch it to print a different number.
However, after a variety of stubbed toes and hiccups (gory details below, if
anyone cares), including a semi-interesting issue with the debugger and pure
texts, I'm punting: when trying to set a breakpoint in a pure text, I get the
error message "Can't set breakpoint", which sort of correlates with the
comment in the V6 sig$ptrace(): "write user I (for now, always an error)".
So it's not at all clear that the technique we thought would work would, in
fact, work - unless people weren't using a stock V6 system, but rather one
that had been tweaked to e.g. allow use of debuggers on pure-text programs
(including split I+D).
It's interesting to speculate on what the 'right' fix would be, if somehow the
technique above did work. The 'simple' fix, on systems with a PWB1-like XWRIT
flag, would be to ignore SETUID bits when doing an exec() of a pure text that
had been modified. But probably 'the' right fix would be to give someone
debugging a pure-text program their own private copy of the text. (This would
also prevent people who try to run the program from hitting breakpoints while
it's being debugged. :-)
But anyway, it's clear that back when, when I thought I'd found the bug, I
clearly hadn't - which is why when I looked into the source, it looked like it
had 'already' been fixed. (And why Jim G hemmed and hawed...)
But I'm kind of curious about that mod in PWB1 that writes a modified pure
text back to the swap area when the last process using it exits. What was the
thinking behind that? What's the value to allowing someone to patch the
in-core pure text, and then save those patches? And there's also the 'other
people who try and run a program being debugged are going to hit breakpoints'
issue, if you do allow writing into pure texts...
Noel
--------
For the gory details: to start with, attempting to run a pure-text program
(whether SUID or not) under the debugger produced a "Can't execute
{program-name} Process terminated." error message.
'cdb' is printing this error message just after the call to exec() (if that
fails, and returns). I modified it to print the error number when that
happens, and it's ETXTBSY. I had a quick look at the V6 source, to see if I
could see what the problem is, and it seems to be (in sys1$exec()):
	if(u.u_arg[1]!=0 && (ip->i_flag&ITEXT)==0 && ip->i_count!=1) {
		u.u_error = ETXTBSY;
		goto bad;
	}
What that code does is a little obscure; I'm not sure I understand it. The
first term checks to see if the size of the text segment is non-zero (which it
is not, in both 0407 and 0410 files). The second is, I think, looking to see
if the inode is marked as being in use for a pure text (which it isn't, until
later in exec()). The third checks to make sure nobody else is using the file.
So I guess this prevents exec() of a file which is already open, and not for a
pure text. (Why this is the Right Thing is not instantly clear to me...)
Anyway, the reason this fails under 'cdb' is that the debugger already has it
open (to be able to read the code). So I munged the debugger to close it
before doing the exec(), and then the error went away.
Then I ran into a long series of issues, the details of which are not at all
interesting, connected with the fact that the version of 'cdb' I was using
(one I got off a Tim Shoppa modified V6 disk) doesn't correspond to either of
the sources I have for 'cdb'.
When I switched to the latest source (so I could fix the issue above), it had
some bug where it wouldn't work unless there was a 'core' file. But eventually
I kludged it enough to get the 'can't set breakpoints' message, at which point
I threw in the towel.
> From: Clem Cole
> it was originally written for the 6th edition FS (which I
> hope I still have the sources for in my files) ...
> I believe Noel recovered a copy in his files recently.
Well, I have _something_. It's called 'fcheck', not 'fsck', but it looks like
what we're talking about - maybe it was originally named, or renamed, to be in
the same series as {d,i,n}check? But it does have the upper-case error
messages... :-) Anyway, here it is:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/fcheck.c
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man8/fcheck.8
Interestingly, the man page for it makes reference to a 'check' command, which
I didn't recall at all; here it is:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/s1/check.c
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man8/check.8
for those who are interested.
> Noel has pointed out that MIT had it in the late 1970s also, probably
> brought back from BTL by one of their summer students.
I think most of the Unix stuff we got from Bell (e.g. the OS, which is clearly
PWB1, not V6) came from someone who was in a Scout unit there in high school,
of all bizarre connections! ISTR this came the same way, but maybe I'm wrong.
It definitely arrived later than the OS - we'd be using icheck/dcheck for
quite a while before it arrived - so maybe it was another channel?
The only thing that for sure (that I recall) didn't come this way was
Emacs. Since the author had been a grad student in our group at MIT, I think
you all can guess how we got that!
Noel
> Are there languages that copy arrays in function calls defaultly?
> Pascal is an example.
Pascal's var convention, where the distinction between value
and reference is made once and for all for each argument of
each function, is sound. The flexibility of PL/I, where the
distinction is made at every call (parenthesize the name to
pass an array by value) is finicky, though utterly general.
> Where is all that [memory] going to come from if you pass a
> large array on a memory-constrained system of specs common back in the
> days when C was designed
Amusingly, under the customary linkage method in the even earlier
days when Fortran was designed, pass-by-reference entailed a big
overhead that could easily dominate pass-by-value for small arrays.
[In the beginning, when CPUs had only one register, subroutine
preambles plugged the reference into every mention of that variable
throughout the body of the subroutine. This convention persisted
in Fortran, which was designed for a machine with three index
registers. Since reference variables were sometimes necessary
(think of swap(a,b) for example) they were made standard.]
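A minimal C rendering of the swap(a,b) point, purely for illustration: the callee can only exchange the caller's variables if it is handed references to them, which Fortran does implicitly and C spells out with pointers.

	#include <stdio.h>

	/* Works only because the caller passes the addresses of its
	 * variables; a by-value swap would exchange two copies and
	 * change nothing the caller can see. */
	static void
	swap(int *a, int *b)
	{
		int t = *a;
		*a = *b;
		*b = t;
	}

	int
	main(void)
	{
		int x = 1, y = 2;

		swap(&x, &y);
		printf("x = %d, y = %d\n", x, y);	/* x = 2, y = 1 */
		return 0;
	}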
Doug
> From: Random832
> It seems to me that this check is central to being able to (or not)
> modify the in-core image of any process at all other than the one being
> traced (say, by attaching to a SUID program that has already dropped
> privileges, and making changes that will affect the next time it is
> run).
Right, good catch: if you have a program that was _both_ sticky and SUID, when
the system is idle (so the text copy in the swap area won't get recycled),
call up a copy under the debugger, patch it, exit (leaving the patched copy),
and then re-run it without the debugger.
I'd have to check the handling of patched sticky pure texts - to see if they
are retained or not.
{Checks code.}
Well, the code to do with pure texts is _very_ different between V6 and
PWB1.
The exact approach above might not work in V6, because the modified (in-core)
copy of a pure text is simply deleted when the last user exits it. But it
might be possible for a slight variant to work; leave the copy under the
debugger (which will prevent the in-core copy from being discarded), and then
run it again without the debugger. That might do it.
Under PWB1, I'm not sure if any variant would work (very complicated, and I'm
fading). There's an extra flag bit, XWRIT, which is set when a pure text is
written into; when the last user stops using the in-core pure text, the
modified text is written to swap. (It looks like the in-core copy is always
discarded when the last user stops using it.) But the check for sticky would
probably stop a sticky pure text from being modified? Still, the approach that
seems like it would work under V6 (leave the patched, debugger copy running,
and start a new instance) looks like it should work here too.
So maybe the sticky thing is irrelevant? On both V6 and PWB1, it just needs a
pure text which is SETUID: start under the debugger, patch, leave running, and
start a _new_ copy, which will run the patched version as the SUID user.
Noel
> From: Clem Cole
> I said -- profil - I intended to say ptrace(2)
Is that the one where running an SUID program under the debugger allowed one
to patch the in-core image of said program?
If so, I have a story, and a puzzle, about that.
A couple of us, including Jim Gettys (later of X-windows fame) were on our way
out to dinner one evening (I don't recall when, alas, but I didn't meet him
until '80 or so), and he mentioned this horrible Unix security bug that had
just been found. All he would tell me about it (IIRC) was that it involved
ptrace.
So, over dinner (without the source) I figured out what it had to be:
patching SUID programs. So I asked him if that was what it was, and I don't
recall his exact answer, but I vaguely recall he hemmed and hawed in a way
that let me know I'd worked it out.
So when we got back from dinner, I looked at the source to our system to see
if I was right, and.... it had already been fixed! Here's the code:
	if (xp->x_count!=1 || xp->x_iptr->i_mode&ISVTX)
		goto error;
Now, we'd been running that system since '77 (when I joined CSR), without any
changes to that part of the OS, so I'm pretty sure this fix pre-dates your
story?
So when I saw your email about this, I wondered 'did that bug get fixed at
MIT when some undergrad used it to break in' (I _think_ ca. '77 is when they
switched the -11/45 used for the undergrad CS programming course from an OS
called Delphi to Unix), or did it come with PWB1? (Like I said, that system
was mostly PWB1.)
So I just looked in the PWB1 sources, and... there it is, the _exact_ same
fix. So we must have got it from PWB1.
So now the question is: did the PWB guys find and fix this, and forget to
tell the research guys? Or did they tell them, and the research guys blew
them off? Or what?
Noel
> We all took the code back and promised to get patches out ASAP and not tell any one about it.
Fascinating. Changes were installed frequently in the Unix lab, mostly
at night without fanfare. But an actual zero-day should have been big
enough news for me to have heard about. I'm pretty sure I didn't; Dennis
evidently kept his counsel.
Doug
> From: "Ron Natalie"
> Ordered writes go back to the original BSD fast file system, no? I seem
> to recall that when we switched from our V6/V7 disks, the filesystem got
> a lot more stable in crashes.
I had a vague memory of reading about that, so I looked in the canonical FFS
paper (McKusick et al, "A Fast File System for UNIX" [1984]) but found no
mention of it.
I did find a paper about 'fsck' (McKusick, Kowalski, "Fsck: The UNIX File
System Check Program") which talks (in Section 2.5. "Updates to the file
system") about how "problem[s] with asynchronous inode updates can be avoided
by doing all inode deallocations synchronously", but it's not clear if they're
talking about something that was actually done, or just saying
(hypothetically) that that's how one would fix it.
Is it possible that the changes to the file system (e.g. the way free blocks
were kept) made it more crash-proof?
Noel
> The problem with that is that * doesn't really bind to the type name.
> It binds to the variable.
>
> char* cp1, cp2; // cp1 is pointer to char, cp2 is just a char.
>
> I always found it confusing that the * is used to indicate a pointer
> here, whereas when you want to change an lvalue to a pointer, you use
> &.
The way to read it is that you are declaring *cp1 as a char.
-- Richard
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
> From: Warner Losh
> There's a dcheck.c in the TUHS v7 sources. How's that related?
That was one of the earlier tools - not sure how far back it goes, but it's in
V6, though not in V5. It consistency-checks the directory tree. Another tool, 'icheck',
consistency-checks file blocks and the free list.
Noel
I've made available on GitHub a series of tables showing the evolution
of Unix facilities (as documented in the man pages) over the system's
lifetime [1] and two diagrams where I attempted to draw the
corresponding architecture [2]. I've also documented the process in a
short blog post [3]. I'd welcome any suggestions for corrections and
improvements you may have, particularly for the architecture diagrams.
[1] https://dspinellis.github.io/unix-history-man/
[2] https://dspinellis.github.io/unix-architecture/
[3] https://www.spinellis.gr/blog/20170510/
Cheers,
Diomidis
On 6 May 2017 at 11:23, ron minnich <rminnich(a)gmail.com> wrote (in part):
[...]
> Lest you think things are better now, Linux uses self modifying code to
> optimize certain critical operations, and at one talk I heard the speaker
> say that he'd like to put more self modifying code into Linux, "because it's
> fun". Oh boy.
Fun, indeed! Even self-modifying chips are being touted -- Yikes!
N.
> tr -cs A-Za-z '\n' |
> tr A-Z a-z |
> sort |
> uniq -c |
> sort -rn |
> sed ${1}q
>
> This is real genius.
Not genius. Experience. In the Bentley/Knuth/McIlroy paper I said,
"[Old] Unix hands know instinctively how to solve this one in a jiffy."
While that is certainly true, the script was informed by my having
written "spell", which itself was an elaboration of a model
pioneered by Steve Johnson. By 1986, when BKM was published,
the lore was baked in: word-processing scripts in a similar
vein were stock in trade.
A very early exercise of this sort was Dennis Ritchie's
enumeration of anagrams in the unabridged Merriam-Webster.
Since the word list barely fit on the tiny disk of the time,
the job entailed unimaginable marshalling of resources. I
was mightily impressed then, and still am.
Doug
On Thu, May 4, 2017 at 7:14 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> some of those Berkeley flags (not specifically for cat, but almost
> certainly including those for cat) were really quite useful.
Amen!!! I think that this is the key point. What is in good taste or good
style? Doug's disdain for the results of:
    less --help | wc
is rooted in bad design, a poor user interface, little forethought, *etc.*
Most of what is there has been added on top of Eric Shienbrood's
original more(1). I do not believe most of it is used that often, but some of
it is, of course, and you cannot tell who uses what!! So how do you decide
what to get rid of? How do you learn what people really need? IMO: much of
that is *experience* and this thing we call 'good taste.' As opposed to what I
think happened with less(1) and many other similar programs (programmers
peeing on the code because they could and the source was available -- "I can
add this feature to it and I think it is cool" -- as opposed to asking what we
really gain, and not having a 'master builder' arbitrating or vetting
things).
The problem we have is that we don't yet have a way of defining good taste.
One might suggest that it takes years of civilization and also that
tastes do change over time. Pike's minimalistic view (which I think is
taken to the extreme in the joke about the automobile dashboard on Brian
Kernighan's car) sets the bar at one end, and is probably one of the reasons
why UNIX had a bad reputation, certainly among non-computer scientists, for
being difficult when it first appeared. Another extreme is systems that put
you in a box and never let you do anything but what they tell you to do,
which I find just as frightening; frustration builds there when I use them.
Clearly, systems that are so noisy that you cannot find what you really want
or need are another dimension of the same bad design. So what to do? [more in
a minute...]
Larry is right. Many of the 'features' added to UNIX (and Linux) over time
have been and *are useful*. Small and simple as the Sixth Edition was (and I
really admire Ken, Dennis and the Team for its creation), in 2017 I really
don't want to run it for my day-to-day work anymore - which I happily did in
1977. But the problem is, as we got 'unlimited address space' of 32 and 64
bits, and more room for more 'useful' things, we also got a great deal of
rubbish and waste. I take Doug's point about less --help | wc to be that
there are so many thorns and weeds, it is hard to see the flowers in the
garden.
I'd like to observe that most college courses in CS I have seen
talk about how to construct programs, algorithms, data structures - i.e. the
mechanics of some operation. But this discussion is about the human
element: what we feel is good or bad, and how it relates to how we use
the program.
I think about my friends that have degrees in literature, art and
architecture. In all cases, they spend a lot of time examining past
examples, good and bad - and thinking and discussing what makes them so.
I'm actually happy to see that Professor McIlroy is one of the
folks taking a stand on the current craziness. I think this is only going to
get better with a new crop of students who have been trained in 'good
taste.' So, I wonder: do any of the schools like Dartmouth and the like
teach courses that study 'style' and taste in CS? Yes, it is a young
field, but we have been around long enough that we have a body of work, good
and bad, to consider.
I think there is a book or two and a few lectures in there somewhere.
Thoughts?
Clem
> From: Michael Kjörling <michael(a)kjorling.se>
> To: tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Discuss of style and design of computer programs
> from a user stand point
> Message-ID: <20170506091857.GE12539(a)yeono.kjorling.se>
> Content-Type: text/plain; charset=utf-8
>
> I would actually take that one step further: When you are writing
> code, you are _first and foremost_ communicating with whatever human
> will need to read or modify the code later. That human might be you, a
> colleague, or the violent psychopath who knows both where you live and
> where your little kids go to school (might as well be you). You should
> strive to write the code accordingly, _even if_ the odds of the threat
> ever materializing are slim at most. Style matters a lot, there.
>
Interesting, I was going to say much the same thing about the violent psychopath
who has to maintain your code after you leave. When I lectured at UCSD or was
giving talks on style for ViaSat I always said the same thing:
Whatever you write, the fellow who is going to wind up maintaining it is a known
axe killer, now released from prison, completely reformed. He learned computer
programming on MS/DOS 3.1 and a slightly broken version of Pascal. He will be
given your home phone number and address so if he has any questions about the
code you wrote he can get in contact with you.
This always got a few chuckles. I then pointed out that whenever anyone gets code
that someone else wrote, the recipient always thinks that they can ‘clean up’ what
is there because the original author clearly doesn’t understand what proper code
looks like.
Over time, I’ve learned that everyone has a style when writing code, just like handwriting,
and given enough time I can spot who the author of a block of code is just from the
indenting, the placement of ( and ) around a statement, and other small traits.
What makes good code is the ability to convey the meaning of the algorithm
from the original author to all those who come after. Sometimes even the most
unusual code can be quite clear, while the most cleanly formatted and commented
code can be opaque to all.
David
On 3 May 2017 at 09:09, Arthur Krewat <krewat(a)kilonet.net> wrote:
> Not to mention, you can cat multiple files - as in concatenate :)
Along these lines, who said "Cat went to Berkeley, came back waving flags."
N.
> I believe I was the last person to modify the Linux man page macros.
> Their current maintainer is not the kind of groff expert to whom it
> would occur to modify them; it would work as well to ask me questions
Question #1. Which tmac file do they use? If it's not in the groff
package, where can it be found?
Doug