> The Documenter's Workbench is sort of the unsung hero
> of Unix. It is why Unix exists, Unix was done to write patents and
> troff and the Documenter's Workbench was all about that.
My response along the following lines seems to have gone astray.
The prime reason for Unix was the desire of Ken, Dennis, and Joe
Ossanna to have a pleasant environment for software development.
The fig leaf that got the nod from Multics-burned management was
that an early use would be to develop a "stand-alone" word-processing
system for use in typing pools and secretarial offices. Perhaps they
had in mind "dedicated", as distinct from "stand-alone"; that's
what eventuated in various cases, most notably in the legal/patent
department and in the AT&T CEO's office.
Both those systems were targets of opportunity, not foreseen from the
start. When Unix was up and running on the PDP-11, Joe got wind of
the legal department having installed a commercial word processor.
He went to pitch Unix as an alternative and clinched a trial by
promising to make roff able to number lines by tomorrow in order to
fulfill a patent-office requirement that the commercial system did
not support.
Modems were installed so legal-department secretaries could try the
Research machine. They liked it and Joe's superb customer service.
Soon the legal department got a system of their own. Joe went on to
create nroff and troff. Document preparation became a widespread use
of Unix, but no stand-alone word-processing system was ever undertaken.
Doug
Decades ago (mid 1993) I produced a CD-ROM with free software for
Novell's UnixWare entitled "Applications for UnixWare". This was at
Novell's request, and they distributed the CDs at trade shows.
There's nothing very special in the CD, but I recently received a
request for it, so I put it up with a minimal description at
http://www.lemis.com/grog/Software/Applications-for-UnixWare.php
Feel free to copy it (with attribution).
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
All, I've just placed this document at
https://www.tuhs.org/Archive/Documentation/TechReports/Baker_Struct/bsbstru…
Many thanks to Doug for passing it on!
Cheers, Warren
----- Forwarded message from Douglas McIlroy -----
Warren,
Someone asked if Brenda Baker's original technical memorandum about
struct is available. I scanned my copy and passed it on, secure in my
belief that the Labs won't care since it is essentially the same as
her journal publication.
For the memo to be genuinely available, I'd like to send it to TUHS.
With that in mind I redacted information about corporate organization
and distribution channels. However the memo still bears the AT&T logo
and the words "not for publication".
Doug
----- End forwarded message -----
Thanks to Doug and Warren for getting Brenda Baker's memo on struct
online at
https://www.tuhs.org/Archive/Documentation/TechReports/Baker_Struct/bsbstru…
She later published a formal article about that work in
An Algorithm for Structuring Flowgraphs
Journal of the Association for Computing Machinery 24(1) 98--120 January 1977
https://doi.org/10.1145/321992.321999
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Last week there was a bit of discussion about the different shells
that would eventually lead to srb writing the shell that took his name
and the command syntax and semantics that most modern shells use
today. Some of you may remember that VMS had a command interpreter
called DCL (Digital Command language), part of an attempt to make
command syntax uniform across DEC's different operating systems
(TOPS-20 also used DCL). As DEC started to recognize the value of the
Unix marketplace, a project was born in DEC's Commercial Languages and
Tools group to bring the Unix Bourne shell to VMS and to sell it as a
product they called DEC Shell.
I had been part of that effort, and one of the issues we had to solve
was providing formal UNIX pipe semantics. The shell of course needed
to somehow implement UNIX-style process pipelines. VMS from the
beginning has had an interprocess communication pseudo-device called
the mailbox that can be written to and read from via the usual I/O
mechanism (the QIO system service). A major problem with mailboxes is
that it is not possible to detect the "broken pipe" condition, and
that deficiency made them unsuitable for use with DEC Shell. So the
team had me write a new device driver, closely based on the mailbox
driver, but one that could detect broken pipes UNIX-style.
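(For readers who haven't bumped into it, the "broken pipe" condition is
the Unix behavior sketched below - ordinary POSIX calls, purely
illustrative, not the VMS driver or DEC Shell code - where a write to a
pipe whose read end has gone away fails with SIGPIPE/EPIPE, which is how
a pipeline's upstream stages know to stop producing output.)

/* Minimal sketch of Unix "broken pipe" semantics (standard POSIX calls,
 * purely illustrative; not the VMS pipe driver or DEC Shell code). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>

int main(void)
{
    int fd[2];

    signal(SIGPIPE, SIG_IGN);   /* ignore it so write() returns EPIPE instead */
    if (pipe(fd) == -1)
        return 1;
    close(fd[0]);               /* the reader goes away: the pipe is now "broken" */
    if (write(fd[1], "hi\n", 3) == -1 && errno == EPIPE)
        printf("broken pipe detected: %s\n", strerror(errno));
    return 0;
}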
Shortly after I finished the VMS pipe driver, the team at DECwest had
started work on the MICA project, which was Dave Cutler's proposed OS
unification. Dave's team had developed a machine architecture called
PRISM (Proposed RISC Machine) to be the VAX follow-on. For forward
compatibility purposes, PRISM would have to support both Ultrix and
VMS. Dave and team had already written a microkernel-based,
lightweight OS for VAX called VAXeln that was intended for real-time
applications. His new idea was to have a MACH-like microkernel OS
which he called MICA and then to put three user mode personality
modules on top of that:
P.VMS, implementing the VMS system services and ABI
P.Ultrix, implementing the Unix system calls and ABI
P.TBD, a new OS API and ABI intended to supersede VMS
So I wrote the attached "why pipes" memo to explain to Cutler's team
why it was important to implement pipes natively in P.TBD if they
wanted that OS to be a viable follow-on to VMS and Ultrix.
In the end, Dick Sites's 64-bit RISC machine architecture proposal,
which was called Alpha, won out over PRISM. Cutler and a bunch of his
DECwest engineering team went off to Microsoft. Dave's idea of a
microkernel-based OS with multiple personalities did of course see the
light of day, originally as NT OS/2; and because of those multiple
personalities, when Microsoft and IBM divorced, Dave was able to pivot
quickly to the now infamous Win32 personality in what would be called
Windows NT. It also made it easy for Softway Systems to later complete
the NT POSIX layer for their Interix product, which, a few generations
later, is what Microsoft now calls WSL.
-Paul W.
> You made a comment that MINI-UNIX wasn't available outside of Bell...
No, I didn't say that. What I _actually_ said was: "don't think they were in
wide use outside Bell".
Noel
PS: Can I appeal to people to please take a few seconds of their time and
trim messages they are replying to? Thank you.
> From: Andrew Hume
> the actual configuration of Lions; PDP 11/40 was
> 128 Kbytes of core memory
> ...
> but note that because ... of addressing weirdness (the top 8KB were
> memory-mapped to I/O registers), Lions' PDP actually had 112KB of main
> memory
I think that '112KB' must be an error; the 8KB for the 'I/O page' (as DEC
eventually named it, long after the rest of the world had started using the
term :-) was deducted from the _UNIBUS_ address space, meaning a UNIBUS -11
(the 'pure' UNIBUS -11's, i.e. other than the -11/70, -11/44, etc) could have
a maximum of 248KB of main memory (which is on the UNIBUS): the 256KB of
18-bit UNIBUS address space, less the 8KB I/O page.
A pure UNIBUS -11 with 128KB of main memory (like Lions') has... 128KB of
main memory. The 'small memory management model' -11's (like the /40, /60,
/23, etc) can use at most 64KB of that _at any moment in time_ for user
processes (i.e. directly accessible by the CPU, in 'user' mode), 64KB being
the full span of a 16-bit virtual address.
(The kernel on such machines is basically restricted to 56KB at any moment in
time, since one 'segment/page' - the terminology changed over time - has to be
dedicated to the I/O page: the memory management control registers are in
that, so once the CPU can no longer 'see' them, it's stuck. Long, potentially
interesting digression about, and ways to semi-work around that, elided,
unless people want to hear it.)
> From: Noel Chiappa
> The -11/40 (as it was at first) that I had at LCS had, to start with,
> I'm pretty sure, 3 MM11-L units .. - i.e. 48KB. I know this sounds
> incredible, and I'm having a hard time believing it myself, wondering
> if my memory is failing with age
It is:
# size /lib/c0
13440+2728+10390=26558 (63676)
('c1' takes 14848+6950+2088=23886, FWIW; those figures are text+data+bss in
bytes, with the total repeated in octal in parentheses.) So 'my' -11/40 must
have had more than 48KB.
MINI-UNIX provides, on an -11/05 type machine with the maximum of 56KB of
addressable main memory (if you plugged in 64KB worth, the /05 CPU couldn't
'see' the top 8KB of that), up to 32KB for a user process. So that will
just hold the stock V6 C compiler.
I'm not now sure how much memory my -11 _did_ have initially, but it's not
important.
Noel
I enjoy this dc(1) discussion. I am a daily dc user, and since my first
calculator was an HP-45 (circa 1973), RPN felt right. However, I think
dc pre-dates ALL HP calculators, since there was one in the 1st Edition,
written in assembly.
I extended my version of dc (way before gnu existed)
based on common buttons I used from HP calculators:
CMD WHAT
# comment, to new line
~ change sign of top of stack (CHS key)
| 1/top of stack (1/x key)
e push 99 digits of e (2.718..) on the stack
@ push 99 digits of pi on the stack (looks like a circle)
r reverse top of stack (x<>y key)
I had been fascinated with pi, stemming from the Star Trek episode Wolf
in the Fold, where Spock uses it to use up all the computing power:
"Computer, this is a Class A compulsory directive. Compute to
the last digit the value of pi."
"As we know, the value of pi is a transcendental figure without
resolution. The computer banks will work on this problem to the
exclusion of all else until we order it to stop."
As dc was supposed to be "arbitrary precision", here was my tool.
So I wrote Machin's formula (pi/4 = 4*arctan(1/5) - arctan(1/239)) in dc,
slowly increasing the scale and printing the results. In the original dc,
yes, the whole part was arbitrary, but the decimal part (scale) was
limited to 99. Well, that became a disappointment.
(This program is lost to time.)
So I decided to rewrite it, but increasing pi as a whole number instead
of increasing scale (e.g. 31415, 314159, 3141592, ... etc.)
I still have that program which is simply this --
[sxln_1ll*dsllb*dszli2+dsi5li^*/+dsn4*lelzli239li^*/+dse-dlx!=Q]sQ
1[ddsb5/sn239/se1ddsisllQx4*pclb10*lPx]dsPx
If you run it, you'll notice the last 1 to 2 digits are wrong due to precision.
The next problem became small memory. I still have the saved output from
before it crashed at 1024 digits. No idea anymore what the specs of the
machine it was run on were; it's really old --
3141592653589793238462643383279502884197169399375105820974944592307816\
4062862089986280348253421170679821480865132823066470938446095505822317\
2535940812848111745028410270193852110555964462294895493038196442881097\
5665933446128475648233786783165271201909145648566923460348610454326648\
2133936072602491412737245870066063155881748815209209628292540917153643\
6789259036001133053054882046652138414695194151160943305727036575959195\
3092186117381932611793105118548074462379962749567351885752724891227938\
1830119491298336733624406566430860213949463952247371907021798609437027\
7053921717629317675238467481846766940513200056812714526356082778577134\
2757789609173637178721468440901224953430146549585371050792279689258923\
5420199561121290219608640344181598136297747713099605187072113499999983\
7297804995105973173281609631859502445945534690830264252230825334468503\
5261931188171010003137838752886587533208381420617177669147303598253490\
4287554687311595628638823537875937519577818577805321712268066130019278\
76611195909216420198938095257201065485862972
out of space: salloc
all 8587356 rel 8587326 headmor 1
nbytes -28318
stk 71154 rd 125364 wt 125367 beg 125364 last 125367
83 11 0
30 IOT trap - core dumped
But I was much happier with that.
On a side note: programming dc is hard. There was no comment character.
And it's a pain to read, and it's a pain to debug.
When I discovered the Chudnovsky algorithm for pi, of course I implemented it
in dc --
[0ksslk3^16lkd12+sk*-lm*lhd1+sh3^/smlx_262537412640768000*sxll545140134+dsllm*lxlnk/ls+dls!=P]sP
7sn[6sk1ddshsxsm13591409dsllPx10005v426880*ls/K3-k1/pcln14+snlMx]dsMx
At 99 digits of scale it ran out in 7 rounds, but now with that limitation
removed and large memories it just goes on and on.....
-Brian
PS: Thanks for the fast OpenBSD version of dc, Otto.
Otto Moerbeek wrote:
> On Thu, Feb 17, 2022 at 01:44:07PM -0800, Bakul Shah wrote:
>
> > On Feb 17, 2022, at 1:18 PM, Dave Horsfall <dave at horsfall.org> wrote:
> > >
> > > On Thu, 17 Feb 2022, Tom Ivar Helbekkmo via TUHS wrote:
> > >
> > >> Watching the prime number generator (from the Wikipedia page on dc)
> > >> running on the 11/23 is much more entertaining than doing it on the
> > >> modern workstation I'm typing this on:
> > >>
> > >> 2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x
> > >
> > > Wow... About 10s on my old MacBook Pro, and I gave up on my ancient
> > > FreeBSD box.
> >
> > That may be because FreeBSD continues computing primes while the MacOS
> > dc gives up after a while!
> >
> > freebsd (ryzen 2700 3.2Ghz): # note: I interrupted dc after a while
> > $ command time dc <<< '2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x' > xxx
> > ^C 11.93 real 11.79 user 0.13 sys
> > $ wc xxx
> > 47161 47161 319109 xxx
> > $ size `which dc`
> > text data bss dec hex filename
> > 238159 2784 11072 252015 0x3d86f /usr/bin/dc
> >
> > MacOS (m1 pro, prob. 2Ghz)
> > $ command time dc <<< '2p3p[dl!d2+s!%0=@l!l^!<#]s#[s/0ds^]s@[p]s&[ddvs^3s!l#x0<&2+l.x]ds.x' > xxx
> > time: command terminated abnormally
> > 1.00 real 0.98 user 0.01 sys
> > [2] 37135 segmentation fault command time dc <<< > xxx
> > $ wc xxx
> > 7342 7342 42626 xxx
> > $ size `which dc`
> > __TEXT __DATA __OBJC others dec hex
> > 32768 16384 0 4295016448 4295065600 100018000
> >
>
> MacOS uses the GNU implementation, which has a long-standing issue with
> deep recursion. It cannot even handle the tail-recursive calls used
> here and will run out of its stack.
>
> -Otto
Someone on one of these lists seems to be at ok-labs.com; well...
aneurin% host ok-labs.com
;; connection timed out; no servers could be reached
aneurin%
Houston, we have a problem... Could whoever is responsible please
sprinkle some fairy dust somewhere?
Thanks.
-- Dave
Does anybody know how much memory was configured on the PDP-11 that
Lions used for the commentary system? Here's what the book says about
the system:
; from lions, page 1
; The code selection presumes a "model" system consisting of:
; PDP11/40 processor;
; RK05 disk drives;
; LP11 line printer;
; PC11 paper tape reader/punch;
; KL11 terminal interface.
I usually add the mag tape, too
; TM10 magnetic tape - not in lions, but super handy
It seems like he must have had an MMU and 128k memory, but I don't know.
I'm hoping y'all remember, know, or can otherwise divine the correct
value. I've run with no MMU - crash on boot. I've also run with less
memory, but then cc won't build mkconf, when I have the TM10 enabled
kernel loaded. As a reminder, his book was published in 1977.
Thanks,
Will
> From: Will Senn
> Does anybody know how much memory was configured on the PDP-11 that
> Lion's used for the commentary system. Here's what the book says about
> the system:
> ..
> ; PDP11/40 processor;
> ...
> It seems like he must have had an MMU
V6 absolutely requires an MMU; the need for it is all throughout basic
attributes of the system - e.g. user processes start their address space at 0.
(BTW, there are V6 descendants, MINI-UNIX:
http://gunkies.org/wiki/MINI-UNIX
and LSX, which don't use/need an MMU, and run on -11 models without memory
management, such as -11/05's, but I don't think they were in wide use outside
Bell.)
> and 128k memory
The -11/40, as originally released, only supported the MM11-L, which came in
multiples of 16KB (for a 3-board set). Use of the later MM11-U (32KB units)
required a new main power harness, which only came in on
higher-serial-numbered -11/40's.
The -11/40 (as it was at first) that I had at LCS had, to start with, I'm
pretty sure, 3 MM11-L units (i.e. one MM11-L backplane full) - i.e. 48KB. I
know this sounds incredible, and I'm having a hard time believing it myself,
wondering if my memory is failing with age; but it definitely had
extraordinarily little.
I just looked on my V6 (running in a simulator), and it appears that by
trimming all parameters (e.g. number of disk buffers) to the absolute bone,
the kernel could be trimmed to about 36KB. (I haven't actually tried it,
since I don't feel like recompiling all the kernel modules, but one can
estimate it - take the current system size [44KB], delete 10 buffers @ .5KB
gives 39KB, etc, etc.)
That would allow a maximum user process of 12KB on a 48KB machine - and
MINI-UNIX, which runs basically stock V6 user code, can manage with user
processes that small.
I see Andrew's email which reports that the Lions machine had more main
memory, 128KB (maybe 4 MM11-U's - two MM11-U backplanes full); that
would have made their life a lot easier.
Noel
> The X11 tree was heavily ifdef-ed. And it needed to be; I don't have
> an answer as to how you would reuse all that code on different hardware
> in a better way.
Plan 9 did it with #include. The name of the included file was the same for
every architecture. Only the search path for include files changed. Done with
care, this eliminates the typical upfront #ifdefs that define constants and set
flags.
Other preprocessor conditionals can usually be replaced by a regular if, letting
the compiler optimize away the unwanted alternative. This makes conditionals
obey the scope rules of C.
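A tiny sketch of that second point (illustrative only, not Plan 9
source); in the scheme described above the constant would come from a
per-architecture header found via the include search path, but it is
defined inline here so the example stands alone:

/* Illustrative only, not Plan 9 source. */
enum { BigEndian = 0 };   /* would normally live in the per-arch header */

/* An ordinary if instead of #ifdef: the dead branch is still parsed,
 * type-checked, and scoped like any other C, then optimized away. */
unsigned long getlong(const unsigned char *p)
{
	if (BigEndian)
		return ((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16)
		     | ((unsigned long)p[2] << 8)  |  (unsigned long)p[3];
	return ((unsigned long)p[3] << 24) | ((unsigned long)p[2] << 16)
	     | ((unsigned long)p[1] << 8)  |  (unsigned long)p[0];
}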
Doug
6th Edition used the Thompson shell as /bin/sh. I don't think it had
those capabilities. Sometimes you could find an early version of the
Bourne shell in /bin/nsh (new shell) in v6.
The 7th Edition made the Bourne shell /bin/sh (which reads $HOME/.profile
at login). And there sometimes you could find the Thompson shell in
/bin/osh (old shell).
Will Senn wrote:
> Login commands question:
>
> I'm sure it's simple, but I can't figure it out. How do I get something
> to run at login in v6? Right now, I use ed to create a file 'setprof'
> that contains:
>
> stty erase[space][backspace][return]
> stty nl0 cr0
>
> Then after logging in:
>
> sh setprof
>
> It works, but, it is pretty clunky.
>
> stty question:
>
> So, I looked at stty.c and it looks like the following should work, if
> the terminal is sending ^H for backspace:
>
> #define BS0 0
> #define BS1 0100000
>
> modes[]
> ...
> "bs0",
> BS0, BS1,
>
> "bs1",
> BS1, BS1,
>
>
> but:
>
> stty bs0
> or
> stty bs1
>
> don't result in proper backspace handling..
>
> but:
>
> stty[space][^h][return]
>
>
> works...
>
> Thoughts?
Login commands question:
I'm sure it's simple, but I can't figure it out. How do I get something
to run at login in v6? Right now, I use ed to create a file 'setprof'
that contains:
stty erase[space][backspace][return]
stty nl0 cr0
Then after logging in:
sh setprof
It works, but it is pretty clunky.
stty question:
So, I looked at stty.c and it looks like the following should work, if
the terminal is sending ^H for backspace:
#define BS0 0
#define BS1 0100000
modes[]
...
"bs0",
BS0, BS1,
"bs1",
BS1, BS1,
but:
stty bs0
or
stty bs1
don't result in proper backspace handling.
but:
stty[space][^h][return]
works...
Thoughts?
> From: Clem Cole
> what I don't remember was it in v5
Your memory is going! :-) We discussed this recently (well, recently in _our_
timescale :-); it's built into in 'cc' in V5:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s1/cc.c
see "expand(file)".
Noel
> From: Will Senn
> My question is - how did y'all run things - with CSR zero and no kernel
> messages ... or with CSR non-zero and kernel messages.
On the -11/45 V6+ at MIT, we didn't have a printing terminal on the console,
just a VT52. We had a tool at MIT called 'dmesg':
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man8/dmesg.8
which made up for that a bit.
We normally ran with the CSR set to 173030 - the 'boot in single-user'
setting. That's because at one point the machine was in the 9th floor machine
room, but the console VT52 was on the 5th floor (where our offices were - the
famous print of the GE-645 Multics was on the hallway wall outside mine :-),
and I'd added a 'reboot()' system call (nothing fancy, it just jumped to the
bootstrap ROM), so we could reboot the machine without going up in the
elevator (not if it had crashed, of course). Later on, after we'd done with
kernel hacking (adding a network interface, and IP), and the machine stayed up
for long periods, we moved the console up next to the machine (since if it
crashed, you had to use the front panel to restart it, so you had to be up
there anyway); we stayed with the default CSR setting, though. (If it panic'd,
you could see the reason why when you went to reboot it.)
> Oh, BTW, I know I've seen this noted elsewhere, but I can't remember
> where.
Maybe at:
https://gunkies.org/wiki/UNIX_Sixth_Edition#Distros
which discusses it?
Noel
I noticed in the v6 source that putchar in prf.c - the kernel printf's
character routine - only prints to the console if the Console Switch
Register is non-zero.
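(Roughly the shape of what that looks like - a paraphrase in modern C,
NOT the actual V6 prf.c source:)

/* Paraphrase of the behavior described above, not the actual V6 prf.c:
 * the kernel's console-output routine reads the front-panel console
 * switch register and silently drops the character when it is zero. */
void kputchar(int c, volatile unsigned int *console_switch_register)
{
	if (*console_switch_register == 0)
		return;		/* switches at zero: kernel messages suppressed */
	/* ... otherwise wait for the console transmitter and emit c ... */
	(void)c;
}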
My question is - how did y'all run things - with CSR zero and no kernel
messages (seems dangerous to me, being naive and all), or with CSR
non-zero and kernel messages?
On my FreeBSD instance, I value the messages that show up on console as
they've alerted me to big problems in the past, but on my Mac, not as
much (sure you can run Console and see them, but they aren't immediate).
Oh, BTW, I know I've seen this noted elsewhere, but I can't remember
where. Dennis's v6 doesn't have the Western Electric message:
mem = 435
RESTRICTED RIGHTS
Use, duplication or disclosure is subject to
restrictions stated in Contract with Western
Electric Company, Inc.
It was a bit of a head scratcher as I was trying to read from the Dennis
version of the distro on my Mac while running Wellsch's tape on simh. I
spent quite a while spinning my wheels looking for "Western" in the
files to no avail. I thought something was screwy with the files or my Mac.
All,
I got sick of poring over my Peer-to-Peer Communications edition of
Lions' commentary - trying to read it while digging around in v6 was
getting annoying. Of course, if you don't already own it, rush out and buy a
copy. It's great stuff. Anyhow, the problem is that it's perfect bound,
landscape, and it's not searchable. Hunting around the internet, I found
pdfs that were searchable, but they were based on v7 being
back-engineered to v6 code. So, I located a decent source of
electronically readable Lions source at
https://warsus.github.io/lions-/ and off I went. I took the code and did
a bit (quite a bit) of tweakage on it to get it formatted in pdf, and
created a version in letter format that I find pretty useful. It can be
read from a printout while messing around in v6. I've done some
proofing, but I don't claim it's perfect. If you find any issues with
it, let me know and I'll try to fix them (thanks, Clem for early
suggestions).
Here's what's in the letter sized pdf:
Tweaked Cover Page
Improved Table of Contents
Lions' version of V6 Source Code
Appendices
Source File Sheets Alphabetical List
Source File Locations in Running V6 System
What isn't in the pdf:
Original Forewords, Prefaces, or Letters (not needed for coding)
Symbol Lists, Cross references, or Indexes (beyond my skills at the moment)
All in all, I have found it quite readable and useful for my own work. I
don't claim any ownership or contribution, other than in improving the
readability of the work for modern readers. If the cross reference thing
kills you, just use gnu ctags on the source directories and ctags -x for
the line numbers.
Here's the link to the posting:
http://decuser.blogspot.com/2022/02/tweaked-version-of-lions-v6-source-code…
- will
Lorinda Cherry, a long-time member of the original Unix Lab
died recently. Here is a slightly edited reminiscence that
I sent to the president of the National Center for Women and
Information Technology in 2018 when they honored her with
their Pioneer in Tech award.
As Lorinda Cherry's longtime colleague at Bell Labs, I was
very pleased to hear she has been chosen for the NCWIT Pioneer
Award. At the risk of telling you things you already know,
I offer some remarks about her career. I will mainly speak of
things I saw at first hand when our offices were two doors
apart, from the early '70s through 1994, when Lorinda left
Bell Labs in the AT&T/Lucent split. Most of the work I describe
broke new ground in computing; "pioneer" is an apt term.
Lorinda, like many women (including my own mother and my wife),
had to fight the system to be allowed to study math and science
in college. She was hired by Visual and Acoustics Research
at Bell Labs as a TA--the typical fate of women graduates,
while their male counterparts were hired as full members of
technical staff. It would take another decade for that unequal
treatment to be rectified. Even then, one year she received
a statement of benefits that explained what her wife would
receive upon her death. When Lorinda called HR to confirm that
they meant spouse, they said no, and demanded that the notice
be returned. (She declined.) It seemed that husbands would not
get equal treatment until AT&T lost a current court case. The
loss was a foregone conclusion; still AT&T preferred to pay
lawyers rather than widowers, and fought it to the bitter end.
Lorinda moved to my department in Computing Science when
the Unix operating system was in its infancy. Initially she
collaborated with Ken Knowlton on nascent graphics applications:
Beflix, a system for producing artistically pixillated films,
and an early program for rendering ball-and-stick molecular
models.
She then joined the (self-organized) Unix team, collaborating
on several applications with Bob Morris.
First came "dc", an unlimited-precision desk calculator,
which is still a Unix staple 45 years on. Building on dc,
she would later make "bc", which made unlimited precision
available in familiar programming-language notation and became
the interface of choice to dc.
Then came "form" and "fed", nominally a form-letter generator
and editor. In fact they were more of a personal memory
bank, a step towards Vannevar Bush's famous Memex concept--an
interesting try that didn't pay off at that scale. Memex had to
sleep two more decades before mutating into the Worldwide Web.
Lorinda had a hand in "typo", too, a Morris invention that
found gross spelling mistakes by statistical analysis. Sorting
the words of a document by the similarity of their trigrams
to those in the rest of the document tended to bring typos to
the front of the list. This worked remarkably well and gained
popularity as a spell-checker until a much less interesting
program backed by a big dictionary took over.
Taken together, these initial forays foretold a productive
computer science career centered around graphics, little
languages, and text processing.
By connecting a phototypesetter as an output device for Unix,
Joe Ossanna initiated a revolution in document preparation. The
new resource prompted a flurry of disparate looking documents
until Mike Lesk brought order to the chaos by creating a macro
package to produce a useful standard paper format.
Taking over from Lesk, Lorinda observed the difficulty of
typesetting the mathematics (which the printing industry counted
as "penalty copy") that occurred in many research papers,
and set out to simplify the task of rendering mathematical
formulas. Brian Kernighan soon joined her effort. The result
was "eqn", which built on the way people read formulas aloud
to make a quite intuitive language for describing display
formulas. Having pioneered a pattern that has been adopted
throughout the industry, eqn is still in use forty years later.
Lorinda also wrote an interpreter to render phototypesetter
copy on a cathode-ray terminal. This allowed one to see
typeset documents without the hassle of exposing and developing
film. Though everyone has similar technology at their fingertips
today, this was genuinely pioneering work at the time.
You are certainly aware of Writers Workbench, which gained
national publicity, including Lorinda's appearance on the Today
Show. It all began as a one-woman skunk-works project. Noticing
the very slow progress in natural-language processing, she
identified a useful subtask that could be carved out of the
larger problem: identifying parts of speech. Using a vocabulary
of function words (articles, pronouns, prepositions and
conjunctions) and rules of inflection, she was able to classify
parts of speech in running text with impressive accuracy.
When Rutgers professor William Vesterman proposed a
style-assessing program, with measures such as the frequencies
of adjectives, subordinate clauses, or compound sentences,
Lorinda was able to harness her "parts" program to implement
the idea in a couple of weeks. Subsequently Nina MacDonald,
with Lorinda's support, incorporated it into a larger suite
that checked and made suggestions about other stylistic issues
such as cliches, malapropisms, and redundancy.
Another aspect of text processing that Lorinda addressed was
topic identification. Terms (often word pairs) that occur with
abnormal frequency are likely to describe the topic at hand. She
used this idea to construct first drafts of indexes. One
in-house application was to the Unix manual, which up until
that time had only a table of contents, but no index. This
was a huge boon for a document so packed with detail.
In her final years at Bell Labs, Lorinda teamed up with AT&T
trouble-call centers to analyze the call transcripts that
attendants recorded on the fly--very sketchy prose, replete
with ad-hoc contractions and misspellings. The purpose was
to identify systemic problems that would not be obvious from
transcripts considered individually. When an unusual topic
appeared at the same time in multiple transcripts, those
transcripts were singled out for further study. The scheme
worked and led to early detection of system anomalies. In one
case, it led AT&T to suspend publication of a house organ that
rubbed customers the wrong way.
Lorinda was not cut from the same mold as most of her
colleagues. First she was a woman, which meant she faced
special obstacles. Then, while there were several pilots
among us, there was only one shower of dogs and only one car
racer--moreover one who became a regional exec of the Sports
Car Club of America. For years she organized and officiated
at races as well as participating.
Lorinda was always determined, but never pushy. The
determination shows in her success in text analysis, which
involves much sheer grit--there are no theoretical shortcuts
in this subject. She published little, but did a lot. I am
glad to see her honored.
Doug McIlroy
The one I remember using in the 80s was called "fep", written by
Kazumasa Utashiro of Software Research Associates. It was probably
posted to the comp.sources.unix usenet group.
-Brian
Chet Ramey wrote:
> On 2/20/22 4:19 PM, Lyndon Nerenberg (VE7TFX/VE6BBM) wrote:
> > Chet Ramey writes:
> >
> >> It always seemed like it would have been just the thing to implement as a
> >> tty streams module, but research Unix went in a different direction.
> >
> > I'm really surprised nobody has implemented a basic readline as a
> > tty line discipline by now. I was poking around the OpenBSD NMEA
> > line discipline code a few weeks ago and thinking it shouldn't be
> > that hard to do.
>
> It's not that hard. The complexity is in how sophisticated you want to get
> with redisplay and whether you want to allow user-specified key bindings.
>
> > Did anyone think about doing this in the past? If yes, what made you
> > decide against doing it? (Or a streams implementation, for that matter.)
>
> There have been several implementations (I never did one). I suspect that
> the people who were in a position to integrate that functionality into
> distributed kernels were not supportive, or the code didn't get to them
> at the right time.
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
> ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU chet(a)case.edu http://tiswww.cwru.edu/~chet/
On Feb 19, 2022, at 8:11 AM, Clem Cole <clemc(a)ccc.com> wrote:
>
>
> On Sat, Feb 19, 2022 at 11:04 AM Steve Nickolas <usotsuki(a)buric.co> wrote:
>> Apparently Bourne was heavily into ALGOL,
> That's sort of an understatement. I believe that when he was at Cambridge, he was one of the people that helped to take the Algol-X proposal and turned it into the Algol-68 definition. I also believe he worked on their famous implementation of same.
Some of you may be interested in this “A history of Algol68” paper:
https://dl.acm.org/doi/pdf/10.1145/234286.1057810
The author, Charles H Lindsey, still occasionally posts on comp.lang.misc
about Algol68. Among other things Bourne was a coauthor of Algol68C,
a portable implementation of Algol68.
Rob Pike:
I did the same to adb, which turned out to have a really good debugger
hidden under a few serious bugs of its own, which I fixed.
=====
Memories.
Was it you who replaced ptrace with /proc in adb, or did I do that?
I do remember I was the one who took ptrace out of sdb (which a
few 1127-ers, or perhaps 112-ers on alice and rabbit still used).
After which I removed ptrace from the kernel, and from the
copy of the V8 manual in the UNIX room. Conveniently ptrace
occupied two facing pages; I glued them together.
I also later did some work to try to isolate the target-dependent
parts of adb and to make them work in host-independent ways--e.g.
assembling ints byte-by-byte rather than assuming byte order--to
make it easier to make a cross adb, e.g. to examine PDP-11 or
68K core dumps on a VAX.
I miss adb; maybe it's time to revive it, though these days I'd
be tempted to rewrite it in Python so I could just load the right
module at runtime to pick the desired target.
Norman Wilson
Toronto ON
Around the second half of the 1980s, when the computer division of Philips
Electronics started on their own Motorola M68010 / UNIX System V.3 (I don't
remember for sure, I'm afraid), they used a syntax.h with macros similar to
mac.h - only, as I understand it, more Pascal-like. Appended is the 1987
version I found in my archive.
Cheers,
rubl
--
The more I learn the better I understand I know nothing.
I have been poring through the v7 source code lately, and came across an
oddity I would like to know more about. Specifically, in sh. The code
for sh is C, but it makes *extensive* use of macros, for example:
/usr/src/cmd/sh/word.c
...
WHILE (c=nextc(0), space(c)) DONE
...
The macros for sh are defined in mac.h:
/usr/src/cmd/sh/mac.h
...
#define BEGIN {
#define END }
#define SWITCH switch(
#define IN ){
#define ENDSW }
#define FOR for(
#define WHILE while(
#define DO ){
#define OD ;}
#define REP do{
#define PER }while(
#define DONE );
...
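For reference, once the preprocessor has run, the word.c line quoted
above becomes ordinary C:

WHILE (c=nextc(0), space(c)) DONE
/* expands to */
while( (c=nextc(0), space(c)) );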
I can read the resultant code through the lens of my experience coding
C, but I'm curious about why the macros and how this came about. In v6,
the sh source is straight-up C. Is there a story behind it worth knowing?
Thanks,
Will
I was just reminded of this and thought others might enjoy reading it...
Customer Review
https://www.amazon.com/gp/customer-reviews/R2VDKZ4X1F992Q/
> PING! The magic duck!
> Using deft allegory, the authors have provided an insightful and intuitive explanation of one of Unix's most venerable networking utilities. Even more stunning is that they were clearly working with a very early beta of the program, as their book first appeared in 1933, years (decades!) before the operating system and network infrastructure were finalized. ...
-r