What do folks think about event-driven programming as a substitute for threads in UI and process control settings?
I wrote the service processor code for the SiCortex machines using libevent.a and I thought it was very lightweight and fairly easy to think about. (This was a thing running on uClinux on a tiny 16 MB ColdFire that managed the consoles and power supplies and temp sensors and JTAG access and booting and so forth.)
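For flavor, here is a minimal sketch of that style using today's libevent 2.x API (the SiCortex code used the older interface, so details differed): register a callback per file descriptor, then hand control to a single-threaded event loop.

    #include <event2/event.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Echo whatever arrives on the watched descriptor. */
    static void on_readable(evutil_socket_t fd, short what, void *arg)
    {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, n, stdout);
    }

    int main(void)
    {
        struct event_base *base = event_base_new();
        /* One persistent read event per monitored fd; no threads. */
        struct event *ev = event_new(base, STDIN_FILENO,
                                     EV_READ | EV_PERSIST, on_readable, NULL);
        event_add(ev, NULL);
        event_base_dispatch(base);   /* the whole program is this loop */
        return 0;
    }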
Tk (IIRC) has a straightforward event driven model for UI interactions.
Meanwhile, the Dropbox plugin for my Mac has 120 threads running. WTF?
This was triggered by the fork/spawn discussion.
-Larry
(started with Unix at V6 on an 11/34)
> spawn() beats fork()[;] fork() should be deprecated
Spawn is a further complication of exec: it must tell what signals and
file descriptors to inherit, in addition to what arguments and
environment variables to pass.
Fork has a place. For example, Program 1 in
www.cs.dartmouth.edu/~doug/sieve/sieve.pdf forks like crazy and never
execs. To use spawn, the program would have to be split in three (or
be passed a switch setting).
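For readers without the paper at hand, the flavor of Program 1 is
roughly this (a sketch, not Doug's actual code): each stage prints one
prime, forks its successor, and filters multiples onward, forking
freely and never execing.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* One sieve stage: read candidates from fd 'in', print the first
     * as prime, fork the next stage, pass on the non-multiples. */
    static void sieve(int in)
    {
        FILE *src = fdopen(in, "r"), *out;
        int p, n, pd[2];

        if (fscanf(src, "%d", &p) != 1)
            exit(0);
        printf("%d\n", p);
        pipe(pd);
        if (fork() == 0) {          /* child: the next stage */
            fclose(src);
            close(pd[1]);
            sieve(pd[0]);           /* never returns */
        }
        close(pd[0]);
        out = fdopen(pd[1], "w");
        while (fscanf(src, "%d", &n) == 1)
            if (n % p != 0)
                fprintf(out, "%d\n", n);
        fclose(out);                /* EOF tells the next stage to finish */
        wait(NULL);
        exit(0);
    }

    int main(void)
    {
        int pd[2];
        FILE *out;

        pipe(pd);
        if (fork() == 0) {          /* first stage */
            close(pd[1]);
            sieve(pd[0]);
        }
        close(pd[0]);
        out = fdopen(pd[1], "w");
        for (int n = 2; n < 1000; n++)   /* the candidate stream */
            fprintf(out, "%d\n", n);
        fclose(out);
        wait(NULL);
        return 0;
    }

Note there is no exec anywhere; each child is simply the same program
continuing. To do this with spawn, each stage would have to re-exec the
program with a switch saying which role to play; exactly the split
Doug mentions.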
While you may dismiss Program 1 as merely a neat demo, the same idea
applies in parallelizing code for use in a multiprocessor world.
Doug
Another issue I have run into is recursion, when a reference includes
another reference. This comes up in various forms:
Also published as ...
Errata available at ...
Summarized [or reviewed] in ...
Preprint available at ...
Often such a reference takes the form of a URL or a page in a journal
or in a proceedings. This can be most succinctly placed in situ --
formatted consistently with other references. If the reference
identifies an alternate mode of publication, it may lack %A or %T
fields.
Partial proposal: a %O field may contain a reference, with no further
recursion. The contained reference will be formatted inline in the %O
text unless references are accumulated and the contained reference is
not unique.
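For instance, a made-up entry (my guess at concrete syntax, which the
proposal above does not fix):

    %A A. N. Author
    %T A Result of Some Importance
    %J Journal of Examples
    %D 1980
    %O Also published as %J Proceedings of Examples %D 1981 %P 99-107

Here the %O text carries one embedded reference, and that reference may
not itself contain another.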
Doug
> Go gets us part of the way there, but cross-machine messaging is still a mess.
Shed a tear for Plan 9 (Pike yet again). While many of its secondary
innovations have been stuffed into Linux, its animating
principle--transparently distributable computing--could not overcome
the enormous inertia of installed BSD-model systems.
Doug
I have considerable sympathy with the general idea of formally
specifying and parsing inputs. Langsec people make a strong case
for doing so. The white paper, "A systematic approach to modern
Unix-like command interfaces", proposes to "simplify parsing by
facilitating the creation of easy-to-use 'grammar-based' parsers".
I'm not clear on what is meant by "parser". A plain parser is a
beast that builds a parse tree according to a grammar. For most
standard Unix programs, the parse tree has two kinds of leaves:
non-options and options with k parameters. Getopt restricts
k to {0,1}.
Aside from fiddling with argc and argv, I see little difference
between working with a parse tree for arguments that could be
handled by getopt and using getopt directly.
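For concreteness, the getopt level of parsing amounts to this (a
hypothetical command with one k=0 option and one k=1 option):

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        int c, verbose = 0;
        char *outfile = NULL;

        while ((c = getopt(argc, argv, "vo:")) != -1)
            switch (c) {
            case 'v': verbose = 1;      break;  /* option, k = 0 */
            case 'o': outfile = optarg; break;  /* option, k = 1 */
            default:  return 1;                 /* bad option */
            }
        for (int i = optind; i < argc; i++)     /* the non-option leaves */
            printf("operand: %s\n", argv[i]);
        return 0;
    }

A parse tree built by grammar over the same argv would carry no more
information.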
A more general parser could handle more elaborate grammatical
constraints on options, for example, field specs in sort(1),
requirements on presence of options in tar(1), or representation
of multiple parameters in cut(1).
In realizing the white paper's desire to "have the parser
provide the results to the program", it's likely that the mechanism
will, like Yacc, go beyond parsing and invoke semantic actions
as it identifies tree nodes.
Pioneer Yaccification of some commands might be a worthy demo.
Doug
> On 7/31/21, Michael Siegel <msi(a)malbolge.net> wrote:
> The TOPS-20 COMND JSYS implemented both of these features, and I
> think that command completion was eventually added to the VMS
> command interpreter, too.
FYI, there is also a Unix version of the COMND JSYS capability. It was
developed at Columbia University as part of their "mm" mail manager. It
is located in the ccmd subdirectory of the mm.tar.gz file.
url: https://www.kermitproject.org/mm/
ftp://ftp.kermitproject.org/kermit/mm/mm.tar.gz
-ron
Besides C-Kermit on Unix systems, the TOPS-20 command interface is
used inside the mm mail client, which I've been using for decades on
TOPS-20, VMS, and several flavors of Unix:
http://www.math.utah.edu/pub/mm
mm doesn't handle attachments or do fancy display of HTML, and thus
cannot do anything nasty in response to incoming mail messages.
On the rare occasion that I need to extract an attachment, I save the
message in a temporary file and run munpack on it.
Here are some small snippets of its inline help:
MM] read (messages) ? message number
or range of message numbers, n:m
or range of message numbers, n-m
or range of message numbers, n+m (m messages beginning with n)
or "." to specify the current message
or "*" to specify the last message
or message sequence, one of the following:
after all answered before
current deleted flagged from
inverse keyword last longer
new on previous-sequence recent
seen shorter since subject
text to unanswered undeleted
unflagged unkeyword unseen
or "," and another message sequence
R] read (messages) flagged since yesterday
[message(s) appear here]
MM] headers (messages) since monday longer (than) 100000
[list of long messages here]
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> TUHS list (Rob Pike, Aug 2019)
>
> I think it was slightly later. I joined mid-1980 and VAXes to replace the
> 11/70 were being discussed but had not arrived. We needed to convert a lab
> into a VAX machine room and decide between BSD and Reiser, all of which
> happened in the second half of 1980.
>
> Reiser Unix got demand paging a little later, and it was spectacularly
> fast. I remember being gobsmacked when I saw a demo in early 1981.
>
> Dead ends everywhere.
I think I have figured out why 32V R3 was so fast (assuming my current understanding of how 32V R3 must have worked is correct).
Its VM subsystem tags each memory frame with its disk mirror location, be it in swap or in the file system. A page can be found quickly, as frames are hashed on device and block number. This is true both for pages in the working set and for pages on the 2nd-chance list. Effectively, most of core is a disk cache.
In a unified buffer design, the buffer code would first look for an existing buffer header for the requested disk block, as in V7. If not found, it would check the page frame list for that block and if found it would connect the frame to an empty buffer header, increment its use count and move it to the working set. If not found there either, it would be loaded from disk as per usual. When a buffer is released, the frame use count would be decremented and if zero the page frame would be put back on the 2nd chance list and the buffer header would be marked empty. With this approach, up to 4MB of the disk could be cached in RAM.
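In rough code, the lookup path would become something like this (a sketch of the scheme as described above; all names are illustrative, not recovered source):

    /* Unified buffer lookup: the page-frame hash acts as a
     * second-level disk cache behind the V7-style buffer pool. */
    struct buf *getblk(dev_t dev, daddr_t blkno)
    {
        struct buf *bp;
        struct pfdat *pf;

        if ((bp = incore(dev, blkno)) != NULL)   /* buffer header hit */
            return bp;
        if ((pf = pfind(dev, blkno)) != NULL) {  /* page-frame hash hit */
            pf->pf_use++;                        /* back into the working set */
            bp = getfreehdr();                   /* empty buffer header */
            attach(bp, pf);                      /* connect frame to header */
            return bp;
        }
        bp = getfreehdr();                       /* true miss: go to disk */
        bread(dev, blkno, bp);
        return bp;
    }

On brelse() the frame's use count drops; at zero the frame goes back on the 2nd-chance list and the header is marked empty, as described above.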
Early in 1981 most binaries and files were a few dozen KB in size. All of the shell, editor, compiler tool chain, library files, intermediate files, etc. would have fitted in RAM at once. In a developer-focused demo, once memory was primed, the system would effectively run from RAM, barely hitting the disk, even with tens of concurrent logins. Also, something like “ls -l /bin” would have been much faster on its second run.
It puts a comment from JFR in a clearer context:
<quote>
Strict LRU on 8,192 pages, plus Copy-On-Write, made the second reference to a page "blindingly fast".
<unquote>
So far I had read this in the context of the paging algorithm, and then it is hard to understand (is LRU really that much better than NRU?). In the context of a unified buffer and disk pages, it makes a lot more sense. Even the CoW part: as the (clean) data segment of executables would still be in core, they could start without reloading from disk - CoW would create copies as needed.
===
The interesting question now is: if this buffer unification was so impressive, why was it abandoned in SVr2-vax? I can think of 3 reasons:
1. Maybe there was a subtle bug that was hard to diagnose. “Research” opting for the BSD memory system “as it did not want to do the maintenance” suggests that there may have been lingering issues.
1b. A variation of this: JFR mentioned that his implementation of unified buffers broke conceptual layering. USG do not strike me as purists, but maybe they thought the code was too messy to maintain.
2. Maybe there was an unintended semantic issue (e.g. you can lock a buffer, but not an mmap'ed page).
3. Maybe it was hard to come up with a good sync() policy, making database work risky (and system crashes more devastating to the file system).
JFR mentioned that he did the design and implementation for 32V R3 in about 3 months, with 3 more months for bug fixing and polishing. That is not a lot of time for such a big and complex kernel mod (for its time).
Currently reading through IPC in CB Unix 3.0, SVr1-vax and SVr1-pdp11.
The IPC primitives in CB Unix (maus, event and msg) are quite different from those in SVr1 (although maus survives in SVr1-pdp11).
It made me wonder: is it still known who designed the IPC primitives for SysV?
> Sat Aug 31 06:21:47 AEST 2019
> John Reiser did do his own paging system for UNIX 32/V.
> I heard about it from friends at Bell Labs ca. 1982-83,
> when I was still running systems for physicists at Caltech.
> It sounded very interesting, and I would love to have had
> my hands on it--page cache unified with buffer cache,
> copy-on-write from the start.
>
> [...]
>
> It is in any case a shame that jfr's code never saw the light
> of day. I really hope someone can find it on an old tape
> somewhere and we can get it into the archive, if only because
> I'd love to look at it.
>
> Norman Wilson
> Toronto ON
Norman,
I am getting more optimistic that much of this version of 32V can be “triangulated” from the surviving sources. For convenience I call this version “32V r3” (with the first swapping version being “32V r1” and the scatter-loading version being “32V r2”).
I’ve been reading my way through the surviving 32V r2, SysIII-Vax, SVr1-Vax and SVr2-Vax sources. There seems to be a lot of continuous progression across these versions. From a communication with Keith Kelleman I had thought that VM in SVr2 was unrelated, but that appears to have been a misunderstanding. In short, it seems that the basic paging algorithms and data structures in SVr2 (other than the region abstraction) come from 32V r3.
The strongest clue for the source link is in the SVr2 “page.h” file. In it, the union describing a page table entry matches with JFR’s description and is otherwise (partially) unused in SVr2. There are other, similar clues in the other source trees.
To explain more clearly, have a look at this code section: https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax…
In this union, the 2nd struct “pgd” is never used, nor is the bit pg_disk in the “pgm” struct ever used. It matches with JFR’s recollection:
<quote>
My internal design strategy was to use the hardware page table entry
(4 bytes per page, with "page not present" being one bit which freed
the other 31 bits for use by software) as the anchor to track everything
about a page. This resulted in a complicated union of bitfield structs,
which became a headache. When other departments took over the work,
the first thing they did was use 8 bytes per page, which meant shuffling
between the software and the hardware descriptors: its own mess.
<unquote>
As given, a pte can be in two states: (i) the pg_disk bit is clear, in which case it is in “pgm” format, the format recognized by the MMU hardware; (ii) the pg_disk bit is set, in which case it is in “pgd” format. When a page is paged in, the disk form of the pte was, I think, saved in the page frame descriptor (the “pfdat” table) and the pte converted to the memory form. Upon page-out the reverse happened. In the SVr2 version of this code the “pgd” form is abandoned and replaced by the separate disk descriptors (see https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax…)
The “pgd” structure is a bit puzzling. There is a 19-bit device block number, which is less than the 24 bits allowed by the file system code and also less than the 20 bits that would fit in the pte. Maybe this is because the RP04/RP05 disks of the early '80s worked with 19 bits. I am not sure what the “pg_iord” field is about. The comment “inode index” may be misleading. My hypothesis is that it is a short form of the device code, for instance an index into the mount table; magic values could have been used to identify the swap device, “demand zero”, etc.
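In outline, the two-state pte would look something like this (a paraphrase of the description above; the width of pg_blkno comes from the source, the other widths are illustrative):

    /* One 4-byte entry, interpreted two ways depending on pg_disk. */
    union pte {
        struct pgm {                  /* memory form, seen by the MMU */
            unsigned pg_pfnum : 21;   /* page frame number */
            unsigned pg_sw    : 4;    /* software bits */
            unsigned pg_disk  : 1;    /* 0: memory form */
            unsigned pg_prot  : 4;    /* protection */
            unsigned pg_fod   : 1;    /* fill on demand */
            unsigned pg_v     : 1;    /* valid */
        } pgm;
        struct pgd {                  /* disk form, software only */
            unsigned pg_blkno : 19;   /* device block number */
            unsigned pg_iord  : 10;   /* "inode index" (see above) */
            unsigned pg_disk  : 1;    /* 1: disk form */
            unsigned          : 2;    /* unused */
        } pgd;
    };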
It seems probable to me that the paging algorithm in SVr2 was derived from the 32V r3 algorithm. John's recollection:
<quote>
Our machine started with 512KB of RAM, but within a few months was upgraded
to 4 MB with boards that used a new generation of RAM chips.
The hardware page size was 512 bytes, which is quite small. Strict LRU
on 8,192 pages, plus Copy-On-Write, made the second reference to a page
"blindingly fast".
<unquote>
In a follow-up mail conversation with JFR we (I?) arrived at the conclusion that literal “strict LRU” is not likely on VAX hardware, but that an algorithm that maintains a small working set combined with a large second-chance list amounts to about the same thing. It seems to me that this description also applies to the surviving SVr2 implementation: every 4 seconds it sweeps through the page tables of all processes, and pages that were not referenced are moved to the free/2nd-chance list.
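A sketch of that sweep (illustrative names only; note the VAX has no hardware reference bit, so "referenced" had to be simulated in software, e.g. by turning off the valid bit and catching the resulting fault):

    /* Every 4 seconds: age unreferenced pages out of the working set. */
    void sweep(void)
    {
        struct proc *p;
        union pte *pte;

        for (p = &proc[0]; p < &proc[NPROC]; p++)
            for (pte = first_pte(p); pte != NULL; pte = next_pte(p, pte)) {
                if (referenced(pte))
                    clear_ref(pte);      /* survives another interval */
                else
                    to_2nd_chance(pte);  /* stolen, contents intact:
                                            reclaimable on a soft fault */
            }
    }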
To implement this it seems likely to me that 32V r3 used a structure similar to this: https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/src/uts/vax… Such a structure is also in line with viewing core as a cache for disk, similar to TENEX that JFR had in mind.
The big change from SysIII to SVr1 in kernel page table management is the introduction of the “sptmap” (https://chiselapp.com/user/pnr/repository/paging/file?name=os/machdep.c&ci=…) In 32V r2 and SysIII the user page tables are swapped in and out of kernel space along with the _u area. This makes it impractical to do the working set sweep every few seconds. The “sptmap” innovation effectively creates a global page table space that fits with the needs of the working set sweep. In SVr1 it seems to have no real benefit, and it seems likely to me that it came from 32V r3.
In general it seems plausible to me that SVr1 derives from 32V r3, but was regressed to SysIII code where necessary to keep the code PDP-11 compatible. Another clue for this is in the buffer code of SVr1: https://chiselapp.com/user/pnr/repository/paging/file?name=os/main.c&ci=fba…
Here, the disk buffers are allocated as virtual memory pages, not as an array. This too is otherwise unused in SVr1, but makes perfect sense in the context of 32V r3.
So, in summary, it would seem to me that the 32V r3 virtual memory system:
- used sptmap code similar to SVr1/r2 to manage page tables in the kernel
- used the pte structure from SVr2 and a pfdat table similar to SVr2 to manage mappings
- used page stealer and fault code similar to SVr2
Phrased the other way around: SVr2-vax seems to use the 32V r3 virtual memory code with the region abstraction added on top, and the unified buffer removed.
At the moment I have not found clear clues for the unified buffer algorithms or mmap implementation. Perhaps careful reading of the IPC shared memory code in SVr1 will yield some.
To be continued …
In 1992, 386BSD is released by Lynne and William Jolitz, starting the open
source operating system movement (Linux didn't come along until later).
-- Dave
Unlike most here, I always pronounced Mt Xinu with an
eye, not an eee. I don't know where I got that, though.
I did know Ed Gould via USENIX and DECUS, but that doesn't
make my pronunciation correct.
As an aside, anyone know where Ed is these days or how he's
doing? I last saw him at a USENIX conference, probably in
San Jose in 2013 but I'm not sure. He showed up just for the
reception; he'd retired, and had cut away most of his famous
beard because he was spending a lot of time sailing and it
got in the way.
Norman Wilson
Toronto ON
Nelson H. F. Beebe:
P.S. Jay was the first to get Steve Johnson's Portable C Compiler,
pcc, to run on the 36-bit PDP-10, and once we had pcc, we began the
move from writing utilities in Pascal and PDP-10 assembly language to
doing them in C.
======
How did that C implementation handle ASCII text on the DEC-10?
Were it a from-scratch UNIX port it might make sense to store
four eight- or nine-bit bytes to a word, but if (as I sense it
was) it was C running on TOPS-10 or TOPS-20, it would have had
to work comfortably with DEC's convention of five 7-bit characters
(plus a spare bit used by some programs as a flag).
Norman Wilson
Toronto ON
More from Yost below.
My purpose in relating this was to point out that the original unix
implementation choices were mostly fine; they just had to be tweaked a
bit. Clearly an independent implementation such as in Linux would veer
off in a different direction, done in a different era and with different prior
experience. I was a bit surprised that Bruce didn't make this same
tweak to the cblock size, but there's no way of knowing his reasons now.
> Begin forwarded message:
>
> From: Dave Yost
> Subject: Re: [TUHS] 386BSD released
> Date: July 16, 2021 at 9:21:53 AM PDT
> To: Bakul Shah
>
> Plz forward this
> thanks
>
> This was in early 1983 or late 1982.
>
> We got the serial driver to go 19200 out and 9600 in.
>
> I did 2 things in the Fortune Systems 68k serial driver:
> • hand-coded asm pseudo-DMA, suggested by Robert P Warnock III
> • cblock size 128 bytes instead of 8, count ’em, 8.
>
> From Lyons,
> https://cs3210.cc.gatech.edu/r/unix6.pdf
> the unix v6 serial driver used a clist of cblocks, like this:
> [diagram of the clist/cblock structure, reproduced from Lions, omitted]
> The pseudo-DMA interrupt handler was a function made up of a few hand-coded 68k instructions, entered into C code as hex data. That code transferred one byte into or out of a cblock, and at the end of the cblock it grabbed the next cblock from a queue and rang the “doorbell” hardware interrupt, which caused a “software interrupt” at lower priority for further processing. Rob put the doorbell into the architecture with a couple of gates on the board because he was well aware of this software interrupt trick, which was already used in bsd. For some reason I didn’t look at the bsd code, probably because Rob’s explanation was lucid and sufficient.
>
> I once had occasion to mention this, and specifically the relaxing of the draconian 8 byte cblock size, to Dennis Ritchie. He said, sure, why not, the 8 byte cblock size was just a neglected holdover from early days.
>
> This approach was just an interrupt version of what I had proposed to Rick Kiessig as a first project at Fortune Systems: to get a 30x speed up when writing to the Fortune Systems memory-mapped character display hardware. I had done the same thing a few years earlier in Z80 in C code in a serial CRT terminal. It’s simple and obvious: make the inner loop do as little as possible. The most primitive operation needs to be a block operation, not a byte-at-a-time operation.
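In outline, the pseudo-DMA fast path Yost describes did no more than this per character (a C rendering of the idea; the real handler was a few hand-coded 68k instructions entered as hex, and these names are illustrative):

    void rx_intr(void)
    {
        *cur->c_ptr++ = UART_DATA;          /* one byte into the cblock */
        if (cur->c_ptr == cur->c_end) {     /* cblock exhausted? */
            cur = next_cblock();            /* grab the next from the queue */
            ring_doorbell();                /* low-priority "software
                                               interrupt" does the heavy
                                               lifting later */
        }
    }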
Doug McIlroy asks about the Rosetta Stone table relating TOPS-20
commands to Unix commands in my ``Unix for TOPS-20 Users'' document:
>> I was puzzled, though, by the Unix command "leave", which is
>> not in the manuals I edited, nor is it in Linux. What does
>> (or did) it do?
I reread that 1987 document this morning, and found a few small
mistakes, but on the whole, I still agree with what I wrote 34 years
ago, and I'm pleased that almost everything there about Unix still
applies today.
I confess that I had forgotten about the TOPS-20 ALERT command and its
Unix equivalent, leave. As Doug noted, leave is not in Linux systems,
but it still exists in the BSD world, in DragonFlyBSD, FreeBSD,
NetBSD, OpenBSD, and their many derivatives. From a bleeding-edge
FreeBSD 14 system, I find
% man leave
LEAVE(1) FreeBSD General Commands Manual LEAVE(1)
NAME
leave – remind you when you have to leave
SYNOPSIS
leave [[+]hhmm]
DESCRIPTION
The leave utility waits until the specified time, then reminds you that
you have to leave. You are reminded 5 minutes and 1 minute before the
actual time, at the time, and every minute thereafter. When you log off,
leave exits just before it would have printed the next message.
...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Clem Cole asks:
>> Did you know that before PCC the 'second' C compiler was a PDP-10
>> target Alan Snyder did for his MIT Thesis?
>> [https://github.com/PDP-10/Snyder-C-compiler]
I was unaware of that compiler until sometime in the 21st Century,
long after our PDP-10 was retired on 31-Oct-1990.
The site
https://github.com/PDP-10/Snyder-C-compiler/tree/master/tops20
supplies a list of some of Snyder's files, but they don't match
anything in our TOPS-20 archives of almost 180,000 files.
I then looked into our 1980s-vintage pcc source tree and compared
it with a snapshot of the current pcc source code taken three
weeks ago. The latter has support for these architectures
aarch64 hppa m16c mips64 pdp11 sparc64
amd64 i386 m68k nova pdp7 superh
arm i86 mips pdp10 powerpc vax
and the pdp10 directory contains these files:
CVS README code.c local.c local2.c macdefs.h order.c table.c
All 5 of those *.c files are present in our TOPS-20 archives. I then
grepped those archives for familiar strings:
% find . -name '*.[ch]' | sort | \
xargs egrep -n -i 'scj|feldman|johnson|snyder|bell|at[&]t|mit|m.i.t.'
./code.c:8: * Based on Steve Johnson's pdp-11 version
./code2.c:19: * Based on Steve Johnson's pdp-11 version
./cpp.c:1678: stsym("TOPS20"); /* for compatibility with Snyder */
./local.c:4: * Based on Steve Johnson's pdp-11 version
./local2.c:4: * Based on Steve Johnson's pdp-11 version
./local2.c:209: case 'A': /* emit a label */
./match.c:2: * match.c - based on Steve Johnson's pdp11 version
./optim.c:318: * Turn 'em into regular PCONV's
./order.c:5: * Based on Steve Johnson's pdp-11 version
./pftn.c:967: * fill out previous word, to permit pointer
./pftn.c:1458: register commflag = 0; /* flag for labelled common declarations */
./pftn2.c:1011: * fill out previous word, to permit pointer
./pftn2.c:1502: register commflag = 0; /* flag for labelled common declarations */
./reader.c:632: p2->op = NOASG p2->op; /* this was omitted in 11 & /6 !! */
./table.c:128: " movei A1,1\nZN", /* ZN = emit branch */
./xdefs.c:13: * symbol table maintainence
Thus, I'm confident that Jay's work was based on Steve Johnson's
compiler, rather than Alan Snyder's.
Norman Wilson asks:
>> ...
>> How did that C implementation handle ASCII text on the DEC-10?
>> Were it a from-scratch UNIX port it might make sense to store
>> four eight- or nine-bit bytes to a word, but if (as I sense it
>> was) it was C running on TOPS-10 or TOPS-20, it would have had
>> to work comfortably with DEC's convention of five 7-bit characters
>> (plus a spare bit used by some programs as a flag).
>> ...
Our pcc compiler treated char* as a pointer to 7-bit ASCII strings,
stored in the top 35 bits of a word, with the low-order bit normally
zero; a 1-bit there meant that the word contained a 5-digit line
number that some compilers and editors would report. Of course, that
low-order non-character bit meant that memset(), memcpy(), and
memmove() had somewhat dicey semantics, but I no longer recall their
specs.
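For readers who never used a 36-bit machine: the five characters sit
left-justified, seven bits each, with bit 0 left over. A sketch of
unpacking, holding the 36-bit word in the low bits of a uint64_t:

    #include <stdint.h>

    /* Extract the five 7-bit characters from a PDP-10 word.
     * Char 0 occupies bits 35..29, char 4 bits 7..1; bit 0 is
     * the flag/line-number bit described above. */
    void unpack(uint64_t w, char out[5])
    {
        for (int i = 0; i < 5; i++)
            out[i] = (w >> (29 - 7 * i)) & 0x7F;
    }

    int line_flag(uint64_t w) { return (int)(w & 1); }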
kcc later gave us access to the PDP-10's 1- to 36-bit byte
instructions.
For text processing, the 5 x 7b + 1b format matched the conventions for all
other programming languages on the PDP-10. When it came time to
implement NFS, and exchange files and data with 32-bit-word machines,
we needed the ability to handle files of 4 x 8b + 4b and 9 x 8b (in
two 36-bit words), and kcc provided that.
The one's-complement 36-bit Univac 1108 machines chose instead to
store text in a 4 x 9b format, because that architecture had
quarter-word load/store instructions, but not the general variable
byte instructions of the PDP-10. Our campus had an 1108 at the
University of Utah Computer Center, but I chose to avoid it, because
it was run in batch mode with punched cards, and never got networking.
By contrast, our TOPS-20, BSD, RSX-11, SunOS, and VMS systems all had
interactive serial-line terminals, and there was no punched card
support at all.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
>> -r is weird because it enables backwards reading, but only as
>> limited by count. Better would be a program, say revfile, that simply
>> reads backwards by lines. Then tail p has an elegant implementation:
>> revfile p | head | revfile
> tail -n can be smarter in that it can simply read the last K bytes
> and see if there are n lines. If not, it can read back further.
> revfile would have to read the whole file, which could be a lot
> more than n lines! tail -n < /dev/tty may never terminate but it
> will use a small finite amount of memory.
Revfile would work the same way. When head has seen enough
and terminates, revfile will get SIGPIPE and stop. I agree that,
depending on scheduling and buffer management, revfile might
read more than tail -n, but it wouldn't read the whole of a
humongous file.
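A sketch of such a revfile for seekable files, reading backwards one
block at a time (for simplicity it assumes no line is longer than one
block):

    #include <stdio.h>
    #include <string.h>

    #define BLK 8192

    int main(int argc, char *argv[])
    {
        static char blk[BLK], carry[2 * BLK];  /* partial line between blocks */
        FILE *f;
        long pos;
        int clen = 0;

        if (argc != 2 || (f = fopen(argv[1], "r")) == NULL)
            return 1;
        fseek(f, 0L, SEEK_END);
        pos = ftell(f);
        if (pos > 0) {                   /* treat a final \n as a terminator */
            fseek(f, pos - 1, SEEK_SET);
            if (fgetc(f) == '\n')
                pos--;
        }
        while (pos > 0) {
            int n = pos < BLK ? (int)pos : BLK, end;
            pos -= n;
            fseek(f, pos, SEEK_SET);
            fread(blk, 1, n, f);
            end = n;
            for (int i = n - 1; i >= 0; i--)
                if (blk[i] == '\n') {    /* a complete line ends at 'end' */
                    fwrite(blk + i + 1, 1, end - i - 1, stdout);
                    fwrite(carry, 1, clen, stdout);
                    putchar('\n');
                    clen = 0;
                    end = i;
                }
            memmove(carry + end, carry, clen);  /* prepend the leftover */
            memcpy(carry, blk, end);
            clen += end;
        }
        if (clen) {                      /* the first line of the file */
            fwrite(carry, 1, clen, stdout);
            putchar('\n');
        }
        return 0;
    }

When a downstream head exits, the next write fails with SIGPIPE and
revfile dies having read only the tail of the file.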
Doug
> Arguably ancient PDP-10 operating systems like ITS, WAITS, TENEX were
> somewhat "open" and "free", but it's not a clear cut case.
The open source movement was a revival of the old days of SHARE and other
user groups.
SAP, the SHARE assembly program for the IBM 704, was freely available--with
source code--to all members of the SHARE user group. I am not aware of any
restrictions on redistribution.
Other more specialized programs were also freely available through SHARE. In
particular, Fortran formatted IO was adopted directly from a SHARE program
written by Roy Nutt (who also wrote SAP and helped write Fortran I).
Bell Labs freely distributed the BESYS operating system for the IBM 704.
At the time (1958) no operating system was available from IBM.
IBM provided source code for the Fortran II compiler. In the
fashion of the time, I spent a memorable all-night session with
that code at hand, finding and fixing a bizarre bug (a computed GOTO
bombed if the number of branches was 74 mod 75) with a bizarre cause
(the code changed the index-register field in certain instructions on the
fly--inconsistently). And there was no operating system to help, because
BESYS swapped itself out to make room for the compiler.
Doug
This somewhat stale note was sent some time ago, but was ignored
because it was sent from an unregistered email address.
> And if the Unix patriarchs were perhaps mistaken about how useful
> "head" might be and whether or not it should have been considered
> verboten.
Point well taken.
I don't know which of head(1) and sed(1) came first. They appeared in
different places at more or less the same time. We in Research
declined to adopt head because we already knew the idiom "sed 10q".
However one shouldn't have to do related operations in unrelated ways.
We finally admitted head in v10.
Head was independently invented by Mike Lesk. It was Lesk's
program that was deemed superfluous.
Head might not have been written if tail didn't exist. But, unlike head,
tail strayed from the tao of "do one thing well". Tail -r and tail -f are
as cringeworthy as cat -v.
-f is a strange feature that effectively turns a regular file into a pipe
with memory by polling for new data. A clean general alternative
might be to provide an open(2) mode that makes reads at the current
file end block if some process has the file open for writing.
-r is weird because it enables backwards reading, but only as
limited by count. Better would be a program, say revfile, that simply
reads backwards by lines. Then tail p has an elegant implementation:
revfile p | head | revfile
Doug
>> -f is a strange feature that effectively turns a regular file into a pipe
>> with memory by polling for new data. A clean general alternative
>> might be to provide an open(2) mode that makes reads at the current
>> file end block if some process has the file open for writing.
> OTOH, this would mean adding more functionality (read: complexity)
> into the kernel, and there has always been a general desire to avoid
> pushing <stuff> into the kernel when it can be done in userspace. Do
> you really think using a blocking read(2) is somehow more superior
> than using select(2) to wait for new data to be appended to the file?
I'm showing my age. tail -f antedated select(2) and was implemented
by alternately sleeping and reading. select(2) indeed overcomes that
clumsiness.
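For the record, the pre-select implementation amounted to the
following (a sketch of the technique, not the original source):

    #include <stdio.h>
    #include <unistd.h>

    /* The old tail -f tactic: read to EOF, sleep, look again. */
    void follow(FILE *f)
    {
        char buf[BUFSIZ];
        size_t n;

        for (;;) {
            while ((n = fread(buf, 1, sizeof buf, f)) > 0)
                fwrite(buf, 1, n, stdout);
            clearerr(f);     /* forget EOF so the next fread tries again */
            fflush(stdout);
            sleep(1);        /* the clumsiness in question */
        }
    }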
> I'll note, with amusement, that -r is one option which is *NOT* in the
> GNU version of tail. I see it in FreeBSD, but this looks like a
> BSD'ism.
-r came from Bell Labs. This reinforces the point that the ancients
had their imperfections.
Doug