All-in-one vs pipelined sorts brought to mind NSA's undeservedly obscure
dataflow language, POGOL, https://doi.org/10.1145/512927.512948 (POPL
1973). In POGOL one wrote programs as collections of routines that
communicated via named files, which the compiler did its best to optimize
away. Often this amounted to loop jamming or to the distributive law for
map over function composition. POGOL could, however, handle general
dataflow programming including feedback loops.
One can imagine a program for pulling the POGOL trick on a shell pipeline.
That could accomplish--at negligible cost--the conversion of a cheap demo
into a genuine candidate for intensive production use.
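For map-like stages the optimization is just the distributive law in
shell clothing: two per-record passes fuse into one. A toy sketch of the
kind of fusion such a tool might perform (file name and stages invented
for illustration; field splitting differs slightly between the two forms):

    # Three processes, two pipes, three passes over the data:
    grep -v '^#' access.log | cut -d' ' -f1 | tr A-Z a-z

    # A "jammed" equivalent: one process, one pass, no intermediate streams.
    awk '!/^#/ { print tolower($1) }' access.log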
This consideration spurs another thought. Despite Unix's claim to build
tools to make tools, only a relatively narrow range of higher-order tools
that take programs as data ever arose. After the bootstrapping B, there
were a number of compilers, most notably C, plus f77, bc, ratfor, and
struct. A slight variant on the idea of compiling was the suite of troff
preprocessors.
The shell also manipulates programs by composing them into larger programs.
Aside from such examples, only one other category of higher-order Unix
program comes to mind: Peter Weinberger's lcomp for instrumenting C
programs with instruction counts.
One offshoot of Unix was Gerard Holzmann's set of tools for extracting
model-checker models from C programs. These saw use at Indian Hill and most
notably at JPL, but never appeared among mainstream Unix offerings. Similar
tools exist in-house at Microsoft and elsewhere. But generally speaking we
have very few kinds of programs that manipulate programs.
What are the prospects for computer science advancing to a stage where
higher-level programs become commonplace? What might be in one's standard
vocabulary of functions that operate on programs?
Doug
I chanced upon a brochure describing the Perkin-Elmer Series 3200
(previously Interdata, later Concurrent Computer Corporation) Sort/Merge
II utility [1]. It is instructive to compare its design against that of
the contemporary Unix sort(1) program [2].
- Sort/Merge II appears to be marketed as a separate product (P/N
S90-408), whereas sort(1) was, and is, an integral part of Unix, used
throughout the system.
- Sort/Merge II provides interactive and batch command input modes;
sort(1) relies on the shell to support both usages.
- Sort/Merge II appears to be able to also sort binary files; sort(1)
can only handle text.
- Sort/Merge II can recover from run-time errors by interactively
prompting for user corrections and additional files. In Unix this is
delegated to shell scripts.
- Sort/Merge II has built-in support for tape handling and blocking;
sort(1) relies on pipes from/to dd(1) for this.
- Sort/Merge II supports user-coded decision subroutines written in
FORTRAN, COBOL, or CAL. Sort(1) doesn't have such support to this day.
One could construct a synthetic key with awk(1) if needed (a sketch
follows this list).
- Sort/Merge II can automatically "allocate" its temporary file. For
sort(1) file allocation is handled by the Unix kernel.
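As a made-up illustration of those last few delegations (tape device,
blocking factor, and choice of key are invented, not taken from either
manual), deblocking, a synthetic sort key, and key removal compose from
stock tools rather than being built into sort(1) itself:

    # Deblock 80-byte card-image records from tape into lines, prefix
    # each with a synthetic numeric key (field 3 times field 4), sort
    # on that key, then strip the key back off.
    dd if=/dev/rmt0 bs=8000 conv=unblock cbs=80 |
    awk '{ print $3 * $4 "\t" $0 }' |
    sort -n |
    cut -f2- > sorted.out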
To me this list is a real-life demonstration of the difference between
the approach, prevalent at the time, of thoughtlessly agglomerating
features into a monolith, and Unix's careful separation of concerns and
modularization via small tools. The same contrast appears in a more
contrived setting in J. Bentley's CACM Programming Pearls column, where
Doug McIlroy critiques a unique-word-counting literate program written
by Don Knuth [3]. (I slightly suspect that the initial program
specification was a trap set up for Knuth.)
I also think that the design of Perkin-Elmer's Sort/Merge II shows the
influence of salespeople forcing developers to tack on whatever features
were required by important customers. Maybe the clean design of Unix
owes a lot to AT&T's operation under the 1956 consent decree that
prevented it from entering the computer market. This may have shielded
the system's design from unhealthy market pressures during its critical
gestation years.
[1] https://bitsavers.computerhistory.org/pdf/interdata/32bit/brochures/Sort_Me…
[2] https://s3.amazonaws.com/plan9-bell-labs/7thEdMan/v7vol1.pdf#page=166
[3] https://doi.org/10.1145/5948.315654
Diomidis - https://www.spinellis.gr
> To me this list is a real-life demonstration of the difference between
> the approach, prevalent at the time, of thoughtlessly agglomerating
> features into a monolith, and Unix's careful separation of concerns and
> modularization via small tools. The same contrast appears in a more
> contrived setting in J. Bentley's CACM Programming Pearls column, where
> Doug McIlroy critiques a unique-word-counting literate program written
> by Don Knuth [3]. (I slightly suspect that the initial program
> specification was a trap set up for Knuth.)
It wasn't a setup. Although Jon's introduction seems to imply that he had
invited both Don and me to participate, I actually was moved to write the
critique when I proofread the 2-author column, as I did for many of Jon's
Programming Pearls. That led to the 3-author arrangement. Knuth and
I are still friends; he even reprinted the critique. It is also memorably
depicted at https://comic.browserling.com/tag/douglas-mcilroy.
Doug
> I often repeat a throwaway sentence that UUCP was Lesk,
> building a bug fix distribution mechanism.
> Am I completely wrong? I am sure Mike said this to me mid 80s.
That was an important motivating factor, but Mike also had an
unerring anticipatory sense of public "need". Thus his programs
spread like wildfire despite their bugs. UUCP itself is the premier
example. Its popularity impelled its inclusion in v7 despite its
woeful disregard for security.
> Does anyone have [Robert Morris's UUCP CSTR]? Doug?
Not I.
Doug
Robert's uucp was in use in the Research world when I arrived
in late summer of 1984. It had an interesting and sensible
structure; in particular uucico was split into two programs,
ci and co.
One of the first things I was asked to do when I got there was
to get Honey Danber working as a replacement. I don't remember
why that was preferred; possibly just because Robert was a
summer student, not a full-fledged member of the lab, and we
didn't want something as important to us as uucp to rely on
orphaned code.
Honey Danber was in place by the time we made the V8 tape,
toward the end of 1984.
Norman Wilson
Toronto ON
The sound situation in the UNIX world to me has always felt particularly
fragmentary, with OSS offering some glimmer of hope but faltering under the
long shadow of ALSA, and with a hodgepodge of PCM and other low-level
interfaces littered about other offerings.
Given AT&T's involvement with the development of just about everything
"sound over wires" for decades by the time UNIX comes along, one would suspect
AT&T would be quite invested in standardizing interfaces for computers
interacting with audio signals on copper wire. Indeed much of the ESS R&D was
taking in analog telephone signals, digitizing them, and then acting on those
digitized results before converting back to analog to send to the other end.
Where this has me curious is whether there were any efforts in Bell Labs,
prior to other industry players having their hands on the steering wheel, to
establish an abstract UNIX interface pattern for interacting with streams of
converted audio signal. Of course modern formats didn't exist, but the
general idea of PCM was well established, and concepts like sampling rates,
bit depths, etc. could be used in calculations to interpret and manipulate
digitized audio streams.
Any recollections? Was the landscape of signal processing solutions just so
particular that trying to create a centralized interface didn't make sense at
the time? Or was it a simple matter of priorities, with things like language
development and system design taking center stage, leaving a dearth of resources
to direct towards these sorts of matters? Was there ever a chance of seeing,
say, the 5ESS handling of PCM extended out to non-switching applications, or
was that stuff firmly siloed over in the switching groups, having no influence
on signal processing outside?
- Matt G.