> I often repeat a throwaway sentence that UUCP was Lesk
> building a bug-fix distribution mechanism.
> Am I completely wrong? I am sure Mike said this to me in the mid-80s.
That was an important motivating factor, but Mike also had an
unerring anticipatory sense of public "need". Thus his programs
spread like wildfire despite their bugs. UUCP itself is the premier
example. Its popularity impelled its inclusion in v7 despite its
woeful disregard for security.
> Does anyone have [Robert Morris's UUCP CSTR]? Doug?
Not I.
Doug
Robert's uucp was in use in the Research world when I arrived
in late summer of 1984. It had an interesting and sensible
structure; in particular uucico was split into two programs,
ci and co.
One of the first things I was asked to do when I got there was
to get Honey Danber working as a replacement. I don't remember
why that was preferred; possibly just because Robert was a
summer student, not a full-fledged member of the lab, and we
didn't want something as important to us as uucp to rely on
orphaned code.
Honey Danber was in place by the time we made the V8 tape,
toward the end of 1984.
Norman Wilson
Toronto ON
The sound situation in the UNIX world has always felt particularly
fragmentary to me, with OSS offering some glimmer of hope but faltering
under the long shadow of ALSA, and a hodgepodge of PCM and other
low-level interfaces littered about other offerings.
Given AT&T's involvement with the development of just about everything
"sound over wires" for decades by the time UNIX came along, one would
suspect AT&T would be quite invested in standardizing interfaces for
computers interacting with audio signals on copper wire. Indeed, much of
the ESS R&D involved taking in analog telephone signals, digitizing them,
acting on the digitized results, and then converting back to analog to
send to the other end.
What has me curious is whether there were any efforts in Bell Labs, prior
to other industry players having their hands on the steering wheel, to
establish an abstract UNIX interface pattern for interacting with streams
of converted audio signal. Of course modern formats didn't exist, but the
general idea of PCM was well established, and concepts like sampling
rates and bit depths could be used in calculations to interpret and
manipulate digitized audio streams.
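(To make that concrete, here is a minimal C sketch of the sort of
arithmetic I mean; the parameters are only illustrative, 8 kHz 8-bit
mono being the classic telephony rate.)

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative parameters: 8 kHz, 8-bit, mono; 44100/16/2
         * would be CD audio. */
        unsigned rate  = 8000;   /* sampling rate, samples/sec */
        unsigned bits  = 8;      /* bit depth per sample */
        unsigned chans = 1;      /* channel count */

        /* Byte rate of the raw PCM stream. */
        unsigned byte_rate = rate * (bits / 8) * chans;
        printf("byte rate: %u bytes/sec\n", byte_rate);

        /* How much audio does a 64 KB buffer hold? */
        unsigned bufsize = 65536;
        printf("64 KB buffer: %.2f seconds\n",
               (double)bufsize / byte_rate);
        return 0;
    }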
Any recollections? Was the landscape of signal processing solutions just so
particular that trying to create a centralized interface didn't make sense at
the time? Or was it a simple matter of priorities, with things like language
development and system design taking center stage, leaving a dearth of resources
to direct towards these sorts of matters? Was there ever a chance of seeing,
say, the 5ESS handling of PCM extended out to non-switching applications,
or was that stuff firmly siloed in the switching groups, with no
influence on signal processing outside them?
- Matt G.
I mentioned a few weeks ago that I was writing an invited paper for the
upcoming 50th anniversary of the first issue of IEEE Transactions on
Software Engineering.
The paper has now been accepted for publication and here's a preprint
version of it:
https://www.mrochkind.com/mrochkind/docs/SCCSretro2.pdf
Marc
> Was the landscape of signal processing solutions just so
> particular that trying to create a centralized interface didn't make
> sense at the time? Or was it a simple matter of priorities, with things
> like language development and system design taking center stage, leaving
> a dearth of resources to direct towards these sorts of matters? Was
> there ever a chance of seeing, say, the 5ESS handling of PCM, extended
> out to non-switching applications,
In the early days of Unix there were intimate ties between CS Research and
Visual and Acoustic Research. V&A were Bell Labs' pioneer minicomputer
users because they needed interactive access to graphics and audio, which
would have been prohibitively expensive on the Labs' pre-timesharing
mainframes. Also they generally had EE backgrounds, so were comfortable
working hands-on with hardware, whereas CS had been largely spun off from
the math department.
Ed David, who led Bell Labs into Multics, without which Unix might not have
happened, had transferred from V&A to CS. So had Vic Vyssotsky and Elliot
Pinson (Dennis's department head and coauthor with me of the introduction
to the 1978 BSTJ Unix issue). John Kelly, a brilliant transferee who died
all too young pre-Unix, had collaborated with Vic on BLODI, the first
dataflow language, which took digital signal processing off breadboards and
into computers. One central member of the Unix lab, Lee McMahon, never left
V&A.
The PDP-7 of Unix v0 was a hand-me-down from Pinson's time in V&A. And the
PDP-11 of v1 was supported by a year-end fund surplus from there.
People came from V&A to CS because their interests had drifted from signal
processing to computing per se. With hindsight, one can see that CS
recruiting--even when it drew on engineering or physics
talent--concentrated on similarly motivated people. There was dabbling in
acoustics, such as my "speak" text-to-speech program. And there were
workers dedicated to a few specialties, such as Henry Baird in optical
character recognition. But unlike text processing, say, these fields never
reached a critical mass of support that might have stimulated a wider array
of I/O drivers or full toolkits to use them.
Meanwhile, in V&A Research linguists adopted Unix, but most others
continued to roll their own one-off platforms. It's interesting to
speculate whether the lack of audio interfaces in Unix was a cause or a
result of this do-it-yourself impulse.
Doug

In the lost-in-time department, my group at Digital's Cambridge Research
Lab in 1993 did an audio interface patterned after the X Window System.
Paper in the Summer USENIX:
https://www.usenix.org/legacy/publications/library/proceedings/cinci93/gett…
For extra fun, the lab director of CRL at the time was Vic Vyssotsky.
But there must have been some Bell work, because around 1983 (?) when
I was doing Etherphone at PARC I visited John DeTreville at Holmdel.
He was building a voice-over-Ethernet system as well.
-Larry
> On Jan 6, 2025, at 4:51 PM, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> segaloco via TUHS wrote:
> |The sound situation in the UNIX world to me has always felt particularly
> |fragmentary, with OSS offering some glimmer of hope but faltering under
> |the long shadow of ALSA, with a hodge podge of PCM and other low level
> |interfaces littered about other offerings.
>
> Oh, but *how* great it was when FreeBSD came out with those
> "virtual sound devices", in 4.7 or 4.9 I think it was. I.e., instead
> of one blocking device, one could open dev.1 and dev.2 and it was
> multiplexed in the kernel. It did some format conversion in the
> kernel alongside this.
>
> It was *fantastic*! I had a recording program sitting on
> a Cyrix 166+ and it took me ~1.5 percent of (single) CPU to record
> our then still great Hessenradio HR3 for long hours (Clubnight
> with world-famous DJs, Chill with great sets on Sunday
> mornings), and oh yes HR2 with the wonderful Mr. Paul Bartholomäi
> in "Notenschlüssel" (classical music), and the fantastic "Voyager"
> hour with Robert Lug on Sunday evening. It could not have been better.
> I could code and compile and there was no stuttering alongside.
> 1.5 percent of CPU, honestly!
>
> I say this because FreeBSD replaced that very code last year,
> if I recall correctly. It now all scales dynamically, if I read
> the patches that flew by correctly. (So it may be even better as of
> now, but back then, over twenty years ago, it blew my mind. And the
> solution was so simple, you know. The number of concurrent
> devices was a compile-time constant if I recall correctly, four by
> default.)
>
> I also say this because today I am lucky I can use ALSA on Linux,
> and apulse for the Firefox I have to use (and do use, too
> .. I also browse the internet in such a monster, and at least in
> parts still like that). I always hated those server solutions,
> where those masses of audio data flow through several context
> switches. What for? I never understood. Someone convinced me to
> try that PulseAudio server, but I think it was about 20 percent of
> CPU for a simple stream, with a terrible GUI, and that on
> an i5-8250U CPU @ 1.60GHz with up to 3.4 GHz (four cores; the four
> HT are always disabled). 20 percent!!
>
> ...
> |Any recollections?[.]
>
> Sorry, the above is a total aside, but for me it is still
> such a tremendous thing that someone did, and for free. Whoever
> it was (I actually never tried to check, even though I have tracked
> their git for so many years), *thank you*!
> (And that includes the simple usual format conversions between
> those 22050/44100 etc. rates. Just like that -- open a device and
> read it, no thousands of callbacks, nothing. And 1.5 percent CPU.
> Maybe it is not good/exact enough for studio-level audio editing.
> But I still have lots of those recordings, except that the "Balkan
> piss box" chill somehow disappeared. (Sorry Pedja, should you read
> this.))
>
> --steffen
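(For readers who never met it, the open-and-read style steffen
describes really is this small. A minimal sketch, assuming an
OSS-style /dev/dsp and <sys/soundcard.h>; device path and parameters
are illustrative, and error handling is mostly omitted.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open one device node; on the FreeBSD scheme described
         * above the kernel multiplexes and format-converts, so
         * several programs can do this concurrently. */
        int fd = open("/dev/dsp", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Ask for 16-bit little-endian stereo at 44100 Hz; the
         * driver may adjust values it cannot honor. */
        int fmt = AFMT_S16_LE, chans = 2, rate = 44100;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &chans);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        /* Record: just read() raw PCM and pipe it to stdout.
         * No callbacks, no server in between. */
        char buf[8192];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, (size_t)n);
        close(fd);
        return 0;
    }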
Do I remember correctly that the early Linux exec was a userland exec?
I think this came up before; I just can't recall.
I just learned about:
https://github.com/hardenedlinux/userland-exec
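(For context, and not a description of that repo's internals: a
userland exec replaces the current process image without execve(2) by
mapping the new binary's segments, rebuilding the stack with
argc/argv/auxv, and jumping to the entry point, all from user space.
The toy C sketch below shows only the map-and-jump core, using a
hand-assembled x86-64 code blob instead of a real ELF; hardened
kernels may refuse the writable+executable mapping.)

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* x86-64 machine code for: mov eax, 42; ret */
        static const unsigned char code[] = {
            0xb8, 0x2a, 0x00, 0x00, 0x00,   /* mov eax, 42 */
            0xc3                            /* ret */
        };

        /* Map an anonymous page, as a loader would map the PT_LOAD
         * segments of the incoming image. */
        void *mem = mmap(NULL, 4096,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }
        memcpy(mem, code, sizeof code);

        /* Transfer control to the mapped code: the essence of
         * exec-ing a new image without asking the kernel. */
        int (*entry)(void) = (int (*)(void))mem;
        printf("new image returned %d\n", entry());
        return 0;
    }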