Forgive me if this is a duplicate. My original message appears to have bounced.
On 1/16/19, Kevin Bowling <kevin.bowling(a)kev009.com> wrote:
> I’ve heard and personally seen a lot of technical arrogance and
> incompetence out of the Masshole area. Was DEC afflicted? In
> “Showstopper” Cutler fled to the west coast to get away from this kind of
> thing.
>
Having worked at DEC from February 1980 until after the Compaq
takeover, I would say that DEC may have exhibited technical arrogance
from time to time, but certainly never technical incompetence. DEC's
downfall was a total lack of skill at marketing. Ken Olsen believed
firmly in a "build it and they will come" philosophy. Contrast this
with AT&T's brilliant "Unix - consider it a standard" ad campaign.
DEC also suffered from organizational paralysis. KO believed in
decisions by consensus. This is fine if you can reach a consensus,
but if you can't it leads to perpetually revisiting decisions and to
obstructionist behavior. There was a saying in DEC engineering that
any decision worth making was worth making 10 times, as opposed to
the "lead, follow, or get out of the way" philosophy at Sun or
Intel's concept of "disagree and commit". DEC did move towards a
"designated responsible individual" approach where a single person got
to make the ultimate decision, but the old consensus approach never
really died.
Dave Cutler was the epitome of arrogance. On the technical side, he
got away with it because his way (which he considered to be the only
way) was usually at least good enough for Version 1, if not the best
design. Cutler excelled in getting V1 of something out the door. He
never stayed around for V2 of anything. He had a tendency to leave
messes behind him. A Cutler product reminded me of the intro to "The
Peabodys" segment of Rocky & Bullwinkle. A big elaborate procession,
followed by someone cleaning up the mess with a broom.
Cutler believed in a "my way or the highway" approach to software
design. His move to the west coast was to place himself far enough
away that those who wanted to revisit all his decisions would have a
tough time doing so.
On the personal side, he went out of his way to be nasty to people, as
pointed out elsewhere in this thread. Although he was admired
technically, nobody liked him.
-Paul W.
Tsort was a direct reference to Knuth's recognition and
christening of topological sort as a worthy software component.
This is a typical example of the interweaving of R and D
which characterized the culture of Bell Labs. Builders
and thinkers were acutely aware of each other, and often
were two faces of one person. Grander examples may be
seen in the roles that automata theory and formal languages
played in Unix. (Alas, these are the subjects of projected
Knuthian volumes that are still over the horizon.)
Doug
Norman is right. The Seattle museum has a 5620. Having seen "5620" in
the subject line, I completely overlooked the operative words
"real blit or jerq" in Seth's posting.
Doug
Seth Morabito:
I'd love to see a real Blit or jerq in person some day, but I don't even
know where I'd find one (it looks like even the Computer History Museum
in Mountain View, CA doesn't have a 68K Blit -- it only has a DMD 5620)
Doug McIlroy:
The Living Computer Museum in Seattle has one. And like most things
there, you can play with it.
===
It's a couple of years since I was last in Seattle, but
I remember only a DMD 5620 (aka Jerq); no 68000-based Blit.
Though of course they are always getting new acquisitions,
and have far more in storage than on display. (On one of
my visits I was lucky enough to get a tour of the upper
floor where things are stored and worked on.)
Whether they have a Blit or only a Jerq, it's a wonderful
place, and I expect any member of this list would enjoy a
visit.
I plan to drop in again this July, when I'm in town for
USENIX ATC (in suburban Renton).
Norman Wilson
Toronto ON
> I'd love to see a real Blit or jerq in person some day, but I don't even know where I'd find one
The Living Computer Museum in Seattle has one. And like most things
there, you can play with it.
Doug
We've been recovering a 1980s programming language implemented using a
mix of Pascal and C that ran on 4.1 BSD on VAX.
The Makefile, distributed to some 20+ sites, included these lines for
the C compiler.
CC= occ
CFLAGS= -g
It seems there was a (common?) practice of keeping around the old C
compiler when updating a BSD system and occ was used to reference it.
Anyone care to comment on this observation? Was it specific to
BSD-land? How was the aliasing effected? A side-by-side copy of the
compiler pieces? As of 4.1BSD the C compiler components were in /lib
(Pascal, though, was in /usr/lib).
# ls -l /lib
total 467
-rwxr-xr-x 1 root 25600 Jul 9 1981 c2
-rwxr-xr-x 1 root 89088 Jul 9 1981 ccom
-rwxr-xr-x 1 root 19456 Jul 9 1981 cpp
-rwxr-xr-x 1 root 199 Mar 15 1981 crt0.o
-rwxr-xr-x 1 root 40960 Jul 9 1981 f1
-rwxr-xr-x 1 root 62138 Jul 9 1981 libc.a
-rwxr-xr-x 1 root 582 Mar 15 1981 mcrt0.o
I'm still happily experimenting with my combination of a V6 kernel with the 1981 tcp/ip stack from BBN, for example figuring out how one would write something like 'ping' using that API. That brought me to pondering the origins of the 'alarm()' sys call and how some things were done on the Spider network.
These are my observations:
1. First of all: I understand that early Unix version numbers and dates mostly refer to the manual editions, and that core users had more frequent snapshots of a constantly evolving code base.
2. If I read the TUHS archive correctly, alarm() apparently did not exist in the original V6 edition of mid-1975. On the same basis, it was definitely there by the time of the V7 edition of early '79 (with sleep() removed) - so alarm() would seem to have appeared somewhere in the '75-'78 time frame.
3. The network enhanced NCP Unix versions in the TUHS archive have alarm() appear next to sleep(). Although the surviving tapes date from '79, it would seem to suggest that alarm() may have originated in the earlier part of the '75-'78 time frame.
4. The Spider network file store program 'nfs' (source here: http://chiselapp.com/user/pnr/repository/Spider/dir?mtime=0&type=flat&udc=1…) uses idioms like the below to avoid getting hung on a dead server/network:
signal(14,timeout); alarm(30);
if((read(fn,rply,100)) < 0) trouble();
alarm(0);
The 'nfs' program certainly was available in the 5th edition, maybe even in the 4th edition (the surviving 4th edition source code includes a Spider device driver). However, the surviving source for 'nfs' is from 1979 as well, so it may include later additions to the original design.
5. Replacing sleep() with alarm() and a user space library routine seems to have happened only some time after alarm() appeared, so it would seem that this was an optimization that alarm() enabled, and not its raison d'être.
So here are some questions that the old hands may be able to shed some light on:
- When/where did alarm() appear? Was network programming driving its inception?
- Did Spider programs use a precursor to alarm() before that? (similar to V5 including 'snstat' in its libc - a precursor to ioctl).
Paul
> / uses the system sleep call rather than the standard C library
> / sleep (sleep (III)) because there is a critical race in the
> / C library implementation which could result in the process
> / pausing forever. This is a horrible bug in the UNIX process
> / control mechanism.
>
> Quoted without comment from me!
Intriguing comment. I think your v6+ system probably has a lot of
PWB stuff in there. The libc source for sleep() in stock V6 is:
	.globl	_sleep
sleep	= 35.
_sleep:
	mov	r5,-(sp)
	mov	sp,r5
	mov	4(r5),r0
	sys	sleep
	mov	(sp)+,r5
	rts	pc
The PWB version uses something alarm/pause based, but apparently
gets it wrong:
	.globl	_sleep
alarm	= 27.
pause	= 29.
rti	= 2
_sleep:
	mov	r5,-(sp)
	mov	sp,r5
	sys	signal; 14.; 1f
	mov	4(r5),r0
	sys	alarm
	sys	pause
	clr	r0
	sys	alarm
	mov	(sp)+,r5
	rts	pc
1:
	rti
I think the race occurs when an interrupt arrives between the sys alarm
and the sys pause lines, and the handler calls sleep again.
sleep() in the V7 libc is a much more elaborate C routine.
When I first read the race condition comment, I thought the issue would
be like that of write:
_write:
	mov	r5,-(sp)
	mov	sp,r5
	mov	4(r5),r0
	mov	6(r5),0f
	mov	8(r5),0f+2
	sys	0; 9f
	bec	1f
	jmp	cerror
1:
	mov	(sp)+,r5
	rts	pc
	.data
9:
	sys	write; 0:..; ..
This pattern appears in several V6 sys call routines, and would
not be safe in a context with signal-based multi-threading.
> From: Dave Horsfall
> As I dimly recall ... it returns the number of characters in the input
> queue (at that time)
Well, remember, this is the MIT V6 PDP-11 system, which had a tty driver which
had been completely re-written at MIT years before, so you'd really have to
check the MIT V6 sources to see exactly what it did. I suspect they borrowed
the name, and basic semantics, from Berkeley, but everything else - who
knows.
This user telnet is from 1982 (originally), but I was looking at the final
version, which is from 1984; the use of the ioctl was apparently a later
addition. I haven't checked to see what it did originally for reading from the
user's terminal (although the earlier version also used the 'tasking'
package).
Noel
> From: Paul Ruizendaal
> It would not seem terribly complex to add non-blocking i/o capability to
> V6. ... Adding a 'capacity' field to the sgtty interface would not
> have been a big leap either. ...
> Maybe in the 1975-1980 time frame this was not felt to be 'how Unix does
> it'?
This point interested me, so I went and had a look at how the MIT V6+/PWB
TCP/IP did it. I looked at user TELNET, which should be pretty simple (server
would probably be more complicated, due to PTY stuff).
It's totally different - although that's in part because in the MIT system,
TCP is in the user process, along with the application. In the user process,
there's a common non-preemptive 'tasking' package which both the TCP and TELNET
code use. When there are no tasks ready to run, the process uses the sleep()
system call to wait for a fixed, limited quantum (interrupts, i.e. signals,
will usually wake it before the quantum runs out); note this comment:
/ uses the system sleep call rather than the standard C library
/ sleep (sleep (III)) because there is a critical race in the
/ C library implementation which could result in the process
/ pausing forever. This is a horrible bug in the UNIX process
/ control mechanism.
Quoted without comment from me!
There are three TCP tasks: send, receive, and timer. The process receives an
'asynchronous I/O complete' signal when a packet arrives, and that wakes up
the process, and then one of the tasks therein, to do packet processing
(incoming data, acks, etc).
There appears to be a single TELNET task, which reads from the user's
keyboard, and sends data to the network. (It looks like processing of incoming
data is handled in the context of one of the TCP tasks.) Its main loop starts
with this:
	ioctl (0, FIONREAD, &nch);
	if (nch == 0) {
		tk_yield ();
		continue;
	}
	if ((c = getchar()) == EOF) {
so that ioctl() must be checking whether there is any data waiting in the
terminal input buffer (I'm too lazy to go see what FIONREAD does, right at
the moment).
Noel