> From: Larry McVoy
> If you read(2) a page and mmap()ed it and then did a write(2) to the
> page, the mapped page is the same physical memory as the write()ed
> page. Zero coherency issues.
Now I'm confused; read() and write() semantically include a copy operation
(so there are then two copies of that data chunk, and possible consistency
issues between them), and the copied item is not necessarily page-sized (so
you can't ensure consistency between the original+copy by mapping it in). So
when one does a read(file, &buffer, 1), one gets a _copy of just that byte_
in the process' address space (and similar for write()).
Yes, there's no coherency issue between the contents of an mmap()'d page, and
the system's idea of what's in that page of the file, but that's a
_different_ coherency issue.
Or am I confused?
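To make the two kinds of coherency concrete, here's a minimal C sketch
(my illustration of the semantics described above, not code from the
original mails; error handling elided):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1];
        char *map;
        int fd = open("file", O_RDONLY);

        /* read(2): the kernel copies byte 0 into buf; buf is private */
        read(fd, buf, 1);

        /* mmap(2): map[0] is the file's page itself, in the page cache */
        map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);

        /* If byte 0 of the file now changes (say, another process
           write(2)s it), map[0] follows the change, but buf[0] still
           holds the stale private copy. */
        printf("copy: %c  mapped: %c\n", buf[0], map[0]);
        return 0;
    }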
PS:
> From: "Greg A. Woods"
> I now struggle with liking the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Oh, one other thing that SLS breaks, for data files, is the whole Unix 'pipe'
abstraction, which is at the heart of the whole Unix tools paradigm. So no
more 'cmd | wc' et al. And since SLS doesn't have the 'make a copy'
semantics of pipe output, it would be hard to trivially work around it.
Yes, one could build up a similar framework, but each command would have to
specify an input file and an output file (no more 'standard in' and 'out'),
and then the command interpreter would have to i) take command A's output file
and feed it to command B, and ii) delete A's output file when the whole works
was done. Yes, the user could do it manually, but compare:
cmd aaa | wc
and
cmd aaa bbb
wc bbb
rm bbb
If bbb is huge, one might run out of room, but with today's 'light my cigar
with disk blocks' life, not a problem - but it would involve more disk
traffic, as bbb would have to be written out in its entirety, not just have a
small piece kept in the disk cache as with a pipe.
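For contrast, here's roughly what the shell does for 'cmd aaa | wc' - a
minimal sketch (using the hypothetical 'cmd' from the comparison above;
error handling elided), in which the data only ever occupies a small
kernel buffer instead of a complete file bbb on disk:

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];

        pipe(fd);                   /* fd[0]: read end, fd[1]: write end */
        if (fork() == 0) {          /* child 1: cmd aaa */
            dup2(fd[1], 1);         /* stdout -> pipe write end */
            close(fd[0]); close(fd[1]);
            execlp("cmd", "cmd", "aaa", (char *)0);
            _exit(127);
        }
        if (fork() == 0) {          /* child 2: wc */
            dup2(fd[0], 0);         /* stdin <- pipe read end */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", (char *)0);
            _exit(127);
        }
        close(fd[0]); close(fd[1]); /* parent keeps no pipe ends open */
        while (wait(NULL) > 0)      /* reap both children */
            ;
        return 0;
    }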
Noel
> From: "Greg A. Woods"
> the elegance of fork() is incredible!
That's because in PDP-11 Unix, they didn't have the _room_ to create a huge
mess. Try reading the exec() code in V6 or so.
(I'm in a bit of a foul mood today; my laptop sorta locked up when a _single_
Edge browser window process grew to almost _2GB_ in size. Are you effing
kidding me? If I had any idea what today would look like, back when I was 20 -
especially the massive excrement pile that the Internet has turned into - I
never would have gone into computers - cabinetwork, or something, would have
been an infinitely superior career choice.)
> I now struggle with liking the Unix concept of "everything is a
> file" -- especially with respect to actual data files. Multics also got
> it right to use single-level storage -- that's the right abstraction
Well, files a la Unix, instead of the SLS, are OK for a _lot_ of data storage
- pretty much everything except less-common cases like concurrent access to a
shared database, etc.
Where the SLS really shines is _code_ - being able to just do a subroutine
call to interact with something else has incredible bang/buck ratio - although
I concede doing it all securely is hard (although they did make a lot of
progress there).
Noel
>> > It's part of my academic project to work on provable compiler security.
>> > I tried to do it according to the "Reflections on Trusting Trust" by Ken
>> > Thompson, not only to show a compiler Trojan horse but also to prove that
>> > we can discover it.
>>
>> Of course it can be discovered if you look for it. What was impressive about
>> the folks who got Thompson's compiler at PWB is that they found the horse
>> even though they weren't looking for it.
> I had not heard this story. Can you elaborate, please? My impression from having
> read the paper (a long time ago now) is that Ken did the experiment locally only.
Ken did it locally, but a vigilant person at PWB noticed there was an
experimental compiler on the research machine and grabbed it. While they
weren't looking for hidden stuff, they probably were trying to find what
was new in the compiler. Ken may know details about what they had in the
way of source and binary.
Doug
> It's part of my academic project to work on provable compiler security.
> I tried to do it according to the "Reflections on Trusting Trust" by Ken
> Thompson, not only to show a compiler Trojan horse but also to prove that
> we can discover it.
Of course it can be discovered if you look for it. What was impressive about
the folks who got Thompson's compiler at PWB is that they found the horse
even though they weren't looking for it.
Then there was the first time Jim Reeds and I turned on integrity control
in IX, our multilevel-security version of Research Unix. When it reported a
security violation during startup we were sure it was a bug. But no, it had
snagged Tom Duff's virus in the act of replication. It surprised Tom as
much as it did us, because he thought he'd eradicated it.
Doug
This is FYI. No comment on whether it was a good idea or not. :-)
Arnold
> From: Niklas Rosencrantz <niklasro(a)gmail.com>
> Date: Sun, 19 Sep 2021 17:10:24 +0200
> To: tinycc-devel(a)nongnu.org
> Subject: Re: [Tinycc-devel] Can tcc compile itself with Apple M1?
>
>
> Hello!
>
> For demonstration purpose I put my experiment with a compiler backdoor in a
> public repository
> https://github.com/montao/ddc-tinyc/blob/857d927363e9c9aaa713bb20adbe99ded7…
>
> It's part of my academic project to work on provable compiler security.
> I tried to do it according to the "Reflections on Trusting Trust" by Ken
> Thompson, not only to show a compiler Trojan horse but also to prove that
> we can discover it.
> What it does is inject arbitrary code to the next version of the compiler
> and so on.
>
> Regards
One of the things I really appreciate about participating in this community
and studying Unix history (and the history of other systems) is that it
gives one firm intellectual ground from which to evaluate where one is
going: without understanding where one is and where one has been, it's
difficult to assert that one isn't going sideways or completely backwards.
Maybe either of those outcomes is appropriate at times (paradigms shift; we
make mistakes; etc) but generally we want to be moving mostly forward.
The danger when immersing ourselves in history, where we must consider and
appreciate the set of problems that created the evolutionary paths leading
to the systems we are studying, is that our thinking can become calcified
in assuming that those systems continue to meet the needs of the problems
of today. It is therefore always important to reevaluate our base
assumptions in light of either disconfirming evidence or (in our specific
case) changing environments.
To that end, I found Timothy Roscoe's (ETH) joint keynote address at
ATC/OSDI'21 particularly compelling. He argues that what we consider the
"operating system" is only controlling a fraction of a modern computer
these days, and that in many ways our models for what we consider "the
computer" are outdated and incomplete, resulting in systems that are
artificially constrained, insecure, and with separate components that do
not consider each other and therefore frequently conflict. Further,
hardware is ossifying around the need to present a system interface that
can be controlled by something like Linux (used as a proxy more generally
for a Unix-like operating system), simultaneously broadening the divide and
making it ever more entrenched.
Another theme in the presentation is that, to the limited extent
the broader systems research community is actually approaching OS topics at
all, it is focusing almost exclusively on Linux in lieu of new, novel
systems; where non-Linux systems are featured (something like 3 accepted
papers between SOSP and OSDI in the last two years out of $n$), the
described systems are largely Linux-like. Here the presentation reminded me
of Rob Pike's "Systems Software Research is Irrelevant" talk (slides of
which are available in various places, though I know of no recording of
that talk).
Roscoe's contention is that all of this should be seen as both a challenge
and an opportunity for new research into operating systems specifically:
what would it look like to take a holistic approach towards the hardware
when architecting a new system to drive all this hardware? We have new
tools that can make this tractable, so why don't we do it? Part of it is
bias, but part of it is that we've lost sight of the larger picture. My own
question is, have we become entrenched in the world of systems that are
"good enough"?
Things he does NOT mention are system interfaces to userspace software; he
doesn't seem to have any quibbles with, say, the Linux system call
interface, the process model, etc. He's mostly talking about taking into
account the hardware. Also, in fairness, his highlighting a "small" portion
of the system and saying, "that's what the OS drives!" sort of reminds me
of the US voter maps that show vast tracts of largely unpopulated land
colored a certain shade as having voted for a particular candidate, without
normalizing for population (land doesn't vote, people do, though in the US
there is a relationship between how these things impact the overall
election for, say, the presidency).
I'm curious about other people's thoughts on the talk and the overall topic.
https://www.youtube.com/watch?v=36myc8wQhLo
- Dan C.
> Maybe there existed RE notations that were simply copied ...
Ed was derived from Ken's earlier qed. Qed's descendant in Multics was
described in a 1969 GE document:
http://www.bitsavers.org/pdf/honeywell/multics/swenson/6906.multics-condens….
Unfortunately it describes regular expressions only sketchily by
example. However, alternation, symbolized by | with grouping by
parentheses, was supported in qed, whereas alternation was omitted
from ed. The GE document does not mention character classes; an
example shows how to use alternation for the same purpose.
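(Presumably in the spirit of writing, say, (a|b|c) where ed would write
[abc] - my reconstruction of the idea; the GE document's actual example is
not reproduced here.)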
Beginning-of-line is specified by a logical-negation symbol. In
apparent contradiction, the v1 manual says the meanings of [ and ^ are
the same in ed and (an unspecified version of) qed. My guess about the
discrepancies is no better than yours.
(I am amused by the title "condensed guide" for a manual in which each
qed request gets a full page of explanation. It exemplifies how Unix
split from Multics in matters of taste.)
Doug
> From: Roland Huisman
> I have a PDP11/20 and I would love to run an early Unix version on
> it. ... But it seems that the earliest versions of Unix do not need the
> extra memory. Does anyone have RK05 disk images for these early Unix
> versions?
Although the _kernel_ source for V1 is available:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=V1
most of the rest is missing; only 'init' and 'sh' are available. So one would
have to write almost _everything_ else. Some commands are available in PDP-11
assembler in later versions, and might be movable without _too_ much work -
but one would have to start with the assembler itself, which is luckily in
assembler.
If I were trying to run 'UNIX' on an -11/20, I think the only reasonable
choice would be MINI-UNIX:
https://gunkies.org/wiki/MINI-UNIX
It's basically V6 UNIX with all use of the PDP-11 memory management
removed. The advantage of going MINI-UNIX is that almost all V6 source
(applications, drivers, etc) will run on it 'as is'.
It does need ~56KB of main memory. If you don't have that much on the -11/20,
LSX (links in the above) would be an option; it's very similar to MINI-UNIX,
but is trimmed down some, to allow its use on systems with less main memory.
I'm not sure if MINI-UNIX has been run on the -11/20, but it _should_ run
there; it runs on the -11/05, and the only differences between the /20 and the
/05 are that the /20 does not have the RTT instruction (and I just checked,
and MINI-UNIX doesn't use RTT), and SWAB doesn't clear the V condition code
bit. (There are other minor differences, such as OP Rn, (Rn)+ behaving
differently on the -11/20, but that shouldn't be an issue.)
Step 1 would be to get MINI-UNIX running on an -11/20 under a simulator; links
in the above to get you there.
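For the simulator route, something along these lines in SIMH's PDP-11
simulator should be close (a hedged sketch - the disk image name is a
placeholder, and the pages linked above have the authoritative recipe):

    set cpu 11/20
    set cpu 56K
    attach rk0 miniunix.dsk
    boot rk0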
Noel
> From: Clem Cole
> The KS11 MMU for the 11/20 was built by CSS ... I think Noel has done
> more investigation than I have.
I got a very rough description of how it worked, but that was about it.
> I'm not sure if the KS11 code is still there. I did not think so.
No, the KS11 was long gone by later Vn. Also, I think not all of the -11/20
UNIX machines had it, just some.
> The V1 work was for a PDP-7
Actually, there is a PDP-11 version prior to V2, canonically called V1.
The PDP-7 version seems to be called 'PDP-7 UNIX' or 'V0'.
> I'm fairly sure that the RK05, used the RK11-D controller.
Normally, yes. I have the impression that one could finagle RK03's to work on
the RK11-D, and vice versa for RK05's on the RK11-C, but I don't recall the
details. The main difference between the RK11-C and -D (other than the
implementation) was that i) the RK11-C used one line per drive for drive
selection (the -D used binary encoding on 3 lines), and ii) it had the
'maintenance' capability and register (all omitted from the -D).
> The difference seems to have been in drive performance.
Yes, but it wasn't major. They both did 1500 RPM, and since they used the
same pack format, the rotational delay, transfer rate, etc. were identical.
The one performance difference was in seeks; the average was quoted as 70
msec on the RK01, and 50 msec on the RK05.
> Love to see the [KT11-B prints] and if you know where you found them.
They were sold on eBait along with an -11/20 that allegedly had a KT11-B. (It
didn't; it was an RK11-C.) I managed to get them scanned, and they and the
minimal manual are now in Bitsavers. I started working on a Tech Manual for
it, but gave up with it about half-way done.
> I wonder if [our V1 source] had the KS-11 stuff in it.
No; I had that idea a while back and looked carefully: our V1 listings
pre-date the KS11.
> From: Roland Huisman
> There is a KT11B paging option that makes the PDP11/20 an 18-bit
> machine.
Well, it allows 2^18 bytes of main memory, but the address space of the
CPU is still 2^16 bytes.
> It looks a bit like the TC11 DECtape controller.
IIRC, it's two backplanes high; the TC11 is one. So more like the RK11-C...
:-)
> I have no idea how it compares to the later MMU units from the
> software perspective.
Totally different; it's real paging (with page tables stored in main
memory). The KT11-B provides up to 128 pages of 512 bytes each, in both Exec
and User mode. The KT11-C, -D etc are all segmentation, with all the info
stored in registers in the unit.
> I wonder if there is some compatibility with the KT11-B [from the KS11]
I got the impression that the KS11 was more a 'base and bounds' kind
of thing.
Noel
Hello Unix fanatics,
I have a PDP11/20 and I would love to run an early Unix version on it.
I've been working on the hardware for a while and I'm getting more and
more of the pieces back online again. The configuration will be two RK05
hard disks, a TU56H tape drive, a PC11 paper tape reader/punch, and an
RX01 floppy drive. Unfortunately I don't have an MMU or paging option.
But it seems that the earliest versions of Unix do not need the extra
memory.
Does anyone have RK05 disk images for these early Unix versions? That
would be a great help. Otherwise it would be great to have some input
about how to create a bootable Unix pack for this machine.
A bit about the hardware restoration is on the vcfed forum:
https://www.vcfed.org/forum/forum/genres/dec/78961-rk05-disk-drive-ve…
Booting RT11 from RK05: https://youtu.be/k0tiUcRBPQA
TU56H tape drive back online: https://youtu.be/_ZJK3QP9gRA
Thanks in advance!
Roland Huisman
Hoi,
I'm interested in the early design decisions for meta characters
in REs, mainly regarding Ken's RE implementation in ed.
Two questions:
1) Circumflex
As far as I see, the circumflex (^) is the only meta character that
has two different special meanings in REs: First being the
beginning of line anchor and second inverting a character class.
Why was it chosen for the second one? Why not the exclamation mark
in that case? (Sure, C didn't exist by then, but the bang probably
was used to negate in other languages of the time, I think.)
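For reference, the two uses side by side in ed notation:

    /^abc/      as an anchor: matches "abc" only at the beginning of a line
    /[^abc]/    in a class: matches any single character other than a, b, c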
2) Symbol for the end of line anchor
What is the reason that the beginning of line and end of line
anchors are different symbols? Is there a reason why not only one
symbol, say the circumflex, was chosen to represent both? I
currently see no disadvantages of such a design. (Circumflexes
aren't likely to end lines of text, either.)
I would appreciate if you could help me understand these design
decisions better. Maybe there existed RE notations that were simply
copied ...
meillo
You can check the Computer History Museum's holdings on line. If they don't
have the documents already, they would probably like them.
The Living Computer Museum in Seattle had a working blit on display. If
they don't already have the manual, I'm sure they would love to have one.
Alas, their website says they've "suspended all operations for now", a
result of the double whammy of Covid and the death of their principal
angel, Paul Allen.
more garage cleaning this last weekend. i came across some memorabilia
from my time at Bell Labs, including a lovely article titled
The Electrical Properties of Infants
Infants have long been known to grow into adults. Recent experiments
show they are useful in applications such as high power circuit breakers.
Not to mention a lovely article from the “Medical Aspects of Human Sexuality”
(July 1991) titled “Scrotum Self-Repair”.
the two items are
1) “Documents for UNIX Volume 1” by Dolotta, Olson and Petrucelli (Jan 1981)
2) The complete manual for the Blit. this comes in a blue Teletype binder and includes
the full manual (including man pages) and circuit diagrams.
i’d prefer to have them go to some archival place, but send me a private email
if you're interested and we’ll see what we can do.
andrew
I’d be interested in a scan of the Blit schematics, and it seems that a few others might be as well:
https://minnie.tuhs.org/pipermail/tuhs/2019-December/thread.html#19652
https://github.com/aiju/fpga-blit
(for clarity: I'm not 'aiju')
Paul
> Message: 1
> Date: Wed, 8 Sep 2021 01:29:13 -0700
> From: Andrew Hume <andrew(a)humeweb.com>
> To: The Eunuchs Hysterical Society <tuhs(a)tuhs.org>
> Subject: [TUHS] desiderata
> Message-ID: <34E984D3-AD92-402D-9A9C-E84B6362DF77(a)humeweb.com>
> Content-Type: text/plain; charset=utf-8
>
> more garage cleaning this last weekend.
[...]
> 2) The complete manual for the Blit. this comes in a blue Teletype binder and includes
> the full manual (including man pages) and circuit diagrams.
>
> i’d prefer to have them go to some archival place, but send me a private email
> if you're interested and we’ll see what we can do.
>
> andrew
I recently upgraded my machines to fc34. I just did a stock
uncomplicated installation using the defaults and it failed miserably.
Fc34 uses btrfs as the default filesystem so I thought that I'd give it
a try. I was especially interested in the automatic checksumming because
the majority of my storage is large media files and I worry about bit
rot in seldom used files. I have been keeping a separate database of
file hashes and in theory btrfs would make that automatic and transparent.
I have 32T of disk on my system, so it took a long time to convert
everything over. A few weeks after I did this I went to unload my
camera and couldn't because the filesystem that holds my photos was
mounted read-only. WTF? I didn't do that.
After a bit of poking around I discovered that btrfs SILENTLY remounted the
filesystem because it had errors. Sure, it put something in a log file,
but I don't spend all day surfing logs for things that shouldn't be going
wrong. Maybe my expectation that filesystems just work is antiquated.
This was on a brand new 16T drive, so I didn't think that it was worth
the month that it would take to run the badblocks program which doesn't
really scale to modern disk sizes. Besides, SMART said that it was fine.
Although it's been discredited by some, I'm still a believer in "stop and
fsck" policing of disk drives. Unmounted the filesystem and ran fsck to
discover that btrfs had to do its own thing. No idea why; I guess some
think that incompatibility is a good thing.
Ran "btrfs check" which reported errors in the filesystem but was otherwise
useless BECAUSE IT DIDN'T FIX ANYTHING. What good is knowing that the
filesystem has errors if you can't fix them?
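(For anyone following along, the btrfs-specific invocations involved, as
best I understand the man pages, are:

    btrfs check /dev/sdX            # offline check; read-only by default
    btrfs check --repair /dev/sdX   # the "dangerous" option quoted below
    btrfs scrub start /mnt          # online verification of checksums

none of which behaves like a traditional fsck.)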
Near the top of the manual page it says:
Warning
Do not use --repair unless you are advised to do so by a developer
or an experienced user, and then only after having accepted that
no fsck successfully repair all types of filesystem corruption. Eg.
some other software or hardware bugs can fatally damage a volume.
Whoa! I'm sure that operators are standing by, call 1-800-FIX-BTRFS.
Really? Is this a ploy by the developers to form a support business?
Later on, the manual page says:
DANGEROUS OPTIONS
--repair
enable the repair mode and attempt to fix problems where possible
Note there’s a warning and 10 second delay when this option
is run without --force to give users a chance to think twice
before running repair, the warnings in documentation have
shown to be insufficient
Since when is it dangerous to repair a filesystem? That's a new one to me.
Having no option other than not being able to use the disk, I ran btrfs
check with the --repair option. It crashed. Lesson so far is that
trusting my data to an unreliable unrepairable filesystem is not a good
idea. Since this was one of my media disks I just rebuilt it using ext4.
Last week I was working away and tried to write out a file to discover
that /home and /root had become read-only. Charming. Tried rebooting,
but couldn't, since btrfs filesystems aren't checked and repaired at
boot. Plugged
in a flash drive with a live version, managed to successfully run --repair,
and rebooted. Lasted about 15 minutes before flipping back to read only
with the same error.
Time to suck it up and revert. Started a clean reinstall. Got stuck
because it crashed during disk setup with anaconda giving me a completely
useless big python stack trace. Eventually figured out that it was
unable to delete the btrfs filesystem that had errors so it just crashed
instead. Wiped it using dd; nice that some reliable tools still survive.
Finished the installation and am back up and running.
Any of the rest of you have any experiences with btrfs? I'm sure that it
works fine at large companies that can afford a team of disk babysitters.
What benefits does btrfs provide that other filesystem formats such as
ext4 and ZFS don't? Is it just a continuation of the "we have to do
everything ourselves and under no circumstances use anything that came
from the BSD world" mentality?
So what's the future for filesystem repair? Does it look like the past?
Is Ken's original need for dsw going to rise from the dead?
In my limited experience btrfs is a BiTteR FileSystem to swallow.
Or, as Saturday Night Live might put it: And now, linux, starring the
not ready for prime time filesystem. Seems like something that's been
under development for around 15 years should be in better shape.
Jon
Where did the saying
	DEC Diagnostics would run on a beached whale
come from? Anyone remember and/or know?
(It seems to apply to other manufacturers' diagnostics as well, even today.)
Thanks,
Arnold
I hope that this does not start any kind of language flaming and that if
something starts the moderator will shut it down quickly.
Where did the name for abort(3) and SIGABRT come from? I believe it was
derived from the IBM term ABEND, but would like to know one way or the
other.
Clem Cole:
I believe the line was: "running DEC Diagnostics is like kicking a dead
whale down the beach."
As for who said it, I'm not sure, but I think it was someone like Rob
Kolstad or Henry Spencer.
=====
The nearest I can remember encountering before was a somewhat
different quote, attributed to Steve Johnson:
Running TSO is like kicking a dead whale down the beach.
Since scj is on this list, maybe he can confirm that part.
I don't remember hearing it applied to diagnostics. I can
imagine someone saying it, because DEC's hardware diags were
written by hardware people, not software people; they required
a somewhat arcane configuration language, one that made more
sense if you understood how the different pieces of hardware
connected together.
I learned to work with it and found it no less usable than,
say, the clunky verbose command languages of DEC's operating
systems; but I have always preferred to think in low levels.
DEC's diags were far from perfect, but they were a hell of a
lot better than the largely-nonexistent diags available for
modern Intel-architecture systems. I am right now dealing
with a system that has an intermittent fault, that causes
the OS to crash in the middle of some device driver every
so often. Other identical systems don't, so I don't think
it's software. Were it a PDP-11 or a VAX I'd fire up the
diagnostics for a while, and have at least a chance of spotting
the problem; today, memtest is about the only such option,
and a solid week of running memtest didn't shake out anything
(reasonably enough, who says it's a memory problem?).
Give me XXDP, not just the Blue Screen of Death.
Norman Wilson
Toronto ON
Not to get into what is something of a religious war,
but this was the paper that convinced me that silent
data corruption in storage is worth thinking about:
http://www.cs.toronto.edu/~bianca/papers/fast08.pdf
A key point is that the character of the errors they
found suggests it's not just the disks one ought to worry
about, but all the hardware and software (much of the latter
inside disks and storage controllers and the like) in the
storage stack.
I had heard anecdotes long before (e.g. from Andrew Hume)
suggesting silent data corruption had become prominent
enough to matter, but this paper was the first real study
I came across.
I have used ZFS for my home file server for more than a
decade; presently on an antique version of Solaris, but
I hope to migrate to OpenZFS on a newer OS and hardware.
So far as I can tell ZFS in old Solaris is quite stable
and reliable. As Ted has said, there are philosophical
reasons why some prefer to avoid it, but if you don't
subscribe to those it's a fine answer.
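For what it's worth, the usual ZFS belt-and-suspenders is a periodic
scrub to exercise those checksums; with a pool hypothetically named
'tank':

    zpool scrub tank        # read and verify every block in the pool
    zpool status -v tank    # report any files with unrecoverable errors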
I've been hearing anecdotes since forever about sharp
edges lurking here and there in BtrFS. It does seem
to be eternally unready for production use if you really
care about your data. It's all anecdotes so I don't know
how seriously to take it, but since I'm comfortable with
ZFS I don't worry about it.
Norman Wilson
Toronto ON
PS: Disclosure: I work in the same (large) CS department
as Bianca Schroeder, and admire her work in general,
though the paper cited above was my first taste of it.
This may be due to logic similar to that of a classic feature that I
always deemed a bug: troff begins a new page when the current page is
exactly filled, rather than waiting until forced by content that
doesn't fit. If this condition happens at the end of a document, a
spurious blank page results. Worse, if the page header happens to
change just after the exactly filled page, the old heading will be
produced before the new heading is read.
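In C-like pseudocode (my paraphrase of the two policies; the names are
made up):

    /* troff's classic behavior: eager page break */
    if (space_left == 0)
        break_page();    /* fires even when nothing more is coming */

    /* the arguably correct behavior: lazy page break */
    if (next_line_height > space_left)
        break_page();    /* only when content actually forces it */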
Doug
> fork() is a great model for a single-threaded text processing pipeline to do
> automated typesetting. (More generally, anything that is a straightforward
> composition of filter/transform stages.) Which is, y'know, what Unix is *for*.
> It's not so great for a responsive GUI in front of a multi-function interactive program.
"Single-threaded" is not a term I would apply to multiple processes in
a pipeline. If you mean a single track of data flow, fine, but the
fact that that's a prevalent configuration of cooperating processes in
Unix is an artifact of shell syntax, not an inherent property of
pipe-style IPC. The cooperating processes in Rob Pike's 20th century
window systems and screen editors, for example, worked smoothly
without interrupts or events - only stream connections. I see no
abstract distinction between these programs and "stuff people play
with on their phones."
It bears repeating, too, that stream connections are much easier to
reason about than asynchronous communication. Thus code built on
streams is far less vulnerable to timing bugs.
At last a prince has come to awaken the sleeping beauty of stream
connections. In Go (Pike again) we have a widely accepted programming
language that can fully exploit them, "[w]hich is, y'know, what Unix
is 'for'."
(If you wish, you may read "process" above to include threads, but
I'll stay out of that.)
Doug
Steve Simon:
once again i am taken aback at the good taste of the residents of the unix room.
As a whilom denizen of that esteemed playroom, I question
both the accuracy and the relevance of that metric.
Besides, what happened to the sheep shelf? Was it scrubbed
away after I left? And, Ken, whatever happened to Dolly the
Sheep (after she was hidden to avoid upsetting visitors)?
Norman Wilson
Toronto ON
No longer a subscriber to sheep! magazine
> I don't think anyone knows. Nobody relevant, I believe.
>
> -rob
I understand that Dave Presotto bought that photo at a garage sale for $1. The photo hung in
the Unix Room for years, at one point labeled “Peter Weinberger.”
One day I removed it from its careful mounting and scanned in the photo. It bore the label
“what, no steak?”
The photo was stolen from a wall sometime after I left. The scanned image is at
https://cheswick.com/ches/tmp/whatnosteak.jpeg
ches