> From: Dave Horsfall
> And as for subroutine calls on the -8, let's not go there... As I dimly
> recall, it planted the return address into the first word of the called
> routine and jumped to the second instruction; to return, you did an
> indirect jump to the first word.
That do be correct.
That style of subroutine call goes back a _long_ way. IIRC, Whirlwind used
that kind of linkage (alas, I've misplaced my copy of the Whirlwind
instruction manual, sigh - a real treasure).
ISTVR there was something about the way Whirlwind did it that made it clear
how it came to be the way it was - IIRC, the last instruction in the
subroutine was normally a 'jump to literal' (i.e. a constant, in the
instruction), and the Whirlwind 'jump to subroutine' stored the return address
in a register; there was a special instruction (normally the first one in any
subroutine) that stored the low-order N bits of that register in the literal
field of the last instruction: i.e. self-modifying code.
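To make that concrete, here is a minimal C model of the -8's JMS-style
linkage (an illustrative sketch only, not real PDP-8 code; the real
machine does this in hardware, on 12-bit words):

    /* PDP-8 JMS-style subroutine linkage, modeled in C: the call
       plants the return address in the routine's first word and
       continues at the second; the return is an indirect jump
       through that first word (JMP I entry on a real -8). */
    #include <stdio.h>

    static int mem[4096];              /* 12-bit words on the real thing */

    static int jms(int pc, int entry)  /* JMS entry */
    {
        mem[entry] = pc + 1;           /* plant the return address */
        return entry + 1;              /* body starts at the second word */
    }

    static int ret(int entry)          /* JMP I entry */
    {
        return mem[entry];             /* resume where the caller left off */
    }

    int main(void)
    {
        int pc = 0200, sub = 0400;     /* octal addresses, -8 style */
        pc = jms(pc, sub);             /* "call": now at 0401 */
        printf("in subroutine at %04o\n", pc);
        pc = ret(sub);                 /* "return": JMP I 0400 -> 0201 */
        printf("back in caller at %04o\n", pc);
        return 0;
    }

Note that this linkage is not reentrant: recursion, or an interrupt
handler that calls the same routine, clobbers the planted return
address - a big part of why the stack-style linkage mentioned below
eventually won out.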
The PDP-6 (of which the PDP-10 was a clone) was on the border of that period;
it had both types of subroutine linkage (store the return in the destination,
and jump to dest+1; and also push the return on the stack).
Noel
> From: Arthur Krewat
>> The PDP-6 (of which the PDP-10 was a clone)
> Um. More like a natural progression.
> Like 8086->80186->80286->80386->80486->...
No, the PDP-6 and PDP-10 have identical instruction sets, and in general, a
program that will run on one, will run on the other. See "decsystem10 System
Reference Manual" (DEC-10-XSRMA-A=D", pg. 2-72., which provides a 7-instruction
code fragment which allows a program to work out if it's running on a PDP-6, a
KA10, or a KI10.
The KA10 is a re-implementation (using mostly B-series Flip Chips) of the
PDP-6 (which was built out of System Modules - the predecessor to Flip Chips).
Noel
Nudging the thread back toward Unix history:
> I really
> like how blindingly obvious a lot of the original Unix code was. Not saying
> it was all that way, but a ton of it was sort of what you would imagine it
> to be before you saw it. Which means I understood it and could bugfix it.
That's a facet of Thompson's genius--code so clean and right that,
having seen it, one cannot imagine it otherwise. Odds are, though,
that the same program from another hand would not have the same
aura of inevitability. As Erdos would say of a particularly elegant
proof, "It comes from the Book," i.e. had divine inspiration.
Doug
OT, but of interest to a few people here :-)
The venerable PDP-8 was introduced on this day in 1965 (or tomorrow if
you're on the wrong side of the date line). It was the first computer I ever
used, back around 1970 (I think I'd just left school and was checking out
the local University's computer department, with a view to majoring in
Computer Science (which I did)).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I think Noel put it very well. When I saw the read()/write() vs
recvfrom()/sendto() stuff mentioned earlier I was going to say that part of
the contract of read()/write() is that they are stream oriented, so things
like short reads and writes should behave in a predictable way and have the
expected recovery semantics. So having the call be boundary preserving, or
having a header on the data, would in my view make it not read()/write(),
even if the new call can be overlaid on read()/write() at the ABI level. I think
the contract of a syscall is an important part of its semantics even if it
may be an "unwritten rule". However, Noel said it better: If it's not
file-like then a file-like interface may not be appropriate.
Having said that, Kurt raises an equally valid point which is that the
"every file is on a fast locally attached harddisk" traditional unix
approach has serious problems. Not that I mind the simplicity: contemporary
systems had seriously arcane file abstractions that made file I/O useless
to all but the most experienced developer. I come from a microcomputer
background and I am thinking of Apple II DOS 3.3, CP/M 2.2 and its FCB
based interface and MSDOS 1.x (same). When MSDOS 2.x came along with its
Xenix-subset file API it was seriously a revelation to me and others.
Microcomputers aside, my understanding is IBM 360/370 and contemporary DEC
OS's were also complicated to use, with record based I/O etc.
So it's hard to criticize unix's model, but on the other hand the lack of
any nonblocking file I/O and the contract against short reads/writes (but
only for actual files) and the lack of proper error reporting or recovery
due to the ASSUMPTION of a write back cache, whether or not one is actually
used in practice... make the system seriously unscalable. In particular,
as Kurt points out, the system is forced to try to hide the network
socket-like characteristics of files that are either on slow DMA or
PIO/interrupt based devices (think about a harddisk attached by serial
port, something that I actually encountered on a certain cash register
model and had to develop for back in the day), or an NFS based file, or
maybe a file on a SAN accessed by iSCSI, etc.
Luckily I think there is an easy fix: have the OS export a more socket-like
interface to files and provide a userspace compatibility library to do
things like detecting short reads/writes and retrying them, and/or blocking
while a slow read or write executes. It would be slightly tricky getting
the EINTR semantics correct if the second or subsequent call of a multipart
read or write was interrupted, but I think it's possible.
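As a sketch of what one such wrapper might look like (the name
read_fully() is hypothetical, and it dodges the multipart-EINTR wrinkle
by simply reporting partial progress to the caller):

    /* Retry short reads and EINTR until the requested count arrives,
       EOF is reached, or a hard error occurs. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t read_fully(int fd, void *buf, size_t count)
    {
        size_t done = 0;
        while (done < count) {
            ssize_t n = read(fd, (char *)buf + done, count - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;          /* interrupted: retry this part */
                /* after partial progress, report the count rather than
                   lose data; the caller sees the error on the next call */
                return done ? (ssize_t)done : -1;
            }
            if (n == 0)
                break;                 /* EOF: return the short count */
            done += n;
        }
        return (ssize_t)done;
    }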
On the other hand I would not want to change the sockets API one bit, it is
perfect. (Controversial I know, I discussed this in detail in a recent
post).
Nick
On Mar 26, 2017 12:46 PM, "Kurt H Maier" <khm(a)sciops.net> wrote:
On Sat, Mar 25, 2017 at 09:35:20PM -0400, Noel Chiappa wrote:
>
> For instance, files, as far as I know, generally don't have timeout
> semantics. Can the average application that deals with a file, deal
> reasonably with the fact that sometimes one gets half-way through the
> 'file' - and things stop working? And that's a _simple_ one.
The unwillingness to even attempt to solve problems like these in a
generalized and consistent manner is a source of constant annoyance to
me. Of course it's easier to pretend that never happens on a "real"
file, since disks never, ever break.
Of course, there are parts of the world that don't make these
assumptions, and that's where I've put my career, but the wider IT
industry still likes to pretend that storage and networking are
unrelated concepts. I've never understood why.
khm
Hi all, I don't mind a bit of topic drift but getting into generic OS
design is a bit too far off-topic for a list based on Unix Heritage.
So, back on topic please!
Cheers, Warren
> From: Ron Minnich
> There was no shortage of people at the time who were struggling to find
> a way to make the Unix model work for networking ... It didn't quite
> work out
For good reason.
It's only useful to have a file-_name_ for a thing if... the thing acts
like a file - i.e. you can plug that file-name into other places you might use
a file-name (e.g. '> {foo}' or 'ed <foo>', etc, etc).
There is very little in the world of networking that acts like a file. Yes,
you can go all hammer-nail, and use read() and write() to get data back and
forth, and think that makes it a file - but it's not.
For instance, files, as far as I know, generally don't have timeout
semantics. Can the average application that deals with a file, deal reasonably
with the fact that sometimes one gets half-way through the 'file' - and things
stop working? And that's a _simple_ one. How does a file abstraction match
to a multi-cast lossy (i.e. packets may be lost) datagram group?
For another major point (and the list goes on, I just can't be bothered to go
through it all), there's usually all sorts of other higher-level protocol in
there, so only specialized applications can make use of it anyway. Look at
HTTP: there's specialized syntax one has to spit out to say what file you
want, and the HTML files you get back from that can generally only be usefully
used in a browser.
Etc, etc.
Noel
Maybe of interest, here is something spacewarish..
|I was at Berkeley until July 1981.
I had to face a radio program which purported that..
Alles, was die digitale Kultur dominiert, haben wir den Hippies
zu verdanken.
Anything which dominates the digital culture is owed to the
Hippies
This "We owe it all to the Hippies" as well as "The real legacy of
the 60s generation is the Computer Revolution" actually in English
on [1] (talking about the beginning of the actual broadcast), the
rest in German. But it also led (me) to an article in the
Rolling Stone, December 1972, on "Fanatic Life and Symbolic Death
Among the Computer Bums"[2].
[1] http://www.swr.de/swr2/programm/sendungen/essay/swr2-essay-flowerpowerdaten…
[2] http://www.wheels.org/spacewar/stone/rolling_stone.html
That makes me quite jealous of your long hair, i see manes
streaming in the warm wind of a golden Californian sunset.
I don't think the assessment is right, though, i rather think it
is a continuous progress of science and knowledge, then maybe also
pushed by on-all-fronts efforts like "bringing a man to the moon
by the end of the decade", and, sigh, SDI, massive engineer and
science power concentration, etc. And crumbs thereof approaching
the general public, because of increased knowledge in the
industry. (E.g., the director of the new experimental German
fusion reactor that finally sprang into existence claimed
something like "next time it will not be that expensive due to
what everybody has learned".) And hunger for money, of course,
already in the 70s we had game consoles en masse in Italy, for
example, with Pacman, Donkey Kong and later then with Dragon's Lair
or whatever its name was ("beautiful cool graphics!" i recall, though
on the street there were Italian peacocks, and meaning the
birds!).
--steffen
> From: Random832
> Does readlink need to exist as a system call? Maybe it should be
> possible to open and read a symbolic link - using a flag passed to open
What difference does it make? The semantics are the same, only the details of the
syntax are different.
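To make the point concrete: on a modern Linux you can in fact do it
either way, using the much-later O_PATH flag - shown here purely as an
illustration of "same semantics, different syntax":

    /* Same semantics, two syntaxes: read a symlink directly, or open
       the link itself and read it through the descriptor. */
    #define _GNU_SOURCE                /* for O_PATH on glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        ssize_t n;

        /* the classic call: */
        n = readlink("/tmp/link", buf, sizeof buf - 1);
        if (n >= 0) { buf[n] = '\0'; printf("readlink: %s\n", buf); }

        /* open-the-link-itself, per Random832's suggestion: */
        int fd = open("/tmp/link", O_PATH | O_NOFOLLOW);
        if (fd >= 0) {
            /* empty path: operate on the descriptor itself */
            n = readlinkat(fd, "", buf, sizeof buf - 1);
            if (n >= 0) { buf[n] = '\0'; printf("via fd:   %s\n", buf); }
            close(fd);
        }
        return 0;
    }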
Noel
On 3/23/17, Larry McVoy <lm(a)mcvoy.com> wrote:
>
> I only know a tiny amount about Plan 9, never dove into deeply. QNX,
> on the other hand, I knew quite well. I was pretty good friends with
> one of the 3 or 4 people who were allowed to work on the microkernel, Dan
> Hildebrandt. He died in 1998, but before then he and I used to call each
> other pretty often to talk about kernel stuff, how stuff ought to be done.
> The calls weren't jerk off sessions, they were pretty deep conversations,
> we challenged each other. I was very skeptical about microkernels,
> I'd seen Mach and found it lacking, seen Minix and found it lacking
> (though that's a little unfair to Minix compared to Mach). Dan brought
> me around to believing in the microkernel but only if the kernel was
> actually a kernel. Their kernel didn't use up a 4K instruction cache,
> it left room for the apps and the processes that did the rest. That's
> why only a few people were allowed to work on the kernel, they counted
> every single instruction cache footprint.
>
> So tell me more about your OS, I'm interested.
Where do I start? I've got much of the design planned out. I've
thought about the design for many years now and changed my plans on
some things quite a few times. Currently the only code I have is a
bootloader that I've been working on somewhat intermittently for a
while now, as well as a completely untested patch for seL4 to add
support for QNX-ish single image boot. I am planning to borrow Linux
and BSD code where it makes sense, so I have less work to do.
It will be called UX/RT (for Universally eXtensible Real Time
operating system, although it's also a kind of a nod to QNX originally
standing for Quick uNiX), and it will be primarily intended as a
workstation and embedded OS (although it would also be good for
servers where security is more important than I/O throughput, and also
for HPC). It will be a microkernel-based multi-server OS.
Like I said before, it will superficially resemble QNX in its general
architecture (for example, much like QNX, the lowest-level user-mode
component will be a single root server called "proc", which will
handle process management and filesystem namespace management),
although the specifics of the IPC model will be somewhat different. I
will be using seL4 and/or Rux as the microkernel (so unlike that of
QNX, UX/RT's process server will be a completely separate program).
proc and most other first-party components will be mostly written in
Rust rather than C, although there will still be a lot of third-party
C code. The network stack, disk filesystems, and graphics drivers will
be based on the NetBSD rump kernel and/or LKL (which is a
"librarified" version of the Linux kernel; I'll probably provide both
Linux and NetBSD implementations and allow switching between them and
possibly combining them) with Rust glue layers on top. For
performance, the disk drivers and disk filesystems will run within the
same server (although there will be one disk server per host adapter),
and the network stack will also be a single server, much like in QNX
(like QNX, UX/RT will mostly avoid intermediary servers and will
usually follow a process-per-subsystem architecture rather than a
process-per-component one like a lot of other multi-server OSes, since
intermediary servers can hurt performance).
As I said before, UX/RT will take file-oriented architecture even
further than Plan 9 does. fork() and some thread-related APIs will be
pretty much the only APIs implemented as non-file-based primitives.
Pretty much the entire POSIX/Linux API will be implemented although
most non-file-based system calls will have file-based implementations
underneath (for example, getpid() will do a readlink() of /proc/self
to get the PID). Even process memory like the process heap and stack
will be implemented as files in a per-process memory filesystem (a bit
like in Multics) rather than being anonymous like on most other OSes.
ioctl() will be a pure library function for compatibility purposes,
and will be implemented in terms of read() and write() on a secondary
file.
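(The /proc/self trick already works on Linux, so the getpid() example
can be sketched in plain C; UX/RT itself is unimplemented, and this is
only an illustration of the file-based-primitive idea:)

    /* getpid() as a readlink() of /proc/self, whose link target is
       the caller's PID as a decimal string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static pid_t getpid_via_proc(void)
    {
        char buf[32];
        ssize_t n = readlink("/proc/self", buf, sizeof buf - 1);
        if (n < 0)
            return (pid_t)-1;
        buf[n] = '\0';
        return (pid_t)atol(buf);
    }

    int main(void)
    {
        printf("getpid()=%ld, via /proc/self=%ld\n",
               (long)getpid(), (long)getpid_via_proc());
        return 0;
    }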
Unlike other microkernel-based OSes, UX/RT won't provide any way to
use message passing outside of the file-based API. read() and write()
will use kernel calls to communicate directly with the process on the
other end (unlike some microkernel Unices in which they go through an
intermediary server). There will be APIs that expose the underlying
transport (message registers for short messages and a shared per-FD
buffer for long ones), although they will still operate on file
descriptors, and read()/write() and the low-level messaging APIs will
all use the same header format so that processes don't have to care
which API the process on the other
end uses (unlike on QNX where there are a few different incompatible
messaging APIs). There will be a new "message special" file type that
will preserve message boundaries, similar to SEQPACKET Unix-domain
sockets or SCTP (these will be ideal for RPC-type APIs).
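(A purely hypothetical sketch of the shared-header idea - every field
name here is invented for illustration, since UX/RT defines nothing yet:)

    /* One fixed header shared by read()/write() and the low-level
       messaging calls, so a peer need not care which API produced
       the message. Entirely invented field names. */
    #include <stdint.h>

    struct uxrt_msg_hdr {
        uint32_t length;   /* payload length in bytes */
        uint16_t type;     /* stream data vs. boundary-preserving message */
        uint16_t flags;    /* e.g. "descriptor capabilities attached" */
        uint32_t ndescs;   /* number of descriptors transferred, if any */
    };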
File descriptors will be implemented as sets of kernel capabilities,
meaning that servers won't have to check permissions like they do on
QNX. The API for servers will be somewhat socket-like. Each server
will listen on a "port" file in a special filesystem internal to the
process server, sort of like /mnt on Plan 9 (although it will be
possible for servers to export ports within their own filesystems as
well). Reads from the port will produce control messages which may
transfer file descriptors. Each client file descriptor will have a
corresponding file descriptor on the server side, and servers will use
a superset of the regular file API to transfer data. Device numbers as
such will not exist (the device number field in the stat structure
will be a port ID that isn't split into major and minor numbers), and
device files will normally be exported directly by their server,
rather than residing on a filesystem exported by one driver but being
serviced by another as in conventional Unix. However, there will be a
sort of similar mechanism, allowing a server to export "firm links"
that are like cross-filesystem hard links (similar to in QNX).
Per-process namespaces like in Plan 9 will be supported. Unlike in
Plan 9, it will be possible for processes with sufficient privileges
to mount filesystems in the namespaces of other processes (to allow
more flexible scoping of mount points). Multiple mounts on one
directory will produce a union like in both QNX and Plan 9. Binding
directories as in Plan 9 will also be supported. In addition to the
per-process root on / there will also be a global root directory on //
into which filesystems are mounted. The per-process name spaces will
be constructed by binding directories from // into the / of the
process (neither direct mounts under / nor bindings in // will be
supported, but bindings between parts of / will of course be
supported).
The security model will be based on a per-process default-deny ACL
(which will be purely in memory; persisting process ACLs will be
implemented with an external daemon). It will be possible for ACL
entries to explicitly specify permissions, or to use the permissions
from the filesystem with (basically) normal Unix semantics. It will
also be possible for an entry to be a wildcard allowing access to all
files in a directory. Unlike in conventional Unix, there will be no
root-only system calls (anything security-sensitive will have a
separate device file to which access can be controlled through process
ACLs), and running as root will not automatically grant a process full
privileges. The suid and sgid bits will have no effect on executables
(the ACL management daemon will handle privilege escalation instead).
The native API will be mostly compatible with that of Linux, and a
purely library-based Linux compatibility layer will be available. The
only major thing the Linux compatibility layer specifically won't
support will be stuff dealing with logging users in (since it won't be
possible to revert to traditional Unix security, and utmp/wtmp won't
exist). The package manager will be based on dpkg and apt with hooks
to make them work in a functional way somewhat like Nix but using
per-process bindings, allowing for multiple environments or universes
consisting of different sets of packages (environments will be able to
be either native UX/RT or Linux compatibility environments, and it
will be possible to create Linux environments that aren't managed by
the UX/RT package manager to allow compatibility environments for
non-dpkg Linux distributions).
The init system will have some features in common with SMF and
systemd, but unlike those two, it will be modular, flexible, and
lightweight. System initialization such as checking/mounting
filesystems and bringing up network interfaces will be script-driven
like in traditional init systems, whereas starting daemons will be
done with declarative unit files that will be able to call (mostly
package-independent) hook scripts to set up the environment for the
daemon.
Initially I will use X11 as the window system, but I will replace it
with a lightweight compositing window server that will export
directories providing a low-level DRI-like interface per window.
Unlike a lot of other compositing window systems, UX/RT's window
system will use X11-style central client-side window management. I was
thinking of using a default desktop environment based on GNUstep
originally but I'm not completely sure if I'll still do that (a long
time ago I had wanted to put together a Mac OS X-like or NeXTStep-like
Linux distribution using a GNUstep-based desktop, a DPS-based window
server, and something like a fork of mkLinux with a stable module ABI
for a kernel, but soon decided I wanted to write a QNX-like OS
instead).
>
> Sockets are awesome but I have to agree with you, they don't "fit".
> Seems like they could have been plumbed into the file system somehow.
>
Yeah, the almost-but-not-quite-file-based nature of the socket API is
my biggest complaint about it. UX/RT will support the socket API but
it will be implemented on top of the normal file system.
> Can't speak to your plan 9 comments other than to say that the fact
> that you are looking for provided value gives me hope that you'll get
> somewhere. No disrespect to plan 9 intended, but I never saw why it
> was important that I moved to plan 9. If you do something and there
> is a reason to move there, you could be worse but still go farther.
I'd say a major problem with Plan 9 is the way it changes things in
incompatible ways that provide little advantage over the traditional
Unix way of doing things (for example, instead of errno, libc
functions set a pointer to an error string instead, which I don't
think provides enough of a benefit to break compatibility). Another
problem is that some facilities Plan 9 provides aren't general enough
(e.g. the heavy focus on SSI clustering, which never really was widely
adopted, or the rather 80s every-window-is-a-terminal window system).
UX/RT will try to be compatible with conventional Unices and
especially Linux wherever it is reasonable, since lack of applications
would significantly hold it back. It will also try to be as general as
possible without overcomplicating things.
All, I'm setting up a uucp site 'tektronix'. When I send e-mail, I'm seeing
this error:
ASSERT ERROR (uux) pid: 235 (3/24-00:09) CAN'T OPEN D.tektronX00D0 (0)
Something seems to be trimming the hostname to seven chars. If I do:
# hostname
tektronix
Thanks, Warren
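(The seven-character limit is almost certainly the V7 14-character
filename limit at work: uucp composes spool-file names like
D.<site><grade><seq>, so the site name gets clipped to seven characters
to keep the whole name within DIRSIZ. A sketch of the arithmetic - the
exact format characters are guesses, not the historical uucp source:)

    /* "D." + 7-char site + grade + 4-char sequence = 14 characters,
       the V7 directory-entry limit. Format details are guesswork. */
    #include <stdio.h>

    int main(void)
    {
        char fname[15];
        snprintf(fname, sizeof fname, "D.%.7s%c%04X",
                 "tektronix", 'X', 0xD0);
        printf("%s\n", fname);         /* prints D.tektronX00D0 */
        return 0;
    }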
On Fri, Mar 24, 2017 at 4:08 PM, Andy Kosela <andy.kosela(a)gmail.com> wrote:
>
> [snip]
> Dan, that was an excellent post.
>
Thanks! I thought it was rather too long/wordy, but I lacked the time to
make it shorter.
I always admired the elegance and simplicity of Plan 9 model which indeed
> seem to be more UNIX like than todays BSDs and Linux world.
>
> The question though remains -- why it has not been more successfull? The
> adoption of Plan 9 in the real world is practically zero nowadays and even
> its creators like Rob Pike moved on to concentrate on other things, like
> the golang.
>
I think two big reasons and one little one.
1. It wasn't backwards compatible with the rest of the world and forced you
to jump headlong into embracing a new toolset. That is, there was no
particularly elegant way to move gradually to Plan 9: you had to adopt it
all from day one or not at all. That was a bridge too far for most. (Yeah,
there were some shims, but it was all rather hacky.)
2. Relatedly, it wasn't enough of an improvement over its predecessor to
pull people into its orbit. Are acme or sam really all that much better
than vi or emacs? Wait, don't answer that...but the reality is that people
didn't care enough whether they were or not. The "everything is a
file(system)" idea is pretty cool, but we've already had tools that worked
then and work now. Ultimately, few people care how elegant the abstractions
are or how well the kernel is implemented.
And the minor issue: The implementation. Plan 9 was amazing for when it was
written, but now? Not so much.
I work on two related kernels: one that is directly descended from Plan 9
(Harvey, for those interested) and one that's borrowed a lot of the code
(Akaros) and in both we've found major, sometimes decades old bugs. There
are no tests, and there are subtle race conditions or rarely tickled bugs
lurking in odd places. Since the system is used so little, these don't
really get shaken out the way they do in Linux (or to a lesser extent the
BSDs or commercial Unixes). In short, some code is better than other code
and while I'd argue that the median quality of the implementation is
probably higher than that of Linux or *BSD in terms of elegance and
understandability, it's not much higher and it's certainly buggier.
And the big implementation issue is lack of hardware support. I stood up
two plan 9 networks at work for Akaros development and we ran into major
driver issues with the ethernets that took a week or two to sort out. On
the other hand, Linux just worked.
Eventually, one of those networks got replaced with Linux and the other is
probably headed that way. In fairness, this has to do with the fact that no
one besides Ron and I was interested in using them or learning how they
work: people *want* Linux and the idea that there's this neat system out
there for them to explore and learn about the cool concepts it introduced
just isn't a draw. I gave a presentation on Plan 9 concepts to the Akaros
team a year and a half or so ago and a well-known figure in the Linux
community who was working with us at the time had only this to say: "the user
interface looks like it's from 1991." The rest didn't interest him
at all: the CPU command usually kind of blows people's minds, but after I
went through the innards of it the response was, "why not just use SSH?"
I've had engineers ask me why Plan 9 used venti and didn't "just use git"
(git hadn't been invented yet). It's somewhat lamentable, but it's also
reality.
- Dan C.
> From: Random832
> "a stream consisting of a serialized sequence of all of whatever
> information would have been supplied to/by the calls to the special
> function" seems like a universal solution at the high level.
Yes, and when the only tool you have is a hammer, everything looks like
a nail.
Noel
> From: Nick Downing
> Programming is actually an addiction.
_Can be_ an addiction. A lot of people are immune... :-)
> What makes it addictive to a certain type of personality is that little
> rush of satisfaction when you try your code and it *works*... ... It was
> not just the convenience and productivity improvements but that the
> 'hit' was coming harder and faster.
Joe Weizenbaum wrote about the addiction of programming in his famous book
"Computer Power and Human Reason" (Chapter 4, "Science and the Compulsive
Programmer"). He attributes it to the sense of power one gets, working in a
'world' where things do exactly what you tell them. There might be something
to that, but I suspect your supposition is more likely.
> This theory is well known to those who design slot machines and other
> forms of gambling
Oddly enough, he also analogizes to gamblers!
Noel
> From: "Ron Natalie"
> I was thinking about Star Wars this morning and various parodies of it
> (like Ernie Foss's Hardware Wars)
The best one ever, I thought, was Mark Crispin's "Software Wars". (I have an
actual original HAKMEM!)
> I rememberd the old DEC WARS.
I seem to vaguely recall a multi-page samizdat comic book of this name? Or am
I mis-remembering its name? Does this ring any bells for anyone?
Noel
I realized after writing that I was being slightly unfair since one valid
use case that DOES work correctly is something like:
ssh -X <some host> <command that uses X>
This is occasionally handy, although the best use case I can think of is
running a browser on some internet-facing machine so as to temporarily
change your IP address, and this use case isn't exactly bulletproof since
at least google chrome will look for a running instance and hand over to it
(despite that instance having a different DISPLAY= setting). Nevertheless
my point stands which is that IMO a programmatic API (either through .so or
.dll linkage, or through ioctls or dedicated syscalls) should be the first
resort and anything else fancy such as remoting, domain specific languages,
/proc or fuse type interfaces, whatever, should be done through extra
layers as appropriate. You shouldn't HAVE to use them.
cheers, Nick
On Mar 15, 2017 9:15 PM, "Tim Bradshaw" <tfb(a)tfeb.org> wrote:
On 15 Mar 2017, at 01:13, Nick Downing <downing.nick(a)gmail.com> wrote:
>
> But the difficulty with X Windows is that the remoting layer is always
there, even though it is almost completely redundant today.
It's redundant if you don't ever use machines which you aren't physically
sitting next to and want to run any kind of graphical tool run on them. I
do that all the time.
--tim
> From: Tim Bradshaw
> I don't know about other people, but I think the whole dope thing is why
> computer people tend *not* to be hippies in the 'dope smoking' sense. I
> need to be *really awake* to write reasonably good code ... our drugs
> of choice are stimulants not depressants.
Speak for yourself! :-)
(Then again, I have weird neuro-chemistry - I have modes where I have a large
over-supply of natural stimulant... :-)
My group (which included Prof. Jerry Saltzer, who's about as straight an arrow
as they make) was remarkably tolerant of my, ah, quirks... I recall at one
point having a giant tank of nitrous under the desk in my office - which they
boggled at, but didn't say anything about! ;-)
Noel
"Two Bacco, here, my Bookie.”
Awesome.
David
> Date: Wed, 22 Mar 2017 21:26:16 -0400
> From: "Ron Natalie" <ron(a)ronnatalie.com>
> To: <tuhs(a)minnie.tuhs.org>
> Subject: [TUHS] DEC Wars
> Message-ID: <001d01d2a374$77e02dc0$67a08940$(a)ronnatalie.com>
> Content-Type: text/plain; charset="utf-8"
>
> I was thinking about Star Wars this morning and various parodies of it (like
> Ernie Foss's Hardware Wars) and I rememberd the old DEC WARS. Alas when I
> tried to post it, it was too big for the listserv. So here's a link for
> your nostalgic purposes. I had to find one that was still in its
> fixed-pitch glory complete with the ASCII-art title.
>
>
>
> http://www.inwap.com/pdp10/decwars.txt
>
> From: Steffen Nurpmeso
> This "We owe it all to the Hippies"
Well, yes and no. Read "Hackers". There wasn't a tremendous overlap between
the set of 'nerds' (specifically, computer nerds) and 'hippies', especially in
the early days. Not that the two groups were ideologically opposed, or
incompatible, or anything like that. Just totally different.
Later on, of course, there were quite a few hackers who were also 'hippies',
to some greater or lesser degree - more from hackers taking on the hippie
vibe, than the other way around, I reckon. (I think that to be a true computer
nerd, you have to start down that road pretty early on, and with a pretty
severe commitment - so I don't think a _lot_ of hippies turned into hackers.
Although I guess the same thing, about starting early, is true of really
serious musicians.)
> "The real legacy of the 60s generation is the Computer Revolution"
Well, there is something to that (and I think others have made this
observation). The hippie mentality had a lot of influence on everyone in that
generation - including the computer nerds/hackers. Now, the hackers may have
had a larger impact, long-term, than the hippies did - but in some sense a
lot of hippie ideals are reflected in the stuff a lot of hackers built:
today's computer revolution can be seen as hippie idealism filtered through
computer nerds...
But remember things like this, from the dust-jacket of the biography of
Prof. Licklider:
"More than a decade will pass before personal computers emerge from the
garages of Silicon Valley, and a full thirty years before the Internet
explosion of the 1990s. The word computer still has an ominous tone,
conjuring up the image of a huge, intimidating device hidden away in an
over-lit, air-conditioned basement, relentlessly processing punch cards for
some large institution: _them_. Yet, sitting in a nondescript office in
McNamara's Pentagon, a quiet ... civilian is already planning the revolution
that will change forever the way computers are perceived. Somehow, the
occupant of that office ... has seen a future in which computers will empower
individuals, instead of forcing them into rigid conformity. He is almost
alone in his conviction that computers can become not just super-fast
calculating machines, but joyful machines: tools that will serve as new media
of expression, inspirations to creativity, and gateways to a vast world of
online information.
Now, technically Lick wasn't a hippie (he was, after all, 40 years old in
1965), and he sure didn't have a lot of hippie-like attributes - but he was,
in some ways, a close ideological relative of some hippies.
Noel
Some pointers. Warren, worth grabbing these IMHO.
I will ask him if he's willing to donate whatever troff
he has.
Arnold
> Date: Wed, 22 Mar 2017 16:47:42 -0400 (EDT)
> From: Brian Kernighan <bwk(a)CS.Princeton.EDU>
> To: arnold(a)skeeve.com
> Subject: Re: CSTRs?
>
> There are a few things here:
> http://www.netlib.org/cgi-bin/search.pl
> but it seems to be mostly the numerical analysis ones.
>
> But Google reveals this one:
> http://www.theobi.com/Bell.Labs/cstr/
> which seems to be all postscript.
>
> I have some odds and ends, like the troff manual and tutorial,
> but otherwise only PDF.
>
> Sorry -- not much help.
>
> Brian
>
> On Wed, 22 Mar 2017, arnold(a)skeeve.com wrote:
>
> > Hi.
> >
> > Do you by chance happen to have copies of the CSTRs that used to be
> > available at the Bell Labs web site?
> >
> > And/or troff source for any? The TUHS people would like to archive
> > at least the Unix-related ones...
> >
> > Thanks,
> >
> > Arnold
I was thinking about Star Wars this morning and various parodies of it (like
Ernie Foss's Hardware Wars) and I rememberd the old DEC WARS. Alas when I
tried to post it, it was too big for the listserv. So here's a link for
your nostalgic purposes. I had to find one that was still in its
fixed-pitch glory complete with the ASCII-art title.
http://www.inwap.com/pdp10/decwars.txt
Early on when I was consulting for what would become my company, I got stuck
on a weekend to fix something with the coffee pot and a box of Entenmann's
chocolate donuts. These have a coating that's kind of like wax you have to
soften up in the hot coffee to be digestible. As a result of that weekend
any crunch time was referred to as waxy chocolate donut time. Another
crunch weekend I was working on the firmware for an esoteric digital data
tape player. I would test it. Find the fault. Go to one machine
running Xenix on a 286 which had the editor and the assembler. I'd then
floppy it over to a DOS machine that had the EPROM burner. I then would
take the eprom and stick it into the controller. The president of the
company had two jobs. He was to follow behind me and refill my coffee cup
and scarf up the used EPROMS and dump them into the eraser so we wouldn't
run out of ones to program.
For years, we were a six person company of which only me and the president
drank coffee. When the one pot we made in the morning was gone, that was
it for coffee. As the company got larger and there were more coffee
drinkers, people would just make a new pot. This coincided with me having
my office moved adjacent to the coffee maker. Every time I had a long
compile or something I'd look down and see my cup was empty and I'd pop
outside and get a new cup. Not surprisingly, I started to get heart
palpitations. The doctor asks how much coffee I drank, and I tell her
something like thirty cups a day. She tells me I may want to cut back on
that.
My best job was working for a friend whose company operates out of his home.
He'd make espresso for me and we'd drink that (and eat his wife's excellent
leftover food) until about six and then being another wine judge, we'd
switch to wine.
-----Original Message-----
From: Tim Bradshaw [mailto:tfb@tfeb.org]
Sent: Wednesday, March 22, 2017 8:51 AM
To: Ron Natalie
Cc: Dave Horsfall; The Eunuchs Hysterical Society
Subject: Re: [TUHS] Were all of you.. Hippies?
I don't know about other people, but I think the whole dope thing is why
computer people tend *not* to be hippies in the 'dope smoking' sense. I
need to be *really awake* to write reasonably good code (if ever I do write
reasonably good code) in the same way I need to be really awake to do maths
or physics. So I live on a diet of coffee and sugar and walk around
twitching as a result (this is an exaggeration, but you get the idea). I
have the strength of will to not use stronger stimulants (coffee is mostly
self-limiting, speed not so much).