On this day in 1978 Kurt Shoens placed the following comment in
def.h (the fork i maintain keeps it in nail.h):
/*
* Mail -- a mail program
*
* Author: Kurt Shoens (UCB) March 25, 1978
*/
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
"The only thing I can think of is to have programs that
translate programs in todays languages to a common but very
simple universal language for their "long term storage". May
be something like Lamport's TLA+? A very tough job.
"
Maybe not so hard. An existence proof is Brenda Baker's "struct",
which was in v7. It converted Fortran to Ratfor (which of course
turned it back to Fortran). Interestingly, authors found their
completely reorganized code easier to read than what they had
written in the first place.
Her big discovery was a canonical form--it was not a matter of
taste or choice how the code got rearranged.
It would be harder to convert the code to say, Matlab,
because then you'd have to unravel COMMON statements and
format strings. It's easy to cook up nasty examples, like
getting away with writing beyond the end of an array, but
such things are rare in working code.
Doug
A core package in a lot of the geospatial applications is an old piece of
mathematical code originally written in Fortran (probably in the sixties).
Someone, probably in the '80s, recoded the thing pretty much line for line
(maintaining the horrendous F66 variable names etc.) into C. It's
probably ripe for a jump to something else now.
We've been through four major generations of the software. The original
was all VAX based with specialized hardware (don't know what it was written
in). We followed that with a portable UNIX version (mostly Suns, but ours
worked on SGI, Ardent, Stellar, various IBM AIX platforms, Apollo DN1000's,
HP, DEC Alphas). This was primarily a C application. Then right about
the year 2000, we jumped to C++ on Windows. Subsequently it got back
ported to Linux. Yes there are some modules that have been unchanged for
decades, but the system on the whole has been maintained.
A bigger issue than software becoming obsolete is that the platform needed
to run it goes away.
> From: Larry McVoy <lm(a)mcvoy.com>
> Going forward, I wish that people tried to be simple as they tackle the
> more complicated problems we have.
I have a couple of relevant quotations on my 'Some Computer-Related Lines'
page:
"Deliberate complexity is the mark of an amateur. Elegant simplicity is the
mark of a master."
-- Unknown, quoted by Robert A. Crawford
"Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses
remove it."
-- Alan Perlis
"The most reliable components are the ones you leave out."
-- Gordon Bell
(For software, the latter needs to be read as 'The most bug-free lines of
code are the ones you leave out', of course.)
I remember watching the people building the LISP machine, and thinking 'Wow,
that system is complex'. I eventually decided the problem was that they were
_too_ smart. They could understand, and retain in their minds, all that
complexity.
Noel
On Wed, Mar 21, 2018 at 7:50 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I was sys admin for a Masscomp with a 40MB disk
Right, an early MC-500/DP system - although I think the minimum was a 20M
[ST-506 based] disk.
On Wed, Mar 21, 2018 at 8:55 PM, Mike Markowski <mike.ab3ap(a)gmail.com>
wrote:
> I remember Masscomp ... it allowed data acquisition to not be swapped
>> out.
>>
> Actually, not quite right in how it worked, but in practice you got the
desired behavior. More in a minute.
On Wed, Mar 21, 2018 at 9:00 PM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I remember the Masscomps we had fondly. The ones we had were 68000 based
> and they had two of those CPUs running in lock step
Not lock step ... 'executor' and 'fixor' -- actually a solution Forest
Baskett proposed at Stanford. The two implementations that actually did
this were the Masscomp MC-500 family and the original Apollo.
> ... because the
> 68K didn't do VM right. I can't remember the details, it was something
> like they didn't handle faults right so they ran two CPUs and when the
> fault happened the 2nd CPU somehow got involved.
>
Right, the original 68000 did not save the faulting address properly
when it took the exception, so there was not enough information to roll
back from the fault and restart. The microcode in the 68010 corrected
this flaw.
>
> Clem, what model was that?
We called the dual 68000 board the CPU board and the 68010/68000 board the
MPU. The difference was which chip was in the 'executor' socket and some
small changes in some PALs on the board to allow the 68010 to actually take
the exception. Either way, the memory fault was processed by the
'fixor'. BTW: the original MC-500/DP was a dual processor system, so it
had four 68000s just for the two CPU boards -- i.e. an executor and fixor on
each board. The later MC-5000 family could handle up to 16 processor
boards depending on the model (the MC-5000/300 was dual, the MC-5000/500 took
up to 4, and the MC-5000/700 up to 16 processor boards).
Also for the record, the MC-500/DP and MC-5000/700 predate the
multiprocessor 'Symmetry System' that Sequent would produce by a number of
years. The original RTU ran master/slave (Purdue VAX style) for the first
generation (certainly through RT 2.x and maybe 3.x). That restriction was
removed and a fully symmetric OS was created as we did the 700
development. I've forgotten which OS release brought that out.
A few years later Stellix - while not a direct source-code descendant, it
had the same Teixeira/Cole team as RTU - was always fully symmetric; i.e.,
lesson learned.
> And can you provide the real version of what I was trying say?
>
Sure ... soon after Motorola released the 68000, Stanford's Forest Baskett,
in a paper I have sadly lost, proposed a solution to the 68000's issue
of not saving enough information when an illegal memory exception
occurred: instead of allowing the memory exception, return 'wait
states' to the processor - thus never letting it fault.
More specifically, the original microprocessor designs had a fixed time
in which the memory system needed to respond on a read or write cycle,
defined by the system clock. I don't have either Motorola 68000 or MOS
6502 processor books handy, but as a for-instance, IIRC on a 1 MHz MOS 6502
you had 100 ns for this operation. Because EPROMs of the day could not
respond that fast (IIRC 400 ns for a read), the CPU chip designers created a
special 'WAIT' signal that would tell the processor to look for the data
on the next clock tick (and the wait signal could be repeated on each
tick for an indefinite wait if need be). I.e., in effect, when running
from ROM a 6502 would run at .25 MHz if it was doing direct fetches from
something like an Intel 1702 ROM on every cycle. Similarly, early dynamic
RAM, while faster than ROM, had access issues with needing to ensure the
refresh cycles and the access cycles aligned. Static RAMs, which were fast,
did not have these problems and could interface directly to the processor, but
static RAMs of the day were quite expensive (5-10 times the cost of
DRAM), so small caches were built to front-end the memory system.
Hence, the HW 'wait state' feature became standard (and I think is still
supported in today's processors), since it allows the memory system
and the processor to run at differing speeds. I.e., the difference in
speed could be 'biased' to each side -> processor vs memory.
In his paper, Forest Baskett observed that if a HW designer used the wait
state feature, a memory system could be built that refilled the cache
using another processor, as long as that second processor, the one 'fixing
up' the exception (i.e. refilling the cache with proper data), could be
ensured never to take a memory fault itself.
Baskett's idea was exactly how both the original Apollo and Masscomp CPU
boards were designed. On the Masscomp CPU board, there was a 68000 that
always ran from a static-RAM-based cache (which we called the 'executor').
If it tried to access a memory location that was not yet in cache but
was a legal memory access, the memory system sent wait states until the
cache was refilled, as you might expect. If the location was illegal, the
memory system also returned an error exception as expected. However, if it
was a legal address but not yet in memory, the second 68000 (the 'fixor') was
notified of the desire to access that specific memory location and the fixor
ran the paging code to put the page into live memory. Once the cache
was properly set up, the executor could be released from the wait state
and the instruction allowed to complete.
When Motorola released the 68010 with the internal fixes that would allow
a faulting instruction to be restarted, Masscomp revised the CPU board
(creating the MPU board) to install a 68010 in the executor socket and
changed a few PALs in the memory system on the board. If a memory address
was discovered to be legal but not in memory, the executor (now a 68010)
was allowed to return a paging exception, the stack was saved
appropriately, and the executor did a context switch to a different process -
allowing the executor to do something useful while the fixor processed the
fault. We now had to add kernel code for the executor processor
to restart at the faulting instruction on a context switch back to
the original process. The original fixor code did not need to be changed,
other than to remove the clearing of the 'waiting flop' that
restarted the executor. [RTU would detect which board was plugged into a
specific system automatically, so it was an invisible change to the end user
- other than you got a few percent of performance back if there were a lot of
page faults in your application, since the executor never wait-stated.]
As for the way real time and analog I/O worked, which Mike commented upon:
yes, RTU - Real Time Unix - supported features that could guarantee that I/O
would not be lost from the DMA front end. For those not familiar with
the system, it was a 'federated system' with a number of subsystems: the
primary UNIX portion that I just described, and then a number of other
specialized processors that surrounded it for specific tasks. This
architecture was chosen because the market we were attacking was the
scientific laboratory, and in particular real-time oriented - for instance,
some uses were Mass General in the cardiac ward; on board every AWACS plane
to collect and retain all signals during any sortie [very interesting
application]; and NASA used 75 of them in Mission Control as the 'consoles'
you see on TV, etc. [which have only recently been replaced by PCs, as some
were still in use at least 2 years ago when I was last in Houston].
So besides the 2/4 68000s in the main UNIX system, there was another
dedicated 68000 running a graphics system on the graphics board, another
80186 running TCP/IP, and an AM2900 bit-slice processor called the Data
Acquisition and Control Processor (DA/CP) - which Mike hinted at - as well
as 29000s for the FP unit and array processor functionality. Using the
DA/CP in 1984, an MC-500 system could handle multiple 1 MHz 16-bit analog
signals just by sampling them -> in fact, a cute demo at the Computer
Museum in Boston simply connected a TV camera to the DA/CP and, without any
special HW, it could 'read' the video signal (connecting a TV camera
normally takes a bunch of special logic to interface to the TV
signals -- in this case it was just SW in the DA/CP).
The resultant OS was RTU, where we had modified a combination of 4.1BSD and
System III 'unices' [later updated to 4.2 and V.2] to support RT behaviors,
with pre-emption, ASTs [which I still miss - a great idea, nicer than
traditional UNIX signals], async I/O, etc. Another interesting idea was
the concept of 'universes' that allowed a user, on a per-user basis, to see
a BSD view of the system or a System V view.
One of the important things we did in RTU was rethink the file system
(although only a little then). Besides all the file types and storage
schemes you have with UFS, Masscomp added support for pre-allocated
extent-based files (like VMS and RSX) and then created a new file type
called the contiguous file that used it [by the time of Stellar and
Stellix, we just made all files extent-based and got rid of the special
file type - i.e. it looked like UFS to the user, but under the covers was
fully extent-based]. So under RTU, once we had preallocated files that we
knew were on contiguous blocks, we could make all sorts of speed-ups and
guarantees that traditional UNIX cannot, because of its randomized location
of disk blocks.
So the feature that Mike was using was actually less a UNIX change, and did
not actually use the preemption and other types of RT features. We had
added the ability for the DA/CP to write/read information directly from the
disk without the OS getting in the way (we called it Direct-to-Disk I/O).
An RTU application created a contiguous file on disk large enough to store
the data the DA/CP might receive - many megabytes typically - but the data
itself was never passed through or stored in the UNIX buffer cache etc.
The microcode of the DA/CP and the RTU disk driver could and would cooperate,
and all kernel or user processing on the UNIX side was bypassed. Thus,
when the real-time data started to be produced by the DA/CP (say the AWACS
started getting radio signals on one of those 16-bit analog converters), the
digital representations of those analog signals were stored in the user's
file system independent of what RTU was doing.
The same idea, by the way, was why we chose to run TCP/IP on a
co-processor. So much of network I/O is 'unexpected' that we could
potentially lose the ability to make real-time guarantees. By keeping
all the protocol work outside of UNIX, the network I/O just operated on
'final bits' at the rate that was right for it. As an interesting aside,
this worked until we switched to a Motorola 68020 at 16 MHz or 20 MHz
(whichever it was, I've forgotten now) in the MC-5000 system. The host
ended up being so much faster than the 80186 in the ethernet coprocessor
board that we ended up offering a mode where they swapped roles and just used
the '186 board as a very heavily buffered ethernet controller, so we could
keep protocol processing at very low priority compared to other tasks.
However, if we did that, it did mean some of the system guarantees had to be
removed. My recollection is that only a couple of die-hard RT style
customers ended up continuing to use the original networking configuration.
Clem
> From: Dave Horsfall <dave(a)horsfall.org>
> Yep, and I'm glad that I had bosses to whom I didn't have to explain why
> my comments were so voluminous.
>
> And who first said "Write your code as though the next person to maintain
> it is a psychotic axe-wielding murderer who knows where you live"? I've
> often thought that way (as the murderer I mean, not the murderee).
>
> I'd name names, but he might be on this list...
>
I’ve always said that the person was a ‘reformed’ axe-wielding murderer
who knows where you live.
Please, a little decorum.
W.R.T. comments in code, and a bit of Unix, when I taught Unix Systems
programming at UCSD one of my students wanted to use his personal
computer for the programming. I didn’t care as long as it would pass the
tests I ran after the fact.
After his second homework was turned in, I stopped looking at his code
or even running it, as the comment blocks were just so well done. Nary
a comment in the function itself, just a block before each one very clearly
explaining what was happening.
David
"I was told in school (in 1985) that if I was ever lucky enough to
have access to the unix source, I'd find there were no comments. The
reason, I was told at the time, was that comments would make the
source code useful, and selling software was something AT&T couldn't
do due to some consent decree."
I can't speak for SYS V, but no such idea was ever mentioned in
Research circles. Aside from copyright notices, the licensing folks
took no interest in comments. Within research there was tacit
recognition of good coding style--Ken's cut-to-the-bone code was
universally admired. This cultural understanding did not extend
to comments. There was disparagement for the bad, but not honor
for the good. Whatever comments you find in the code were put
there at the author's whim.
My own commenting style is consistent within a project, but
wildly inconsistent across projects, and not well correlated
with my perception of the audience I might be writing for.
Why? I'm still groping for the answer.
For important code, custom is to describe it in a separate
paper, which is of course not maintained in parallel with
the code. In fact, comments are often out of date, too.
Knuth offered the remedy of "literate programming", which
might help in academic circles. In business, probably not.
Yet think of the possibility of a "literate spec", where
the code grows organically with the understanding of what
has to be done.
Doug
> From: Warren Toomey <wkt(a)tuhs.org>
> there is next to no commenting in the early code bases.
By 'early' you must mean the first 'C' PDP-11 Unixes, because certainly
starting with V6, it is reasonably well commented (to the point where I like
to say that I learned how to comment by reading the V6 code), e.g.:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/slp.c
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/dmr/bio.c
to pick examples from each author; and there are _some_ comments in the
assembler systems (both PDP-7 and PDP-11).
> Given that the comments never made it into the compiled code, there was
> no space reason to omit comments. There must have been another reason.
I was going to say 'the early disks were really small', but that hypothesis
fails because the very earliest versions (in assembler) do have some comments.
Although assembler is often so cryptic, the habit of putting a comment on each
instruction isn't so unreasonable.
So maybe the sort of comments one sees in assembler code (line-by-line
descriptions of what's happening; for subroutines, which arguments are in
which registers; etc) aren't needed in C code, and it took a while for them to
work out what sort of commenting _was_ appropriate/useful for C code?
The sudden appearance in V6 does make it seem as if there was a deliberate
decision to comment the code, and they went through it and added them in a
deliberate campaign.
> From: Andy Kosela <akosela(a)andykosela.com>
> "Practice of Programming" by Rob Pike and Brian Kernighan.
> ...
> They also state: "Comments ... do not help by saying things the code
> already plainly says ... The best comments aid ... by briefly pointing
> out salient details or by providing a larger-scale view of the
> proceedings."
Exactly.
Noel
Revision 1.1, Sun Mar 21 09:45:37 1993 UTC (25 years ago) by cgd
http://cvsweb.netbsd.org/bsdweb.cgi/src/sbin/init/init.c?rev=1.1&content-ty…
Today is commonly considered the birthday of NetBSD.
Theo told me (seven years ago) that he, cgd, and glass (and one other
person) planned it within 30 minutes after discussing with the CSRG and
BSDI guys in the hot tub at the Town & Country Resort in San Diego at
the January 25-29 1993 USENIX conference. (Does anyone have more to
share about this discussion?) Soon, cgd had set up a CVS repository
(forked 386BSD with many patchkits) which was re-rolled a few times (due
to corrupted CVS). (So maybe March 21 is later than the real birthday.)
As far as I know, it is the oldest continuously-maintained complete
open source operating system. (It predates Slackware Linux, FreeBSD,
and Debian Linux by some months.)
"NetBSD" wasn't mentioned by name in the April 19, 1993 release files
(but was named in the announcement).
ftp://ftp.netbsd.org/pub/NetBSD/misc/release/NetBSD/NetBSD-0.8
On April 28, the kernel was renamed to /netbsd, the boot loader
identified it as NetBSD, and various references of 386BSD were changed
to NetBSD.
https://github.com/NetBSD/src/commit/a477732ff85d5557eef2808b5cbf221f3c7455…
https://github.com/NetBSD/src/commit/446115f2d63299e52f34977fb4a88c289dcae9…
On 2018-03-21 14:48, Paul Winalski<paul.winalski(a)gmail.com> wrote:
>
> On 3/20/18, Clem Cole<clemc(a)ccc.com> wrote:
>> Paul can correct me, but I don't think DEC even developed a Pascal for TOPS
>> originally - IIRC the one I used came from the universities. I think the
>> first Pascal sold was targeted for the VAX. Also, RT11 and RSX were
>> 'laboratory' systems and those systems were dominated by Fortran back in
>> the day - so DEC marketing thought in those terms.
>>
> DEC did do a Pascal for RSX. I don't remember if it supported RT11 or
> RSTS. DEC did a BASIC compiler for RSTS and RSX. RSX and especially
> RT were designed mainly for real-time process control in laboratories.
DEC did both COBOL, DIBOL, PASCAL, FORTRAN (-IV, -IV-PLUS, -77), C as
well as Datatrieve for RSX and RSTS/E. Some of these were also available
for RT-11. Admittedly, the C compiler was very late to the game.
> A lot of the programming was in assembler for efficiency reasons
> (both time and space).
Yes. And MACRO-11 is pretty nice.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Let's see how much this thread can drift...
The venerable PDP-8 was introduced in 1965 today (or tomorrow if you're on
the wrong side of the date line). It was the first computer I ever used,
back around 1970 (I think I'd just left school and was checking out the
local University's computer department, and played with BASIC and FOCAL).
And (hopefully) coincidentally the Pentium first shipped in 1993; the
infamous FDIV defect was discovered a year later (and it turned out that
Intel was made aware of it by a post-grad student a bit earlier), and what
followed next was an utter farce, with some dealers refusing to accept the
results of a widely-distributed program as evidence of a faulty FPU.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> From: "Steve Johnson"
So, I have this persistent memory that I read, in some early Multics (possibly
CTSS, but ISTR it was Multics) document, a footnote explaining the origin of
the term 'daemon'. I went looking for it, but couldn't find it in a 'quick'
scan.
I did find this, though, which is of some interest: R. A. Freiburghouse, "The
Multics PL/1 Compiler" (available online here:
http://multicians.org/pl1-raf.html
if anyone is interested).
> There was a group that was pushing the adoption of PL/1, being used to
> code Multics, but the compiler was late and not very good and it never
> really caught on.
So, in that I read:
The entire compiler and the Multics operating system were written in EPL, a
large subset of PL/1 ... The EPL compiler was built by a team headed by
M. D. McIlroy and R. Morris ... Several members of the Multics PL/1 project
modified the original EPL compiler to improve its object code performance,
and utilized the knowledge acquired from this experience in the design of
the Multics PL/1 compiler.
The EPL compiler was written when the _original_ PL/1 compiler (supposedly
being produced by a consulting company, Digitek) blew up. More detail is
available here:
http://multicians.org/pl1.html
I assume it's the Digitek compiler you were thinking of above?
Noel
We lost computer pioneer John Backus on this day in 2007; amongst other
things he gave us FORTRAN (yuck!) and BNF, which is ironic, really,
because FORTRAN has no syntax to speak of.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
I've put online at https://dspinellis.github.io/unix-history-man/ nine
timelines detailing the evolution of 15,596 unique documented facilities
(commands, system calls, library functions, device drivers, etc.) across
93 major Unix releases tracked by the Unix history repository.
For each facility you get a timeline starting from the release it first
appeared. Clicking on the timeline opens up the manual page for the
corresponding release. (Sadly, the formatting is often messed up,
because more work is needed on the JavaScript troff renderer I'm using.)
The associated scripts and the raw data are all available on GitHub.
Diomidis
A while ago someone was asking about the mt Xinu Unix manuals. I have
found a complete set, currently owned by Vance Vaughan, one of the mt
Xinu founders. He is willing to donate them to Warren's Unix archive.
However, they are too expensive to ship to Australia.
Would anyone be willing to scan them in for the archive? Ah, there are
a lot of them (8? volumes). If so, I might be able to ship them to
somewhere in the US.
Let me know.
Thanks.
Deborah
Peter Guthrie Tait (1831--1901) seems to have recorded the oldest
mention of the thermodynamic demon of James {Clerk Maxwell} in the
page 213 image from Tait's book ``Sketch of Thermodynamics'' at
https://archive.org/stream/lifescientificwo00knotuoft#page/212/mode/2up
that was posted to this list by Bakul Shah <bakul(a)bitblocks.com> on
Tue, 20 Mar 2018 12:10:37 -0700.
I've been working on a bibliography (still unreleased) of Clerk
Maxwell, and the oldest reference that I had so far found to Maxwell's
demon is from an address by Sir William Thomson (later raised to Lord
Kelvin) entitled
The sorting demon of Maxwell: [Abstract of a Friday evening
Lecture before the Royal Institution of Great Britain,
February 28, 1879]
Proceedings of the Royal Institution of Great Britain 9,
113--114 (1882)
However, I've not been able to find that volume online. Hathi Trust
has only volumes 30--71, with numerous holes, and often, it will not
show page contents at all. The journal issue is old enough that few
university libraries are likely to have it, but it is probably
available through the Interlibrary Loan service.
I had also recorded
Harold Whiting
Maxwell's demons
Science (new series) 6(130), 83, July 1885
https://doi.org/10.1126/science.ns-6.130.83
and
W. Ehrenberg
Maxwell's demon
Scientific American 217(5) 103--110, November 1967
https://doi.org/10.1038/scientificamerican1167-103
plus numerous later papers and books.
I also went through a score of books on my shelf about physics or
thermodynamics, and finally found a brief mention of Maxwell's demon
in G. N. Lewis & M. Randall's famous text ``Thermodynamics'', first
published in 1923 (I have a 1961 reprint). The other books that I
checked remain strangely silent on that topic.
The Oxford English Dictionary (OED) online has this definition and
etymology:
>> ...
>> Maxwell's demon n. (also Maxwell demon) an entity imagined by Maxwell
>> as allowing only fast-moving molecules to pass through a hole in one
>> direction and only slow-moving ones in the other direction, so that if
>> the hole is in a partition dividing a gas-filled vessel, one side
>> becomes warmer and the other cooler, in contradiction of the second
>> law of thermodynamics.
>>
>> 1879 W. Thomson in Proc. Royal Inst. 9 113 Clerk Maxwell's `demon' is
>> a creature of imagination.., invented to help us to understand the
>> `Dissipation of Energy' in nature.
>>
>> 1885 Science 31 July 83/1 (heading) Maxwell's demons.
>>
>> 1956 E. H. Hutten Lang. Mod. Physics iv. 152 It would require a
>> Maxwell demon..to select the rapidly moving molecules according to
>> their velocity and concentrate them in one corner of the vessel.
>>
>> 1971 Sci. Amer. Sept. 182/2 Maxwell's demon became an intellectual
>> thorn in the side of thermodynamicists for almost a century. The
>> challenge to the second law of thermodynamics was this: Is the
>> principle of the increase of entropy in all spontaneous processes
>> invalid where intelligence intervenes?
>>
>> 1988 Nature 27 Oct. 779/2 Questions about the energy needed in
>> measurement began with Maxwell's demon.
>> ...
For the word `daemon', the OED has this:
>> ...
>> Etymology: Probably an extended use of demon ....
>>
>> A program (or part of a program), esp. within a Unix system, which
>> runs in the background without intervention by the user, either
>> continuously or only when automatically activated by a particular
>> event or condition. A distinction is sometimes made between the form
>> daemon, referring to a program on an operating system, and demon,
>> referring to a portion of a program, but the forms seem generally to
>> be used interchangeably, daemon being more usual.
>>
>> 1971 A. Bhushan Request for Comments (Network Working Group)
>> (Electronic text) No. 114. 2 The cooperating processes may be
>> `daemon' processes which `listen' to agreed-upon sockets, and
>> follow the initial connection protocol.
>>
>> 1983 E. S. Raymond Hacker's Dict. 53 The printer daemon is just a
>> program that is always running; it checks the special directory
>> periodically, and whenever it finds a file there it prints it
>> and then deletes it.
>>
>> 1989 DesignCenter ii. 41/3 The file server runs a standard set of
>> HP-UX system and network daemons.
>>
>> 1992 New Scientist 18 Jan. 35/2 These programs, which could recognise
>> simple patterns, were made up of several independent
>> information-processing units, or `demons', and a `master
>> demon'.
>>
>> 2002 N.Y. Times 7 Mar. d4/5 A mailer daemon installed on an e-mail
>> system can respond to a piece of incorrectly addressed e-mail
>> by generating an automated message to the sender that the
>> message was undeliverable.
>> ...
----------------------------------------
From The Hacker's Dictionary (1983), reproduced in the Emacs info node
Jargon, I find another `explanation' of daemon:
>> ...
>> :daemon: /day'mn/ or /dee'mn/ /n./ [from the mythological
>> meaning, later rationalized as the acronym `Disk And Execution
>> MONitor'] A program that is not invoked explicitly, but lies
>> dormant waiting for some condition(s) to occur. The idea is that
>> the perpetrator of the condition need not be aware that a daemon is
>> lurking (though often a program will commit an action only because
>> it knows that it will implicitly invoke a daemon). For example,
>> under {{ITS}} writing a file on the {LPT} spooler's directory
>> would invoke the spooling daemon, which would then print the file.
>> The advantage is that programs wanting (in this example) files
>> printed need neither compete for access to nor understand any
>> idiosyncrasies of the {LPT}. They simply enter their implicit
>> requests and let the daemon decide what to do with them. Daemons
>> are usually spawned automatically by the system, and may either
>> live forever or be regenerated at intervals.
>>
>> Daemon and {demon} are often used interchangeably, but seem to
>> have distinct connotations. The term `daemon' was introduced to
>> computing by {CTSS} people (who pronounced it /dee'mon/) and
>> used it to refer to what ITS called a {dragon}. Although the
>> meaning and the pronunciation have drifted, we think this glossary
>> reflects current (1996) usage.
>> ...
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> I'll have to redo my kludgy fix to gmtime() ... I guess I'll have to fix
> it for real, instead of my kludgy fix (which extended it to work for
> 16-bit results). :-)
> ...
> And on the -11/23:
> Note that the returned 'quotient' is simply the high part of the dividend.
Heh. I had decided that the easiest clean and long-lived fix was to just to do
it right, using the long division routine used in the V7 C compiler runtime:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/libc/crt/ldiv.s
and I vaguely recalled reading a DMR story that talked about that, so just for
amusement I decided to re-read it, and looked it up:
https://www.bell-labs.com/usr/dmr/www/odd.html
(the section "Comments I do feel guilty about"), and it's lucky I did, because
I found this:
Addendum 18 Oct 1998
Amos Shapir of nSOF (and of long memory!) just blackened (or widened) the
spot a bit more in a mail message, to wit:
'I gather the "almost" here is because this trick almost worked... It has a
nasty bug which I had to find the hard way!
The "clever part" relies on the fact that if the "bvc 1f" is not taken, it
means that the result could not fit in 16 bits; in that case the long value
in r0,r1 is left unchanged. The bug is that this behavior is not documented;
in later models (I found this on an 11/34) when the result does fit in 16
bits but not in 15 bits ... which makes this routine provide very strange
results!'
So this code won't work on an 11/23 either (which bashes the low register of
the pair, as shown above). I'd have been groveling in buggy math, again...
Caveat Haquur (if you're trying to run stock V7 on a /23 or /34)!
Noel
So, I have discovered, to my astonishment, that the double-word version of the
DIV instruction on the PDP-11 won't do a divide if the result won't fit into
15 bits. OK, I can understand it bitching if the quotient wouldn't fit into 16
bits - but what's the problem with returning an unsigned quotient?
And, just for grins, the results left in the registers which hold the quotient
and remainder are different in the -11/23 (KDF11-A) and the -11/73 (KDJ11-A).
(Although, to be fair, the PDP-11 Architecture Manual says 'register contents
are unpredictable if there's an overflow'.)
Oh well, guess I'll have to redo my kludgy fix to gmtime() (the distributed
version of which in V6 has a problem when the number of 8-hour periods since
the epoch overflows 15 bits)! I guess I'll have to fix it for real, instead of
my kludgy fix (which extended it to work for 16-bit results). :-)
I discovered this when I plugged in an -11/73 to make sure the prototype QSIC
(our RK11/etc emulator for the QBUS) worked with the -11/73 as well as the
-11/23 (which is what we'd mostly been using - when we first started working
on the DMA and interrupts, we did try them both). I noticed that with the
-11/73, the date printed incorrectly:
Sun Mar 10 93:71:92 EST 1991
After a certain amount of poking and prodding, I discovered the issue - and
on further reading, discovered the limitation to 15-bit results.
For those who are interested in the details, here's a little test program that
displays the problem:
int a, b, d;		/* a,b: high and low words of the dividend; d: divisor */
int r, m;
extern int ldivr;	/* remainder, stored by ldiv() below */

r = ldiv(a, b, d);
m = ldivr;
printf("a: 0%o %d. b: 0%o %d. d: 0%o %d.\n", a, a, b, b, d, d);
printf("q: 0%o %d. r: 0%o %d.\n", r, r, m, m);
and, for those who don't have V6 source at hand, here's ldiv():
mov 2(sp),r0		/ high word of dividend
mov 4(sp),r1		/ low word of dividend
div 6(sp),r0		/ r0,r1 / 6(sp): quotient to r0, remainder to r1
mov r1,_ldivr		/ save the remainder where C can pick it up
rts pc			/ return with the quotient in r0
So here are the results, first from a simulator:
tld 055256 0145510 070200
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 0147132 -12710. r: 037110 15944.
This is _mathematically_ correct: 055256,0145510 = 1521404744., 070200 =
28800., and 1521404744./28800. = 0147132.
And on the -11/23:
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 055256 23214. r: 037110 15944.
Note that the returned 'quotient' is simply the high part of the dividend.
And on the -11/73:
a: 055256 23214. b: 0145510 -13496. d: 070200 28800.
q: 055256 23214. r: 0145510 -13496.
Note that in addition to the quotient behaviour, as with the /23, the
'remainder' is the low part of the dividend.
Noel
> From: Paul McJones <paul(a)mcjones.org>
> I suspect the CPU architect (Gene Amdahl -- not exactly a dullard)
> intended programmers store array elements at increasing memory
> addresses, and reference an array element relative to the address of the
> last element plus one. This would allow a single index register (and
> there were only three) to be used as the index and the (decreasing)
> count.
I suspect the younger members of the list, who've only ever lived in a world
in which one lights one's cigars with mega-gates, so to speak, may be missing
the implication here.
Back when the 704 (a _tube_ machine) was built, a register meant a whole row
of tubes. That's why early machines had few/one register(s).
So being able to double up on what a register did like this was _HYYUUGE_.
Noel
On 3/17/2018 8:54 AM, Dave Horsfall <dave(a)horsfall.org> wrote:
> ... Was it the 704, or the 709? I recall that the
> array indexing order mapped directly into its index register or something
> ...
It first ran on the IBM 704, whose index registers subtracted (as did
the follow-on 709, 7090, etc), so array indexing went from higher memory
addresses to lower.
> The bookshelf: I had most of those books once; what's the one on the
> bottom right? It has a "paperback" look about it, but I can't quite make
> it out because of the reflection on the spine.
I'm not sure, and things have shifted since then on the shelves, but I
sent the original photo to your email address.
On 3/17/2018 12:22 PM, Steve Simon <steve(a)quintile.net> wrote:
> on the subject of fortran’s language, i remember hearing tell of a French version. anyone ever meet any?
Yes: here is the French version of the original Fortran manual, with
keywords in French (via
http://www.softwarepreservation.org/projects/FORTRAN/)
Anonymous. FORTRAN Programmation Automatique de L'Ordinateur IBM 704 :
Manuel du Programmeur. IBM France, Institut de Calcul Scientifique,
Paris. No date, 51 pages. Given to Paul McJones by John Backus.
http://archive.computerhistory.org/resources/text/Fortran/102663111.05.01.a…
Dave Horsfall <dave(a)horsfall.org> wrote:
> We lost computer pioneer John Backus on this day in 2007; amongst other
> things he gave us FORTRAN (yuck!) and BNF, which is ironic, really,
> because FORTRAN has no syntax to speak of.
I think of FORTRAN as having established the very idea of high-level programming languages. For example, John McCarthy’s first idea for what became LISP was to extend FORTRAN with function subroutines written in assembly language for list-manipulation. (He had to give up on this idea when he realized a conditional expression operator wouldn’t work correctly, since both the then-expression and the else-expression would be evaluated before the condition was tested.)
The original FORTRAN compiler also pioneered code optimization, generating code good enough for the users at the physics labs and aerospace companies. For more on this compiler, see:
http://www.softwarepreservation.org/projects/FORTRAN/
Disclosure: I worked with John in the 1970s (on functional programming) — see:
http://www.mcjones.org/dustydecks/archives/2007/04/01/60/ .
Paul McJones
The first Internet domain, symbolics.com, was registered in 1985 at 0500Z
("Zulu" time, i.e. UTC).
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
> So what are its origins? Where did it first appear?
It was a direct copy from CTSS, which already had it
in 1965 when we BTL folk began to use it.
The greatest MOTD story of all time happened at CTSS.
To set the stage, the CTSS editor made a temp file,
always of the same name, in one's home directory.
The MOTD was posted by the administrator account.
The password file was plain text, maintained by
editing it.
And multiple people had access to the administrator
account.
It happened one day that one administrator was
working on the password file at the same time
another was posting MOTD. The result: the password
file (probably the most secret file on the system)
got posted as the MOTD (the most public).
Upon seeing the password file type out before him,
an alert user shut the machine down by writing
and running one line of assembly code:
HERE TRA *HERE
(The star is for indirect addressing, and indirection
was transitive.)
Doug