On 2016-03-27 23:49, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
>
> > From: Johnny Billquist
>
> > It would also be interesting if anyone can come up with a good reason
> > why SPL should work that way.
>
> So that when doing:
>
> SPL 0
> WAIT
>
> you don't lose by having the interrupt happen between the SPL and the WAIT?
Hmm. A good point. If you depend on WAIT waking you up on an interrupt,
then you need SPL to block here. But this also means that you must be at
SPL 7 before any of this, otherwise you are still exposed to this
problem (nothing says that the interrupt won't happen before the SPL as
well).
In general, I would say that this is not the way I would write code, but
checking in RSX and 2.11BSD I can tell that RSX does not use this pattern,
and does a WAIT without any SPL, while 2.11BSD does an SPL 0 followed by
WAIT. And the routine in 2.11BSD is also called at SPL 7.
So obviously, both ways have been done, and 2.11BSD will potentially work
with a slight degradation if SPL does not block interrupts. It will still
work fine, as you will, at a minimum, get an interrupt at the next clock
tick, which will wake it up. But it might possibly be sitting in a WAIT
slightly longer than required.
RSX in fact just loops after the WAIT. If an interrupt should cause the
system to be able to do something more productive, it will not return to
the idle loop. So yes, it also detects, in the interrupt exit processing,
that it was/is in the idle loop.
I still haven't had time to investigate properly. But at least the processor
and chip manuals do not say that SPL will block interrupts. But that is
no guarantee that it doesn't in reality.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Dave Horsfall <dave(a)horsfall.org>
> SPL 7 was only used by the clock interrupt
Err, according to the 1975 Peripherals Handbook, both are BR6. (Sorry, only
interested in accuracy.)
> even the published Unibus spec was known to be wrong, in order to keep
> 3rd-party kit out of it (it was something subtle to do with buss timing,
> so sometimes the card worked, and sometimes it didn't, doing wonders for
> your reputation).
I don't know about that, but we built two UNIBUS DMA networking devices,
relying on the UNIBUS description in the 1975 Peripherals Handbook, and they
both worked fine (one became a product for Proteon).
> Slightly longer? I think it was Lions himself who used to teach us that
> a lost interrupt is nasty :-(
The interrupt isn't lost, it's just that the OS does a WAIT when it should
perhaps return and start up some user process - but that resumption of doing
user computations is delayed by at most 1 clock tick (some other device may
interrupt during the WAIT, before the clock does).
> Anyone here remember overlapped seeks on the RK-11 failing under Unix
I'd be interested in the details of this. The V6 RK driver didn't use them,
but the RK11-D does claim to support them (having spent a modest amount of
time looking at the drawings), so I'd very much like to know what the bug was.
> I know that Kevin Dawson (I think) tried it on my /40 as well
The 11/40 does not have the SPL instruction; see the '75-'76 PDP-11 Processor
Handbook, pg. 4-5. (Again, sorry, just want to be accurate.)
> Christ, but this is starting to sound like some religion or other.
I am only interested in correct data.
Noel
> From: Johnny Billquist
> this also means that you must be at SPL 7 before any of this
Yes, I assumed that (since it wouldn't make sense otherwise :-).
> In general, I would say that this is not the way I would write code, but
> ... 2.11BSD do an SPL 0 followed by WAIT.
Right; even if one does something like have every interrupt set a flag (which
is cleared while interrupts are disabled), and check that after lowering the
priority, but before doing the WAIT, there's _still_ a window between that
check and the WAIT (although I guess it's less likely to be hit, since the
interrupt request would have to be posted _in that window_, not be hanging
around waiting to be serviced).
The only way (that I can work out) to atomically lower the priority and wait
is to do an RTI with the PC on the stack pointing to the WAIT instruction, but
I'm not sure even that is guaranteed to work.
I guess it all depends on the CPU implementation: does it check for pending
interrupts before each instruction, or only at the end of each instruction, or
what? If before, and there's an interrupt pending, it will go off before the
WAIT is executed. Although I suppose if it's at the end, it would do the check
at the end of the RTI, and do the interrupt then.
And whether it's at the end, or the beginning, WAIT itself must be special
cased, to check for pending interrupts during the execution (which can take an
indeterminate amount of time).
> 2.11BSD will work potentially with a slight degration if the SPL do not
> block interrupts. It will still work fine, as you will, at a minimum,
> get an interrupt at the next clock tick, which will wake it up. But it
> might possibly be sitting in a WAIT slightly longer than required.
Yes, exactly.
> RSX in fact just loops after the WAIT. If an interrupt should cause the
> system to be able to do something more productive, it will not return to
> the idle loop. So yes, it also detects in the interrupt exit processing,
> that it was/is in the idle loop.
Does it detect if it was _before_ the WAIT instruction? I would assume it does,
but I don't know anything about RSX.
> But at least processor and chip manuals do not say that SPL will block
> interrupts.
Yes, I looked too, in a variety of places (PDP-11 Architecture, including in
the 'model differences' table, 11/73 Tech Manual, etc). Crickets...
Noel
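To make that window concrete, here is a minimal C-flavoured sketch of the idle loop being discussed. It is only an illustration: spl7(), spl0() and wait_for_interrupt() are hypothetical stand-ins for the PDP-11 SPL 7, SPL 0 and WAIT instructions, and work_pending stands for the flag an interrupt handler would set; a real kernel does all of this in a few machine instructions.

    /* Hypothetical primitives standing in for SPL and WAIT. */
    extern void spl7(void);                 /* raise priority: block interrupts */
    extern void spl0(void);                 /* lower priority: allow interrupts */
    extern void wait_for_interrupt(void);   /* the WAIT instruction */

    static volatile int work_pending;       /* set by interrupt handlers */

    void idle(void)
    {
        for (;;) {
            spl7();
            if (work_pending) {             /* safe: interrupts are blocked */
                work_pending = 0;
                spl0();
                return;                     /* go find something to run */
            }
            spl0();
            /* The window: an interrupt landing right here sets
             * work_pending, but we fall into WAIT anyway and only
             * notice at the next interrupt (e.g. the next clock tick).
             * If SPL really does hold off interrupt recognition until
             * after the following instruction, the window closes. */
            wait_for_interrupt();
        }
    }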
> From: Warren Toomey
> I thought it would be nice to get a feel for what it was like to use a
> real tty
Make sure it only prints 10 characters per second, then. (I think TTYs were
10 cps?) R-e-a-l-l-y s-l-o-w.
Noel
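For anyone who wants to fake that on a modern system, a tiny sketch (mine, not anything from the thread): copy stdin to stdout at roughly 10 characters per second.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int c;

        while ((c = getchar()) != EOF) {
            putchar(c);
            fflush(stdout);       /* show each character as it "prints" */
            usleep(100000);       /* about 100 ms per character = ~10 cps */
        }
        return 0;
    }

Pipe anything through it (compiled to, say, ttypace) and enjoy the wait.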
On 2016-03-27 08:18, Greg 'groggy' Lehey<grog(a)lemis.com> wrote:
> Isn't it wonderful that we no longer have issues with character
> representation?
I hope that comment was meant as a joke, ironic, cynical, or whatever...
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Johnny Billquist
> It would also be interesting if anyone can come up with a good reason
> why SPL should work that way.
So that when doing:
SPL 0
WAIT
you don't lose by having the interrupt happen between the SPL and the WAIT?
Noel
On 2016-03-27 03:50, Dave Horsfall<dave(a)horsfall.org> wrote:
>
> On Fri, 25 Mar 2016, Johnny Billquist wrote:
>
>>> Some instructions inhibit the "check for interrupts at the end of this
>>> instruction" check. I'm most familiar with the 8080 EI instruction,
>>> which enabled interrupts after the following instruction (so things
>>> like EI;HLT didn't have a window). It seems the PDP-11 SPL behaves
>>> the same.
>>
>> I don't think it should on the PDP-11, and the documentation do not
>> mention any such thing.
> It most certainly did, at least on the 11/70 that I used... Do you have
> experience otherwise?
I do not have any experience either way. I have never checked this. I'm
just saying that it doesn't make sense in my head, and the processor
handbook does not describe such a property of SPL. But now that I know,
I'm going to try and find out.
It might be correct. I'm just surprised if so, since there is no
technical need for SPL to act that way. And having SPL behave
differently than all other instructions means extra work for the people
who wrote the microcode.
It would also be interesting if anyone can come up with a good reason
why SPL should work that way.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
As attached, thanks to someone over on the RTTY list; not sure if it's
exactly what he wanted.
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
---------- Forwarded message ----------
Date: Sat, 26 Mar 2016 18:52:59 -0700
From: tony.podrasky
To: Dave Horsfall <dave(a)horsfall.org>
Subject: Re: [GreenKeys] Teletype simulator? (fwd)
Hi Dave;
Attached is my TTY test program.
It is fairly simple. The only thing that might be
tricky is the type of UAR/T you are using.
Let me know if it compiles.
regards,
tony
On 03/26/2016 06:33 PM, Dave Horsfall wrote:
> On Fri, 25 Mar 2016, tony.podrasky wrote:
>
> > What do you mean, "a non-Windoze" TTY simulator?
>
> One that's in source form, not binary...
>
> > One of the programs I have takes e-mail and prints it on a 5-level ITA#2
> > machine.
> >
> > It is written in "C", and at such a low-level that it should compile on
> > ANY thing that runs "C".
>
> Got a copy you can send me to pass on?
>
--
"I read somewhere that 77 percent of all the
mentally ill live in poverty. Actually, I'm more
intrigued by the 23 per cent who are apparently
doing quite well for themselves." -Jerry Garcia
On 2016-03-24 03:00, "Ron Natalie"<ron(a)ronnatalie.com> wrote:
>
>> Closest I've ever been murdered was when I "accidentally" filled the local
>> 11/70 with an uninterruptible instruction sequence.
> SPL instruction. The PDP-11 was odd in that while SPL was a "privileged"
> instruction, rather than trapping if you did it in user mode, it just
> "ignored" it.
> Well, what it ignored was the actual change of the processor level. What
> it still implemented was the side effect that interrupts were locked out
> until the next instruction fetch.
> If you filled your instruction space up with SPLs you could lock up the
> computer so that even the HALT key didn't work (you had to do a bus RESET).
Ok. Color me stupid, but I don't get it. I totally do not understand how
this locks anything out.
It is the normal behavior of any instruction that interrupts are not
recognized until the next instruction fetch. This is how the microcode
works, and it is also pretty much the same in any processor today.
Except for instructions that take a long time, and which can be
interrupted in the middle, the context preserved, and the instruction
restarted and continued, instructions are normally atomic. You cannot
get interrupts in the middle of an instruction.
Second, I cannot understand how filling the memory with SPL instructions
(or any other instruction) can lock up the CPU. As noted, they are
individual instructions. You still get a fetch between each instruction,
at which point interrupts will be recognized.
Now, if you instead talked about actually raising the CPU to SPL 7, then
I agree that no interrupts will happen. But that is because you
essentially disabled interrupts.
The front panel still works, though. It is not handled like an interrupt,
but it is true that it does interact with the processor states, and
normally if you pull HALT, it will only halt when it's going to fetch
the next instruction. You can, of course, also set the front panel
switch for single microcode instruction, at which point the CPU will
halt at the next microcode instruction instead, and you can single step
the microcode as well.
The one CPU I know you can lock up this way is the KA10 (PDP-10). I'm sure
there are others, but I have never seen how this could be done on a PDP-11,
so I'm most curious about this, and if you can provide more details I would
be most interested. As I also happen to know where a PDP-11/70 is
standing, I intend to test this out the next time I get close to it.
As for the KA10 (I think it was the KA10, but it might have been the
PDP-6), the problem is related to the indirect addressing feature. Since
memory is 36 bits, but addresses only 18, you have plenty of bits to
play with. And one of them is the indirect bit. And if you refer to a
memory location that also has the indirect bit set, you get another
memory access to get the actual content. The fun thing happens if you
set the indirect bit and give your own address. This is then an
infinite memory reference. And the KA10 cannot be broken out of that
lookup. The only solution is to pull the power plug.
The CPU is essentially stuck in one state, just reading memory over and
over again. Later PDP-10 models have an explicit check in the microcode
in this loop to be able to break out of it.
Sorry for the offtopic content. :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
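A toy model of that KA10 behaviour (mine, and much simplified; the real machine does this on 36-bit words in microcode): a word whose indirect bit points back at its own address makes the effective-address calculation chase itself forever.

    #include <stdio.h>

    #define MEMSIZE   8
    #define INDIRECT  0400000u            /* the "indirect" bit */
    #define ADDRMASK  0377777u            /* 18-bit address field */

    static unsigned mem[MEMSIZE];

    /* Resolve an effective address.  The step limit is only here so the
     * demonstration terminates; the KA10 had no such escape hatch. */
    static unsigned effective_address(unsigned addr, int max_steps)
    {
        while (mem[addr] & INDIRECT) {
            addr = mem[addr] & ADDRMASK;
            if (--max_steps == 0) {
                printf("still chasing indirect words...\n");
                break;
            }
        }
        return addr;
    }

    int main(void)
    {
        mem[3] = INDIRECT | 3;            /* word 3 indirects to itself */
        effective_address(3, 10);
        return 0;
    }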
On 2016-03-26 20:43, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Fri, Mar 25, 2016 at 11:09 PM, Charles Anthony <
> charles.unix.pro(a)gmail.com> wrote:
>
>> And DEC's RADIX-50, packing 3 characters into 16 bits. (IIRC the origin of
>> the 6.3 filenames, but I can't document that.)
>
> Sort of.... before ASCII, DEC used a few other 5 bit codes that were
> around such as baudot (look at the PDP-1/4 etc and KSR 28). RAD50 was a
> natural scheme for storing file names and using bits efficiently.
>
> Which, of course, led to the abomination of case folding - it's not a bug,
> it's a feature 😂
>
> RAD50 gave us the x.y file name form with the implied dot et al. 6.3 and
> later 8.3 were natural directions from that coding. Using the .3 ext as a
> type tag of course followed that naturally given that's all that was stored
> in the disk "catalog." [And CP/M and PC/MS-DOS inherit that scheme -
> including the case folding silliness even though by that time all keyboards
> were upper and lower case and they stored the files in 8 bits].
Some other people already mentioned this, but...: SIXBIT. DEC might
have used baudot in the very early machines, but I would say that SIXBIT
dominated here for a long time. We see it in the PDP-8, but also in
the PDP-6 and its follow-ons. RAD50 was the natural extension of SIXBIT
on a machine that did not have a word size that was a multiple of 6.
The x.y filename, as well as the 6+3 pattern, predate the PDP-11. I would
say that in this area, the PDP-11 didn't come up with anything new, but
just made life more complicated.
OS/8 for sure only has 6+2 filenames, but still in the x.y form.
TOPS-10 has, I think, 6+3. And the Monitor (I think that was the name
for the PDP-6 OS) was, I think, also 6+3.
And it was all SIXBIT.
And SIXBIT also gives you the case folding.
I say the PDP-11 complicated life just because DEC was already so much
into having filenames stored more compactly than normal text, and having a
6+3 pattern, so they came up with R50, which fits the bill, but it's
more headache than it was worth, if you ask me.
Since the PDP-11 has 8-bit bytes, it would have made much more sense to
just store filenames as 8-bit bytes. It would have cost some more
storage, but not that much. But it took time for DEC to realize that the
space savings here were not really a good tradeoff. Old habits die hard,
I guess.
By the way, RSX (and early VMS) actually use 9+3 filenames.
> UNIX of course, would put the "type" in the file itself (magic #) and force
> the storing of the dot, but removed the strict mapping of name and type.
> Having grown up in both systems, I see the value of each; but agree I think
> I find UNIX's scheme better and lot more flexible.
I think I agree on the point of having filenames in a free format. Not
sure I really like storing the type in the file itself. So I'm sort of
torn. Or rather, I would like to keep the type separate from both.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
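For the curious, a small illustration (mine, not from the thread) of the two packings discussed above: SIXBIT simply subtracts 32 from the printable ASCII range, and RADIX-50 packs three characters from DEC's 40-character set (space, A-Z, $, ., one unused code, 0-9) into a single 16-bit word, which is how two words hold a 6-character name and a third word holds the 3-character type.

    #include <stdio.h>

    static unsigned sixbit(char c)          /* ' '..'_' -> 0..63 */
    {
        return ((unsigned char)c - 040) & 077;
    }

    static unsigned rad50_char(char c)      /* one character -> 0..39 */
    {
        if (c == ' ')             return 0;
        if (c >= 'A' && c <= 'Z') return 1 + (c - 'A');
        if (c == '$')             return 27;
        if (c == '.')             return 28;
        if (c >= '0' && c <= '9') return 30 + (c - '0');
        return 29;                          /* the code point DEC left unused */
    }

    static unsigned rad50(const char *s)    /* three characters -> one word */
    {
        return rad50_char(s[0]) * 1600 + rad50_char(s[1]) * 40 + rad50_char(s[2]);
    }

    int main(void)
    {
        /* A 6.3 name like "SWAP  .SYS" takes three 16-bit words. */
        printf("%06o %06o %06o\n", rad50("SWA"), rad50("P  "), rad50("SYS"));
        printf("SIXBIT 'A' = %02o\n", sixbit('A'));
        return 0;
    }

Since 40 * 40 * 40 - 1 = 63999, the three-character value always fits in 16 bits, which is the whole trick.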
Hey All,
I just saw in another thread the statement that there are no odd requests here.
So I thought I would test that.
The day NeXT took over Apple I read a page somewhere on the internet that started with this line:
Bow down to UNIX, children of Macintosh... It then went on in an Old Testament conquering tone about the new way of computing that was coming.
Does anybody have any idea who wrote this or where to find it?
Thanks,
Ben
Hello everyone,
I am Rocky and this is my first message. Before starting, I would like to thank you for all the valuable information and stories you post here.
About the history of Unix, I was wondering with another guy why the rc script has that name. As many of you already know, and according to the NetBSD, FreeBSD, and OpenBSD (current) manuals,
"The rc utility is the command script which controls" the startup of various services, "and is invoked by init(8)" (from DESCRIPTION).
"The rc command appeared in 4.0BSD" (from HISTORY).
The wording differs slightly between the three distributions, but the meaning and the information provided are the same. So, the etymology of rc does not appear in the man pages. Do you know how to recover it? Did (or do) the letters rc have some meaning in this context?
Cheers,
Rocky
On 2016-03-25 00:27, Milo Velimirovic wrote:
>
>> On Mar 24, 2016, at 6:06 PM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>
>> On 2016-03-24 23:50, Peter Jeremy wrote:
>>> On 2016-Mar-24 11:17:18 +0100, Johnny Billquist <bqt(a)update.uu.se> wrote:
>>>> It is the normal behavior of any instruction that interrupts are not
>>>> recognized until the next instruction fetch. This is how the microcode
>>>> works, and it is also pretty much the same in any processor today.
>>> ...
>>>> individual instructions. You still get a fetch between each instruction,
>>>> at which point, interrupts will be recognized.
>>>
>>> Some instructions inhibit the "check for interrupts at the end of this
>>> instruction" check. I'm most familiar with the 8080 EI instruction,
>>> which enabled interrupts after the following instruction (so things like
>>> EI;HLT didn't have a window). It seems the PDP-11 SPL behaves the same.
>>
>> I don't think it should on the PDP-11, and the documentation do not mention any such thing.
>> There is a good reason the 8080 (and Z80, and others) have this property. The RETI instruction on these machines does not enable interrupts itself, so just as you note, you need to both enable interrupts and return from interrupt atomically, or else you get into a mess.
>>
>> The PDP-11 RETI instruction changes the processor priority as a part of the instruction. You do not use SPL (whatever) before a RETI.
>> Thus, it does not make sense that SPL on a PDP-11 would have this property. If it indeed does disable recognizing interrupts after an SPL, it sounds more like a bug. I guess I'll go and read the microcode to see if that mentions any of this, since I'm sort of into reading it anyway, as I was trying to debug a problem on an 11/70 only a couple of months ago…
>
> The PDP-11 has no RETI instruction; it has RTT (ReTurn from Trap) and RTI (ReTurn from Interrupt) instructions, but you already knew that, right? In some cases it's a problem that it's not possible to determine which is appropriate or correct to use. According to the PDP11 Architecture Handbook the difference between them is in what happens when the RTx instruction loads a PSW that has the T bit set and when it forces a Trace trap: RTI - immediate trap; RTT traps after the instruction following the RTT.
Oops. Yes, it's RTI and RTT. But the names are beside the point, and so
is the difference between these two. The point is that the
instruction(s) do set the processor priority level, and you do not use
SPL in combination with them, so it makes no sense to have SPL inhibit
interrupts for any length of time at all. (And yes, I did know that.)
Oh, and as far as RTT vs. RTI goes, no, it's not hard to know which one
you want. You want RTT for your debugger and RTI for everything else.
The difference is about what happens after the return. With RTT, the
T-bit trap will not trap until another instruction has executed. With
RTI, you would never manage to step to the next instruction with the
T-bit, since every time you returned, you'd get another trap.
But I bet you knew that as well... ;-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Seems like we are all contributing old card stories.. here is one of my
favorites from my past.
At CMU, all systems programmers working for the computer center had to put
shifts in as the operator behind the "glass door" doing the grunt stuff
(but we got all the computer time we wanted, an office and terminal - so it
was a good deal in those days). The student courses, in particular the
engineering intro to FORTRAN (WATFIV), used the TSS based 360/67 which we
programmed and ran; but they used the batch system on cards not timesharing
with the ASR 33's which was quite expensive. There was a traditional glass
room with the computer, its tapes and other gear, and a counter with a
"call human for help" button where "paying users'" could come ask questions
of the operator on duty. On the older side of the counter was the flock
of keypunch machines and a high speed card read. The printers were in
secure areas, so we would bring out student prints from their batch jobs as
needed and put them on the binds near the counter ( as was the pretty much
the standard of those days).
By rule, the system's programmers were also not supposed to help the
students with their assignments. They were supposed to get help from their
TA's and Profs, *etc*. who had regular hours. But often folks were up
very late working on assignments and no one from the course was around to
ask questions. And as the operator, if you had a minute, it was not
uncommon to have a little empathy for your brothers and sisters in arms on
the other side of the counter. As long as this was not abused, the TA's,
Profs as well as our bosses in the computer center tolerated the process.
But if we were obviously busy, we really did not have the time to do much
to help them.
One night I was working the overnight operator shift with another coworker
who will be left nameless (but I will say that he's now an SVP at a large
computer firm these days). It was a very busy night for us for some
reason, probably something like printing bills, or checks for the school or
some such; along with a full backup, so we had our hands full between
mounting tapes, changing types of paper and print heads *etc.*, security
procedures with the results and the like. That night, there was also a big
assignment due shortly for one of the classes.
Sure enough the buzzer started ringing and it was a frustrated (and as I
remember somewhat clueless) student that needed help with his assignment.
He was claiming that his deck was being rejected/was not working.
Note: "turn around" from depositing a card deck to receipt of the printout was
probably on the order of 10-15 minutes, and sometimes longer. One of us
came out, showed him something like a missing "BATCH WATFIV" command card
or some such and reminded them of the official policy and probably pointed
to the sign, as we were very busy with our job. We would politely tell
them to try to find a TA or someone in the class that could help him.
The student went away, and we went back to work. A few minutes later the
buzzer went off again, same student, and the cycle repeated with some other
trivial issue. After the 4th or 5th time it was becoming a real issue
because we were really quite busy. At that point, my coworker came out and
said, here bring me your deck. He looked at them and quickly said -- "The
problem is you used the wrong color cards."😈
The student was completely dejected and walked away. I looked up and
said, man that was cruel. But it did buy us time to finish our work.
Never found out if he re-keypunched his cards.
Clem
Steve Johnson:
This reminds me that someone at BTL threw together a "TSO Shell". It was
a wrapper around /bin/sh that slept for 10 seconds before executing each
line...
=====
And after each command exited. Discarding anything typed ahead
during the sleep, of course.
And printed all-upper-case IEFCRAPNONSENSE messages even when a
command exited successfully.
I think I still have a copy somewhere. It dates from the 6/e era,
so it would need a lot of work to compile and run on a modern system.
Occasionally I think of converting it to ISO and POSIX even though
that seems contradictory somehow.
Norman Wilson
Toronto ON
Not quite a self-reproducing program, but I did take down one of the UCSD servers one day.
I was writing a shell script to do some complex and long-running task. This was in the early days of the shell supporting functions. The script had a large number of functions, and I named one of them the same as the shell script itself.
I set it in motion and logged out, as I knew it would take several hours to finish the work.
The next day I logged in to find that the machine had had a load spike as the shell script recursively started itself when it got to the function call that had the same name as the shell script. The admin kindly sent me a 'top' output showing the load at several hundred and all the jobs having my name and being my shell script.
Under this he wrote: “Never do this again.”
I haven’t.
David
> Btw. It does not emulate the smell of small machine oil
> or the mess of ppt punch chaff on the floor
Yes. I saw Clem's mail just as I was about to recommend
placing a small dish of machine oil beside the simulator.
Alas, it seems I missed out on the chad experience. Data was
regularly imported to the PDP-7 by ppt, but rarely exported.
Night-owl Ken must have taken some ppt backups, evidence of
which the midnight janitors whisked away.
Doug
It's a bit off-topic, but what were non-Unix filesystems like around 1969-1970?
The PDP-7 filesystem has i-nodes (file metadata) and filenames separate
from the i-nodes. This allows hard links and thus a non-tree structured
filesystem.
This has always struck me to be one of the most important features of
the Unix filesystem: names separated from the rest of the file metadata,
and arbitrary hard links so that there is no preferred filename.
Were these features in other contemporaneous filesystems?
As a side note, the PDP-7 kernel knows about the top-level directory ("dd")
but it is agnostic to the concept of "." and "..". What that means is
that you can build a filesystem with "." and ".." links and the kernel
will deal with them as per all other links. But you can also build a
filesystem without "." or ".." and the kernel doesn't care.
There's not enough evidence (source code, papers, anecdotes) to confirm
or deny the presence of "." in the PDP-7 code that Norman scanned for us.
".." does seem to exist as it is mentioned in one source file, ds.s.
It's an intriguing mystery.
Cheers, Warren
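A quick demonstration (mine, using today's POSIX calls rather than PDP-7 code) of the "no preferred filename" property Warren describes: after link(), both names carry the same i-node number and a link count of 2, and removing either name leaves the file reachable through the other.

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat a, b;
        int fd = open("original", O_CREAT | O_WRONLY, 0644);

        if (fd < 0) { perror("open"); return 1; }
        close(fd);

        if (link("original", "alias") < 0) { perror("link"); return 1; }

        stat("original", &a);
        stat("alias", &b);

        /* Same i-node, link count 2: two equal names for one file. */
        printf("original: ino %lu nlink %lu\n",
               (unsigned long)a.st_ino, (unsigned long)a.st_nlink);
        printf("alias:    ino %lu nlink %lu\n",
               (unsigned long)b.st_ino, (unsigned long)b.st_nlink);

        unlink("original");          /* the file lives on under "alias" */
        unlink("alias");
        return 0;
    }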
This came up today at work; what's the origin of the open file table? The
suggestion was made that, instead, a ref-counted data structure could be
allocated at open() time to serve the same purpose, and that a table of
open files was superfluous. My guess was that this made it (relatively)
easy to look up what files referred to a particular device?
> Those file structures are collected into a single, global table. The
> question is why this latter table? One could rather imagine an
> implementation where open() allocates (e.g., via malloc())
Depending on how malloc() is implemented, fragmentation can be
serious in a program that runs forever with as many frees as
allocs. Separate allocations for each item can also be costly in time.
One malloc() strategy is to divide the arena into regions,
each of which caters for blocks of a single size, so
fragmentation doesn't occur. In essence that's how the
system tables work, except that these tables have
hard limits. Now, if the tables could be reallocated
as needed ...
Another problem with per-item allocations is performance
monitoring and debugging. It's hard to see what's
going on in a well mixed dynamic storage heap.
Doug
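A minimal sketch (mine, not any actual kernel's code) of the fixed-size-table idea Doug describes: every open-file object lives in one statically sized array, so "allocation" is just finding a slot whose reference count is zero. There is no heap fragmentation, and the whole set of open files sits in one place where it can be scanned, for example to find every file open on a given device.

    #include <stddef.h>

    #define NFILE 100                     /* the classic hard limit */

    struct file {
        int   f_count;                    /* reference count; 0 means free */
        int   f_flag;                     /* open mode flags */
        long  f_offset;                   /* current read/write offset */
        void *f_inode;                    /* whatever object the file refers to */
    };

    static struct file file[NFILE];       /* the global open-file table */

    /* Allocate a slot by scanning for a free entry; no malloc() involved. */
    struct file *falloc(void)
    {
        for (size_t i = 0; i < NFILE; i++) {
            if (file[i].f_count == 0) {
                file[i].f_count = 1;
                return &file[i];
            }
        }
        return NULL;                      /* table full: "file table overflow" */
    }

    /* Drop a reference; the slot becomes free when the count reaches zero. */
    void frelease(struct file *fp)
    {
        if (fp != NULL && fp->f_count > 0)
            fp->f_count--;
    }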
https://newsroom.intel.com/news-releases/andrew-s-grove-1936-2016/
I know some of the processor people at intel and I was looking around,
found this, interesting read if you are into the history:
http://people.cs.clemson.edu/~mark/330/chronques.html
For those that don't know, Colwell did the P6 pipeline, I think under
Grove or right after Grove got cancer. There was P5, then P6, then
they did a different pipeline that they called Pentium 4 (made no sense
to me but their names never do). The Pentium 4 was the one where
they speculated on what the answer would be for some instructions.
As in you could do a load and they'd guess that it was zero or not.
They were going for great clock rate, and they got it, but they also
got instructions that would take 2000 cycles to get through.
That pipeline got booted and so far as I know, the Colwell P6 pipeline
lives on in every Intel processor after the Pentium 4.
Getting back to Andy, I loved his time as CEO, I think he did a lot of
good for that company. Here's to him!
Sorry to continue the detour from disk file systems to card trays, but this
> Walking down the corridors of Comp Sci, a student in front of me
> dropped his entire deck of approx 2000 cards, all over the floor...
> I have no idea whether he got them sorted, but I sure as hell used
> rubber bands after that!
reminded me that Vic Vyssotsky liked to say of his BLODI (block diagram)
language for simulating sample-data systems that it was the only card-safe
language. You could toss a program deck down the stairs, pick it up at the
bottom, submit it to the compiler, and it would work. That was 10 years
before the filing of the famous "natural order" patent on spreadsheets,
which ordered execution the same way.
Doug