> From: Dave Horsfall
> What timezone did you say you were in again?
Ah; didn't realize you wanted the exact time, not just a date! :-)
Well, according to this:
http://www.rfc-editor.org/rfc/museum/ddn-news/ddn-news.n19.1
it was _planned_ to happen at "00:01 (EST) 1 Jan 1983". Whether it did happen
at that moment, or if in reality there was some variance, I don't know (IIRC,
I was off sailing in the Caribbean when it actually happened :-).
Noel
It was sort of a soft changeover. All they did on January 1, 1983, was
disable link 0 on the IMPs. This essentially blocked NCP from working
anymore.
You could run IP on the Arpanet for years before that (and many of us did).
In fact, in the weeks before, we were getting ready for the change and had
our TCP/IP version of the system up.
It crashed.
We rebooted it and set about debugging it.
It crashed again. Immediately my phone rang. It was Louis Mamakos, then
of the University of Maryland. He had tried to ping our system. I
called out the information to Mike Muuss who was across the desk from me.
We set out to find the bug. It turned out that the ICMP echo handling in the
4.1c (I believe this was the version) that we'd cribbed our PDP-11/70 TCP
from had a bug where it used the incoming message to make the outgoing one,
AND it freed it, eventually causing a double free and crash. It was at
this point we realized that BSD didn't have a ping client program. Mike
set out to write one. It oddly became the one piece of software he was
most known for over the years.
The previous changeover was to long leaders on the Arpanet (Jan 1, 1981?).
This changed the IMP addressing space from eight bits (6 bits for imp
number, 2 for a host on imp) to 16 bits for imp number and 8 for a host on
imp. While long leaders were available on a port basis for years earlier,
they didn't force the change until this date. The one casualty we had was a
PDP-11/40 playing terminal server, running software out of the University of
Illinois called "ANTS." Amusingly, the normal DEC purple and red logo
panels at the top of the rack were silkscreened with orange ANTS logos (with
little ants crawling on it). The ANTS software wasn't maintained anymore and
was stuck in short-leader mode. We used the occasion to replace that machine with a
UNIX (we grabbed one of our ubiquitous PDP-11/34's that were kicking
around). I kept the racks and the ANTS logo for the BRL Gateways. I
turned in the non-MM PDP-11/40. A year later I got a call from the
military supply people.
ME: Yes?
GUY: I need you to identify $65,000 of computer equipment you turned in.
ME: What $65,000 machine?
GUY: One PDP-11/40 and accessories.
ME: That computer is 12 years old... and I kept all the racks and
peripherals. Do you know how much a 12-year-old computer is worth?
The other major cutover was in October of 1984. This was the
ARPANET-MILNET split. I had begged the powers that be NOT to do the
changeover on Jan 1st, as it always meant I had to be working the days
leading up to it. (Oct 1 was the beginning of the "fiscal" year.) Our
site had problems. I made a quick call to Mike Brescia of BBN. This was
pre-EGP, and things were primarily statically routed in those days. He'd
forgotten that we had routers at BRL now on the MILNET (all the others were
on the ARPANET) and the ARPANET-MILNET "mail bridge" routers had been
configured for gateways on the MILNET side.
> From: Dave Horsfall <dave(a)horsfall.org>
> the ARPAnet got converted from NCP to TCP/IP in 1983; ... have a more
> precise date?
No, it was Jan 1.
It wasn't so much a 'conversion', as that was the date on which, except for a
few sites which got special _temporary_ dispensation to finish their
preparations, support for NCP was turned off for most ARPANET hosts. Prior to
that date, most hosts on the ARPANET had been running both, and after that,
only TCP worked. (Non-ARPANET hosts on the then-nascent Internet had always
only been using TCP before that date, of course.)
Noel
Some interesting historical stuff today...
We lost Rear Admiral Dr. Grace Hopper USN (Retd) in 1992; there's not much
more that can be said about her, but I will mention that she received
(posthumously) the Presidential Medal of Freedom in 2016.
As every Unix geek knows, today is the anniversary of The Epoch[tm] back
in 1970, and at least one nosey web site thinks that that is my birthdate
too...
And according to my notes, the ARPAnet got converted from NCP to TCP/IP in
1983; do any greybeards here have a more precise date?
--
Dave Horsfall DTM (VK2KFU) "Those who don't understand security will suffer."
On Fri, Dec 29, 2017 at 04:04:01AM -0700, Kevin Bowling wrote:
> Alpha generally maintained integer/ALU and clockspeed leadership for
> most of the '90s
> http://www.cs.columbia.edu/~sedwards/classes/2012/3827-spring/advanced-arch…
Wow, that first graph is the most misleading graph on CPU performance
I've ever seen. Ever.
So from 1993 to 2000 the only CPUs released were Alphas?
That era was when I was busy measuring performance across cpus and
operating systems and I don't ever remember any processor being a
factor of 2 better than its peers. And maybe I missed it, I only
owned a couple of alpha systems, but I never saw an Alpha that was
a game changer. Alpha was cool but it was too little, too late to
save DEC.
In that time period, even more so now, you had to be 2x better to get
a customer to switch to your platform.
2x cheaper
2x faster
2x more reliable
Do one of those and people would consider switching platforms. Less than
that was really tough and it was always, so far as I remember, less than
that. SMP might be an exception but we went through that whole learning
process of "well, we advertised symmetric but when we said that what we
really meant was you should lock your processes down to a processor
because caches turn out to matter". So in theory, N processors were N
times faster than 1 but in practice not so much.
I was very involved in performance work and cpu architecture and I'd love
to be able to claim that we had a 2x faster CPU than someone else but we
didn't, not at Sun and not at SGI.
It sort of makes sense that there weren't huge gaps; everyone was more or
less using the same-sized transistors, the same DRAM, the same caches.
There were variations, Intel had/has the biggest and most advanced
foundries but IBM would push the state of the art, etc. But I don't
remember anyone ever coming out with a chip that was 2x faster. I
suspect you can find one where chip A is introduced at the end of chip
B's lifespan and A == 2*B, but wait a few months and B gets replaced
and A == .9*C.
Can anyone point to a chip introduction that was 2x faster than its current peers?
Am I just not remembering one or is that not a thing?
--lm
A bit off the PDPs, but a minor correction on the mail below.
The commercial version of UNIX on Alpha was maybe first called
'Digital Unix OSF/1', but quickly changed to Digital Unix, at least with
v3 and v4.0 (A - G). From there we had a 'break', which was only in part
due to the takeover by Compaq, and then we had Tru64 UNIX V5.1A and V5.1B.
V5.1B saw updates till B-6.
As for the Digital C compiler, I'm still using DTCCMPLR650, installed as
'Compaq C Version 6.5 for Compaq Tru64 UNIX Systems'.
When I get some old source (some even developed on SCO UNIX 3.2V4.2) I
like to run it through all the compilers/OSes I have handy. With the
Compaq C compiler and HP-UX ANSI C I mostly get pages of warnings and a
few errors. By the time I've 'corrected' what I think is relevant, some
nasty core dumps tend to disappear :-)
Compile for a better 2018,
uncle rubl
>Date: Fri, 29 Dec 2017 21:30:11 -0500.
>From: Paul Winalski <paul.winalski(a)gmail.com>
>To: Ron Natalie <ron(a)ronnatalie.com>
>Cc: TUHS main list <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Why did PDPs become so popular?
>Message-ID: <CABH=_VRwNXUctFPav5rHX83wfUS0twMQuBhinRZ6QEY1cB3TNQ(a)mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>On 12/29/17, Ron Natalie <ron(a)ronnatalie.com> wrote:
> The Alpha was hot
> stuff for about nine months. Ran OSF/1 formerly DigitalUnix formerly
> OSF/1.
>Digital UNIX for the VAX was indeed derived from OSF/1. The port to
>Alpha was called Tru64 UNIX.
>Tru64 UNIX was initially a pure 64-bit system, with no provision for
>building or running 32-bit program images. This turned out to be a
>mistake. DEC found out that a lot of ISVs had code that implicitly
>"knew" that sizeof() a pointer was the same as sizeof(int) was the
>same as 4 bytes. Tru64 was forced to implement a 32-bit compatibility
>mode.
>There was also a problem with the C compiler initially developed at
>DECwest in Seattle. It supported ONLY ANSI standard C and issued
>fatal errors for violations/extensions of the standard. We (DEC
>mainstream compiler group) called it the Rush Limbaugh
>compiler--extremely conservative, and you can't argue with it.
Warning: off-topic info
> I was told once that McIlroy and Morris invented macro instructions
> for assembly language.  And certainly they wrote the definitive
> paper on macros, with, as I recall, something like 10 decisions you
> needed to make about a macro processor and you could generate 1024
> different macro systems that way. I wonder if that ever got
> published
The suggestion that I invented macros can also be found on-line, but
it's not true. I learned of macros in 1957 or before. GE had added
a basic macro capability to an assembler; I don't know whether they
invented the idea or borrowed it. In 1959 George Mealy suggested
that Bell Labs install a similar facility in SAP (SHARE assembly
program). Doug Eastwood wrote the definition part and I handled
expansions.
Vic Vyssotsky later asked whether a macro could define a macro--a
neat idea that was obviously useful. When we went to demonstrate
it, we were chagrined that it didn't work: definition happening
during expansion resulted in colliding calls to a low-level
string-handling subroutine that was not re-entrant. Once that
was fixed, Steve Johnson (as a high-school intern!) observed
that it allowed the macro table to serve as an addressable
memory, for which the store and load operations were MOP
(macro define) and MAC (macro call).
Probably before Steve's bright insight, Eastwood had folded
the separate macro table into the opcode table, and I had
implemented conditional assembly, iteration over a list, and
automatic generation of symbols. These features yielded
a clean Turing-complete language-extension mechanism. I
believe we were the first to achieve this power via macros.
However, with GPM, Christopher Strachey showed you don't need
conditionals; the ability to generate new macro names is
enough. It's conceivable, but unlikely, that this trick
could be done with earlier macro mechanisms.
As for publication, our macroprocessor inspired my CACM
paper, "Macro instruction extensions of compiler languages",
but the note that Steve remembers circulated only in the
Labs. A nontrivial example of our original macros in
action--a Turing machine simulator that ran completely within
the assembler--was reproduced in Peter Wegner's programming
book, so confusingly described that I am glad not to have
been acknowledged as the original author.
Doug
> From: Paul Winalski
> Lack of marketing skill eventually caught up to DEC by the late 1980s
> and was a principal reason for its downfall.
I got the impression that fundamentally, DEC's engineering 'corporate culture'
was the biggest problem; it wasn't suited to the commodity world of computing,
and it couldn't change fast enough. (DEC had always provided very well built
gear, lots of engineering documentation, etc, etc.)
I dunno, maybe my perception is wrong? There's a book about DEC's failure:
Edgar H. Schein, "DEC is Dead, Long Live DEC", Berrett-Koehler, San
Francisco, 2003
which probably has some good thoughts. Also:
Clayton M. Christensen, "The Innovator's Dilemma: When New Technologies
Cause Great Firms to Fail", Harvard Business School, Boston, 1997
briefly mentions DEC.
Noel
'FUNCTION: To save the programmer effort by automatically incorporating library subroutines into the source program.' In COBOL, whole 'functions' (subroutines) and even code snippets are 'copied' into the main source file by the COPY statement. That's different from preprocessor macros, -definitions, -literals and, since ANSI C, function prototypes.