I'm a bit puzzled, but then I only ever worked with some version of
Ultrix and an AT&T flavour of UNIX at Philips, SCO 3.2V4.2 (OpenServer
3ish), DEC Digital UNIX, Tru64, and HP-UX 11.23/11.31, and only ever used
"mkdir -p".
Some differences in the various versions are easily solved in scripts,
as shown below. Not the best of examples, but easy. Getting it to
work on a Linux flavour wouldn't be too difficult :-) (see the sketch
after the case statement).
# uname -s gives the OS name ("OSF1", "HP-UX", ...); uname -n would give
# the hostname instead, which is not what the case below expects.
OS_TYPE=`uname -s`

case "${OS_TYPE}" in
"OSF1")
        PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
        TZ="THA-7"
        ;;
"HP-UX")
        PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/contrib/bin:/xyz/field/scripts:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
        TZ="TST-7"
        ;;
*)
        echo "${OS_TYPE} unknown, exit"
        exit 1
        ;;
esac
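For example, a Linux branch could go in front of the catch-all "*)" case.
This is only a sketch: the /xyz directories are the same placeholders used
above, and the TZ value assumes the same Thailand timezone, spelled the way
Linux zoneinfo expects.

"Linux")
        # Same core directories; reuse the application paths from above.
        PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
        TZ="Asia/Bangkok"
        ;;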
> From: Doug McIlroy
> Perhaps the real question is why did IBM break so completely to hex for
> the 360?
Probably because the 360 had 8-bit bytes?
Unless there's something like the PDP-11 instruction format which makes octal
optimal, octal is a pain when working with 8-bit bytes; any time you're looking
at the higher bytes in a word, unless you're going through software which
will 'interpret' the bytes for you, it's a PITA.
The 360 instruction coding doesn't really benefit from octal (well,
instructions are in 4 classes, based on the high two bits of the first byte,
but past that, hex works better); opcodes are 8 or 16 bits, and register
numbers are 4 bits.
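A small illustration of the point (my example, not from the original
message): print the same 16-bit word and its bytes in both bases, and the
hex digits line up with the bytes while the octal digits do not.

#include <stdio.h>

int main(void)
{
    unsigned int word = 0xABCD;           /* a 16-bit value  */
    unsigned int hi = (word >> 8) & 0xFF; /* high byte: 0xAB */
    unsigned int lo = word & 0xFF;        /* low byte:  0xCD */

    /* Each octal digit covers 3 bits, so the byte boundary falls in the
     * middle of a digit: 0xABCD is octal 125715, but its high byte 0xAB
     * is octal 253, which appears nowhere in 125715.  In hex the bytes
     * are simply the digit pairs AB and CD. */
    printf("word:      octal %06o  hex %04X\n", word, word);
    printf("high byte: octal %03o  hex %02X\n", hi, hi);
    printf("low byte:  octal %03o  hex %02X\n", lo, lo);
    return 0;
}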
As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370
Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over
whether to use 6 or 8, and they finally went with 8 because i) statistics
showed that more customer data was numbers, rather than text, and storing
decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit
byte), and ii) they were looking forward to handling text with upper- and
lower-case.
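As a small aside on the packed-decimal point (my sketch, not from the
book): two BCD digits fit exactly into one 8-bit byte, four bits apiece.

#include <stdio.h>

int main(void)
{
    unsigned int tens = 4, units = 2;          /* the number 42          */
    unsigned int packed = (tens << 4) | units; /* two digits in one byte */

    printf("packed BCD for 42: 0x%02X\n", packed);           /* prints 0x42 */
    printf("unpacked: %u%u\n", packed >> 4, packed & 0x0Fu); /* prints 42   */
    return 0;
}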
Noel
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is
> a multiple of 3. But PDP-11 is 16b, multiple of 4.
Octal predates the 6-bit byte. Dumps on Whirlwind II, a 16-bit machine,
were issued in octal. And to help with arithmetic, the computer lab
had an octal Friden (IIRC) desk calculator. One important feature of
octal is you don't have to learn new numerals and their addition
and multiplication tables, 2.5x the size of decimal tables.
Established early, octal was reinforced by a decade of 6-bit bytes.
Perhaps the real question is why did IBM break so completely to hex
for the 360? (Absent actual knowledge, I'd hazard a guess that it
was eased in on the 7030.)
Doug
> From: Joerg Schilling
> Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz
> channel?
Google is your friend:
https://en.wikipedia.org/wiki/T-carrier
https://en.wikipedia.org/wiki/Digital_Signal_1
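(Briefly, and from memory: a DS1/T1 frame is 24 eight-bit channel samples
plus one framing bit, sent 8000 times a second -- 24 x 64 kbit/s plus
8 kbit/s of framing, 1.544 Mbit/s in all -- so it is a digital carrier,
with each 64 kbit/s channel able to carry one digitized 3.1 kHz voice
circuit.)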
> How was the 64 ??? Kbit/s interface to the first IMPs implemented?
> Wasn't it AT&T that provided the lines for the first IMPs?
Yes and no. Some details are given in "The interface message processor for the
ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not
much. More detail of the business arrangement is contained in "A History of
the ARPANET: The First Decade" (BBN Report No. 4799).
Details of the interface, and the IMP side, are given in the BBN proposal,
"Interface Message Processor for the ARPA Computer Network" (BBN Proposal No.
IMP P69-IST-5): in each direction there is a digital data line, and a clock
line. It's synchronous (i.e. a constant stream of SYN characters is sent
across the interface when no 'frame' is being sent).
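As a rough illustration of that kind of character-synchronous operation
(my sketch of the general idea only, not the actual IMP/modem wire
format): the transmitter clocks out one character per tick and falls back
to SYN characters whenever it has no frame data, so the line never goes
idle.

#include <stdio.h>
#include <string.h>

#define SYN 0x16   /* ASCII SYN, used here as the idle-fill character */

/* Emit one character per "clock tick": frame bytes while a frame is
 * pending, SYN fill otherwise. */
static void transmit(const char *frame, int ticks)
{
    int len = frame ? (int)strlen(frame) : 0;
    for (int t = 0; t < ticks; t++) {
        int ch = (t < len) ? (unsigned char)frame[t] : SYN;
        printf("tick %2d: sent 0x%02X%s\n", t, (unsigned)ch,
               (t < len) ? "" : "  (SYN fill)");
    }
}

int main(void)
{
    transmit("HELLO", 10);   /* 5 data characters, then SYN fill */
    return 0;
}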
The 50 kbit/s modems were, IIRC, provided by the Bell System; the diagram in the
paper above seems to indicate that they were not considered part of the IMP
system. The modems at MIT were contained in a large rack, the same size as
the IMP, which stood next to it.
I wasn't able to find anything about anything past the IMP/modem interface.
Perhaps some AT&T publications of that period might detail how the modem,
etc, worked.
Noel
On the subject of the PDP-10, I recall seeing people at a DECUS
meeting in the early 1980s wearing T-shirts that proclaimed
I don't care what they say, 36 bits are here to stay!
I also recall a funny advertising video spoof at that meeting that
ended with the line
At DIGITAL, we're building yesterday's tomorrow, today.
That meeting was about the time of the cancellation of DEC's Jupiter
project, which was planned to produce a substantially more powerful
follow-on to the KL-10 processor model of the PDP-10 (we had two such
machines at the University of Utah); the cancellation disappointed most
of DEC's PDP-10 customers.
Some of the Jupiter technology was transferred to later VAX models,
but DEC never produced anything faster than the KL-10 in the 36-bit
line. However, with microcomputers entering the market, and early
workstations from Apollo, LMI, Sun, and others, the economics of
computing changed dramatically, and departmental mainframes ceased to
be cost effective.
Besides our mainframe DEC-20/60 TOPS-20 system in the College of
Science, we also ran Wollongong BSD Unix on a VAX 750, and DEC VMS on
VAX 780 and 8600 models. In 1987, we bought our first dozen Sun
workstations (and for far less than the cost of a DEC-20/60).
After 12 good years of service (and a forklift upgrade from a 20/40 to
a 20/60), our KL-10 was retired on 31-Oct-1990, and the VAX 8600 in
July 1991. Our productivity increased significantly in the Unix
world.
I wrote about memories, history, and the impact of the PDP-10 in two
keynote addresses at TUG meetings; the articles and slides are available at
http://www.math.utah.edu/~beebe/talks/2003/tug2003/
http://www.math.utah.edu/~beebe/talks/2005/pt2005/
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I was well aware of the comment in V6, but had no idea what it
referred to. When Dennis and I were porting what became V7 to the
Interdata 8/32, we spent about 10 frustrating days dealing with savu
and retu. Dennis did his most productive work between 10pm and 4am,
while I kept more normal hours. We would pore over the crash dumps
(in hex, then a new thing for us--PDP-11 was all octal, all the
time). I'd tinker with the compiler, he'd tinker with the code and
we would get it to limp, flap its wings, and then crash. The problem
was that the Interdata had many more registers than the PDP-11, so the
compiler saved only the register variables across a call, whereas the
PDP-11 compiler saved all the registers. This was just fine inside a process,
but between processes it was deadly. After we had tried everything
we could think of, Dennis concluded that the fundamental architecture
was broken. In a couple of days, he came up with the scheme that
ended up in V7.
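A minimal sketch of the hazard (my illustration in modern C, not the V6 or
V7 source): swtch() saved a small, fixed context with savu() and resumed
another process with retu(), trusting the calling convention to bring every
other register back intact.

/* Stubs only: the real savu()/retu() were PDP-11 assembly that saved and
 * restored roughly the stack pointer and frame pointer, nothing more.   */
struct savearea { int sp, fp; };                 /* hypothetical layout   */

static void savu(struct savearea *s) { (void)s; /* real one saved sp/fp   */ }
static void retu(struct savearea *s) { (void)s; /* real one switched stacks */ }

static struct savearea cur, nxt;
static int pick_next(void) { return 1; }         /* stand-in for a scheduler */

static void swtch_sketch(void)
{
    register int slot = pick_next();  /* a live value in a machine register */

    savu(&cur);                       /* save our minimal context           */
    retu(&nxt);                       /* run some other process for a while */

    /* In the real kernel, execution resumed here when another process later
     * switched back to this context.  On the PDP-11 that was safe: the C
     * calling convention saved and restored every register around the calls,
     * so 'slot' still held its value.  The Interdata compiler preserved only
     * the registers it had used for register variables, so live registers
     * could come back holding another process's values -- harmless within
     * one process, deadly across a context switch, hence the new scheme
     * Dennis devised for V7. */
    (void)slot;
}

int main(void)
{
    swtch_sketch();
    return 0;
}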
It was only several years later when I saw a T-shirt with savu and
retu on it along with the famous comment that I realized what it had
referred to, and enjoyed the irony that we hadn't understood it
either...
Steve
Tim Bradshaw <tfb(a)tfeb.org> writes on 17 Jan 2017 13:09 +0000
>> I think we've all lived in a wonderful time where it seemed like
>> various exponential processes could continue for ever: they can't.
For an update on the exponential scaling (Moore's Law et al), see
this interesting new paper:
Peter J. Denning and Ted G. Lewis
Exponential laws of computing growth
Comm. ACM 60(1) 54--65 January 2017
https://doi.org/10.1145/2976758
-- Nelson H. F. Beebe, University of Utah
> One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs.
The short answer is that Bell Labs was not on Arpanet. In the early
80s the interim CSNET gave us a dial-up window into Arpanet, which
primarily served as a conduit for email. When real internet connection
became possible, network code from Berkeley was folded into the
research kernel. (I am tempted to say "engulfed the research kernel",
for this was a huge addition.)
The highest levels of AT&T were happy to carry digital data, but
did not see digital as significant business. Even though digital T1
was the backbone of long-distance transmission, it was IBM, not
AT&T, that offered direct digital interfaces to T1 in the 60s.
When Arpanet came along MCI was far more eager to carry its data
than AT&T was. It was all very well for Sandy Fraser to build
experimental data networks in the lab, but this was seen as a
niche market. AT&T devoted more effort to specialized applications
like hotel PBXs than to digital communication per se.
Doug
I'm having a lot of fun with a virtual 11/94 and 2.11. What a lot of
excellent engineering!
It seems like an obvious project would be to adapt a newer pcc with ANSI
C support of some sort. Has this already been done? I'll take a look
if not.
Thanks,
Andy Valencia
p.s. The "less" in /usr/local doesn't seem to handle stty-based TTY
geometry. I re-ported "less2" from comp.sources.unix and added support
for it. Ping me if the mildly edited sources are of interest.
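For anyone curious, a minimal sketch of one way to pick up stty-set
geometry (an assumption on my part about what the less2 change amounts to;
TIOCGWINSZ is the BSD-style ioctl that reports what "stty rows/columns"
configured):

#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    struct winsize ws;

    /* TIOCGWINSZ reports the rows/columns set with "stty rows N columns M";
     * fall back to 24x80 when the driver reports zero or the ioctl fails. */
    if (ioctl(0, TIOCGWINSZ, &ws) == 0 && ws.ws_row > 0 && ws.ws_col > 0)
        printf("%d rows, %d columns\n", ws.ws_row, ws.ws_col);
    else
        printf("no window size available, assuming 24x80\n");
    return 0;
}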
> From: Larry McVoy
> It is pretty stunning that the company that had the largest network in
> the world (the phone system of course) didn't get packet switching at
> all.
Actually, it's quite logical - in fact, the lack of 'getting it' about
packets follows directly from their large existing circuit-switched
network.
This dates back to Baran (see his oral history:
https://conservancy.umn.edu/handle/11299/107101
pg. 19 and on), but it was still detectable almost two decades later.
For a variety of all-too-human reasons (of the flavour of 'we're the
networking experts, what do you know'; 'we know all about circuit networks,
this packet stuff is too different'; 'we don't want to obsolete our giant
investment', etc, etc), along with genuine concerns about some real issues of
packet switching (e.g. the congestion stuff, and how well the system handled
load and overload), packet switching just was a bridge too far from what they
already had.
Think of IBM and timesharing versus batch, or mainframes versus small computers.
Noel