Just stumbled over another early TCP/IP for Unix:
http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf
It would seem to be a design similar to Holmgren's (NCP-based) Network Unix: basic packet processing in the kernel, connection management in a user-space daemon. In time and in concept it would sit in between the Wingfield ('79, all user space) and the Gurwitz ('81, all kernel) implementations.
I think it was distributed initially as a mod versus V7 and later as a mod versus 2BSD.
Would anybody here know of surviving source of this implementation?
Thanks,
Paul
The recent discussion of Solaris made me think - what was the first Unix to
have centralized package management as part of the OS? I know that IRIX
had it, I think from the beginning (possibly even for the GL2 releases), but
I imagine there was probably something before that.
-Henry
Guess it is the beginning of the end of Solaris and the SPARC CPU:
'Rumors have been circulating since late last year that Oracle was
planning to kill development of the Solaris operating system, with major
layoffs coming to the operating system's development team. Others
speculated that future versions of the Unix platform Oracle acquired
with Sun Microsystems would be designed for the cloud and built for the
Intel platform only and that the SPARC processor line would meet its
demise. The good news, based on a recently released Oracle roadmap for
the SPARC platform, is that both Solaris and SPARC appear to have a
future.
The bad news is that the next major version of Solaris—Solaris 12—has
apparently been canceled, as it has disappeared from the roadmap.
Instead, it's been replaced with "Solaris 11.next"—and that version is
apparently the only update planned for the operating system through
2021.
With its on-premises software and hardware sales in decline, Oracle has
been undergoing a major reorganization over the past two years as it
attempts to pivot toward the cloud. Those changes led to a major speed
bump in the development cycle for Java Enterprise Edition, a slowdown
significant enough that it spurred something of a Java community revolt.
Oracle later announced a new roadmap for Java EE that recalibrated
expectations, focusing on cloud services features for the next version
of the software platform. '
http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confir…
--
Kay Parker
kayparker(a)mailite.com
--
http://www.fastmail.com - The way an email service should be
Now that we have quite a few ex-Bell Labs staff on the list, and several
other luminaries, and with the Unix 50th anniversary not far off, perhaps
it is time to form a working group to help lobby to get 8th, 9th and 10th
Editions released.
I'm after volunteers to help. People who can actually move this forward.
Let me know if and how you can help out.
Thanks, Warren
I'm a bit puzzled, but then I only ever worked with some version of
Ultrix and an AT&T flavour of UNIX at Philips, SCO 3.2V4.2 (OpenServer
3-ish), DEC Digital UNIX, Tru64, HP-UX 11.23/11.31, and only ever used
"mkdir -p".
Some differences between the various versions are easily handled in
scripts, as shown below. Not the best of examples, but easy. Getting it
to work on a Linux flavour wouldn't be too difficult :-)
# Select PATH and TZ by operating system.  Note: `uname -n` prints
# the nodename (hostname); `uname -s` prints the OS name ("OSF1",
# "HP-UX", ...), which is what these case patterns expect.
OS_TYPE=`uname -s`
case "${OS_TYPE}" in
"OSF1")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="THA-7"
    ;;
"HP-UX")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/contrib/bin:/xyz/field/scripts:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="TST-7"
    ;;
*)
    echo "${OS_TYPE} unknown, exit"
    exit 1
    ;;
esac
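As for "mkdir -p" itself: on a system whose mkdir lacks the -p flag, it
isn't hard to script around either. A minimal sketch (my own
illustration, untested on anything that old):

mkpath() {
    # Poor man's "mkdir -p": create each missing component of a
    # slash-separated path in turn.  Sketch only; no error checking
    # beyond what mkdir itself reports.
    _dir=
    _oldifs=$IFS
    IFS=/
    for _comp in $1; do
        _dir="${_dir}${_comp}"
        # Skip the empty leading field of an absolute path and
        # components that already exist.
        if [ -n "${_dir}" ] && [ ! -d "${_dir}" ]; then
            mkdir "${_dir}" || { IFS=$_oldifs; return 1; }
        fi
        _dir="${_dir}/"
    done
    IFS=$_oldifs
}

mkpath /xyz/appl/unix/tmp    # behaves like: mkdir -p /xyz/appl/unix/tmp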
> From: Doug McIlroy
> Perhaps the real question is why did IBM break so completely to hex for
> the 360?
Probably because the 360 had 8-bit bytes?
Unless there's something like the PDP-11 instruction format which makes octal
optimal, octal is a pain working with 8-bit bytes; anytime you're looking at
the higher bytes in a word, unless you are working through software which
will 'interpret' the bytes for you, it's a PITA.
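To make that concrete, a throwaway shell demo (mine, nothing
period-accurate about it): take the 16-bit word 0xABCD, whose bytes are
0xAB and 0xCD.

# Hex digit pairs line up with byte boundaries; octal digits don't.
printf '%04X hex = %06o octal\n' 0xABCD 0xABCD    # ABCD hex = 125715 octal
printf '%02X hex = %03o octal\n' 0xAB 0xAB        # AB hex = 253 octal
printf '%02X hex = %03o octal\n' 0xCD 0xCD        # CD hex = 315 octal

The high byte reads straight off as 'AB' in hex, but its octal value
253 appears nowhere in 125715; you have to do the bit arithmetic to dig
it out.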
The 360 instruction coding doesn't really benefit from octal (well,
instructions are in 4 classes, based on the high two bits of the first byte,
but past that, hex works better); opcodes are 8 or 16 bits, and register
numbers are 4 bits.
As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370
Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over
whether to use 6 or 8, and they finally went with 8 because i) statistics
showed that more customer data was numbers, rather than text, and storing
decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit
byte), and ii) they were looking forward to handling text with upper- and
lower-case.
Noel
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is
> a multiple of 3. But PDP-11 is 16b, multiple of 4.
Octal predates the 6-bit byte. Dumps on Whirlwind II, a 16-bit machine,
were issued in octal. And to help with arithmetic, the computer lab
had an octal Friden (IIRC) desk calculator. One important feature of
octal is you don't have to learn new numerals and their addition
and multiplication tables, 2.5x the size of decimal tables (16x16
entries against decimal's 10x10).
Established early, octal was reinforced by a decade of 6-bit bytes.
Perhaps the real question is why did IBM break so completely to hex
for the 360? (Absent actual knowledge, I'd hazard a guess that it
was eased in on the 7030.)
> From: Joerg Schilling
> Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz
> channel?
Google is your friend:
https://en.wikipedia.org/wiki/T-carrier
https://en.wikipedia.org/wiki/Digital_Signal_1
> How was the 64 ??? Kbit/s interface to the first IMPs implemented?
> Wasn't it AT&T that provided the lines for the first IMPs?
Yes and no. Some details are given in "The interface message processor for the
ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not
much. More detail of the business arrangement is contained in "A History of
the ARPANET: The First Decade" (BBN Report No. 4799).
Details of the interface, and the IMP side, are given in the BBN proposal,
"Interface Message Processor for the ARPA Computer Network" (BBN Proposal No.
IMP P69-IST-5): in each direction there is a digital data line, and a clock
line. It's synchronous (i.e. a constant stream of SYN characters is sent
across the interface when no 'frame' is being sent).
The 50 kb/s modems were, IIRC, provided by the Bell System; the diagram in the
paper above seems to indicate that they were not considered part of the IMP
system. The modems at MIT were contained in a large rack, the same size as
the IMP, which stood next to it.
I wasn't able to find anything about what lay beyond the IMP/modem
interface. Perhaps some AT&T publications of that period might detail
how the modem, etc., worked.
Noel
On the subject of the PDP-10, I recall seeing people at a DECUS
meeting in the early 1980s wearing T-shirts that proclaimed
I don't care what they say, 36 bits are here to stay!
I also recall a funny advertising video spoof at that meeting that
ended with the line
At DIGITAL, we're building yesterday's tomorrow, today.
That meeting was about the time DEC cancelled the Jupiter project,
which had been planned to produce a substantially more powerful
follow-on to the KL-10 processor model of the PDP-10 (we had two such
at the UofUtah); the cancellation disappointed most of DEC's PDP-10
customers.
Some of the Jupiter technology was transferred to later VAX models,
but DEC never produced anything faster than the KL-10 in the 36-bit
line. However, with microcomputers entering the market, and early
workstations from Apollo, LMI, Sun, and others, the economics of
computing changed dramatically, and departmental mainframes ceased to
be cost effective.
Besides our mainframe DEC-20/60 TOPS-20 system in the College of
Science, we also ran Wollongong BSD Unix on a VAX 750, and DEC VMS on
VAX 780 and 8600 models. In 1987, we bought our first dozen Sun
workstations (and for far less than the cost of a DEC-20/60).
After 12 good years of service (and a forklift upgrade from a 20/40 to
a 20/60), our KL-10 was retired on 31-Oct-1990, and the VAX 8600 in
July 1991. Our productivity increased significantly in the Unix
world.
I wrote about memories and history and impact of the PDP-10 in two
keynote addresses at TUG meetings in articles and slides available at
http://www.math.utah.edu/~beebe/talks/2003/tug2003/
http://www.math.utah.edu/~beebe/talks/2005/pt2005/
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------