> From: Clem Cole
> Steve Ward's guys writing Trix hacked together a compiler, assembler and
> the like.
All of which I have the source for - just looked through it.
> If memory serves me, tjt wrote the assembler
I have the NROFF source for the "A68 Assembler Reference", and it's by James
L. Gula and Thomas J. Teixeira. It says that "A68 is an edit of the MICAL
assembler also written by Mike [Patrick].".
> Jack Test did much of the compiler and again IIRC that was based on PCC.
I dunno, I'm not familiar with PCC, so I can't say. It definitely looks very
different from the Ritchie C compiler.
Noel
> From: Paul Ruizendaal <pnr(a)planet.nl>
>> I have this distinct memory of Dave Clark mentioning the Liza Martin
>> TCP/IP for Unix in one of the meeting reports published as IENs
> It may be mentioned in this report:
> http://web.mit.edu/Saltzer/www/publications/rfc/csr-rfc-228.pdf
Yeah, I had run across that in my search for any remnants of the Martin
stuff.
> Would you know if any of its source code survived?
As I had mentioned, I had found some old dump tapes, and had one of them read;
it had some bad spots, but we've just (this morning) succeeded in having a
look at what's there, and I _think_ all of the source is OK (including the
kernel code, as well as applications like server Telnet and FTP). No SCCS or
anything like that, so it's a bit hit or miss doing history - the file write
dates were preserved, but of course a lot of them would have been edited over
time to fix bugs, add features, etc.
The tape appears to contain a _lot_ of other historic material, and it's
going to take a while to sort it all out; it includes a Version 6 with NCP
from NOSC/SRI, some Unix from BBN; a BCPL compiler; a 'bind' for .rel format
files (produced by MACRO-11 and probably BCPL) written in BCPL; programs to
convert from .rel to a.out and back; an early version of Montgomery EMACS;
another Unix from 'TMI' (whoever that might be); another UNIX that's somehow
associated with TRIX; someone's early kernel overlay stuff; an early 68K C
compiler, and also an early 8080 C compiler - just a ton of stuff (that's just
a few items that grabbed my eye as I scrolled by).
Algol, alas, appears not to be there (we probably didn't add it, because of
space reasons). The copy of LISP on this tape seems to be damaged; I do have 3
other tapes, and between them, I hope we'll be able to retrieve it.
Noel
> From: Nick Downing
> This is a wonderful find
Yes, I was _very_ happy to find those tapes in my basement; up till then, I
was almost sure all those bits were gone forever.
Thanks to Chuck Guzis, whose old data recovery service made this possible - he
actually read the tape.
> is it possible for you to read the other tapes also?
Alas, they're all of the same system. So the most we're going to get is the
files that are missing on this one due to bad spots on the tape.
Noel
> some Unix from BBN
This one is from 1979; it includes Mike Wingfield's TCP. The 'Trix UNIX' is a
port to the 68K, probably started with something V7ish (I see "setjmp.h" in
there). Bits of the Montgomery EMACS appear to date from 1981, but the main
source files seem to be from 1984. I also have the source to 'vsh' (Visual
Shell), whatever that is.
Noel
Just stumbled over another early TCP/IP for Unix:
http://bitsavers.informatik.uni-stuttgart.de/pdf/3Com/3Com_UNET_Nov80.pdf
It would seem to be a design similar to that of Holmgren's (NCP-based)
Network Unix (basic packet processing in the kernel, connection management
in a user space daemon). In time and in concept it would sit in between the
Wingfield ('79, all user space) and the Gurwitz ('81, all kernel)
implementations.
I think it was distributed initially as a mod versus V7 and later as a mod versus 2BSD.
Would anybody here know of surviving source of this implementation?
Thanks,
Paul
The recent discussion of Solaris made me think - what was the first Unix to
have centralized package management as part of the OS? I know that IRIX
had it, I think from the beginning (possibly even for the GL2 releases) but
I imagine there was probably something before that.
-Henry
Guess it is the beginning of the end of Solaris and the SPARC CPU:
'Rumors have been circulating since late last year that Oracle was
planning to kill development of the Solaris operating system, with major
layoffs coming to the operating system's development team. Others
speculated that future versions of the Unix platform Oracle acquired
with Sun Microsystems would be designed for the cloud and built for the
Intel platform only and that the SPARC processor line would meet its
demise. The good news, based on a recently released Oracle roadmap for
the SPARC platform, is that both Solaris and SPARC appear to have a
future.
The bad news is that the next major version of Solaris—Solaris 12—has
apparently been canceled, as it has disappeared from the roadmap.
Instead, it's been replaced with "Solaris 11.next"—and that version is
apparently the only update planned for the operating system through
2021.
With its on-premises software and hardware sales in decline, Oracle has
been undergoing a major reorganization over the past two years as it
attempts to pivot toward the cloud. Those changes led to a major speed
bump in the development cycle for Java Enterprise Edition, a slowdown
significant enough that it spurred something of a Java community revolt.
Oracle later announced a new roadmap for Java EE that recalibrated
expectations, focusing on cloud services features for the next version
of the software platform. '
http://arstechnica.com/information-technology/2017/01/oracle-sort-of-confir…
--
Kay Parker
kayparker(a)mailite.com
Now that we have quite a few ex-Bell Labs staff on the list, and several
other luminaries, and with the Unix 50th anniversary not far off, perhaps
it is time to form a working group to help lobby to get 8th, 9th and 10th
Editions released.
I'm after volunteers to help. People who can actually move this forward.
Let me know if and how you can help out.
Thanks, Warren
I'm a bit puzzled, but then I only ever worked with some version of
Ultrix and an AT&T flavour of UNIX in Philips, SCO 3.2V4.2 (OpenServer
3ish), DEC Digital UNIX, Tru64, HP-UX 11.23/11.31 and only ever used
"mkdir -p".
Some differences in the various versions are easily solved in scripts,
as shown below. Not the best of examples, but easy. Getting it to
work on a Linux flavour wouldn't be too difficult :-)
# uname -s (not -n, which gives the host name) yields the OS name we
# match on below.
OS_TYPE=`uname -s`
case "${OS_TYPE}" in
"OSF1")    # Digital UNIX / Tru64
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="THA-7"
    ;;
"HP-UX")
    PATH=".:/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/contrib/bin:/xyz/field/scripts:/xyz/shell:/xyz/appl/unix/bin:/xyz/utils:"
    TZ="TST-7"
    ;;
*)
    echo "${OS_TYPE} unknown, exit"
    exit 1
    ;;
esac
> From: Doug McIlroy
> Perhaps the real question is why did IBM break so completely to hex for
> the 360?
Probably because the 360 had 8-bit bytes?
Unless there's something like the PDP-11 instruction format which makes octal
optimal, octal is a pain working with 8-bit bytes; anytime you're looking at
the higher bytes in a word, unless you are working through software which
will 'interpret' the bytes for you, it's a PITA.
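A quick C sketch of the point (the word value is arbitrary): the two bytes
can be read straight off the hex form of the word, but in octal the high
byte straddles digit boundaries.

#include <stdio.h>

int main(void)
{
    unsigned short w = 0xAB12;  /* arbitrary word: bytes 0xAB, 0x12 */

    printf("hex:   %04x   -> bytes %02x %02x\n", w, w >> 8, w & 0xff);
    printf("octal: %06o -> bytes %03o %03o\n", w, w >> 8, w & 0377);
    return 0;
}

This prints "hex: ab12 -> bytes ab 12" but "octal: 125422 -> bytes 253 022";
neither octal byte is visible in the octal form of the word.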
The 360 instruction coding doesn't really benefit from octal (well,
instructions are in 4 classes, based on the high two bits of the first byte,
but past that, hex works better); opcodes are 8 or 16 bits, and register
numbers are 4 bits.
As to why the 360 had 8-bit bytes, according to "IBM's 360 and Early 370
Systems" (Pugh, Johnson, and Palmer, pp. 148-149), there was a big fight over
whether to use 6 or 8, and they finally went with 8 because i) statistics
showed that more customer data was numbers, rather than text, and storing
decimal numbers in 6-bit bytes was inefficient (BCD does two digits per 8-bit
byte), and ii) they were looking forward to handling text with upper- and
lower-case.
Noel
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b is
> a multiple of 3. But PDP-11 is 16b, multiple of 4.
Octal predates the 6-bit byte. Dumps on Whirlwind II, a 16-bit machine,
were issued in octal. And to help with arithmetic, the computer lab
had an octal Friden (IIRC) desk calculator. One important feature of
octal is you don't have to learn new numerals and their addition
and multiplication tables, 2.5x the size of decimal tables.
Established early, octal was reinforced by a decade of 6-bit bytes.
Perhaps the real question is why did IBM break so completely to hex
for the 360? (Absent actual knowledge, I'd hazard a guess that it
was eased in on the 7030.)
Doug
> From: Joerg Schilling
> Was T1 a "digital" line interface, or was this rather a 24x3.1 kHz
> channel?
Google is your friend:
https://en.wikipedia.org/wiki/T-carrier
https://en.wikipedia.org/wiki/Digital_Signal_1
> How was the 64 ??? Kbit/s interface to the first IMPs implemented?
> Wasn't it AT&T that provided the lines for the first IMPs?
Yes and no. Some details are given in "The interface message processor for the
ARPA computer network" (Heart, Kahn, Ornstein, Crowther and Walden), but not
much. More detail of the business arrangement is contained in "A History of
the ARPANET: The First Decade" (BBN Report No. 4799).
Details of the interface, and the IMP side, are given in the BBN proposal,
"Interface Message Processor for the ARPA Computer Network" (BBN Proposal No.
IMP P69-IST-5): in each direction there is a digital data line, and a clock
line. It's synchronous (i.e. a constant stream of SYN characters is sent
across the interface when no 'frame' is being sent).
The 50KB modems were, IIRC, provided by the Bell system; the diagram in the
paper above seems to indicate that they were not considered part of the IMP
system. The modems at MIT were contained in a large rack, the same size as
the IMP, which stood next to it.
I wasn't able to find anything about anything past the IMP/modem interface.
Perhaps some AT&T publications of that period might detail how the modem,
etc, worked.
Noel
On the subject of the PDP-10, I recall seeing people at a DECUS
meeting in the early 1980s wearing T-shirts that proclaimed
I don't care what they say, 36 bits are here to stay!
I also recall a funny advertising video spoof at that meeting that
ended with the line
At DIGITAL, we're building yesterday's tomorrow, today.
That meeting was about the time of the cancellation of the Jupiter
project at DEC that was planned to produce a substantially more
powerful follow-on to the KL-10 processor model of the PDP-10 (we had
two such at the UofUtah), disappointing most of its PDP-10 customers.
Some of the Jupiter technology was transferred to later VAX models,
but DEC never produced anything faster than the KL-10 in the 36-bit
line. However, with microcomputers entering the market, and early
workstations from Apollo, LMI, Sun, and others, the economics of
computing changed dramatically, and departmental mainframes ceased to
be cost effective.
Besides our mainframe DEC-20/60 TOPS-20 system in the College of
Science, we also ran Wollongong BSD Unix on a VAX 750, and DEC VMS on
VAX 780 and 8600 models. In 1987, we bought our first dozen Sun
workstations (and for far less than the cost of a DEC-20/60).
After 12 good years of service (and a forklift upgrade from a 20/40 to
a 20/60), our KL-10 was retired on 31-Oct-1990, and the VAX 8600 in
July 1991. Our productivity increased significantly in the Unix
world.
I wrote about memories and history and impact of the PDP-10 in two
keynote addresses at TUG meetings in articles and slides available at
http://www.math.utah.edu/~beebe/talks/2003/tug2003/
http://www.math.utah.edu/~beebe/talks/2005/pt2005/
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I was well aware of the comment in V6, but had no idea what it
referred to. When Dennis and I were porting what became V7 to the
Interdata 8/32, we spent about 10 frustrating days dealing with savu
and retu. Dennis did his most productive work between 10pm and 4am,
while I kept more normal hours. We would pore over the crash dumps
(in hex, then a new thing for us--PDP-11 was all octal, all the
time). I'd tinker with the compiler, he'd tinker with the code and
we would get it to limp, flap its wings, and then crash. The problem
was that the Interdata had many more registers than the PDP-11, so the
compiler only saved the register variables across a call, where the
PDP-11 saved all the registers. This was just fine inside a process,
but between processes it was deadly. After we had tried everything
we could think of, Dennis concluded that the fundamental architecture
was broken. In a couple of days, he came up with the scheme that
ended up in V7.
It was only several years later when I saw a T-shirt with savu and
retu on it along with the famous comment that I realized what it had
referred to, and enjoyed the irony that we hadn't understood it
either...
Steve
Tim Bradshaw <tfb(a)tfeb.org> writes on 17 Jan 2017 13:09 +0000
>> I think we've all lived in a wonderful time where it seemed like
>> various exponential processes could continue for ever: they can't.
For an update on the exponential scaling (Moore's Law et al), see
this interesting new paper:
Peter J. Denning and Ted G. Lewis
Exponential laws of computing growth
Comm. ACM 60(1) 54--65 January 2017
https://doi.org/10.1145/2976758
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> One thing that I'm unclear about is why all this Arpanet work was not filtering more into the versions of Unix done at Bell Labs.
The short answer is that Bell Labs was not on Arpanet. In the early
80s the interim CSNET gave us a dial-up window into Arpanet, which
primarily served as a conduit for email. When real internet connection
became possible, network code from Berkeley was folded into the
research kernel. (I am tempted to say "engulfed the research kernel",
for this was a huge addition.)
The highest levels of AT&T were happy to carry digital data, but
did not see digital as significant business. Even though digital T1
was the backbone of long-distance transmission, it was IBM, not
AT&T, that offered direct digital interfaces to T1 in the 60s.
When Arpanet came along MCI was far more eager to carry its data
than AT&T was. It was all very well for Sandy Fraser to build
experimental data networks in the lab, but this was seen as a
niche market. AT&T devoted more effort to specialized applications
like hotel PBXs than to digital communication per se.
Doug
I'm having a lot of fun with a virtual 11/94 and 2.11. What a lot of
excellent engineering!
It seems like an obvious project would be to adapt a newer pcc with ANSI
C support of some sort. Has this already been done? I'll take a look
if not.
Thanks,
Andy Valencia
p.s. The "less" in /usr/local doesn't seem to handle stty-based TTY
geometry. I re-ported "less2" from comp.sources.unix and added this.
Somebody ping me if the mildly edited sources are of interest.
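(For context, "stty-based TTY geometry" means the window size held by the
tty driver, rather than termcap's static idea of it. A minimal sketch of
reading it, assuming a system with TIOCGWINSZ - an assumption, not a claim
about the 2.11 code:)

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct winsize ws;

    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        printf("%d rows, %d cols\n", ws.ws_row, ws.ws_col);
    return 0;
}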
> From: Larry McVoy
> It is pretty stunning that the company that had the largest network in
> the world (the phone system of course) didn't get packet switching at
> all.
Actually, it's quite logical - and in fact, the lack of 'getting it' about
packets follows directly from their large existing circuit-switched network.
This dates back to Baran (see his oral history:
https://conservancy.umn.edu/handle/11299/107101
pg. 19 and on), but it was still detectable almost two decades later.
For a variety of all-too-human reasons (of the flavour of 'we're the
networking experts, what do you know'; 'we know all about circuit networks,
this packet stuff is too different'; 'we don't want to obsolete our giant
investment', etc, etc), along with genuine concerns about some real issues of
packet switching (e.g. the congestion stuff, and how well the system handled
load and overload), packet switching just was a bridge too far from what they
already had.
Think IBM and timesharing versus batch and mainframe versus small computers.
Noel
> From: Warren Toomey
> Something I've been meaning to ask for a while: why Unix and octal on
> the PDP-11? Because of the DEC documentation?
Yeah, DEC did it all in octal.
> I understand why other DEC architectures (e.g. PDP-7) were octal: 18b
> is a multiple of 3. But PDP-11 is 16b, multiple of 4.
Look at PDP-11 machine code. Two-op instructions look like this (bit-wise):
oooossssssdddddd
where 'ssssss' and 'dddddd' (source and destination) have the same format:
mmmrrr
where 'mmm' is the mode (things like R, @Rn, etc) and 'rrr' is the register
number. All on octal boundaries. So if you see '010011' in a dump (or when
looking at memory through the front console switches :-), you know
immediately that means:
MOV R0, @R1
Much harder in hex... :-)
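A little C sketch of that decoding, using the same instruction word:

#include <stdio.h>

int main(void)
{
    unsigned w = 0010011;           /* octal literal: MOV R0, @R1 */
    unsigned op  = (w >> 12) & 017; /* oooo: 01 = MOV */
    unsigned src = (w >> 6) & 077;  /* ssssss */
    unsigned dst = w & 077;         /* dddddd */

    printf("op %02o  src mode %o reg %o  dst mode %o reg %o\n",
        op, src >> 3, src & 07, dst >> 3, dst & 07);
    return 0;
}

Every field prints as whole octal digits (op 01, src mode 0 reg 0, dst
mode 1 reg 1); the same split in hex lands in the middle of a digit.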
Noel
> From: Angelo Papenhoff
> The problem is that the function which did the savu was not necessarily
> the same as the function that does the retu, so after retu the function
> could have the call stack of a different function. As dmr explained,
> this worked with the PDP-11 compiler but not with the interdata
> compiler.
To put it slightly differently, in PDP-11 C all stack frames look identical,
but this is not true of other machines/compilers. So if routine A called
savu(), and routine B called aretu(), when the call to aretu() returned,
procedure B is still running, but on procedure A's stack frame. So on machines
where A's stack frame looks different from B's, hilarity ensues.
(Note that aretu() was significantly different from retu() - the latter
switched to a different process/stack, whereas aretu() did a 'non-local goto'
[technically, switched to a different stack frame on the current stack] in the
current process.)
> Note that Lions doesn't explain this either, he assumed that the
> difficulty was with u_rsav and u_ssav .. (he probably wasn't that
> wrong though, it really is confusing, but it's just not what the comment
> refers to)
Right. There are actually _three_ sets of saved stack info:
int u_rsav[2]; /* save r5,r6 when exchanging stacks */
int u_qsav[2]; /* label variable for quits and interrupts */
int u_ssav[2]; /* label variable for swapping */
and it was the interaction among the three of them that I found very hard to
understand - hence my (incorrect) memory that the 'you are not' comment
actually referred to that, not the savu/aretu stuff!
Calls to retu(), the primitive to switch stacks/processes, _always_ use
rsav. The others are for 'non-local gotos' inside a process.
Think of qsav as a poor man's exception handler for process software
interrupts. When a process is sleeping on some event, when it is interrupted,
rather than the sleep() call returning, it wakes up returning from the
procedure that did the savu(qsav). (That last is because sleep() - which is
the procedure that's running when the call to aretu(qsav) returns - does a
return immediately after restoring the stack to the frame saved in qsav.)
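For readers who know modern C better than V6 internals: the qsav dance is
roughly setjmp()/longjmp(), as in this sketch (names and structure are
illustrative only, not the real kernel code):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf qsav;            /* analogue of u_qsav */

static void xsleep(void)        /* analogue of sleep() */
{
    /* ... waiting; an interrupt arrives instead of a wakeup ... */
    longjmp(qsav, 1);           /* like aretu(qsav) plus the return */
}

int main(void)
{
    if (setjmp(qsav) == 0) {    /* like savu(qsav) at system-call entry */
        xsleep();
    } else {
        printf("interrupted: back at top level, call unwound\n");
    }
    return 0;
}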
And I've forgotten exactly how ssav worked - IIRC it was something to do with
how when a process is swapped out, since that can happen in a number of
ways/places, the stack can contain calls to various things like expand(),
etc; when it's swapped back in, the simplest thing to do is to just throw that
all away and have it go back to where it was just before it was decided to
swap it out.
Noel
> From: Tony Finch
This is getting a bit far afield from Unix, so my apologies to the list for
that. But to avoid dumping it in the Internet-History list abruptly, let me
answer here _briefly_ (believe it or not, the below _is_ brief).
> AIUI there were two major revisions to the IPv4 addressing architecture:
Not quite (see below). First, one needs to understand that there are two
different timelines for changes to addressing: in the hosts, and in the
routers (called 'gateways' originally). To start with, they were tied
together, but as of RFC-1122, they were formally separated: hosts no longer
fully understood the syntax/semantics of addresses, just (mostly) treated them
as opaque 32-bit quantities.
> subnetting (RFC 917, October 1984 ... RFC 950, August 1985), and
> classless routing (RFC 1519, September 1993)
Originally, network numbers were 8 bits, and the 'rest' (local) part was 24.
Mapping from IP addresses to physical network addresses was done with direct
mapping - ARP did not exist - the actual local address (e.g. IMP/Port) was
contained in the 'rest' field - each network had a document which specified
the mapping. (Which is part of the interoperability issue with old
implementations.)
At some point early on, it was realized that 8 bits of network number were not
enough, and the awful A/B/C kludge was added (it was dropped on the community,
not discussed before-hand). Subnetting was indeed the next change. Then the
host/router split happened.
Classless routing (which effectively extended addresses, for path-computation
purposes, to 32+N bits - since you couldn't look at a 32-bit IP address and
immediately tell which was the 'network' part any more, you _had_ to have the
mask as well, to tell you how many bits of any given address were the network
number) was more of a process than a single change - the inter-AS routing
(BGP) had to change, but so did IGP's (OSPF, IS-IS), etc, etc.
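To make "you _had_ to have the mask" concrete, here is a sketch in C with a
made-up address and prefix; the network part is whatever the mask says it
is, not something deducible from the address alone:

#include <stdio.h>

int main(void)
{
    unsigned addr = 0xC6336482; /* 198.51.100.130, say */
    int prefix = 26;            /* the extra N bits of routing state */
    unsigned mask = ~0u << (32 - prefix);
    unsigned net = addr & mask;

    printf("network part: %u.%u.%u.%u/%d\n", net >> 24,
        (net >> 16) & 0xff, (net >> 8) & 0xff, net & 0xff, prefix);
    return 0;
}

(That prints 198.51.100.128/26; with a /24 mask the same address would
yield 198.51.100.0/24.)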
> originally called supernetting (RFC 1338, June 1992).
There was this effort called ROAD which produced RFC-1338 and 1519, and IIRC
there was an intermediate, involving blocks of network numbers (1338), and
that slowly evolved into arbitrary blocks (1519).
One should also note that the term "super-netting" comes from a proposal by
Carl-Herbert ("Roki") Rokitansky which did not, alas, make it to RFC. (His
purpose was different, but it used the same mechanism.) Alas, the authors of
1338/1519 failed to properly acknowledge his earlier work.
Noel
> From: Johnny Billquist
>> everyone working on TCP/IP heard about Version 4 shortly after the
>> June, 1978 meeting.
> Over a year before any documents said anything about it.
Incorrect. It's documented in IEN-44, June 1978 (written shortly after the
meeting, in the same month).
> I'm sure people were doing networking protocols and stuff earlier, but
> it wasn't the TCP/IP we know and talk about today
People were working on Unix in 1977, but it's not the same Unix we know and
talk about today. Does that mean it's not Unix they were working on?
>> there were working implementations (as in, they could exchange data with
>> other implementations) of TCP/IPv4 by January 1979 - see IEN 77.
^^
> But not TCP4 then.
I just specified that it was v4 (see above).
> thus, not interoperable with an implementation today
No, with properly-chosen addresses (because of the changes in address
handling), they probably would be.
Noel
> From: Paul Ruizendaal
> I guess by April 1981 (RFC777) we reach a point where things are
> specified to a level where implementations would interoperate with
> today's implementations.
Yes and no. Earlier boxes would interoperate, _if addresses on each end were
chosen properly_. Modern handling of addresses on hosts (for the 'is this
destination on my physical network' step of the packet-sending algorithm) did
not come in until RFC-1122 (October 1989); prior to that, lots of host code
probably tried to figure out if the destination was class A, B or C, etc, etc.
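That post-1122 "is it on my network" step reduces to a mask compare; a
sketch with made-up numbers:

#include <stdio.h>

int main(void)
{
    unsigned my_addr = 0xC0000201; /* 192.0.2.1 (hypothetical) */
    unsigned my_mask = 0xFFFFFF00; /* configured/learned, not class-derived */
    unsigned dst     = 0xC0000242; /* 192.0.2.66 */

    if ((dst & my_mask) == (my_addr & my_mask))
        printf("on-link: resolve the destination and send directly\n");
    else
        printf("off-link: hand the packet to a gateway\n");
    return 0;
}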
Also, until RFC-826 (ARP, November 1982) pretty much all the network
interfaces (and thus the code to turn the 'destination IP address' into an
attached physical network address, for the first hop) were things like ARPANet
that no longer exist, so you couldn't _actually_ fire up one of them unless you
do something like the 'ARPANet emulation' that the various PDP-10 simulators
use to allow old OS's running on them to talk to the current Internet.
> only if one accepts IEN54/55 as 'TCP/IP'
What are they, if not TCP/IP?
Not the modern variant, of course, but then again, nothing before the early
90's is truly 'modern TCP/IP'.
> IEN98 mentions a TCP3 stack done for Network Unix ... in 1978 by DTI /
> Gary Grossman.
I read this, BITD, but don't recall much about it. I was not impressed by the
coding style.
> at the same time it also uses old-style assignments ('=+' instead of
> '+='). Could this be "typesetter C"?
I don't know. IIRC, that compiler supported both styles. It had to have been a
later compiler than the one that came with V6, which didn't support longs. But
I don't recall any bug with long support in the typesetter C compiler we had at
MIT.
> From the above I would support the moniker "first TCP/IP in C on Unix"
No. That clearly belongs to the DTI one. (The differences between V3 and V4,
while significant, aren't enough to make the DTI one not 'TCP/IP in C for Unix'.)
If you want to say 'first TCP/IPv4 in C for Unix', maybe; I'd have to look for
dates on the one done at MIT for V6, that may be earlier, but I don't think
so. (Check the minutes in the IEN's, that's probably the best source of data
on progress of the various implementations.)
> One thing that I'm unclear about is why all this Arpanet work was not
> filtering more into the versions of Unix done at Bell Labs.
Here's my _guess_ - ask someone from Bell for a sure answer.
You're using 20/20 hindsight. At that point in time, it was not at all obvious
that TCP/IP was going to take over the world.
There were a couple of alternatives for moving data around that Bell used -
Datakit, and UUCP - and they worked pretty well, and there was no reason to
pick up on this TCP/IP thing.
I suspect that it wasn't until LANs became popular that TCP/IP looked like a
good thing to have - it fits very well with the capabilities most LANs had (in
terms of the service provided to things attached to them). Datakit was its own
thing, and for UUCP you'd have to provide a reliable stream, and TCP/IP 'just
did that'.
Noel