> From: Larry McVoy
> TCP/IP was the first widespread networking stack that you could get
> from a pile of different vendors, Sun, DEC, SGI, IBM's AIX, every kernel
> supported it.
Well, not quite - X.25 was also available on just about everything. TCP/IP's
big advantage over X.25 was that it worked well with LANs, whereas X.25 was
pretty specific to WANs.
That said, the wide range of TCP/IP implementations available, the
multi-vendor support, and its not being tied to any one vendor were a big
help. (Remember, I said the "_principal_ reason for TCP/IP's success"
[emphasis added] was the size of the community - other factors, such as
these, did play a role.)
The wide range of implementations was in part a result of DARPA's early
switch-over - every machine out there that was connected to the early Internet
(in the 80s) had to get a TCP/IP implementation, and DARPA paid for a lot
of them (e.g. the BBN one for VAX Unix that Berkeley took on). The TOPS-20
one came from that source, as did a whole bunch of others (many now
extinct, but...). MIT did one for
MS-DOS as soon as the IBM PC came out (1981), and that spun off to a business
(FTP Software) that was quite successful for a while (Windows 95 was, IIRC,
the first uSloth product with TCP/IP built in). Etc, etc.
Noel
> From: Kevin Bowling
> Seems like a case of winners write the history books.
Hey, I'm just trying to pass on my best understanding as I saw it at the time,
and in retrospect. If you're not interested, I'm happy to stop.
> There were corporate and public access networks long before TCP was set
> in stone as a dominant protocol.
Sure, there were lots of alternatives (BITNET, HEPNET, SPAN, CSNET, along with
commercial systems like TYMNET and TELENET, along with a host of others whose
names now escape me). And that's just the US; Europe had an alphabet soup of its
own.
But _very_ early on (1 Jan 1983), DARPA made all their fundees (which included
all the top CS departments across the US) convert to TCP/IP. (NCP was turned
off on the ARPANET, and everyone was forced to switch over, or get off the
network.) A couple of other things went with TCP/IP too (e.g. NSF's
super-computer network). A Federal ad hoc inter-departmental committee called
the FRICC moved others (e.g. NASA and DoE) in the direction of TCP/IP,
too.
That's what created the large user community that eventually drove all the
others out of business. (Metcalfe's Law.)
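To put rough numbers on Metcalfe's Law (a back-of-the-envelope sketch of my
own, in Python - nothing from the period): the number of possible
host-to-host conversations grows as n(n-1)/2, so the biggest community
pulls further ahead every time both networks grow.

    # Potential pairwise connections in an n-node network: n(n-1)/2.
    def pairwise_links(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000, 10000):
        print(f"{n:>6} nodes -> {pairwise_links(n):>12,} connections")

A network ten times the size offers roughly a hundred times as many people
to talk to - exactly the feedback loop described above.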
Noel
> From: Kevin Bowling
> I think TCP was a success because of BSD/UNIX rather than its own
> merits.
Nope. The principal reason for TCP/IP's success was that it got there first,
and established a user community first. That advantage then fed back, to
increase the lead.
Communication protocols aren't like editors/OS's/yadda-yadda. E.g. I use
Epsilon - but the fact that few others do isn't a problem/issue for me. On the
other hand, if I designed, implemented and personally adopted the world's best
communication protocol... so what? There'd be nobody to talk to.
That's just _one_ of the ways that communication systems are fundamentally
different from other information systems.
Noel
On Mon, Feb 4, 2019 at 8:43 PM Warner Losh <imp(a)bsdimp.com> wrote:
>
>
> On Sun, Feb 3, 2019, 8:03 AM Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>
>> > From: Warner Losh
>>
>> > a bunch of OSI/ISO network stack posters (thank goodness that didn't
>> > become standard, woof!)
>>
>> Why?
>>
>
> Posters like this :). OSI was massively over specified...
>
oops. Hit the list limit.
Posters like this:
https://people.freebsd.org/~imp/20190203_215836.jpg
which shows just how over-specified it was. I also worked at The Wollongong
Group back in the early 90's, and the OSI stack was a total dog on the SysV
386 machines that we were trying to demo it on. It was a total and
unbelievable PITA to set up, and gave crappy performance once we got it
going. Almost bad enough that we didn't show it at the trade show we were
going to.... And that was just the lower layers of the stack plus basic
name service. X.400 email addresses were also somewhat overly verbose. In
many ways, it was a classic second-system effect: they were trying to fix
everything they thought was wrong with TCP/IP at the time without really,
truly knowing the difference between actual problems and mere annoyances,
and how to properly weight the severity of each issue in coming up with
their solutions.
So X.400 vs. SMTP mail addresses:
"G=Warner;S=Losh;O=WarnerLoshConsulting;PRMD=bsdimp;A=comcast;C=us" vs.
"imp(a)bsdimp.com"
(assuming I got all the weird bits of the X.400 address right; it's been a
long time, and Google had no good examples on the first page I could just
steal...) The X.400 addresses were so unwieldy that a directory service,
X.500, was added on top of them, and it was every bit as baroque, IIRC.
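To make the comparison concrete, here's a minimal sketch (mine; the field
names are simply the ones from the example above, not a claim about the
full spec) of what made X.400 addresses such a pain - an O/R address is a
bag of attribute=value pairs rather than a simple user@domain string:

    # Split an X.400 O/R address into its attribute=value pairs.
    def parse_or_address(addr):
        return dict(field.split("=", 1) for field in addr.split(";"))

    x400 = "G=Warner;S=Losh;O=WarnerLoshConsulting;PRMD=bsdimp;A=comcast;C=us"
    print(parse_or_address(x400))
    # {'G': 'Warner', 'S': 'Losh', 'O': 'WarnerLoshConsulting',
    #  'PRMD': 'bsdimp', 'A': 'comcast', 'C': 'us'}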
TP4 might not have been that bad, but all the stuff above it was kinda
crazy...
Warner
> From: Grant Taylor
> I'm not quite sure what you mean by naming a node vs network interface.
Does one name (in the generic high-level sense of the term 'name'; e.g. an
'address' is a name for a unit of main memory) apply to the node (host) no
matter how many interfaces it has, or where it is/moves in the network? If so,
that name names the node. If not...
> But I do know for a fact that in IPv4, IP addresses belonged to the
> system.
No. Multi-homed hosts in IPv4 had multiple addresses. (There are all sorts
of kludges out there now, e.g. a single IP address shared by a pool of
servers, precisely because the set of entity classes in IPvN - hosts,
interfaces, etc - and namespaces for them were not rich enough for the
things that people actually wanted to do - e.g. have a pool of servers.)
Ignore what term(s) anyone uses, and apply the 'quack/walk' test - how is
it used, and what can it do?
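A quick way to see this for yourself - a minimal Python sketch, with a
placeholder hostname (substitute any multi-homed host you know of): one
name can resolve to several addresses, each naming an interface, not the
node.

    import socket

    # Collect the distinct IPv4 addresses a hostname resolves to.
    # For a multi-homed host there is one per advertised interface.
    def host_addresses(hostname):
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        return sorted({sockaddr[0] for *_, sockaddr in infos})

    print(host_addresses("example.com"))  # placeholder hostname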
> I don't understand what you mean by using "names" for "path selection".
Names (in the generic sense above) used by the path selection mechanism
(routing protocols do path selection).
> That's probably why I don't understand how routes are allocated by a
> naming authority.
They aren't. But the path selection system can't aggregate information (e.g.
routes) about multiple connected entities into a single item (to make the path
selection scale, in a large network like the Internet) if the names the path
selection system uses for them (i.e. addresses, NSAP's, whatever) are
allocated by several different naming authorities, and thus bear no
relationship to one another.
E.g. if my house's street address is 123 North Street, and the house next door's
address is 456 South Street, and 124 North Street is on the other side of town,
maps (i.e. the data used by a path selection algorithm to decide how to get from
A to B in the road network) aren't going to be very compact.
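Python's stdlib ipaddress module shows the aggregation point nicely (a
sketch of mine, using documentation prefixes): adjacent prefixes from a
single allocation collapse into one route, while prefixes handed out by
unrelated authorities must each be carried separately.

    import ipaddress

    # Two halves of the same allocation aggregate into a single route...
    neighbours = [ipaddress.ip_network("203.0.113.0/25"),
                  ipaddress.ip_network("203.0.113.128/25")]
    print(list(ipaddress.collapse_addresses(neighbours)))
    # [IPv4Network('203.0.113.0/24')]

    # ...but unrelated allocations stay as two separate table entries,
    # like the house at 123 North Street next door to 456 South Street.
    scattered = [ipaddress.ip_network("203.0.113.0/25"),
                 ipaddress.ip_network("198.51.100.0/25")]
    print(list(ipaddress.collapse_addresses(scattered)))
    # [IPv4Network('198.51.100.0/25'), IPv4Network('203.0.113.0/25')]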
Noel
> Ken wrote ... ed (before regexp ed)
Actually Ken wrote a regexp qed (for Multics) before he wrote ed.
He wrote about it here, before the birth of Unix:
Programming Techniques: Regular expression search algorithm
Ken Thompson
Communications of the ACM, Volume 11, Issue 6, June 1968
This is the nondeterministic regexp recognizer that's been used
ever since. Amusingly, a reviewer for Computing Reviews panned
the article on the grounds that everybody already knew how to
write a deterministic recognizer that runs in linear time.
There's no use for this slower program. What the reviewer failed
to observe is that it may take time exponential in the size of
the regexp (and ditto for space) to make such a recognizer.
In real life, for a one-shot recognizer, that can easily be the
dominant cost.
The problem of exponential construction time arose in Al Aho's
egrep. I was an early adopter--for the calendar(1) daemon. The
daemon generated a date recognizer that accepted most any
(American style) date. The regular expressions were a couple
of hundred bytes long, full of alternations. Aho was chagrined
to learn that it took about 30 seconds to make a recognizer
that would be used for less than a second. That led Al to the
wonderful invention of a lazily-constructed recognizer that
would only construct the states that were actually visited
during recognition. At last, a really linear-time algorithm!
This is one of my favorite examples of the synergy of having
systems builders and theoreticians together in one small
department.
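To see the blow-up concretely, here's a small sketch (mine, in Python -
not Ken's or Al's code) using the textbook language "the n-th symbol from
the end is an 'a'": its NFA has n+1 states and simulates in time linear in
the input, but determinizing it up front yields 2^n states - exactly the
one-shot construction cost that lazy construction avoids.

    # NFA states 0..n: state 0 loops on a/b; on an 'a' it may also
    # guess "this is the n-th symbol from the end" and move to 1;
    # states 1..n-1 count the remaining symbols; state n accepts.
    def nfa_step(states, c, n):
        nxt = {0}
        for q in states:
            if q == 0 and c == 'a':
                nxt.add(1)              # the nondeterministic guess
            elif 0 < q < n:
                nxt.add(q + 1)
        return frozenset(nxt)

    # Thompson-style simulation: O(n) work per input character.
    def nfa_match(s, n):
        states = frozenset({0})
        for c in s:
            states = nfa_step(states, c, n)
        return n in states

    # Eager subset construction: count all reachable DFA states.
    def dfa_state_count(n):
        start = frozenset({0})
        seen, todo = {start}, [start]
        while todo:
            s = todo.pop()
            for c in 'ab':
                t = nfa_step(s, c, n)
                if t not in seen:
                    seen.add(t)
                    todo.append(t)
        return len(seen)

    print(nfa_match("abaabb", 3))       # True: 3rd-from-end is 'a'
    for n in range(1, 11):
        print(n, dfa_state_count(n))    # 2, 4, 8, ... 2**n

A lazy recognizer builds only the subsets an actual input visits, so the
calendar daemon would pay for the handful of states it used rather than
all 2^n of them up front.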
Doug
Sorry to drop in on the thread a bit late, and, strictly speaking, not
(according to headers) connected to the thread; I am well acquainted
with David Tilbrook, who is sadly not doing too well; it is not
surprising that Leah Neukirchen was unable to get a hold of him as he
hasn't been using email for some number of years > 1, and is
definitely not programming.
Hugh Redelmeier and I are looking into preserving his QEF toolset,
which included the QED port.
Neither Hugh nor I are ourselves QED users; I'm about 30 years into my
Emacs learning curve, albeit using Remacs (the Rust implementation)
lately, while Hugh maintains JOVE to the extent to which it remains
maintained. http://www.cs.toronto.edu/pub/hugh/jove-dev/
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
Co-inventor of Unix, he was born on this day in 1943. Just think: without
those two, we'd all be running M$ Windoze and thinking that it's wonderful (I
know, it's an exaggeration, but think about it).
-- Dave
On Sun, Feb 3, 2019 at 2:59 PM Cág <ca6c(a)bitmessage.ch> wrote:
> [Hockey Pucks and AIX are alive, Wikipedia says.
> The problem could be that neither support amd64 and/or
Be careful. The history of proprietary commercial UNIX implementations is
that they were developed by HW manufacturers that had proprietary ISAs. So
the fact that HP-UX was Itanium and AIX was Power (or Tru64 in its day was
Alpha) should not be surprising. It was the way the market developed. Each
vendor sold a unique ecosystem and tried very hard to keep you in it.
Portability was pitched as an >>import<< idea, and they tried to keep you
from exporting by getting you to use 'value add.'
I remember the reign of terror that Solaris created. Take as an example
the standard portable threading library, pthreads. But Solaris threads
were faster, and Sun did everything it could to get the ISVs to write
using Solaris Threads. Guess what -- they did. So at DEC we found
ourselves implementing a Solaris Threads package for Tru64, so the ISVs
could run their code (I don't know if IBM or HP did it too, because at
the time, our competition was Sun).
BTW: this attitude was nothing new. I've said it before: the greatest
piece of marketing DEC ever did was convince the world that VMS Fortran
was Fortran-77. It was not even close. And when you walked into most
shops writing real production code (in Fortran, of course), you discovered
they had used all of the VMS Fortran extensions. When the UNIX folks
arrived on the scene, the f77 in Seventh Edition was not good enough. You
saw first Masscomp in '85, then a year later Apollo, and two years after
that Sun, develop really, really good Fortrans -- all of them VMS Fortran
compatible.
> nobody cares about commercial Unix systems anymore.
This is a bit of a blind and sweeping statement with which, again, I would
take some care.
There are very large commercial sites that continue to run proprietary UNIX
on those same proprietary ISAs, often with ISV and in-house developed
applications that are quite valuable. For instance, a lot of the financial
and insurance industries live here. The question comes down to how to
value and count it. Just because the hackers don't work there does not
mean there are not a lot of firms doing it.
Those sites are extremely large and represent a lot of money. The number
of them was not growing, the last time I looked at the numbers. In fact,
in some cases, they >>are<< being displaced by Intel*64 systems running a
flavor of Linux. The key driver for this was moving commercial
applications such as Oracle and SAP to Linux and, in particular, Linux
running on VMs. But a huge issue was code reuse. To reuse Henry's great
line about BSD: Linux is just like Unix, only different.
Simply put, the cost of maintaining your own ISA and a complete SW
ecosystem for it continues to rise, and in fact is getting more and more
expensive as the market shrinks. At this point, the only ones left are
HP, IBM and the shadow of Sunoracle. They are servicing a market that is
fixed.
>
> As far as commercial systems go, even CentOS has a far larger market
> share on the supercomputer territory than RHEL does, according to
> TOP500.
>
Again, be careful. In fact, this is the world I have lived in for 40+
years. The Top100 system folks really do not want any stinking OS between
their application and the hardware. They never have. Don't kid yourself.
This is why systems like mOS (Rolf Riesen's MultiOS slides
<https://wrome.github.io/slides/rome16_riesen.pdf> and github sources
<https://github.com/intel/mOS/wiki>) are being developed.
Simply put, the HPC folks have always wanted the OS out of the way. Unix
was a convenience for them, and Linux just replaced UNIX. The RHEL
licensing scheme is per CPU, and on a Beowulf-style cluster it does not
make a lot of sense.
I know a lot of the Linux community likes to crow about the supers using
Linux. They really don't: it's what runs on the login node and the job
scheduler. It could be anything, as long as it's cheap, fast, and the
physicists can hack on it. This is a behavior that goes back to the
Manhattan Project, and it's unchanged. The 'capability' systems are a
high-end world that is tuned for a very specific job. You can learn a lot
in that area, but be careful about making generalizations.
As I like to say -- Fortran still pays my salary. These folks' codes are
unchanged since my father's time as a 'computer' at Rocketdyne in the
1950s. What has changed is the size of the datasets. But open up those
codes and you'll discover the same math. They tend to be equation
solvers. We just have a lot more variables.
Clem
> From: Warner Losh
> a bunch of OSI/ISO network stack posters (thank goodness that didn't
> become standard, woof!)
Why? The details have faded from my memory, but the lower layers of the
stack (CLNP at the network layer, TP4 at the transport layer) I don't
recall as being too bad. (The real block to adoption was that people
didn't want to get snarled up in the ISO standards process.)
It at least managed (IIRC) to separate the concepts of, and naming for, 'node'
and 'network interface' (which is more than IPv6 managed, apparently on the
grounds that 'IPv4 did it that way', despite lengthy pleading that in light of
increased understanding since IPv4 was done, they were separate concepts and
deserved separate namespaces). Yes, the allocation of the names used by
path selection (I use that term because to too many people, 'routing'
means 'packet forwarding') was a total dog's breakfast (allocation by
naming authority - the very definition of 'brain-damaged'), but TCP/IP's
was not any better, really.
Yes, the whole session/presentation/application thing was ponderous and probably
over-complicated, but that could have been ditched and simpler things run
directly on TP4.
(And apologies for the non-Unix content, but at least it's about
computers, unlike all the postings about Jimmy Page's guitar; typical of
the really poor S/N on this list.)
Noel