At the risk of inciting some inflammation, I think TCP was a success because
of BSD/UNIX rather than on its own merits. Moving the stack into the kernel,
thereby avoiding the need for elaborate external communications controllers
(like IBM's VTAM, its predecessors, and the other vendor networking
approaches that had existed since the '60s), and piggybacking on the success
of UNIX and then "open systems" cemented the victory. It has basically been
made to work at scale, the same way gasoline engines have been made to work
in automobiles.
There were protocols that fit the era better, like Delta-t, with a simpler
state machine and simpler connection handling. Then there was a mad dash of
protocol development in the mid-to-late '80s that produced protocols
measured, by various metrics, to outperform TCP both practically and
theoretically. Some of these seemed quite nice, like XTP, and are still in
use in niche defense applications.
Positive, rather than negative, acknowledgement has aged well as computers
have gotten more powerful (the sender can predict loss fairly well when it
hasn't gotten an ACK back, and can opportunistically retransmit). But RF
people do weird things that violate the end-to-end principle on cable modems
and radios to try to minimize transmissions.
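The positive-acknowledgement idea above can be sketched in a few lines. This
is a toy stop-and-wait model, not any real stack's retransmission logic; the
simulated lossy link, the seed, and the retry limit are all invented for
illustration:

```python
import random

def send_with_ack(packets, loss_rate=0.3, max_retries=10, rng=None):
    """Toy positive-acknowledgement sender: loss is inferred purely from
    the *absence* of an ACK, and the packet is opportunistically resent."""
    rng = rng or random.Random(42)  # deterministic toy "link"
    delivered, transmissions = [], 0
    for seq, data in enumerate(packets):
        for _ in range(max_retries):
            transmissions += 1
            # Simulate the lossy link: the packet (or its ACK) may vanish.
            if rng.random() >= loss_rate:
                delivered.append((seq, data))  # ACK came back
                break                          # move on to the next packet
            # No ACK before the timeout: assume loss and retransmit.
        else:
            raise TimeoutError(f"packet {seq} lost {max_retries} times")
    return delivered, transmissions
```

With the fixed seed, every packet eventually gets through after a handful of
retransmissions, and the receiver never has to send a negative
acknowledgement at all.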
One thing I think is missing in contemporary software developers is a
willingness to dig down and solve problems at the right level. People do
clownish things like write overlay filesystems in garbage-collected
languages. Google's QUIC is a fine example of this foolishness; I am
mortified that it is genuinely being considered as the basis for the HTTP/3
standard. But I guess we've entered the era where enough people have retired
that the lower layers are approached with mysticism and deemed untouchable
and unsuitable to change. So the layering will continue until eventually
things topple over like the garbage pile in the movie Idiocracy.
Since the discussion meandered to the distinction of path
selection/routing: for provider-level networks, label switching to this day
makes a lot more sense IMO. Figure out a path (a virtual circuit) that can
cut through each hop with a small flow table, instead of trying to
coagulate and propagate routes from a massive address space that has to fit
in an expensive CAM, buffering and forwarding packets at each hop.
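To make the contrast concrete, here is a toy sketch; the table shapes and
interface names are invented, not any real router's data path. Per-hop IP
forwarding scans the route table for the longest matching prefix on every
packet, while a label-switched hop does one exact-match lookup in a small
flow table populated once, when the path was set up:

```python
def lpm_lookup(rib, dst):
    """Per-hop IP forwarding: longest-prefix match over the whole RIB.
    rib is a list of (prefix, prefix_len, next_hop); addresses are ints."""
    best = None
    for prefix, plen, next_hop in rib:
        mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        if (dst & mask) == (prefix & mask):
            if best is None or plen > best[0]:
                best = (plen, next_hop)
    return best[1] if best else None

def label_lookup(flow_table, label):
    """Label switching: one exact-match lookup in a small per-path table,
    yielding the outgoing interface and the label for the next hop."""
    return flow_table[label]

# 10.0.0.0/8 and 10.10.0.0/16 as integer prefixes
rib = [(0x0A000000, 8, "if-east"), (0x0A0A0000, 16, "if-west")]
print(lpm_lookup(rib, 0x0A0A0101))  # longest match wins: if-west

flow_table = {17: ("if-west", 42)}  # label 17 -> out if-west, swap to 42
print(label_lookup(flow_table, 17))
```

The point of the sketch is the asymmetry: the routing path does work per
packet proportional to the table, while the label path is a single small
dictionary hit per hop.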
Regards,
Kevin
On Wed, Feb 6, 2019 at 10:49 AM Noel Chiappa <jnc(a)mercury.lcs.mit.edu>
wrote:

> > On Wed, Feb 06, 2019 at 10:16:24AM -0700, Warner Losh wrote:
> > In many ways, it was a classic second system effect because they were
> > trying to fix everything they thought was wrong with TCP/IP at the time
>
> I'm not sure this part is accurate: the two efforts were contemporaneous;
> and my impression was they were trying to design the next step in
> networking, based on _their own_ analysis of what was needed.
>
> > without really, truly knowing the differences between actual problems
> > and mere annoyances and how to properly weight the severity of the
> > issue in coming up with their solutions.
>
> This is I think true, but then again, TCP/IP fell into some of those
> holes too: fragmentation for one (although the issue there was unforeseen
> problems in doing it, not so much in it not being a real issue), all the
> 'unused' fields in the IP and TCP headers for things that never really
> got used/implemented (Type of Service, Urgent, etc).
>
>       Noel