Seeing as how this is diverging from TUHS, I'd encourage replies to the
COFF copy that I'm CCing.
On 02/06/2019 01:47 PM, Kevin Bowling wrote:
> There were protocols that fit better in the era like DeltaT with a
> simpler state machine and connection handling. Then there was a mad
> dash of protocol development in the mid to late ‘80s that were measured
> by various metrics to outperform TCP in practical and theoretical
> space. Some of these seemed quite nice like XTP and are still in use in
> niche defense applications.
$ReadingList++
> Positive, rather than negative acknowledgement has aged well as
> computers got more powerful (the sender can pretty well predict loss
> when it hasn't gotten an ACK back and opportunistically retransmit).
> But RF people do weird things that violate end to end principle on cable
> modems and radios to try and minimize transmissions.
Would you care to elaborate?
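The positive-acknowledgement behaviour described above is easy to model: the sender keeps a copy of each unacknowledged segment and retransmits when no ACK arrives within a timeout. A toy sketch (the class names and the fixed timeout are mine, not taken from any real stack):

```python
import time

class Sender:
    """Toy positive-acknowledgement sender: retransmit on ACK timeout."""

    def __init__(self, rto=0.2):
        self.rto = rto        # retransmission timeout in seconds (assumed fixed)
        self.unacked = {}     # seq -> (payload, time last sent)

    def send(self, seq, payload, wire):
        wire.transmit(seq, payload)
        self.unacked[seq] = (payload, time.monotonic())

    def on_ack(self, seq):
        # Positive ACK: the receiver confirmed this segment; drop our copy.
        self.unacked.pop(seq, None)

    def tick(self, wire):
        # No ACK within the timeout => predict loss and retransmit.
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.rto:
                wire.transmit(seq, payload)
                self.unacked[seq] = (payload, now)
```

A real stack would also adapt the timeout from measured round-trip times; this sketch only shows the retransmit-on-silence mechanism.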
> One thing I think is missing in many contemporary software
> developers is a willingness to dig down and solve problems at the right
> level. People do clownish things like write overlay filesystems in
> garbage collected languages. Google's QUIC is a fine example of
> foolishness. I am mortified that is genuinely being considered for the
> HTTP 3 standard. But I guess we've entered the era where enough people
> have retired that the lower layers are approached with mysticism and
> deemed unable and unsuitable to change. So the layering will continue
> until eventually things topple over like the garbage pile in the movie
> Idiocracy.
I thought one of the reasons that QUIC was UDP based instead of its own
transport protocol was because history has shown that the possibility
and openness of networking is not sufficient to encourage the adoption
of newer technologies. Specifically the long tail of history / legacy
has hindered the introduction of a new transport protocol. I thought I
remembered hearing that Google wanted to do a new transport protocol,
but they thought that too many things would block it, thus slowing down
its deployment. Conversely, putting QUIC on top of UDP was a minor
compromise that allowed the benefits to be adopted sooner.
Perhaps I'm misremembering. I did a quick 45 second search and couldn't
find any supporting evidence.
The only thing that comes to mind is IPsec's ESP(50) and AH(51) which—as
I understand it—are filtered too frequently because they aren't ICMP(1),
TCP(6), or UDP(17). Too many firewalls interfere to the point that they
are unreliable.
> Since the discussion meandered to the distinction of path
> selection/routing.. for provider level networks, label switching to this
> day makes a lot more sense IMO.. figure out a path virtual circuit that
> can cut through each hop with a small flow table instead of trying to
> coagulate, propagate routes from a massive address space that has to fit
> in an expensive CAM and buffer and forward packets at each hop.
I think label switching has its advantages.  I think it also has some
complications.
I feel like ATM, Frame Relay, and MPLS are all forms of label switching.
Conceptually they all operate based on a pre-programmed path.
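The cut-through idea, a small per-path flow table at each hop instead of a longest-prefix lookup over a massive address space, can be sketched as follows (node names, labels, and interface names are invented for illustration):

```python
# Toy label-switched hop: each node holds a small flow table mapping an
# incoming label to (outgoing interface, outgoing label).  The path, i.e.
# the "virtual circuit", is programmed once; forwarding afterwards is a
# single small-table lookup, not a longest-prefix match at every hop.

class LsrNode:
    def __init__(self, name):
        self.name = name
        self.flows = {}  # in_label -> (out_iface, out_label)

    def program(self, in_label, out_iface, out_label):
        self.flows[in_label] = (out_iface, out_label)

    def forward(self, label):
        out_iface, out_label = self.flows[label]  # KeyError = no circuit
        return out_iface, out_label               # label swapped per hop

# Program a two-hop virtual circuit: ingress label 100, transit label 200.
a, b = LsrNode("A"), LsrNode("B")
a.program(100, "ifB", 200)
b.program(200, "ifC", 300)

iface, label = a.forward(100)      # hop 1 swaps 100 -> 200
iface2, label2 = b.forward(label)  # hop 2 swaps 200 -> 300
```

The point of the sketch is only how little state and work each hop needs once the path has been set up out of band.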
--
Grant. . . .
unix || die
We lost computer pioneer John von Neumann on this day in 1957; the "von
Neumann" architecture (stored program etc) is the basis of all modern
computers, and he almost certainly borrowed it from Charles Babbage.
-- Dave
It looks like Kevin's response didn't make the COFF mailing list. So
I'm including it and my email that it's in response to below.
Kevin, are you subscribed to the COFF mailing list?
On 2/7/19 5:16 PM, Kevin Bowling wrote:
> On Thu, Feb 7, 2019 at 11:08 AM Grant Taylor via TUHS
> <tuhs(a)minnie.tuhs.org> wrote:
>> Seeing as how this is diverging from TUHS, I'd encourage replies to the
>> COFF copy that I'm CCing.
>
> My thesis was that the specific vector of TCP becoming dominant is "UH"
>
>> $ReadingList++
>
> Pick up ISBN-13: 978-0201563511 if you work on transport protos.
>
>> Would you care to elaborate?
>
> Only briefly for this list -- certain common devices will intercept
> TCP streams and cause what are called stretch ACKs from a host because
> transmission is expensive in some vector -- shared collision domain
> like the air or a coaxial bus, battery power to key up the radio etc.
> This means the sender has to accommodate that behavior and know how to
> react if it is modeling the connection.
Intriguing.
It seems that the more answers that I get end up resulting in even more
questions and things I want to learn about.
Thank you Kevin.
>> I thought one of the reasons that QUIC was UDP based instead of its own
>> transport protocol was because history has shown that the possibility
>> and openness of networking is not sufficient to encourage the adoption
>> of newer technologies. Specifically the long tail of history / legacy
>> has hindered the introduction of a new transport protocol. I thought I
>> remembered hearing that Google wanted to do a new transport protocol,
>> but they thought that too many things would block it, thus slowing down
>> its deployment. Conversely, putting QUIC on top of UDP was a minor
>> compromise that allowed the benefits to be adopted sooner.
>>
>> Perhaps I'm misremembering. I did a quick 45 second search and couldn't
>> find any supporting evidence.
>
> G and Facebook also admit it uses 200% more CPU to do the same
> throughput. All for basically avoiding HOL-blocking, which can be
> worked around in most common scenarios, especially when you control
> the most popular browser :facepalm:. Both companies together could
> pull networking in any direction they want. I see QUIC as yet another
> unfortunate direction.
Okay.
>> The only thing that comes to mind is IPsec's ESP(50) and AH(51) which—as
>> I understand it—are filtered too frequently because they aren't ICMP(1),
>> TCP(6), or UDP(17). Too many firewalls interfere to the point that they
>> are unreliable.
>>
>>
>> I think label switching has its advantages.  I think it also has some
>> complications.
>>
>> I feel like ATM, Frame Relay, and MPLS are all forms of label switching.
>> Conceptually they all operate based on a pre-programmed path.
>
> By provider networks I mean tier 1 and 2 and core infrastructure like
> CDNs. MPLS is good. ATM got a couple things right and a lot of
> things less right.
I think one thing that MPLS, ATM, FR do / did is to transparently carry
other protocols without modification to their operation. I think such
transparency allows them to do things that would be considerably more
difficult if the switching / routing logic of the transport network was
trying to interpret network addressing and / or protocol information of
the payloads they were carrying.
I don't know how flexible MPLS, et al, are without the support of other
associated protocols.  How well can an MPLS cloud with redundant
connections find an alternate path if a link goes down?  That's arguably
something that IP is exceedingly capable of doing, even if it's not the
most efficient at it.
--
Grant. . . .
unix || die
> On Jan 6, 2019, at 11:36 PM, on TUHS Andy Kosela
> <akosela(a)andykosela.com> wrote:
>
>> On Sun, Jan 6, 2019 at 9:01 PM A. P. Garcia
>> <a.phillip.garcia(a)gmail.com> wrote:
>>
>>
>>
>>> On Sun, Jan 6, 2019, 9:39 PM Warner Losh <imp(a)bsdimp.com> wrote:
>>>
>>>
>>>
>>>> On Sun, Jan 6, 2019, 7:06 PM Steve Nickolas <usotsuki(a)buric.co> wrote:
>>>>
>>>> On Sun, 6 Jan 2019, A. P. Garcia wrote:
>>>>
>>>> If not for GNU, Unix would still have been cloned. Net/2 happened in
>>>> parallel, did it not?
>>>
>>>
>>> Berkeley actively rewrote most of unix yes. Net/1 was released about
>>> the same time GNU was getting started. Net/2 and later 4.4 BSD
>>> continued this trend, where 4.4 was finally a complete system.
>>> BSD386 only lagged Linux by about a year and had much stronger
>>> networking support, but supported fewer obscure devices than linux...
>>>
>>> Warner
>>>
>>> Ps I know this glosses over a lot, and isn't intended to be pedantic
>>> as to who got where first. Only they were about the same time... and
>>> I'm especially glossing over the AT&T suits, etc.
>>
>>
>> It's really hard to say. How would you compile it? Clang didn't come
>> along until 2007. The Amsterdam Compiler Kit, perhaps?
>
> I find it ironic that BSD people are so quick to bash RMS, yet they
> have been using his tools (gcc and various utils) for years... By
> reading this thread it appears there are more people that have
> personal issues with RMS than technical ones. I find usually there
> are two sides to each story though.
>
> One side is eloquently presented in the 2001 movie "Revolution OS"[1]
> starring RMS amongst others.
>
> [1] https://www.youtube.com/watch?v=4vW62KqKJ5A
>
> --Andy
I know a lot of folks seek adulation and acclaim. They can’t seem to
help themselves. Often, they deserve more recognition than they receive,
other times not. As I have sought information regarding the history of
UNIX, it has as often as not involved personal narrative of those who
were involved in that history. Thankfully, a lot of those folks are
alive and reasonably cogent. This is a treasure that deserves
appreciation. However, to say that so and so invented/discovered such
and such is to make little of the environment, resources, patronage,
gestalt, and even the zeitgeist.
Who discovered Oxygen? Lavoisier? Did Pasteur save the world? Read Bruno
Latour’s work for a different perspective on science and discovery.
Particularly, Science in Action or Laboratory Life. Discovery is rarely,
if ever, a solo activity, contrary to a lot of hype and romanticization
in the literature and media.
With regard to who wrote what code... I’ve written a lot of it, that I
personally designed. That code still lives in binary form that runs on
devices that folks use to this day. However, while my code is probably
still, as originally written, in the heart of many running systems, it
is certain that many of these systems have been enhanced beyond
recognition. I guarantee you I didn’t do the extensions (I get bored way
too easily). So, if you look at the system today and ask who wrote it,
who wrote it? I would contend that while I wrote the key abstractions
that allowed these systems to come into existence, others also wrote
code to make the systems what they are today and without those
contributions, they would be lesser systems.
I love ancient UNIX, thank you Ken Thompson, Dennis Ritchie, DEC, Bell
Labs management, those Multics folks, IBM, etc :). But I work, everyday,
on a mid-2012 15” Aluminum Unibody Macbook Pro with an Intel processor
and Hynix ram, some Chinese SSD, and who knows what else... Thank you
Steve Jobs and hosts of thousands for making this miracle possible. Oh,
and yes, I daily use gcc/gdb (although, I’d prefer clang/lldb because it
comes with no associated community whining), so thank you RMS.
Later,
Will
For me, it was v6 (most likely the typesetter version).
What I’d like to see discussed is how people today learn to write, enhance, design, and otherwise get involved with an OS.
When I was teaching at UCSD my class on Unix Internals used writing a device driver as the class project and covered an overview of the Unix OS using the Bach book. Even then (the late 80’s) it was hard to do a deep dive into the whole of the Unix system.
Today Linux is far too complex for someone to be able to sit down and make useful contributions to in a few weeks possibly even months, unlike v6, v7 or even 32v. By the time of BSD 4.1[a,b,c] and 4.2 those had progressed to the point that someone just picking up the OS source and trying to understand the whole thing (VM, scheduling, buffer cache, etc) would take weeks to months.
So what is happening today in the academic world to teach new people about OS internals?
David
Computer pioneer Donald Knuth was born on this day in 1938; amongst other
things he gave us the TeX typesetting language, and he's still working on "The
Art Of Computer Programming" fascicles (I only got as far as the first two
volumes).
I really must have a look at his new MMIX, too; I love learning new programming
languages (I've used about 50 the last time I looked, and I'm happy to post a
list for my coterie of disbelievers).
-- Dave
Another weird question from my warped mind...
Would anyone happen to have a copy of the (in)famous Mark V. Shaney
software (written by Bruce Ellis, I believe) that they could share with me?
I did get it under an NDA from Chris Maltby (who was at Softway at the
time, I think) but it got lost during my dreaded house move (itself a long
story which I don't feel like sharing for a while).
Are you here, Chris? Yes, I used it on Amateur ("ham") packet radio to
annoy someone, and it sucked in not a few people who were wondering just
what the hell he was imbibing/smoking at the time... I think the Statute
of Limitations is now over :-)
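I don't have a copy either, but the core trick, a word-level Markov chain trained on list traffic, is small enough to sketch. This is a generic reconstruction, not Bruce Ellis's code:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Build a word-level Markov table: (w1, ..., w_order) -> next words."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def babble(table, length=20, seed=None):
    """Emit Shaney-style text by random walk over the trained table."""
    rng = random.Random(seed)
    state = rng.choice(list(table))      # pick a random starting context
    out = list(state)
    for _ in range(length):
        nxt = table.get(state)
        if not nxt:                      # dead end: context never continued
            break
        word = rng.choice(nxt)
        out.append(word)
        state = state[1:] + (word,)      # slide the context window
    return " ".join(out)
```

Fed a few months of mailing-list archive, something this small produces the eerily plausible nonsense Shaney was famous for.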
Thanks.
-- Dave
Since y'all started it...
1. Close to 100% of gah-noo projects violate the Unix philosophy, the
Worse is better philosophy, the KISS principle, minimalist philosophies
etc. Take just about any of the ``core'' projects, like glibc, as an
example, starting with getopt_long, which lets a developer create
PowerShell-like-named arguments, that are ``human readable''; and
ending with its static linking abilities, which need no further
ranting. And that is on top of the lead developer's attitude.
2. Close to 100% of GNU projects that simulate classic Unix utilities
introduce GNU'isms, one of them being already presented in the previous
point. Sometimes it goes very far, so we cannot call the Linux kernel
written in C, but rather in the ``GCC C'' dialect.
The musl and Clang projects demonstrate how much it takes to be able
to replace the respective components while maintaining compatibility.
3. GNU has never been about quality. In fact, the aforementioned GCC
and glibc let developers write more and more bad code.
4. GNU and FSF have never been technical movements, they are political
movements that serve the interests of rms, and should be called as
such.
There are other projects that you all know and they don't need
additional ranting: GRUB, info, Autohell, GTK+, GNOME, and on top of
this, GPL. (HURD is a joke, so not included).
--
caóc
We lost a co-inventor of ENIAC, John Mauchly, on this day in 1980. ENIAC
is claimed to be the "first general purpose electronic digital computer",
but I guess it comes down to a matter of definition[*], as every culture
likes to be the "first" (just ask the Penguins, for example); for
"computer" you could go all the way back to the Mk-I Antikythera (Hellenic
variation, from about the 100BC production run)...
[*]
Analogue/digital/hybrid
Mechanical/electrical/electronic/hybrid
General/special
Wired/programmable/Turing
Prototype/production
Harvard/Neumann/quantum
Etc...
-- Dave
Augusta Ada King-Noel, Countess of Lovelace, was lost to us in 1852 from
uterine cancer. Regarded as the first computer programmer and a
mathematical prodigy (when such things were unseemly for a mere woman),
she was the daughter of Lord Byron, and a friend of Charles Babbage.
-- Dave
A replica of EDSAC, the Electronic Delay Storage Automatic Calculator, was
switched on at Bletchley Park on this day in 2014; EDSAC was the first
practical general purpose stored program electronic computer (and how's
that for hair-splitting?).
-- Dave
(I decided it was more suitable for COFF instead of TUHS, but sadly the
membership does not overlap much.)
https://en.wikipedia.org/wiki/Leonard_Kleinrock#ARPANET
``The first permanent ARPANET link was established on November 21, 1969,
between the IMP at UCLA and the IMP at the Stanford Research Institute.''
And thus from little acorns...
-- Dave
> From: Grant Taylor
> Thank you for the reply
Sure; I love to yap about stuff like this.
> I occasionally bump into some Multicians and am always pleasantly
> surprised at how different their world is.
Yes, it's very unlike most of what's been done since. Some of it (e.g. a
strictly ordered set of rings for protection) was wrong, but there's a lot of
good stuff there yet to be mined.
>> Which is a pity, because when done correctly (which it was - Intel
>> hired Paul Karger to architect it)
Ooops, it was Roger Schell, I get them mixed up all the time. His oral
history, here:
https://conservancy.umn.edu/handle/11299/133439
is a Must Read.
>> it's just what you need for a truly secure system (which Multics also
>> had) - but that's another long message.
> If you're ever feeling bored and would like to share that story.
OK, soon.
> From: Bakul Shah
> All of this would be easily possible on the Mill arch. if ever it gets
> built. Mill has segments and protected function calls.
What I found about that mostly talked about the belt stuff. Do you happen to
have a pointer to the segment/call stuff?
> set-uid has its own issues. Plan9 doesn't have it.
Ah, what were the issues (if you happen to know)?
Noel
> From: Grant Taylor
>> as an operating system person, I was, and remain, a big fan of Multics
>> ... which I still think was a better long-term path for OSes (long
>> discussion of why elided to keep this short).
> Can I ask for the longer discussion?
Sure. (Note that I'm not on COFF, so if anyone replies, please make sure I'm
CC'd directly.)
Basically, it boils down to 'monolithic OS's are bad' - for all the reasons
laid out in the usual ukernel places (I first saw them in the BSTJ MERT
paper, IIRC).
{What follows is not the crispest way to explain it; sorry. I didn't feel like
deleting it all and re-writing to get straight to the point, which is 'Multics
had many of the advantages of a ukernel - and a long time before the ukernel
idea arrived - but without the IPC overhead, which seems to be the biggest
argument against ukernels'.}
Now, early UNIX may have done pretty well in a _tiny_ amount of memory (I
think our -11/40 V6 had about 64KB of main memory _total_, or some such
ridiculous number), and to do that, maybe you have to go monolithic, but
monolithic is pretty Procrustean.
This was brought home to me in doing early TCP/IP work on PDP-11 Unix. The
system was just not well suited to networking work - if what you wanted to do
didn't fit into the UNIX model, you were screwed. Some people (e.g. BBN) did
TCP as a process (after adding IPC; no IPC in UNIX, back then), but then
you're talking a couple of process switches to do anything.
I looked at how Dave Clark was doing it on Multics, and I was green with envy.
He added, debugged and improved his code _on the running main campus system_,
sharing the machine with dozens of real users! Try doing that on UNIX
(although nowadays it's getting there, with loadable kernel stuff - but this
was in the 70's)!
To explain how this was even possible, you need to know a tiny bit about
Multics. It was a 'single level store' system; process memory and files were
not disjoint (as in UNIX), there were only segments (most of which were
long-lived, and survived reboots); processes had huge address spaces (up to
2^18 segments), and if you needed to use one, you just mapped it into your
address space, and off you went.
So he wrote his code, and if I in my command process executed the 'TELNET'
command, when it needed to open a TCP connection, it just mapped in his TCP
code, and called TCP$open() (or whatever he named it). It fiddled around in
the networking state (which was in a data segment that Dave had set up in his
'networking' process when it started up), and set things up. So it was kind of
doing a subroutine call to code in another process.
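The closest everyday analogue on a modern Unix is mmap(2): map a long-lived file into your address space and use it as ordinary memory, rather than copying it through read()/write(). A rough sketch of the "map in the shared segment and poke at its state" idea (the file name and layout are mine, purely for illustration):

```python
import mmap, os, struct

# Multics-style 'segment': a long-lived file mapped straight into the
# address space, so any process that needs the shared state just maps it
# in, instead of copying it through read()/write() or doing IPC.
PATH = "/tmp/netstate.seg"   # hypothetical shared-state segment

with open(PATH, "wb") as f:
    f.write(b"\0" * 4096)    # create and zero the 'segment'

fd = os.open(PATH, os.O_RDWR)
seg = mmap.mmap(fd, 4096)    # map it in: now it is just memory

# Moral equivalent of calling TCP$open(): bump a shared connection
# counter in place, in the mapped segment, with no copying.
count = struct.unpack_from("<I", seg, 0)[0]
struct.pack_into("<I", seg, 0, count + 1)
seg.flush()

reread = struct.unpack_from("<I", seg, 0)[0]
seg.close(); os.close(fd); os.unlink(PATH)
```

The analogy is loose (Multics segments were first-class, long-lived, and protected by rings), but it gives the flavour of code and state being reached by mapping and subroutine call rather than by kernel modification or IPC.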
The security wasn't good, because Multics didn't have set-uid (so that only
Dave's code would have had access to that state database) - when they later
productized the code, they used Multics rings to make it secure.
But still, that was pretty amazing. The key thing here is that nobody had to
modify the Multics 'kernel' to support adding networking - the basic
mechanisms (the single-level store, etc) were so powerful and general you
could write entirely new basic things (like networking) and add them 'on the
fly'.
What Multics had was quite different from ukernels, but it amounted to the
same thing; the system wasn't this monolithic blob, it was
layered/modularized, and you could see (and gain access to, and in some cases
replace - either just for yourself, or for everyone) the layers/modules.
The nice thing was that to call up some subsystem to perform some service for
you, you didn't have to do IPC and then a process switch - it was a
_subroutine call_, in the CPU's hardware.
I don't think anyone else ever tried to go down that path as a way to
structure an operating system (in part because you need hardware support), and
it's a pity. (UNIX, which would run on anything, killed just about _everything
else_ off.)
The 386-Pentium actually had support for many segments, but I gather they are
in the process of deleting it in the latest machines because nobody's using
it. Which is a pity, because when done correctly (which it was - Intel hired
Paul Karger to architect it) it's just what you need for a truly secure system
(which Multics also had) - but that's another long message.
Noel
On this day in 1970 computer pioneer Douglas Engelbart was awarded the
patent for the computer mouse. It was a fugly thing: just a squarish box
with two wheels underneath it mounted at right-angles and a button.
Ergonomic it wasn't...
-- Dave
For some reason, I have in my calendar for today:
VMS Epoch
http://h41379.www4.hpe.com/wizard/wiz_2315.html
That date is the epoch of the Smithsonian Astrophysical Observatory's
Modified Julian Day, i.e. Julian Day 2400000.5 (chosen to allow dates
to fit into a 36-bit word on the IBM 704).
When I visited that page, NoScript went berserk...
-- Dave
On 11/16/2018 12:29 PM, Noel Chiappa wrote:
> _That_ is what made me such a huge fan of Unix, even though as an
> operating system person, I was, and remain, a big fan of Multics (maybe
> the only person in the world who really likes both :-), which I still
> think was a better long-term path for OSes (long discussion of why elided
> to keep this short).
Can I ask for the longer discussion? It sounds like an enlightening
sidebar that would be good to have over a cup of COFFee. Maybe the
barista on the COFF mailing list will brew you a cup to discuss this
there. ~wink~wink~nudge~nudge~
--
Grant. . . .
unix || die
Weird day...
Computer architect Gene Amdahl was born on this day in 1922; he had a
hand in the IBM 704 and the System/360, founded Amdahl Corporation (maker
of /360 clones), and devised Amdahl's Law in relation to parallel
processing.
But we lost Jay W. Forrester in 2016; another computer pioneer, he
invented core memory (remember that, with its destructive read cycle?).
Oh, and LSD was first synthesised in 1938 by Dr. Hofmann of Sandoz Labs,
Switzerland; it had nothing to do with Berkeley and BSD, man...
-- Dave
Computer scientist Per Brinch Hansen was born on this day in 1938; he was
known for his work on "monitors" (now known as operating systems),
concurrent programming, parallel processing, etc.
-- Dave
Donald Michie, a computer scientist, was born in 1923; he was famous for
his work in AI, and also worked at Bletchley Park on the "Tunny" cipher.
And Robert Fano, Computer scientist and Professor of Electrical
Engineering at MIT, was born on this day in 1917. He worked with Claude
Shannon on Information Theory, was involved in the development of
time-sharing computers, and was Founding Director of Project Mac, which
became MIT's AI Lab.
-- Dave
We lost computer architect Gene Amdahl on this day in 2015; responsible
for "Amdahl's Law" (referring to parallel computing), he had a hand in the
IBM-704, the System/360, and founded Amdahl Corporation (maker of
360/370-series clones).
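Amdahl's Law itself is a one-liner: if a fraction p of a job can be spread over n processors, the overall speedup is 1 / ((1 - p) + p/n). A quick sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: overall speedup when fraction p runs on n processors.

    The serial fraction (1 - p) still takes its full time, which bounds
    the speedup at 1 / (1 - p) no matter how large n gets.
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 1024 CPUs buy less than 20x:
s = amdahl_speedup(0.95, 1024)
```

The punchline is the asymptote: with p = 0.95, no amount of hardware can ever exceed a 20x speedup.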
-- Dave
PUP sent out an announcement of BWK's latest tome
(https://press.princeton.edu/titles/14171.html) I am bemused that
his accolades do not mention UNIX (and yes, target audience and all
that).
N.
The infamous Morris Worm was released in 1988; making use of known
vulnerabilities in Sendmail/finger/RSH (and weak passwords), it took out a
metric shitload of SUN-3s and 4BSD Vaxen (the author claimed that it was
accidental, but the idiot hadn't tested it on an isolated network first). A
temporary "condom" was discovered by Rich Kulawiec with "mkdir /tmp/sh".
Another fix was to move the C compiler elsewhere.
-- Dave