<SNIP>
Is TUHS really the right place for this thread? Maybe COFF would be
better suited to it.
And maybe the explanation for why there are more men in IT is simpler
than some folks, who forcefully try to create elaborate sociological
theories, think. In nature, males are just wired differently from
females. And that is why they ARE different, like 1 and 0. Otherwise
they would be just one sex. And as we know, nothing can come from just
one number...
--Andy
Seeing as how this is diverging from TUHS, I'd encourage replies to the
COFF copy that I'm CCing.
On 02/06/2019 01:47 PM, Kevin Bowling wrote:
> There were protocols that fit the era better, like Delta-t, with a
> simpler state machine and connection handling. Then there was a mad
> dash of protocol development in the mid-to-late ’80s, producing
> protocols that were measured by various metrics to outperform TCP in
> both practical and theoretical terms. Some of these, like XTP, seemed
> quite nice and are still in use in niche defense applications.
$ReadingList++
> Positive, rather than negative, acknowledgement has aged well as
> computers got more powerful (the sender can pretty well predict loss
> when it hasn't gotten an ACK back and opportunistically retransmit).
> But RF people do weird things that violate the end-to-end principle on
> cable modems and radios to try and minimize transmissions.
Would you care to elaborate?
> One thing I think is missing in contemporary software developers is a
> willingness to dig down and solve problems at the right level. People
> do clownish things like write overlay filesystems in garbage collected
> languages. Google's QUIC is a fine example of foolishness. I am
> mortified that it is genuinely being considered for the HTTP 3
> standard. But I guess we've entered the era where enough people have
> retired that the lower layers are approached with mysticism and deemed
> untouchable and unsuitable to change. So the layering will continue
> until eventually things topple over like the garbage pile in the movie
> Idiocracy.
I thought one of the reasons that QUIC was UDP-based, instead of being
its own transport protocol, was that history has shown the openness of
networking is not sufficient to encourage the adoption of newer
technologies. Specifically, the long tail of historical / legacy
equipment has hindered the introduction of new transport protocols. I
thought I remembered hearing that Google wanted to do a new transport
protocol, but thought that too many things would block it, thus slowing
its deployment. Conversely, putting QUIC on top of UDP was a minor
compromise that allowed the benefits to be adopted sooner.
Perhaps I'm misremembering. I did a quick 45-second search and couldn't
find any supporting evidence.
The only thing that comes to mind is IPsec's ESP(50) and AH(51) which—as
I understand it—are filtered too frequently because they aren't ICMP(1),
TCP(6), or UDP(17). Too many firewalls interfere to the point that they
are unreliable.
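To make that concrete, the failure mode is nothing more than an
allowlist keyed on the IP protocol number. A toy model in C (my own
illustration, not any real firewall's code):

/* Over-strict firewall model: pass only ICMP, TCP, and UDP, so
 * IPsec's ESP(50) and AH(51) silently fall through to the drop. */
#include <stdint.h>
#include <stdio.h>

static int allow(uint8_t ip_proto)
{
    switch (ip_proto) {
    case 1:                     /* ICMP */
    case 6:                     /* TCP */
    case 17:                    /* UDP */
        return 1;
    default:                    /* ESP(50), AH(51), SCTP(132), ... */
        return 0;
    }
}

int main(void)
{
    uint8_t protos[] = { 1, 6, 17, 50, 51 };

    for (int i = 0; i < 5; i++)
        printf("proto %3u: %s\n", protos[i],
            allow(protos[i]) ? "pass" : "drop");
    return 0;
}

Anything not on the allowlist never makes it across, no matter how
well-formed the packet is.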
> Since the discussion meandered to the distinction of path
> selection/routing.. for provider-level networks, label switching to
> this day makes a lot more sense IMO.. figure out a virtual-circuit
> path that can cut through each hop with a small flow table, instead of
> trying to aggregate and propagate routes from a massive address space
> that has to fit in an expensive CAM, then buffer and forward packets
> at each hop.
I think label switching has its advantages. I think it also has some
complications.
I feel like ATM, Frame Relay, and MPLS are all forms of label switching.
Conceptually they all operate based on a pre-programmed path.
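To make the contrast concrete: a label is a direct index into a small
flow table, while an IP lookup is a longest-prefix match over a huge
table, which is what eats the expensive CAM. A toy sketch in C
(entirely my own illustration; real forwarding hardware looks nothing
like this):

/* Toy contrast between a label lookup and an IP route lookup.
 * All structures here are hypothetical, not real MPLS code. */
#include <stdint.h>
#include <stdio.h>

#define FLOWS 1024                      /* small per-switch flow table */

struct flow_entry {
    uint32_t out_port;
    uint32_t out_label;                 /* label to swap in on egress */
};

static struct flow_entry flow_table[FLOWS];

/* Label switching: one direct array index per hop, no search. */
static struct flow_entry *label_lookup(uint32_t label)
{
    return &flow_table[label % FLOWS];
}

/* IP forwarding: longest-prefix match over the whole route table. */
struct route { uint32_t prefix; int len; uint32_t out_port; };

static struct route *lpm_lookup(struct route *rib, int n, uint32_t dst)
{
    struct route *best = NULL;
    for (int i = 0; i < n; i++) {
        uint32_t mask = rib[i].len ? ~0u << (32 - rib[i].len) : 0;
        if ((dst & mask) == rib[i].prefix &&
            (best == NULL || rib[i].len > best->len))
            best = &rib[i];
    }
    return best;
}

int main(void)
{
    flow_table[42] = (struct flow_entry){ .out_port = 3, .out_label = 17 };
    struct flow_entry *fe = label_lookup(42);
    printf("label 42 -> port %u, swap to %u\n",
        (unsigned)fe->out_port, (unsigned)fe->out_label);

    struct route rib[] = {
        { 0x0a000000, 8,  1 },          /* 10.0.0.0/8   -> port 1 */
        { 0x0a0a0000, 16, 2 },          /* 10.10.0.0/16 -> port 2 */
    };
    struct route *r = lpm_lookup(rib, 2, 0x0a0a0101);   /* 10.10.1.1 */
    if (r)
        printf("10.10.1.1 -> port %u (via /%d)\n",
            (unsigned)r->out_port, r->len);
    return 0;
}

The label lookup is O(1) per hop; the prefix match has to consider
every route. Hardware parallelizes that, but it's exactly the CAM
expense Kevin mentions.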
--
Grant. . . .
unix || die
We lost computer pioneer John von Neumann on this day in 1957; the "von
Neumann" architecture (stored program etc) is the basis of all modern
computers, and he almost certainly borrowed it from Charles Babbage.
-- Dave
It looks like Kevin's response didn't make the COFF mailing list, so
I'm including it, and the email of mine it responds to, below.
Kevin, are you subscribed to the COFF mailing list?
On 2/7/19 5:16 PM, Kevin Bowling wrote:
> On Thu, Feb 7, 2019 at 11:08 AM Grant Taylor via TUHS
> <tuhs(a)minnie.tuhs.org> wrote:
>> Seeing as how this is diverging from TUHS, I'd encourage replies to the
>> COFF copy that I'm CCing.
>
> My thesis was that the specific vector of TCP becoming dominant is "UH"
>
>> $ReadingList++
>
> Pick up ISBN-13: 978-0201563511 if you work on transport protos.
>
>> Would you care to elaborate?
>
> Only briefly for this list -- certain common devices will intercept
> TCP streams and elicit what are called stretch ACKs from a host,
> because transmission is expensive in some vector -- a shared collision
> domain like the air or a coaxial bus, battery power to key up the
> radio, etc. This means the sender has to accommodate that behavior
> and know how to react if it is modeling the connection.
Intriguing.
It seems that every answer I get results in even more questions and
more things I want to learn about.
Thank you Kevin.
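If I understand the "modeling the connection" part correctly, it
amounts to the classic smoothed-RTT retransmission timer, a la RFC
6298. A minimal sketch of my understanding (the constants are the
RFC's; the code and names are mine):

/* Sketch of RFC 6298-style RTT estimation: how a sender "models the
 * connection" so it can predict loss when an ACK hasn't come back. */
#include <math.h>
#include <stdio.h>

struct rtt_est {
    double srtt;                /* smoothed round-trip time, seconds */
    double rttvar;              /* round-trip time variation */
    double rto;                 /* retransmission timeout */
    int primed;                 /* seen a sample yet? */
};

static void rtt_sample(struct rtt_est *e, double r)
{
    if (!e->primed) {
        e->srtt = r;
        e->rttvar = r / 2;
        e->primed = 1;
    } else {
        e->rttvar = 0.75 * e->rttvar + 0.25 * fabs(e->srtt - r);
        e->srtt = 0.875 * e->srtt + 0.125 * r;
    }
    e->rto = e->srtt + 4 * e->rttvar;
    if (e->rto < 1.0)           /* RFC-recommended 1 s floor */
        e->rto = 1.0;
}

int main(void)
{
    struct rtt_est e = { 0 };
    double samples[] = { 0.100, 0.120, 0.300, 0.110 };  /* made up */

    for (int i = 0; i < 4; i++) {
        rtt_sample(&e, samples[i]);
        printf("sample %.3fs -> srtt %.3fs, rto %.3fs\n",
            samples[i], e.srtt, e.rto);
    }
    return 0;
}

If no ACK arrives within the computed RTO of a send, the sender assumes
loss and retransmits. A middlebox that stretches ACKs skews the samples
this estimator sees, which I take to be why the sender has to know such
boxes exist.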
>> I thought one of the reasons that QUIC was UDP-based, instead of
>> being its own transport protocol, was that history has shown the
>> openness of networking is not sufficient to encourage the adoption of
>> newer technologies. Specifically, the long tail of historical /
>> legacy equipment has hindered the introduction of new transport
>> protocols. I thought I remembered hearing that Google wanted to do a
>> new transport protocol, but thought that too many things would block
>> it, thus slowing its deployment. Conversely, putting QUIC on top of
>> UDP was a minor compromise that allowed the benefits to be adopted
>> sooner.
>>
>> Perhaps I'm misremembering. I did a quick 45-second search and
>> couldn't find any supporting evidence.
>
> G and Facebook also admit it uses 200% more CPU to do the same
> throughput. All for basically avoiding HOL-blocking, which can be
> worked around in most common scenarios, especially when you control
> the most popular browser :facepalm:. Both companies together could
> pull networking in any direction they want. I see QUIC as yet another
> unfortunate direction.
Okay.
>> The only thing that comes to mind is IPsec's ESP(50) and AH(51) which—as
>> I understand it—are filtered too frequently because they aren't ICMP(1),
>> TCP(6), or UDP(17). Too many firewalls interfere to the point that they
>> are unreliable.
>>
>>
>> I think label switching has its advantages. I think it also has some
>> complications.
>>
>> I feel like ATM, Frame Relay, and MPLS are all forms of label
>> switching. Conceptually they all operate based on a pre-programmed
>> path.
>
> By provider networks I mean tier 1 and 2 and core infrastructure like
> CDNs. MPLS is good. ATM got a couple things right and a lot of
> things less right.
I think one thing that MPLS, ATM, FR do / did is to transparently carry
other protocols without modification to their operation. I think such
transparency allows them to do things that would be considerably more
difficult if the switching / routing logic of the transport network was
trying to interpret network addressing and / or protocol information of
the payloads they were carrying.
I don't know how flexible MPLS, et al., are without the support of
other associated protocols. How well can an MPLS cloud with redundant
connections find an alternate path if a link goes down? That's arguably
something that IP is exceedingly capable of doing, even if it's not the
most efficient at it.
--
Grant. . . .
unix || die
> On Jan 6, 2019, at 11:36 PM, on TUHS Andy Kosela
> <akosela(a)andykosela.com> wrote:
>
>> On Sun, Jan 6, 2019 at 9:01 PM A. P. Garcia
>> <a.phillip.garcia(a)gmail.com> wrote:
>>
>>
>>
>>> On Sun, Jan 6, 2019, 9:39 PM Warner Losh <imp(a)bsdimp.com> wrote:
>>>
>>>
>>>
>>>> On Sun, Jan 6, 2019, 7:06 PM Steve Nickolas <usotsuki(a)buric.co> wrote:
>>>>
>>>> On Sun, 6 Jan 2019, A. P. Garcia wrote:
>>>>
>>>> If not for GNU, Unix would still have been cloned. Net/2 happened in
>>>> parallel, did it not?
>>>
>>>
>>> Berkeley actively rewrote most of Unix, yes. Net/1 was released
>>> about the same time GNU was getting started. Net/2 and later 4.4BSD
>>> continued this trend, where 4.4 was finally a complete system.
>>> BSD/386 only lagged Linux by about a year and had much stronger
>>> networking support, but supported fewer obscure devices than
>>> Linux...
>>>
>>> Warner
>>>
>>> PS: I know this glosses over a lot, and isn't intended to be
>>> pedantic as to who got where first. Only that they were about the
>>> same time... and I'm especially glossing over the AT&T suits, etc.
>>
>>
>> It's really hard to say. How would you compile it? Clang didn't come
>> along until 2007. The Amsterdam Compiler Kit, perhaps?
>
> I find it ironic that BSD people are so quick to bash RMS, yet they
> have been using his tools (gcc and various utils) for years... By
> reading this thread it appears there are more people who have
> personal issues with RMS than technical ones. I find there are
> usually two sides to each story, though.
>
> One side is eloquently presented in the 2001 movie "Revolution OS"[1]
> starring RMS amongst others.
>
> [1] https://www.youtube.com/watch?v=4vW62KqKJ5A
>
> --Andy
I know a lot of folks seek adulation and acclaim. They can’t seem to
help themselves. Often they deserve more recognition than they receive,
other times not. As I have sought information regarding the history of
UNIX, it has as often as not involved the personal narratives of those
who were involved in that history. Thankfully, a lot of those folks are
alive and reasonably cogent. This is a treasure that deserves
appreciation. However, to say that so-and-so invented/discovered such
and such is to belittle the environment, resources, patronage, gestalt,
and even the zeitgeist.
Who discovered Oxygen? Lavoisier? Did Pascal save the world? Read Bruno
Latour’s work for a different perspective on science and discovery.
Particularly, Science in Action or Laboratory Life. Discovery is rarely,
if ever, a solo activity, contrary to a lot of hype and romanticization
in the literature and media.
With regard to who wrote what code... I’ve written a lot of it, code
that I personally designed. That code still lives in binary form that
runs on devices that folks use to this day. However, while my code is
probably still, as originally written, at the heart of many running
systems, it is certain that many of these systems have been enhanced
beyond recognition. I guarantee you I didn’t do the extensions (I get
bored way too easily). So, if you look at the system today and ask who
wrote it, who wrote it? I would contend that while I wrote the key
abstractions that allowed these systems to come into existence, others
also wrote code to make the systems what they are today, and without
those contributions they would be lesser systems.
I love ancient UNIX, thank you Ken Thompson, Dennis Ritchie, DEC, Bell
Labs management, those Multics folks, IBM, etc. :). But I work, every
day, on a mid-2012 15” Aluminum Unibody MacBook Pro with an Intel
processor and Hynix RAM, some Chinese SSD, and who knows what else...
Thank you Steve Jobs and hosts of thousands for making this miracle
possible. Oh, and yes, I daily use gcc/gdb (although I’d prefer
clang/lldb because it comes with no associated community whining), so
thank you RMS.
Later,
Will
For myself, it was v6 (most likely the typesetter version).
What I’d like to see discussed is how people today learn to write, enhance, design, and otherwise get involved with an OS.
When I was teaching at UCSD, my class on Unix Internals used writing a device driver as the class project and covered an overview of the Unix OS using the Bach book. Even then (the late ’80s) it was hard to do a deep dive into the whole of the Unix system.
Today Linux is far too complex for someone to be able to sit down and make useful contributions to it in a few weeks, or possibly even months, unlike v6, v7, or even 32V. By the time of BSD 4.1[a,b,c] and 4.2, the system had progressed to the point that someone just picking up the OS source and trying to understand the whole thing (VM, scheduling, buffer cache, etc.) would need weeks to months.
So what is happening today in the academic world to teach new people about OS internals?
David
Computer pioneer Donald Knuth was born on this day in 1938; amongst other
things he gave us the TeX typesetting language, and he's still working on "The
Art Of Computer Programming" fascicles (I only got as far as the first two
volumes).
I really must have a look at his new MMIX, too; I love learning new
programming languages (I'd used about 50 the last time I counted, and
I'm happy to post a list for my coterie of disbelievers).
-- Dave
Another weird question from my warped mind...
Would anyone happen to have a copy of the (in)famous Mark V. Shaney
software (written by Bruce Ellis, I believe) that they could share with me?
I did get it under an NDA from Chris Maltby (who was at Softway at the
time, I think) but it got lost during my dreaded house move (itself a long
story which I don't feel like sharing for a while).
Are you here, Chris? Yes, I used it on Amateur ("ham") packet radio to
annoy someone, and it sucked in not a few people who were wondering just
what the hell he was imbibing/smoking at the time... I think the
Statute of Limitations has expired by now :-)
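For anyone who hasn't met it: as I recall, the core trick is a
word-pair Markov chain run over a source text. A toy sketch of the
idea in C (emphatically my own reconstruction, not Bruce's code):

/* Word-pair Markov generator in the spirit of Mark V. Shaney: the
 * state is a pair of adjacent words; each step jumps to a random
 * occurrence of that pair in the corpus and advances one word. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAXWORDS 100000

static char *words[MAXWORDS];
static int nwords;

int main(void)
{
    char buf[256];

    srand((unsigned)time(NULL));

    /* slurp whitespace-separated words from stdin */
    while (nwords < MAXWORDS && scanf("%255s", buf) == 1)
        words[nwords++] = strdup(buf);
    if (nwords < 3)
        return 1;

    int i = rand() % (nwords - 2);      /* random starting pair */
    for (int out = 0; out < 100; out++) {
        printf("%s ", words[i]);

        /* collect places where the pair (words[i], words[i+1]) recurs */
        int cand[64], ncand = 0;
        for (int j = 0; j + 2 < nwords && ncand < 64; j++)
            if (strcmp(words[j], words[i]) == 0 &&
                strcmp(words[j + 1], words[i + 1]) == 0)
                cand[ncand++] = j + 1;
        i = cand[rand() % ncand];
        if (i + 2 >= nwords)            /* fell off the end; restart */
            i = rand() % (nwords - 2);
    }
    printf("\n");
    return 0;
}

Feed it a few thousand words of list traffic and it produces eerily
plausible nonsense, which is rather the point.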
Thanks.
-- Dave
Since y'all started it...
1. Close to 100% of gah-noo projects violate the Unix philosophy, the
Worse-is-better philosophy, the KISS principle, minimalist philosophies,
etc. Take just about any of the ``core'' projects, like glibc, as an
example, starting with getopt_long, which lets a developer create
PowerShell-like named arguments that are ``human readable'' (see the
sketch after this list); and ending with its static-linking abilities,
which need no further ranting. And that is all in addition to the lead
developer's attitude.
2. Close to 100% of GNU projects that simulate classic Unix utilities
introduce GNU'isms, one of which was already presented in the previous
point. Sometimes it goes very far, so that we cannot say the Linux
kernel is written in C, but rather in the ``GCC C'' dialect.
The musl and Clang projects demonstrate how much it takes to be able
to replace the respective components while maintaining compatibility.
3. GNU has never been about quality. In fact, the aforementioned GCC
and glibc let developers write more and more bad code.
4. GNU and FSF have never been technical movements; they are political
movements that serve the interests of rms, and should be described as
such.
There are other projects that you all know and they don't need
additional ranting: GRUB, info, Autohell, GTK+, GNOME, and on top of
this, GPL. (HURD is a joke, so not included).
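To make point 1 concrete, here is a minimal sketch of the getopt_long
interface in question; ordinary documented usage, nothing exotic:

/* minimal getopt_long usage: the long-named-option GNU'ism above */
#include <getopt.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    static const struct option longopts[] = {
        { "human-readable", no_argument,       NULL, 'h' },
        { "output-file",    required_argument, NULL, 'o' },
        { NULL, 0, NULL, 0 }
    };
    int c;

    while ((c = getopt_long(argc, argv, "ho:", longopts, NULL)) != -1) {
        switch (c) {
        case 'h':
            printf("human-readable mode\n");
            break;
        case 'o':
            printf("writing to %s\n", optarg);
            break;
        default:
            return 1;
        }
    }
    return 0;
}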
--
caóc