On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
> I cannot really imagine how hard it is to write a modern web browser,
> with the highly integrated DOM tree, CSS, Javascript, and such;
> however, the devil is in the details anyway.

Apparently it's difficult. Or people aren't sufficiently motivated.
I know that I personally can't do it.

> Good question. Then again, young men (and women) need to have a chance
> to do anything at all, practically speaking. For example, we see
> nearly a thousand new RFCs per year; that is more than in roughly the
> first two decades combined. And everything is getting better.

Yep. I used to have aspirations of reading RFCs. But work and life
happened, then the rate of RFC publishing exploded, and I gave up.

> Really? Not that I know of. Resolvers should be capable of providing
> quality of service if multiple name servers are known, I would say.

I typically see this from the authoritative server side. I've
experienced it (at least) once myself, and multiple colleagues have
each experienced it (at least) once themselves too.

If the primary name server is offline for some reason (maintenance,
outage, hardware, what have you), clients start reporting a problem
that manifests as a complete failure. They seem not to fall back to
the secondary name server.

I'm not talking about "slowdown" complaints. I'm talking about
"complete failure" and "inability to browse the web".

I have no idea why this is. All the testing that I've done indicates
that clients do fall back to secondary name servers. Yet multiple
colleagues and I have experienced this across multiple platforms.

It's because of these failure symptoms that I'm currently working with a
friend & colleague to implement VRRP between his name servers so that
either of them can serve traffic for both DNS service IPs / Virtual IPs
(VIPs) in the event that the other server is unable to do so.

We all agree that a 5-to-90-second outage (depending on configuration)
with automatic service continuation after VIP migration is a LOT
better than end users experiencing what is effectively a service
outage of the primary DNS server.

Even during the outages, the number of queries to the secondary DNS
server(s) doesn't increase the way you would expect as clients migrate
from the offline primary to the online secondary.

In short, this is a big unknown that I, and multiple colleagues, have
seen and can't explain. So we have each independently decided to
implement solutions to keep the primary DNS IP address online using
whatever method we prefer.

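For illustration, a minimal keepalived sketch of the kind of VRRP
setup I mean; the interface name, router ID, priorities, and address
are all hypothetical:

    vrrp_instance DNS_VIP {
        interface eth0           # hypothetical NIC carrying DNS traffic
        state MASTER             # the peer runs state BACKUP
        virtual_router_id 53
        priority 200             # the peer gets a lower priority
        advert_int 1             # VRRP advertisement interval, seconds
        virtual_ipaddress {
            192.0.2.53/24        # the DNS service VIP
        }
    }

Whichever box currently holds the VIP answers queries sent to it, so a
dead primary stops mattering after a few missed advertisements.
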
> This is even in RFC 1034 as I see it, SLIST and SBELT, where the
> latter I filled in from the "nameserver" entries of /etc/resolv.conf;
> it should have multiple of those, then. (Or point to localhost and
> run a true local resolver, or something like dnsmasq.)

I completely agree that, per all the documentation, things SHOULD work
with just the secondary. Yet in my experience from the recursive DNS
server operator's standpoint, they don't.

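For reference, the stub resolver side that should make fallback work
is just this (addresses hypothetical); the options line controls how
quickly the fallback happens:

    # /etc/resolv.conf
    nameserver 192.0.2.1
    nameserver 192.0.2.2
    # wait 2 seconds per server, make 2 passes over the list
    options timeout:2 attempts:2
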
> I do see DNS failures via WiFi, but that is not the fault of DNS; it
> is the provider I use.

Sure. That's the nature of a wireless network. It's also unrelated to
the symptoms I describe above.

> P.S.: actually, the first of the only three things I have ever hated
> about DNS (and I came to it in 2004, with EDNS etc. already all
> around) is that backward compatibility was chosen for domain names,
> and therefore we gained IDNA, which is a terribly complicated
> brainfuck thing that actually caused incompatibilities; but these
> were then waved through and called OK. That is absurd.

I try to avoid dealing with IDNs. Fortunately I'm fairly lucky in doing so.

> And the third is DNSSEC, which I read the standard of and said "no".
> Just last year, or the year before, we finally got DNS over TCP/TLS
> and DNS over DTLS, that is, normal transport security!

My understanding is that DNSSEC provides verifiable authenticity of
the information transported by DNS. It's also my understanding that
DTLS, DoH, DNSCrypt, etc. provide DNS /transport/ encryption
(authentication & privacy).

As I understand it, there is nothing about DTLS, DoH, DNSCrypt, etc.
that prevents servers on the far end of said transports from modifying
queries / responses that pass through them, or from serving up spoofed
content.

To me, DNSSEC serves a completely different need than DTLS, DoH,
DNSCrypt, etc.
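
The difference is visible in the DNS data itself. With a signed zone,
the signatures ride along inside the answer regardless of transport
(abbreviated, illustrative output; any validating resolver will do):

    $ dig +dnssec example.com A
    ...
    example.com.  3600  IN  A      93.184.216.34
    example.com.  3600  IN  RRSIG  A ( ...signature data... )

The RRSIG proves the answer's authenticity no matter which transport
carried it; TLS / DTLS only protects the hop to the resolver.
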
Please correct me and enlighten me if I'm wrong or have oversimplified
things.

> Twenty years too late, but I really had good days when I saw those
> standards flying by! Now all we need are zone administrators who
> publish certificates via DNS, and DTLS and TCP/TLS consumers which
> can integrate those into their own local pool (for at least their
> runtime).

DANE has been waiting on that for a while. DANE does require DNSSEC.
DKIM likewise publishes its keys via DNS, though it does not strictly
require DNSSEC.
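
For illustration, publishing a certificate via DNS for DANE looks
something like this in the zone (host name and digest hypothetical):

    _25._tcp.mail.example.com. IN TLSA 3 1 1 (
        0d6fce3e1c0f5e2a... )   ; SHA-256 digest of the server's SPKI

A verifying client looks this record up, DNSSEC-validated, and pins
the certificate presented over TLS against the published digest.
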
> I actually hate the concept very, very much ^_^; for me it has
> similarities with brainfuck. I could not use it.

Okay. To each his / her own.

> Except that this will work only for all-English content, as otherwise
> character sets come into play: the text may be in a different
> character set, mail standards may impose a content-transfer encoding,
> and then what you are looking at is actually a database, not the data
> as such.

Hum. I've not considered non-English as I so rarely see it in raw email
format.
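
Steffen's point in concrete form: a non-ASCII header travels as an RFC
2047 encoded word, so the raw message stores an encoding of the data
rather than the data itself. A hypothetical example:

    Subject: =?utf-8?Q?Gr=C3=BC=C3=9Fe_aus_M=C3=BCnchen?=

An MUA has to decode that to display "Grüße aus München"; grepping the
raw file for the words as you would read them finds nothing.
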
> This is what I find so impressive about the Plan9 approach, where the
> individual subparts of the message are available as files in the
> filesystem, subjects etc. as such, decoded and available as normal
> files. I think this really is impressive.

I've never had the pleasure of messing with Plan9. It's on my
proverbial To-Do-Someday list. It does sound very interesting.
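
From what I've read of Plan9's upas/fs (this layout is from memory, so
treat it as approximate), each message becomes a directory of
already-decoded files:

    /mail/fs/mbox/1/from      # sender address, plain text
    /mail/fs/mbox/1/subject   # subject, already decoded
    /mail/fs/mbox/1/body      # body, already decoded
    /mail/fs/mbox/1/1/body    # first MIME subpart, likewise

So ordinary tools like grep and cat work on mail without knowing
anything about MIME.
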
> Thanks, eh, thanks. Time will bring it.. or not.

;-)

> It is of course no longer the email as it left you. It is not just
> that headers are added to record the traceroute-like path. I do have
> a bad feeling about these, but technically I do not seem to have an
> opinion.

I too have some unease with things while not having any technical
objection to them.

> Well, this adds the burden onto TUHS, just like I have said.. but you
> know, more and more SMTP servers connect directly via STARTTLS or
> TCP/TLS right away. The TUHS Postfix server does not seem to do so on
> the sending side.

The nature of email is (and has been) changing.
We're no longer trying to use 486s to manage mainstream email. My
opinion is that we have the CPU cycles to support current standards.

> -- you know, I am not an administrator; not before the 15th of March
> this year did I realize that my Postfix did not mirror all the smtpd_*
> variables as smtp_* ones, resulting in my outgoing client connections
> having an entirely different configuration than what I provided for
> what I thought was "the server". Then I added, among other things:
>
>     smtpd_tls_security_level = may
>    +smtp_tls_security_level = $smtpd_tls_security_level

Oy vey.
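
For anyone bitten by the same split, Postfix will print the effective
values directly, which makes the smtp_* / smtpd_* divergence easy to
spot (output shown for a config like the one above, before the fix):

    $ postconf smtp_tls_security_level smtpd_tls_security_level
    smtp_tls_security_level =
    smtpd_tls_security_level = may

The empty smtp_tls_security_level is the tell: the client side never
inherited the server-side setting.
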
> But if TUHS did, why should it create a DKIM signature? There is an
> ongoing effort to ensure SMTP uses TLS all along the route; I seem to
> recall having seen RFCs pass by which accomplish that. Or only
> drafts?? Hmmm.

SMTP over TLS (STARTTLS) is just a transport. I can send anything I
want across said transport.

DKIM is about enabling downstream authentication of email, much like
DNSSEC does for DNS.

There is nothing that prevents sending false information down a secure
communications channel.
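
DKIM's downstream property comes from the key living in DNS: any
receiver along the path can fetch the signing key named by the
signature and verify it, e.g. (selector and domain hypothetical):

    $ dig +short selector1._domainkey.example.com TXT
    "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

TLS on the SMTP hops plays no part in that verification.
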
> Well, S/MIME does indeed specify this mode of encapsulating the
> entire message including the headers, and requires MUAs to completely
> ignore the outer envelope in this case. (With an RFC discussing
> problems of this approach.)

Hum. That's contrary to my understanding. Do you happen to have RFC
and section numbers handy? I've wanted, and needed, to go read more
on S/MIME. It now sounds like I'm missing something.
I wonder if the same can be said about PGP.

> The BSD Mail clone I maintain does not support this yet, along with
> other important aspects of S/MIME, like the possibility to
> "self-encrypt" (so that the message can be read again, i.e. so that
> the encrypted version lands on disk in a record, not the plaintext
> one). I hope that will be part of the OpenPGP, actually privacy,
> rewrite this summer.

I thought that many MUAs handled that problem by adding the sender as
an additional recipient in the S/MIME structure. That way the sender
can use their private key to unwrap the content-encryption key and
decrypt the message that they sent.
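
A minimal sketch of that with openssl's smime subcommand (file names
hypothetical): listing both certificates wraps the one symmetric
content-encryption key for each of them, so either private key can
decrypt later:

    $ openssl smime -encrypt -aes256 -in msg.txt -out msg.p7m \
          recipient.pem myself.pem
    $ openssl smime -decrypt -in msg.p7m \
          -recip myself.pem -inkey myself.key
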
--
Grant. . . .
unix || die