Hello. And sorry for the late reply, "lovely weather we're
having"..
Grant Taylor via TUHS wrote in <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>:
|On 06/25/2018 09:51 AM, Steffen Nurpmeso wrote:
...
|> I cannot really imagine how hard it is to write a modern web browser,
|> with the highly integrative DOM tree, CSS and Javascript and such,
|> however, and the devil is in the details anyway.
|
|Apparently it's difficult. Or people aren't sufficiently motivated.
|
|I know that I personally can't do it.
|
|> Good question. Then again, young men (and women) need to have a chance
|> to do anything at all, practically speaking. For example we see almost
|> a thousand new RFCs per year. That is more than in about the first
|> two decades combined. And all is getting better.
|
|Yep. I used to have aspirations of reading RFCs. But work and life
|happened, then the rate of RFC publishing exploded, and I gave up.
I think my situation is something comparable.
|> Really? Not that i know of. Resolvers should be capable of providing
|> quality of service if multiple name servers are known, i would say.
|
|I typically see this from the authoritative server side. I've
|experienced this (at least) once myself and have multiple colleagues
|who have also experienced this (at least) once themselves.
|
|If the primary name server is offline for some reason (maintenance,
|outage, hardware, what have you) clients start reporting a problem that
|manifests itself as a complete failure. They seem to not fall back to
|the secondary name server.
|
|I'm not talking about "slowdown" complaints. I'm talking about
|"complete failure" and "inability to browse the web".
|
|I have no idea why this is. All the testing that I've done indicates
|that clients fall back to secondary name servers. Yet multiple
|colleagues and I have experienced this across multiple platforms.
|
|It's because of these failure symptoms that I'm currently working with a
|friend & colleague to implement VRRP between his name servers so that
|either of them can serve traffic for both DNS service IPs / Virtual IPs
|(VIPs) in the event that the other server is unable to do so.
|
|We all agree that a 5 ~ 90 second (depending on config) outage with
|automatic service continuation after VIP migration is a LOT better than
|end users experiencing what are effectively service outages of the
|primary DNS server.
|
|Even in the outages, the number of queries to the secondary DNS
|server(s) doesn't increase as you would expect as clients migrate from
|the offline primary to the online secondary.
|
|In short, this is a big unknown that I, and multiple colleagues, have
|seen and can't explain. So, we have each independently decided to
|implement solutions to keep the primary DNS IP address online using
|whatever method we prefer.
That is beyond me, i have never dealt with the server side. I had
a pretty normal QOS client: a sequentially used searchlist,
a configurable DNS::conf_retries, and SERVFAIL cache entries created
once all bets were off. Failing servers were placed last in the
SBELT, but kept; i see a dangling TODO note for an additional resting
place for failing nameservers, to let them rest for a while.
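Roughly the demotion idea i mean, as a free-standing C sketch; the
names and the rest period are made up, nothing of this is the actual
code:

  #include <string.h>
  #include <time.h>

  struct ns {
     char addr[64];    /* server address, textual form */
     time_t bad_until; /* 0, or: resting until this time passes */
  };

  /* A server just failed: move it behind all others and let it rest
   * for a while, but never drop it entirely.  A lookup loop would
   * skip entries whose bad_until still lies in the future. */
  static void
  sbelt_demote(struct ns *belt, unsigned n, unsigned failed_idx){
     struct ns tmp = belt[failed_idx];

     tmp.bad_until = time(NULL) + 60; /* arbitrary rest period */
     memmove(&belt[failed_idx], &belt[failed_idx + 1],
        (n - failed_idx - 1) * sizeof *belt);
     belt[n - 1] = tmp;
  }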
|> This is even in RFC 1034 as i see, SLIST and SBELT, whereas the latter
|> i filled in from the "nameserver" entries of /etc/resolv.conf; it
|> should have multiple, then. (Or point to localhost and have a true
|> local resolver or something like dnsmasq.)
|
|I completely agree that per all documentation, things SHOULD work with
|just the secondary. Yet my experience from a recursive DNS server
|operator's standpoint is that things don't work.
That is bad.
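For completeness, the client side i mean looks about like this
(invented addresses; the options are documented in resolv.conf(5)):

  nameserver 203.0.113.1
  nameserver 203.0.113.2
  options timeout:3 attempts:2 rotate

where "rotate" spreads queries over the servers, and timeout/attempts
bound how long a dead one may stall the fallback.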
|> I do see DNS failures via Wifi but that is not the fault of DNS, but of
|> the provider i use.
|
|Sure. That's the nature of a wireless network. It's also unrelated to
|the symptoms I describe above.
Well, in fact i think there is something like a scheduler error
around there, too, because sometimes i am placed nowhere and get
no bits at all, most often at minute boundaries, and then for
minutes. But well...
|> P.S.: actually the only three things i have ever hated about DNS, and
|> i came to that in 2004 with EDNS etc. all around yet, is that backward
|> compatibility has been chosen for domain names, and therefore we gained
|> IDNA, which is a terribly complicated brainfuck thing that actually
|> caused incompatibilities, but these were then waved through and ok.
|> That is absurd.
|
|I try to avoid dealing with IDNs. Fortunately I'm fairly lucky in
|doing so.
I will censor myself here. I for one would vote for UTF-7 with
a different ACE (ASCII compatible encoding) prefix; UTF-7 is
somewhat cheap to implement and used for, e.g., the pretty
omnipresent IMAP. I mean, i (but who am i) have absolutely
nothing against super smart algorithms somewhere, for example in
software that domain registrars have to run in order to grant
domain names. But the domain name as such should just be it,
otherwise it is .. not. Yes, that is pretty Hollywood, but if
i mistype an ASCII hostname i also get a false result.
Anyway: the fact is that, for example, the German IDNA 2008 rules
are incompatible with the earlier ones: really, really bizarre.
|> And the third is DNSSEC, which i read the standard of and said "no".
|> Just last year or the year before that we finally got DNS via TCP/TLS
|> and DNS via DTLS, that is, normal transport security!
|
|My understanding is that DNSSEC provides verifiable authenticity of the
|information transported by DNS. It's also my understanding that
|DTLS, DOH, DNScrypt, etc. provide DNS /transport/ encryption
|(authentication & privacy).
|
|As I understand it, there is nothing about DTLS, DOH, DNScrypt, etc.
|that prevents servers on the far end of said transports from modifying
|queries / responses that pass through them, or serving up spoofed
|content.
|
|To me, DNSSEC serves a completely different need than DTLS, DOH,
|DNScrypt, etc.
|
|Please correct me and enlighten me if I'm wrong or have oversimplified
|things.
No, a part of it is that you can place signatures which can be
used to verify the content that is actually delivered. This is
right. And yes, this is different to transport layer security.
And DNS is different in that data is cached anywhere in the
distributed topology, so having the data ship with verifiable
signatures may seem sound. But when i look around, this is not
what is in use, twenty years later. I mean, you can look up
netbsd.org and see how it works, and if you have a resolver which
can make use of that it would be ok (mine cannot), but as long as
those signatures are not mandatory they can be left out somewhere
on the way, can they not. My VM hoster, for example, offers two
nameservers and explicitly does not support DNSSEC (or at least
did not when we made the contract, about 30 months ago).
Then again i wish i could reach out securely to the mentioned
servers (i mean, it is not that bad in this respect; politics or
philosophy are something different). In the end you need to put
trust in someone. I mean, most of you have smartphones, and some
processors run entire operating systems beside the running
operating system; and the one software producer who offers its
entire software openly on a public hoster is condemned, whereas
others who do not publish any source code are taken along to the
toilet and into the bedroom. That is anything but in order.
So i am sitting behind that wall and have to take what i get.
And then, i am absolutely convinced that humans make a lot of
mistakes, and anything that does not autoconfigure correctly is
prone to misconfiguration and errors, whatever may be the cause,
from a hoped-for girlfriend to a dying, suffering wife, you know.
I think i personally could agree with these signatures if the
transport were secure, and if i could make a short single
connection to the root server of a zone and get the certificate
along with the plain TLS handshake. What i mean is, the DNS would
automatically provide signatures based on the very same key that
is used for TLS, and the time to live of all delivered records
would be capped by the lifetime of that certificate. No
configuration at all, automatically secure, a lot less code to
maintain. And maybe even knowledge whether signatures are to be
expected for a zone, so that anything else can be treated as
spoofed.
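In code the TTL capping i mean would be no more than this (a
minimal sketch; a resolver and certificate machinery providing
these values are assumed, the names are invented):

  #include <stdint.h>
  #include <time.h>

  /* Cap a record's TTL by the remaining lifetime of the zone's
   * (hypothetical) TLS certificate. */
  static uint32_t
  effective_ttl(uint32_t record_ttl, time_t cert_not_after){
     time_t now = time(NULL);
     uint32_t cert_ttl;

     if(cert_not_after <= now)
        return 0; /* certificate expired: do not cache at all */
     cert_ttl = (uint32_t)(cert_not_after - now);
     return (record_ttl < cert_ttl) ? record_ttl : cert_ttl;
  }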
All this is my personal blabla though. I think that thinking is
not bad, however. Anyway, anybody who is not driving all the
servers her- or himself has been putting trust in others for
decades and all the time, and if i can have a secure channel to
those people i put trust in, then this is a real improvement. If
those people do the same, then i have absolutely zero problems
with only encrypted transport, as opposed to open transport and
signed data.
|> Twenty years too late, but i really had good days when i saw those
|> standards flying by! Now all we need are zone administrators which
|> publish certificates via DNS and DTLS and TCP/TLS consumers which can
|> integrate those in their own local pool (for at least their runtime).
|
|DANE has been waiting on that for a while.
|
|I think DANE does require DNSSEC. I think DKIM does too.
I have the RFCs around... but puuh. DKIM says:

  The DNS is proposed as the initial mechanism for the public keys.
  Thus, DKIM currently depends on DNS administration and the security
  of the DNS system. DKIM is designed to be extensible to other key
  fetching services as they become available.
|> I actually hate the concept very, very much ^_^, for me it has
|> similarities with brainfuck. I could not use it.
|
|Okay. To each his / her own.
Of course.
|> Except that this will work only for all-English, as otherwise character
|> sets come into play, text may be in a different character set, mail
|> standards may impose a content-transfer encoding, and then what you are
|> looking at is actually a database, not the data as such.
|
|Hum. I've not considered non-English as I so rarely see it in raw email
|format.
|
|> This is what i find so impressive about that Plan9 approach, where
|> the individual subparts of the message are available as files in the
|> filesystem, subjects etc. as such, decoded and available as normal
|> files. I think this really is .. impressive.
|
|I've never had the pleasure of messing with Plan9. It's on my
|proverbial To-Do-Someday list. It does sound very interesting.
To me it is more about some of the concepts in there; i actually
cannot use it with the defaults. I cannot use the editors Sam or
Acme, and i do not actually like using the mouse, which is
a problem.
|> Thanks, eh, thanks. Time will bring.. or not.
|
|;-)
Oh, i am hoping for "will", but it takes a lot of time. It would
have been easier to start from scratch, and, well, in C++. I have
never been a real application developer, i am more for libraries,
or, say, interfaces which enable you to do something. Staying
a bit in the back, but providing support as necessary. Well.
I hope we get there.
|> It is of course no longer the email that left you. It is not just
|> that headers are added to record the traceroute-like path. I do have
|> a bad feeling with these, but technically i do not seem to have an
|> opinion.
|
|I too have some unease with things while not having any technical
|objection to them.
|
|> Well, this adds the burden onto TUHS. Just like i have said.. but
|> you know, more and more SMTP servers connect directly via STARTTLS or
|> TCP/TLS right away. The TUHS postfix server does not seem to do so on
|> the sending side
|
|The nature of email is (and has been) changing.
|
|We're no longer trying to use 486s to manage mainstream email. My
|opinion is that we have the CPU cycles to support current standards.
|
|> -- you know, i am not an administrator; not until the 15th of March
|> this year did i realize that my Postfix did not reiterate all the
|> smtpd_* variables as smtp_* ones, resulting in my outgoing client
|> connections having an entirely different configuration than what
|> i provided for what i thought is "the server". Then i did, among
|> others:
|>
|>   smtpd_tls_security_level = may
|>   +smtp_tls_security_level = $smtpd_tls_security_level
|
|Oy vey.
Hm.
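In case it helps anyone, the effective values of both sides can be
compared with postconf(1):

  $ postconf smtp_tls_security_level smtpd_tls_security_level
  smtp_tls_security_level = may
  smtpd_tls_security_level = may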
|> But if TUHS did, why should it create a DKIM signature? Ongoing is the
|> effort to ensure SMTP uses TLS all along the route, i seem to recall i
|> have seen RFCs pass by which accomplish that. Or only drafts?? Hmmm.
|
|SMTP over TLS (STARTTLS) is just a transport. I can send anything I
|want across said transport.
That is true.
|DKIM is about enabling downstream authentication in email. Much like
|DNSSEC does for DNS.
|
|There is nothing that prevents sending false information down a secure
|communications channel.
I mean, that argument is an all-destroying hammer. But it is
true. Of course.
|> Well, S/MIME does indeed specify this mode of encapsulating the entire
|> message including the headers, and requires MUAs to completely ignore
|> the outer envelope in this case. (With an RFC discussing problems of
|> this approach.)
|
|Hum. That's contrary to my understanding. Do you happen to have RFC
|and section numbers handy? I've wanted to, needed to, go read more on
|S/MIME. It now sounds like I'm missing something.
In draft "Considerations for protecting Email header with S/MIME"
by Melnikov there is
[RFC5751] describes how to protect the Email message
header [RFC5322], by wrapping the message inside a message/rfc822
container [RFC2045]:
In order to protect outer, non-content-related message header
fields (for instance, the "Subject", "To", "From", and
"Cc"
fields), the sending client MAY wrap a full MIME message in a
message/rfc822 wrapper in order to apply S/MIME security services
to these header fields. It is up to the receiving client to
decide how to present this "inner" header along with the
unprotected "outer" header.
When an S/MIME message is received, if the top-level protected
MIME entity has a Content-Type of message/rfc822, it can be
assumed that the intent was to provide header protection. This
entity SHOULD be presented as the top-level message, taking into
account header merging issues as previously discussed.
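So on the wire such a message looks roughly like this (a
hand-crafted sketch, not an example taken from the draft):

  From: alice@example.org            (outer, unprotected)
  To: bob@example.org
  Subject: ...
  Content-Type: application/pkcs7-mime; smime-type=enveloped-data;
     name="smime.p7m"

  [encrypted blob, which decrypts to:]

  Content-Type: message/rfc822

  From: alice@example.org            (inner, protected)
  To: bob@example.org
  Subject: the real subject
  Content-Type: text/plain

  the actual message text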
I am a bit behind; the discussion after the Efail report this
spring has revealed some new developments that i did not know
about, and i have not yet found the time to look at those.
|I wonder if the same can be said about PGP.
Well yes, i think the same is true for such; i have already
received encrypted mails which ship a MIME multipart message, the
first part being 'Content-Type: text/rfc822-headers;
protected-headers="v1"', the latter providing the text.
|> The BSD Mail clone i maintain does not support this yet, along with
|> other important aspects of S/MIME, like the possibility to
|> "self-encrypt" (so that the message can be read again, a.k.a. that the
|> encrypted version lands on disk in a record, not the plaintext one).
|> I hope it will be part of the OpenPGP, actually privacy, rewrite this
|> summer.
|
|I thought that many MUAs handled that problem by adding the sender as an
|additional recipient in the S/MIME structure. That way the sender could
|extract the ephemeral key using their private key and decrypt the
|encrypted message that they sent.
Yes, that is how this is done. The former maintainer implemented
this rather as something like GnuPG's --hidden-recipient option,
more or less: each receiver gets her or his own S/MIME encrypted
copy. The call chain itself never sees such a mail.
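With plain openssl(1) the usual approach is simply to list one's
own certificate as one more recipient (the file names here are
invented):

  # encrypt to the recipient *and* to ourselves:
  $ openssl smime -encrypt -aes256 -in msg.txt -out msg.p7m \
        recipient.pem sender.pem
  # ...later, read our own copy again:
  $ openssl smime -decrypt -in msg.p7m -recip sender.pem \
        -inkey sender.key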
--End of <c4debab5-181d-c071-850a-a8865d6e98d0@spamtrap.tnetconsulting.net>
Grant Taylor via TUHS wrote in <5da463dd-fb08-f601-68e3-197e720d5716@spamtrap.tnetconsulting.net>:
|On 06/25/2018 10:10 AM, Steffen Nurpmeso wrote:
|> DKIM reuses the *SSL key infrastructure, which is good.
|
|Are you saying that DKIM relies on the traditional PKI via CA
|infrastructure? Or are you saying that it uses similar technology that
|is completely independent of the PKI / CA infrastructure?
I mean it uses the *SSL tools, and thus an infrastructure that is
standardized as such and very widely used, seen by many eyes.
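For example, a DKIM key pair is created with nothing but OpenSSL,
and only the public part is placed in DNS (selector and domain
invented):

  $ openssl genrsa -out dkim.key 2048
  $ openssl rsa -in dkim.key -pubout -out dkim.pub
  ; dkim.pub, stripped of its PEM armour, becomes a TXT record:
  2018._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."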
|> (Many eyes see the code in question.) It places records in DNS, which
|> is also good, now that we have DNS over TCP/TLS and (likely) DTLS.
|> In practice however things may differ, and to me DNS security is all
|> in all not given until we get transport layer security.
|
|I believe that a secure DNS /transport/ is not sufficient. Simply
|securing the communications channel does not mean that the entity on
|the other end is not lying. Particularly when not talking to the
|authoritative server, but likely relying on caching recursive resolvers.
That record distribution as above, yes, but still: caching those
RRSIGs, or whatever their name was, is not mandatory i think.
|> I personally do not like DKIM still, i have opendkim around and
|> thought about it, but i do not use it, i would rather wish that public
|> TLS certificates could also be used in the same way as public S/MIME
|> certificates or OpenPGP public keys work, then only by going to a TLS
|> endpoint securely once, there could be end-to-end security.
|
|All S/MIME interactions that I've seen do use certificates from a
|well-known CA via the PKI.
|
|(My understanding of) what you're describing is encryption of data in
|flight. That does nothing to encrypt / protect data at rest.
|
|S/MIME /does/ provide encryption / authentication of data in flight
|/and/ data at rest.
|
|S/MIME and PGP can also be used for things that never cross the wire.
As above, i just meant that if the DNS server is TLS protected,
that key could be used to automatically sign the entire zone data,
so that all signing and verification is automatic and can be
deduced from the key used for and by TLS. Using the same library,
a single configuration line, etc. Records could then be cached
anywhere just the same as now. Just an idea.
You can use a self-signed S/MIME certificate, you can use an
OpenPGP key without any web of trust. Unlike with the latter,
where anyone can upload her or his key to the pool of OpenPGP (in
practice rather GnuPG only, i would say) servers
(sks-keyservers.net, which used a self-signed TLS certificate last
time i looked btw., fingerprint
79:1B:27:A3:8E:66:7F:80:27:81:4D:4E:68:E7:C4:78:A4:5D:5A:17),
there is no such possibility for the former. So whereas everybody
can look up the fingerprint i claim via hkps:// and can be pretty
sure that this really is me, no such thing exists for S/MIME.
(I for one was very disappointed when the new German passport was
developed around Y2K and no PGP or S/MIME came with it. The
Netherlands, i think, are much better in that. But this is
a different story.)
But i think things are happening here: that HTTP .well-known/
mechanism seems to come into play for more and more things, and
programs learn to use TOFU (trust on first use) and manage
a dynamic local pool of trusted certificates. (I do not know
exactly, but i have seen messages fly by which made me think the
mutt MUA put some work into that, for example.) So this is
decentralized, then.
|> I am not a cryptographer, however. (I also have not read the TLS v1.3
|> standard which is about to become reality.) The thing however is that
|> for DKIM a lonesome user cannot do anything -- you either need to have
|> your own SMTP server, or you need to trust your provider.
|
|I don't think that's completely accurate. DKIM is a method of signing
|(via cryptographic hash) headers as you see (send) them. I see no
|reason why a client can't add DKIM headers / signature to messages it
|sends to the MSA.
|
|Granted, I've never seen this done. But I don't see anything preventing
|it from being the case.
|
|> But our own user interface is completely detached. (I mean, at least
|> if no MTA is used one could do the DKIM stuff, too.)
|
|I know that it is possible to do things on the receiving side. I've got
|the DKIM Verifier add-on installed in Thunderbird, which gives me client
|side UI indication if the message that's being displayed still passes
|DKIM validation or not. The plugin actually calculates the DKIM hash
|and compares it locally. It's not just relying on a header added by
|receiving servers.
I meant that the MUA could calculate the DKIM stuff itself, but
this works only if the MTA does not add or change headers. That
is what i referred to; but yes, DKIM verification could be done
by a MUA which is capable of it. Mine is not.
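Just to sketch what such a MUA had to do: DKIM's "relaxed" header
canonicalization (RFC 6376) lowercases the field name, unfolds the
line and squeezes whitespace before hashing. A hand-rolled
illustration, not code from any real DKIM library:

  #include <ctype.h>
  #include <stdio.h>
  #include <string.h>

  /* Canonicalize one header field ("Name: value") per the DKIM
   * "relaxed" algorithm: lowercase the name, drop whitespace around
   * the colon, unfold and compress inner whitespace, and strip
   * trailing whitespace (the final CRLF is left to the caller). */
  static void
  dkim_relax_header(char const *field, char *out, size_t outsize){
     size_t o = 0;
     int in_value = 0, pending_space = 0;

     for(; *field != '\0' && o + 2 < outsize; ++field){
        char c = *field;
        if(!in_value){
           if(c == ':'){
              out[o++] = ':';
              in_value = 1;
           }else if(!isspace((unsigned char)c))
              out[o++] = (char)tolower((unsigned char)c);
        }else{
           if(c == '\r' || c == '\n' || c == ' ' || c == '\t')
              pending_space = (o > 0 && out[o - 1] != ':');
           else{
              if(pending_space)
                 out[o++] = ' ';
              pending_space = 0;
              out[o++] = c;
           }
        }
     }
     out[o] = '\0';
  }

  int main(void){
     char buf[256];

     dkim_relax_header("Subject:  Hello \r\n\t world  ", buf, sizeof buf);
     puts(buf); /* prints "subject:Hello world" */
     return 0;
  }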
--End of <5da463dd-fb08-f601-68e3-197e720d5716@spamtrap.tnetconsulting.net>
Grant Taylor via TUHS wrote in <8c0da561-f786-8039-d2fc-907f2ddd09e3@spamtrap.tnetconsulting.net>:
|On 06/25/2018 10:26 AM, Steffen Nurpmeso wrote:
|> I have never understood why people get excited about Maildir, you have
|> trees full of files with names which hurt the eyes,
|
|First, a single Maildir is a parent directory and three sub-directories.
|Many Maildir based message stores are collections of multiple
|individual Maildirs.
This is Maildir++ i think the name was, yes.
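For reference, the basic layout; the dot-prefixed subfolders are
the Maildir++ convention (as used by e.g. Courier and Dovecot):

  Maildir/
    tmp/    # a message is written here first,
    new/    # then rename(2)d here for delivery,
    cur/    # and moved here once it has been seen
    .Sent/  # Maildir++ subfolder: itself a complete
            # Maildir with tmp/, new/ and cur/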
|Second, the names may look ugly, but they are named independently of the
|contents of the message.
|
|Third, I prefer the file per message as opposed to monolithic mbox for
|performance reasons. Thus message corruption impacts a single message
|and not the entire mail folder (mbox).
Corruption should not happen, then. This is true for any
database or repository, i think.
|Aside: I already have a good fast database that most people call a file
|system and it does a good job tracking metadata for me.
This can be true, yes. I know of file systems which get very
slow when there are a lot of files in a directory, or which even
run into limits in the worst case. (I have indeed once seen the
latter on a FreeBSD system with what i thought were good file
system defaults.)
|Fourth, I found maildir to be faster on older / slower servers because
|it doesn't require copying (backing up) the monolithic mbox file prior
|to performing an operation on it. It splits reads & writes into much
|smaller chunks that are more friendly (and faster) to the I/O sub-system.
I think this depends. Mostly anything incoming is an appending
write, and in my use case everything is then moved away in one go,
too. Then it definitely depends on the disks you have. Before,
i was using a notebook which had a 3600rpm hard disk; there you
want it compact. And then it also depends on the cleverness of
the software. Unfortunately my MUA cannot, but you could have an
index like git has, or like the already mentioned Zawinski
describes as it was used for Netscape. I think the enterprise
mail server Dovecot also uses MBOX plus index by default, but
i could be mistaken. I mean, if you control the file you do not
need to perform an action immediately, for example -- and then,
when synchronization time happens, you end up with a potentially
large write on a single file, instead of having to fiddle around
with a lot of files. But your experience may vary.
|Could many of the same things be said about MH? Very likely. What I
|couldn't say about MH at the time I went looking and comparing (typical)
|unix mail stores was the readily available POP3 and IMAP interface.
|Seeing as how POP3 and IMAP were a hard requirement for me, MH was a
|non-starter.
|
|> they miss a From_ line (and are thus not valid MBOX files, i did not
|> miss that in the other mail).
|
|True. Though I've not found that to be a limitation. I'm fairly
|confident that the Return-Path: header that is added / replaced by my
|MTA does functionally the same thing.
With From_ line i meant that legendary line that i think was
introduced with Unix V5 mail and which precedes each message in
an MBOX file, and which causes that ">From" quoting at the
beginning of lines by non MIME-willing (or so configured) MUAs.
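A tiny fabricated example of what i mean:

  From alice@example.org Mon Jun 25 09:51:00 2018
  From: alice <alice@example.org>
  Subject: mbox separator example

  A body line which itself starts with
  >From gets quoted like this, so that it cannot be
  mistaken for the separator of the next message.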
|I'm sure similar differences can be had between many different solutions
|in Unix's history. SYS V style init.d scripts vs BSD style /etc/rc.d
|style init scripts vs systemd or Solaris's SMF (I think that's its
|name).
Oh! I am happy to have the former two on my daily machines again,
and i am again able to tell you what is actually going on there
(in user space), even without following some external software
development. Just in case of interest.
|To each his / her own.
It seems you will never be able to 1984 that from the top,
especially not over time.
--End of <8c0da561-f786-8039-d2fc-907f2ddd09e3@spamtrap.tnetconsulting.net>
Ciao.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)