On Fri, Aug 6, 2021 at 10:20 AM Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> Bakul Shah wrote in
> <74E8349E-95A4-40C9-B429-11A4E396BE12(a)iitbombay.org>:
> |It was efficient but the control flow got
> |quite messy as effectively my code had to do explicit
> |continuation passing.
> Only twenty years ago, but I was under the impression that I got
> good (better) performance by having a single event-loop object
> (thread) doing the select(2), aka the OS interaction, as the driver
> under the hood of an IOEvent thing, which then was emitted.
It sounds like you are in violent agreement: you get more performance with
an event loop. So perhaps Cox's claim is true after all: threads are for
programmers who can't deal with events. And my claim, in turn, is that at a
certain not-very-high level of complexity no one can deal with events.
After all, the interface to all modern operating systems behaves like a
coroutine: when we call read() we yield control to the kernel, which does
the read for us, delivers the data to a buffer in our address space, and
then surrenders control back.
(How it looks from inside the kernel is another story.) Only when we
request signal handling does the kernel actually interrupt the ordered flow
within our program, but the restrictions POSIX places on what a signal
handler can do are so many that it's hardly safe to do more than set a flag
to be polled later, or to do a longjmp() to invoke a continuation further
up the stack.
Historic operating systems worked quite differently: they had the
equivalent of signals that meant "disk block has been read" (use the
buffer, and fast, before the OS starts to write the next block into it) and
"disk block has been written" (the buffer is available for reuse).
Programming for them was, in someone's memorable phrase, like moving dead
whales along the beach by kicking them.