From: Paul Winalski
I'm curious as to what the rationale was for Unix to have been designed with
basic I/O being blocking rather than asynchronous.
It's a combination of two factors, I reckon. One, which is better depends a
lot on the type of thing you're trying to do. For many typical things (e.g.
'ls'), blocking is a good fit. And, as Arnold says, asynchronous I/O is
more complicated, and Unix was (well, back then at least) all about getting
the most bang for the least bucks.
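To make the 'good fit' concrete, here is a minimal sketch (mine, not from the
original posts) of the kind of blocking read/write loop a simple filter uses;
the program needs no machinery for tracking outstanding requests, which is
exactly the complexity an asynchronous design would have to carry:

    /* A cat-like filter built on blocking I/O: each read() simply waits
     * until data (or EOF) arrives, and write() waits until the data has
     * been accepted.  The whole program is one loop and a buffer. */
    #include <unistd.h>

    int
    main(void)
    {
        char buf[512];
        ssize_t n;

        while ((n = read(0, buf, sizeof buf)) > 0)
            if (write(1, buf, n) != n)
                return 1;
        return n < 0;   /* nonzero exit if the final read failed */
    }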
More complicated things do sometimes benefit from asynchronous I/O, but
complicated things weren't Unix's 'target market'. E.g. even though pipes
post-date the I/O decision, they too are a better match to blocking I/O.
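As a hypothetical illustration of that match (again mine, not from the
thread): with blocking semantics the reader of a pipe simply waits until the
writer produces data, and the writer stalls when the pipe fills, so flow
control falls out for free with no extra code in either process:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int
    main(void)
    {
        int fd[2];
        char buf[64];
        ssize_t n;

        if (pipe(fd) < 0)
            return 1;

        if (fork() == 0) {          /* child: the upstream end */
            close(fd[0]);
            const char *msg = "hello through the pipe\n";
            write(fd[1], msg, strlen(msg));  /* blocks only if the pipe is full */
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);
        /* parent: blocks here until the child writes, then until EOF */
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            write(1, buf, n);
        close(fd[0]);
        wait(NULL);
        return 0;
    }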
From: Arnold Skeeve
the early Unixes were on smaller -11s, not the /45 or /70 with split I&D
space and the ability to address lots more RAM.
Ahem. Lots more _core_. People keep forgetting that we're looking at
decisions made at a time when each bit in main memory was stored in a
physically separate storage device, and having tons of memory was a dream of
the future.
E.g. the -11/40 I first ran Unix on had _48 KB_ of core memory - total!
And that had to hold the resident OS, plus the application! It's no
surprise that Unix was so focused on small size - and as a corollary, on
high bang/buck ratio.
But even in this age of lighting one's cigars with gigabytes of main memory
(literally), small is still beautiful, because it's easier to understand, and
complexity is bad. So it's too bad Unix has lost that extreme parsimony.
From: Dan Cross
question whether asynchrony itself remains untamed, as Doug put it, or if
rather it has proved difficult to retrofit asynchrony onto a system designed
around fundamentally synchronous primitives?
I'm not sure it's an 'either/or'; I reckon they are both true.
Noel