On Wed, Oct 31, 2018 at 12:51 PM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
For me one of the most important software design principles is that the
simple and most common use cases should be the simplest for the user to
code. It's OK if complicated things are complicated to code, but simple
things should be kept simple.
+1 Amen.
I've always thought that the UNIX file primitives very elegantly adhere
to this principle. Reading a file in UNIX is a simple
open()/read().../close() sequence. Contrast that with VMS's $QIO or IBM
OS access methods, where the full complexity is not only exposed in the
interface, it must be considered, set up, and controlled by the user for
even the simplest operations.
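To make that concrete, a minimal sketch of the open()/read()/close()
pattern (modern POSIX spelling, error handling trimmed):

/* Copy a file to standard output with the plain UNIX calls:
 * open(), a read() loop, then close(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[8192];
    ssize_t n;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror(argv[1]);
        return 1;
    }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);
    close(fd);
    return 0;
}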
As my friend Tom Texiera once so wisely observed, "*it was not so bad
that QIO had over a thousand options, it was that each of them had to be
tested for on each I/O.*"
I also remember having an argument with Cutler about QIO vs UNIX I/O (he
was defending QIO at the time as 'better' -- as being more complete).
But I pointed out that it was interesting that after stream I/O was
added to VMS, most DEC customers (and the internal language libraries
team) switched to using stream (UNIX-style) I/O for most operations
because it was simpler and just as fast. It was one of the few times I
saw Dave stop arguing, as he really did not have a response.
Multibuffered, asynchronous, interrupt-driven I/O is more complicated
(if not downright clumsy) in UNIX than in VMS or OS/VS, but that's OK,
IMO--it shifts the complexity burden to those doing complex things.
Exactly, those programs that need to do it can, when they need to. That
said, because of our VMS experience, we added ASTs (which I think are
simpler than UNIX signals) and a QIO-like call to RTU, because the
Real-Time customers coming from VMS needed (wanted) it. Truth is, async
I/O was really useful for some special cases where we had customers that
really needed it. But for the most part the five simple UNIX I/O calls
were good enough (although I would still rather have ASTs).
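For contrast, here is a sketch of what asynchronous I/O takes through
the modern POSIX aio(7) interface; it is only a stand-in (not the RTU
AST/QIO-style call, which predates it), but it shows how much more setup
is involved than a plain read():

/* Asynchronous read of one block via POSIX aio(7); note the control
 * block that has to be filled in before anything happens.
 * On Linux, link with -lrt. */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    static char buf[8192];
    struct aiocb cb;
    const struct aiocb *list[1] = { &cb };
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0) { perror(argv[1]); return 1; }

    memset(&cb, 0, sizeof cb);          /* set up the request descriptor */
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) { perror("aio_read"); return 1; }

    /* ... do other work here while the read is in flight ... */

    aio_suspend(list, 1, NULL);         /* then wait for completion */
    ssize_t n = aio_return(&cb);
    if (n > 0)
        write(STDOUT_FILENO, buf, n);
    close(fd);
    return 0;
}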
BTW: when we added threads, we discovered that a lot of the need/use for
async I/O also went away, and frankly much of the complexity
(clumsiness) of the async I/O code was no longer needed, as the
threading took care of it and certainly was smaller and simpler.
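A sketch of the threaded version of the same thing: a dedicated thread
doing ordinary blocking read()s gets you the overlap with none of the
async machinery (modern POSIX threads shown, not the original RTU
threading interface; compile with -pthread):

/* A reader thread does plain blocking read()s; the overlap with other
 * work comes from the scheduler, not from any async I/O calls. */
#include <pthread.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void *reader(void *arg)
{
    const char *path = arg;
    char buf[8192];
    ssize_t n;
    int fd = open(path, O_RDONLY);

    if (fd < 0) { perror(path); return NULL; }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, n);   /* simplest case: pass data along */
    close(fd);
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t tid;

    if (argc != 2) return 1;
    pthread_create(&tid, NULL, reader, argv[1]);
    /* main() is free to do other work while the reader's read()s block */
    pthread_join(tid, NULL);
    return 0;
}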
Also, I'll agree async I/O in UNIX was a tad clumsy, but the old
double-dd (ddd) program (you can find it in the UUNET archives) uses two
processes that communicate via a pipe to do some of the same things with
a traditional (6th edition) UNIX I/O system. Basically the two processes
take turns between reading and writing, so the asynchrony is left purely
in the kernel. The user code is actually very simple: a pipe passes a
token back and forth saying when the next read/write can start. For old
UNIX systems lacking async I/O, ddd(8) was the only way I knew to get a
tape drive such as a QIC/4 or 8 mm, much less a 1/2-inch streamer, to
keep up.
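For the curious, a hedged sketch of that token-passing scheme (not the
actual ddd source; the block size and details are assumptions): after a
fork() the two processes share stdin and stdout, and a one-byte token on
a pipe in each direction tells the peer when it may start its next read
and its next write, so one process reads the next block while the other
writes the previous one:

/* Two cooperating processes copy stdin to stdout in alternating
 * blocks.  The read token keeps the shared input offset in order and
 * the write token keeps the output in order; between the two, one
 * process's write overlaps the other's read. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK (64 * 1024)                 /* assumed block size */

static void turn(int tok_in, int tok_out, int first)
{
    char buf[BLK], t = 0;
    ssize_t n;

    for (;;) {
        if (!first && read(tok_in, &t, 1) != 1)
            _exit(0);                   /* peer gone: stop           */
        n = read(STDIN_FILENO, buf, BLK);
        write(tok_out, &t, 1);          /* peer may start its read   */
        if (!first && read(tok_in, &t, 1) != 1)
            _exit(0);
        if (n > 0)
            write(STDOUT_FILENO, buf, n);
        write(tok_out, &t, 1);          /* peer may start its write  */
        if (n <= 0)
            _exit(n < 0);               /* EOF (or error) reached    */
        first = 0;
    }
}

int main(void)
{
    int ab[2], ba[2];                   /* token pipes, one each way */

    signal(SIGPIPE, SIG_IGN);           /* peer may exit first       */
    if (pipe(ab) < 0 || pipe(ba) < 0) { perror("pipe"); return 1; }
    if (fork() == 0) {                  /* child: waits for tokens   */
        close(ab[1]); close(ba[0]);
        turn(ab[0], ba[1], 0);
    }
    close(ab[0]); close(ba[1]);         /* parent: starts the cycle  */
    turn(ba[0], ab[1], 1);
    return 0;
}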
Clem