Ahem. Lots more _core_. People keep forgetting that we're looking at
decisions made at a time when each bit in main memory was stored in a
physically separate storage device, and having tons of memory was a dream of
the future.
Yeah -- that is something that gets forgotten. There's a kit/hackday project to make a 32-byte core memory for an Arduino that I did a while back with some of my Boy Scouts who were doing the electronics MB, just to try to give them a feel for what a 'bit' was. Similarly, there was an update of a late-1960s children's book originally called 'A Million'; it's now called 'A Million Dots', and each page has 10K dots. The idea is to help young readers get a real feel for what 'a million' means visually.
E.g. the -11/40 I first ran Unix on had _48 KB_ of core memory - total!
And that had to hold the resident OS, plus the application! It's no
surprise that Unix was so focused on small size - and as a corollary, on
high bang/buck ratio.
Amen -- I ran an 11/34 with 64K under V6 for about 3-6 months while we were awaiting the 256K memory upgrade.
But even in this age of lighting one's cigars with gigabytes of main memory
(literally), small is still beautiful, because it's easier to understand, and
complexity is bad. So it's too bad Unix has lost that extreme parsimony.
Yep -- I think we were discussing this last week WRT cat -v/fmt et al.
I fear some people confuse 'progress' with 'feature creep.' Just because we can do something does not mean we should.
As I said, I'm a real fan of async I/O and, like Paul, feel that it is a 'better' primitive. But I fully understand and accept that, given the tradeoffs of the time, UNIX did really well, and I much prefer what we got to the alternative. I'm happy we ended up with 'simple and just works.'
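(For anyone who hasn't bumped into the contrast being discussed: here's a minimal sketch of the two styles. The blocking read() is the classic Unix primitive; the POSIX AIO calls stand in for the "async I/O as the primitive" style. AIO is a much later interface and not anything the early kernels offered; the file name, busy-wait, and error handling here are simplified purely for illustration.)

/*
 * Sketch: classic blocking read() vs. an asynchronous read.
 * POSIX AIO is used only as a stand-in for the async style.
 * On Linux, compile with -lrt.
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    int fd = open("/etc/motd", O_RDONLY);   /* any readable file will do */
    if (fd < 0)
        return 1;

    /* Classic Unix primitive: the caller blocks until the data is in buf. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("blocking read returned %zd bytes\n", n);

    /* Async style: queue the request, go do other work, collect the result later. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) == 0) {
        while (aio_error(&cb) == EINPROGRESS)
            ;                                /* real code would do useful work here */
        printf("async read returned %zd bytes\n", (ssize_t)aio_return(&cb));
    }

    close(fd);
    return 0;
}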