> My experience is that the problems involved in making a program faster are
> often quite interesting and fun to work on. But the problems making things
> fit in a small space are, IMHO, really deadly.
First make things "as simple as possible, but no simpler" (Einstein). Ken and
Dennis not only cut out fat, they also found generalizations that combined
traditionally disparate features, so the new whole was smaller (and more
comprehensible) than the sum of the old parts. The going gets tough in the
presence of constraints on space or time. Steve's perception, I think, is
colored by the experience of facing hard limits on space, but not on time.
Describing one complication of hard time constraints, John Kelly used to
say that the Packard Bell 250 was "the only machine I ever used where
you transfer to a time of day rather than a memory location". (The delay-
line memory had two instruction formats: one was operation + address-of-
next-instruction, the other was just the operation--the next instruction
being whatever came out of the delay line when the operation ended. The
latter mode minimized both execution time and code space, but the attention
one had to pay to time was, to borrow Steve's phrase, "really deadly".)
Design tradeoffs for efficiency pose an almost moral conundrum:
whether to make things fast or make them easy. For example, the
classic Unix kernel typically did table lookup by linear search,
whereas Linux (when I last looked) typically used binary search.
The price of Linux's choice is that one must take care to keep the
tables sorted; heavy discipline has to be imposed on adding and
deleting entries.
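
A minimal sketch of that tradeoff, in C with a hypothetical table of
(key, name) entries (not code from either kernel): the linear lookup
tolerates any ordering, while the binary lookup is correct only if every
insertion and deletion keeps the table sorted by key.

    #include <stdio.h>

    struct entry { int key; const char *name; };

    /* Linear search: the table may sit in any order; additions can
       simply append to the end. */
    static const struct entry *
    lookup_linear(const struct entry *t, int n, int key)
    {
        for (int i = 0; i < n; i++)
            if (t[i].key == key)
                return &t[i];
        return NULL;
    }

    /* Binary search: fewer probes, but correct only while the table
       stays sorted by key, so every add and delete must preserve
       that order. */
    static const struct entry *
    lookup_binary(const struct entry *t, int n, int key)
    {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (t[mid].key == key)
                return &t[mid];
            else if (t[mid].key < key)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
        return NULL;
    }

    int main(void)
    {
        /* Hypothetical data; already sorted, as lookup_binary requires. */
        static const struct entry tab[] = {
            {1, "init"}, {7, "sh"}, {42, "ed"}, {99, "cc"},
        };
        int n = (int)(sizeof tab / sizeof tab[0]);
        const struct entry *e = lookup_linear(tab, n, 42);
        printf("linear: %s\n", e ? e->name : "not found");
        e = lookup_binary(tab, n, 99);
        printf("binary: %s\n", e ? e->name : "not found");
        return 0;
    }

The linear version is the "easy" choice: no invariant to maintain. The
binary version buys speed at the cost of discipline imposed on every
update.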
Doug