These stories certainly rang true to me. I think it's interesting to
pose the question, to be a bit more contemporary, "What do we need to
do to make open source code higher quality?" I think the original
argument (that open source would be high quality because everybody
would read the code and fix the bugs) has a bit of validity. But,
IMHO, it is swamped by the typical incoherence of the software.
It seems to me to be glaringly obvious that if you add a single on/off
option to a program, and don't want the quality of the code to
decrease, you should _a_ _priori_ double the amount of testing you do
before releasing it. If you have a carefully designed program
with multiple phases with firewalls between them, you might be able to
get away with only 10 or 20% more testing.
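To make the combinatorics concrete, here is a tiny C sketch (my own
illustration, nothing from gcc itself) of why each on/off option
doubles the work: n independent boolean options yield 2^n distinct
configurations, so exhaustive testing grows exponentially in n.

    #include <stdio.h>

    int main(void) {
        /* Each independent on/off option doubles the number of
           distinct configurations: n options -> 2^n invocations
           to test exhaustively. */
        for (int n = 1; n <= 20; n++) {
            unsigned long configs = 1UL << n;
            printf("%2d options -> %8lu configurations\n", n, configs);
        }
        return 0;
    }

Twenty options is already over a million configurations; real
projects have hundreds.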
So look at gcc with nearly 600 lines in the man page describing just
the _names_ of the options... It seems obvious that the set of
combinations actually tested is a set of measure 0 in the
space of all possible invocations. And the observed fact is that if
you try to do something at all unusual, no matter how reasonable it
may seem, it is likely to fail. Additionally, it is kind of sad to
see that the same functionality (e.g., increasing the default stack
size) is done so differently on different OS's. Even within Linux,
setting the locale (at least at one time) was quite different from
distribution to distribution.
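As one concrete case of that divergence, here is a rough sketch of
raising the stack limit at runtime on a POSIX system with
getrlimit()/setrlimit(); the 64 MiB figure is just an example of
mine. On Windows the usual route is instead a linker option, and
some shells only expose the limit through ulimit.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;

        /* POSIX: read the current soft stack limit and raise it to
           64 MiB, capped at the hard limit. On some systems the new
           limit only takes effect for processes or threads created
           afterwards. */
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rlim_t want = 64UL * 1024 * 1024;
        if (rl.rlim_max != RLIM_INFINITY && want > rl.rlim_max)
            want = rl.rlim_max;
        rl.rlim_cur = want;
        if (setrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }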
And I think you can argue that gcc is a success story...
But how much longer can this go on? What can we do to fight the
exponential bloat of this and other open-source projects? All ideas
cheerfully entertained...
Steve