George Michaelson <ggm(a)algebras.org> writes:
we used to argue about that. I disliked autoconf because I felt
99% of the work could be precomputed, which is what MIT X11
Makefiles did: they had recipes for the common architectures.
A point still being made:
So, okay, fine, at some point it made sense to run programs to
empirically determine what was supported on a given system. What
I don't understand is why we kept running those stupid little
shell snippets and little bits of C code over and over. It's
like, okay, we established that this particular system does
<library function foobar> with two args, not three. So why the
hell are we constantly testing for it over and over?
Why didn't we end up with a situation where it was just a
standard thing that had a small number of possible values, and
it would just be set for you somewhere? Whoever was responsible
for building your system (OS company, distribution packagers,
whatever) could leave something in /etc that says "X = flavor 1,
Y = flavor 2" and so on down the line.
And, okay, fine, I get that there would have been all kinds of
"real OS companies" that wouldn't have wanted to stoop to the
level of the dirty free software hippies. Whatever. Those same
hippies could have run the tests ONCE per platform/OS combo, put
the results into /etc themselves, and then been done with it.
Then instead of testing all of that shit every time we built
something from source, we'd just drag in the pre-existing
results and go from there. It's not like the results were going
to change on us. They were a reflection of the way the kernel, C
libraries, APIs and userspace happened to work. Short of that
changing, the results wouldn't change either.
--https://rachelbythebay.com/w/2024/04/02/autoconf/
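For what it's worth, autoconf does have machinery for roughly this,
even if hardly anyone uses it: check results are cached as ac_cv_*
shell variables, and a site-wide config.site file (or a cache file
passed with ./configure --cache-file=FILE, or -C for a local
config.cache) can pre-seed them so the probes are skipped. A rough
sketch of what a distributor could ship -- the specific variable
names below are only illustrative; the real ones come from an
earlier run's config.cache:

    # /usr/local/share/config.site (or point CONFIG_SITE at it):
    # sourced by autoconf-generated ./configure scripts.  Any
    # pre-set ac_cv_* value is taken as the answer and the
    # corresponding test is not re-run.
    ac_cv_header_stdlib_h=yes
    ac_cv_c_bigendian=no
    ac_cv_func_malloc_0_nonnull=yes

So in principle you run the tests once per platform, drop the
results where configure can find them, and subsequent builds reuse
them -- which is more or less the "small number of possible values,
set for you somewhere" the post is asking for.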
Alexis.