It would be nice if programming languages provided some support for such exhaustive
testing [1].
At one point I had suggested turning Go's interface types into something like Guttag-style
abstract data types, in that the relevant axioms are specified right in the interface
definition. The idea was that any concrete type implementing that interface must
satisfy its axioms. Even if the compiler ignored these axioms, one could write a support
program that generates a comprehensive set of tests based on them. [Right now a
type "implementing" an interface only needs to have a set of methods that
exactly match the interface methods, but nothing more.] The underlying idea is that each
type is in essence a constraint on what values an instance of that type can take, so
adding such axioms simply tightens (& documents) these constraints. Just the process
of coming up with such axioms can improve the design (sort of like test-driven design but
better!).
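To make the idea concrete, here is a minimal Go sketch of what such a generated test
program might do. All names here are hypothetical: Go has no syntax for attaching axioms
to an interface, so the axioms live in a comment and a hand-written checker plays the
role of the support program that exhaustively tests them.

```go
package main

import "fmt"

// Guttag-style axioms for a stack (informal, in a comment, since Go
// has no axiom syntax):
//   1. New().Empty() == true
//   2. s.Push(x).Empty() == false
//   3. s.Push(x).Top() == x
//   4. s.Push(x).Pop() == s   (push then pop restores the stack)
type Stack interface {
	Push(x int) Stack
	Pop() Stack
	Top() int
	Empty() bool
}

// sliceStack is one concrete type "implementing" the interface; the
// checker below tests whether it also satisfies the axioms.
type sliceStack []int

func New() Stack                 { return sliceStack(nil) }
func (s sliceStack) Empty() bool { return len(s) == 0 }
func (s sliceStack) Top() int    { return s[len(s)-1] }
func (s sliceStack) Pop() Stack  { return s[:len(s)-1] }
func (s sliceStack) Push(x int) Stack {
	t := make(sliceStack, len(s), len(s)+1) // copy so Push is persistent
	copy(t, s)
	return append(t, x)
}

// checkAxioms exhaustively verifies axioms 2-4 on every stack reachable
// from s by at most depth pushes of the values in vals.
func checkAxioms(s sliceStack, depth int, vals []int) bool {
	for _, x := range vals {
		t := s.Push(x).(sliceStack)
		if t.Empty() || t.Top() != x { // axioms 2 and 3
			return false
		}
		u := t.Pop().(sliceStack) // axiom 4: u must equal s
		if len(u) != len(s) {
			return false
		}
		for i := range s {
			if u[i] != s[i] {
				return false
			}
		}
		if depth > 1 && !checkAxioms(t, depth-1, vals) {
			return false
		}
	}
	return true
}

func main() {
	ok := New().Empty() && // axiom 1
		checkAxioms(sliceStack(nil), 4, []int{0, 1}) // axioms 2-4
	fmt.Println(ok) // true
}
```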
Now it may be that applying this to anything more complex than stacks won't work well,
& it won't be perfect, but I thought it was worth experimenting with. This would
be like functional testing of all the nuts and bolts and components that go into an
airplane. The airplane may still fall apart, but that would be a "composition"
error!
[1] There are "proof assistant" or formal-spec languages such as TLA+, Coq,
Isabelle etc., but they don't get used much by the average programmer. I want
something more retail!
On May 23, 2024, at 1:52 PM, Rob Pike
<robpike(a)gmail.com> wrote:
The semantic distinction is important but the end result is very similar.
"Fuzzing" as it is now called (for no reason I can intuit) tries to get to the
troublesome cases faster by a sort of depth-first search, but exhaustive will always beat
it for value. Our exhaustive tester for bitblt, first done by John Reiser if I remember
right, set the stage for my own thinking about how you properly test something.
-rob
On Thu, May 23, 2024 at 11:49 PM Douglas McIlroy <douglas.mcilroy(a)dartmouth.edu> wrote:
> > Doug McIlroy was generating random regular expressions
>
> Actually not. I exhaustively (within limits) tested an RE recognizer without
knowingly generating any RE either mechanically or by hand.
>
> The trick: From recursive equations (easily derived from the grammar of REs), I
counted how many REs exist up to various limits on token counts. Then I generated all
strings that satisfied those limits, turned the recognizer loose on them, and counted how
many it accepted. Any disagreement of counts revealed the existence (but not any symptom)
of bugs.
>
> Unlike most diagnostic techniques, this scheme produces a certificate of (very high
odds on) correctness over a representative subdomain. The scheme also agnostically checks
behavior on bad inputs as well as good. It does not, however, provide a stress test of a
recognizer's capacity limits. And its exponential nature limits its applicability to
rather small domains. (REs have only 5 distinct kinds of token.)
>
> Doug
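Doug's counting scheme above can be sketched in miniature. This is my illustrative
reconstruction, not his original code: a toy RE grammar over five tokens, counting
equations derived from that grammar, and a naive backtracking parser standing in for the
independently written recognizer under test. Any disagreement between the counted and
accepted totals would reveal a bug in one or the other.

```go
package main

import "fmt"

// Toy RE grammar over five tokens: 'a' '*' '|' '(' ')'.
//   E -> T | T'|'E      (alternation)
//   T -> F | F T        (concatenation)
//   F -> A | F'*'       (closure)
//   A -> 'a' | '('E')'
// The grammar is unambiguous, so counting derivations counts strings.

// Counting equations, derived directly from the grammar:
// the number of REs of exactly n tokens derivable from each nonterminal.
func countA(n int) int {
	c := 0
	if n == 1 {
		c++ // 'a'
	}
	if n >= 3 {
		c += countE(n - 2) // '(' E ')'
	}
	return c
}
func countF(n int) int {
	c := countA(n)
	if n >= 2 {
		c += countF(n - 1) // F '*'
	}
	return c
}
func countT(n int) int {
	c := countF(n)
	for k := 1; k < n; k++ {
		c += countF(k) * countT(n-k) // F T
	}
	return c
}
func countE(n int) int {
	c := countT(n)
	for k := 1; k <= n-2; k++ {
		c += countT(k) * countE(n-1-k) // T '|' E
	}
	return c
}

// The recognizer under test: a naive backtracking parser.
func matchE(s string) bool {
	if matchT(s) {
		return true
	}
	for i := 1; i+1 < len(s); i++ {
		if s[i] == '|' && matchT(s[:i]) && matchE(s[i+1:]) {
			return true
		}
	}
	return false
}
func matchT(s string) bool {
	if matchF(s) {
		return true
	}
	for i := 1; i < len(s); i++ {
		if matchF(s[:i]) && matchT(s[i:]) {
			return true
		}
	}
	return false
}
func matchF(s string) bool {
	return matchA(s) ||
		(len(s) >= 2 && s[len(s)-1] == '*' && matchF(s[:len(s)-1]))
}
func matchA(s string) bool {
	return s == "a" ||
		(len(s) >= 3 && s[0] == '(' && s[len(s)-1] == ')' && matchE(s[1:len(s)-1]))
}

// accepted enumerates all 5^n token strings of length n and counts
// how many the recognizer accepts.
func accepted(n int) int {
	tokens := []byte{'a', '*', '|', '(', ')'}
	count := 0
	buf := make([]byte, n)
	var gen func(i int)
	gen = func(i int) {
		if i == n {
			if matchE(string(buf)) {
				count++
			}
			return
		}
		for _, t := range tokens {
			buf[i] = t
			gen(i + 1)
		}
	}
	gen(0)
	return count
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("n=%d counted=%d accepted=%d\n", n, countE(n), accepted(n))
	}
}
```

The counts agree here; planting a bug in either the recognizer or the equations makes
the columns diverge, without ever writing a single RE by hand.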