On Tuesday, May 21st, 2024 at 9:59 AM, Paul Winalski <paul.winalski(a)gmail.com>
wrote:
On Tue, May 21, 2024 at 12:09 AM Serissa
<stewart(a)serissa.com> wrote:
> Well, this is obviously a hot-button topic.
AFAIK I was nearby when fuzz-testing for software was invented. I was the main advocate
for hiring Andy Payne into the Digital Cambridge Research Lab. One of his little projects
was a thing that generated random but correct C programs and fed them to different
compilers or compilers with different switches to see if they crashed or generated
incorrect results. Overnight, his tester filed 300 or so bug reports against the Digital C
compiler. This was met with substantial pushback, though much of the issue was that many
of the reports traced to the same underlying bugs.
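The technique Payne used is what later came to be called differential testing: generate
random but valid inputs and compare independent implementations against each other. A
minimal sketch of the idea in Python, with randomly generated arithmetic expressions
standing in for C programs and two independent evaluators standing in for compilers (all
names here are illustrative, not from the original tool):

```python
import ast
import operator
import random

OPS = ["+", "-", "*"]

def gen_expr(depth=0):
    # Generate a random but well-formed ("random but correct") expression,
    # analogous to generating syntactically valid C programs.
    if depth > 4 or random.random() < 0.3:
        return str(random.randint(0, 9))
    return f"({gen_expr(depth + 1)} {random.choice(OPS)} {gen_expr(depth + 1)})"

def eval_reference(expr):
    # "Compiler A": Python's own parser and evaluator.
    return eval(expr)

def eval_alternate(expr):
    # "Compiler B": an independent recursive walk over the parsed AST.
    ops = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
    def walk(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unexpected node")
    return walk(ast.parse(expr, mode="eval").body)

def differential_test(trials=1000):
    # Any input on which the two implementations disagree is a bug report
    # against at least one of them.
    mismatches = []
    for _ in range(trials):
        e = gen_expr()
        if eval_reference(e) != eval_alternate(e):
            mismatches.append(e)
    return mismatches
```

The payoff is that no oracle for "the right answer" is needed: disagreement between the
two implementations is itself the bug signal.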
>
> Bill McKeeman expanded the technique and published "Differential Testing of
Software"
https://www.cs.swarthmore.edu/~bylvisa1/cs97/f13/Papers/DifferentialTesting…
In the mid-to-late 1980s Bill McKeeman worked with DEC's compiler product teams to
introduce fuzz testing into our testing process. As with the C compiler work at DEC
Cambridge, fuzz testing for other compilers (Fortran, PL/I) also found large numbers of
bugs.
The pushback from the compiler folks was mainly a matter of priorities. Fuzz testing is
very adept at finding edge conditions, but most failing fuzz tests have syntax that no
human programmer would ever write. As a compiler engineer you have limited time to devote
to bug fixing. Do you spend that time addressing real customer issues that have been
reported, or do you spend it fixing problems in code that no human being would ever
write? To take an example that really happened: a fuzz test consisting of 100 nested
parentheses caused an overflow in a parser table (which could only handle 50 nested
parens). Is that worth fixing?
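That anecdote is easy to reproduce in miniature. A hypothetical sketch (the depth limit
of 50 is taken from the anecdote; the parser itself is purely illustrative), assuming a
recursive parser with fixed capacity:

```python
MAX_DEPTH = 50  # fixed-capacity "parser table", as in the DEC anecdote

def parse_parens(s, i=0, depth=0):
    # Minimal recursive parser for nested parentheses with a hard depth
    # limit, illustrating how a fixed-size table overflows on inputs that
    # no human programmer would write.
    if depth > MAX_DEPTH:
        raise OverflowError("parser table overflow")
    if i < len(s) and s[i] == "(":
        i = parse_parens(s, i + 1, depth + 1)
        if i >= len(s) or s[i] != ")":
            raise SyntaxError("unbalanced parentheses")
        return i + 1
    return i

# 40 levels of nesting parse fine; a fuzz input of 100 levels overflows.
```

A human-written program never hits the limit, which is exactly why the failure only
surfaced under fuzzing.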
As you pointed out, fuzz test failures tend to occur in clusters, and many of the failures
are eventually traced to the same underlying bug. That leads to the counter-argument to
the pushback: the fuzz tests are finding real underlying bugs, so why not fix them before
a customer runs into them? That very thing did happen several times: a customer-reported
bug was fixed, and suddenly several of the fuzz test problems that had been reported went
away.
Another consideration is that, even back in the 1980s, humans weren't the only ones
writing programs. There were programs writing programs and they sometimes produced bizarre
(but syntactically correct) code.
-Paul W.
A happy medium could be including far-out fuzzing to characterize issues, but not
necessarily sinking resources into immediately resolving the bizarre discoveries from
the fuzzing. Better to know than not, but also have the wisdom to distinguish "is
someone actually going to trip this?" from "this is something that is possible and
good to document." In my own work we have several of the latter, where something is
almost guaranteed never to happen through human interaction, but is also something we
want documented somewhere, so that if unlikely problem <xyz> ever does happen, the
discovery work is already done and we can just start plotting out a solution. That's
also some nice low-hanging fruit to pluck when there isn't much else going on, and it
avoids the phenomenon where we sink critical time into bugfixes with a microscopic ROI.
- Matt G.