I first encountered the fuzz-testing work of Barton Miller (Computer
Sciences Department, University of Wisconsin-Madison) and his
students and colleagues in their original paper on the subject
An empirical study of the reliability of UNIX utilities
Comm. ACM 33(12) 32--44 (December 1990)
https://doi.org/10.1145/96267.96279
which was followed by
Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services
University of Wisconsin CS TR 1268 (18 February 1995)
ftp://ftp.cs.wisc.edu/pub/techreports/1995/TR1268.pdf
and
An Empirical Study of the Robustness of MacOS Applications Using Random Testing
ACM SIGOPS Operating Systems Review 41(1) 78--86 (January 2007)
https://doi.org/10.1145/1228291.1228308
I have used their techniques and tools many times in testing my own
software, and that of others.
By chance, I found today, in Web searches on another subject, that
Miller's group has a new paper in press in the journal IEEE
Transactions on Software Engineering:
The Relevance of Classic Fuzz Testing: Have We Solved This One?
https://doi.org/10.1109/TSE.2020.3047766
https://arxiv.org/abs/2008.06537
https://ieeexplore.ieee.org/document/9309406
I track that journal at
http://www.math.utah.edu/pub/tex/bib/ieeetranssoftwengYYYY.{bib,html}
[YYYY = 1970 to 2020, by decade], but the new paper has not yet been
assigned a journal issue, so I had not seen it before today.
The Miller group's work over 33 years has examined the reliability of
common Unix tools in the face of unexpected input, and in the original
work, begun in 1988, they demonstrated a significant failure rate in
common, widely used Unix-family utilities.
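
For readers who have not tried it, the classic technique is disarmingly
simple: feed a stream of random bytes to a utility's standard input and
watch for crashes and hangs.  Here is a minimal sketch in Python of
that idea; it is only an illustration of the approach, not the
Wisconsin fuzz tools themselves, and the utility names, trial counts,
and input sizes are arbitrary placeholders:

    #!/usr/bin/env python3
    # Minimal illustration of classic fuzz testing in the Miller style:
    # pipe random bytes into a utility and record crashes (death by
    # signal) and hangs (no termination within a time limit).
    import random
    import subprocess

    def fuzz_once(command, nbytes=100_000, timeout=10):
        """Run one trial; return 'ok', 'crash', or 'hang'."""
        data = bytes(random.getrandbits(8) for _ in range(nbytes))
        try:
            proc = subprocess.run(command, input=data,
                                  stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL,
                                  timeout=timeout)
        except subprocess.TimeoutExpired:
            return "hang"
        # On POSIX systems, a negative return code means the process was
        # killed by a signal (e.g., SIGSEGV), which is what the fuzz
        # papers count as a crash.
        return "crash" if proc.returncode < 0 else "ok"

    if __name__ == "__main__":
        for utility in (["sort"], ["uniq"], ["grep", "a"]):  # placeholders
            results = [fuzz_once(utility) for _ in range(25)]
            print("%-8s %2d crashes, %2d hangs in %d trials" %
                  (utility[0], results.count("crash"),
                   results.count("hang"), len(results)))

The real studies go further, of course, varying printable versus binary
input and exercising interactive and GUI programs as well, but the core
of the method is no more than this.
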
Despite wide publicity of their first paper, things have not gotten
much better, even with the reimplementation of software tools in `safe'
languages such as Rust.
In each paper, they analyze the reasons for the exposed bugs; sadly,
much the same causes persist in their latest study, and in several
cases new instances have been introduced into code since their first
work.
The latest paper also contains mention of Plan 9, which moved
bug-prone input line editing into the window system, and of bugs in
pdftex (they say latex, but I suspect they mean pdflatex, not latex
itself: pdflatex is a macro-package layer on top of the pdftex engine,
which is in turn a modified TeX engine). The pdftex bugs are significant
to me and to my friends and colleagues in the TeX community, and to the
TeX Live 2021 production team
http://www.math.utah.edu/pub/texlive-utah/
especially because this year Don Knuth revisited TeX and Metafont and
produced new bug-fixed versions of both, plus updated anniversary
editions of his five-volume Computers & Typesetting book series. His
recent work is described in a new paper announced this morning:
The \TeX{} tuneup of 2021
TUGboat 42(1) ??--?? (February 2021)
https://tug.org/TUGboat/tb42-1/tb130knuth-tuneup21.pdf
Perhaps one or more list members might enjoy the exercise of applying
the Miller group's fuzz tests (all of which are available from an FTP
site
ftp://ftp.cs.wisc.edu/paradyn/fuzz/fuzz-2020/
as discussed in their paper) to 1970s and 1980s vintage Unix systems
that they run on software-simulated CPUs (or rarely, on real vintage
hardware).
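
For anyone who does try that, the heart of the original experiment is
just a generator of random character streams that can be fed to each
utility; a toy stand-in is easy to write on the host side, and its
output files can then be carried into the simulated system.  The sketch
below is my own illustration, with made-up option names and defaults,
and is not the actual Wisconsin fuzz generator:

    #!/usr/bin/env python3
    # Toy stand-in for a classic fuzz-input generator: write random
    # bytes (optionally printable ASCII only) to a file that can be fed
    # to a utility on stdin, for example inside a simulated vintage
    # Unix.  Options and defaults are illustrative, not those of the
    # real Wisconsin tool.
    import argparse
    import random

    def generate(nbytes, printable_only):
        if printable_only:
            # Printable ASCII plus newlines, so line-oriented tools
            # still see line boundaries.
            alphabet = bytes(range(0x20, 0x7f)) + b"\n"
            return bytes(random.choice(alphabet) for _ in range(nbytes))
        return bytes(random.getrandbits(8) for _ in range(nbytes))

    if __name__ == "__main__":
        ap = argparse.ArgumentParser(description="toy fuzz-input generator")
        ap.add_argument("-n", type=int, default=100_000,
                        help="number of bytes to generate")
        ap.add_argument("-p", action="store_true",
                        help="restrict output to printable ASCII")
        ap.add_argument("outfile", help="file to write")
        args = ap.parse_args()
        with open(args.outfile, "wb") as f:
            f.write(generate(args.n, args.p))

With the script saved as, say, fuzzgen.py, a command like
"python3 fuzzgen.py -p -n 65536 input.dat" produces a test file whose
effect on vintage utilities can then be compared with the behavior of
their modern descendants discussed in the new paper.
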
The Unix tools of those decades were generally much smaller (in lines
of code), and most were written by the expert Unix pioneers at Bell
Labs. It would be of interest to compare the tool failure rates in
vintage Unix with tool versions offered by commercial distributions,
the GNU Project, and the FreeBSD community, all of which are treated
in the 2021 paper.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------