On Feb 12, 2020, at 3:05 PM, Larry McVoy
<lm(a)mcvoy.com> wrote:
On Wed, Feb 12, 2020 at 04:45:54PM -0600, Charles H Sauer wrote:
If I recall correctly:
- all doctoral candidates ended up taking two semesters of numerical
analysis. I still have the two-volume numerical analysis text in the
attic (orange, but not "burnt orange", IIRC).
- numerical analysis was covered on the doctoral qualifying exam.
Pretty sure Madison required that stuff for master's degrees. Or maybe
undergrad; I feel like I took that stuff pretty early on.
I'm very systems oriented, so I can't imagine I would have taken that
willingly. I hate the whole idea of floating point; it just seems so
error prone.
David Goldberg's article "What every computer scientist should know
about floating-point arithmetic" is a good one to read:
https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf
I still have Bill Press's Numerical Recipes book, though I haven't
opened it recently (as in not since the '80s)!
It is interesting that older languages such as Lisp & APL have a
built-in concept of tolerance. In those languages, 0.3 < 0.1 + 0.2 is
false. Mathematica also does significance tracking. Tolerance is
probably the single most egregious omission from IEEE FP, and
unfortunately it was not corrected in the new standard.
Check Steve Richfield's old posts on Usenet for more.
--Toby
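
For concreteness, a rough C sketch of what such a tolerant comparison
might look like (the relative tolerance of 1e-10 is an arbitrary pick
for illustration, not the default APL or Lisp actually uses):

    #include <stdio.h>
    #include <math.h>

    /* "a < b" only if b exceeds a by more than a relative tolerance,
       loosely modelled on APL's comparison tolerance. */
    static int tolerant_lt(double a, double b, double tol)
    {
        return (b - a) > tol * fmax(fabs(a), fabs(b));
    }

    int main(void)
    {
        /* prints 0: 0.3 and 0.1 + 0.2 differ by far less than the
           tolerance, so they are treated as equal rather than ordered */
        printf("%d\n", tolerant_lt(0.3, 0.1 + 0.2, 1e-10));
        return 0;
    }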
In most modern languages it is true! This is so because 0.1 + 0.2
evaluates to 0.30000000000000004. In Fortran you'd write something like
abs(0.3 - (0.1 + 0.2)) > tolerance. You can do the same in C
etc., but for some reason it seems to be uncommon :-)
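
For example, a quick C sketch of that kind of check (the 1e-9 tolerance
is just an arbitrary value for illustration):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double sum = 0.1 + 0.2;
        printf("%.17g\n", sum);                  /* 0.30000000000000004 */
        printf("%d\n", sum == 0.3);              /* 0: exact comparison fails */
        printf("%d\n", fabs(sum - 0.3) < 1e-9);  /* 1: within tolerance */
        return 0;
    }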