On Wed, Feb 12, 2020, 11:13 AM Clem Cole <clemc(a)ccc.com> wrote:
On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm(a)mcvoy.com> wrote:
What little Fortran background I have suggests that the difference
might be mindset. Fortran programmers are formally trained (at least I
was; there was a whole semester devoted to this) in accumulated errors.
You did a deep dive into how to code things so that the error shrank
with each step instead of growing. It has a lot to do with how floating
point works: it's not exact the way integers are.
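
To make that concrete, here is a minimal sketch in C (my illustration,
not anything from the thread) contrasting a naive running sum with
Kahan's compensated summation, one classic technique for keeping the
accumulated rounding error from growing with every add:

    #include <stdio.h>

    /* Naive running sum: each add can discard low-order bits of the
     * addend, and those losses accumulate across iterations. */
    float naive_sum(const float *x, int n)
    {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    /* Kahan compensated summation: capture the rounding error of each
     * add in c and feed it back into the next term. */
    float kahan_sum(const float *x, int n)
    {
        float s = 0.0f, c = 0.0f;
        for (int i = 0; i < n; i++) {
            float y = x[i] - c;  /* apply the correction from last time */
            float t = s + y;     /* big + small: low bits of y are lost */
            c = (t - s) - y;     /* recover (negated) what was just lost */
            s = t;
        }
        return s;
    }

    int main(void)
    {
        enum { N = 1000000 };
        static float x[N];
        for (int i = 0; i < N; i++)
            x[i] = 0.1f;         /* 0.1 is not exactly representable */

        printf("naive: %f\n", naive_sum(x, N)); /* drifts well off 100000 */
        printf("kahan: %f\n", kahan_sum(x, N)); /* stays close to 100000 */
        return 0;
    }

(Build without -ffast-math, or the compiler may "simplify" the
compensation away.) The error in the naive loop grows roughly with n;
Kahan's trick keeps it bounded, which is the kind of "reduced each time
instead of increased" discipline being described.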
Just a thought, but it might also be the training. My Dad (a
mathematician and 'computer') passed away a few years ago; I'd love to
have asked him. But I suspect that when he and his peeps were doing
this with a slide rule, or at best a Friden mechanical adding machine,
they were acutely aware of how errors accumulated or not. When they
started to convert their processes and techniques to Fortran in the
early 1960s, I agree with you: I think they were conscious of what they
were doing. I'm not sure modern CS types are taught the same things
that would be taught in a course run by a pure scientist who cares the
way folks like our mothers and fathers did in the 1950s and '60s.
Most CS types barely know that 2.234 might not be an exact number when
converted to binary... A few, however, can do sophisticated analysis of
the average ULP of complex functions over the expected range.
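
The first point, at least, is easy to demonstrate; a minimal sketch in
C (my example, not anyone's coursework):

    #include <stdio.h>

    int main(void)
    {
        /* 2.234 has no finite binary expansion, so the nearest
         * representable double is stored instead.  Asking for more
         * digits than the literal had exposes the difference. */
        double d = 2.234;
        printf("%.20f\n", d);   /* prints 2.23399999999999998... */
        return 0;
    }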
Warner