I think it's implicit in fooling around with slide rules. Everything
is logarithmic, therefore imprecise beyond a certain level (floating
point numbers or iteration, it's the same problem). You learn to
approximate, within a certain level of confidence (or diffidence :).
(FWVLIW - I bought a Dover book on slide rules in the late 70s while at
high school, and shortly after, a real slide rule, and it's stuck with
me.)
Wesley Parish
On 2/13/20, Clem Cole <clemc(a)ccc.com> wrote:
On Tue, Feb 11, 2020 at 10:01 PM Larry McVoy <lm(a)mcvoy.com> wrote:
What little Fortran background I have suggests that the difference
might be mindset. Fortran programmers are formally trained (at least I
was; there was a whole semester devoted to this) in accumulated errors.
You did a deep dive into how to code stuff so that the error was
reduced each time instead of increased. It has a lot to do with how
floating point works; it's not exact like integers are.
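
As an illustration of the kind of accumulated error being described
here (the example below is my own sketch, not anything from Larry's
course or this thread): repeatedly adding a value like 0.1, which has
no exact binary representation, lets a little rounding error pile up
on every add, while a compensated technique such as Kahan summation
carries the lost low-order bits forward and keeps the total honest.

/*
 * Naive vs. Kahan (compensated) summation of 0.1f, ten million times.
 * 0.1 is not exactly representable in binary floating point, so each
 * single-precision add loses a little; the compensation variable c
 * recovers those lost low-order bits on the next pass.
 */
#include <stdio.h>

int main(void)
{
    const int n = 10000000;
    float naive = 0.0f;
    float kahan = 0.0f, c = 0.0f;
    float x, y, t;
    int i;

    for (i = 0; i < n; i++) {
        x = 0.1f;

        naive += x;              /* rounding error accumulates */

        y = x - c;               /* re-inject what was lost last time */
        t = kahan + y;
        c = (t - kahan) - y;     /* capture this add's rounding error */
        kahan = t;
    }

    printf("exact : %.1f\n", (double)n * 0.1);
    printf("naive : %f\n", naive);
    printf("kahan : %f\n", kahan);
    return 0;
}

Built with default compiler settings (not -ffast-math, which would let
the compiler optimize the compensation away), the naive total drifts
visibly away from the exact 1,000,000 on a typical IEEE-754 system,
while the compensated total lands on or very near it.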
Just a thought, but it might also be the training. My Dad (a
mathematician and 'computer') passed a few years ago; I'd love to have
asked him. But I suspect that when he and his peeps were doing this
with a slide rule or at best a Friden mechanical adding machine, they
were acutely aware of how errors accumulated (or didn't). When they
started to convert their processes/techniques to Fortran in the early
1960s, I agree with you: I think they were conscious of what they were
doing. I'm not sure modern CS types are taught the same things that
would be taught in a course run by a pure scientist who cares the way
folks like our mothers and fathers did in the 1950s and '60s.