I think the astonishment is not so much that tailored specialty
floating-point libraries exist, or that programmers can squeeze
performance out of processors using hardware floating-point and/or vector
units. What impresses me is that floating point was being done at all on
processors with essentially no hardware support for it, and that those
processors ran at speeds we would now consider infinitesimally slow by
the standards of what is inside my barely modern home thermostat.
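To make "no hardware support" concrete: a software floating-point library
keeps a number as ordinary integers (sign, significand, exponent) and does
everything with integer arithmetic and shifts. Here is a toy multiply,
sketched in Python purely for illustration -- it truncates where a real
library would round, and the real thing was of course hand-written
assembly, not anything like this:

    # Toy software float: value = sign * significand * 2**exponent,
    # with the significand normalized to 24 bits, i.e. in [2**23, 2**24).
    def fp_mul(a, b):
        sa, ma, ea = a
        sb, mb, eb = b
        s = sa * sb
        m = ma * mb              # product significand, up to 48 bits
        e = ea + eb
        while m >= 1 << 24:      # renormalize back into 24 bits
            m >>= 1              # (truncating; a real library rounds)
            e += 1
        return (s, m, e)

Addition is worse: you first have to shift the smaller operand's
significand right to align the exponents, losing bits as you go.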
These hacks make me think of the "demoscene" folks who write programs for
early-'80s 8-bit home microcomputers in assembly. The goal is to squeeze
as much apparent visual performance out of a system as possible, so an
example might be scrolling characters across the screen while a wireframe
cube rotates in the background, all on a Commodore VIC-20 with 4K of RAM.
The approach is to start with a mathematically sound algorithm and then
cut every corner possible, accounting for every single timing bug and
hardware quirk. It's a fun demonstration for two or three minutes, but I
imagine that after an hour or two the numerical inaccuracies would set
in, just as described earlier in this thread.
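Doug's point below about discretization error is easy to reproduce, by
the way. A plain Euler step, f(t+dt) = f(t) + f'(t)*dt, steadily gains
energy on an orbit, while a "symplectic" scheme such as leapfrog keeps it
bounded. To be clear, leapfrog is only my guess at the general flavor of
Hamming's fix -- per Ken, the actual formula was never published. A
minimal Python sketch:

    import math

    GM = 1.0  # gravitational parameter, arbitrary units

    def accel(x, y):
        r3 = (x*x + y*y) ** 1.5
        return -GM * x / r3, -GM * y / r3

    def energy(x, y, vx, vy):
        return 0.5 * (vx*vx + vy*vy) - GM / math.hypot(x, y)

    def euler_step(x, y, vx, vy, dt):
        # the formula Doug cites: f(t+dt) = f(t) + f'(t)*dt
        ax, ay = accel(x, y)
        return x + vx*dt, y + vy*dt, vx + ax*dt, vy + ay*dt

    def leapfrog_step(x, y, vx, vy, dt):
        # kick-drift-kick leapfrog: a symplectic method, so the energy
        # error stays bounded instead of accumulating (illustrative
        # only -- not Hamming's unpublished formula)
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        return x, y, vx, vy

    # circular orbit: r = 1, v = 1, so the true energy is exactly -0.5
    se = sl = (1.0, 0.0, 0.0, 1.0)
    for _ in range(100000):
        se = euler_step(*se, 0.01)
        sl = leapfrog_step(*sl, 0.01)
    print(energy(*se))   # has drifted well away from -0.5
    print(energy(*sl))   # still very close to -0.5

Run the Euler version long enough and the ship visibly spirals outward,
which sounds a lot like the "gaining or losing energy" Ken describes.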
-Henry
On Sat, 19 Oct 2019 at 15:20, Clem Cole <clemc(a)ccc.com> wrote:
Abhinav -- it is still done today. For Intel's MKL we have a team of
programmers who specialize in writing math at the lowest levels. DEC,
CDC, Cray, and IBM did the same thing back in the day. Check out the
Intel Math Kernel Library (a.k.a. MKL) <https://software.intel.com/en-us/mkl>.
On Sat, Oct 19, 2019 at 2:34 PM Abhinav Rajagopalan <
abhinavrajagopalan(a)gmail.com> wrote:
> Forgive me for both hijacking this thread and airing my amateurish,
> gnawing concern, but how was it possible to write differential/integral
> equations at the assembly/machine level at the time, especially on
> machines such as the PDP-7, which IIRC had just 16 instructions and
> operated on mere words -- above all, how was the floating point math
> done? I surmise from some personal experience that writing mathematical
> programs is hard even now, even though there exist certain functional
> paradigms and specialised environments such as MATLAB or Mathematica.
> The complexity seems to remain the same if not greater now, given the
> vast oodles of data to handle stemming from the nature of the world.
>
> Were the numbers loaded just as words, like any other instruction, or
> were there separate coprocessors that did the number crunching? I'm
> guessing Fortran-ish kinds of implementations were done, but the
> hardware-level computation itself I just can't process.
>
> It just blows my mind, thinking backwards to those monster machines
> being loaded with trails of paper-tape instructions to play Space
> Travel. Being born in the late '90s doesn't help me either.
>
> Also, on a related note, I don't know if you've watched the interview
> <https://youtu.be/EY6q5dv_B-o> of Ken done by Brian at the Vintage
> Computer Federation 2019; there might be a few surprises lurking around
> the middle of it, when they discuss pipes and grep.
>
> Thank you!
>
> On Sat, Oct 19, 2019 at 8:11 PM Doug McIlroy <doug(a)cs.dartmouth.edu>
> wrote:
>
>> I was about to add a footnote to history about
>> how the broad interests and collegiality of
>> Bell Labs staff made Space Travel work, when
>> I saw that Ken beat me to telling how he got
>> help from another Turing Award winner.
>>
>> > while writing "space travel,"
>> > i could not get the space ship integration
>> > around a planet to keep from either gaining or
>> > losing energy due to floating point errors.
>> > i asked dick hamming if he could help. after
>> > a couple hours, he came back with a formula.
>> > i tried it and it worked perfectly. it was some
>> > weird simple double integration that self
>> > corrected for fp round off. as near as i can
>> > ascertain, the formula was never published
>> > and no one i have asked (including me) has
>> > been able to recreate it.
>>
>> If I remember correctly, the cause of Ken's
>> difficulty was not roundoff error. It
>> was discretization error in the integration
>> formula--probably f(t+dt)=f(t)+f'(t)dt.
>> Dick saw that the formula did not conserve
>> energy and found an alternative that did.
>>
>
>
> --
>
> Abhinav Rajagopalan