On Tue, Jul 9, 2024 at 1:35 PM Paul Winalski <paul.winalski(a)gmail.com> wrote:
> I don't know of any OSes that use floating point.
I've seen it. Not for numeric computations, primarily, but for fast
data transfer a la `memcpy` using SIMD instructions in the FPU. In
the BSD kernels circa 4.3, load average calculation used a modicum of
FP (multiplication as doubles). Oh, and I've seen some amount of FP
hardware used for computing checksums.
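The load average calculation, as I remember it, is just an
exponentially decaying average kept in doubles. Very roughly, something
like this (simplified from memory; the decay constants assume a
5-second sampling interval, and the names are mine, not the exact
kernel source):

    /*
     * Sketch of a 4.3BSD-style load average update using doubles.
     * The decay factors are exp(-5/60), exp(-5/300), exp(-5/900):
     * 1-, 5-, and 15-minute averages sampled every 5 seconds.
     * Simplified; not the actual kernel code.
     */
    static double avenrun[3];      /* 1-, 5-, and 15-minute load averages */

    void
    loadav(int nrun)               /* nrun: number of runnable processes */
    {
        static const double cexp[3] = {
            0.9200444146293232,    /* exp(-5/60)  */
            0.9834714538216174,    /* exp(-5/300) */
            0.9944598480048967,    /* exp(-5/900) */
        };

        for (int i = 0; i < 3; i++)
            avenrun[i] = avenrun[i] * cexp[i] + nrun * (1.0 - cexp[i]);
    }

A handful of multiplies per sample, but it is genuine floating point
in the kernel.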
Of course, the OS must save and restore the state of the FPU on a
context switch, but that's rather obviously not what you meant; I only
mention it for completeness.
> But the IBM operating systems for S/360/370 did use packed decimal
> instructions in a few places. This was an issue for the System/360
> model 44. The model 44 was essentially a model 40 but with the (much
> faster) model 65's floating point hardware. It was intended as a
> reduced-cost high-performance technical computing machine for small
> research outfits.
>
> To keep the cost down, the model 44 lacked the packed decimal
> arithmetic instructions, which of course are not needed in HPTC. But
> that meant that off-the-shelf OS/360 would not run on the 44. It had
> its own OS called PS/44.
>
> IIRC VAX/VMS ran into similar issues when the MicroVAX architecture
> was adopted. To save on chip real estate, MicroVAX did not implement
> packed decimal, the complicated character string instructions,
> H-floating point, and some other exotica (such as CRC) in hardware.
> They were emulated by the OS. For performance reasons it behooved one
> to avoid those data types and instructions on later VAXen.
>
> I once traced a severe performance problem to a subroutine where
> there were only a few instructions that weren't generating emulator
> faults. The culprit was the oddball conversion semantics of PL/I,
> which caused what should have been D-float arithmetic to be done in
> 15-digit packed decimal. Once I fixed that, the program ran 100 times
> faster.
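For anyone who hasn't bumped into it, the mechanism behind those
emulator faults is conceptually simple: the unimplemented opcode
traps, the OS decodes it, does the work in software, bumps the PC past
the instruction, and resumes the program. Very roughly (every name and
structure here is made up for illustration; this is not the actual VMS
emulator or any real kernel's code):

    #include <stdint.h>
    #include <stdbool.h>

    /* Made-up placeholder types, for illustration only. */
    struct insn { int opcode; int length; };
    struct trapframe { uint64_t pc; uint64_t regs[16]; };

    /* Stubs standing in for the real decoder and emulation routines. */
    static bool decode_insn(uint64_t pc, struct insn *out) { (void)pc; (void)out; return false; }
    static bool is_emulated_opcode(const struct insn *in) { (void)in; return false; }
    static void emulate_insn(struct trapframe *tf, const struct insn *in) { (void)tf; (void)in; }
    static void deliver_illegal_insn_signal(struct trapframe *tf) { (void)tf; }

    /* Entry point for the "reserved/unimplemented instruction" fault. */
    void
    reserved_instruction_trap(struct trapframe *tf)
    {
        struct insn in;

        if (decode_insn(tf->pc, &in) && is_emulated_opcode(&in)) {
            emulate_insn(tf, &in);            /* do the operation in software */
            tf->pc += (uint64_t)in.length;    /* step past the emulated instruction */
            return;                           /* resume the faulting program */
        }
        deliver_illegal_insn_signal(tf);      /* genuinely illegal instruction */
    }

Correct, but every such instruction costs a full trip through the
kernel, which is why Paul's packed-decimal-heavy subroutine crawled.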
It never ceases to amaze me how seemingly minor details can have such
outsized impact.
- Dan C.