On Dec 12, 2022, at 7:30 PM, Rudi Blom <rudi.j.blom(a)gmail.com> wrote:
I vaguely remember having read here about 'clever code' which took into account
the time a magnetic drum needed to rotate in order to optimise access.
Similar consideration applied in the early days of unix workstations.
The Fortune 32:16 was a 5.6 MHz machine and couldn't keep up with the
1020 KB/sec data rate (17 sectors/track) of early ST412/ST506 disks. As Warner
said, one dealt with it by formatting the disk so that the logical
blocks N & N+1 (from the OS PoV) were physically more than 1 sector
apart. No clever coding needed!
The "clever" coding we did was to use all 17 sectors rather than 16
+ 1 spare as was intended. Since our first disks were only 5MB in
size, we wanted to use all the space, and the typical error rate was
much less than the 6% (one sector in 17) a dedicated spare would cost.
This complicated bad-block handling and slowed things down, since
blocks with soft read errors were copied to blocks at the very end of
the disk. I don't recall whose idea it was, but I was the
one who implemented it. I had an especially bad disk for testing on
which I used to build V7 kernels....
Similarly, I can imagine that with resource constraints you sometimes
need to be clever to get your program to fit. Of course, any such
cleverness needs extra documentation.
One has to be careful here as resource constraints change over time.
An optimization for rev N h/w can be a *pessimization* for later revs.
Even if you document code, people tend to leave "working code" alone.
I only ever programmed in user space, but even then, without lots of
comments in my code, I'd start wondering what I did after only a few
months.
Over time comments tend to turn into fake news! Always go to the
primary sources!