On 6/9/2017 5:21 PM, Steve Johnson wrote:
C's conventions provided a big improvement in code correctness with a
small cost when compared to assembler.
Having started with MACRO-10 on a PDP-10 in high school, I was an
assembler snob for a number of years when I first started programming,
and I was reluctant to consider C as an alternative.
At one point early on, I decided I would see what the difference was in
actual performance. So, on an IBM AT, I wrote two programs. Both did a
wire-frame rotation of a 3D cube, rotating around both the X and Y axes.
They both used my own graphics library, written in assembler, which used
the basic BIOS graphics calls to draw lines in 3D.
One program was written in straight Intel 80286 assembler. The other was
written in C, using C86 from Computer Innovations, Inc.
The straight assembler version I wrote using double-word integers: one
word for everything above the decimal point, one word for everything
below it. Now, because of the loss of precision, and because I defined
the rotation matrices as static integers, after a few hundred iterations
the cube would start to deform, but it was a proof of concept.
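The idea looks roughly like this if you sketch it in modern C instead of
the original assembler (the names and the exact 16.16 layout here are
just illustrative, not the original code):

    #include <stdio.h>

    typedef long fixed;            /* high 16 bits integer part, low 16 bits fraction */
    #define FIX_ONE   (1L << 16)
    #define TO_FIX(x) ((fixed)((x) * FIX_ONE))
    #define TO_INT(f) ((int)((f) >> 16))

    /* multiply two 16.16 numbers; the bits shifted out at the bottom are lost,
       which is where the cumulative error in the rotation came from */
    static fixed fmul(fixed a, fixed b)
    {
        return (fixed)(((long long)a * b) >> 16);
    }

    int main(void)
    {
        fixed half = TO_FIX(0.5);
        fixed ten  = TO_FIX(10);
        printf("%d\n", TO_INT(fmul(ten, half)));   /* prints 5 */
        return 0;
    }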
For the C version I used real sin() and cos(), but - and this is the
interesting part - I didn't have a floating-point coprocessor, so the
floating point was emulated in the supplied math library.
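Each rotation step in the C version looked roughly like this (a sketch
with my own names, not the original C86 code):

    #include <math.h>

    /* rotate a point about the X axis by angle a, then about the Y axis by
       angle b (angles in radians) */
    void rotate_xy(double *x, double *y, double *z, double a, double b)
    {
        double y1 =  *y * cos(a) - *z * sin(a);   /* about X */
        double z1 =  *y * sin(a) + *z * cos(a);
        double x1 =  *x * cos(b) + z1 * sin(b);   /* about Y */
        double z2 = -*x * sin(b) + z1 * cos(b);
        *x = x1;  *y = y1;  *z = z2;
    }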
The assembler version ran at a ratio of 3:4 compared to the C version,
meaning that if the assembler version did X rotations in 3 minutes, the
C version did them in 4 minutes. BUT - the C version's cube didn't
deform over time, because the accuracy was intact.
After that, I gave up on assembler except where critical. C as a concept
did what I needed it to do ;)
On topic for this conversation is the fact that arrays in C are
basically a perfect mapping to assembler. If you moved from assembler to
C like I did, there was no concern over "arrays should start at 1"; it
was just intuitive anyway.
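The mapping really is that direct: a[i] and *(a + i) are the same thing,
a base address plus an offset, which is exactly what you'd write by hand
in assembler. A made-up example (the generated code in the comment is
only approximate):

    #include <stdio.h>

    int main(void)
    {
        int a[8] = {10, 11, 12, 13, 14, 15, 16, 17};
        int i = 3;

        /* these are the same thing: a base address plus i elements */
        printf("%d %d\n", a[i], *(a + i));    /* prints 13 13 */

        /* roughly what the compiler emits, in 8086-ish terms:
             mov  bx, i
             shl  bx, 1        ; scale by sizeof(int), which was 2 on the 286
             mov  ax, [a + bx]
           an index of 0 is just the base address itself - hence arrays
           start at 0 */
        return 0;
    }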
On the other hand, if you went from some other language to C, well, I
can understand the alien-ness.
art k.