A bit about my background:  The first machine I programmed, ca. 1970 in high school, was a Monroe programmable calculator with four registers and a maximum of 40 program steps.  The most complicated thing I got it to do was to iteratively calculate factorials (of course it hit arithmetic overflow pretty quickly).  As an undergrad Biology major I learned BASIC on DTSS and later Fortran IV and PL/I for an S/360 model 25 (32K max memory).  I decided to go into computer technology rather than Biology and attended CS grad school.  I completed all the coursework but never finished my thesis.  After interning at IBM's Cambridge Scientific Center for a few years, I joined DEC's software development tools group in early 1980, later joining the GEM compiler team to design and implement the non-compiler parts of the compiler back end (heap memory management, generating listing files, command line parsing, object file generation, etc.).  Compaq acquired DEC and about a year later sold the Alpha chip technology--including the GEM optimizing compiler back end--to Intel.  Many of the engineers--including me--went with it.  I retired from Intel in 2016.
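
That calculator program is long gone, but for a sense of how quickly the overflow hits, here is a minimal C sketch of the same iterative calculation (the C rendering is mine, not the original 40-step program); a 32-bit integer already gives out at 13!.

    #include <stdio.h>
    #include <limits.h>

    /* Iterative factorial, roughly the shape of that calculator program.
       A 32-bit signed int holds 12! = 479,001,600 but not 13!, so the
       overflow shows up almost immediately. */
    int main(void)
    {
        long long fact = 1;   /* wide enough to show where 32 bits give out */
        for (int n = 1; n <= 15; n++) {
            fact *= n;
            printf("%2d! = %lld%s\n", n, fact,
                   fact > INT_MAX ? "   <-- no longer fits in 32 bits" : "");
        }
        return 0;
    }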

One important thing I learned early on in my career at DEC is that there is a big difference between Computer Science and Software Engineering.  I was very lucky that many of the top engineers in DEC's compiler group had studied at CMU--one of the few schools that taught SW Engineering skills as well as CS.  I learned good SW engineering practices from the get-go.  Unlike CS programming, SW engineering has to worry about things such as:

o design and implementation for testability and maintainability (see the sketch after this list)

o test system development

o commenting and documentation so that others can pick up and maintain your code

o algorithm scalability
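
To make the first item concrete, here is a minimal sketch in C (a hypothetical example of mine, not code from any DEC project) of designing for testability: keep the decision logic in a small pure function with no I/O or global state, so a test driver can exercise it directly.

    #include <assert.h>
    #include <string.h>

    /* Pure decision logic: does a given option take an argument?
       No I/O, no global state, so it is trivial to unit-test and to
       maintain.  (Hypothetical option names, not GEM code.) */
    static int option_takes_argument(const char *opt)
    {
        return strcmp(opt, "-o") == 0 || strcmp(opt, "-I") == 0;
    }

    /* A tiny test driver.  A real test system would be table-driven and
       report failures rather than assert, but the separation of logic
       from I/O is the point here. */
    int main(void)
    {
        assert(option_takes_argument("-o"));
        assert(option_takes_argument("-I"));
        assert(!option_takes_argument("-c"));
        return 0;
    }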

This thread has spent a lot of time discussing how programming has changed over the years.  I bring the SW Engineering skill set up because IMO it's just as relevant today as it was in the past.  Perhaps even more so.

My observation is that programming style has changed in response to hardware getting faster and memory capacity getting larger.  If your program has to fit into 8K or 32K you have to make every byte count--often at the expense of maintainability and algorithmic efficiency.  As machines got larger and faster, programming for small size became less and less important.

The first revolution along these lines was the switch from writing in machine code (assembler) to higher-level languages.  Machines had become fast enough that in general it didn't matter if the compiler didn't generate the most efficient code possible.  In most cases the increase in productivity (HLLs are less error-prone than assembler) and maintainability more than made up for less efficient code.

The second revolution was structured programming.  Machines had become fast enough and large enough that one didn't have to resort to rat's nest coding to make the program small and fast enough to be useful.  Structured programming made code more easily understood--both by humans and by optimizing compilers.
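
A small, contrived C illustration of the difference (my example, not code from that era): the first version stitches its control flow together with gotos, rat's-nest style, while the second expresses the same computation in a structured form that both a human reader and an optimizer can follow.

    #include <stdio.h>

    /* "Rat's nest" style: control flow stitched together with gotos. */
    static int sum_rats_nest(const int *a, int n)
    {
        int i = 0, s = 0;
    loop:
        if (i >= n) goto done;
        s += a[i];
        i++;
        goto loop;
    done:
        return s;
    }

    /* Structured style: the same computation as a single for loop. */
    static int sum_structured(const int *a, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void)
    {
        int a[] = { 1, 2, 3, 4 };
        printf("%d %d\n", sum_rats_nest(a, 4), sum_structured(a, 4));
        return 0;
    }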

These days we have machines with several levels of data caching and multiple processor cores running asynchronously.  If (as in the HPTC world) you want to get the maximum performance out of the hardware, you have to worry about structuring your program so that it can be multitasked (to take advantage of all those cores), and you have to worry about efficient cache management and interprocessor communication.  The ways to do this are not always intuitively obvious.  Modern optimizing compilers know all the (often completely non-intuitive) efficiency rules, and they apply them best when you write your code in an algorithmically clean manner and leave the grunt work of running it efficiently on the hardware to them.  It's a very different world than when you had to figure out how to fit your code and data into 8K!
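
As a minimal sketch of what I mean--my example, using a plain C loop and a standard OpenMP directive, not any particular production code--write the computation in its simplest form and let the compiler vectorize it and the OpenMP runtime spread it across the cores:

    #include <stdio.h>

    #define N 1000000

    /* A plain, algorithmically clean dot product.  Written this way the
       compiler is free to vectorize the loop, and the OpenMP directive
       lets the runtime split it across the available cores; there are no
       hand-tuned cache or register tricks in the source. */
    static double dot(const double *x, const double *y, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += x[i] * y[i];
        return sum;
    }

    int main(void)
    {
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
        printf("%f\n", dot(x, y, N));
        return 0;
    }

Build with OpenMP enabled (e.g. -fopenmp on GCC or Clang) to get the parallel version; without it the pragma is simply ignored and the loop still runs correctly, just serially.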

-Paul W.