On 7 Sep 2022, at 02:09, Marc Donner
<marc.donner(a)gmail.com> wrote:
Having spent many formative years at IBM Research throwing (metaphorical) bombs at
mainframe systems and infiltrating the place with BSD, I have thought about this question
on and off.
I would like to augment your comments with a couple of extra observations:
UNIX was built for a particular set of users - document writers and programmers
Before UNIX the systems were industrial in design and scope. Sort of like MVS was a
locomotive - designed to be used for hauling heavy freight (acres of data entry clerks
answering 800 numbers and entering transactions). UNIX was more like cars and trucks -
designed for use by knowledge workers.
When I was a grad student I hung out with some remarkable programmers. We all observed
that learning to program was impossible without a body of code to read, study, and learn
from. The best places to learn programming in the 70s and 80s were places like MIT,
Berkeley, Bell Labs, and IBM Research ... places with an established culture of sharing
code internally and large repositories of code to read.
By the mid-1980s the Microsoft folks established the notion that software was
economically valuable. People stopped giving away source code (IBM's change in
strategy was called OCO - "Object Code Only") and it totally shocked the
software developer community by destroying the jobs for programmers at user sites.
Combine that with the mid-1980s recession and the first layoffs that programmers had ever
seen, and there came the horrified realization that the social contract between
programmers and employers did not actually exist.
We, the programmer community, woke up and committed ourselves as much as ever we could to
non-proprietary languages and tools, putting our shoulders to the OSS movement and hence
to UNIX and the layer of tools built on top of it.
Of course it helped to have some brilliant engineers like Ken, Dennis, Doug, Rob,
Michael, Stu (and and and) and brilliant writers like Brian so that the thing (UNIX) had
intellectual integrity and scope.
It took UNIX twenty to thirty years, but the economic logic of our approach put an end to
efforts to totally dominate the tech world.
=====
nygeek.net
mindthegapdialogs.com/home
Marc,
Good observations. Thank you.
I’ve never heard anyone mention that “reading large codebases” was the best way to learn
programming. Absolutely my experience as well.
If Professional Programmers aren’t doing “Programming in the Large” to provide critical
services for others, then what work are they doing?
In its first 10 years (1974-84), the future of Unix was uncertain. The formation of SUN in
1982 and other Unix-only vendors made Unix a commercial alternative, complete with support
and a help number.
At UNSW, there was a significant political battle over Unix. The manager of CSU (central
Computing Services Unit) resigned over Unix. His 35 staff later supported Unix across the
Uni.
If he’d won the battle, it’s very likely all Unix at UNSW would’ve been expunged, stopping
the networking work with Sydney University, shutting down the Unix kernel course &
dramatically slowing the spread of Unix in Australia.
Robert Elz at Melbourne Uni was later an important contributor to IP protocols and DNS.
In the 1984 BSTJ issue on Unix, there’s no mention of SUN (1982) & SUNOS, but they do
note “100,000 licenses” had been shipped, up from the 300 internal & ~600 total
licenses mentioned in the 1978 BSTJ.
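A back-of-envelope sketch of what those two license counts imply, treating them as endpoints of a trend (my arithmetic, not a figure from either BSTJ issue):

```python
# Implied growth in Unix licenses between the two BSTJ issues cited above:
# ~600 total licenses reported in 1978 vs ~100,000 shipped by 1984.
import math

licenses_1978, licenses_1984 = 600, 100_000
years = 1984 - 1978

# Compound annual growth rate and the equivalent doubling time.
annual_growth = (licenses_1984 / licenses_1978) ** (1 / years) - 1
doubling_time = years / math.log2(licenses_1984 / licenses_1978)

print(f"~{annual_growth:.0%} annual growth, doubling roughly every {doubling_time:.1f} years")
```

That works out to license numbers more than doubling every year over the six years, which fits the sense that Unix's future was firming up fast.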
While it had still not reached “cannot fail” status, Unix’s future was becoming more certain.
Today, there are 2-4 billion active smartphones and tablets - almost all of which run Unix
variants - Android and iOS. [ I’m sure other ‘platforms’ exist, but I haven’t followed the
market closely ]
There are an estimated 200M-250M “Personal Computers” in active use - around 10% of all
active devices. Even if they all run Microsoft and none are Chromebooks, MS-Windows is now
a minor player.
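The share arithmetic implied by those estimates can be checked quickly (figures are the rough estimates above, not verified market data):

```python
# Back-of-envelope arithmetic for the device-share claim above.
# Figures are the post's own rough estimates, not verified market data.

phones_low, phones_high = 2_000_000_000, 4_000_000_000  # active smartphones + tablets
pcs_low, pcs_high = 200_000_000, 250_000_000            # active "Personal Computers"

# PC share of active devices, ignoring servers and embedded systems.
share_high = pcs_high / (pcs_high + phones_low)   # most favourable to PCs
share_low = pcs_low / (pcs_low + phones_high)     # least favourable to PCs

print(f"PC share of active devices: {share_low:.0%} to {share_high:.0%}")
```

The range comes out at roughly 5-11%, consistent with the ~10% figure.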
I’ve no idea how big the fleet of servers powering “The Cloud” in Datacentres is, but
inferring from power consumption, it’s measured in millions.
Only MS-Azure provides Windows instances, and even then it runs on top of a hypervisor, not
bare metal. Is MS Hyper-V derived from a Unix variant? If not, it was certainly influenced
by VMware & Xen, which were.
To a first approximation, 90%+ of ‘computers’ now run a Unix variant. [ disregarding
the larger fleet of embedded devices, especially in cars ]
As you say, UNIX & its variants broke the monopoly / lock-in of software by hardware
vendors.
The timing of Unix displacing hardware-enforced “Software Silos” wasn’t accidental.
[ A notable beneficiary of breaking Silos is Oracle - their early promise was database
“portability”. ]
It falls directly out of “Moore’s Law” and “Bell's Law of Computer Classes”.
The PDP-11 ‘regressed’ to 16-bits compared to the IBM 360’s 32-bits:
Bell’s Law in action - a new, much cheaper, lower performance “class” appearing each
decade.
In 1977, UNSW acquired a PDP-11/70 for teaching that was 1/10th the price of the IBM
360/50 that’d been purchased in 1966. [11/70 was in service in April 1978 - hardware
delays]
This 11/70 provided at least the same raw performance as the 360/50, but had ~50 2400-baud
terminals attached, not cards + printer.
It was much more effective for learning/teaching and provided much higher “useful
throughput” than the IBM batch system. Certainly with VDUs, much less paper was wasted
:)
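Treating those two purchases as a single data point on the cost curve (my arithmetic, not a figure from the text), the implied cadence is:

```python
# Implied price/performance improvement from the figures above:
# same raw performance at 1/10th the price, 1966 (360/50) to 1977 (11/70).
import math

price_ratio = 10         # 360/50 cost relative to the PDP-11/70
years = 1977 - 1966      # 11 years between the two purchases

# Doubling time consistent with a 10x improvement over 11 years.
doubling_time = years / math.log2(price_ratio)

print(f"Implied price/performance doubling every {doubling_time:.1f} years")
```

A doubling every ~3.3 years is slower than the classic Moore’s Law cadence, but it’s the right order of magnitude for Bell’s Law: a new, much cheaper class becomes viable roughly each decade.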
DEC and others first leveraged the cheaper, faster silicon transistors to build bipolar
discrete-part machines:
e.g. the PDP-7 co-opted for “Space Travel” by Ken in 1969.
DTL became TTL, digital ICs grew larger, cheaper, faster - with “minicomputer”
manufacturers rolling out new models at ever better price-points, more rapidly than
‘traditional’ mainframe vendors.
Minicomputers adopted “chipsets” to implement the CPU, leading in a few years to
single-chip “Microprocessors”, often with co-processors for expensive operations, like
floating point.
The invention of RISC led to a whole new class of minicomputers with single-chip
CPUs and a new class of system:
the Graphical Workstation [ SUN & SGI ] - not quite Bell’s lowest-performance class,
but one with significant new non-CPU capabilities.
Without UNIX, there couldn’t have been a RISC revolution, because there’d have been no
quality software for vendors to pick up: kernel, tools, Graphical UI and 3rd party
Software on these platforms.
The “Dot Boom” that ended in 2000 was only possible because of high-performance UNIX
servers for web, storage & database.
eBay started with Oracle on SUN servers. A solid, dependable system design.
Google didn’t invent Internet Search, but they did come up with the Internet-scale Data
Centre, creating highly available systems using low-cost, “less reliable” commodity
hardware.
Is this a new Bell’s Law Class?
It’s more a system architecture and operational arrangement, implemented almost entirely in
software.
Amazon leveraged their expertise and design of Internet-scale DataCentres into a massive
“Cloud” business - not bundled into its own products, but ‘rentable’ by customers by the
hour.
Netflix, when it changed from mailing DVDs to streaming, based its business on renting
Amazon servers, storage & bandwidth.
We now have yet-cheaper computing services available, with zero capital outlay and
scalable to unprecedented sizes.
It follows Bell’s Law, while extending it.
In 2007 when Apple redefined the Smartphone - using ARM (Acorn RISC Machine) and a variant
of Unix - they created a new class of computing device.
The device was designed to “Just Work” - near zero admin, self-configuring and a highly
reliable O/S, UI & Apps.
Critically, Apple never tried to maximise the “utilisation” of the CPU & its resources
- they put in fast CPUs & aggressively managed power consumption to extend battery
life.
The Mainframe era economic model was inverted with the desktop computer - minimise wasted
User time, not Computer time.
The Smartphone took this “people first” approach to a new level.
For me, Apple’s most important invention - on top of the “Just Works” platform - was the App
Store.
It builds on “The Cloud” and Internet services, providing an almost direct Software Vendor
to Client channel, using a secure & verified distribution system with embedded
payments.
Modern smartphone/ tablet system design, based around “Sandboxes” and a stringent control
layer, seems to contain “malevolent” Apps well enough (no security is perfect, but “Good
Enough” seems attainable).
Without the App Store and Sandboxed Apps, we couldn’t have 2B-4B smartphones.
We know from the MS-Windows PC & Server ecosystem [ and PHP/WordPress ] that
“Bad Actors” will organise and actively exploit system vulnerabilities, making large
fleets of exploitable devices unusable, either because resources are co-opted and the
device is unresponsive, or it’s compromised and can’t be trusted.
Ironically, Moore’s Law couldn’t have proceeded as long and as quickly as it has since
1965 without the availability of Software to turn raw Silicon + Watts into functional,
useful systems.
Intel now owes a lot more business to Unix and its variants than to MS-Windows.
It’s not unreasonable IMHO to say that Unix and its variants “Changed the World” and are
now the most prevalent O/S on the planet.
=======
Sorry for the long piece - I know that TUHS is not the forum for these observations not
confined to Early Unix.
I’d have moved this to COFF, but I’ve not been able to get onto that list so far.
regards
steve
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au
http://members.tip.net.au/~sjenkin