On Monday, May 19th, 2025 at 12:45 PM, Dan Cross <crossd(a)gmail.com> wrote:
On Mon, May 19, 2025 at 12:36 PM Noel Chiappa
<jnc(a)mercury.lcs.mit.edu> wrote:
[snip]
It wasn't just AT&T, IBM & DEC that got run over by commodity DRAM &
CPUs, it was the entire minicomputer industry, effectively extinct by
1995.
Same thing for the workstation industry (with Sun being merely the most
notable example). I have a tiny bit of second-hand personal knowledge in this
area; my wife works for NASA as a structural engineer, and they run a lot of
large computerized mathematical models. In the '70s, they were using CDC
7600s; they moved along through various things as technology changed (IIRC,
at one point they had SGI machines). These days, they seem to mostly be using
high-end personal computers for this.
I guess some specialized uses (various forms of CAD) still use things that
look like workstations, but I expect they are stock personal computers
with special I/O (very large displays, etc.).
So I guess now there are just supercomputers (themselves mostly built out of
large numbers of commodity CPUs), and laptops. Well, there is also cloud
computing, which is huge, but that also just uses lots of commodity CPUs.
The CPU ISAs may be largely shared, but computing has bifurcated
into two distinct camps: those machines intended for use in
datacenters, and those intended for consumer use by end-users.
CPUs intended for datacenters tend to be characterized by large
caches, lower average clock speeds, and wider IO and memory bandwidth.
Those intended for consumer use tend to have high clock speeds, a bit
less cache, and support for comparatively fewer IO devices/less
memory. On the end-user side, you've got a further split between
laptops/desktop machines and devices like phones, tablets, and so on.
In both cases, the dominant IO buses used are PCIe and its variants
(e.g., on the datacenter side you've got CXL), NVMe for storage is
common in lots of places, everything supports Ethernet more or less
(even WiFi uses the Ethernet frame format), and so on. USB seems
ubiquitous for peripherals on the end-user side.
In short, these machines may be called "personal computers" and they
may be PCs in the sense of being used primarily by one user, but
contemporary datacenter machines have more in common with mainframes
and high-end servers than with the original PCs, and consumer machines
are much closer architecturally to high-end workstations than to
yesteryear's PCs.
My desktop machine is a Mac Studio with an ARM CPU; I call it a
workstation and I think that's pretty accurate. At work, one of our
EEs has a big x86 thing with some stupidly powerful graphics card
that he uses to do board layout. It's a workstation in every
recognizable sense, though it does happen to run Windows.
- Dan C.
This may be getting into the weeds a bit, but don't forget industrial hardware, the
stuff where you approach the blurry demarcation between CPU and MCU. For me, this is
always the third class when I'm discussing that sort of thing. Of course, this also
means "operating systems" closer to standalone applications sitting on top of
some microkernel like (se)L4, but you did have VME-based workstations and UNIX versions
specifically for VME systems. For me these walk a line between true workstations and
minicomputers, but as usual that is from the perspective of not having been there. For
the record, one of the canonical UNIX development environments from AT&T for WE32x00
stuff was a WE32100 and support chips thrown on a VME module running System V/VME. From
what I know, VME is still quite common in industrial control. To what extent UNIX and
workalikes make up the OS landscape in today's VME ecosystem eludes me.
Either way, I feel like this is a class of computers that frequently flies under the
radar, but if we're talking strictly consumer stuff, yeah, VME very quickly loses
relevance.
- Matt G.