There is some playing around with the software model,
but basically we have a powerful, privately managed external "card" that does
the graphics, and we talk to it with one particular mechanism. During my work on Plan 9,
it got harder over time to try new things because the list of ways to talk to the card was
shrinking, and anything other than the blessed ways kept getting slower and slower.
For example, a shared-memory frame buffer, which I interpret as the mechanism that
started this conversation, is now a historical artifact. Back then, it was something to play
with.
Ah, now it is clear.
This also relates to the point about what constitutes a "workstation"; one of the
comments was that it needs to have "integrated graphics". What is
integrated in this historical context -- is it a shared memory frame buffer, is it a
shared CPU, is it physically in the same box or just an integrated user experience? It
seems to me that it is not easy to delineate. Consider a Vax connected to a Tek raster
scan display, a Vax connected to a Blit, Clem’s Magnolia and a networked Sun-1. Which ones
are workstations? If shared memory is the key, only Clem’s Magnolia and the Sun-1 qualify.
If it is a shared CPU, only a standalone Sun-1 qualifies, but its CPU would be heavily
taxed when doing graphics, so standalone graphics was maybe not a normal use case. For now
my rule of thumb is that it means (in a 1980’s-1990’s context) a high-bandwidth path
between the compute side and display side, with enough total combined power to drive both
the workload and the display.
This also underlies the question about /dev/fb, fbdev and similar devices. In Unix,
exposing the screen as a file seems a logical choice, although it comes with challenges
in how to make it efficient.
SunOS 1.1 had a /dev/fb device that gave access to the shared-memory frame buffer. It seems
that Sun made 4.2BSD mmap work specifically for this device only; only later did this
become more generic. The 1999 Linux fbdev interface (the /dev/fb0 device) seems to have
more or less copied this SunOS design, providing the illusion of a shared-memory frame
buffer even when that was not always the underlying hardware reality. Access to
acceleration was through display commands issued via ioctl.
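
For concreteness, here is a minimal sketch of that pattern as the Linux fbdev interface
still presents it today (not the SunOS code): it assumes a /dev/fb0 node configured for a
32 bits-per-pixel mode and skips most error handling.

    /* Minimal fbdev sketch: map the frame buffer into the process and
     * put one pixel on the screen.  Assumes /dev/fb0 in a 32 bpp mode. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/fb.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("ioctl");
            return 1;
        }

        /* The "illusion" part: whatever the hardware really is, this
         * looks like a flat array of pixels in our address space. */
        size_t len = fix.smem_len;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); return 1; }

        /* Write a white pixel roughly in the middle of the screen. */
        size_t x = var.xres / 2, y = var.yres / 2;
        size_t off = y * fix.line_length + x * (var.bits_per_pixel / 8);
        *(uint32_t *)(fb + off) = 0x00ffffff;

        munmap(fb, len);
        close(fd);
        return 0;
    }

The ioctl calls here only query the screen mode, but that same ioctl path is where fbdev
put its other control operations.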
Early Plan 9 had the /dev/bit/screen device, which was read-only. Writing to the screen
appears to have been through display commands written to /dev/bit/bitblt. Later this was
refined into the /dev/draw/data and /dev/draw/ctl devices. There was no mmap, of course,
and these devices could be mounted remotely. Considering these abstractions, did it matter
all that much how the screen hardware was organised?
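
To make the file-based nature of this concrete, here is a rough sketch in Plan 9 C of
opening the draw device by hand (normally libdraw’s initdraw does this); the details of
what /dev/draw/new returns are given from memory and may not be exact.

    /* Rough sketch, Plan 9 C: the display is reached through ordinary
     * file operations on /dev/draw.  Field layout of the returned
     * string is from memory of draw(3) and may not be exact. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char info[12*12+1];
        int fd;
        long n;

        /* Opening "new" establishes a connection to the draw device;
         * reading it returns a text description of the display:
         * connection id, image id, channel format, and rectangles. */
        fd = open("/dev/draw/new", ORDWR);
        if(fd < 0)
            sysfatal("open /dev/draw/new: %r");
        n = read(fd, info, sizeof info - 1);
        if(n < 0)
            sysfatal("read: %r");
        info[n] = '\0';
        print("%s\n", info);

        /* Actual drawing goes through the corresponding data and ctl
         * files as binary draw messages. */
        exits(nil);
    }

Because these are just files served over 9P, the same code runs unchanged whether the draw
device is local or mounted from another machine.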
The 1986 Whitechapel MG-1 / Oriel window system used a very different approach:
applications drew to private window bitmaps in main memory, which the kernel composited to
an otherwise user-inaccessible frame buffer upon request. In a way it resembles the
current approach of compositing client-rendered buffers. I’m surprised this was workable
with the hardware speeds and memory sizes of the era. Then again, maybe it did not work
well at all, as the company went out of business after just a few years.
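
Purely as an illustration of that model (not Whitechapel’s actual code, and with made-up
names), the kernel-side compositing step amounts to something like this:

    /* Illustrative sketch only: clients draw into private bitmaps in
     * main memory and a trusted compositor copies them into the
     * otherwise inaccessible frame buffer on request. */
    #include <stdint.h>
    #include <string.h>

    struct window {
        int x, y, w, h;          /* position and size on screen */
        uint8_t *pixels;         /* private bitmap, w*h bytes, 8 bits deep */
    };

    /* Copy every window into the frame buffer, back to front.
     * fb is fb_w bytes per scan line; clipping omitted for brevity. */
    void compose(uint8_t *fb, int fb_w,
                 const struct window *wins, int nwins)
    {
        for (int i = 0; i < nwins; i++) {
            const struct window *win = &wins[i];
            for (int row = 0; row < win->h; row++)
                memcpy(fb + (win->y + row) * fb_w + win->x,
                       win->pixels + row * win->w,
                       win->w);
        }
    }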