On Sat, Feb 25, 2023 at 9:40 PM Theodore Ts'o <tytso@mit.edu> wrote:
> I think it's fair to say that in the very early days of Linux, most
> of the people who were using it were people who were kernel hackers;
> and so we didn't have all that many people who were interested in
> developing new windowing systems.  We just wanted to be able to have
> multiple xterms and Emacs windows.
This is another important thing to bear in mind: this predates the
explosion of the World Wide Web.  Most people back then paradoxically
ran a lot more local software on their machines (applications weren't
de facto mediated by a web browser), but a lot of that software was
simpler: with xterm and a text editor, a lot of folks were good to go.
> In fact, support for X Windows predated the development of a
> networking stack; we had Unix domain sockets, so that was enough for
> X, but we didn't have a working networking stack at that point!  I
> would be running X, and then running C-Kermit to download files from
> MIT over a dialup modem.
!!
> At that point, X windows wasn't *flaky* per se, but remember that
> back then monitors were very persnickety about exactly what
> resolutions and frequencies they would accept.  And this was before
> monitors supported EDID (Extended Display Identification Data), which
> allowed the X server to figure out what the parameters of the monitor
> were.  So that meant that configuring the X server with the correct
> resolution, frequencies, etc., was tricky.  There were long and
> complex documents explaining how to do it, and it was a very manual
> process.  If you got the calculations wrong, the image might not be
> stable, but that wasn't a software bug so much as it was a
> configuration error.
Yeah, this: once you got something configured and working it wasn't
like it crashed all the time or anything like that. But getting it
working in the first place was challenging; it was a _far_ cry from
today, where it seems like most of the time, X "just works" out of the
box. Or even from most workstations of the era, which largely worked
with little or no tedious configuration (because the vendor had done
the hard work to bring X up on their hardware already).
But on x86, I recall that even slight perturbations in a system could
keep X from running. For example, one might have the right model of
xfree86-supported video card, but from a manufacturing run of cards
that did not work (because they used rev B of an internal component
instead of A, perhaps). Or the card might not work on a different
motherboard, etc.
Getting it working could be a real exercise in frustration.
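(For anyone who never had the pleasure: below is roughly what the
hand-written Monitor section of an XFree86 3.x-era XF86Config looked
like.  The identifier and sync ranges here are made up, and the
modeline is just the standard VESA 1024x768 @ 60 Hz timing rather than
anything tuned to a particular card or monitor, but every one of those
numbers had to come out of the monitor's manual, the XFree86 Video
Timings HOWTO, or your own arithmetic.)

  Section "Monitor"
      Identifier  "Generic 14in CRT"   # made-up name
      HorizSync   31.5-57.0            # kHz, from the monitor's manual
      VertRefresh 50-90                # Hz, likewise
      # name       dotclock  hdisp hsyncstart hsyncend htotal  vdisp vsyncstart vsyncend vtotal  flags
      Modeline "1024x768"  65.0  1024 1048 1184 1344  768 771 777 806  -hsync -vsync
  EndSection

Get one of those timing numbers wrong and, as Ted says, the best you
could hope for was an unstable picture.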
> There were programs (for example, the most famous was the graphical
> game "Tuxracer") which wrote directly to the frame buffer, but there
> wasn't anyone who was interested in developing their own compositor.
> We just wanted xterms and (later) Firefox to work!
Firefox? Wow, talk about a Johnny Come Lately. :-) I can still
remember compiling NCSA Mosaic on a SPARCstation 2. Those were the
days...very painful days....
- Dan C.
> As far as discussion about what should and shouldn't go into the
> kernel, most people agreed that as much as possible, especially in
> graphics land, should stay out of the kernel.  The fact that we
> didn't have a lot of graphics specialists in the kernel development
> community, and that in those early days the vast majority of Linux
> boxen were single-user machines, just sealed the deal.