I don't know if this is what Rob was getting at, but I've also been thinking
about the relative homogenization of the display technologies field in recent years.
Massively parallel vector engines calculating vertices, geometry, tessellation, and pixel
color after rasterization are pretty much the only game in town these days. Granted,
we're finding new and exciting ways to *use* the data these technologies generate,
but put pixels on a surface inside a VR headset, on a flat-panel monitor in front of my
face, or on a little bundle of plastic, glass, and silicon in my pocket, and the general
behavior is still the same: I'm going to give the card a bunch of vertices and data, and
it's then going to convert that data into a bunch of parallel jobs that eventually land
as individual pixels on my screen.
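To make that concrete, the whole exchange looks roughly like the sketch below. I'm using
plain desktop OpenGL purely as a stand-in for any of the modern APIs, and assuming a GL
context and a compiled shader 'program' already exist (that boilerplate is omitted):

    /* Minimal sketch: hand the card a buffer of vertices and ask it to turn
     * them into pixels. Not production code; a core profile would also want
     * a VAO bound, and you may prefer a GL loader over these headers. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>

    void draw_one_triangle(GLuint program)
    {
        static const GLfloat verts[] = {     /* three vertices, x/y/z each */
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
             0.0f,  0.5f, 0.0f,
        };
        GLuint vbo;

        glGenBuffers(1, &vbo);               /* a buffer living on the card */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);

        glEnableVertexAttribArray(0);        /* describe the data layout    */
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);

        glUseProgram(program);               /* vertex + fragment stages    */
        glDrawArrays(GL_TRIANGLES, 0, 3);    /* fan out as parallel jobs
                                                that land as pixels         */
    }

Swap in Vulkan, Metal, or Direct3D and the shape of that exchange stays the same.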
One area I am actually interested in seeing develop, though, is hologram-type stuff.
VR is one thing: you're still creating what is ultimately a 2D image composed of
individual samples (pixels) that capture the state of a hypothetical 3D
environment maintained in your application. With AR, you're a little closer, basing
depth on the physical environment around you, but I wouldn't be surprised if that
same depth is still just put in a depth buffer, pixels are masked accordingly, and the
application pastes a 2D rasterized image of the content over the camera display every
1/60th of a second (or whatever absurd refresh rates we're using now). However, a
hologram must maintain the concept of 3D space, at least well enough that if multiple
objects were projected, they wouldn't simply disappear from view because one was
"behind" another from just one viewpoint, the way depth testing (or the old painter's
algorithm) hides occluded surfaces in most 3D rendering. Although I guess I just answered
my own quandary, in that one would just disable the depth test and use the results from
before the rasterizer, but still, there are implications beyond just sampling this 3D
"scene" into a 2D matrix of pixels.
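For contrast, here's what I mean by the conventional per-pixel masking: a minimal,
purely hypothetical sketch (none of these buffer names come from any real AR API) of the
composite step I suspect happens every frame before the result is pasted over the camera
image:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-frame AR composite: the physical scene's depth sits in
     * cam_depth[], the rendered content in virt_color[]/virt_depth[], and the
     * "AR" is just a per-pixel depth mask over the camera image. */
    void composite_ar_frame(uint32_t *out, const uint32_t *cam_color,
                            const float *cam_depth, const uint32_t *virt_color,
                            const float *virt_depth, size_t npix)
    {
        for (size_t i = 0; i < npix; i++) {
            /* Closer virtual sample wins; otherwise the camera pixel shows
             * through. Everything is already flattened to one viewpoint.  */
            out[i] = (virt_depth[i] < cam_depth[i]) ? virt_color[i]
                                                    : cam_color[i];
        }
    }

A hologram can't get away with collapsing everything down to one viewpoint like that,
which is the whole problem.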
Another area I wish there were more development in, but it's probably too late, is 2D
acceleration a la '80s arcade hardware. Galaxian is allegedly the first, at least as
lore holds, to provide a chip in which the scrolling background was based on a
constantly updating coordinate into a scrollplane and the individual moving graphics were
implemented as hardware sprites. I haven't researched much further back on
scrollplane 2D acceleration, but I personally see it as still a very valid and useful
technology, one that has been overlooked in the wake of 3D graphics and GPUs. Given that
many, many computer users don't really tax the 3D capabilities of their machine,
they'd probably benefit from the lower costs and complexity involved with
accelerating the movement of windows and visual content via this kind of hardware instead
of GPUs based largely on vertex arrays. Granted, I don't know how a modern windowing
system leverages GPU technology to provide for window acceleration, but if it's using
vertex arrays and that sort of thing, that's potentially a lot of extra processing of
flat geometric stuff into vertices to feed a GPU, then in the GPU back into pixels, when
the 2D graphics already exist and could easily be moved around by coordinate using a
scrolling/spriting chip. Granted, my "thesis" on 2D scrolling vs. GPU efficiency is
half-baked, but it's something I think about every so often.
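To make the comparison a little less hand-wavy, here's a rough software model of the
scrollplane-plus-sprite scan-out I have in mind (the plane size, sprite format, and names
are all made up for illustration; real hardware did this in fixed function, typically per
scanline):

    #include <stdint.h>

    #define PLANE_W 512      /* stored scrollplane size, power of two */
    #define PLANE_H 512
    #define SCREEN_W 320
    #define SCREEN_H 240

    struct sprite { int x, y, w, h; const uint8_t *pix; };  /* 0 = transparent */

    void scan_out(uint8_t *screen, const uint8_t *plane,
                  int scroll_x, int scroll_y,
                  const struct sprite *spr, int nspr)
    {
        for (int y = 0; y < SCREEN_H; y++) {
            for (int x = 0; x < SCREEN_W; x++) {
                /* Background: wrap the screen coordinate into the plane. */
                int px = (x + scroll_x) & (PLANE_W - 1);
                int py = (y + scroll_y) & (PLANE_H - 1);
                uint8_t c = plane[py * PLANE_W + px];

                /* Sprites: a simple "is this pixel inside a sprite?" test,
                 * later sprites winning. No vertices, no rasterizer; the
                 * pixel data already exists and is only relocated.        */
                for (int s = 0; s < nspr; s++) {
                    int sx = x - spr[s].x, sy = y - spr[s].y;
                    if (sx >= 0 && sx < spr[s].w && sy >= 0 && sy < spr[s].h) {
                        uint8_t sc = spr[s].pix[sy * spr[s].w + sx];
                        if (sc) c = sc;
                    }
                }
                screen[y * SCREEN_W + x] = c;
            }
        }
    }

Moving a window or scrolling its contents would then amount to poking new coordinates
into a couple of registers, with nothing being re-rasterized.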
Anywho, not sure if that's what you meant by things seeming to have rusted a bit, but
that's my general line of thought when the state of graphics/display technology comes
up. Anything that is just another variation on putting a buffer of vertex properties on a
2k+-core vector engine to spit out a bunch of pixels isn't really pushing into new
territory. I don't care how many more triangles it can do, how many of those pixels
there are, or how close those pixels are to my eyeballs; it's still the same technology
just keeping up with Moore's Law. That all said, GPGPU, on the other hand, is such an
interesting field, and I hope to learn more in the coming years
and start working compute shading into projects in earnest...
- Matt G.
------- Original Message -------
On Monday, March 6th, 2023 at 2:47 PM, Paul Ruizendaal via TUHS <tuhs(a)tuhs.org>
wrote:
I had read that paper and I have just read it again. I
am not sure I understand the point you are making.
My takeaway from the paper -- reading beyond its 1969 vista -- was that it made two
points:
1. We put ever more compute power behind our displays
2. The compute power tends to grow by first adding special-purpose hardware for some core
tasks, which then develops into a generic processor, and the process repeats
From this one could infer a third point: we always tend to want displays that are at the
high end of what is economically feasible, even if that requires an architectural wart.
More narrowly, it concludes that the compute power behind the display is better organised
as general to the system than as dedicated to the display.
The Wikipedia page with a history of GPUs since 1970
(https://en.wikipedia.org/wiki/Graphics_processing_unit) shows the wheel rolling decade
after decade.
However, the above observations are probably not what you wanted to direct my attention
to ...
> On 6 Mar 2023, at 09:57, Rob Pike robpike(a)gmail.com wrote:
>
> I would think you have read Sutherland's "wheel of reincarnation"
> paper, but if you haven't, please do. What fascinates me today is that it seems for
> about a decade now the bearings on that wheel have rusted solid, and no one seems
> interested in lubricating them to get it going again.
>
> -rob
>
> On Mon, Mar 6, 2023 at 7:52 PM Paul Ruizendaal via TUHS tuhs(a)tuhs.org wrote:
> Thanks for this.
>
> My question was unclear: I wasn't thinking of the hardware, but of the software
> abstraction, i.e. the device files living in /dev
>
> I’ve now read through SunOS man pages and it would seem that the /dev/fb file was
> indeed similar to /dev/fbdev on Linux 15 years later. Not quite the same though, as
> initially it seems to have been tied to the kernel part of the SunWindows software. My
> understanding of the latter is still limited though. The later Linux usage is designed
> around mmap() and I am not sure when that arrived in SunOS (the mmap call exists in the
> manpages of 4.2BSD, but was not implemented at that time). Maybe at the time of the Sun-1
> and Sun-2 it worked differently.
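>
> For reference, the later Linux pattern I have in mind is roughly the following; this is
> a minimal sketch with all error handling omitted, not code from any particular program:
>
>     #include <fcntl.h>
>     #include <linux/fb.h>
>     #include <sys/ioctl.h>
>     #include <sys/mman.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         int fd = open("/dev/fb0", O_RDWR);        /* the frame buffer device   */
>         struct fb_var_screeninfo vinfo;
>         struct fb_fix_screeninfo finfo;
>
>         ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* probe geometry and depth  */
>         ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* line length, memory size  */
>
>         /* Map the hardware frame buffer into the process; writes land on the
>          * screen directly.                                                    */
>         unsigned char *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
>                                  MAP_SHARED, fd, 0);
>
>         for (unsigned int i = 0; i < finfo.smem_len; i++)   /* clear to black  */
>             fb[i] = 0;
>
>         munmap(fb, finfo.smem_len);
>         close(fd);
>         return 0;
>     }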
>
> The frame buffer hardware is exposed differently in Plan9. Here there are device
> files (initially /dev/bit/screen and /dev/bit/bitblt) but these are not designed around
> mmap(), which does not exist on Plan9 by design. It later develops into the /dev/draw/...
> files. However, my understanding of graphics in Plan9 is also still limited.
>
> All in all, finding a conceptually clean but still performant way to expose the
> frame buffer (and acceleration) hardware seems to have been a hard problem. Arguably it
> still is.
>
> > On 5 Mar 2023, at 19:25, Kenneth Goodwin kennethgoodwin56(a)gmail.com wrote:
> >
> > The first frame buffers from Evans and Sutherland were at University of Utah,
> > DOD SITES and NYIT CGL as I recall.
> >
> > Circa 1974 to 1978.
> >
> > They were 19 inch RETMA racks.
> > Took three to get decent RGB.
> >
> > 8 bits per pixel per FB.
> >
> > On Sun, Mar 5, 2023, 10:02 AM Paul Ruizendaal via TUHS tuhs(a)tuhs.org wrote:
> > I am confused on the history of the frame buffer device.
> >
> > On Linux, it seems that /dev/fbdev originated in 1999 from work done by Martin
> > Schaller and Geert Uytterhoeven (and some input from Fabrice Bellard?).
> >
> > However, it would seem at first glance that early SunOS also had a frame buffer
> > device (/dev/cgoneX, /dev/bwoneX, etc.) which was similar in nature (a character device
> > that could be mmap’ed to give access to the hardware frame buffer, and ioctl’s to probe
> > and configure the hardware). Is that correct, or were these entirely different in nature?
> >
> > Paul