On Sat, Dec 31, 2022 at 3:02 PM Paul Ruizendaal <pnr(a)planet.nl> wrote:
> On 31 Dec 2022, at 15:59, Dan Cross <crossd(a)gmail.com> wrote:
>> I think that BSD supported autoconfiguration on the VAX well before
>> OpenBoot; the OpenBSD man page says it dates from 4.1 (1981) and was
>> revamped in 4.4.
> That is interesting. Are you referring to this:
>
> https://www.tuhs.org/cgi-bin/utree.pl?file=4.1cBSD/a/sys/conf
> https://www.tuhs.org/cgi-bin/utree.pl?file=4.1cBSD/usr/man/man8/config.8
>
> Or is auto configuration something else?
Warner answered this better than I could. :-)
SBI (the RISC-V Supervisor Binary Interface) was/is an attempt to
define a common interface to firmware across different systems built
on RISC-V CPUs, but it's growing wildly and tending more towards
UEFI+ACPI than something like OpenBoot, which was much simpler.
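
For concreteness: at its core SBI is just a supervisor-mode ecall
into machine-mode firmware, with the extension and function numbers
in a7/a6. A minimal sketch, following the SBI v0.2+ calling
convention (RISC-V GCC/Clang only):

/* Make an SBI call: extension ID in a7, function ID in a6,
 * argument in a0; ecall traps to M-mode firmware, which returns
 * an error code in a0 and a value in a1. */
static inline long
sbi_call(long ext, long fn, long arg0)
{
    register long a0 __asm__("a0") = arg0;
    register long a1 __asm__("a1");
    register long a6 __asm__("a6") = fn;
    register long a7 __asm__("a7") = ext;

    __asm__ volatile("ecall"
                     : "+r"(a0), "=r"(a1)
                     : "r"(a6), "r"(a7)
                     : "memory");
    return a0;  /* SBI error code; a1 carries the return value */
}

Everything beyond that little trap interface -- timers, IPIs, hart
state management, system reset -- is extensions, and the extensions
are the part that keeps growing.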
In general, the idea of a BIOS isn't terrible: provide an interface
that decouples the OS and hardware. But in practice, no one has come
up with a particularly good set of interfaces yet. Ironically,
BSD-style autoconfig might have been the best yet.
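
For anyone who hasn't looked at it: the shape of it is a table of
possible devices plus per-driver probe/attach routines, and the
kernel only attaches what actually answers. A minimal sketch
(illustrative names, not the actual 4.x BSD code):

#include <stddef.h>

/* Each driver supplies a probe routine (does hardware respond at
 * this address?) and an attach routine (set the device up). */
struct driver {
    const char *name;
    int  (*probe)(unsigned long csr);
    void (*attach)(unsigned long csr);
};

/* One entry per candidate device, e.g. generated by config(8). */
struct devconf {
    struct driver *drv;
    unsigned long  csr;     /* candidate register address */
};

extern struct devconf conftab[];

void
autoconf(void)
{
    struct devconf *dc;

    for (dc = conftab; dc->drv != NULL; dc++)
        if (dc->drv->probe(dc->csr))
            dc->drv->attach(dc->csr);
}

The nice property is that the same kernel boots on differently
populated machines: the configuration is discovered, not hard-coded.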
> When it comes to abstracting devices, it would seem to me that virtio
> has gotten quite some traction in the last decade.
Virtio is solving a very, very different problem: this is limited
paravirtualization of devices for virtual machines. The idea of virtio
devices is predicated on a hypervisor somewhere providing a matching
implementation that, in turn, plugs into whatever primitives the host
provides for device management. Put another way, devices don't expose
virtio interfaces directly (generally speaking), and you need
something somewhere to translate from the virtio interfaces a guest
sees to actual useful services that the guest makes use of via those
interfaces.
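
You can see that split in the interface itself: the guest builds
descriptor chains in memory it shares with the host and then
notifies it; everything that makes the "device" actually do anything
lives on the other side. The descriptor, per the virtio 1.0
split-ring layout:

#include <stdint.h>

/* virtio "split ring" descriptor, filled in by the guest.  Note
 * that addr is a guest-physical address: something on the host
 * side has to translate it and actually move the bytes. */
struct virtq_desc {
    uint64_t addr;   /* guest-physical buffer address */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* VIRTQ_DESC_F_* below */
    uint16_t next;   /* index of next descriptor in a chain */
};

#define VIRTQ_DESC_F_NEXT   1  /* chains to 'next' */
#define VIRTQ_DESC_F_WRITE  2  /* device writes, guest reads */

There is no hardware behind any of that unless somebody supplies it.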
A host _could_ emulate something like an actual physical hardware
device (for some category of device) but it turns out that's often not
particularly efficient; for instance virtio-net tends to be somewhat
faster than an emulated e1000. Still, virtio has some problems; you
can program DMA to anywhere in the guest physical address space, which
can be expensive to handle without an exit from the guest context:
virtio-net isn't as fast as Google's gvnic.
> Looking at both the systems of 40 years ago and the Risc-V SoC’s of
> last year, I could imagine something like:
>
> - A simple SBI, like the Berkeley Boot Loader (BBL) without the
>   proxy kernel & host interface
This, I think, is really the hard part. The issue is that systems are
configured a myriad of different ways depending on the platform,
vendor, particular combination of IO devices, etc. That makes figuring
out where the host OS image is and getting it loaded into memory a
real pain in the booty, not to mention that on modern systems you've
got to do memory training as part of initializing DRAM controllers
and the like. Then there's power-sequencing, initializing IO buses and
all that business: who assigns PCI BARs? What if the kernel is on a
PCI device, like an NVMe SSD or similar? Turning on a modern CPU is
really a non-trivial exercise, let alone getting the OS loaded. Once
you get it into memory, though, _starting_ the kernel is comparatively
easy. But all the gritty details before that are how we ended up with
enormous systems like UEFI.
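
Just to make "who assigns PCI BARs?" concrete: something has to do
roughly the following dance for every BAR of every function in the
system before the kernel can even find an NVMe boot disk. A
simplified sketch covering only 32-bit memory BARs; the pci_cfg_*
accessors and the MMIO window are stand-ins for whatever the
platform provides:

#include <stdint.h>

extern uint32_t pci_cfg_read32(int bus, int dev, int fn, int off);
extern void pci_cfg_write32(int bus, int dev, int fn, int off,
                            uint32_t val);

static uint32_t next_mmio = 0x90000000u;  /* assumed free window */

void
assign_bar(int bus, int dev, int fn, int bar)
{
    int off = 0x10 + 4 * bar;   /* BARs start at config offset 0x10 */
    uint32_t size;

    /* Write all ones; the read-back encodes the BAR's size. */
    pci_cfg_write32(bus, dev, fn, off, 0xFFFFFFFFu);
    size = pci_cfg_read32(bus, dev, fn, off);
    if (size == 0)
        return;                   /* BAR not implemented */
    size = ~(size & ~0xFu) + 1;   /* decode size (memory BAR) */

    /* BAR sizes are powers of two; align and assign. */
    next_mmio = (next_mmio + size - 1) & ~(size - 1);
    pci_cfg_write32(bus, dev, fn, off, next_mmio);
    next_mmio += size;
}

And that's the easy part: it assumes the config space mechanism, the
buses behind bridges, and the address windows were all set up first.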
> - Devices presented to the OS as virtio devices
But backed by what?
> - MMU abstraction somewhat similar in idea to that in SYSV/68 [1]
It's not clear to me what this would buy you; the MMU tends to be
architecturally defined, in a way that e.g. the boot process, IO bus
initialization, and even CPU topology discovery are not. And then
there are soft-TLBs (which are a bad idea, but that's a separate
discussion). I mean, most VM systems since SunOS 4 have a HAL and a
portable part already.
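
E.g., the portable VM layer calls down through a handful of entry
points and each port supplies the MMU-specific half. Illustrative
prototypes in the spirit of the SunOS HAT layer (not the real
identifiers):

struct hat;    /* opaque per-address-space MMU state */
struct page;   /* machine-independent page descriptor */

/* Create/destroy the MMU state for an address space. */
struct hat *hat_alloc(void);
void hat_free(struct hat *hat);

/* Enter, remove, and re-protect translations. */
void hat_memload(struct hat *hat, void *vaddr, struct page *pp,
                 unsigned prot);
void hat_unload(struct hat *hat, void *vaddr, unsigned long len);
void hat_chgprot(struct hat *hat, void *vaddr, unsigned long len,
                 unsigned prot);

That boundary already exists inside the kernel; pulling it out into
firmware doesn't obviously buy you anything.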
> Together this might be a usable Unix BIOS that could have worked in
> the early 80’s. One could also think of it as a simple hypervisor for
> only one client.
I believe that AIX on the PC/RT did basically this. Certainly, this
was true on the 370. The PALcode stuff on the Alpha was similar in
terms of having a BIOS-like abstraction with specific implementations
for Windows or VMS/Unix.
But there's an argument that this is functionality that _belongs_ in
the OS, which was the traditional Unix approach. Could it have been
done differently? I guess so, but would that have been a good system
design? I don't know.
> The remaining BBL functionality is not all that different from the
> content in mch.s on 16-bit Unix (e.g. floating point emulation for
> CPU’s that don’t have it in hardware). A virtio device is not all
> that different from the interface presented by e.g. PDP-11 ethernet
> devices (e.g. DELUA), the MMU abstraction was contemporary.
>
> High end systems could have had device hardware that implemented
> the virtio interface directly,
This is challenging with virtio because some devices basically rely on
the guest trapping into the host with a particular exit code to kick
off IO. I suppose you could do that with a shared semaphore, but then
you're burning cycles somewhere polling. You could interrupt yourself,
either synchronously or with a self-IPI, but then the interrupt
handler has to run the virtio handling code. In any event, it's no
longer a simple matter of writing to a device register somewhere to
make the "device" actually do something. Sadly, that's pretty baked
into the interface.
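
To be concrete about it: the guest's entire "go" operation is a
single store (offset per the virtio-mmio register layout):

#include <stdint.h>

/* Notify the device that a queue has new buffers.  QueueNotify
 * lives at offset 0x050 in the virtio-mmio layout.  Under a
 * hypervisor this store traps and the host runs the device model;
 * hardware implementing virtio directly gets no trap, so it has to
 * notice the store on its own and then go walk rings in memory. */
static inline void
virtio_kick(volatile uint32_t *mmio_base, uint16_t queue)
{
    mmio_base[0x050 / 4] = queue;
}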
> low end systems could use simpler hardware and use the BIOS to
> present this interface via software.
But you've got to have some CPU run that software somewhere; either
you've got a satellite processor doing it or you're stealing cycles
from the OS. The latter is how a BIOS traditionally works and I
suspect that if we were doing this in the 1980s, having a dedicated IO
processor would have added non-trivial cost to a system, for little
gain: let's remember that Unix was _very_ successful on a variety of
systems in that era without relying on a BIOS-type abstraction!
> [1] From mmu.h in the SYSV/68 source tree:
>
> /*
>      @(#)mmu.h  2.26
>      This is the header file for the UNIX V/68 generic
>      MMU interface. This provides the information that
>      is used by the various routines that call:
>           mmufork ()
>           mmuexec ()
>           mmuexit ()
>           mmuread ()
>           mmuwrite ()
>           mmuattach ()
>           mmudetach ()
>           mmuregs ()
>           mmualloc ()
>           mmuinit ()
>           mmuint ()
>      The above routines and secondary routines called
>      by them are contained in io/mmu1.c and io/mmu2.c.
> */