On Wed, Sep 18, 2024 at 8:32 PM George Michaelson <ggm(a)algebras.org> wrote:
> I think I misunderstand a lot of this Dan, so I won't try to prosecute
> my case. I felt your answer went to the ring model separating user
> space from kernel space, but not the ring model embedded in the CPU,
> down in the chip register set and instructions.
Perhaps I don't understand what you mean, then. How are those things
different? Or, put another way, one requires the other: the thing that
separates userspace and kernel space _is_ the CPU's model of
protection domains, but these have different names on different CPUs.
Sure, they're numbered rings on x86 (0 being "kernel" mode and 3
"user", with the rarely used rings 1 and 2 all but ignored outside of,
I think, OS/2 2.x), but the analogous concept is called "access modes"
of execution on the VAX; RISC-V and ARM retain the "mode" nomenclature,
as do most RISC architectures (MIPS, Alpha, SPARC, etc.), and "kernel
mode" (ring 0) is variously called "supervisor", "kernel", "executive"
or "privileged" mode in different places (the VAX, of course, had 4
modes: kernel, executive, supervisor, and user; and the GE 645 had 9).
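
For a concrete illustration of what I mean by "the CPU's model of
protection domains", here's a tiny sketch, assuming x86 and GCC-style
inline asm (treat it as illustrative, not production code): the current
privilege level, the "ring", is just the low two bits of the CS segment
selector, so a user-space program prints 3, while kernel code would see 0.

    /* Illustrative: the current x86 "ring" is the low two bits of CS. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t cs;

        /* Read the code-segment selector; bits 0-1 are the CPL. */
        __asm__ volatile ("mov %%cs, %0" : "=r" (cs));
        printf("current ring: %d\n", cs & 3);
        return 0;
    }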
"trusted" computing was the idea across boot
you get to initialise
some state and then set a flag, a bit, which forces the kernel to live
within the constraint set encoded from that transition.
Do you mean something like ARM's TrustZone? That's different, and
usually involves a "trusted executive" outside of the control of the
OS. But that doesn't a priori mean that the OS can't do funky things
like replace itself.
> Loading a new kernel implies writing into things which I believe(d)
> you had been constrained not to do. The chip level rings are meant to
> be absolutes.
Not really. What would that mean? How would loadable kernel modules
work in such a scenario? What about self-modifying code (when Linux
boots up, it will examine what machine it's running on and quite
possibly binary-patch some hot execution paths to avoid branches)? If
one can't move between processor access modes in some controlled way,
how would I/O or system calls work?
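
On the loadable-module point, the classic hello-world module is enough
to show the mechanism; this is just a sketch of the standard shape
(built out of tree, loaded with insmod), nothing exotic:

    /* Sketch of a trivial Linux loadable module: built separately and
     * inserted into a running kernel, where it executes in kernel mode. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        pr_info("hello: now running in kernel mode\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloading\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");

The point being that code compiled long after boot ends up running in
ring 0, in a controlled way; and if you want to constrain that, the
answer is a policy like module signature enforcement, not an "absolute"
ring.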
As for trust-zone-y type stuff, consider that part of the overall
security posture of a system may be to measure whatever is presented
to the kexec mechanism and verify that it's properly signed; if it is,
it's ok to boot it; if not, the operation fails. If the intent is
merely to prevent untrusted code from running in kernel mode, such a
mechanism (with a lot of details omitted) will more or less do that,
provided the lowest-level trusted bootstrap code made sure that the
first OS loaded was properly signed. This seems like a reasonable
design that shouldn't be prohibited.
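
To sketch what that looks like on Linux (hedged: the paths and command
line below are placeholders, error handling is minimal, and you need
the appropriate privileges), kexec_file_load(2) hands the image to the
kernel, which can verify its signature before staging it; an image that
fails verification is simply never booted:

    /* Hedged sketch: stage a new kernel via kexec_file_load(2) and let
     * the running kernel verify it.  Paths/cmdline are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        static const char cmdline[] = "console=ttyS0";   /* placeholder */
        int kfd = open("/boot/vmlinuz", O_RDONLY);       /* placeholder */
        int ifd = open("/boot/initrd.img", O_RDONLY);    /* placeholder */

        if (kfd < 0 || ifd < 0) {
            perror("open");
            return 1;
        }

        /* The kernel reads and, if signature enforcement is configured,
         * verifies the image here; on failure nothing is staged. */
        if (syscall(SYS_kexec_file_load, kfd, ifd,
                    sizeof(cmdline), cmdline, 0UL) != 0) {
            perror("kexec_file_load");
            return 1;
        }

        puts("new kernel staged for the next kexec reboot");
        return 0;
    }

The actual reboot into the staged kernel happens later (kexec -e, or
systemctl kexec); by then the verification decision has already been
made.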
> Warm boot vs cold boot vs .. I dunno, a manual transition through GPT
> and BIOS states to change things?
This is something people forget: the BIOS is just code. There's
nothing magical about it. If you're talking about something like
TrustZone to ensure that the _BIOS_ is properly trusted, then that's a
separate matter, but mostly orthogonal to the kexec thing.
> I don't do this for a living. I stress I very probably completely
> misunderstand what is a pledge to good faith and what is actually
> meant to be enforced by hardware.
Hey, no problem: the goal of this list is to explore the history and
(one hopes) learn from it as well. These are subtle topics and I'm
sure we all, myself very much included, have much to learn here. I
think we're all approaching this with the same collegial spirit of
inquiry!
- Dan C.