On Aug 23, 2018, at 7:42 AM, ron(a)ronnatalie.com wrote:
The biggest issue we had was that BSD didn't use void* at all. Had they
converted pointers to void*, which in implementation is necessarily the same
as char*, C would have done the right thing. The problem is they did what I
call "conversion by union": they would store a char* into one element of a
union and read it out through another, differently typed, pointer member.
I haven't done much BSD kernel programming in the last 15 years,
but this is not my recollection. BSD used caddr_t, typedefed
to char*, sort of as a void*. IIRC void* came into common use
after BSD Unix first came about. Why use a union when a cast
will do? :-) The union trick is more likely to be used for
esoteric things (like getting at the bytes of a floating-point
value) or for more complex types, or probably by people with
lots of programming experience in Pascal and not so much in C
(in Pascal you *had* to use untagged variant records if you
wanted to cheat!). In C, all that void* does is allow you to
avoid casts in some cases.
This works fine on a VAX, where all pointers have the same format, but
fails on word-addressed machines (notably, in our case, the HEP, where
the target size is encoded in the low-order bits of the pointer).
Still, running around and fixing those was only a few hours' work.
The DevSwitch is the beginning of an object-oriented philosophy. Alas,
the original UNIX used it only for mapping major device numbers to functions.
It got some later use for other things, like multi-filesystem support.
{b,c}devsw are closer to an interface (or, in C++ terminology,
an abstract class that contains nothing but pure virtual
functions). Go got it right. I had to build something like
devsw for some IoT devices in Go and it worked very well. And
you don't need to manually stuff the relevant functions into a
struct so that {bdev,cdev,line}sw can be properly initialized,
as in C.
Not having this feature in C meant the dev switches got a bit
impure (in 4.xBSD there was only a buf ptr in bdevsw and a
tty ptr in cdevsw). In modern BSD the cdev struct contains all
sorts of cruft.
So in this sense C++ got it wrong, at least initially. Code
for each disk or network device would be device-specific, but
insofar as possible we want to access all devices of a given
class the same way. Classical OO would not help here.
[Aside: What I would really like is if Go interfaces could be
somehow turned into abstract data types by adding an algebraic
specification. For example, at the source level you should
be able to say: don't allow read/write once the device is closed.
Adding this sort of assertion/axiom at the interface level
can make your code less cluttered (no need to check at every
point whether the device is still open) and safer (in case you
forget to check).]
The scary supportability problem in the kernel isn't so much the typing,
but the failure to "hide" the implementation details of one part of the
kernel from the others.
Lack of modularity, or of more fine-grained trust boundaries.
Apropos of this point:
https://threatpost.com/researchers-blame-monolithic-linux-code-base-for-cri…
In an exhaustive study of critical Linux vulnerabilities, a
team of academic and government-backed researchers claim to
have proven that almost all flaws could be mitigated to less
than critical severity, and that 40 percent could be
completely eliminated, with an OS design based on a
verified microkernel. Though I tend to think the "verified"
part doesn't play as strong a role as the modularization
forced by microkernels.
[Ties in with the formal methods thread in here and in COFF!]