I'm not sure what "strong typing" gain you expect to find. With the
exception of the void* difference, C++ isn't much more "type strong" than C.
A lot of the type issues you can find in the kernel just by turning up the
compiler warnings (or by using lint).
The biggest issue we had was that BSD didn't use void* at all. Had they
converted pointers to void*, which in implementation is necessarily the same
as char*, C would have done the right thing. The problem is they did what I
call "Conversion by union."
I just put it as "silent architectural assumptions" in systems programming. In Henry's Ten Commandments, he referred to this as assuming "all the world is a VAX" (today the assumption is that all the world is Intel x86-64).
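
A couple of hypothetical (and very common) examples of what those silent
assumptions look like in C -- code that happens to work on the machine in
front of you and quietly breaks elsewhere:

    #include <stdio.h>

    int main(void)
    {
        int x = 7;
        int *p = &x;

        /* Assumes a pointer fits in a long: true on a VAX and on LP64
         * UNIX/Linux, false on LLP64 systems such as 64-bit Windows. */
        long addr = (long)p;
        printf("pointer squeezed into a long: %ld\n", addr);

        /* Assumes the machine is little-endian when peeking at bytes. */
        unsigned int word = 0x01020304;
        unsigned char *b = (unsigned char *)&word;
        printf("first byte is %d (4 only on little-endian machines)\n", b[0]);

        return 0;
    }
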
The DevSwitch is the beginning of an object-oriented philosophy. Alas,
the original UNIX used it only for mapping major device numbers to functions.
It got some later use for other things, like multi-filesystem support.
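
For anyone who hasn't stared at that table, a minimal sketch of the idea
follows (the field names and the major() encoding are illustrative, not the
historical declarations). The switch is essentially a hand-rolled vtable, one
slot per driver entry point:

    #include <stdio.h>

    #define major(d) (((d) >> 8) & 0xff)     /* illustrative encoding */

    struct cdevsw {
        int (*d_open)(int dev, int flag);
        int (*d_read)(int dev, char *buf, int count);
    };

    /* One toy "driver" whose functions fill the slots. */
    static int null_open(int dev, int flag) { (void)dev; (void)flag; return 0; }
    static int null_read(int dev, char *buf, int count)
    {
        (void)dev; (void)buf; (void)count;
        return 0;                             /* /dev/null: always EOF */
    }

    static struct cdevsw cdevsw[] = {
        [0] = { null_open, null_read },       /* major 0 -> the null driver */
    };

    /* Dispatch through the switch: the major number picks the driver,
     * much as a C++ virtual call picks the override. */
    static int dev_read(int dev, char *buf, int count)
    {
        return (*cdevsw[major(dev)].d_read)(dev, buf, count);
    }

    int main(void)
    {
        char buf[16];
        printf("read returned %d\n", dev_read(0 << 8, buf, sizeof buf));
        return 0;
    }
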
Fair enough -- again. As much as I love BSD, I'm quick to knock it (and now Linux) down a few pegs for two issues in this light. At the time, adding something like the file system switch was ingenious and novel. It took a couple of different schemes in different UNIX kernels (Sun, Masscomp, Eighth Edition, and later System V), and then a couple of arguments about how to do interposition to stack them, to settle on the proper set of functions needed for the FS layer, but eventually we mostly have them. That took years, though, and we still don't have something like the Locus vproc work for the process layer, although it was implemented for Linux as well as for BSD/Mach [mainline Linux rejected it - see the OpenSSI.org work].
To me, if we had done the same thing with the process layer that we did with the FS's, we would be better off. The reason we didn't is, of course, the lack of an indirection layer, which objects give you.
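
Purely hypothetical sketch -- none of these names come from Locus or OpenSSI --
of what a vproc-style ops vector for the process layer could look like, the
same indirection the vnode/VFS layer gave file systems:

    #include <stdio.h>

    struct vproc;                             /* opaque to callers */

    struct vproc_ops {                        /* the "methods" */
        int (*vp_signal)(struct vproc *vp, int sig);
        int (*vp_getpid)(struct vproc *vp);
    };

    struct vproc {
        const struct vproc_ops *v_ops;        /* local, remote, migrated... */
        void                   *v_data;       /* implementation-private */
    };

    /* Callers dispatch through the ops vector and never learn whether the
     * process is local or lives on another node. */
    static int vproc_signal(struct vproc *vp, int sig)
    {
        return vp->v_ops->vp_signal(vp, sig);
    }
    static int vproc_getpid(struct vproc *vp)
    {
        return vp->v_ops->vp_getpid(vp);
    }

    /* A toy "local process" implementation just to show the plumbing. */
    struct local_proc { int pid; int last_sig; };

    static int local_signal(struct vproc *vp, int sig)
    {
        ((struct local_proc *)vp->v_data)->last_sig = sig;
        return 0;
    }
    static int local_getpid(struct vproc *vp)
    {
        return ((struct local_proc *)vp->v_data)->pid;
    }

    static const struct vproc_ops local_ops = {
        .vp_signal = local_signal,
        .vp_getpid = local_getpid,
    };

    int main(void)
    {
        struct local_proc lp = { 123, 0 };
        struct vproc vp = { &local_ops, &lp };

        vproc_signal(&vp, 9);
        printf("pid %d last got signal %d\n", vproc_getpid(&vp), lp.last_sig);
        return 0;
    }
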
The scary supportability thing in the kernel isn't so much typing, but the
failure to "hide" implementation details of one part of the kernel from the
others.
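
Plain C can do a fair amount of that hiding; here is a minimal sketch (mine,
not lifted from any particular kernel) using an incomplete type so that only
one file knows a structure's layout and the rest of the kernel can only go
through accessor functions:

    /* buf.h -- all the rest of the kernel ever sees */
    struct buf;                              /* incomplete type: layout hidden */
    struct buf *buf_get(int dev, int blkno);
    int         buf_blkno(const struct buf *bp);
    void        buf_release(struct buf *bp);

    /* buf.c -- the only place that knows the layout */
    #include <stdlib.h>

    struct buf {
        int  b_dev;
        int  b_blkno;
        char b_data[512];
    };

    struct buf *buf_get(int dev, int blkno)
    {
        struct buf *bp = calloc(1, sizeof *bp);
        if (bp != NULL) {
            bp->b_dev   = dev;
            bp->b_blkno = blkno;
        }
        return bp;
    }

    int buf_blkno(const struct buf *bp) { return bp->b_blkno; }

    void buf_release(struct buf *bp) { free(bp); }
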
Again, I agree. But this argument screams in favor of a Dijkstra THE-style layering (e.g., microkernel) approach. I always thought it was the better way to go, but in the end, people always picked pure performance over the safety that "information hiding" gives you.
It's tough; I've been on both sides of this debate and have empathy for both positions. As a supplier, it is all about being as fast as possible, because the customers don't care how hard you have to work; they just want as much speed as possible (in fact, the supercomputer folks would really rather have no OS at all). So the "less is more" monolithic / hack / architecture-specific ideas make a lot of sense. But some of the things we are talking about here are ideas that aid us on the development side in making "better" or "cleaner" systems - ones that are more maintainable and easier to ensure are "correct" - which the customer often will not pay for, or worse, will shun because it ends up being a little slower.
Clem