On Wed, Nov 13, 2019, 12:15 PM <ron(a)ronnatalie.com> wrote:
BTW, I'm doing my first messing around with the Linux kernel these days;
if anyone knows the guts of the generic filesystem code I could use a bit
of help. Here's something that I came across on the way in
<sys/mount.h>:
enum
{
  MS_RDONLY = 1,        /* Mount read-only.  */
#define MS_RDONLY MS_RDONLY
  MS_NOSUID = 2,        /* Ignore suid and sgid bits.  */
#define MS_NOSUID MS_NOSUID
  MS_NODEV = 4,         /* Disallow access to device special files.  */
#define MS_NODEV MS_NODEV
  ...
};
Can anyone explain the value of this programming style? Is this just an
example of the result of how programming is taught today?
This really is more a C question than a UNIX one. The problem is that
the preprocessor macros are really kind of a kludge. Making things
enums (or, in later C and in C++, const int definitions) is a lot cleaner.
The #define is probably just a backwards-compatibility kludge (for people
using things like MS_RDONLY or whatever in other macros).
It lets the users of these interfaces test for them with #ifdef. A
pure enum interface doesn't let you do that, which makes it harder to
write portable code driven directly by what is defined.
While it seems purer to use an enum, that is problematic. C++ doesn't
let you use the values as bit flags without casts, due to special rules
around enums that aren't there to get in the way in C.
Conditional code is important, as is providing enough compatibility
scaffolding when sharing code between many systems, or when different
compilers are used. Macro processing accomplishes this rather well,
though not without other issues. In an ideal world, you could put other
constructs into the language to accomplish these goals... but none that
have been proposed has been good enough to gain any traction at all.
Warner