A problem I have with C as a systems language is that there is no certainty about the
representation of a structure in memory, and hence about what is written when it goes to
disk. I am happy to be corrected, but I remember this behaviour being compiler-dependent.
Other languages, such as Bliss and, to perhaps a lesser degree, Pascal, had implicit
control over this. Having said that, I have worked with C as a systems language many times.
On 3 Mar 2025, at 10:25 am, Dan Cross
<crossd(a)gmail.com> wrote:
On Sun, Mar 2, 2025 at 1:46 PM G. Branden Robinson
<g.branden.robinson(a)gmail.com> wrote:
At 2025-03-02T17:54:01+0000, segaloco via TUHS
wrote:
[snip]
If I'm understanding Bakul's point correctly, then what you're saying is
a defensible claim about many aspects of C but emphatically _not_ of its
type system. The language does _not_ give you the tools to implement a
rigorous type system as Haskell programmers or type theorists would
conceive of it. I mean, you could write a Haskell compiler in C, but
you wouldn't be able to apply its type logic to C's own type system.
(Maybe you could do it for any types other than the primitive ones
already supported, with sufficient abuse of the preprocessor.)
This rigid inability was baked into C from day one. Everything that can
possibly be represented in a machine register with all bits clear shows
up as an integral zero literal.
'\0' == 0 == nullptr == struct { } == union { }
The CSRC staked a lot on the typeless B, and clung to that heritage.
I don't think it's an accident that C's designers expressed type
_aliasing_ with the keyword "typedef". Types simply were not taken
seriously as reasoning tools. C was a language for people who wanted to
get that crap out of the way so they could think about binary
representations. (Okay, octal.)
I'm sure I've said something like this before, but I do not think this
criticism is fair. Many of the design aspects of C were dictated by
the context in which it was developed, and if judging it in 2025, we
have to bear that in mind. Recall that it had to fit on a machine that
is laughably small by today's standards, and pretty small even in
1972; that imposed real constraints on its structure and semantics.
Did other systems-oriented languages in that era have significantly
richer type systems? Would a compiler for such a language even be
runnable on Unix in the very early 70s? That's a serious question.
Incidentally, the keyword for creating a type alias in Haskell is just
`type`. Same in SML, OCaml, and Rust. I don't know that `typedef` is
appreciably different, nor that it can credibly be cited as some sort
of "tell" about a cavalier attitude towards types.
C was aggressively oversold, I think more by its fans than its
designers.
For the sorts of things one used C for throughout the 70s and into the
80s, what were the alternatives?
Even its reputation as "portable
assembler" withstands
scrutiny only as long as all you want to port to are models in the
PDP-11 line. If you think that claim is crack-headed, review London &
Reiser's paper on porting Unix to the VAX-11/780. Even setting aside
the problems with nroff/troff and the Bourne shell, they pointed their
fingers at problems that a "portable assembly language" would already
have considered, like machines having differing alignment requirements,
I didn't read that in their paper. They pointed out four enhancements
to C, related to types, that they felt would enhance portability.
Interestingly, all four have been implemented in more modern dialects
of the language, though the alignment thing is still a bit perilous; I
can't quite tell whether they were referring to "packed" structures
(largely irrelevant when programs are written against a well-defined
ABI) or something closer to `alignas()`/`alignof()`. They did indicate
that alignment makes sharing _binary_ data between VAX and PDP-11
harder, but that's true of other aspects of product types as well.
or having the standard I/O library's binary
I/O functions write and
interpret some kind of header in the file expressing basic facts like
the endianness of various integer widths (big, little, or FP-11). Magic
numbers were good enough for object file formats, but not applications
programmers, I guess.
This really doesn't seem like it's the job of the IO library. For that
matter, I don't think more recent languages delegate this to their IO
routines, either. I'm also not sure that's what they said (at least,
that's not how I interpreted their comments).
I don't doubt that people could be massively
more productive in C than
in PDP-11 assembly language. But that claim doesn't establish anything
that wasn't already known about the utility of higher-level languages.
K&R were still trying to warn people in 1988 that C was not a hammer for
all nails.
"...C offers only straightforward, single-thread control flow: tests,
loops, grouping, and subprograms, but not multiprogramming, parallel
operations, synchronization, or coroutines." (_The C Programming
Language_, 2nd edition, p. 2)
I guess we can add a general, flexible algebraic type system to the list
of missing features.
Sure. But then we're back where we started, talking about a language
that only has a passing resemblance to C.
- Dan C.