Thanks for your support, but C89 didn't specify an encoding. In classic committee fashion, it refused to take a stand on anything that might limit adoption. The problem was that the API it offered was clumsy and made encoding errors hard to ignore. (When grepping a file for a string, do you really care if there's an irrelevant binary blob in the middle that isn't kosher UTF-8?) Also, it provided no support for printing "wide" characters. This is all covered in the paper cited above.*
The original UTF was compatible with ASCII but not robust against alignment errors, and it also used printable ASCII characters inside multibyte sequences. You could find a '/' inside the encoding of a Cyrillic character, which broke Unix badly. That's why FSS-UTF, File System Safe UTF, was the name given to Prosser's variant.
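(A quick illustrative sketch, mine rather than anything from the paper, of the property that makes FSS-UTF and UTF-8 file-safe: every byte of a multibyte sequence has its high bit set, so a byte like '/' (0x2F) can only ever be a real slash.)

    #include <stdio.h>

    /* In UTF-8, lead bytes of multibyte sequences match 11xxxxxx and
     * continuation bytes match 10xxxxxx, so every byte of a multibyte
     * sequence has the high bit set.  Any byte below 0x80 is a complete
     * ASCII character, never a fragment of a larger one. */
    int main(void)
    {
        /* Cyrillic, slash, Cyrillic: U+0430 -> D0 B0, '/', U+0431 -> D0 B1 */
        const unsigned char s[] = { 0xD0, 0xB0, '/', 0xD0, 0xB1, 0 };
        const unsigned char *p;

        for (p = s; *p != 0; p++)
            if (*p == '/')
                printf("byte at offset %d is a real '/'\n", (int)(p - s));
        return 0;
    }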
It's wrong to give us credit for properties we didn't introduce. But UTF-8 is more regular, simpler to encode and decode, and more robust than its predecessors. Most important, it did introduce the self-synchronization property, which was the key that opened the door for us at X/Open.
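(Again as an illustration, not code from the paper: self-synchronization falls out of the bit patterns. Continuation bytes always match 10xxxxxx, so a decoder dropped at an arbitrary byte can find the next character boundary locally; utf8_sync below is a hypothetical helper name.)

    #include <stddef.h>

    /* Resynchronize at an arbitrary offset: skip continuation bytes
     * (10xxxxxx) until reaching plain ASCII (0xxxxxxx) or a lead byte
     * (11xxxxxx), either of which starts a character. */
    static const unsigned char *
    utf8_sync(const unsigned char *p, const unsigned char *end)
    {
        while (p < end && (*p & 0xC0) == 0x80)
            p++;
        return p;
    }

In well-formed modern UTF-8 this skips at most three bytes, since no character has more than three continuation bytes.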
-rob
* In a classic Usenix whoops, the paper had an appendix that described UTF-8's encoding rigorously, but the appendix was dropped when the paper was published in the conference proceedings. Perhaps that's why the RFC got into the mix and started some of the confusion about UTF-8's origin.