On Wed, 3 Feb 2021, Peter Jeremy wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
> Does anyone know why the computer industry wound up standardising on
> 8-bit bytes?
Best reason I can think of is System/360 with 8-bit EBCDIC (Ugh! Who said
that "J" should follow "I"?).  I'm told that you could coerce it into
using ASCII, although I've never seen it.
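
A quick sketch of what I mean by that "J"/"I" gripe (my own illustration,
using the standard EBCDIC code points, not anything lifted from the 360):
the alphabet sits in three non-contiguous runs, so the usual ASCII
arithmetic on letters falls apart.

    #include <stdio.h>

    int main(void)
    {
        /* EBCDIC runs: A-I = 0xC1..0xC9, J-R = 0xD1..0xD9, S-Z = 0xE2..0xE9 */
        unsigned char ebcdic_I = 0xC9, ebcdic_J = 0xD1;
        /* ASCII letters are one contiguous run: I = 0x49, J = 0x4A */
        unsigned char ascii_I  = 0x49, ascii_J  = 0x4A;

        printf("EBCDIC: 'J' - 'I' = %d\n", ebcdic_J - ebcdic_I);  /* 8 */
        printf("ASCII:  'J' - 'I' = %d\n", ascii_J  - ascii_I);   /* 1 */
        return 0;
    }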
> Scientific computers were word-based and the number of bits in a word is
> more driven by the desired float range/precision.  Commercial computers
> needed to support BCD numbers and typically 6-bit characters.  ASCII
> (when it turned up) was 7 bits and so 8-bit characters wasted ⅛ of the
> storage.  Minis tended to have shorter word sizes to minimise the amount
> of hardware.
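
To put rough numbers on that packing trade-off (my figures, purely for
illustration; the word sizes are just typical examples, e.g. 7090/PDP-10,
CDC 6600 and S/360):

    #include <stdio.h>

    int main(void)
    {
        int words[] = { 36, 60, 32 };   /* typical word sizes */
        int chars[] = { 6, 7, 8 };      /* sixbit, ASCII, 8-bit byte */

        for (int w = 0; w < 3; w++)
            for (int c = 0; c < 3; c++) {
                int per_word = words[w] / chars[c];
                int wasted   = words[w] - per_word * chars[c];
                printf("%2d-bit word, %d-bit chars: %2d per word, %d bits left over\n",
                       words[w], chars[c], per_word, wasted);
            }
        /* And the 1-in-8 point: 7 useful bits in an 8-bit cell idles
           12.5% of the store. */
        return 0;
    }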
Why would you want to have a 7-bit symbol?  Powers of two seem to be
natural on a binary machine (although there is a running joke that CDC
boxes had 7-1/2 bit bytes...).
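
Part of why powers of two feel so natural is that the byte/word
bookkeeping reduces to shifts and masks.  A tiny sketch (sizes and names
are mine, purely illustrative):

    #include <stdio.h>

    #define BYTES_PER_WORD 4            /* e.g. 8-bit bytes in a 32-bit word */

    int main(void)
    {
        unsigned byte_addr = 12345;

        /* With a power-of-two bytes-per-word these are a shift and a mask;
           with, say, six 6-bit characters per word you'd need a real divide. */
        unsigned word   = byte_addr / BYTES_PER_WORD;
        unsigned offset = byte_addr % BYTES_PER_WORD;

        printf("byte %u is byte %u of word %u\n", byte_addr, offset, word);
        return 0;
    }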
I guess the real question is why did we move to binary machines at all;
were there ever any ternary machines?
-- Dave