According to Wikipedia:
The first modern, electronic ternary computer, Setun
<https://en.wikipedia.org/wiki/Setun>, was built in 1958 in the Soviet
Union at Moscow State University
<https://en.wikipedia.org/wiki/Moscow_State_University> by Nikolay
Brusentsov <https://en.wikipedia.org/wiki/Nikolay_Brusentsov>,[4][5]
and it had notable advantages over the binary
<https://en.wikipedia.org/wiki/Binary_numeral_system> computers that
eventually replaced it, such as lower electricity consumption and
lower production cost.[4] In 1970 Brusentsov built an enhanced version
of the computer, which he called Setun-70.[4] In the United States,
the ternary computing emulator Ternac
<https://en.wikipedia.org/wiki/Ternac>, working on a binary machine,
was developed in 1973.[6]:22 The ternary computer QTC-1 was developed
in Canada.[7]
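
Setun worked in balanced ternary, where each digit (a "trit") is -1, 0
or +1. Purely as an illustration, with no claim about how Setun's
hardware actually represented anything, converting an integer to
balanced ternary can be sketched in C like this:

    #include <stdio.h>

    /* Convert n to balanced ternary (trits -1, 0, +1), least
     * significant trit first; returns the trit count. */
    static int to_balanced_ternary(int n, int trits[], int max)
    {
        int len = 0;
        while (n != 0 && len < max) {
            int d = ((n % 3) + 3) % 3;  /* floor-style remainder 0..2 */
            n = (n - d) / 3;            /* exact division */
            if (d == 2) {               /* fold 2 into -1, carry 1 up */
                d = -1;
                n += 1;
            }
            trits[len++] = d;
        }
        return len;
    }

    int main(void)
    {
        int trits[32];
        int len = to_balanced_ternary(-5, trits, 32);

        /* -5 = -9 + 3 + 1: prints "-1 +1 +1" (most significant first) */
        for (int i = len - 1; i >= 0; i--)
            printf("%+d ", trits[i]);
        printf("\n");
        return 0;
    }

Part of the appeal is the symmetry: negating a number is just flipping
the sign of every trit, so there is no separate two's-complement-style
machinery for negative numbers.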
Doesn't seem like they caught on otherwise, though.
Niklas
On Wed, 3 Feb 2021 at 21:10, Dave Horsfall <dave(a)horsfall.org> wrote:
On Wed, 3 Feb 2021, Peter Jeremy wrote:
> I'm not sure that 16 (or any other 2^n) bits is that obvious up front.
Does anyone know why the computer industry wound up standardising on
8-bit bytes?
Best reason I can think of is System/360 with 8-bit EBCDIC (Ugh! Who
said that "J" should follow "I"?). I'm told that you could coerce it
into using ASCII, although I've never seen it.
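
The "J" jab is about EBCDIC's alphabet not being contiguous: the
letters sit in three punched-card zones, A-I at 0xC1-0xC9, J-R at
0xD1-0xD9 and S-Z at 0xE2-0xE9, so 'J' is not 'I' + 1 the way it is in
ASCII. A tiny C demonstration, using code points from the standard
EBCDIC table:

    #include <stdio.h>

    int main(void)
    {
        /* EBCDIC letter zones: A-I = 0xC1-0xC9, J-R = 0xD1-0xD9,
         * S-Z = 0xE2-0xE9; ASCII has all 26 letters contiguous. */
        unsigned char ebcdic_I = 0xC9;
        unsigned char ebcdic_J = 0xD1;
        unsigned char ebcdic_R = 0xD9;
        unsigned char ebcdic_S = 0xE2;

        printf("J - I = %d\n", ebcdic_J - ebcdic_I);  /* 8, not 1 */
        printf("S - R = %d\n", ebcdic_S - ebcdic_R);  /* 9, not 1 */
        return 0;
    }

It is also why the classic "c >= 'A' && c <= 'Z'" test quietly admits
a handful of non-letters on an EBCDIC machine.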
> Scientific computers were word-based and the number of bits in a word
> is more driven by the desired float range/precision. Commercial
> computers needed to support BCD numbers and typically 6-bit
> characters. ASCII (when it turned up) was 7 bits and so 8-bit
> characters wasted ⅛ of the storage. Minis tended to have shorter word
> sizes to minimise the amount of hardware.
Why would you want to have a 7-bit symbol? Powers of two seem to be
natural on a binary machine (although there is a running joke that CDC
boxes had 7-1/2 bit bytes, presumably because a 60-bit word divided
eight ways gives 7-1/2 bits apiece)...
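
On 7-bit symbols: the word machines did pack them without the wasted
eighth bit; the PDP-10, for one, fitted five 7-bit characters into a
36-bit word with a single bit to spare. A rough sketch in C, with
uint64_t standing in for the 36-bit word (the bit layout here is just
the common convention as I recall it):

    #include <stdint.h>
    #include <stdio.h>

    /* Five 7-bit characters packed into a 36-bit word, first
     * character in the top 7 bits, lowest bit left unused. */
    static uint64_t pack5(const char *s)
    {
        uint64_t w = 0;
        for (int i = 0; i < 5; i++)
            w |= (uint64_t)(s[i] & 0x7F) << (29 - 7 * i);
        return w;
    }

    static void unpack5(uint64_t w, char out[6])
    {
        for (int i = 0; i < 5; i++)
            out[i] = (w >> (29 - 7 * i)) & 0x7F;
        out[5] = '\0';
    }

    int main(void)
    {
        char buf[6];
        unpack5(pack5("HELLO"), buf);
        printf("%s\n", buf);  /* HELLO */
        return 0;
    }

Five times seven is 35, so only one bit per word goes spare, against
the full eighth of the storage lost by padding 7-bit ASCII out to
8-bit bytes.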
I guess the real question is why did we move to binary machines at all;
were there ever any ternary machines?
-- Dave