Hello all,
I've recently been improving the AT&T/Teletype DMD 5620 simulator I wrote a few years ago. It can now run either the 8;7;3 or 8;7;5 firmware. It also now supports executing a local shell or connecting directly to a physical or virtual tty device. It runs natively on Linux or macOS with X11 or Wayland, but I would love help creating a Windows version if you're a Windows programmer (I am an occasional Windows user, but I am not at all knowledgeable about Windows programming).
Full details are available here: https://loomcom.com/3b2/dmd5620_emulator.html
The source code is here: https://github.com/sethm/dmd_gtk
Many thanks go to my friend Sark (@crtdude on Twitter) for tracking down the 8;7;3 firmware and dumping it for me. I'd also like to thank Mike Haertel for helping find bugs, providing feedback, and inspiring me to get it working with Research Unix in addition to SVR3.
Feedback, bug reports, and pull requests are all welcome!
-Seth
--
Seth Morabito
Poulsbo, WA
web(a)loomcom.com
Anecdote prompted by the advent of Burroughs in this thread:
At the 1968 NATO conference on Software Engineering, the discussion
turned to language design strategies. I noted that the design of Algol
68, for example, presupposed a word-based machine, whereupon Burroughs
architect Bob Barton brought the house down with the remark, "In the
beginning was the Word, all right--but it was not a fixed number of
bits!"
[Algol 68's presupposition is visible in declarations like "long long
long ... int". An implementation need support only a limited number of
"longs", but each supported variety must have a definite maximum
value, which is returned by an "environment enquiry" function. For
amusement, consider the natural idea of implementing the longest
variety with bignums.]
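[A further aside: C inherited a similar arrangement, with <limits.h> playing
the role of the environment enquiry. A minimal sketch, purely for
illustration:

    #include <limits.h>
    #include <stdio.h>

    /* Each supported width reports a definite maximum value, much as
     * Algol 68's environment enquiries do for long long ... int. */
    int main(void)
    {
        printf("int:       %d\n", INT_MAX);
        printf("long:      %ld\n", LONG_MAX);
        printf("long long: %lld\n", LLONG_MAX);
        return 0;
    }
]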
Doug
The error was introduced on 13 September 2005, by an anonymous user from an IP address allocated to Web Perception, a Californian ISP, and (currently) geolocated to Sonoma. The change comment was:
Changes - 386BSD factual errors corrected, potentially libelous statements removed, links updated, refocus on 386BSD history, authority-386BSD authors, published works, DMR refs
The same IP address was used for a series of edits over 2005-2006, to topics including 386BSD, Lynne Jolitz, William Jolitz, and Radiocarbon Dating.
I imagine it was simply a mistake.
d
> On 10 Sep 2022, at 12:26, Grant Taylor via COFF <coff(a)tuhs.org> wrote:
>
> On 9/9/22 8:05 PM, Greg 'groggy' Lehey wrote:
>> Done.
>
> Thank you!
>
>> Do you have an idea about how this error crept in?
>
> No, I do not.
>
> I came to this article after reading about the DDJ DVD archive on the geeks mailing list. The DDJ emails caught my attention because I've been looking to acquire the issues (or at least the articles) containing the "Porting Unix to the 386" series.
>
> Now I have them! :-D
>
>
>
> --
> Grant. . . .
> unix || die
>
https://www.timeanddate.com/on-this-day/september/9
``Unix time or Unix epoch, POSIX time or Unix timestamp, is a time system
that measures the number of seconds since midnight UTC of January 1, 1970,
not counting leap seconds. At 01:46:40 UTC on September 9, 2001, Unix time
reached the billionth second timestamp.''
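That figure is easy to verify (a minimal C sketch; gmtime() renders a
time_t as UTC):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = 1000000000;    /* the billionth second */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("%s\n", buf);      /* prints 2001-09-09 01:46:40 UTC */
        return 0;
    }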
Hard to believe that it was that long ago...
-- Dave
Paul Winalski and Bakul Shah commented on bit-addressable machines
on the TUHS list recently. From Blaauw and Brooks' excellent
Computer Architecture book
http://www.math.utah.edu/pub/tex/bib/master.html#Blaauw:1997:CAC
on page 98, I find
>> ...
>> The earliest computer with bit resolution is the [IBM 7030] Stretch.
>> The Burroughs B1700 (1972) and CDC STAR100 (1973) are later examples.
>>
>> Bit resolution is costly in format space, since it uses a maximum
>> number of bits for address and length specification. Sharpening
>> resolution from the byte to the bit costs the same as increasing
>> address-space size eight-fold.
>>
>> Since almost all storage realizations are organized as matrices,
>> bit resolution is also expensive in time or equipment.
>> ...
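To see why, note that a bit address is just a byte address with three
extra low-order bits. A minimal C sketch (the helper is hypothetical,
not from the book):

    #include <assert.h>
    #include <stdint.h>

    /* A bit address = the byte address shifted left 3 places plus a
     * bit offset 0..7: three more address bits than byte resolution,
     * the same cost as an eight-fold larger byte address space. */
    static inline uint64_t bit_address(uint64_t byte_addr, unsigned bit)
    {
        assert(bit < 8);
        return (byte_addr << 3) | bit;
    }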
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
Doug McIlroy:
Bit-addressing is very helpful for manipulating characters
in a word-organized memory. The central idea of my ancient
(patented!) string macros that underlay SNOBOL was that it's
more efficient to refer to 6-bit characters as living at
bits 0,6,12,... of a 36-bit word than as being characters
0,1,2,... of the word. I've heard that this convention was
supported in hardware on the PDP-10.
====
Indeed it was. The DEC-10 had `byte pointers' as well as
(36-bit) word addresses. A byte pointer comprised an address,
a starting bit within the addressed word, and a length.
There were instructions to load and store an addressed byte
to or from a register, and to do the same while incrementing
the pointer to the next byte, wrapping to the start of the next
word if the remainder of the current word was too small.
(Bytes couldn't span word boundaries.)
Byte pointers were used routinely to process text. ASCII
text was conventionally stored as five 7-bit bytes packed
into each 36-bit word. The leftover bit was used by some
programs as a flag to mean these five characters (usually
the first of a line) were special, e.g. represented a
five-decimal-digit line number.
Byte pointers were used to access Sixbit characters as
well (each character six bits, so six to the word,
character set comprising the 64-character subset of
ASCII starting with 0 == space).
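For those who never met a DEC-10, here is a rough C sketch of the
mechanism (the struct layout and names are invented for illustration;
real byte pointers packed all three fields into a single 36-bit word):

    #include <stdint.h>

    /* PDP-10-style byte pointer: a word address plus the position and
     * size of a byte within the 36-bit addressed word. */
    struct byte_ptr {
        uint32_t addr;  /* word address */
        unsigned pos;   /* bit offset of the byte from the left, 0..35 */
        unsigned size;  /* byte size in bits, e.g. 7 for packed ASCII */
    };

    /* LDB-style load: extract the addressed byte; each 36-bit word is
     * held in the low bits of a uint64_t. */
    static unsigned load_byte(const uint64_t *mem, struct byte_ptr p)
    {
        unsigned shift = 36 - p.pos - p.size;
        return (unsigned)((mem[p.addr] >> shift) & ((1u << p.size) - 1));
    }

    /* ILDB-style increment-and-load: step to the next byte, moving to
     * the next word when too few bits remain, since bytes never span
     * word boundaries. */
    static unsigned incr_load_byte(const uint64_t *mem, struct byte_ptr *p)
    {
        if (p->pos + 2 * p->size > 36) {  /* next byte won't fit */
            p->pos = 0;
            p->addr++;
        } else {
            p->pos += p->size;
        }
        return load_byte(mem, *p);
    }

With 7-bit bytes this packs five to the word and leaves the last bit
spare, exactly the flag bit described above.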
Norman Wilson
Toronto ON
(spent about four years playing with TOPS-10 before
growing up to play with UNIX)
Andrew Hume:
If I recall correctly, V1 of Unix had time measured in milliseconds.
Were folks that sure that this would change before wrap-around?
====
Not milliseconds (which were infinitesimally small to the
computers of 1969!) but clock ticks, 60 per second.
Initially such times were stored in a pair of 18-bit PDP-7
words, giving a lifetime of about 36 years, so not so bad.
The PDP-11's 16-bit words made that a 32-bit representation,
or about two and a quarter years before overflow. Which
explains why the time base was updated a few times in early
days, then the representation changed to whole seconds, which
in 32 bits would last about as long as 36 bits of 60 Hz ticks.
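The arithmetic is easy to check (a small C sketch using the figures
above):

    #include <stdio.h>

    int main(void)
    {
        const double year = 365.25 * 24 * 60 * 60;  /* seconds per year */
        printf("36-bit 60 Hz ticks:    %.1f years\n",
               (double)(1ULL << 36) / 60 / year);   /* ~36.3 */
        printf("32-bit 60 Hz ticks:    %.1f years\n",
               (double)(1ULL << 32) / 60 / year);   /* ~2.3 */
        printf("32-bit signed seconds: %.1f years\n",
               (double)(1ULL << 31) / year);        /* ~68 */
        return 0;
    }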
The PDP-7 convention is documented only in the source code,
so far as I know. The evolution of time on the PDP-11 can
be tracked in time(II) in old manuals; the whole-seconds
representation first appears in the Fourth Edition.
Norman Wilson
Toronto ON
Not that old a timer, but I once looked into old time.
> From: Jim Capp
> See "The Preparation of Programs for an Electronic Digital Computer",
> by Maurice V. Wilkes, David J. Wheeler, and Stanley Gill
Blast! I looked in the index of my copy (ex the Caltech CS Dept Library :-),
but didn't find 'word' there!
Looking a little further, Turing's ACE Report, from 1946, uses the term
(section 4, pg. 25; "minor cycle, or word"). My copy, the one edited by
Carpenter and Doran, has a note #1 by them, "Turing seems to be the first
user of 'word' with this meaning." I have Brian's email, I can ask him how
they came to that determination, if you'd like.
There aren't many things older than that! I looked quickly through the "First
Draft of a Report on the EDVAC", 1945 (reprinted in "From ENIAC to UNIVAC",
by Stern), but did not see 'word' there. It does use the term "minor cycle", though.
Other places worth checking are the IBM/Harvard Mark I, the ENIAC and ...
I guess there's not much else! Oh, there was a relay machine at Bell, too.
The Atanasoff-Berry computer?
> From: "John P. Linderman"
> He claims that if you wanted to do decimal arithmetic on a binary
> machine, you'd want to have 10 digits of accuracy to capture the 10
> digit log tables that were then popular.
The EDVAC draft talks about needing 8 decimal digits (Appendix A, pg. 190);
apparently von Neumann knew that that's how many digits one needed for
reasonable accuracy in differential equations. That is 27 "binary digits",
since 8 x log2(10) is about 26.6 (apparently 'bit' hadn't been coined yet).
Noel
> Doug or anyone, why do bit pointers make sense? Why?
Bit-addressing is very helpful for manipulating characters
in a word-organized memory. The central idea of my ancient
(patented!) string macros that underlay SNOBOL was that it's
more efficient to refer to 6-bit characters as living at
bits 0,6,12,... of a 36-bit word than as being characters
0,1,2,... of the word. I've heard that this convention was
supported in hardware on the PDP-10.
In the IBM 7030 floats and ints were word-addressed. But
those addresses could be extended past the "decimal point"
to refer to bits. Bits were important. The computer was designed
in parallel with the Harvest streaming "attachment" for
NSA. Harvest was basically intended to gather statistics useful
in code-breaking, such as frequency counts and autocorrelations,
for data typically encoded in packed 5- to 8-bit characters. It
was controlled by a 20-word "setup" that specified operations on
rectangular and triangular indexing patterns in multidimensional
arrays. Going beyond statistics, one of the operations was SQML
(sequential multiple lookup) where each character was looked
up in a table that specified a replacement and a next table--a
spec for an arbitrary Turing machine that moved its tape at
byte-streaming speed!
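In modern terms, SQML amounts to a table-driven finite-state transducer.
A toy C sketch of the shape (the table format and names are invented;
Harvest's actual setup words were far richer):

    #include <stddef.h>

    enum { NTABLES = 4, NCHARS = 256 };

    /* Each entry gives a replacement character and the table to use
     * for the next input character; the tables themselves would be
     * filled in by the "setup". */
    struct entry {
        unsigned char replacement;
        unsigned char next_table;  /* 0 .. NTABLES-1 */
    };

    static struct entry tables[NTABLES][NCHARS];

    /* One lookup per input character; the chained next_table field is
     * what makes the lookup behave like a state machine running at
     * byte-streaming speed. */
    static void sqml(const unsigned char *in, unsigned char *out, size_t n)
    {
        unsigned char t = 0;
        for (size_t i = 0; i < n; i++) {
            struct entry e = tables[t][in[i]];
            out[i] = e.replacement;
            t = e.next_table;
        }
    }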
Doug
> Well, you can imagine what happened when the leading digit changed
> from an ASCII "9" to an ASCII "1". Oops.
I first saw a time-overflow bug more than 60 years ago. Accounting
went haywire in the Bell Labs comp center on day 256 of the year,
when the encoded output of a new time clock reached the sign bit.
Doug