Having just been given a Depraz mouse, I thought it would be fun to get it working on my modern computer. Since the DE9 connector is male rather than the female one you usually see on serial mice, and given its age, I suspect it might use a custom protocol; in any case, plugging it into a USB-serial converter and firing up picocom has given me nothing.
Does anyone have a copy of a manual for it, or more information on how to interface with it? If I knew how it was wired and what the protocol looked like, I expect I could make an adapter pretty trivially using a microcontroller.
Thanks,
john
What do folks think about event-driven programming as a substitute for threads in UI and process control settings?
I wrote the service processor code for the SiCortex machines using libevent.a and I thought it was very lightweight and fairly easy to think about. (This was a thing running on uClinux on a tiny 16 MB ColdFire that managed the consoles and power supplies and temp sensors and JTAG access and booting and so forth.)
Tk (IIRC) has a straightforward event driven model for UI interactions.
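For anyone who hasn't worked in that style, here is a minimal sketch of the model (not the SiCortex code, just an illustration using today's libevent 2 API): one thread, one loop, a callback per file descriptor.

#include <event2/event.h>
#include <stdio.h>
#include <unistd.h>

/* Called by the event loop whenever stdin is readable. */
static void on_stdin(evutil_socket_t fd, short what, void *arg)
{
    struct event_base *base = arg;
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);

    if (n <= 0) {                       /* EOF or error: stop the loop */
        event_base_loopbreak(base);
        return;
    }
    write(STDOUT_FILENO, buf, n);       /* echo, standing in for real work */
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct event *ev = event_new(base, STDIN_FILENO,
                                 EV_READ | EV_PERSIST, on_stdin, base);

    event_add(ev, NULL);                /* no timeout */
    event_base_dispatch(base);          /* single-threaded event loop */
    event_free(ev);
    event_base_free(base);
    return 0;
}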
Meanwhile, the Dropbox plugin for my Mac has 120 threads running. WTF?
This was triggered by the fork/spawn discussion.
-Larry
(started with Unix at V6 on an 11/34)
> spawn() beats fork()[;] fork() should be deprecated
Spawn is a further complication of exec, one that specifies which
signals and file descriptors to inherit in addition to what arguments
and environment variables to pass.
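To see the complication concretely, a minimal posix_spawn(3) sketch
(file names here are invented) needs two extra control objects, one
for file descriptors and one for signals, before it even gets to argv
and envp:

#include <spawn.h>
#include <signal.h>
#include <fcntl.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    posix_spawn_file_actions_t fa;
    posix_spawnattr_t attr;
    sigset_t defsigs;
    char *argv[] = { "ls", "-l", (char *)0 };

    /* file descriptors: open out.txt as the child's stdout (fd 1) */
    posix_spawn_file_actions_init(&fa);
    posix_spawn_file_actions_addopen(&fa, 1, "out.txt",
                                     O_WRONLY | O_CREAT | O_TRUNC, 0644);

    /* signals: reset all dispositions to default in the child */
    posix_spawnattr_init(&attr);
    sigfillset(&defsigs);
    posix_spawnattr_setsigdefault(&attr, &defsigs);
    posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETSIGDEF);

    if (posix_spawn(&pid, "/bin/ls", &fa, &attr, argv, environ) != 0)
        return 1;
    waitpid(pid, (int *)0, 0);
    posix_spawn_file_actions_destroy(&fa);
    posix_spawnattr_destroy(&attr);
    return 0;
}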
Fork has a place. For example, Program 1 in
www.cs.dartmouth.edu/~doug/sieve/sieve.pdf forks like crazy and never
execs. To use spawn, the program would have to be split in three (or
be passed a switch setting).
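A rough sketch of the shape of such a program (not the paper's
Program 1 verbatim): each stage prints a prime, forks its successor,
and filters multiples; nothing ever execs.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void stage(int in)
{
    int p, n, pd[2];

    if (read(in, &p, sizeof p) != sizeof p)
        exit(0);                   /* upstream closed: all done */
    printf("%d\n", p);
    fflush(stdout);                /* don't duplicate buffered output across fork */
    if (pipe(pd) < 0)
        exit(1);
    if (fork() == 0) {             /* child becomes the next stage */
        close(pd[1]);
        close(in);
        stage(pd[0]);
    }
    close(pd[0]);
    while (read(in, &n, sizeof n) == sizeof n)
        if (n % p != 0)
            write(pd[1], &n, sizeof n);
    close(pd[1]);                  /* EOF propagates down the line */
    wait(NULL);
    exit(0);
}

int main(void)
{
    int pd[2], n;

    if (pipe(pd) < 0)
        return 1;
    if (fork() == 0) {
        close(pd[1]);
        stage(pd[0]);
    }
    close(pd[0]);
    for (n = 2; n <= 100; n++)     /* feed the candidates */
        write(pd[1], &n, sizeof n);
    close(pd[1]);
    wait(NULL);
    return 0;
}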
While you may dismiss Program 1 as merely a neat demo, the same idea
applies in parallelizing code for use in a multiprocessor world.
Doug
Another issue I have run into is recursion, when a reference includes
another reference. This comes up in various forms:
Also published as ...
Errata available at ...
Summarized [or reviewed] in ...
Preprint available at ...
Often such a reference takes the form of a URL or a page in a journal
or in a proceedings. This can be most succinctly placed in situ --
formatted consistently with other references. If the reference
identifies an alternate mode of publication, it may lack %A or %T
fields.
Partial proposal: a %O field may contain a reference, with no further
recursion. The contained reference will be formatted in line in the %O
text unless references are accumulated and the contained reference is
not unique.
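For instance, a hypothetical entry under this proposal (all fields
invented, and the embedding syntax shown is only one possibility)
might read:

%A A. N. Author
%T A Hypothetical Result
%J Journal of Examples
%D 1984
%O Also published as %J Proc. Example Conf. %P 12-19 %D 1985

Here the %O text carries a contained reference that, identifying only
an alternate mode of publication, lacks %A and %T fields.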
Doug
> Go gets us part of the way there, but cross-machine messaging is still a mess.
Shed a tear for Plan 9 (Pike yet again). While many of its secondary
innovations have been stuffed into Linux, its animating
principle--transparently distributable computing--could not overcome
the enormous inertia of installed BSD-model systems.
Doug
I have considerable sympathy with the general idea of formally
specifying and parsing inputs. Langsec people make a strong case
for doing so. The white paper, "A systematic approach to modern
Unix-like command interfaces", proposes to "simplify parsing by
facilitating the creation of easy-to-use 'grammar-based' parsers".
I'm not clear on what is meant by "parser". A plain parser is a
beast that builds a parse tree according to a grammar. For most
standard Unix programs, the parse tree has two kinds of leaves:
non-options and options with k parameters. Getopt restricts
k to {0,1}.
Aside from fiddling with argc and argv, I see little difference
between working with a parse tree for arguments that could be
handled by getopt and using getopt directly.
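For instance, a plain getopt(3) loop already amounts to walking that
two-leaf tree, with k = 0 ("-v") and k = 1 ("-o file") options
followed by the non-options:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int c, vflag = 0;
    char *outfile = NULL;

    /* each case is one option leaf: k = 0 for -v, k = 1 for -o */
    while ((c = getopt(argc, argv, "vo:")) != -1) {
        switch (c) {
        case 'v': vflag = 1; break;
        case 'o': outfile = optarg; break;
        default:  return 2;
        }
    }
    if (vflag)
        printf("verbose\n");
    if (outfile)
        printf("output file: %s\n", outfile);

    /* the remaining argv entries are the non-option leaves */
    for (; optind < argc; optind++)
        printf("operand: %s\n", argv[optind]);
    return 0;
}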
A more general parser could handle more elaborate grammatical
constraints on options, for example, field specs in sort(1),
requirements on presence of options in tar(1), or representation
of multiple parameters in cut(1).
In realizing the white paper's desire to "have the parser
provide the results to the program", it's likely that the mechanism
will, like Yacc, go beyond parsing and invoke semantic actions
as it identifies tree nodes.
Pioneer Yaccification of some commands might be a worthy demo.
Doug
> On 7/31/21, Michael Siegel <msi(a)malbolge.net> wrote:
> The TOPS-20 COMND JSYS implemented both of these features, and I
> think that command completion was eventually added to the VMS
> command interpreter, too.
FYI, there is also a Unix version of the COMND JSYS capability. It was
developed at Columbia University as part of their "mm" mail manager. It
is located in the ccmd subdirectory in the mm.tar.gz file.
url: https://www.kermitproject.org/mm/
ftp: ftp://ftp.kermitproject.org/kermit/mm/mm.tar.gz
-ron
Besides C-Kermit on Unix systems, the TOPS-20 command interface is
used inside the mm mail client, which I've been using for decades on
TOPS-20, VMS, and several flavors of Unix:
http://www.math.utah.edu/pub/mm
mm doesn't handle attachments or do fancy display of HTML, and thus
cannot do anything nasty in response to incoming mail messages.
I rarely need to extract an attachment; when I do, I save the message
in a temporary file and run munpack on it.
Here are some small snippets of its inline help:
MM] read (messages) ? message number
or range of message numbers, n:m
or range of message numbers, n-m
or range of message numbers, n+m (m messages beginning with n)
or "." to specify the current message
or "*" to specify the last message
or message sequence, one of the following:
after all answered before
current deleted flagged from
inverse keyword last longer
new on previous-sequence recent
seen shorter since subject
text to unanswered undeleted
unflagged unkeyword unseen
or "," and another message sequence
R] read (messages) flagged since yesterday
[message(s) appear here]
MM] headers (messages) since monday longer (than) 100000
[list of long messages here]
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> TUHS list (Rob Pike, Aug 2019)
>
> I think it was slightly later. I joined mid-1980 and VAXes to replace the
> 11/70 were being discussed but had not arrived. We needed to convert a lab
> into a VAX machine room and decide between BSD and Reiser, all of which
> happened in the second half of 1980.
>
> Reiser Unix got demand paging a little later, and it was spectacularly
> fast. I remember being gobsmacked when I saw a demo in early 1981.
>
> Dead ends everywhere.
I think I have figured out why 32V R3 was so fast (assuming my current understanding of how it must have worked is correct).
Its VM subsystem tags each memory frame with its disk mirror location, be it in swap or in the file system. A page can quickly be found, as frames are hashed on device and block number. This is true both for pages in the working set and pages on the 2nd chance list. Effectively, most of core is a disk cache.
In a unified buffer design, the buffer code would first look for an existing buffer header for the requested disk block, as in V7. If not found, it would check the page frame list for that block; on a hit, it would connect the frame to an empty buffer header, increment its use count, and move it to the working set. If not found there either, the block would be loaded from disk as usual. When a buffer is released, the frame use count would be decremented; at zero, the page frame would be put back on the 2nd chance list and the buffer header marked empty. With this approach, up to 4 MB of the disk could be cached in RAM.
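A toy user-space illustration of that lookup order (all names invented;
buffer headers are elided, and this is a sketch of the idea, not 32V R3
code):

#include <stdio.h>
#include <string.h>

#define NFRAME 64
#define NHASH  32
#define PGSIZE 512

/* One physical page frame, tagged with its disk mirror location. */
struct frame {
    int dev, blkno;
    int valid;
    int usecount;              /* > 0: in a working set; 0: on 2nd chance list */
    struct frame *hnext;       /* hash chain on (dev, blkno) */
    char data[PGSIZE];
};

static struct frame frames[NFRAME];
static struct frame *framehash[NHASH];
static int nextvictim;         /* crude stand-in for 2nd-chance replacement */

static unsigned hash(int dev, int blkno)
{
    return (unsigned)(dev * 31 + blkno) % NHASH;
}

static struct frame *framelookup(int dev, int blkno)
{
    struct frame *fp;
    for (fp = framehash[hash(dev, blkno)]; fp != NULL; fp = fp->hnext)
        if (fp->valid && fp->dev == dev && fp->blkno == blkno)
            return fp;
    return NULL;
}

static void unhash(struct frame *fp)
{
    struct frame **pp;
    for (pp = &framehash[hash(fp->dev, fp->blkno)]; *pp; pp = &(*pp)->hnext)
        if (*pp == fp) { *pp = fp->hnext; return; }
}

/* The lookup order described above: frame already in core (no I/O),
   else a real disk read into a stolen frame. */
static struct frame *getblk(int dev, int blkno)
{
    struct frame *fp = framelookup(dev, blkno);
    if (fp != NULL) {          /* in core, maybe on the 2nd chance list */
        fp->usecount++;        /* pull it back into the working set */
        return fp;
    }
    fp = &frames[nextvictim++ % NFRAME];  /* real kernel: skip in-use frames */
    if (fp->valid)
        unhash(fp);
    fp->dev = dev; fp->blkno = blkno; fp->valid = 1; fp->usecount = 1;
    fp->hnext = framehash[hash(dev, blkno)];
    framehash[hash(dev, blkno)] = fp;
    printf("disk read: dev %d, block %d\n", dev, blkno);
    memset(fp->data, 0, PGSIZE);          /* pretend I/O */
    return fp;
}

static void brelse(struct frame *fp)
{
    fp->usecount--;    /* at 0 the frame stays hashed: still a cache hit */
}

int main(void)
{
    struct frame *fp = getblk(1, 100);    /* first touch: disk read */
    brelse(fp);
    fp = getblk(1, 100);                  /* second touch: served from core */
    brelse(fp);
    return 0;
}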
Early in 1981, most binaries and files were a few dozen KB in size. The shell, editor, compiler tool chain, library files, intermediate files, etc. would all have fitted in RAM at once. In a developer-focused demo, once memory was primed, the system would effectively run from RAM, barely hitting the disk, even with tens of concurrent logins. Also, something like "ls -l /bin" would have been much faster on its second run.
It puts a comment from JFR in a clearer context:
<quote>
Strict LRU on 8,192 pages, plus Copy-On-Write, made the second reference to a page "blindingly fast".
<unquote>
So far I had read this in the context of the paging algorithm, where it is hard to understand (is LRU really that much better than NRU?). In the context of a unified buffer and disk pages, it makes a lot more sense. Even the CoW part: as the (clean) data segment of executables would still be in core, they could start without reloading from disk; CoW would create copies as needed.
===
The interesting question now is: if this buffer unification was so impressive, why was it abandoned in SVr2-vax? I can think of 3 reasons:
1. Maybe there was a subtle bug that was hard to diagnose. "Research" opting for the BSD memory system "as it did not want to do the maintenance" suggests that there may have been lingering issues.
1b. A variation of this: JFR mentioned that his implementation of unified buffers broke conceptual layering. USG do not strike me as purists, but maybe they thought the code was too messy to maintain.
2. Maybe there was an unintended semantic issue (e.g. you can lock a buffer, but not an mmap'ed page).
3. Maybe it was hard to come up with a good sync() policy, making database work risky (and system crashes more devastating to the file system).
JFR mentioned that he did the design and implementation for 32V R3 in about 3 months, with 3 more months for bug fixing and polishing. That is not a lot of time for such a big and complex kernel mod (for its time).