Hi folks,
I'm finding it difficult to find any direct sources on the question in
the subject line.
Does anyone here have any source material they can point me to
documenting the existence of a port of BSD curses to Unix Version 7?
I know that curses made it into 2.9BSD for the PDP-11, but that's not
quite the same thing.
There are comments in System V Release 2's curses.h file[1][2] (very
different from 4BSD's[3]) that suggest some effort to accommodate
Version 7's terminal driver. So I would _presume_ that curses got
ported to Version 7. But that's System V, right when it started
diverging from BSD curses, and moreover, presumption is not evidence.
Even personal accounts/anecdotes would be helpful. Maybe some of you
_wrote_ curses applications for Version 7 machines.
Regards,
Branden
[1] System III apparently did not have curses at all. Both it and 4BSD
were released in 1980. System V Release 1 doesn't seem to have it, either.
[2] https://github.com/ryanwoodsmall/oldsysv/blob/master/sysvr2-vax/include/cur…
[3] https://minnie.tuhs.org/cgi-bin/utree.pl?file=4BSD/usr/include/curses.h
with my pedantic head on…
The “7th Edition” was the name of the Perkin Elmer port (née Interdata), derived from Richard Miller’s work.
This was Unix Version 7 from the labs, with a v6 C compiler, and with vi, csh, and curses from 2.4BSD (though we were never 100% sure about this version).
You never forget your first Unix :-)
-Steve
All,
I can't believe it's been 9 years since I wrote up my original notes on
getting Research Unix v7 running in SIMH. Crazy how time flies. Well,
this past week Clem found a bug in my scripts that create tape images.
It seems they were missing a tape mark at the end. Not a showstopper
by any means, but we like to keep a clean house. So, I applied his fixes
and updated the scripts along with the resultant tape image and Warren
has updated them in the archive:
https://www.tuhs.org/Archive/Distributions/Research/Keith_Bostic_v7/
I've also updated the note to address the fixes, to use the latest
version of Open-SIMH on Linux Mint 21.3 "Virginia" (my host of choice
these days), and to bring the transcripts up to date:
https://decuser.github.io/unix/research-unix/v7/2024/05/23/research-unix-v7…
Later,
Will
Well, this is obviously a hot-button topic. AFAIK I was nearby when fuzz-testing for software was invented. I was the main advocate for hiring Andy Payne into the Digital Cambridge Research Lab. One of his little projects was a thing that generated random but correct C programs and fed them to different compilers, or to compilers with different switches, to see if they crashed or generated incorrect results. Overnight, his tester filed 300 or so bug reports against the Digital C compiler. This was met with substantial pushback, but mostly the issue was that many of the reports traced to the same underlying bugs.
Bill McKeeman expanded the technique and published "Differential Testing for Software" https://www.cs.swarthmore.edu/~bylvisa1/cs97/f13/Papers/DifferentialTesting…
Andy had encountered the underlying idea while working as an intern on the Alpha processor development team. Among many other testers, they used an architectural tester called REX to generate more or less random sequences of instructions, which were then run through different simulation chains (functional, RTL, cycle-accurate) to see if they did the same thing. Finding user-accessible bugs in hardware seems like a good thing.
The point of generating correct programs (mentioned under the term LangSec here) is that it goes a long way toward avoiding irritating the maintainers. Making the test cases short is also maintainer-friendly. The test generator is likewise in a position to annotate the source with exactly what it is supposed to do, which helps as well.
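The idea sketched above can be shown in miniature. This is not Andy Payne's tool (which generated whole C programs and drove real compilers); it is a toy in which all names are invented for illustration: generate random but well-formed arithmetic expressions, evaluate each with two independent evaluators, and keep any input on which they disagree as a short, self-explanatory test case.

```python
import random

def gen_expr(depth=3):
    """Generate a random but syntactically valid integer expression."""
    if depth == 0 or random.random() < 0.3:
        return str(random.randint(0, 9))
    op = random.choice(["+", "-", "*"])
    return f"({gen_expr(depth - 1)} {op} {gen_expr(depth - 1)})"

def eval_reference(src):
    # "Reference implementation": the host language's own evaluator.
    return eval(src)

def eval_alternate(src):
    # Second, independent implementation: a tiny recursive parser.
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0
    def parse():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            left = parse()
            op = tokens[pos]; pos += 1
            right = parse()
            pos += 1  # consume ")"
            return {"+": left + right,
                    "-": left - right,
                    "*": left * right}[op]
        return int(tok)
    return parse()

def differential_test(n):
    """Run n random inputs through both evaluators; return disagreements."""
    mismatches = []
    for _ in range(n):
        src = gen_expr()
        if eval_reference(src) != eval_alternate(src):
            mismatches.append(src)  # each one is a short, ready-made bug report
    return mismatches

print(differential_test(200))  # both evaluators agree, so this prints []
```

The real payoff, as with REX and the compiler tester, comes when the two implementations are built by different people with different techniques, so a disagreement almost always points at a genuine bug in one of them.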
-L
I'm surprised by nonchalance about bad inputs evoking bad program behavior.
That attitude may have been excusable 50 years ago. By now, though, we have
seen so much malicious exploitation of open avenues of "undefined behavior"
that we can no longer ignore bugs that "can't happen when using the tool
correctly". Mature software should not brook incorrect usage.
"Bailing out near line 1" is a sign of defensive precautions. Crashes and
unjustified output betray their absence.
I commend attention to the LangSec movement, which advocates for rigorously
enforced separation between legal and illegal inputs.
Doug
>> Another non-descriptive style of error message that I admired was that
>> of Berkeley Pascal's syntax diagnostics. When the LR parser could not
>> proceed, it reported where, and automatically provided a sample token
>> that would allow the parsing to progress. I found this uniform
>> convention to be at least as informative as distinct hand-crafted
>> messages, which almost by definition can't foresee every contingency.
>> Alas, this elegant scheme seems not to have inspired imitators.
> The hazard with this approach is that the suggested syntactic correction
> might simply lead the user farther into the weeds
I don't think there's enough experience to justify this claim. Before I
experienced the Berkeley compiler, I would have thought such bad outcomes
were inevitable in any language. Although the compiler's suggestions often
bore little or no relationship to the real correction, I always found them
informative. In particular, the utterly consistent style assured there was
never an issue of ambiguity or of technical jargon.
The compiler taught me Pascal in an evening. I had scanned the Pascal
Report a couple of years before but had never written a Pascal program.
With no manual at hand, I looked at one program to find out what
mumbo-jumbo had to come first and how to print integers, then wrote the
rest by trial and error. Within a couple of hours I had a working program
good enough to pass muster in an ACM journal.
An example arose that one might think would lead "into the weeds". The
parser balked before 'or' in a compound Boolean expression like 'a=b and
c=d or x=y'. It couldn't suggest a right paren because no left paren had
been seen. Whatever suggestion it did make (perhaps 'then') was enough to
lead me to insert a remote left paren and teach me that parens are required
around Boolean-valued subexpressions. (I will agree that this lesson might
be less clear to a programming novice, but so might be many conventional
diagnostics, e.g. "no effect".)
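The convention can be sketched in a few dozen lines. This is emphatically not the Berkeley compiler's actual recovery algorithm; the grammar, token set, and the diagnose helper are all invented for illustration. A toy recursive-descent parser for Pascal-style Boolean expressions reports the position where it balked and tries each token of the grammar at that point, suggesting one that lets the parse get farther:

```python
# Toy grammar (Pascal-like precedence: relations are non-associative):
#   expression := simple [ '=' simple ]
#   simple     := term { 'or' term }
#   term       := factor { 'and' factor }
#   factor     := identifier | '(' expression ')'
TOKENS = ["and", "or", "=", "(", ")"]

class SyntaxErrorAt(Exception):
    def __init__(self, pos):
        self.pos = pos

class Parser:
    def __init__(self, toks):
        self.toks, self.pos = toks, 0
    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None
    def eat(self, tok):
        if self.peek() != tok:
            raise SyntaxErrorAt(self.pos)
        self.pos += 1
    def factor(self):
        t = self.peek()
        if t == "(":
            self.pos += 1
            self.expression()
            self.eat(")")
        elif t is not None and t.isalpha() and t not in ("and", "or"):
            self.pos += 1
        else:
            raise SyntaxErrorAt(self.pos)
    def term(self):
        self.factor()
        while self.peek() == "and":
            self.pos += 1
            self.factor()
    def simple(self):
        self.term()
        while self.peek() == "or":
            self.pos += 1
            self.term()
    def expression(self):
        self.simple()
        if self.peek() == "=":
            self.pos += 1
            self.simple()
    def parse(self):
        self.expression()
        if self.pos != len(self.toks):
            raise SyntaxErrorAt(self.pos)

def diagnose(tokens):
    """Return None on success, else (error position, suggested token)."""
    try:
        Parser(tokens).parse()
        return None
    except SyntaxErrorAt as e:
        # Try inserting each grammar token at the error point; suggest the
        # first one that completes the parse or gets past the bad token.
        for cand in TOKENS:
            patched = tokens[:e.pos] + [cand] + tokens[e.pos:]
            try:
                Parser(patched).parse()
                return (e.pos, cand)
            except SyntaxErrorAt as e2:
                if e2.pos > e.pos + 1:
                    return (e.pos, cand)
        return (e.pos, None)

print(diagnose("a = ( b and c or x".split()))       # suggests inserting ')'
print(diagnose("a = b and c = d or x = y".split()))  # no single token helps
```

Fittingly, on the 'a=b and c=d or x=y' example this toy finds no single-token repair, echoing the point that the real fix was a remote left paren; a genuinely missing right paren, by contrast, gets the obvious suggestion.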
Doug
I just revisited this ironic echo of Mies van der Rohe's aphorism, "Less is
more".
% less --help | wc
298
Last time I looked, the line count was about 220. Bloat is self-catalyzing.
What prompted me to look was another disheartening discovery. The "small
special tool" Gnu diff has a 95-page manual! And it doesn't cover the
option I was looking up (-h). To be fair, the manual includes related
programs like diff3(1), sdiff(1) and patch(1), but the original manual for
each fit on one page.
Doug
> was ‘usage: ...’ adopted from an earlier system?
"Usage" was one of those lovely ideas, one exposure to which flips its
status from unknown to eternal truth. I am sure my first exposure was on
Unix, but I don't remember when. Perhaps because it radically departs from
Ken's "?" in qed/ed, I have subconsciously attributed it to Dennis.
The genius of "usage" and "?" is that they don't attempt to tell one what's
wrong. Most diagnostics cite a rule or hidden limit that's been violated or
describe the mistake (e.g. "missing semicolon"), sometimes raising more
questions than they answer.
Another non-descriptive style of error message that I admired was that of
Berkeley Pascal's syntax diagnostics. When the LR parser could not proceed,
it reported where, and automatically provided a sample token that would
allow the parsing to progress. I found this uniform convention to be at
least as informative as distinct hand-crafted messages, which almost by
definition can't foresee every contingency. Alas, this elegant scheme seems
not to have inspired imitators.
Doug