On 02/09/2017, Dave Horsfall <dave(a)horsfall.org> wrote:
> On Thu, 31 Aug 2017, Nemo wrote:
>
>> (By the way, why all the ^M in the files?)
>
> Err, because CR/LF was the line terminator?
Hhhmmm... This raises the historical question: When did LF replace CR/LF in UNIX?
N.
>
> --
> Dave Horsfall DTM (VK2KFU) "Those who don't understand security will
> suffer."
>
> When did LF replace CR/LF in UNIX?
Never. Unix always took LF as newline--an interpretation
blessed by the ASCII standard. The convention was
inherited from Multics.
Interpolation of CRs was the business of drivers, not file
formats.
As far as I know, the only CR/LF terminal that original
Unix dealt with was the model 33 console. That was identified
by the fact that the login name was received in all caps.
IIRC the TTY 37 conformed to Multics practice on the advice
of Joe Ossanna.
Because of the model 33, login names were case-insensitive.
Come to think of it, I don't know whether they still are
in general, though they must be for email to be delivered by
login name.
Doug
> As I recall, the original definition of ASCII suggested that the
> LF character was either "line feed" or "new line", and that if it
> *was* new-line, it would be stand-alone.
I have put a copy of the original ASCII standard (scanned images) at
http://www.cogsci.ed.ac.uk/~richard/ascii.tar
I don't remember where I got it from.
-- Richard
> So we have some source (that you can have as well, Apache license)
> that handles not strings but vectors of strings (or structs or whatever).
>
> I need to write up how it works, it's sort of clever. It stores both
> the size of the vector (always a power of 2) in part of the bits of the
> first entry and the actual length in the other part. Because we can
> express powers of two in very little space we can support both values
> in a 32 bit unsigned with a max length of used space of around 134M.
I have something similar. It allocates space for two ints (number
allocated and used) at ((int *)array)[-1] and [-2].
Typical use is

	LTVectorAndInit(char *, names);		/* declare and initialise a vector of strings */
	while (...)
		LTVectorPush(names, s);		/* append, growing as needed */
	for (i = 0; i < LTVectorCount(names); i++)
		... names[i] ...;		/* indexes like a plain array */
	LTVectorFree(names);
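For the curious, here is a minimal sketch of how such macros might be
implemented; the names mirror the usage above, but the bodies are a
guess at the technique, not Richard's actual code:

	#include <stdlib.h>
	#include <string.h>

	/* The two counts hide just below the pointer the caller holds,
	 * so the vector still indexes like a plain array.  Assumes the
	 * two-int header doesn't break element alignment (it's 8 bytes
	 * on typical machines). */
	#define LTV_USED(v)  (((int *)(v))[-2])	/* slots in use */
	#define LTV_ALLOC(v) (((int *)(v))[-1])	/* slots allocated */

	static void *ltv_new(size_t esize)
	{
		int *p = malloc(2*sizeof(int) + 8*esize);
		p[0] = 0;		/* used */
		p[1] = 8;		/* allocated */
		return p + 2;		/* caller sees only the array */
	}

	static void *ltv_push(void *v, const void *elem, size_t esize)
	{
		int used = LTV_USED(v);
		if (used == LTV_ALLOC(v)) {	/* full: double and move */
			int all = 2*LTV_ALLOC(v);
			int *p = realloc((int *)v - 2, 2*sizeof(int) + all*esize);
			v = p + 2;
			LTV_ALLOC(v) = all;
		}
		memcpy((char *)v + used*esize, elem, esize);
		LTV_USED(v) = used + 1;
		return v;
	}

	#define LTVectorAndInit(type, v) type *v = ltv_new(sizeof(type))
	#define LTVectorPush(v, e)	((v) = ltv_push((v), &(e), sizeof(*(v))))
	#define LTVectorCount(v)	LTV_USED(v)
	#define LTVectorFree(v)		free((int *)(v) - 2)

(Error handling omitted.  Larry's variant above packs the same two
facts into one 32-bit word instead: since the allocation is always a
power of two, its log2 fits in 5 bits, leaving 27 bits for the used
count -- hence the 2^27, roughly 134M, limit.)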
-- Richard
> > p) Remove the '-a' option (the ASCII approximation output).
>
> I didn't even know this existed. Looking at what it spits out, I find
> myself wondering what good it is. Is this for Unix troff compatibility?
> For people who didn't even have glass TTYs and needed to imagine what
> the typeset output would look like?
Here's a classic use:
Since groff is not WYSIWYG, the experimental cycle is long:
edit - save - groff - view. In many cases that cycle can be
short-circuited by typing straight into groff -a.
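For instance, something like (the file name here is made up; pick the
macro flag to match the document):

	$ groff -a -ms paper.ms | more

or type troff input straight at it and end with control-D:

	$ groff -a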
doug
Call me old-fashioned, but I still think the papers in Volume 2
of the Seventh Edition manual are a good straightforward start.
There's a tutorial on troff, and papers introducing eqn, tbl,
and refer.
Norman Wilson
Toronto ON
Hi,
Can anyone point to an introduction to {t,r,g}roff / pic / tbl / etcetera?
I've respected them for years and with all the latest discussions about
them I'd like to try and learn something.
Any pointers would be appreciated.
--
Grant. . . .
unix || die
Steve Johnson:
I think of Alan Demers's comment: "There are two kinds of programming
languages, those that make it easy to write good programs, and those
that make it hard to write bad ones."
====
I'm (still) with Larry Flon on this one:
There does not now, nor will there ever, exist a programming language
in which it is the least bit hard to write bad programs.
-- SIGPLAN Notices, October 1975, p. 16.
There are certainly languages that make it easier to avoid
trivial mistakes, like buffer overruns and pointer botches,
but the sort of nonsense Kernighan and Plauger demonstrated
and discussed about the same time in The Elements of Programming
Style shows up in any language.
I'm afraid I see that nearly every time I look at source code.
To be fair, these days I rarely have the time to look at
someone else's source code unless it's buggy, but it is
nevertheless appalling what one finds in critical system
software like boot environments and authentication code.
There is no royal road to good programs. Programming
well takes discipline and skill and experience. Languages
like Pascal that prevent certain classes of sloppiness like
overrunning arrays and string buffers may be better for
teaching beginners, but only because that makes it easier
to focus on the real issues, such as how to structure a
program well and how to write clearly. I have yet to see
evidence that that actually happens.
Norman Wilson
Toronto ON
> From: Chris Torek
> termcap has entries for the number of NUL characters to insert after
> carriage return.
Actually, the stock terminal driver in V6 Unix (here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/dmr/tty.c
if anyone wants to see the actual code; it's in ttyoutput()) had some pretty
complex code to do just the right amount of delay (in clock ticks) for a large
number of different motion control characters (TAB, etc., in addition to LF and
CR), and then used the system timer to delay that amount of real time after
sending such a character (see the very bottom of ttstart()).
E.g., for an NL, it used the fact that it 'knew' which column the print head was
in to calculate the exact return time.
Clever, but alas, it did this by sticking 'characters' in the buffered output
stream with the high bit set, and the delay required in the low 0177 bits,
which the output start routine interpreted; as the code drolly notes, "thus
(unfortunately) restricting the transmission path to 7 bits". Which was a real
PITA recently when I tried to download a binary file to an LSI-11 from a V6
running in Ersatz-11! I had to tweak the TTY driver to allow 8-bit output...
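The shape of the thing, paraphrased from memory rather than lifted from
dmr/tty.c (the constants here are illustrative, not V6's):

	/* on output: follow a NL with a delay "character" whose
	 * low 0177 bits say how many ticks to wait, scaled by how
	 * far the print head has to travel back */
	if (c == '\n') {
		putc(c, &tp->t_outq);
		putc(0200 | (3 + (tp->t_col >> 4)), &tp->t_outq);
		tp->t_col = 0;
	}

	/* in the start routine: a high bit means "wait", not "send" */
	c = getc(&tp->t_outq);
	if (c & 0200) {
		timeout(ttrstrt, tp, c & 0177);	/* resume after the delay */
		return;
	}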
Noel
Seek and ye shall find.
Ask and ye shall receive.
Brian Kernighan was kind enough to find for me everyone's favorite
Computing Science Technical Report, CSTR 100, "Why Pascal is Not
My Favorite Programming Language".
Attached is the file and his macros. This will not immediately
format using groff etc.; I hope to create a version that will, sometime
in the next few weeks.
In the meantime, Warren, please add to the archives when you are able.
Enjoy!
Arnold
Hi everybody. I'm new to this list as a side-effect of my question about
the provenance of strcmp and the convention of returning <0, 0, >0.
I had to learn Pascal as a freshman in college, which was challenging coming
from BTL knowing C. I kept wondering how Pascal could be used for anything
useful. The answer that I later saw in industry was "by adding non-standard
extensions".
Language discussions often turn to the issue of whether programming languages
should prevent programmers from making mistakes or whether that's the job of
the programmer. This is, of course, independent of discussing the
expressiveness of languages.
I agree that a lot of "programming" today consists of trusting and bolting
together random library functions. That's not me; I often work on safety
critical devices and I'm not going to rely on libraries of unknown provenance
when building a medical device that I may be hooked up to someday.
Years ago I inherited a project written in a hodgepodge of programming languages
including ruby. My first reaction to ruby was "Wow, how do I get some of
what they're smoking because it's better than anything I have?" I eventually
asked Ward Cunningham about it because he was working for ruby house AboutUs
at the time. His answer went something like this:
Jon, you're an engineer and you understand engineering.
You know programming and programmers and understand programming.
Then, there are the people to whom we entrust our confidential credit card data.
That's what ruby is for.
This nicely summarized the current state of affairs in which the most critical
tasks are assigned to the least competent people. I see this as a management,
business, and political problem which can't be solved by different languages.
I have often made the statement that "I would never hire someone who had to use
glibc in order to implement a singly-linked list." I get push-back such as "Oh,
and people can create bugs rather than using the debugged library?" to which I
glibly respond "debugged library like OpenSSL?"
I am more than a little terrified by the "everybody must learn to code in high
school" movement. What they're learning is something at a level akin to the
ruby example above. The goal is clearly to make "coding" a minimum wage job
and to many the distinction between "coding" and engineering is lost. I've
spoken with many kids in the "future engineer" category who are frustrated at
the lack of depth in the curriculum. I'd summarize it as teaching people to
program without teaching them anything about computers.
Anyway, I have been volunteering to teach technology to kids for years as
karmic payback to my BTL explorer scout advisors Carl Christensen, Heinz
Lycklama, and Hans Lie. Not to mention all of the amazing people that I met
there when my dedication to hitchhiking up to the Labs after school and
talking people into signing me in turned into a series of summer jobs.
I'm in the process of turning my class notes into a book. The goal of the
book is to teach kids enough about computers that they can understand what
their code is actually doing and why one can write better code with an
understanding of the hardware and environment.
The book is in the editing phase so it's beyond wholesale changes. But I'm
curious as to what you all think should be in such a book as I'll find a way
to wedge in anything important that I missed.
Thanks,
Jon
And let's not forget Alan Perlis's:
"A language that doesn't affect the way you think about programming, is not worth knowing."
https://en.m.wikiquote.org/wiki/Alan_Perlis
-------- Original message --------
From: Toby Thain
Date: 02/09/2017 18:00 (GMT+02:00)
To: Dan Cross
Cc: The Eunuchs Hysterical Society, quad
Subject: Re: [TUHS] Why Pascal is Not My Favorite Programming Language - Unearthed!
...
Finally, a favourite quote:
"Programs must be written for people to read, and only incidentally
for machines to execute" -- Hal Abelson
https://twitter.com/old_sound/status/903919515884544000
--Toby
>
> - Dan C.
>
> From: "Jeremy C. Reed"
> I don't know the key for "v" but maybe it means "very distant host"
Yes. See:
http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/L77Dec.jpg
and look at the MIT-44 IMP (upper right center). It's listed as having a
PDP-11, with the /v, and that machine (LL-ASG, 1/44) was definitely on a VDH
(it was not in Tech Sq). (A VDH was basically an IMP modem interface
hardware-wise, but made to look like a host at a high level within the IMP
software.)
> He also told me the Unix v6 Arpanet code was from San Diego.
Err, he may have gotten it from San Diego, but they didn't write the code, it
was written at UIll. See:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=SRI-NOSC
which contains a copy of the code, which came to me via NOSC in SD.
Noel
I did hear back from Lou Katz - user #1. Indeed the first version that escaped the labs was the 4th edition.
Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
> On Sep 1, 2017, at 7:16 PM, Clem cole <clemc(a)ccc.com> wrote:
>
> Interesting. If O'Malley had a connection I wonder what it was connected to on both sides. It had to be to lbl but the Vdh was a piece of shit even in the ingvax days. The first version was even worse. On the ucb side I wonder. It would not have been Unix because UofI did the original arpanet code and that was for v6. There was never a pdp10 at ucb so I wonder if it was one of the CDC machines which were the primary systems until Unix came to be.
>
> Sent from my PDP-7 Running UNIX V0 expect things to be almost but not quite.
>
>>> On Sep 1, 2017, at 5:59 PM, Jeremy C. Reed <reed(a)reedmedia.net> wrote:
>>>
>>> On Fri, 1 Sep 2017, Clem Cole wrote:
>>>
>>> So it means that UCB was hacking privately without talking to Katz @ NYU, or
>>> the Columbia and Harvard folks for a while. I need to ask Lou what he
>>> remembers. UCB was not connected to the Arpanet at this point (Stanford
>>> was), so it's possible Ken's sabbatical opened up some channels that had
>>> not existed. [UCB does not get connected until ing70 gets the
>>> vdh-interface up the hill to LBL's IMP as part of the INGRES project and
>>> that was very late in the 70s - not long before I arrived].
>>
>> Allman told me that Mike O'Malley had an ARPA connection at UCB that was
>> axed a few years before the INGRES link. So yes, I think no Arpanet
>> connection during the early BSD development work. (Losing this
>> connection may have had some controversy, but I don't know the details.)
>>
>> Fabry told me that O'Malley used Unix for his (EECS) Artificial
>> Intelligence research projects before he discovered it (so before the
>> October 1973 Symposium).
>>
>> RFC 402 of Oct 1972 has an ARPA network participant Michael O'Malley of
>> University of Michigan Phonetics Laboratory. Also this draft report at
>> http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=35…
>> about the ARPA speech recognition project lists M. H. O'Malley at UCB
>> and says the principal investigator from Univ. of Michigan moved to UCB.
>> (I never got ahold of him to see if he had any other relevance to my BSD
>> story.)
>>
>>
I was recently asked a question that I was not sure of the answer to, so I
thought I would pass it to this group of folks.
What was the first edition that actually left Murray Hill and where did it
go?
My own first encounter was the Fifth edition but I know that was later.
I'm pretty sure both Harvard and MIT had it before CMU. I'm thinking
Fourth edition went to Harvard and some other places ??NYU?? ??MIT?? Did
anything earlier than Fifth ever leave Murray Hill?
I don't think UCB or UofI got it until the Sixth edition. I believe some
earlier commercial site ??CU in NYC maybe?? may have been in there
also, but I have no idea what version that was.
Doug do you know?
Warren any ideas?
Tx
Clem
> If Unix was written in Pascal I would've happily continued using Pascal!
Amusing in the context of Brian's piece, which essentially says if Unix
could have been written in Pascal, then Pascal wouldn't have been Pascal.
doug
Hi All.
I have created a Git repo:
https://github.com/arnoldrobbins/cstr100
I set it up with the original MS macros and a Makefile to create
a PDF.
THANKS again to Brian Kernighan for finding the document and sharing it.
Arnold
Hi.
For a project I'm working on, I wonder if the Bell Labs alumni present
can tell me what was the default group for files for the researchers?
That is, what group did ls -l show?
In particular, in the late 80s - ~ V10 time frame.
Much thanks,
Arnold
> As a *roff fan, I'd love, love, love to see the original roff sources.
> Especially anything that uses pic/eqn/chem/etc.
I have the source for CSTR 155, a trilogy on raster ellipses. It uses
interesting preprocessors that are, alas, mostly lost: ideal, prefer, eqn.
(If anybody has ideal, I'd love to get it. Even its author, Chris Van Wyk,
doesn't have it.)
Doug
Doug McIlroy's mentioning `ideal' prompts me to ask something
I've long wanted to ask. The only use I ever saw of ideal was
in Peter Weinberger's memo about his C B-tree package. Was ideal
used for anything other than that?
Noel Hunt
Hi All.
I have made a tarball of all the Bell Labs CSTRs that I could find:
http://www.skeeve.com/cstr.tar.gz.
It's just under ten megs. Warren, can we get this into the archives?
Thanks,
Arnold
> From: Arthur Krewat
> That's not RUNOFF on TOPS-10 (or one of its predecessors) is it?
No, RUNOFF on CTSS - _much_ earlier - CTSS RUNOFF was in '64, which was
the year the PDP-6 came out.
Noel
> From: John Labovitz
> I found the earliest succession of roff predecessors went something like this:
> TJ-2 -> RUNOFF (capitals) -> runoff (lowercase) -> rf -> roff
Did you manage to find out from Jerry whether he knew anything about TJ-2
when he did RUNOFF?
Noel
On Aug 17, 2017, arnold(a)skeeve.com wrote:
> I remember reading an article somewhere on the history of the first
> FORTRAN compiler. The guys doing it wanted it to succeed, and they
> were fighting the mentality that high level languages could not possibly
> be as efficient as hand-coded assembly, so they put a lot of work into
> the optimization of the generated code.
>
> It worked so well that the results that came out of the compiler
> sometimes surprised the compiler writers! They then would have to
> dive into the compiler sources to figure out how it was done.
>
> I don't remember where I read this article. If the story rings a
> bell with anyone, let me know.
In his paper "The history of FORTRAN I, II and III” presented at the First ACM SIGPLAN conference on History of Programming Languages (1978), John Backus said:
> It was an exciting period; when later on we began to get fragments of compiled programs out of the system, we were often astonished at the surprising transformations in the indexing operations and in the arrangement of the computation which the compiler made, changes which made the object program efficient but which we would not have thought to make as programmers ourselves (even though, of course, Nelson or Ziller could figure out how the indexing worked, Sheridan could explain how an expression had been optimized beyond recognition, and Goldberg or Sayre could tell us how section 5 had generated additional indexing operations). Transfers of control appeared which corresponded to no source statement, expressions were radically rearranged, and the same DO statement might produce no instructions in the object program in one context, and in another it would produce many instructions in different places in the program.
The paper is available here, courtesy of ACM: http://www.softwarepreservation.org/projects/FORTRAN/index.html .