>
> When I met my future wife I was 21, and she wanted me to grow a beard, so
> I did. Since then I have occasionally asked coworkers who have complained
> about shaving why *they* don't grow beards: the most common answer is "My
> wife doesn't want me to."
>
Moved to COFF ... while bearded UNIX folks do seem to be a common thread, I
think we are stretching Warren's patience a tad. So ... I have sort of a
different story.
I had shaved it off and on during college and in the first few years I was
working, but had grown it back before grad school. I still was not sure I
liked having it, and as I got close to finishing, I mentioned to my
officemates at UCB that I'd shave it when Newton (our advisor) signed my
thesis as a signal to everyone I was done.
So the day I came into the office clean-shaven, Peter Moore looked up and
remarked, 'now I know why you wore one.'
So I showed up at Masscomp without it and was promptly ostracized, as so
many of the SW team had some sort of facial hair; I quickly grew it back.
Roll forward 20ish years and my wife egged me into shaving it off one
summer weekend. Our then 5-year-old daughter cried -- she wanted her
Daddy back. I've had it ever since.
That said, 20 years later she and her mother both claim I would look
younger if I shaved it. But at this point, I kinda like not having to
shave my neck and lower chin every day if I don't want to; so I have
ignored them.
Redirecting to COFF. COBOL has really nothing to do with Unix.
On Thursday, 7 January 2021 at 20:25:56 -0500, Nemo Nusquam wrote:
> On 01/07/21 17:56, Stuart Remphrey wrote (in part):
>>> Dave, who's kept his COBOL knowledge a secret in every job
>>
>> Indeed! [...]; but especially COBOL: apart from everything else, too
>> much like writing a novel to get anything done.
>
> As long as we are bashing COBOL, I recall that someone -- name forgotten
> -- wrote a parody that contained statements such as "Verily, let the
> noble variable N be assigned the value known as one".
Heh. In 1973 I was once required to abandon assembler, the language
of Real Programmers, and write a program in COBOL (in fact, a database
front end to COBOL). I took revenge in the label names. From
http://www.lemis.com/grog/src/GOPU
INVOKE SCHEMA KVDMS COPYING COMMON ALL
RECORD COMMON DELIVERY-AREA IS PUFFER
OVERLAY PUFFER WITH ALL
ERROR RECOVERY IS HELL
ROLLBACK IS IMPOSSIBLE.
...
MAKE-GOPU. IF ERROR-STATUS IS NOT EQUAL TO '000307', GO TO
HELL.
Admire that manifest constant.
And yes, this program went into production.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
We lost Rear Admiral "Amazing" Grace Hopper on this day in 1992; amongst
other things she gave us COBOL, and was allegedly responsible for the term
"debugging" when she removed a moth from a relay on the Harvard Mark II
and taped it to the log.
-- Dave
Moving to COFF since this is really not UNIX as much as programming
philosophy.
On Thu, Dec 17, 2020 at 9:36 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> So the C version was easier for me to understand. But it sort of
> lost something, I didn't really understand Steve's version, not at any
> deep level. But it made more sense, somehow, than the C version did.
>
I'm not too hard on Steve, as herein lies the dichotomy that we call
programming. Looking back, the BourneGOL macros were clearly convenient
for him as the original author and allowed him to express the ideas he had
well in his source. They helped him create the original and were
comforting in the way he was used to. Plus, as Larry notes, the act of
transpiling loses that (BTW -- look some time at the comments in the C
version of advent and you can still see vestiges of the original Fortran).
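For anyone who never saw it, here is the flavour -- a handful of the
BourneGOL macros as I remember them from mac.h in the V7 sh source, quoted
from memory, so treat the exact list as approximate, plus a made-up usage
example (not code from the shell itself):

    #define IF      if(
    #define THEN    ){
    #define ELSE    } else {
    #define FI      ;}
    #define BEGIN   {
    #define END     }
    #define WHILE   while(
    #define DO      ){
    #define OD      ;}

    /* made-up example: C that reads like Algol 68 */
    int sum_to(int n)
    BEGIN
        int sum = 0;
        WHILE n > 0
        DO  sum += n;
            n--;
        OD
        IF sum > 100
        THEN return 100;
        ELSE return sum;
        FI
    END

    int main(void) BEGIN return sum_to(5) == 15 ? 0 : 1; END

The preprocessor turns this back into ordinary braces and parentheses,
which is exactly why the transpiled C that Larry read had lost the Algol
flavour Steve originally wrote in.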
But the problem is that when we create a new program, we can easily forget
that it might live forever[1] - particularly if you are a researcher trying
to advance and explore a set of ideas (which of course is what Steve was at
the time). And as has been noted in many other essays, the true cost of SW
is in its maintenance, not its original creation. So making something easy
to understand, particularly for someone in the future without the original
context, starts to become extremely attractive - particularly when the
program has a long life and, frankly, an impact beyond what was originally
considered.
It's funny: coming across BourneGOL helped validate/teach/cement in me an
important concept when programming for real -> the idea of "least
astonishment" or "social acceptance" of your work. Just because you
understand it and like it does not mean your sisters and brothers in the
community will. There is no such thing as a private program.
The moment a program leaves your desk/terminal, it will be considered and
analyzed by others.
So, back to that time and seeing BourneGOL for the first time: please
consider that in the mid-70s I was coming to C from BLISS, SAIL, and
Algol-W as my HLLs, so I was used to BEGIN/END-style programming, with
brackets indented 4 spaces under the line above and the B/E pair lined up
in the same column. The White Book did not yet exist, but what would later
be described in K&R as the 'one true bracing style' was already used in
the code base for the Fifth and Sixth Editions. When I first saw that, it
just looked wrong to me. But I was coming from a different social setting
and was using a different set of social norms to evaluate this new
language and the code written in it.
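To make the clash concrete, here is the same trivial function in both
layouts (my reconstruction of the contrast, not code from any actual
source tree):

    #include <stdio.h>

    /* The BEGIN/END habit I brought from Algol-W and BLISS: brackets
     * indented under the statement, open and close in the same column. */
    static int count_lines_algol(FILE *fp)
        {
        int c, n;

        n = 0;
        while ((c = getc(fp)) != EOF)
            {
            if (c == '\n')
                {
                n = n + 1;
                }
            }
        return n;
        }

    /* The same function in the 'one true brace style' of the V6/V7
     * sources and later K&R: the open brace ends the controlling line. */
    static int count_lines_knr(FILE *fp)
    {
        int c, n;

        n = 0;
        while ((c = getc(fp)) != EOF)
            if (c == '\n')
                n++;
        return n;
    }

    int main(void)
    {
        int n = count_lines_algol(stdin);   /* consumes stdin */
        return printf("%d\n", n + count_lines_knr(stdin)) < 0;
    }

Neither is wrong; they are just different social norms.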
At some point I took CMU's SW engineering course, where we had to swap code
three different times with other groups for the team projects, and I came
to realize how important it is that the next team be able to understand
what you wrote. So I quickly learned to accept K&R style and, like Ron and
Larry, cursed Steve a little. And while I admire Steve for his work, and
both ADB and the Bourne shell were tools I loved and used daily, when I
tried to maintain them I wished that Steve had thought about those who
would come after - but I do accept that was not on his radar.
That lesson has served me well for many years as a professional, and it's a
lesson I try to teach my younger engineers in particular. It's not about
being 100% easy for you now; it is about being easy for someone other than
you who has to understand your code in the future. Simply use the social
norms of the environment you live and work in ("do as the Romans do", if
you will). Even if it is a little harder now, learn the community norms
and use them.
FWIW: you can actually date some of my learning, BTW, with fsck (where we
did not apply this rule). Ted and I had come from MTS and TSS respectively
(*i.e.* IBM 360), which is why, as you may remember from its first few
versions, fsck printed all its errors in UPPER CASE (we kept that style
from the IBM systems -- not the traditional UNIX style). For many years
after its success, with the program spreading like wildfire within the
UNIX community, I would run it on a system and be reminded that I had not
yet learned that lesson.
Clem
[1] BTW: the corollary to living forever is that the worst hacks you do
seem to be the ones that live the longest.
https://www.youtube.com/watch?v=GWr4iQfc0uw
Abstract of the talk @ ICFP 2020
Programming language implementations have features such as threads, memory management, type safety, and REPLs that duplicate some of the work done by the underlying operating system. The objective of hosting a language on bare metal is to unify the language implementation and operating system to have a lean system with no redundancy for running applications.
This seems to be the OS:
https://github.com/udem-dlteam/mimosa
The Mimosa operating system consists of a minimal kernel built on C++ and Scheme. It contains a Scheme implementation of a hard drive (ATA) driver, a keyboard (PS/2) driver, a serial (8250 UART) driver, a FAT32 filesystem and a small real-time clock manager. The project was built to experiment with the development of an operating system in a high-level functional language, to study both the development process and the use of Scheme to build a fairly complex system.
On Dec 16, 2020, at 8:08 PM, John Cowan <cowan(a)ccil.org> wrote:
>
> Sometimes I wonder what would have happened if A68 had become the medium-level language of Unix, and Pascal had become the language of non-Unix, instead of both of them using C.
Funny how we seem to rehash the same things over the years!
In a 1988 comp.lang.misc thread, when I expressed hope that "a major
subset of Algol 68 with a new and concise syntax (sort of like C's)
can make a very elegant, type safe and well rounded language", Piet
van Oostrum[1] commented that the combination of dynamic arrays *and*
unions forced the use of GC in Algol 68. Either feature by itself
wouldn't have required GC! The larger point being that compiler
complexity is "almost exponential" (his words) in the number of
added features. Piet and others also wrote that both Pascal and C
had left out a lot of the hard things in A68. So I doubt A68 or a
subset would have replaced C or Pascal in the 70s-80s.
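To make Piet's point concrete, here is a C caricature of my own (Algol
68's united modes and flexible rows are more disciplined than this, so
take it as illustration only): once a value may be either a scalar or a
dynamically sized array, and may be re-assigned at any time, the array's
storage no longer has a statically known lifetime.

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented C stand-in for a united mode of INT and a flexible row. */
    typedef struct {
        enum { IS_INT, IS_ROW } tag;
        union {
            int i;
            struct { size_t len; double *elems; } row;  /* heap storage */
        } u;
    } value;

    static value make_row(size_t n)
    {
        value v;
        v.tag = IS_ROW;
        v.u.row.len = n;
        v.u.row.elems = calloc(n, sizeof(double));
        return v;
    }

    int main(void)
    {
        value v = make_row(1000);
        value w = v;        /* w now shares v's heap storage */

        v.tag = IS_INT;     /* re-assigning v does not make the row dead: */
        v.u.i = 42;         /* w still refers to it, so no stack discipline
                               can free it -- hence the garbage collector */
        printf("%d %zu\n", v.u.i, w.u.row.len);
        free(w.u.row.elems);  /* in C we must track this by hand */
        return 0;
    }

A fixed-size union alone can live on the stack, and a flexible array alone
can use stack-like last-in-first-out allocation; it is the combination
that defeats both.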
[My exposure to Algol 68 came when I stumbled upon Brailsford and
Walker's wonderful "Introductory Algol 68 Programming" @ USC. After
having used PL/I, Pascal & Fortran, the regularity of A68 was quite
enticing, but AFAIK no one used A68 at USC. I must admit I still like
it more than modern languages like Java, Go, Rust, C++, ...]
[1] Piet had implemented major parts of both A68 and A60.
Sorta relevant to both groups...
Augusta Ada King-Noel, Countess of Lovelace (and daughter of Lord Byron),
was born on this day in 1815; arguably the world's first computer
programmer and a highly independent woman, she saw the potential in
Charles Babbage's new-fangled invention.
J.F.Ossanna was given unto us on this day in 1928; a prolific programmer,
he not only had a hand in developing Unix but also gave us the ROFF
series.
Who'd've thought that two computer greats would share the same birthday?
-- Dave
I like a challenge, although this wasn't really much of one. A simple
"arpa imp" in Yahoo spilled the beans :-)
"The Interface Message Processor (IMP) was the packet switching node
used to interconnect participant networks to the ARPANET from the late
1960s to 1989. It was the first generation of gateways, which are
known today as routers.[1][2][3] An IMP was a ruggedized Honeywell
DDP-516 minicomputer with special-purpose interfaces and software.[4]
In later years the IMPs were made from the non-ruggedized Honeywell
316 which could handle two-thirds of the communication traffic at
approximately one-half the cost.[5] An IMP requires the connection to
a host computer via a special bit-serial interface, defined in BBN
Report 1822. The IMP software and the ARPA network communications
protocol running on the IMPs was discussed in RFC 1, the first of a
series of standardization documents published by the Internet
Engineering Task Force (IETF)."
https://en.wikipedia.org/wiki/Interface_Message_Processor
Cheers,
uncle rubl
From: Dave Horsfall <dave(a)horsfall.org>
To: Computer Old Farts Followers <coff(a)tuhs.org>
Date: Wed, 9 Dec 2020 13:41:11 +1100 (EST)
Subject: Re: [COFF] ARPAnet now 4 nodes
On Sat, 5 Dec 2020, Noel Chiappa wrote:
> The ARPAnet reached four nodes on this day in 1969 .. the nodes were
> UCSB, UCLA, SRI, and Utah.
> Yeah; see the first map here:
> http://www.chiappa.net/~jnc/tech/arpageo.html
Yep; I know that first map well :-) For the newbies here, the ARPAnet
was the predecessor of the Internet (no, it didn't spring from the
brow of Zeus, nor Billy Gates), and what we now call "routers" were
then IMPs (look it up).
> Missing maps gratefully received!
Indeed; history needs to be kept alive, lest it die.
-- Dave
> The ARPAnet reached four nodes on this day in 1969 ..
> the nodes were UCSB, UCLA, SRI, and Utah.
Yeah; see the first map here:
http://www.chiappa.net/~jnc/tech/arpageo.html
Missing maps gratefully received!
Noel
The ARPAnet reached four nodes on this day in 1969; at least one "history"
site reckoned the third node was connected in 1977 (and I'm still waiting
for a reply to my correction). Well, I can believe that perhaps there
were only three left by then...
According to my notes, the nodes were UCSB, UCLA, SRI, and Utah.
-- Dave
Dan Cross wrote in
<CAEoi9W63J0HKbWUk8wrGSkCdyzzaV-F6km-q+K-H2+kvURWWdQ(a)mail.gmail.com>:
|On Tue, Dec 1, 2020 at 3:40 PM Bakul Shah <bakul(a)iitbombay.org> wrote:
|
|> On Dec 1, 2020, at 12:20 PM, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
|>> Never without my goto:, and if it is only to break to error
|>> handling and/or staged destruction of local variables after
|>> initialization failures. Traumatic school impression, finding
|>> yourself locked in some PASCAL if condition, and no way to go to.
|>
|> Pascal had goto.
Hm, I did not receive Bakul's mail. Well, I did not use it long
enough. I think this came up in the past already; it could be that it
was a mutilated version, but there definitely was no goto in this
DOS-looking UI with its menu bar, with menu entries for compilation,
help screen, etc. It must have been Borland Pascal or Borland dBASE,
then. Didn't I say "maybe the teacher had an option to turn it on"
or something :) Yeah, I do not know, but there was no goto,
definitely.
|Pascal also had to go. (Thanks...I'm here all week.)
Ah, and all the many-page program listings in Delphi -- what a waste
of paper. Whether anyone really typed them out, I don't know; not me.
|You can even do a non-local goto!
Help.
|> In Go you don't need goto for the sort of thing you and McVoy
|> talked about due to its defer statement and GC. Now granted
|> GC may be too big of a hammer for C/C++ but a future C/C++ could
|> add defer gainfully as the defer pattern is pretty common.
|> For example, mutex lock and unlock.
Terrible, just like pthread_cleanup_push/pop, and that can be
entirely local-to-scope. Terrible even if there were "closures"
that could be used as arguments instead of a function pointer.
gcc supports/supported computed gotos, which would also be nice in
that respect. And some kind of ISO _Xy() that could be used in
conditionals depending on whether the argument is a computed goto,
a "closure" or a function pointer (or a member function pointer).
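[For the record, the pattern being argued over, as a minimal C sketch of
goto-driven staged cleanup; the names are invented for illustration:

    #include <stdlib.h>

    static int setup(size_t n)
    {
        int rv = -1;
        char *a, *b;

        if ((a = malloc(n)) == NULL)
            goto out;
        if ((b = malloc(n)) == NULL)
            goto out_a;

        /* ... real work with a and b ... */
        rv = 0;

        free(b);
    out_a:
        free(a);
    out:
        return rv;
    }

    int main(void)
    {
        return setup(64) == 0 ? 0 : 1;
    }

Go's defer and pthread_cleanup_push/pop are both attempts to automate
exactly these unwind stages.]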
I always hated that C++ is not ISO C plus extensions, so your
"C/C++" has not been true for a long time...
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Is it just me, or did console messages really wake up the screen saver on
BSDi (aka BSD/OS)? That old box has long since gone to $HEAVEN (along
with the company itself; thank you, Wind River), but I'm getting annoyed at
having to tap a key on FreeBSD to see the console, which I don't recall
having to do on BSDi.
-- Dave
Ada Lovelace: the world's first computer programmer (and a mathematician,
when that was deemed unseemly for a mere woman). We lost her in 1852 to
uterine cancer.
-- Dave
[Redirecting to COFF]
On Monday, 23 November 2020 at 8:42:34 -0500, Noel Chiappa wrote:
>> On Mon, Nov 23, 2020 at 12:28 PM Erik E. Fair <fair-tuhs(a)netbsd.org> wrote:
>
>> The Honeywell DDP-516 was the computer (running specialized software
>> written by Bolt, Beranek & Newman (BBN)) which was the initial model of
>> the ARPANET Interface Message Processors (IMP).
>
> The IMPs had a lot of custom interface hardware; sui generis serial
> interlocked host interfaces (so-called 1822), and also the high-speed modem
> interfaces. I think there was also a watchdog timer, IIRC (this is all from
> memory, but the ARPANET papers from JCC cover it all).
I worked with a DDP-516 at DFVLR 46 years ago. My understanding was
that the standard equipment included two different channel interfaces.
One, the DMC (Direct Multiplexer Control, I think), proved to be just
what I needed for my program, a relatively simple tape-copy program.
The input tape was analogue, unbuffered, and couldn't be stopped, so
it was imperative to accept all data as it came in from the ADC.
But the program didn't work. According to the docco, the DMC should
have reset when the transfer was complete (maybe depending on
configuration parameters), but it didn't. We called in Honeywell
support, who scratched their heads and went away, only to come back
later and say that it couldn't be fixed.
I worked around the problem in software by continually checking the
transfer count and restarting when the count reached 0. So the
program worked, but I was left wondering whether this was a design
problem or a support failure. Has anybody else worked with this
feature?
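In C-like terms the workaround looked something like this (the original
was DDP-516 assembler; the register address and helper names below are
invented for illustration):

    /* Sketch only: DMC_COUNT, dmc_start(), next_buffer() and process()
     * are all invented stand-ins, not real DDP-516 interfaces. */
    #define DMC_COUNT ((volatile unsigned *)01000)

    extern void dmc_start(void *buf, unsigned words);
    extern void *next_buffer(void);
    extern void process(void *buf);

    void copy_loop(void)
    {
        void *cur = next_buffer();
        dmc_start(cur, 4096);

        for (;;) {
            while (*DMC_COUNT != 0)
                ;                     /* poll the transfer count */
            void *full = cur;
            cur = next_buffer();
            dmc_start(cur, 4096);     /* restart by hand: the DMC never
                                         reset itself as documented, and
                                         the analogue input can't pause */
            process(full);
        }
    }

The point is simply that software polling substituted for the reset the
hardware never delivered.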
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
I'm currently reviewing a paper about Unix and Linux, and I made the
comment that in the olden days the normal way to build an OS image for
a big computer was from source. Now I've been asked for a reference,
and I can't find one! Can anybody help?
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
On 2020-Nov-06 10:07:21 -0500, Clem Cole <clemc(a)ccc.com> wrote:
>Will, I still do the same thing, but the reason email settled on 72 columns
>is still card-based. In FORTRAN, a capital C in column 1 marks a comment,
>and column 6 marks the card: blank for a new statement, non-blank/non-zero
>for a 'continuation' of the last card. But columns 73-80 were 'special' and
>used to store sequence #s (this was handy when you dropped your card deck:
>card sorters could put it back into canonical order).
Since no-one has mentioned it, the reason why Fortran and Cobol ignore
columns 73-80 goes back to the IBM 711 card reader - which could read any
(but usually configured for the first) 72 columns into pairs of 36-bit words
in an IBM 701.
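If I have the 711's binary mode right, the arithmetic makes the choice
natural: each of a card's 12 rows spans 72 columns = 72 bits, i.e. exactly
two 36-bit words, so a whole card arrives as a tidy 12 x 2 = 24 words;
columns 73-80 simply have no home in that scheme.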
--
Peter Jeremy
On Tuesday, 10 November 2020 at 16:52:58 -0700, Adam Thornton wrote:
> If 4.3BSD is old enough, the System Administrator's Manual (e.g.
> http://bitsavers.informatik.uni-stuttgart.de/pdf/isi/bsd/490197C_Unix_4.3BS…)
> section 4.2 _et seq_.
>
> On Tue, Nov 10, 2020 at 4:11 PM Greg 'groggy' Lehey <grog(a)lemis.com> wrote:
>
>> I'm currently reviewing a paper about Unix and Linux, and I made the
>> comment that in the olden days the normal way to build an OS image for
>> a big computer was from source. Now I've been asked for a reference,
>> and I can't find one! Can anybody help?
>
> How olden days do you mean?
Sorry, I wasn't very clear. I was thinking of commercial systems of the
1960s and 1970s, not any form of Unix.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
Moving to COFF; reply below.
On Fri, Nov 6, 2020 at 10:40 AM Will Senn <will.senn(a)gmail.com> wrote:
> Clem,
>
> It figures. I should have known there was a reason for the shorter lines
> other than display. Conventions are sticky and there appears to be a
> generation gap. I use single spaces between sentences, but my ancestors
> used 2... who knows why? :).
>
You never used a real typewriter. Double-spacing allows you to edit
(physically mark up) the document if need be. This was how I did
everything before I had easy computer access.
I went to college with an electric typewriter, and all my papers were done
on it in the fall of my freshman year (until I got access to UNIX). I did
have a CS account for the PDP-10, and they had the XGP, but using it for
something like your papers was somewhat frowned upon. However, for the
UNIX boxes we often bought 'daisy wheel' typewriters that had RS-232C
interfaces. Using nroff, I could then do my papers and run them off at
the admin's desk at night.
Clem
[Coff, etc]
On Saturday, 7 November 2020 at 0:29:01 +0100, Steffen Nurpmeso wrote:
> Greg 'groggy' Lehey wrote in
> <20201106225422.GD99027(a)eureka.lemis.com>:
>> On Friday, 6 November 2020 at 7:46:57 -0800, Chris Torek wrote:
>>> In typesetting, especially when doing right-margin justification,
>>> we have "stretchy spaces" between words. The space after end-of-
>>> sentence punctuation marks is supposed to be about 50% larger than
>>> the width of the between-words spaces, and if the word spaces get
>>> stretched, so should the end-of-sentence space.
>>
>> FWIW, this is the US convention. Other countries have different
>> conventions. My Ausinfo style manual states
>>
>> There is no need to increase the amount of punctuation ... at the
>> end of a sentence.
>>
>> I believe that this also holds for Germany. I'm not sure that the UK
>> didn't have different rules again.
>
> Yes, the DUDEN of Germany says for typewriters that the punctuation
> characters period, comma, semicolon, colon, question- and
> exclamation mark are added without separating whitespace. The next
> word follows after a space ("Leerschritt", "void step").
Thanks for the confirmation. Where did you find that? I checked the
yellow Duden ("Richtlinien für den Schriftsatz") before sending my
previous message, but I couldn't find anything useful.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
Exactly -- just re-read Will's question. Two spaces after punctuation is a
fixed-width typeface solution to the typographer's 1.5-space layout.
I was referring to why typed papers were traditionally double spaced
between the lines.
On Fri, Nov 6, 2020 at 11:02 AM Chris Torek <torek(a)elf.torek.net> wrote:
> >I use single spaces between sentences, but my ancestors
> >used 2... who knows why? :).
>
> Typewriters.
>
> In typesetting, especially when doing right-margin justification,
> we have "stretchy spaces" between words. The space after end-of-
> sentence punctuation marks is supposed to be about 50% larger than
> the width of the between-words spaces, and if the word spaces get
> stretched, so should the end-of-sentence space. Note that this is
> all in the variable-pitch font world.
>
> Since typewriters are fixed-pitch, the way to emulate the
> 1.5-space-wide gap is to expand it to 2.
>
> Chris
>
[ Moving to COFF (if your MUA respects "Reply-To:") ]
On Fri, 6 Nov 2020, Larry McVoy wrote:
> But I'm pretty old school, I write in C, I debug a lot with printf and
> asserts, I'm kind of a dinosaur.
You've never experienced the joy of having your code suddenly working when
inserting printf() statements? Oh dear; time to break out GDB...
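The classic mechanism, in a contrived sketch of my own (no one's real
code, and it is all undefined behaviour, so a modern optimizing compiler
may do anything at all):

    #include <stdio.h>

    static void leave_garbage(void)
    {
        volatile int junk = 12345;   /* left behind on the stack */
        (void)junk;
    }

    static int broken(void)
    {
        int x;          /* never initialized */
        return x + 1;   /* may read leave_garbage()'s residue */
    }

    int main(void)
    {
        leave_garbage();
        /* printf("debugging...\n"); */   /* uncommenting this line can
                                             disturb the stack and "fix"
                                             the output */
        printf("%d\n", broken());
        return 0;
    }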
-- Dave
[Following clemc's example and moving to COFF]
On Friday, 6 November 2020 at 7:19:24 -0800, Chris Torek wrote:
>> I'm lazy.
>
> I am too, but I still use a big screen: I just fit a lot of smaller
> windows in it.
Agreed. There's a second issue here: for reading text, 70 to 80 'n'
widths is optimal. For reading computer output, it should be much
wider. I've compromised by fitting two 120-character-wide xterms on
my monitors, left and right. I still display only 70-80 characters
for text.
> I'd like to have a literal wall screen, especially if I'm in an
> interior, windowless (as in physical glass windows) room, so that
> part of the wall could be a "window" showing a view "outside" (real
> time, or the ocean, or whatever) and other parts of the wall could
> be the text I'm working on/with, etc.
The issue there is perspective. I could do that (modulo cost) in my
office, but I'd have a horizontal angle of about 90°, and that's
uncomfortable.
> (But I'll make do with these 27" 4k displays. :-) )
Yes, that's about the widest I find comfortable, and it took me a
while to adapt.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
I'd be curious to hear from the folks a few years older than I am (I
started in the later 60s with the GE-635), but from my own experience of
having lived through some of it, I personally think it had more to do with
all of the systems of the time switching from cards to the Model 28 and
later the Model 33 than with Unix or AT&T. Unix was just one of the
systems we used at the time of the transition from cards, and the other
timesharing systems of those days likewise began to adapt to the tty's
requirements.
On Fri, Nov 6, 2020 at 12:27 PM Stephen Clark <sclark46(a)earthlink.net>
wrote:
> On 11/6/20 12:13 PM, Adam Thornton wrote:
> > I’m going to chime in on pro-80-columns here, because with the text a
> comfortable size to read (although this is getting less true as my eyes
> age), I can read an entire 80-column line without having to sweep my eyes
> back and forth.
> >
> > I can’t, and never could, do that at 132.
> >
> > As a consequence, I read much, much faster with 80-column-ish text
> blocks.
> >
> > I also think there is something to the “UNIX is verbal” and “UNIX nerds
> tend to be polyglots often with a surprising amount of liberal arts
> background of one kind or another,” argument. That may, however, merely be
> confirmation bias.
> >
> > Adam
> May have had to do with the first terminal commonly used with UNIX.
>
> The Model 33 printed on 8.5-inch (220 mm) wide paper, supplied on
> continuous
> 5-inch (130 mm) diameter rolls and fed via friction (instead of, e.g.,
> tractor
> feed). It printed at a fixed 10 characters per inch, and supported
> 74-character
> lines,[13] although 72 characters is commonly stated.
>
>
Hey all, I was browsing my small corner of the fediverse, when I came
across a post that said:
> @pastelpunkbandit@hellsite.site
> i wonder if people from the 70s would make fun of us for still using vi
It got me wondering -- what /was/ the view of the future of computing
among people working deeply with the systems of the time? I know that
people worked on what they felt was the future -- and returned bearing
the gifts of Smalltalk, Prolog, etc., ad nauseam. Surely there was the
expectation that things would be improved, but what form did those
expectations take?
Incidentally, if there /were/ jokes about people using $program in the
future -- I think that would be of interest too :)
Thanks!
--
"Too enough never much is!"