At the risk of releasing more heat than light (so if you feel
compelled to flame by this message, please reply just to me
or to COFF, not to TUHS):
The fussing here over Python reminds me very much of Unix in
the 1980s. Specifically wars over editors, and over BSD vs
System V vs Research systems, and over classic vs ISO C, and
over embracing vendor-specific features (CSRG and USG counting
as vendors as well as Digital, SGI, Sun, et al) vs sticking
to the common core. And more generally whether any of the
fussing is worth it, and whether adding stuff to Unix is
progress or just pointless complication.
Speaking as an old fart who no longer gets excited about this
stuff except where it directly intersects something I want to
do, I have concluded that nobody is entirely right and nobody
is entirely wrong. Fancy new features that are there just to
be fancy are usually a bad idea, especially when they just copy
something from one system to a completely different one, but
sometimes they actually add something. Sometimes something
brand new is a useful addition, especially when its supplier
takes the time and thought to fit cleanly into the existing
ecosystem, but sometimes it turns out to be a dead end.
Personal taste counts, but never as much as those of us
brandishing it like to think.
To take Python as an example: I started using it about fifteen
years ago, mostly out of curiosity. It grew on me, and these
days I use it a lot. It's the nearest thing to an object-
oriented language that I have ever found to be usable (but I
never used the original Smalltalk and suspect I'd have liked
that too). It's not what I'd use to write an OS, nor to do
any sort of performance-limited program, but computers and
storage are so fast these days that that rarely matters to me.
Using white space to denote blocks took a little getting used
to, but only a little; no more than getting used to typing
if ...: instead of if (...). The lack of a C-style for loop
occasionally bothers me, but iterating over lists and sets
handles most of the cases that matter, and is far less cumbersome.
It's a higher-level language than C, which means it gets in the
way of some things but makes a lot of other things easier. It
turns out the latter is more significant than the former for
the things I do with it.
The claim that Python doesn't have printf (at least since ca. 2.5,
when I started using it) is just wrong:
print 'pick %d pecks of %s' % (n, fruit)
is just a different spelling of
printf("pick %d pecks of %s\n", n, fruit)
except that sprintf is no longer a special case (and snprintf
becomes completely needless). I like the modern
print(f'pick {n} pecks of {fruit}')
even better; f strings are what pushed me from Python 2 to
Python 3.
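To make the sprintf point concrete (a trivial sketch, names made
up): the formatted string is just an expression, so you keep it
instead of printing it, and there is no buffer to size, which is
why snprintf has no analogue:
    s = 'pick %d pecks of %s' % (n, fruit)    # sprintf, in effect
    m = f'pick {n} pecks of {fruit}'          # f-string spelling
    print(m)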
I really like the way modules work in Python, except the dumbass
ways you're expected to distribute a program that is broken into
modules of its own. As a low-level hacker I came up with my own
way to do that (assembling them all into a single Python source
file in which each module is placed as a string, evaled and
inserted into the module table, and then the starting point
called at the end; all using documented, stable interfaces,
though they changed from 2 to 3; program actually written as
a collection of individual source files, with a tool of my
own--written in Python, of course--to create the single file
which I can then install where I need it).
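Boiled down to a sketch, the generated file works like this (the
idea only, not my actual tool; the module name and contents here
are invented):
    import sys, types

    srcs = {
        'util': "def greet(who):\n    return 'hello, ' + who\n",
        # ...one source string per bundled module...
    }
    for name, src in srcs.items():
        mod = types.ModuleType(name)            # documented, stable
        exec(compile(src, name, 'exec'), mod.__dict__)
        sys.modules[name] = mod                 # module table entry
    import util                                 # found in sys.modules
    print(util.greet('world'))                  # real file calls main()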
I have for some years had my own hand-crafted idiosyncratic
program for reading mail. (As someone I know once said,
everybody writes a mailer; it's simple and easy and makes
them feel important. But in this case I was doing it just
for myself and for the fun of it.) The first edition was
written 20 years ago in C. I rewrote it about a decade ago
in Python. It works fine; can now easily deal with on-disk
or IMAP4 or POP3 mailboxes, thanks both to modules as a
concept and to convenient library modules to do the hard work;
and updating in the several different work and home environments
where I use it no longer requires recompiling (and the source
code need no longer worry about the quirks of different
compilers and libraries); I just copy the single executable
source-code file to the different places it needs to run.
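To show why the library modules matter, a minimal sketch (the host
and credentials are of course made up, and real code wants error
handling):
    import imaplib

    imap = imaplib.IMAP4_SSL('imap.example.com')
    imap.login('user', 'secret')
    imap.select('INBOX')
    typ, data = imap.search(None, 'UNSEEN')
    for num in data[0].split():
        typ, msg = imap.fetch(num, '(RFC822)')
        # msg[0][1] holds the raw message, ready to parse
    imap.logout()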
For me, Python fits into the space between shell scripts and
awk on one hand, and C on the other, overlapping some of the
space of each.
But personal taste is relevant too. I didn't know whether I'd
like Python until I'd tried it for a few real programs (and
even then it took a couple of years before I felt like I'd
really figured out how to use it). Just as I recall, about
45 years ago, moving from TECO (in which I had been quite
expert) to ed and later the U of T qed and quickly feeling
that ed was better in nearly every way; a year or so later,
trying vi for a week and giving up in disgust because it just
didn't fit my idea of how screen editors should work; falling
in love with jim and later sam (though not exclusively, I still
use ed/qed daily) because they got the screen part just right
even if their command languages weren't quite such a good match
for me.
And I have never cottoned on to Perl for, I suspect, the same
reason I'd be really unhappy to go back to TECO. Tastes
evolve. I bet there's a lot of stuff I did in the 1980s that
I'd do differently could I have another go at it.
The important thing is to try new stuff. I haven't tried Go
or Rust yet, and I should. If you haven't given Python or
Perl a good look, you should. Sometimes new tools are
useless or cumbersome, sometimes they are no better than
what you've got now, but sometimes they make things easier.
You won't know until you try.
Here endeth today's sermon from the messy office, which I ought
to be cleaning up, but preaching is more fun.
Norman Wilson
Toronto ON
I’m re-reading Brian Kernighan’s book on early Unix (‘Unix: A History & Memoir’)
and he mentions the (on-disk) documentation that came with Unix - something that made it stand out, even for some decades afterwards.
Doug McIlroy has commented on v2-v3 (1972-73?) being an extremely productive year for Ken & Dennis.
But as well, they wrote papers and man pages, probably more.
I’ve never heard anyone mention keyboard skills with the people of the CSRC - does anyone know?
There’s at least one Internet meme that highly productive coders necessarily have good keyboard skills,
which leads to also producing documentation or, at least, not avoiding it entirely, as often happens commercially.
Underlying this is something I once caught as a random comment:
The commonality of skills between Writing & Coding.
Does anyone have any good refs for this crossover?
Is it a real effect, or a biased view?
That great programmers are also “good writers”:
both take time & focus, clarity of vision, deliberate intent and many revisions: chopping away the cruft that isn’t “the thing” and “polishing”, not rushing it out the door.
Ken is famous for his brevity and succinct statements.
Not sure if that’s a personal preference, a mastered skill or “economy in everything”.
steve j
=========
A Research UNIX Reader: Annotated Excerpts from the Programmer's Manual, 1971-1986
M.D. McIlroy
<https://www.cs.dartmouth.edu/~doug/reader.pdf>
<https://archive.org/details/a_research_unix_reader/page/n13/mode/2up>
pg 10
3.4. Languages
CC (v2 page 52)
V2 saw a burst of languages:
a new TMG,
a B that worked in both core-resident and software-paged versions,
the completion of Fortran IV (Thompson and Ritchie), and
Ritchie's first C, conceived as B with data types.
In that furiously productive year Thompson and Ritchie together
wrote and debugged about
100,000 lines of production code.
=========
Programming's Dirtiest Little Secret
Wednesday, September 10, 2008
<http://steve-yegge.blogspot.com/2008/09/programmings-dirtiest-little-secret…>
It's just simple arithmetic. If you spend more time hammering out code, then in order to keep up, you need to spend less time doing something else.
But when it comes to programming, there are only so many things you can sacrifice!
You can cut down on your documentation.
You can cut down on commenting your code.
You can cut down on email conversations and
participation in online discussions, preferring group discussions and hallway conversations.
And... well, that's about it.
So guess what non-touch-typists sacrifice?
All of it, man.
They sacrifice all of it.
Touch typists can spot an illtyperate programmer from a mile away.
They don't even have to be in the same room.
For starters, non-typists are almost invisible.
They don't leave a footprint in our online community.
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
The discussion about the 3B2 triggered another question in my head: what were the earliest multi-processor versions of Unix and how did they relate?
My current understanding is that the earliest one is a dual-CPU VAX system with a modified 4BSD done at Purdue. This would have been late 1981, early 1982. I think one CPU was acting as master and had exclusive kernel access, the other CPU would only run user mode code.
Then I understand that Keith Kelleman spent a lot of effort to make Unix run on the 3B2 in an SMP setup, essentially going through the source and finding all critical sections and surrounding those with spinlocks. This would be around 1983, and became part of SVr3. I suppose that the “spl()” calls only protected critical sections that were shared between the main thread and interrupt sequences, so that a manual review was necessary to consider each kernel data structure for parallel access issues in the case of two CPUs.
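The difference is easy to see in a sketch (modern C, for illustration only; these are not actual SVr3 names or code):

    #include <stdatomic.h>

    static atomic_flag lk = ATOMIC_FLAG_INIT;
    static int shared;              /* some kernel data structure */

    void touch(void)
    {
        /* uniprocessor: spl6() sufficed, since the only other
           thread of control was an interrupt on this same CPU */
        while (atomic_flag_test_and_set(&lk))
            ;                       /* SMP: the other CPU keeps
                                       running, so spin for the lock */
        shared++;                   /* the critical section */
        atomic_flag_clear(&lk);
        /* splx(s) on the way out, in the uniprocessor discipline */
    }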
Any other notable work in this area prior to 1985?
How was the SMP implementation in SVr3 judged back in its day?
Paul
I do not know whether it is right, and i am surely not the right
person to mourn in public, but i wanted to forward this.
I only spoke to him once; he mailed me in private, but i could not
help him in supporting Uganda, even though i admired what he was
doing, and often looked.
Local capacity building, and (mutual) respect for local culture
and practice, despite all sharp spikes and edges the storm causes
over time, that is surely the way to do it right.
--- Forwarded from Bram Moolenaar <Bram(a)Moolenaar.net> ---
Sender: vim_announce(a)googlegroups.com
To: vim_announce(a)googlegroups.com
Subject: Message from the family of Bram Moolenaar
Message-Id: <20230805121930.4AA8F1C0A68(a)moolenaar.net>
Date: Sat, 5 Aug 2023 13:19:30 +0100 (WEST)
From: Bram Moolenaar <Bram(a)Moolenaar.net>
Reply-To: vim_announce(a)googlegroups.com
List-ID: <vim_announce.googlegroups.com>
Dear all,
It is with a heavy heart that we have to inform you that Bram Moolenaar passed away on 3 August 2023.
Bram was suffering from a medical condition that progressed quickly over the last few weeks.
Bram dedicated a large part of his life to VIM and he was very proud of the VIM community that you are all part of.
We as family are now arranging the funeral service of Bram which will take place in The Netherlands and will be held in the Dutch language. The exact date, time and place are still to be determined.
Should you wish to attend his funeral then please send a message to funeralbram(a)gmail.com. This email address can also be used to get in contact with the family regarding other matters, bearing in mind the situation we are in right now as family.
With kind regards,
The family of Bram Moolenaar
-- End forward <20230805121930.4AA8F1C0A68(a)moolenaar.net>
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Doug McIlroy:
This reminds me of how I agonized over Mike Lesk's refusal to remove
remote execution from uucp.
====
Uux, the remote-execution mechanism I remember from uucp, had
rather better utility than the famous Sendmail back-door: it
was how uucp carried mail, by sending a file to be handed to
mailer on the remote system. It was clearly dangerous if
the remote site accepted any command, but as shipped in V7
only a short list of remote commands was allowed: mail rmail
lpr opr fsend fget. (As uucp was used to carry other things
like netnews, the list was later extended by individual sites,
and eventually moved to a file so reconfiguration needn't
recapitulate compilation).
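The gate itself was simple; a sketch of the idea (illustrative
only, not the actual V7 uuxqt.c):

    #include <string.h>

    static char *ok[] = {
        "mail", "rmail", "lpr", "opr", "fsend", "fget", 0
    };

    int permitted(char *cmd)
    {
        char **p;
        for (p = ok; *p; p++)
            if (strcmp(cmd, *p) == 0)
                return 1;
        return 0;    /* anything else is refused */
    }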
Not the safest of mechanisms, but at least in V7 it had a use
other than Mike fixing your system for you.
Is there some additional history here? e.g. was the list of
permitted commands added after arguments about safety, or
some magic command that let Mike in removed? Or was there a
different remote-execution back door I don't remember and don't
see in a quick look at uuxqt.c?
Norman Wilson
Toronto ON
> From: Will Senn
> when did emacs arrive in unix and was it a full-fledged text editor
> when it came, or was it sitting on top of some other subsystem?
Montgomery Emacs was the first I knew of; it started on PDP-11 UNIX.
According to:
https://github.com/larsbrinkhoff/emacs-history/blob/sources/docs/Montgomery…
Montgomery Emacs started in 1980 or so; here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/emacs/emacs.doc
is a manual from May, 1981.
It had pretty full EMACS functionality, but the editor was not written in an
implementation language of any kind (like the original, and like the much
later GNU Emacs); it was written in C. It did have macros for extensions, but they
were written in Emacs commands, so, like the TECO that the original was
written in, their source looks kind of like line noise. (Does anyone young
even know what line noise looks like any more? I feel so old - and I'm a
youngster compared to McIlroy!)
> Was TECO ever on unix?
I don't think it was widespread, but there was a TECO on the PDP-11 UNIXes at
MIT; until Montgomery Emacs arrived, it was the primary editor used on those
machines.
Not that most people used TECO commands for editing; early on, they added '^R
mode' to the UNIX TECO, similar to the one on ITS TECO, and a macro package
was written for it (in TECO - so again, the source looks like line noise);
the command set was like a stripped down EMACS - about a dozen command
characters total; see the table about a page down here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/teco/help
All the source, and documentation, such as it is, is available here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/teco/
but don't even think about running it. It's written in MACRO-11, and it used
a version of that hacked at MIT to run on UNIX. To build new versions of
that, you need a special linker - written in BCPL. So you also need the UNIX
BCPL compiler.
Noel
> From: Rob Pike
> There was a guy in production at Google using Unix TECO as his main
> editor when I joined in 2002.
Do you happen to know which version it was, or what it was written in?
It must have been _somebody_'s re-implementation, but I wonder who or where
(or why :-).
Noel
Steve thank you for the recollections, that is precisely the sort of story I was hoping to hear regarding your Interdata work. I had found myself quite curious why it would have wound up on a shelf after the work involved, and that makes total sense. That's a shame too, it sounds like the 8/32 could've picked up quite some steam, especially beating the VAX to the punch as a UNIX platform. But hey, it's a good thing so much else precipitated from your work!
Also, those sorts of microarchitectural bugs keep me up at night. For all the good in RISC-V there are also now maaaaany fabs with more license than ever to pump out questionable ICs. Combine that with questionable boards with strange bus architectures, and gee our present time sure does present ripe opportunities to experiment with tackling those sorts of problems in software. Can't say I've had the pleasure but it would be nice to still be able to fix stuff with a wire wrap in the field...
- Matt G.
P.S. TUHS cc as promised, certainly relevant information re: Interdata 8/32 UNIX.
------- Original Message -------
On Friday, August 4th, 2023 at 6:17 PM, scj(a)yaccman.com <scj(a)yaccman.com> wrote:
> Sorry for the year's delay in responding... I wrote the compiler for the Interdata, and Dennis and I did much of the debugging. The Interdata had much easier addressing for storage: the IBM machine made you load a register, and then you had a limited offset from that register that you could use. I think IBM was 10 bits, maybe 12. But all of it way too small to run megabyte-sized programs. The Interdata allowed a larger memory offset and pretty well eliminated the offsets as a problem. I seem to recall some muttering from Dennis and Ken about the I/O structure, which was apparently somewhat strange but much less weird than the IBM.
>
> Also, IBM and Interdata were big-endian, and the PDP was little-endian. This gave Dennis and Ken some problems since it was easy to get the wrong endian, which blew gaskets when executed or copied into the file system. Eventually, we got the machine running, and it was quite nice: true 32-bit computing, it was reasonably fast, and we got the low-level quirks out (including a famous run-in with the "you are not expected to understand this" code in the kernel, which, it turned out, was a prophecy that came true). On the whole, the project was so successful that we set up a high-level meeting with Interdata to demo and discuss cooperation. And then "the bug" hit. The machine would be running fine, and then Blam! it had leapt into low memory and aborted with no hint as to what or where the fault was.
>
> We finally tracked down the problem. The Interdata was a microcode machine. And older Unix system calls would return -1 if they failed. In V7, we fixed this to return 0, but there was still a lot of user code that used the old convention. When the Interdata saw a request to load from address -1, it first noticed that the integer load was not on an address divisible by 4, and jumped to a location in the microcode and executed a couple of microinstructions. But then it also noticed that the address was out of range and entered the microcode again, overwriting the original address that caused the problem and freezing the machine with no indication of where the problem was. It took us only a day or two to see what the problem was, and it was hardware, and they would need to fix it. We had our meeting with Interdata, gave a pretty good sales pitch on Unix, and then said that the bug we had found was fatal and needed to be fixed or the deal was off. The bottom line: they didn't want to fix the bug in the hardware. They did come out with a Unix port several years later, but I was out of the loop for that one, and the Vax (with the UCB paging code) had become the machine of choice...
>
> ---
>
> On 2023-07-25 16:23, segaloco via COFF wrote:
>
>> So I've been studying the Interdata 32-bit machines a bit more closely lately and I'm wondering if someone who was there at the time has the scoop on what happened to them. The Wikipedia article gives some good info on their history but not really anything about, say, failed follow-ons that tanked their market, significant reasons for avoidance, or anything like that. I also find myself wondering why Bell didn't do anything with the Interdata work after springboarding further portability efforts while several other little streams, even those unreleased like the S/370 and 8086 ports seemed to stick around internally for longer. Were Interdata machines problematic in some sort of way, or was it merely fate, with more popular minis from DEC simply spacing them out of the market? Part of my interest too comes from what influence the legacy of Interdata may have had on Perkin-Elmer, as I've worked with Perkin-Elmer analytical equipment several times in the chemistry-side of my career and am curious if I was ever operating some vague descendent of Interdata designs in the embedded controllers in say one of my mass specs back when.
>>
>> - Matt G.
>>
>> P.S. Looking for more general history hence COFF, but towards a more UNIXy end, if there's any sort of missing scoop on the life and times of the Bell Interdata 8/32 port, for instance, whether it ever saw literally any production use in the System or was only ever on the machines being used for the portability work, I'm sure that could benefit from a CC to TUHS if that history winds up in this thread.
> Most of the time I'd rather not have to care whether the thing
> I'm printing is a string, or a pointer, or an integer, or whatever:
> I just want to see its value.
> Go has %v for exactly this. It's very nice for debugging.
Why so verbose? In Basic, PRINT required no formatting directives at all.
Doug
> From: Clem Cole
> first two subsystems for the 11 that ran out of text space were indeed
> vi and Pascal subsystems
Those were at Berkeley. We've established that S-I&D were in V6 when it was
released in May, 1975 - so my question is 'what was Bell doing in 1975 that
needed more than 64KB?'
The kernel, yeah, it could definitely use S-I&D on a larger system
(especially when you remember that stock V6 didn't play any tricks with
overlays, and also dedicated one segment - the correct term, used in the 1972
-11/45 processor manual - to the user structure, and one to the I/O page,
limiting the non-S-I&D kernel to 48KB). But what user commands?
It happens that I have a complete dump of one of the MIT systems, so I had a
look to see what _we_ were running S-I&D on. Here's the list from /bin (for
some reason that machine doesn't have a /usr/bin):
a68
a86
c86
emacs
lisp
ndd
send
teco
The lisp wasn't a serious use; I think the only thing we ever used it for was
'doctor'. So, two editors, a couple of language tools, an email tool (not
sure why that one got it - maybe for creating large outgoing messages). (The
ndd is probably to allow the biggest possible buffers.)
Nothing in /etc, and in /lib, just lint1 and lint2 (lint, AFAICT, post-dates
V6). Not a lot.
So now I'm really curious what Bell was using S-I&D for. (If I weren't lazy,
I'd pull the V6 distro - which is only available as RK images, and individual
files, alas - and look in /bin and everywhere and see if I can find anything.
I suspect not, though.)
Anyone have any guesses/suggestions? Maybe some custom applications?
Noel