Moving to COFF since, while this is a UNIX issue, it's really about attitude,
experience, and perspective.
On Thu, Dec 30, 2021 at 8:01 PM Rob Pike <robpike(a)gmail.com> wrote:
> Grumpy hat on.
>
> Sometimes the Unix community suffers from the twin attitudes of a)
> believing if it can't be done perfectly, any improvement shouldn't be
> attempted at all and b) it's already done as well as is possible anyway.
>
> I disagree with both of these positions, obviously, but have given up
> pushing against them.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
> Grumpy hat off.
>
> -rob
>
While I often agree with you and am a huge fan of your work, both written
and programming, I am going to take a different position:
I am very much into researching different solutions and love exploring them
and seeing how to apply the lessons, but *just because we can make a
change*, *does not always mean we should*. IMO: *Economics has to play
into the equation*.
I offer the IPv4 to IPv6 fiasco as an example of a change made because we
could (and we thought it would help - hey, I did in the early 1990s), but it
failed for economic reasons. In the end, any real change has to take into
account some level of economics.
The example of the differences in the shells is actually a different issue
-- that was territorial and not economics -- each vendor adding stuff that
helped them (and drove ISVs/end users of multiple platforms crazy). The
reality with SunOS sh vs Ultrix sh vs HP-UX sh vs System V (AT&T sh) was yet
another "similar but different" -- every manufacturer's V7-derivative sh was
a little different -- including AT&T's, Korn et al. For that matter, you
(Rob) created a new command syntax with Plan9 [although you did not try to
be and never claimed to be V7 compatible -- to your point, you break things
where you think it matters, and as a researcher I accept that]. But because
all the manufacturers were a little different, that was exactly why IEEE
said -- wait a minute -- let's define a base syntax which will work
everywhere, something we can all agree on, and if we all support it --
great. We did that, and we call it POSIX (and because it was designed by
compromise and committee - like a camel, it has some humps).
*But that does mean compromise -- some agreed 'sh' basics need to stay at
the base level.*
The problem Ted and Larry describe is real ... research *vs.* production.
So it begs the question: at what point does it become sensible (worth
it/economically viable) to move on?
Apple famously breaks things, and it drives me bonkers because many (most, I
would suggest) of those changes are hardly worth it -- be it my iPhone or
my Mac. I just want to use the darned thing. BTW: Last week, the clowns at
Tesla rolled out a new UI for my Model S --- ugh -- because they could
(now I'm fumbling trying to deal with the climate system or the radio -- it
would not have been bad if they had rolled out the new UI on a simulator for
my iPad so I could at least get used to it -- but I'm having to learn it
live -- what a PITA -- that really makes me grumpy).
What I ask this august body to consider, before we start looking at these
changes, is what we are really getting in return when a new
implementation breaks something that worked before. *e.g.* I did not think
systemd bought end users much value; much like IPv6 in practice, it
was thought to solve many problems, but did not buy that much and has
caused (and continues to cause) many more.
In biology, every so often we have an "ice age" that kills a few things off,
and we get to start over. That rarely happens in technology, except when a
real Christensen-style disruption takes place -- which is based on economics
-- a new market values the new idea and the old market dies off. I believe
that, coming from the batch/mainframe 1960s/early-70s world, Unix was just
that -- we got to start over because the economics of 'open systems' and the
>>IP<< being 'freely available' [compared to VMS and other really
proprietary systems] did kill them off. I also think that the economics
of completely free (Linux) ended up killing the custom Unix diversions.
Frankly, if (at the beginning) Plan9 had been a tad easier/cheaper/more
economical for >>everyone<< in the community to obtain (unlike at the
original Unix release time, Plan9 was not under the same rules because AT&T
was operating under different rules, and HW costs had changed things), it
>>might<< have been the strong strain that killed off the old. If IPv6 had
been (in practice) cheaper to use than IPv4 [which is what I personally
thought the ISPs would do with it - since it had been designed to help them]
and not made a premium feature (i.e., they had made it economical to
change), it might have killed off IPv4.
Look at 7 decades of programming language design: just being 'better' is
not good enough. As I have said here and in many other places, the reality
is that Fortran still pays the salary of people like me in the HPC area [and
I don't see Julia or, for that matter, my own company's pretty flower - Data
Parallel C++ - making inroads soon]. It's possible that Rust as a systems
programming language >>might<< prove economical enough to replace C. I
personally hope Go makes the inroads to replace C++ in user space. But for
either to do that, there has to be an economic reason - no-brainer style for
management.
What got us here was a discussion of the original implementation of
directory files, WRT links and how paths are traversed. The basic
argument comes from issues with how and when objects are named. Rob, I
agree with you that just because UNIX (or any other system) used a scheme
previously does not make it the end-all. And I do believe that rethinking
some of the choices made 5-6 decades ago is in order. But I ask that the
analysis of the new versus the old take into account how to mitigate the
damage done. If its economics prove valuable, the evolution to using it
will allow the stronger strain to take over; but just because something is
new does not make it valuable.
Respectfully ....
Happy new year everyone and hopefully 2022 proves a positive time for all
of you.
Clem
Moving to COFF, perhaps prematurely, but...
It feels weird to be a Unix native (which I consider myself: got my first
taste of Irix and SVR3 in 1989, went to college where it was a Sun-mostly
environment, started running Linux on my own machines in 1992 and never
stopped). (For purposes of this discussion, of course Linux is Unix.)
It feels weird the same way it was weird when I was working for Express
Scripts, and then ESRX bought Medco, and all of a sudden we were the 500-lb
Gorilla. That's why I left: we (particularly my little group) had been
doing some fairly cool and innovative stuff, and after that deal closed, we
switched over entirely to playing defense, and it got really boring really
fast. My biggest win after that was showing that Pega ran perfectly fine
on Tomcat, which caused IBM to say something like "oh did we say $5 million
a year to license Websphere App Server? Uh...we meant $50K." So I saved
them a lot of money but it sucked to watch several months' work flushed
down the toilet, even though the savings to the company was many times my
salary for those months.
But the weird part is similar: Unix won. Windows *lost*. Sure, corporate
desktops still mostly run Windows, and those people who use it mostly hate
it. But people who like using computers...use Macs (or, sure, Linux, and
then there are those weirdos like me who enjoy running all sorts of
ancient-or-niche-systems, many of which are Unix). And all the people who
don't care do computing tasks on their phones, which are running either
Android--a Unix--or iOS--also a Unix. It's ubiquitous. It's the air you
breathe. It's no longer strange to be a Unix user; it means you use a
21st-century electronic device.
And, sure, it's got its warts, but it's still basically the least-worst
thing out there. And it continues to flabbergast me that a typesetting
system designed to run on single-processor 16-bit machines has, basically,
conquered the world.
Adam
P.S. It's also about time, he said with a sigh of relief, having been an
OS/2 partisan, and a BeOS partisan, back in the day. Nice to back a
winning horse for once.
On Thu, Dec 30, 2021 at 6:46 PM Bakul Shah <bakul(a)iitbombay.org> wrote:
> ?
>
> I was just explaining Ts'o's point, not agreeing with it. The first
> example I
> gave works just fine on plan9 (unlike on unix). And since it doesn't allow
> renames, the scenario Ts'o outlines can't happen there! But we were
> discussing Unix here.
>
> As for symlinks, if we have to have them, storing a path actually makes
> their
> use less surprising.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
>
> No argument here. Perhaps you can suggest a path for fixing?
>
> On Dec 30, 2021, at 5:00 PM, Rob Pike <robpike(a)gmail.com> wrote:
>
> Grumpy hat on.
>
> Sometimes the Unix community suffers from the twin attitudes of a)
> believing if it can't be done perfectly, any improvement shouldn't be
> attempted at all and b) it's already done as well as is possible anyway.
>
> I disagree with both of these positions, obviously, but have given up
> pushing against them.
>
> We're in the 6th decade of Unix and we still suffer from unintended,
> fixable consequences of decisions made long long ago.
>
> Grumpy hat off.
>
> -rob
>
>
> On Fri, Dec 31, 2021 at 11:44 AM Bakul Shah <bakul(a)iitbombay.org> wrote:
>
>> On Dec 30, 2021, at 2:31 PM, Dan Cross <crossd(a)gmail.com> wrote:
>> >
>> > On Thu, Dec 30, 2021 at 11:41 AM Theodore Ts'o <tytso(a)mit.edu> wrote:
>> >>
>> >> The other problem with storing the path as a string is that if
>> >> higher-level directories get renamed, the path would become
>> >> invalidated. If you store the cwd as "/foo/bar/baz/quux", and someone
>> >> renames "/foo/bar" to "/foo/sadness" the cwd-stored-as-a-string would
>> >> become invalidated.
>> >
>> > Why? Presumably as you traversed the filesystem, you'd cache, (path
>> > component, inode) pairs and keep a ref on the inode. For any given
>> > file, including $CWD, you'd know it's pathname from the root as you
>> > accessed it, but if it got renamed, it wouldn't matter because you'd
>> > have cached a reference to the inode.
>>
>> Without the ".." entry you can't map a dir inode back to a path.
>> Note that something similar can happen even today:
>>
>> $ mkdir ~/a; cd ~/a; rm -rf ~/a; cd ..
>> cd: no such file or directory: ..
>>
>> $ mkdir -p ~/a/b; ln -s ~/a/b b; cd b; mv ~/a/b ~/a/c; cd ../b
>> ls: ../b: No such file or directory
>>
>> You can't protect the user from every such case. Storing a path
>> instead of the cwd inode simply changes the symptoms.
>>
>>
>>
>
(Moving to COFF, tuhs on bcc.)
On Tue, Dec 28, 2021 at 01:45:14PM -0800, Greg A. Woods wrote:
> > There have been patches proposed, but it turns out the sticky wicket
> > is that we're out of signal numbers on most architectures.
>
> Huh. What an interesting "excuse"! (Not that I know anything useful
> about the implementation in Linux....)
If I recall correctly, the last time someone tried to submit patches,
they overloaded some signal that was in use, and it was NACK'ed on
that basis. I personally didn't care because, on my systems, I'll use a
GUI program like xload or, if I need something more detailed, GKrellM.
(And GKrellM can be used to remotely monitor servers as well.)
> > SIGLOST - Term File lock lost (unused)
> > SIGSTKFLT - Term Stack fault on coprocessor (unused)
>
> If SIGLOST were used/needed it would seem like a very bad system design.
It's used in Solaris to report that the client NFSv4 code could not
recover a file lock on recovery. So one of the first
places to look would be to see if Ganesha (an open-source NFSv4
user-space client) isn't using SIGLOST (or might have plans to use
SIGLOST in the future).
For a remote / distributed file system, Brewer's Theorem applies
--- Consistency, Availability, Partition tolerance --- choose any
two; you're not always going to be able to get all three.
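The trade-off is easy to see in a toy model (this is just an illustration of Brewer's point, not any real file-system code; the class and mode names are invented):

```python
# Toy model of the CAP trade-off: two replicas of one value, with a flag
# standing in for a network partition. A "CP" replica refuses writes while
# partitioned (consistent but unavailable); an "AP" replica accepts them
# (available, but the replicas diverge). All names here are invented.

class Replica:
    def __init__(self, mode):
        self.mode = mode            # "CP" or "AP"
        self.value = None
        self.partitioned = False

    def write(self, value, peer):
        if self.partitioned:
            if self.mode == "CP":
                raise RuntimeError("partitioned: write refused")
            self.value = value      # AP: accept locally and diverge
        else:
            self.value = value
            peer.value = value      # network up: replicate to the peer

a, b = Replica("AP"), Replica("AP")
a.write("x", b)                     # both replicas hold "x"
a.partitioned = True
a.write("y", b)                     # a holds "y", b still holds "x"
```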
Cheers,
- Ted
On my Windows 11 notebook with WSL2 + Linux I got as default
rubl@DESKTOP-NQR082T:~$ echo $PS1
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
rubl@DESKTOP-NQR082T:~$ uname -a
Linux DESKTOP-NQR082T 5.10.74.3-microsoft-standard-WSL2+ #4 SMP Sun Dec 19
16:25:10 +07 2021 x86_64 x86_64 x86_64 GNU/Linux
rubl@DESKTOP-NQR082T:~$
--
The more I learn the better I understand I know nothing.
On 12/22/21, Adam Thornton <athornton(a)gmail.com> wrote:
> MacOS finally pushed me to zsh. So I went all the way and installed
> oh-my-zsh. It makes me feel very dirty, and I have a two-line prompt (!!),
> but I can't deny it's convenient.
>
> tickets/DM-32983 ✗
> adam@m1-wired:~/git/jenkins-dm-jobs$
>
> (and in my terminal, the X glyph next to my git branch showing the status
> is dirty is red while the branch name is green)
>
> and if something doesn't exit with rc=0...
>
> adam@m1-wired:~/git/jenkins-dm-jobs$ fart
> zsh: command not found: fart
> tickets/DM-32983 ✗127 ⚠️
> adam@m1-wired:~/git/jenkins-dm-jobs$
>
> Then I also get the little warning glyph and the rc of the last command in
> my prompt.
>
> But then I'm also now using Fira Code with ligatures in my terminal, so
> I've pretty much gone full Red Lightsaber.
I try to keep my prompt as simple as possible. For years I have been using:
moon $
That's it. No fancy colors, not even displaying the current working
directory. I have an alias 'p' for that.
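A minimal sketch of that setup (assuming bash/sh; the alias name 'p' is from the mail above):

```shell
# Minimal prompt: a fixed string, no colors, no working directory.
PS1='moon $ '
# Show the working directory only on demand.
alias p='pwd'
```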
--Andy
On 2021-12-23 11:00, Larry McVoy wrote:
> On Thu, Dec 23, 2021 at 03:29:18PM +0000, Dr Iain Maoileoin wrote:
>>> Probably boomer doing math wrong.
>> I might get flamed for this comment, but is a number divided by a number not
>> arithmetic? I can't see any maths in there.
> That's just a language thing, lots of people in the US call arithmetic
> math. I'm 100% positive that that is not just me.
Classes in elementary grades are called "math classes" (but then there
is Serre's book).
N.
-tuhs +coff
On Thu, Dec 23, 2021 at 11:47 AM Dr Iain Maoileoin <
iain(a)csp-partnership.co.uk> wrote:
> I totally agree. My question is about language use (or drift) - nothing
> else. In Scotland - amongst the young - "Arithmetic" is now referred
> to as "Maths". I am aware of the transition but cant understand what
> caused it to happen! I dont know if other countries had/have the same
> slide from a specific to a general - hence the questions - nothing deeper.
>
Language change is inexplicable in general. About all we know is that some
directions of change are more likely than others: we no more know *why*
language changes than we know *why* the laws of physics are what they are.
Both widening (_dog_ once meant 'mastiff') and narrowing (_deer_ once meant
'animal') are among the commonest forms of semantic change.
In particular, in the 19C _arithmetic_ meant 'number theory', and so the
part concerned with the computation of "ambition, distraction,
uglification, and derision" (Lewis Carroll) was _elementary arithmetic_.
(Before that it was _algorism_.) When _higher arithmetic_ got its own
name, the _elementary_ part was dropped in accordance with Grice's Maxim of
Quantity ("be as informative as you can, giving as much information as
necessary, but no more"). This did not happen to _algebra_, which still
can mean either elementary or abstract algebra, still less to _geometry_.
In addition, from the teacher's viewpoint, school mathematics is a
continuum, including the elementary parts of arithmetic, algebra, geometry,
trigonometry, and in recent times probability theory and statistics, for
which there is no name other than _mathematics_ when taken collectively.
> In lower secondary school we would go to both Arithmetic AND also to
> Maths classes.
>
What was taught in the latter?
-tuhs +coff
On Wed, Dec 22, 2021 at 1:30 AM <jason-tuhs(a)shalott.net> wrote:
> As a vendor or distributor, you would care. Anyone doing an OS or other
> software distribution (think the BSDs, of course;
There is no legal reason why the BSDs can't distribute GPLed software;
indeed, they did so for many years. Their objection is purely ideological.
> but also think Apple or
> Microsoft) needs to care.
Apple and Microsoft can buy up, outspend, out-lawyer, or just outwait
anyone suing them for infringement. Their only reasons for not doing so
are reputational.
> Anyone selling a hardware device with embedded
> software (think switches/routers; think IOT devices; think consumer
> devices like DVRs; etc) needs to care.
Only if they are determined to infringe. Obeying the GPL's rules (most
often for BusyBox) is straightforward, and the vast majority of infringers
(per the FSF's legal team) are not aware that they have done anything wrong
and are willing to comply once notified, which cures the defect (much less
of a penalty than for most infringements). The ex-infringers do not seem
to consider this a serious competitive disadvantage. GPL licensors are
generous sharers, but you have to be willing to share yourself.
I saw this dynamic in action while working for Reuters; we were licensing
our health-related news to websites, and I would occasionally google for
fragments of our articles. When I found one on a site I didn't recognize,
I'd pass the website to Sales, who would sweetly point out that
infringement could cost them up to $15,000 per article, and for a very
reasonable price.... They were happy to sign up once they were made aware
that just because something is available on the Internet doesn't mean you
can republish it on your site.
GPL (or similar "virally"
> licensed) software carries legal implications for anyone selling or
> distributing products that contain such software; and this can be a
> motivation to use software with less-restrictive license terms.
Only to the victims of FUD. Reusing source code is one thing: repackaging
programs is another.
I'll say no more about this here.
MacOS finally pushed me to zsh. So I went all the way and installed
oh-my-zsh. It makes me feel very dirty, and I have a two-line prompt (!!),
but I can't deny it's convenient.
tickets/DM-32983 ✗
adam@m1-wired:~/git/jenkins-dm-jobs$
(and in my terminal, the X glyph next to my git branch showing the status
is dirty is red while the branch name is green)
and if something doesn't exit with rc=0...
adam@m1-wired:~/git/jenkins-dm-jobs$ fart
zsh: command not found: fart
tickets/DM-32983 ✗127 ⚠️
adam@m1-wired:~/git/jenkins-dm-jobs$
Then I also get the little warning glyph and the rc of the last command in
my prompt.
But then I'm also now using Fira Code with ligatures in my terminal, so
I've pretty much gone full Red Lightsaber.
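For anyone who wants the exit-code part of that prompt without oh-my-zsh, a plain-zsh sketch (the prompt layout here is invented, not the actual theme's):

```shell
# zsh only: show the last command's exit code in red, but only when it
# is nonzero. %(?.then.else) tests the last exit status; %? prints it.
setopt PROMPT_SUBST
PROMPT='%n@%m:%~ %(?..%F{red}[exit %?]%f )$ '
```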
Adam
On Wed, Dec 22, 2021 at 7:41 AM Norman Wilson <norman(a)oclsc.org> wrote:
> Thomas Paulsen:
>
> bash is clearly more advanced. ksh is retro computing.
>
> ====
>
> Shell wars are, in the end, no more interesting than editor wars.
>
> I use bash on Linux systems because it's the least-poorly
> supported of the Bourne-family shells, besides which bash
> is there by default. Ksh isn't.
>
> I use ksh on OpenBSD systems because it's the least-poorly
> supported of the Bourne-family shells, besides which ksh
> is there by default. Bash isn't.
>
> I don't actually care for most of the extra crap in either
> of those shells. I don't want my shell to do line editing
> or auto-completion, and I find the csh-derived history
> mechanisms more annoying than useful so I turn them off
> too. To my mind, the Research 10/e sh had it about right,
> including the simple way functions were exported and the
> whatis built-in that told you whether something was a
> variable or a shell function or an external executable,
> and printed the first two in forms easily edited on the
> screen and re-used.
>
> Terminal programs that don't let you easily edit input
> or output from the screen and re-send it, and programs
> that abet them by spouting gratuitous ANSI control
> sequences: now THAT's what I call retro-computing.
>
> Probably further discussion of any of this belongs in
> COFF.
>
> Norman Wilson
> Toronto ON
>
Chalk this up to "pointless hack" but I know many COFF readers (and
presumably some multicians) are also ham radio enthusiasts, so perhaps some
folks will find this interesting. I have succeeded in what I suspect may be
a first: providing a direct interface from AX.25 amateur packet radio
connections to a Multics installation (and TOPS-20).
I've been interested in packet radio for a while and have run an AX.25
station at home for some time, and I have configured things so that
incoming radio connections to a particular SSID proxy into telnet to a Unix
machine on my AMPRNet subnet. I don't run the traditional AX.25 "node"
software, but can log directly into a timesharing machine in my basement,
which is pretty cool.
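On Linux, that kind of SSID-to-telnet proxying is typically wired up through ax25d; a sketch of what such a configuration might look like (the callsign and host address are from this message, the port name is invented, and the field layout should be checked against ax25d(8) before use):

```shell
# /etc/ax25/ax25d.conf -- illustrative only; field meanings per ax25d(8).
# Incoming AX.25 connections to KZ2X-1 on AX.25 port "ax0" get handed to
# a telnet client pointed at the timesharing host.
[KZ2X-1 VIA ax0]
default * * * * * * - root /usr/bin/telnet telnet 44.44.107.8
```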
Some recent upgrades provided an opportunity for a project interfacing
"retro" computer instances with packet radio. AX.25 is a slow medium: 1200
baud (this is on 2m), packet-switched over a high-loss, high-latency RF
path. While my Unix machine does all right, it occurs to me that systems
designed in the teletype era might actually be better suited to that kind
of communications channel.
So I set up a DPS8/M emulator and configured the packet node to forward an
SSID to Multics. After some tweaking to clean up a bizarre number of ASCII
NUL characters coming from the emulator (I suspect a bug there; I'm going
to email those folks about that), things are working pretty well: I can
connect into the system interactively and even use qedx to write PL/1
programs. To my knowledge, no one has done this with Multics before. A
small session transcript follows at the end of this message (sorry, no
PL/1). It's not fast, so one definitely comes to appreciate the brevity of
expression in the interface.
While I was at it, I also installed TOPS-20 on an emulated DECSYSTEM-20 and
got it talking over AX.25 as well. Now, I'd like to set up an interface
reminiscent of a PAD or TIP allowing access to all of these machines,
muxing a single SSID. Sadly I have no idea what the user interface for
those things looked like: if anyone has pointers I can use to craft some
software, I'd be happy to hear about it!
Pointless perhaps, but fun!
- Dan C.
PS: I'm happy to set folks up with accounts, if they'd like. Shoot me an
email with your call sign. If you're in the greater Boston area, try KZ2X-1
and KZ2X-3 on 145.090 MHz.
###CONNECTED TO NODE BROCK(W1MV-7) CHANNEL A
Welcome to BROCK (W1MV-7) in Brockton, Mass
ENTER COMMAND: B,C,J,N, or Help ? C KZ2X-3
###LINK MADE
Trying 44.44.107.8...
Connected to sim.kz2x.ampr.org.
Escape character is 'off'.
HSLA Port
(d.h001,d.h002,d.h003,d.h004,d.h005,d.h006,d.h007,d.h008,d.h009,d.h010,d.h011,d.h012,d.h013,d.h014,d.h015,d.h016,d.h017,d.h018,d.h019,d.h020
,d.h021,d.h022,d.h023,d.h024,d.h025,d.h026,d.h027,d.h028,d.h029,d.h030,d.h031)?
Attached to line d.h001
Multics MR12.7: KZ2X Multics (Channel d.h001)
Load = 6.0 out of 90.0 units: users = 6, 12/21/21 1718.0 est Tue
login KZ2X
Password:
You are protected from preemption until 17:18.
KZ2X.Ham logged in 12/21/21 1718.6 est Tue from ASCII terminal "none".
Last login 12/21/21 1717.0 est Tue from ASCII terminal "none".
No mail.
r 17:18 0.376 54
ls
Segments = 5, Lengths = 4.
r w 1 KZ2X.profile
r w 1 start_up.ec
r w 1 hello.pl1
0 KZ2X.mbx
r w 1 KZ2X.value
r 17:19 0.022 0
who -a -lg
Multics MR12.7; KZ2X Multics
Load = 7.0 out of 90.0 units; users = 7, 2 interactive, 5 daemons.
Absentee users = 0 background; Max background absentee users = 3
System up since 12/21/21 0922.8 est Tue
Last shutdown was at 12/21/21 0917.8 est Tue
Login at TTY Load User ID
12/21/21 09:22 cord 1.0 IO.SysDaemon
09:22 bk 1.0 Backup.SysDaemon
09:22 prta 1.0 IO.SysDaemon
09:22 ut 1.0 Utility.SysDaemon
09:22 vinc 1.0 Volume_Dumper.Daemon
16:41 none 1.0 Cross.SysEng
17:18 none 1.0 KZ2X.Ham
r 17:19 0.036 0
logout
KZ2X.Ham logged out 12/21/21 1722.9 est Tue
CPU usage 0 sec, memory usage 0.2 units, cost $0.12.
###DISCONNECTED BY KZ2X-3 AT NODE BROCK
OK, this is my last _civil_ request to stop email-bombing both lists with
traffic. In the future, I will say publicly _exactly_ what I think - and if
screens still had phosphor, it would probably peel it off.
I can see that there are cases when one might validly want to post to both
lists - e.g. when starting a new discussion. However, one of the two should
_always_ be BCC'd, so that simple use of reply won't generate a copy to
both. I would suggest that one might say something like 'this discussion is
probably best continued on the <foo> list' - which could be seeded by BCCing
the _other_.
Thank you.
Noel
Probably time to move this to COFF, but
along the line of Fission for Program Comprehension....
I wonder how many of you don't know about Don Lancaster.
Pioneer in home computing back when that meant something, inventor of a
very low cost 1970s video terminal (the TV Typewriter), tremendously
skilled hacker, brilliant guy.
Also still alive, lives a couple hours away from me in Safford, AZ, and has
been doing fantastic research on Native American hanging canals for the
last couple decades.
Anyway: he wrote a magnificent piece on how to understand a (6502) program
from its disassembly, which reminded me of Gibbons's work:
https://www.tinaja.com/ebooks/tearing_rework.pdf
I don't think Don ever had a lot of crossover with the more academic world
of Unix people, but he's one of my heroes and I have learned a hell of a
lot from his works.
Adam
The ARPAnet reached four nodes on this day in 1969; at least one "history"
site reckoned the third node was connected in 1977 (and I'm still waiting
for a reply to my correction). Well, I can believe that perhaps there
were only three left by then...
According to my notes, the nodes were UCSB, UCLA, SRI, and Utah.
-- Dave
In keeping with the list charters, I'm moving this to COFF.
On Thursday, 2 December 2021 at 11:30:35 -0500, John Cowan wrote:
> On Thu, Dec 2, 2021 at 12:45 AM Henry Bent <henry.r.bent(a)gmail.com> wrote:
>
>> The Byte article (the scan of which I am very grateful for; not having to
>> go trawling through the stacks at the Oberlin College library is always a
>> plus) claims that the tools have been implemented on:
>>
>> Tandem
>
> That would be me; at least I registered it with Addison-Wesley,
> although someone else may have implemented it independently.
I recall something about this, but I didn't find very much in my
collection of old email messages. The most promising was:
Date: 87-11-06 09:47
From: LEHEY_GREG
To: ANDERSON_KEN @CTS
Subject: ?? Is there a "make"-like utility for Tandem ??
In Reply to: 87-11-05 18:59 FROM ANDERSON_KEN @CTS
3:?? Is there a "make"-like utility for Tandem ??
No, but I'd LOVE to have one. Ask Dick Thomas - in his spare time, he
converts software tools to Tandem.
Did you have contact with Dick?
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
Moving to COFF as this is less UNIX and more computer architecture and
design style...
On Tue, Nov 30, 2021 at 3:07 AM <pbirkel(a)gmail.com> wrote:
> Given a random logic design it's efficient to organize the ISA encoding
> to maximize its regularity.
> Probably also of some benefit to compilers in a memory-constrained
> environment?
>
To be honest, I think that the regularity of the instruction set is less
for the logic and more for the compiler. Around the time the 11 was being
created, Bill Wulf (and I think Gordon as co-author) wrote a paper about how
instruction set design - the regularity, the lack of special cases - made it
easier to write a code optimizer. Remember, a couple of former Bell and
Wulf students were heavily involved in the 11 (Strecker being the main one
I can think of off the top of my head).
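A concrete illustration of that regularity (a toy decoder, not DEC code; the field layout is the documented PDP-11 double-operand format):

```python
# Toy decoder for PDP-11 double-operand instructions: a 4-bit opcode and
# two identical 6-bit operand fields (3-bit addressing mode, 3-bit
# register). One decoder covers every instruction of this class -- the
# kind of no-special-cases regularity that makes an optimizer's life easy.

DOUBLE_OP = {0o01: "MOV", 0o02: "CMP", 0o03: "BIT",
             0o04: "BIC", 0o05: "BIS", 0o06: "ADD"}

def decode(word):
    op  = (word >> 12) & 0o17       # top 4 bits: opcode
    src = (word >> 6) & 0o77        # 6-bit source operand field
    dst = word & 0o77               # 6-bit destination operand field
    return (DOUBLE_OP.get(op),
            (src >> 3) & 7, src & 7,    # source mode, source register
            (dst >> 3) & 7, dst & 7)    # dest mode, dest register

# decode(0o010203) -> ("MOV", 0, 2, 0, 3), i.e. MOV R2,R3
```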
Also remember that Gordon and the CMU types of those days were beginning to
create what we now call Hardware Description Languages (HDLs). Gordon
describes in "Bell and Newell" (the definitive Computer Structures book of
the 1970s) his Processor-Memory-Switch (PMS) diagrams. The original 11
(which would become the 11/20) was first described as a set of PMS
diagrams. PMS, of course, begat the Instruction Set Processor Language
(ISPL) that Mario created a couple of years later. While ISPL came after
the 11 had been designed, ISPL could synthesize a system using PDP-16 RTM
modules. A later version from our old friend from UNIX land, Ted Kowalski
[his PhD thesis actually], could spit out TTL from the later ISPS
simulator and compiler [the S being simulation]. ISPS would beget VHDL,
which begat today's Verilog/SystemVerilog.
IIRC it was a lecture Gordon gave us WRT microcode *vs.* direct
logic. He offered that microcode had the advantage that you could more
easily update things in the field, but he also felt that if you could catch
the errors before you released the HW to the world, and if you could then
directly synthesize, that would be even better - no errors/no need to
update. That said, by the 11/40, DEC started to microcode the 11s,
although as you point out the 11/34 and later 11/44 were more direct
logic than the 11/40 - and of course Wulf would create the 11/40e - which
had writable control store, so they could add some instructions and
eventually build C.mmp.
Over to COFF...
On 2021-11-23 02:57, Henry Bent wrote:
> On Mon, 22 Nov 2021 at 21:31, Mary Ann Horton <mah(a)mhorton.net
> <mailto:mah@mhorton.net>> wrote:
>
> PL/I was my favorite mainframe programming language my last two
> years as
> an undergrad. I liked how it incorporated ideas from FORTRAN,
> ALGOL, and
> COBOL. My student job was to enhance a PL/I package for a History
> professor.
>
>
> What language were the PL/I compilers written in?
From AFIPS '69 (Fall): "The Multics compiler is the only PL/1 compiler
written in PL/1 [...]"
HOPL I has a talk on the early history of PL/1 (born as NPL) but nothing
on the question.
N.
>
> Wikipedia claims that IBM is still developing a PL/I compiler, which I
> suppose I have no reason to disbelieve, but I'm very curious as to who
> is using it and for what purpose.
>
> -Henry
Moving to COFF where this probably belongs, because it's less UNIX and more
PL oriented.
On Tue, Nov 23, 2021 at 3:00 AM Henry Bent <henry.r.bent(a)gmail.com> wrote:
> What language were the PL/I compilers written in?
>
I don't know about anyone else, but the VAX PL/1 front-end was bought by
DEC from Freiburghouse (??SP??) in Framingham, MA. It was written in PL/1
on a Multics system. The front-end was the same one that Pr1me used,
although Pr1me also bought their Fortran, which DEC did not. [FWIW: the
DEC/Intel Fortran front-end was written in Pascal -- still is, last time I
talked to the compiler folks.]
I do not know what the Freiburghouse folks used for a compiler-compiler
(Steve or Doug might), but >>I think<< it might not have used one.
Cutler famously led the new backend for it and had to shuttle tapes from
MIT to ZKO in Nashua during the development. The backend was written in a
combination of PL/1, BLISS32, and assembler. Once the compiler could
self-host, everything moved to ZKO.
That compiler originally targeted VMS, but was moved to Unix/VAX at one
point as someone else pointed out.
When the new GEM compilers came along about 10-15 years later, I was under
the impression that the original Freiburghouse/Cutler hacked front-end was
reworked to use the GEM backend system, as GEM used BLISS, C for the
runtimes, and a small amount of assembler as needed for each ISA [and I
believe it continues to be the same from the VSI folks today]. GEM-based
PL/1 was released on Alpha when I was still at DEC, and I believe it was
released for Itanium a few years later [by Intel under contract to
Compaq/HP]. VSI has built a GEM-based Intel*64 and is releasing/has
released VMS for same using it; I would suspect they moved PL/1 over also
[their target customer is the traditional DEC VMS customer that still has
active applications and wants to run them on modern HW]. I'll have to ask
one of my former coworkers, who at one point was, and I still think is, the
main compiler guy at VSI/resident GEM expert.
> Wikipedia claims that IBM is still developing a PL/I compiler, which I
> suppose I have no reason to disbelieve, but I'm very curious as to who is
> using it and for what purpose.
>
As best I can tell, commercial sites still use it for traditional code,
just like Cobol. It's interesting: Intel does neither, but we spend a ton
of money on Fortran because so much development (both old and new) in the
scientific community requires it. I answered why elsewhere in more detail:
Where is Fortran used these days
<https://www.quora.com/Where-is-Fortran-used-these-days/answers/87679712>
and Is Fortran still alive
<https://www.quora.com/Is-Fortran-still-alive/answer/Clem-Cole>
My >>guess<< is that PL/1 is suffering the same fate as Cobol, fading
because the apps are being/have been slowly rewritten from custom code to
COTS solutions from folks like Oracle, SAS, BAAN and the like. Not so for
Fortran, and the reason is that the math has not changed. The core of these
codes is the same as it was in the 1960s/70s when they were written. A
friend of mine used to be the Chief Metallurgist for the US Gov at NIST,
and as Dr. Fek put it so well: *"I have over 60 years' worth of data that
we have classified and we understand what it is telling us. If you
magically gave me new code to do the same thing as what we do with our
processes that we have developed over the years, I would have to reclassify
all that data. It's just not economically interesting."* I personally
equate it to the QWERTY keyboard. Just not going to change. *i.e.* *"Simple
economics always beats sophisticated architecture."*
[-TUHS, +COFF]
On Tue, Nov 23, 2021 at 3:00 AM Henry Bent <henry.r.bent(a)gmail.com> wrote:
> On Mon, 22 Nov 2021 at 21:31, Mary Ann Horton <mah(a)mhorton.net> wrote:
>
>> PL/I was my favorite mainframe programming language my last two years as
>> an undergrad. I liked how it incorporated ideas from FORTRAN, ALGOL, and
>> COBOL. My student job was to enhance a PL/I package for a History
>> professor.
>>
>
> What language were the PL/I compilers written in?
>
The only PL/I compiler I have access to is, somewhat ironically, the
Multics PL/1 compiler. It is largely self-hosting; more details can be
found here: https://multicians.org/pl1.html (Note Doug's name appears
prominently.)
> Wikipedia claims that IBM is still developing a PL/I compiler, which I
> suppose I have no reason to disbelieve, but I'm very curious as to who is
> using it and for what purpose.
>
I imagine most of it is legacy code in a mainframe environment, similarly
to COBOL. I can't imagine that many folks are considering new development
in PL/1 other than in retro/hobbyist environments and some mainframe shops
where there's a heavy existing PL/I investment.
- Dan C.
I recently had a discussion with some colleagues on the topic of
shells. Two people whom I respect both told me that Microsoft's
Powershell runs rings round the Bourne shell.
Somehow that sounds like anathema to me, but it's not beyond the
bounds of possibility. Before I waste time investigating, can anybody
here give me some insights?
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
On Wed, Nov 17, 2021 at 3:24 PM Rob Pike <robpike(a)gmail.com> wrote:
> Perl certainly had its detractors, but for a few years there it was the
> lingua franca of system administration.
>
It's still what I reach for first when I need to write a state machine that
processes a file made up of lines with some--or some set of--structures.
The integration of regexps is far, far, far superior to what Python can do,
and I adore the while(<>) construct. Maintaining other people's Perl
usually sucks, but it's a very easy way to solve your own little problems.
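To make the pattern being praised here concrete -- a line-by-line state
machine driven by regular-expression matches -- here is a rough Python
analogue of the Perl `while(<>)` idiom (the BEGIN/END block markers and
`value=` line format are invented purely for illustration):

```python
import re

def parse(lines):
    """Tiny state machine over lines: collect numbers that appear
    between invented BEGIN and END markers, ignoring everything else."""
    state = "outside"
    values = []
    for line in lines:  # Perl's while (<>) gives you this loop for free
        if state == "outside" and re.match(r"BEGIN", line):
            state = "inside"
        elif state == "inside" and re.match(r"END", line):
            state = "outside"
        elif state == "inside":
            m = re.match(r"\s*value=(\d+)", line)
            if m:  # Perl folds the match and the capture into one expression
                values.append(int(m.group(1)))
    return values
```

In Perl the loop, the match, and the capture collapse into a terse
`while (<>) { ... }` body operating on `$_`; the Python version has to
spell out each of those steps, which is the integration gap being
described above.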
Adam
On 2021-11-16 09:57, Douglas McIlroy wrote:
> The following remark stirred old memories. Apologies for straying off
> the path of TUHS.
>
>> I have gotten the impression that [PL/I] was a language that was beloved by no one.
> As I was a designer of PL/I, an implementer of EPL (the preliminary
> PL/I compiler used to build Multics), and author of the first PL/I
> program to appear in the ACM Collected Algorithms, it's a bit hard to
> admit that PL/I was "insignificant". I'm proud, though, of having
> conceived the SIGNAL statement, which pioneered exception handling,
> and the USES and SETS attributes, which unfortunately sank into
> oblivion. I also spurred Bud Lawson to invent -> for pointer-chasing.
> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
>
> After the ACM program I never wrote another line of PL/I.
> Gratification finally came forty years on when I met a retired
> programmer who, unaware of my PL/I connection, volunteered that she
> had loved PL/I above all other programming languages.
My first language was actually PL/C (and the computer centre did not
charge for runs in PL/C). I needed to use PL/I for some thesis-related
work and ran into the JCL wall -- no issues with the former, many issues
with the latter. One of the support people, upon learning that I was
using PL/I, said: "PL/I's alright!"
N.
>
> Doug
Moving to COFF ...
On Tue, Nov 16, 2021 at 10:50 AM Adam Thornton <athornton(a)gmail.com> wrote:
> I'm not even sure how much of this you can lay at the feet of teachers: I
> would argue that we see a huge efflorescence of essentially self-taught
> programming cobbled together from (in the old days) the system manuals a
>
Ouch ... this is exactly my point. In my experience in ~55 years of
programming, with more than 45 of those being paid to do it, the best
programmers I know and have worked with were taught/mentored by a master --
not self-taught. As I said, I had to be re-educated once I got to CMU.
My Dad had done the best he knew, but much of what he taught me was
shortcuts and tricks, because that is what he knew -- he taught me syntax,
not how to think. I know a lot of programmers (like myself) who were
self-taught or introduced to computing by novices to start, and that
experience got them excited, but all of them had real teachers/mentors who
taught them the true art form and helped them unlearn a lot of crap that
they had picked up or misinterpreted.
Looking at my father as a teacher, he really had never been taught to think
like a programmer. In the late 1950s he was a 'computer' [see the movie
"Hidden Figures"]. He was taught FORTRAN and BASIC and told to implement
things he had been doing by hand (solving differential equations using
linear algebra). The ideas we know and loved about structured
programming and* how to do this well* were still being invented by folks
like Doug and his sisters and brothers in the research community. It's no
surprise that my Dad taught me to 'hack' because he and I had nothing to
compare to. BTW: this is not to say all HS computer teachers are bad, but
the problem is that people who are really good at programming are quite
rare, and they tend to end up in research or industry -- not teaching HS.
Today, the typical HS computer teacher (like one of my nieces) takes a
course or two at UMASS in the teacher's college. They are never taught to
program, nor do they take the same courses the kids in science and
engineering take -- BTW, I also think this is why we see so much of the
popular press talking about 'coding', not programming. They really think
learning to program is learning the syntax of a specific programming
language.
When I look at the young people I hire (and mentor) today, it's not any
different. BTW: Jon and I had a little bit of a disagreement when he wrote
his book. He uses Javascript for a lot of his examples -- because of
exactly what you point out: Javascript today, like BASIC before it, has a
very high "on-screen results" factor with little work by the user. Much is
being done behind the covers to make that magic happen. I tend to believe
that creates a false sense of knowledge/understanding.
To Jon's credit, he tries to bridge that in his book. As I said, I
thought I knew a lot more about computers until I got to CMU. Boy was I in
for an education. That said, I was lucky to be around some very smart
people who helped steer me.
Clem
From TUHS (to Doug McIlroy):
"Curious what you think of APL"
I'm sure what Doug thinks of APL is unprintable. Unless, of course, he has
the special type ball.
<rimshot>
On Tue, Nov 16, 2021 at 8:23 AM Richard Salz <rich.salz(a)gmail.com> wrote:
>
> The former notation C(B(A)) became A->B->C. This was PL/I's gift to C.
>>
>
> You seem to have a gift for notation. That's rare. Curious what you think
> of APL?
>