[redirecting to COFF]
On Wednesday, 15 April 2020 at 18:19:57 +1000, Dave Horsfall wrote:
> On Wed, 15 Apr 2020, Don Hopkins wrote:
>
>> I love how in a discussion of how difficult it was to publish a book on
>> Unix with the correct punctuation characters 42 years ago, we still
>> can't even quote the title of the book in a discussion about Unix
>> without the punctuation characters degrading and mutating each round
>> trip.
>
> Well, I'm not the one here using Windoze...
Arguably Microsoft does it better than Unix. Most of the issues are
related to the character encoding. And as Don's headers say:
X-Mailer: Apple Mail (2.3608.60.0.2.5)
I agree with Don. I use mutt, which has many advantages, but sane
character encoding isn't one of them.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
So it could be the lack of televised sports getting to me in these shelter-in-place days, but, I mean, sure, I guess I’ll throw in some bucks for a pay-per-view of a Pike/Thompson cage match. FIGHT!
Followups set.
> On Apr 18, 2020, at 6:28 PM, Rob Pike <robpike(a)gmail.com> wrote:
> It wasn't my intention.
> On Sun, Apr 19, 2020 at 11:12 AM Ken Thompson <ken(a)google.com> wrote:
>>
>> you shouldn't have shut down this discussion.
>> On Sat, Apr 18, 2020 at 3:27 PM Rob Pike <robpike(a)gmail.com> wrote:
>>> ``because''.
So I imagine that most readers of this list have heard that a number of US
states are actively looking for COBOL programmers.
If you have not, the background is that, in the US, a number of
unemployment insurance systems have mainframe backends running applications
mostly written in COBOL. Due to the economic downturn as a result of
COVID-19, these systems are being overwhelmed with unprecedented numbers of
newly-unemployed people filing claims. The situation is so dire that the
Governor of New Jersey mentioned it during a press conference.
This has led to a number of articles in the popular press about this
situation, some rather sensational: "60-year-old programming language
prevents people from filing for unemployment!" E.g.,
https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-j…
On the other hand, some are rather more measured:
https://spectrum.ieee.org/tech-talk/computing/software/cobol-programmers-an…
I suspect the real problem is less "COBOL and mainframes" and more
organizational along the lines of lack of investment in training,
maintenance and modernization. I can't imagine that bureaucrats are
particularly motivated to invest in technology that mostly "just works."
But the news coverage has led to a predictable set of rebuttals from the
mainframe faithful on places like Twitter; they point out that COBOL has
been updated by recent standards in 2002 and 2014 and is being unfairly
blamed for the present failures, which arguably have more to do with
organizational issues than technology. However, the pendulum seems to have
swung too far with their arguments in that they're now asserting that COBOL
codebases are uniformly masterworks. I don't buy that.
I find all of this interesting. I don't know COBOL, nor all that much about
it, save for some generalities about its origin and Grace Hopper's
involvement in its creation. However, in the last few days I've read up on
it a bit and see very little to recommend it: the type and scoping rules
are a mess, things like the 'ALTER' statement and the ability to cascade
procedure invocations via the 'THRU' keyword seem like a recipe for
spaghetti code, and while they added an object system in 2002, it doesn't
seem to integrate with the rest of the language coherently and I don't see
it doing anything that can't be done in any other OO language. And of
course the syntax is abjectly horrible. All in all, it may not be the cause
of the current problems, but I don't know why anyone would be much of a fan
of it and unless you're already sitting on a mountain of COBOL code (which,
in fairness, many organizations in government, insurance and finance
are...) I wouldn't invest in it.
I read an estimate somewhere that there are something like 380 billion
lines of COBOL out there, and another 5 billion are written annually
(mostly by body shops in the BRIC countries?). That's a lot of code; surely
not all of it is good.
So....What do folks think? Is COBOL being unfairly maligned simply due to
age, or is it really a problem? How about continued reliance on IBM
mainframes: strategic assets or mistakes?
- Dan C.
(PS: I did look up the specs for the IBM z15. It's an impressive machine,
but without an existing mainframe investment, I wouldn't get one.)
Hello:
I apologize for the OT, but I'm desperately looking for the answer to a
question regarding BitKeeper and, since I heard about its virtues and
also about it becoming open source on this list, I thought you guys
wouldn't mind helping me find the answer to my problem.
I've been using Git for some time now, but after hearing here about BK,
I'm trying it out to see if I can switch to it. At the moment I'm quite
happy with it, and I very much like that it's smaller than Git.
In Git I use a remote SSH "original" repo instead of a public (HTTP)
service, since I want to have distributed copies (several workstations
and laptops) and a "non-interactive" copy on a server of mine. I'm not
sure I'm making a lot of sense, since I'm not much of an expert on Git
either (I just use a few basic functions) and English isn't my first
language.
What I actually need/want is to create a remote SSH repo from my local
(working) repo. I haven't found how to do that in either the
documentation or the Users Forum. Alas, I haven't found a mailing list
to subscribe to and ask for help. That's why I'm asking you guys.
Again sorry for the OT and thank you so much for reading until here. :)
Cheers,
Ángel
Way back then, when dinosaurs strode the earth and System/360 reigned
supreme, we were taught to slash our zeros and sevens (can't quite find
the glyphs for them right now) in order to distinguish them from Oscars
and Ones for the benefit of the keypunch girls (yes, really; I had the
hots for one of them at one time, but someone else took her) on those
green sheets.
In the meantime I also saw a slash through "Z" (Zulu, Zed, Zee) in order
to distinguish it from a "2" (FIGURES TWO); WTF? And you *never* slashed
your ones, lest thou ended up with the contents of an 029 chad-box over
thine head...
These days, I merely slash my sevens by habit and it doesn't even raise an
eyebrow; even Oz Pigeon Post's mail mangler seems to accept it[*].
What're other bods' experiences?
[*]
Don't even mention Redfern, OK? Just don't... And as for getting off at
Redfern, well, I did a few times when I had a contract job around there.
-- Dave, bored at home
I remember learning to slash my zeroes, put a small bar through sevens
and zeds, and write a curly letter X.
This was when I studied Mathematics for a while at the University of
Technology in Delft, the Netherlands. I think it had to do with
avoiding misunderstandings in scribbled formulas, in a time when we
still did a lot of real handwriting of semester papers. This was around
the mid-'70s for me.
Anybody feel like a text/voice chat on the ClassicCmp Discord server in about 13 hours, say 2200 UTC?
#coff and the General voice channel.
I'll pop on for an hour but start whenever you feel like.
Cheers, Warren
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
On Thu, 19 Mar 2020, Mike Markowski wrote:
>> I've been using my trusty HP-42S for so long that I can hardly remember
>> how to use a "normal" calculator :-)
>
> When my classmate's calculator died during an engineering exam, he asked
> if he could borrow my spare. I handed him my HP 32s and after a minute
> he whispered, "Where's the equals key?" He gave my calculator back.
> :-)
I did that to a financial controller in a previous life; she was not
amused... Hey, it was the only calculator that I had! I could see her
helplessly looking for the "=" key, then I took pity on her.
-- Dave
+COFF
On 3/20/20 8:03 AM, Noel Chiappa wrote:
> Maybe I'm being clueless/over-asking, but to me it's appalling that
> any college student (at least all who have _any_ math requirement at
> all; not sure how many that is) doesn't know how an RPN calculator
> works.
I'm sure that there are some people, maybe not the corpus you mention,
that have zero clue how an RPN calculator works. But I would expect
anybody with a little gumption to be able to poke a few buttons and
probably figure out the basic operation, or, ask if they are genuinely
confused.
> It's not exactly rocket science, and any reasonably intelligent
> high-schooler should get it extremely quickly; just tell them it's
> just a representational thing, number number operator instead of
> number operator number.
I agree that RPN is not rocket science. And for basic single operation
equations, I think that it's largely interchangeable with infix notation.
However, my experience is, as the number of operations goes up, RPN can
become more difficult to use. This is likely a mental shortcoming on my
part. But it is something that does take noticeable mental effort for
me to do.
For example, let's start with Pythagorean Theorem
a² + b² = c²
This is relatively easy to enter in infix notation on a typical
scientific calculator.
However, I have to stop and think about how to enter this on an RPN
calculator. I'll take a swing at this, but I might get it wrong, and I
don't have anything handy to test at the moment.
[a] [enter]
[a] [enter]
[multiply]
[b] [enter]
[b] [enter]
[multiply]
[add]
[square root] # to solve for c
(12 keys)
Conversely, infix notation for comparison (note that [equals] is needed
before the root, so it applies to the whole sum):
[a]
[square]
[plus]
[b]
[square]
[equals]
[square root]
(7 keys)
As I type this, I realize that I'm using higher order operations
(square) in infix than I am in RPN. But that probably speaks to my
ignorance of RPN.
I also realize that this equation does a poor job of demonstrating what
I'm trying to convey (or perhaps what I'm trying to convey is
incorrect). I had to arrange the different sub-parts of the equation so
that their results ended up together on the stack, to be the targets of
the operation. I believe this (re)arrangement of the
equation is where most of my mental load / objection comes from with
RPN. I feel like I have to process the equation before I can tell the
calculator to compute the result for me. I don't feel like I have this
burden with infix notation.
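For what it's worth, the bookkeeping the calculator itself does is
tiny. Here's a toy sketch in Python (my own illustration, not any real
calculator's firmware) that evaluates the same keystroke sequence as
above:

    import math

    # Toy RPN evaluator: numbers push onto a stack, operators pop
    # their operands off it.  (Illustration only.)
    def rpn(tokens):
        stack = []
        for tok in tokens:
            if tok == "sqrt":                 # unary: pops one operand
                stack.append(math.sqrt(stack.pop()))
            elif tok == "+":                  # binary: pops two operands
                y, x = stack.pop(), stack.pop()
                stack.append(x + y)
            elif tok == "*":
                y, x = stack.pop(), stack.pop()
                stack.append(x * y)
            else:                             # a number: just push it
                stack.append(float(tok))
        return stack[-1]

    # My keystroke sequence above, with a=3 and b=4:
    print(rpn("3 3 * 4 4 * + sqrt".split()))  # -> 5.0

(And if memory serves, a real HP would let me drop the second [enter]
in each pair: pressing an operator terminates number entry, and ENTER
duplicates the X register, so [a] [enter] [multiply] alone computes a².
That would get the sequence down to 8 keys.)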
Aside: I firmly believe that computers are supposed to do our bidding,
not the other way around. s/computers/calculators/
> I know it's not a key intellectual skill, but it does seem to me to
> be part of common intellectual heritage that everyone should know,
> like musical scales or poetry rhyming. Have you ever considered
> taking two minutes (literally!) to cover it briefly, just 'someone
> tried to borrow my RPN calculator, here's the basic idea of how they
> work'?
I'm confident that 80% of people, and more of the corpus you describe,
could use an RPN calculator to do simple equations. But I would not be
surprised if many, finding the rearrangement of equations into
RPN-friendly form a chore, would simply forego the RPN calculator for
simpler arithmetic operations.
I think some of it is a mental trade-off: which has more mental load,
doing the annoying arithmetic or re-arranging it to use RPN?
I believe that for the simpler of the arithmetic operations, RPN is
going to be more difficult.
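On that note, the rearrangement itself is mechanical; it's Dijkstra's
shunting-yard algorithm. A toy Python sketch (again my own
illustration; it handles only the four binary operators and no
parentheses):

    # Toy shunting-yard: converts infix tokens to RPN (postfix) order.
    # Numbers pass straight through; operators wait on a stack until an
    # incoming operator of equal-or-lower precedence flushes them out.
    PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

    def to_rpn(tokens):
        out, pending = [], []
        for tok in tokens:
            if tok in PREC:
                while pending and PREC[pending[-1]] >= PREC[tok]:
                    out.append(pending.pop())
                pending.append(tok)
            else:                  # a number
                out.append(tok)
        return out + pending[::-1]

    print(to_rpn("3 + 4 * 5".split()))  # -> ['3', '4', '5', '*', '+']

Presumably the algebraic-entry machines do something like this
internally, which is rather the point: somebody has to carry that load,
and the question is whether it's the human or the machine.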
All of this being said, I'd love to have someone lay out points and / or
counterpoints to my understanding.
--
Grant. . . .
unix || die
Moving to COFF ...
On Fri, Mar 20, 2020 at 1:24 PM Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> Would you humor me with an example of what you mean by "thinking on the
> fly"? Either I'm not understanding you or we think differently.
>
I'll take a stab at it in a minute.
But first, I never cared either way. In college, I had an SR-50 and my
GF had an HP-45. I would say that among my EE friends we were probably
split 50/50 between TI and HP. Generally, the RPN-centric crew were
fiercely loyal, as in the editor wars, but would grab whichever
calculator was nearest when we all were working a problem set; and I
knew a couple of folks that hated RPN too.
Possibly because of my then-undiagnosed dyslexia, I would grab the
closest calculator, pause to see which it was, and then start
entering things as needed. But like Jon -- if I had the TI in my hands, I
found myself copying the equation. I was trying to pay attention to what
button I was pressing to check for any keystroke entry errors. Both types
had all of the same math functions so there was little difference in the
number of strokes, other than not needing parentheses on HP and how you
entered the calculation. With the HP, I was more aware of that equation I
was calculating because I was having to make sure I entered it in the
proper order so I could get the right answer. In my case, I was probably
a tad more careful because I was being forced to think in terms of
precedence - but I was thinking about the equation. Whereas with the TI I
was just hitting the button per the equation on the paper. I typed a tad
faster on the TI than the HP because I was not thinking as much but ... I
probably made more typing errors there because I thought less about what I
was doing.
Clem
Aside: I'm sending this reply to TUHS where the message that I'm
replying to came from. But I suspect that it should migrate to COFF,
which I'm CCing.
On 3/20/20 5:48 AM, paul(a)guertin.net wrote:
> I teach math in college, and I use an RPN calculator as well (it's
> just easier).
Would you please elaborate on "it's just easier"?
I'm asking from a point of genuine curiosity. I've heard many say that
RPN is easier, or that it takes fewer keys, or otherwise superior to
infix notation. But many of the conversations end up devolving into
quasi-religious comments about preferences, despite starting with
honest open-minded intentions. (I hope this one doesn't similarly devolve.)
I've heard that there are fewer keys to press for RPN, but the example
equations presented have been effectively the same.
I've heard that RPN is mentally easier. But I apparently don't know
enough RPN to be able to think in RPN natively to evaluate myself.
I dabble with RPN, including keeping my main calculator app on my smart
phone in RPN mode.
So I am genuinely interested in understanding why you say that RPN is
just easier.
> Sometimes, during an exam, a student who forgot to bring their
> calculator will ask if they can borrow mine. I always say "sure, but
> you'll regret it" and hand them the calculator. After wasting one or
> two minutes, they give it back.
~chuckle~
> (Note that I always make sure no calculator is needed for my exams,
> but it's department policy to authorise non programmable calculators,
> and it seems to reassure students to have the calculator on the desk,
> so I don't mind.)
ACK
--
Grant. . . .
unix || die
As many of you may be aware, Bruce D. Evans <bde(a)freebsd.org> died in
mid-December. I am currently looking through his digital estate on
behalf of his family and the FreeBSD Project.
I have discovered that he kept an extensive collection of 5¼" floppy
disks. I haven't looked through them but they appear to include
things like OS-9 and Hitachi Peach files (and presumably Minix stuff,
though I haven't found any of his Minix work). He also has a
selection of newsletters from an Australian Peach users group. Is
there any interest in this material from a historical perspective?
--
Peter Jeremy
On 3/8/20 9:39 PM, Warner Losh wrote:
> floppy controller supports the full range of crazy that once roamed
> the earth
Does anyone have any knee jerk reaction to the idea of putting a 5¼"
floppy drive on a USB-to-Floppy (nominally 3½") adapter?
Do I want to avoid tilting at this windmill?
Am I better off installing the 5¼" floppy inside the computer and
connecting directly to the motherboard?
I'm only wanting to pull files off of 5¼" disks. At most I'll want to
dd the disks to an image.
That being said, I wonder if I should also be collecting any different
types of images. (This may mean mobo instead of USB.)
Thank you for any pro-tips that you can provide.
--
Grant. . . .
unix || die
Moving to COFF ....
On Tue, Mar 17, 2020 at 10:58 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> As much as I don't care for Forth, man do I wish it had become the standard
> for boot proms, it might not be my cup of tea but I could make it do what
> I needed it to do.
Amen bro... Sun did a nice job on that. Although the Alpha Boot ROMs
were pretty good too. At least they were UNIX like and were extensible like
the Sun boot ROMs. HP's were better than a PC BIOS, but they were pretty
awful.
> Can't say the same for UEFI, I disable that crap.
>
Well, it beats the crap out of IBM's BIOS, but that bar is very low. UEFI
was sort of a 'camel' (a horse designed by committee) and too many people
peed on it. Intel created EFI to try to fix BIOS and then people went
nuts. Apple's version is the best of them, but as you say, they all suck
if you have seen anything better. A big problem IMO is that EFI tried to
be somewhat compatible. In the end, they were not, so you got the worst of
both (new interfaces and legacy functionality).
Server systems that support IPMI have Minix under the covers in a
coprocessor; using a coprocessor is also how Apple runs UEFI. With
IPMI, it is sort of sad that more of it is not really exposed, but you
need the added cost of the coprocessor. Plus it adds a new security
domain, which many people complain about. I try to know as little about
it as possible to get my work done, but exposing more of that interface
might help.
Hi all, I'm looking for an interactive tool to help students learn the
Unix/Linux command line. I still remember the "learn" tool. Is there an
equivalent for current systems?
I have tried to forward-port the old learn sources to current Linux but
my patience ran out :-)
Thanks in advance for any tips/pointers.
Cheers, Warren
Given the recent discussion of pipes and networking ... I'm passing this
along for those that might not have seen it.
---------- Forwarded message ---------
From: Jack Haverty via Internet-history
Date: Tue, Mar 10, 2020 at 1:30 PM
Subject: Re: [ih] NCP and TCP implementations
To: *Internet-History*
The first TCP implementation for Unix was done in PDP-11 assembly
language, running on a PDP-11/40 (with way too little memory or address
space). It was built using code fragments excerpted from the LSI-11
TCP implementation provided by Jim Mathis, which ran under SRI's
home-built OS. Jim's TCP was all written in PDP-11 assembler. The code
was cross-compiled (assembled) on a PDP-10 Tenex system, and downloaded
over a TTY line to the PDP-11/40. That was much easier and faster than
doing all the implementation work on the PDP-11.
The code architecture involved putting TCP itself at the user level,
communicating with its "customers" using Unix InterProcess
Communications facilities (Rand "Ports"). It would have been
preferable to implement TCP within the Unix kernel, but there was simply
not enough room due to the limited address space available on the 11/40
model. Later implementations of TCP, on larger machines with twice the
address space, were done in the kernel. In addition to the Berkeley BSD
work, I remember Gurwitz, Wingfield, Nemeth, and others working on TCP
implementation for the PDP-11/70 and Vax.
The initial Unix TCP implementation was for TCP version 2 (2.5 IIRC), as
was Jim's LSI-11 code. This 2.5 implementation was one of the players
in the first "TCP Bakeoff" organized by Jon Postel and carried out on a
weekend at ISI before the quarterly Internet meeting. The PDP-11/40 TCP
was modified extensively over the next year or so as TCP advanced
through 2.5, 2.5+, 3, and eventually stabilized at TCP4 (which it seems
we still have today, 40+ years later!)
The Unix TCP implementation required a small addition to the Unix kernel
code, to add the "await" and "capac" system calls. Those calls were
necessary to enable the implementation of user-level code where the
traditional Unix "pipeline" model of programming
(input->process->process...->output) was inadequate for use in
multi-computer programming (such as FTP, Telnet, etc., - anywhere where
more than one computer was involved).
The code to add those new system calls was written in C, as was almost
all of the Unix OS itself. The new system calls added the functionality
of "non-blocking I/O" which did not previously exist. It involved very
few lines of code, since there wasn't room for very many more
instructions, and even so it required finding more space by shortening
many of the kernel error messages to save a few bytes here and there.
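[For readers who haven't met the idea: "await" and "capac" were
readiness-style primitives -- block until a descriptor is ready, and
ask what can be transferred right now. I don't have the actual 1970s
signatures, but a loose modern analogue in Python, using select(),
might look like this:

    import select

    def await_readable(sock, timeout=None):
        # Block (up to timeout seconds) until sock has data to read --
        # roughly the role Jack describes for "await".
        readable, _, _ = select.select([sock], [], [], timeout)
        return bool(readable)

    def poll_readable(sock):
        # Non-blocking check: is there input right now?  Roughly the
        # role of "capac" ("capacity", presumably), minus any byte count.
        return await_readable(sock, timeout=0)

This is only a sketch of the concept, not the historical system calls.]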
Randy Rettberg and I did that work, struggling to understand how Unix
kernel internals worked, since neither of us had ever worked with Unix
before even as a user. We did not try to "get it right" by making
significant changes to the basic Unix architecture. That came later
with the Berkeley and Gurwitz efforts. The PDP-11/40 was simply too
constrained to support such changes, and our mission was to get TCP
support on the machine, rather than develop the OS.
I think I speak authoritatively here, since I wrote and debugged that
first Unix TCP code. I still have an old, yellowing listing of that
first Unix TCP.
FWIW, if there's interest in why certain languages were chosen, there's
a very simple explanation of why the Unix implementation was done in
assembler rather than C, the native language of Unix. First, Jim
Mathis' code was in assembler, so it was easy to extract large chunks
and paste them into the Unix assembler implementation. Second, and
probably most important, was that I was very accustomed to writing
assembler code and working at the processor instruction level. But I
barely knew C existed, and was certainly not proficient in it, and we
needed the TCP working fast for use in other projects. The choice was
very pragmatic, not based at all on technical issues of languages or
superiority of any architecture.
/Jack Haverty
On 3/9/20 11:14 PM, vinton cerf via Internet-history wrote:
> Steve Kirsch asks in what languages NCP and TCP were written.
>
> The Stanford first TCP implementation was done in BCPL by Richard Karp.
> Another version was written for PDP-11/23 by Jim Mathis but not clear in
> what language. Tenex was probably done in C at BBN. Was 360 done in PL/1??
> Dave Clark did one for IBM PC (assembly language/??)
>
> Other recollections much appreciated.
>
> vint
--
Internet-history mailing list
Internet-history(a)elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history
I started using 'cpio' in the '80s and still use it, especially for
transferring files and complete directories between various UNIX
versions like SCO UNIX 3.2V4.2, Tru64, and HP-UX 11i.
The main option I use with cpio is (of course) "-c", and only occasionally "-u".
Hi,
while exploring the gopherspace (YES! Still existing,
growing community) I found this gopher page:
gopher://pdp11.tk/1
which can be reached with Lynx for example.
Unfortunately I cannot evaluate the items there, but
maybe it is worth a look by someone knowledgeable.
Cheers!
mcc
I work at an astronomy facility. I get to do some fun dumpster diving.
I recently have pulled out of the trash a plugboard with a male and a
female D-Sub 52 connector. 3 rows of pins, 17-18-17. I took the
connectors off the board: there's nothing back there, so this thing only
ever existed so you could plug the random cable you found into it and its
friends to see what the cable fit.
I can't find much evidence that a 52-pin D-Sub ever existed.
Is this just Yet Another Physics Experiment thing where, hey, if your
instrument already costs three million dollars, what's a couple of grand
for machining custom connectors? Or was it once a thing?
(also posting to cc-talk)
Adam
Not UNIX, not 52-pin, but old, old and serial
Your mission, should you choose to accept it, is to save data from a
computer that should have died aeons ago
...
Tap into the serial line – what could be simpler?
Alas, the TI was smart enough to spot the absence of the rattly old
beast ("the software wouldn't print without some of the seldom-used
serial control lines functioning," explained Aaron) so the customer
was asked to bring in the printer as well.
https://www.theregister.co.uk/2020/02/24/who_me/
---------- Forwarded message ---------
From: Adam Thornton <athornton(a)gmail.com>
Date: Tue, Feb 11, 2020 at 5:56 PM
Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
To: Clem Cole <clemc(a)ccc.com>
As someone working in this area right now....yeah, and that’s why
traditional HPC centers do not deliver the services we want for projects
like the Vera Rubin Observatory’s Legacy Survey Of Space and Time.
Almost all of our scientist-facing code is Python though a lot of the
performance critical stuff is implemented in C++ with Python bindings.
The incumbents are great at running data centers like they did in 1993.
That’s not the best fit for our current workload. It’s not generally the
compute that needs to be super-fast: it’s access to arbitrary slices
through really large data sets that is the interesting problem for us.
That’s not to say that that lfortran isn’t cool. It really really is, and
Ondřej Čertík has done amazing work in making modern FORTRAN run in a
Jupyter notebook, and the implications (LLVM becomes the Imagemagick of
compiled languages) are astounding.
But...HPC is no longer the cutting edge. We are seeing a Kuhnian paradigm
shift in action, and, sure, the old guys (and they are overwhelmingly guys)
who have tenure and get the big grants will never give up FORTRAN which
after all was good enough for their grandpappy and therefore good enough
for them. But they will retire. Scaling out is way way cheaper than
scaling up.
On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc(a)ccc.com> wrote:
> moving to COFF
>
> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
>
>> My general mood about the current standard way of nerd working is how
>> unimaginative and old-fashioned it feels.
>>
> ...
>>
>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>> "enough!".
>>
>> -rob
>>
>
> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> words. But I caution that just because something is old-fashioned does
> not necessarily make it wrong (much less bad).
>
> I ask you to take a look at the Archer statistics of code running in
> production (Archer large HPC site in Europe):
> http://archer.ac.uk/status/codes/
>
> I think there are similar stats available for places like CERN, LRZ, and
> of the US labs, but I know of these so I point to them.
>
> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> C++ @ 8%, Python @ 1% and all the others at 1%.
>
> Why is that? The math has not changed ... and open up any of those codes
> and what do you see: solving systems of differential equations with linear
> algebra. It's the same math my mother did by hand as a 'computer' in the 1950s.
>
> There are no 'tensor flows' or ML searches running Spark in there. Sorry,
> Google/AWS et al. Nothing 'modern' and fresh -- just solid simple science
> being done by scientists who don't care about the computer or sexy new
> computer languages.
>
> IIRC, you trained as a physicist, I think you understand their thinking. *They
> care about getting their science done.*
>
> By the way, a related thought comes from a good friend of mine from
> college who used to be the Chief Metallurgist for the US Gov (NIST in
> Colorado). He's back in the private sector now (because he could not
> stomach current American politics), but he made an important
> observation/comment to me a couple of years ago. They have 60+ years of
> metallurgical data that has and his peeps have been using with known
> Fortran codes. If we gave him new versions of those analytical programs
> now in your favorite new HLL - pick one - your Go (which I love), C++
> (which I loath), DPC++, Rust, Python - whatever, the scientists would have
> to reconfirm previous results. They are not going to do that. It's not
> economical. They 'know' how the data works, the types of errors they
> have, how the programs behave* etc*.
>
> So to me, the bottom line is just because it's old fashioned does not make
> it bad. I don't want to write an OS in Fortran-2018, but I can write a
> system that supports code compiled with my sexy new Fortran-2018 compiler.
>
> That is to say, the challenge for >>me<< is to build him a new
> supercomputer that can run those codes for him without changing what they are
> doing, and have them scale to 1M nodes *etc*.
>
> _______________________________________________
> COFF mailing list
> COFF(a)minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>