Moving to COFF where this discussion really belongs ...
On Sun, Jun 7, 2020 at 2:51 PM Nemo Nusquam <cym224(a)gmail.com> wrote:
> On 06/07/20 11:26, Clem Cole wrote (in part):
> > Neither language is used for anything in production in our world at this
> point.
>
> They seem to be used in some worlds: https://blog.golang.org/10years and
> https://www.rust-lang.org/production
Nemo,
That was probably not my clearest wording. I did not mean to imply that either
Go or Rust is going unused, by any stretch of the imagination.
My point was that in the SW development environments where I reside (HPC, some
startups here in New England and the Bay Area, as well as Intel in general),
Go and Rust both have smaller use cases compared to C and C++ (much less
Python and Java, for that matter). And I know of really no 'money' project
that relies on either yet. That does not mean I know of no one using
either; I know of projects using both (including a couple of my own),
but none that have done anything in production or deployed 'mission-critical'
SW with one or the other. Nor does that mean it has not happened; it just
means I have not been exposed to it.
I also am saying that, in my own personal opinion, I expect it to happen, in
particular with Go in userspace code - possibly having a chance to push out
Java and hopefully pushing out C++ a bit.
My response was to an earlier comment about C's popularity WRT C++. I
answered with my experience, and I widened it to suggest that maybe C++ was
not the guaranteed incumbent as the winner for production. What I did not
say then, but alluded to, was that this is particularly so since nothing in
nearly 70 years has displaced Fortran, which >>is<< still the #1 language for
production codes (as you saw with the Archer statistics I pointed out).
Reality time ... Intel, IBM, *et al,* spend a lot of money making sure
that there are >>production quality<< Fortran compilers easily available.
Today's modern society runs on it, from weather prediction to energy, to
pharma, chemistry, and physics. As I have said here and in other places, over
my career Fortran has paid my salary and my peeps' salaries. It is the
production enabler, and without a solid answer to having a Fortran solution,
you are unlikely to make too much progress, certainly in the HPC space.
Let me take this in a slightly different direction. I tend to use
'follow the money' as a way to root out what people care about.
Where do firms spend money to create or purchase tools to help their staff?
The answer is in tools that give them a return they can measure. So,
using that rule: which programming languages have the largest ecosystems
of tools that help find performance and operational issues? Fortran, C, and
C++ have the largest that I know of. My guess would be that Java and maybe
JavaScript/PHP would be next, but I don't know of any comparable tooling there.
If I look at Intel, where do we spend money on development tools? C/C++
and Fortran (which all use a common backend) are #1. Then we invest in
other versions of the same (GCC/LLVM) for particular things we care
about. After that, it's Python, and it used to be Java and maybe some
JavaScript. Why? Because ensuring that those ecosystems are solid on
devices that we make is good for us, even if it means we help some of our
competitors' devices also. But our investment helps us, and Fortran and
C/C++ are where people use our devices (and our most profitable versions in
particular), so it's in our own best interest to make sure there are tools to
bring out the best.
BTW: I might suggest you take a peek at where other firms do the same
thing, and I think you'll find the 'follow the money' rule is helpful for
understanding what people care most about.
Hi
I'm wondering if anybody knows what happened with this? (Besides the
fact it crashed and burnt. I mean, where are the source trees?)
lowendmac.com/2016/nutek-mac-clones/
It was a Macintosh clone that reverse-engineered the Mac interface and
appearance, using the X Window System as the GUI library to provide what, in
Macs of that era, was locked into the Mac system BIOS.
As such, it's an interesting use of the X Window System.
Wesley Parish
Cc: to COFF, as this isn't so Unix-y anymore.
On Tue, May 26, 2020 at 12:22 PM Christopher Browne <cbbrowne(a)gmail.com>
wrote:
> [snip]
> The Modula family seemed like the better direction; those were still
> Pascal-ish, but had nice intentional extensions so that they were not
> nearly so "impotent." I recall it being quite popular, once upon a time,
> to write code in Modula-2, and run it through a translator to mechanically
> transform it into a compatible subset of Ada for those that needed DOD
> compatibility. The Modula-2 compilers were wildly smaller and faster for
> getting the code working, you'd only run the M2A part once in a while
> (probably overnight!)
>
Wirth's languages (and books!!) are quite nice, and it always surprised and
kind of saddened me that Oberon didn't catch on more.
Of course Pascal was designed specifically for teaching. I learned it in
high school (at the time, it was the language used for the US "AP Computer
Science" course), but I was coming from C (with a little FORTRAN sprinkled
in) and found it generally annoying; I missed Modula-2, but I thought
Oberon was really slick. The default interface (which inspired Plan 9's
'acme') had this neat graphical sorting simulation: one could select
different algorithms and vertical bars of varying height were sorted into
ascending order to form a rough triangle; one could clearly see the
inefficiency of e.g. Bubble sort vs Heapsort. I seem to recall there was a
way to set up the (ordinarily randomized) initial conditions to trigger
worst-case behavior for Quicksort.
I have a vague memory of showing it off in my high school CS class.
- Dan C.
Hi all, I have a strange question and I'm looking for pointers.
Assume that you can multiply two 8-bit values in hardware and get a 16-bit
result (e.g. ROM lookup table). It's straightforward to use this to multiply
two 16-bit values:
        AABB *
        CCDD
        ----
        PPPP  = BB*DD
      QQQQ00  = BB*CC
      RRRR00  = AA*DD
    SSSS0000  = AA*CC
    --------
  32-bit result
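In C terms, with a hypothetical mul8() standing in for the 8-bit by 8-bit
ROM lookup that returns the full 16-bit product, the scheme above is roughly:

#include <stdint.h>

/* Hypothetical stand-in for the hardware/ROM 8x8 -> 16-bit multiply. */
uint16_t mul8(uint8_t a, uint8_t b)
{
	return (uint16_t)a * b;
}

/* 16x16 -> 32-bit multiply built from the four 8-bit partial products. */
uint32_t mul16(uint16_t x, uint16_t y)
{
	uint8_t aa = x >> 8, bb = x & 0xff;	/* x is AABB */
	uint8_t cc = y >> 8, dd = y & 0xff;	/* y is CCDD */

	uint32_t result = mul8(bb, dd);			/* PPPP     */
	result += (uint32_t)mul8(bb, cc) << 8;		/* QQQQ00   */
	result += (uint32_t)mul8(aa, dd) << 8;		/* RRRR00   */
	result += (uint32_t)mul8(aa, cc) << 16;		/* SSSS0000 */
	return result;
}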
But if the hardware can only provide the low eight bits of the 8-bit by
8-bit multiply, is it still possible to do a 16-bit by 16-bit multiply?
Next question, is it possible to do 16-bit division when the hardware
can only do 8-bit divided by 8-bit. Ditto 16-bit modulo with only 8-bit
modulo?
Yes, I could sit down and nut it all out from scratch, but I assume that
somewhere this has already been done and I could use the results.
Thanks in advance for any pointers.
Warren
** Back story. I'm designing an 8-bit TTL CPU which has 8-bit multiply, divide
and modulo in a ROM table. I'd like to write subroutines to do 16-bit and
32-bit integer maths.
On Sun, May 17, 2020 at 12:24 PM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
> On 5/16/20, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> >
> > Why was there no byte or "mem" type?
>
> These days machine architecture has settled on the 8-bit byte as the
> unit for addressing, but it wasn't always the case. The PDP-10
> addressed memory in 36-bit units. The character manipulating
> instructions could deal with a variety of different byte lengths: you
> could store six 6-bit BCD characters per machine word,
Was this perhaps a typo for 9 4-bit BCD digits? I have heard that a reason
for the 36-bit word size of computers of that era was that the main
competition at the time was against mechanical calculators, which had
9-digit precision. 9*4=36, so 9 BCD digits could fit into a single word,
for parity with the competition.
6x6-bit data would certainly hold BAUDOT data, and I thought the Univac/CDC
machines supported a 6-bit character set? Does this live on in the Unisys
1100-series machines? I see some reference to FIELDATA online.
I feel like this might be drifting into COFF territory now; Cc'ing there.
> or five ASCII
> 7-bit characters (with a bit left over), or four 8-bit characters
> (ASCII plus parity, with four bits left over), or four 9-bit
> characters.
>
> Regarding a "mem" type, take a look at BLISS. The only data type that
> language has is the machine word.
>
> > +getfield(buf)
> > +char buf[];
> > +{
> > + int j;
> > + char c;
> > +
> > + j = 0;
> > + while((c = buf[j] = getc(iobuf)) >= 0)
> > + if(c==':' || c=='\n') {
> > + buf[j] =0;
> > + return(1);
> > + } else
> > + j++;
> > + return(0);
> > +}
> >
> > so here the EOF was different and char was signed 7-bit it seems.
>
> That makes perfect sense if you're dealing with ASCII, which is a
> 7-bit character set.
To bring it back slightly to Unix, when Mary Ann and I were playing around
with First Edition on the emulated PDP-7 at LCM+L during the Unix50 event
last USENIX, I have a vague recollection that the B routine for reading a
character from stdin was either `getchar` or `getc`. I had some impression
that this did some magic necessary to extract a character from half of an
18-bit word (maybe it just zeroed the upper half of a word or something).
If I had to guess, I imagine that the coincidence between "character" and
"byte" in C is a quirk of this history, as opposed to any special hidden
meaning regarding textual vs binary data, particularly since Unix makes no
real distinction between the two: files are just unstructured bags of
bytes. They're called 'char' because that was just the way things had
always been.
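Purely as a modern illustration (nothing from the actual B or PDP-10 code):
extracting the i-th n-bit "byte" from a wider word is just a shift and a
mask. A C sketch, with the 36-bit (or 18-bit) word held in a 64-bit integer:

#include <stdint.h>

/*
 * Illustrative only: extract the i-th n-bit "byte" from a word of
 * `wordbits` bits, counting bytes from the most significant end, the
 * way the PDP-10 byte instructions walked a word left to right.
 * E.g. getbyte(w, 36, 6, 0) is the first of six 6-bit characters;
 * getbyte(w, 36, 7, 4) is the last of five 7-bit ASCII characters,
 * leaving the low-order bit of the word unused.
 */
unsigned getbyte(uint64_t word, int wordbits, int n, int i)
{
	int shift = wordbits - (i + 1) * n;
	return (unsigned)((word >> shift) & ((1u << n) - 1));
}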
- Dan C.
On May 14, 2020, at 10:32 AM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I'm being a whiney grumpy old man,
I’ve been one of those since I was, like, 20. I am finally growing into it. It’s kinda nice.
Adam
[redirecting to COFF]
On Wednesday, 15 April 2020 at 18:19:57 +1000, Dave Horsfall wrote:
> On Wed, 15 Apr 2020, Don Hopkins wrote:
>
>> I love how in a discussion of how difficult it was to publish a book on
>> Unix with the correct punctuation characters 42 years ago, we still
>> can???t even quote the title of the book in a discussion about Unix
>> without the punctuation characters degrading and mutating each round
>> trip.
>
> Well, I'm not the one here using Windoze...
Arguably Microsoft does it better than Unix. Most of the issues are
related to the character encoding. And as Don's headers say:
X-Mailer: Apple Mail (2.3608.60.0.2.5)
I agree with Don. I use mutt, which has many advantages, but sane
character encoding isn't one of them.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
So it could be the lack of televised sports getting to me in these shelter-in-place days, but, I mean, sure, I guess I’ll throw in some bucks for a pay-per-view of a Pike/Thompson cage match. FIGHT!
Followups set.
> On Apr 18, 2020, at 6:28 PM, Rob Pike <robpike(a)gmail.com> wrote:
> It wasn't my intention.
> On Sun, Apr 19, 2020 at 11:12 AM Ken Thompson <ken(a)google.com> wrote:
>>
>> you shouldn't have shut down this discussion.
>> On Sat, Apr 18, 2020 at 3:27 PM Rob Pike <robpike(a)gmail.com> wrote:
>>> ``because''.
So I imagine that most readers of this list have heard that a number of US
states are actively looking for COBOL programmers.
If you have not, the background is that, in the US, a number of
unemployment insurance systems have mainframe backends running applications
mostly written in COBOL. Due to the economic downturn as a result of
COVID-19, these systems are being overwhelmed with unprecedented numbers of
newly-unemployed people filing claims. The situation is so dire that the
Governor of New Jersey mentioned it during a press conference.
This has led to a number of articles in the popular press about this
situation, some rather sensational: "60 year old programming language
prevents people filing for unemployment!" E.g.,
https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-j…
On the other hand, some are rather more measured:
https://spectrum.ieee.org/tech-talk/computing/software/cobol-programmers-an…
I suspect the real problem is less "COBOL and mainframes" and more
organizational along the lines of lack of investment in training,
maintenance and modernization. I can't imagine that bureaucrats are
particularly motivated to invest in technology that mostly "just works."
But the news coverage has led to a predictable set of rebuttals from the
mainframe faithful on places like Twitter; they point out that COBOL has
been updated by recent standards in 2002 and 2014 and is being unfairly
blamed for the present failures, which arguably have more to do with
organizational issues than technology. However, the pendulum seems to have
swung too far with their arguments in that they're now asserting that COBOL
codebases are uniformly masterworks. I don't buy that.
I find all of this interesting. I don't know COBOL, nor all that much about
it, save for some generalities about its origin and Grace Hopper's
involvement in its creation. However, in the last few days I've read up on
it a bit and see very little to recommend it: the type and scoping rules
are a mess, things like the 'ALTER' statement and the ability to cascade
procedure invocations via the 'THRU' keyword seem like a recipe for
spaghetti code, and while they added an object system in 2002, it doesn't
seem to integrate with the rest of the language coherently and I don't see
it doing anything that can't be done in any other OO language. And of
course the syntax is abjectly horrible. All in all, it may not be the cause
of the current problems, but I don't know why anyone would be much of a fan
of it and unless you're already sitting on a mountain of COBOL code (which,
in fairness, many organizations in government, insurance and finance
are...) I wouldn't invest in it.
I read an estimate somewhere that there are something like 380 billion
lines of COBOL out there, and another 5 billion are written annually
(mostly by body shops in the BRIC countries?). That's a lot of code; surely
not all of it is good.
So....What do folks think? Is COBOL being unfairly maligned simply due to
age, or is it really a problem? How about continued reliance on IBM
mainframes: strategic assets or mistakes?
- Dan C.
(PS: I did look up the specs for the IBM z15. It's an impressive machine,
but without an existing mainframe investment, I wouldn't get one.)