On 7/6/20 7:30 PM, Greg 'groggy' Lehey wrote:
> People (not just Clem), when you change the topic, can you please
> modify the Subject: to match? I'm not overly interested in uucp,
> but editors are a completely different matter. I'm sure I'm not the
> only one, so many interested parties will miss these replies.
I see this type of change happen — in my not so humble opinion — /way/
/too/ /often/.
So, I'm wondering if people are interested in configuring the TUHS and / or
COFF mailing lists (Mailman) with topics. That way people could
subscribe to the topics that they are interested in and not receive
copies of topics they aren't interested in.
I'm assuming that TUHS and COFF are still on Mailman mailing lists and
that Warren would be amenable to such.
To clarify, it would still be the same mailing list(s) as they exist
today. They would just gain an optional feature for picking
interesting topics, where topics would be based on keywords in the
message body.
I'm just trying to gauge people's interest in this idea.
--
Grant. . . .
unix || die
On Wed, 8 Jul 2020, Jacob Goense wrote:
> When I have tuned out of a long running thread and the topic drifts
> significantly I'm always grateful to the kind soul that tags..:
>
> Subject: Re: new [was: old]
Yeah, I try and remember to do that...
-- Dave
The Internet's original graphics trio of "*Booth, Beatty, and Barsky*" (who
had been affectionately dubbed the graphics industry's "*Killer Bs*") lost
one of their great minds and personalities this last week. With great
sadness in my heart, I regret to inform you that John Beatty has passed
away. I was informed this AM by his partner in so many enterprises, Kelly
Booth, that John died peacefully in his sleep in the early morning on
Thursday, July 2, 2020.
When we look at the wonders of the Internet's growth in the use of computer
graphics, we view his legacy. But more than that, John was a good friend of
many of us and, of course, we all miss him.
Kelly tells me that the obituary with further information is being prepared
by John's sister, Jean Beatty, as well as others. We'll be sure to try to pass
on a copy (or a link if it is on the web) when it has been released.
Clem
Does anyone have any experience with UUCP on macOS or *BSD systems that
would be willing to help me figure something out?
I'm working on adding a macOS X system to my micro UUCP network and
running into some problems.
- uuto / uucp copy files from my non-root / non-(_)uucp user to the
UUCP spool. But the (demand based) "call" (pipe over SSH) is failing.
- running "uucico -r1 -s <remote system name> -f" as the (_)uucp user
(via sudo) works.
- I'm using the pipe port type and running things over an SSH connection.
- The (_)uucp user can ssh to the remote system as expected
without any problems or being prompted for a password. (Service
specific keys w/ forced command.)
I noticed that the following files weren't set UID or GID like they are
on Linux. But I don't know if that's a macOS and / or *BSD difference
when compared to Linux.
/usr/bin/uucp
/usr/bin/uuname
/usr/bin/uustat
/usr/bin/uux
/usr/sbin/uucico
/usr/sbin/uuxqt
Adding the set UID & GID bits allowed things to mostly work.
Aside: Getting contemporary macOS to let me edit the (/usr/share/uucp/)
sys & port files was a treat.
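For reference, the sys & port entries I'm experimenting with look roughly
like this (a sketch only: this assumes Taylor UUCP syntax, and the port
name, host, and ssh options are placeholders for my real setup):

    # port file: a "pipe" port whose transport is an ssh child process
    port ssh
    type pipe
    command /usr/bin/ssh -x -o BatchMode=yes uucp@remote.example.com

    # sys file: tell the remote system entry to use that port
    system remote
    time any
    port ssh
    protocol t      # the t protocol assumes an error-free 8-bit channel

The remote end's authorized_keys forced command then runs uucico there.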
I figured that it was worth inquiring if anyone had any more experience
/ tips / gotchas before I go bending ~> breaking things even more.
Thank you.
--
Grant. . . .
unix || die
In 1966, engineers at IBM invented a method of speeding up execution
without adding a lot of very expensive memory. They called their
invention the muffer. The name did not catch on, so they picked another
name and submitted an article to the IBM Systems Journal. The editor noted
that their second name was heavily overused and suggested a third name,
which the engineers accepted. The third name was cache.
(Muffer was short for memory buffer.)
This is from "IBM's 360 and Early 370 Systems", MIT Press. I found this
an amusing tidbit of history -- hopefully others will too.
N.
Redirected to COFF, since this isn't very TUHS related.
Richard Salz wrote:
> Noel Chiappa wrote:
>> MLDEV on ITS would, I think, fit under that description.
>> I don't know if there's a paper on it; it's mid-70's.
I don't think there's anything like an academic paper.
The earliest evidence I found for the MLDEV facility, or a predecessor,
is from 1972:
https://retrocomputing.stackexchange.com/questions/13709/what-are-some-earl…
> A web search for "its mldev" finds several things (mostly by Lars Brinkhoff
> it seems) , including
> https://github.com/larsbrinkhoff/mldev/blob/master/doc/mldev.protoc
That's just a copy I made.
ABC-TV "Grantchester" (a detective-priest series[*]) featured someone who
died from mercury poisoning when someone opened the tanks. Apart from
the beautiful Teletype in the foreground, there was a machine with lots of
dials in the background; obviously some early computer, but not enough
footage for me to tell which one. It is a British show, if that helps.
[*]
I suppose that I need not remind any Monty Python freaks of these lines:
"There's another dead bishop on the landing, Vicar-Sergeant!"
"Err, Detective-Parsons, Madam."
Etc... Otherwise I'd go on all day :-)
-- Dave
[ -TUHS and +COFF ]
On Mon, Jun 8, 2020 at 1:49 AM Lars Brinkhoff <lars(a)nocrew.org> wrote:
> Chris Torek wrote:
> > You pay (sometimes noticeably) for the GC, but the price is not
> > too bad in less time-critical situations. The GC has a few short
> > stop-the-world points but since Go 1.6 or so, it's pretty smooth,
> > unlike what I remember from 1980s Lisp systems. :-)
>
> I'm guessing those 1980s Lisp systems would also be pretty smooth on
> 2020s hardware.
>
I worked on a big system written in Common Lisp at one point, and it used a
pretty standard Lispy garbage collector. It spent something like 15% of its
time in GC. It was sufficiently bad that we forced a full GC between every
query.
The Go garbage collector, on the other hand, is really neat. I believe it
reserves something like 25% of available CPU time for itself, but it runs
concurrently with your program: having to stop the world for GC is very,
very rare in Go programs and even when you have to do it, the concurrent
code (which, by the way, can make use of full physical parallelism on
multicore systems) has already done much of the heavy lifting, meaning you
spend less time in a GC pause. It's a very impressive piece of engineering.
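If you want to poke at this yourself, a tiny sketch like the following
(purely illustrative; the allocation loop is just a stand-in for real work)
prints the collector's pause accounting via runtime.ReadMemStats:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Churn out some garbage so the collector has work to do.
        for i := 0; i < 1000000; i++ {
            _ = make([]byte, 1024)
        }

        var s runtime.MemStats
        runtime.ReadMemStats(&s)
        fmt.Println("GC cycles:", s.NumGC)
        fmt.Println("total pause ns:", s.PauseTotalNs)
        // PauseNs is a circular buffer of the most recent 256 pauses.
        fmt.Println("last pause ns:", s.PauseNs[(s.NumGC+255)%256])
    }

The individual pauses it reports are typically well under a millisecond on
recent Go releases, which is rather the point.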
- Dan C.
> I've never heard of the dimensionality of an array in such a context
> described in terms of a sum type,[...]
I was also puzzled by this remark. Perhaps Bram was thinking of an
infinite sum type such as
Arrays_0 + Arrays_1 + Arrays_2 + ...
where "Arrays_n" is the type of arrays of size n. However, I don't see
a way of defining this without dependent (or at least indexed) types.
> [...] but have often heard of it described in terms of a dependent
> type.
I think that's right, yes. We want the type checker to decide, at
compile time, whether a certain operation on arrays, such as scalar
product, or multiplication with a matrix, will respect the types. That
means providing the information about the size to the type checker:
types need to know about values, hence the need for dependent types.
Ideally, a lot of the type information can then be erased at run time,
so that we don't need the arrays to carry their sizes around. But the
distinction between what can and what cannot be erased at run time is
muddled in a dependently-typed programming language.
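For a very limited illustration of sizes living in types, Go's fixed-size
arrays carry their length in the type, so a size mismatch is a compile-time
error; this is not dependent typing, though, since the length must be a
compile-time constant rather than an arbitrary run-time value:

    package main

    import "fmt"

    // dot is defined only for length-3 vectors; [3]float64 and [4]float64
    // are distinct types, so passing the wrong size fails to compile.
    func dot(a, b [3]float64) float64 {
        var s float64
        for i := range a {
            s += a[i] * b[i]
        }
        return s
    }

    func main() {
        x := [3]float64{1, 2, 3}
        y := [3]float64{4, 5, 6}
        fmt.Println(dot(x, y)) // 32
        // z := [4]float64{1, 2, 3, 4}
        // dot(x, z)           // rejected by the type checker
    }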
Best wishes,
Cezar
Dan Cross <crossd(a)gmail.com> writes:
> [-TUHS, +COFF as per wkt's request]
>
> On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie(a)gmail.com> wrote:
>
>> Dependent types aren't needed for sum types though, which is what you'd
>> normally use for an array that carries its size, correct?
>>
>
> No, I don't think so. I've never heard of the dimensionality of an array in
> such a context described in terms of a sum type, but have often heard of it
> described in terms of a dependent type. However, I'm no type theorist.
>
> - Dan C.
[-TUHS, +COFF as per wkt's request]
On Sun, Jun 7, 2020 at 8:26 PM Bram Wyllie <bramwyllie(a)gmail.com> wrote:
> Dependent types aren't needed for sum types though, which is what you'd
> normally use for an array that carries its size, correct?
>
No, I don't think so. I've never heard of the dimensionality of an array in
such a context described in terms of a sum type, but have often heard of it
described in terms of a dependent type. However, I'm no type theorist.
- Dan C.
Moving to COFF where this discussion really belongs ...
On Sun, Jun 7, 2020 at 2:51 PM Nemo Nusquam <cym224(a)gmail.com> wrote:
> On 06/07/20 11:26, Clem Cole wrote (in part):
> > Neither language is used for anything in production in our world at this
> > point.
>
> They seem to be used in some worlds: https://blog.golang.org/10years and
> https://www.rust-lang.org/production
Nemo,
That was probably not my clearest wording. I did not mean to imply that
either Go or Rust is not being used, by any stretch of the imagination.
My point was that in the SW development environments in which I reside
(HPC, some startups here in New England and the Bay Area, as well as Intel
in general), Go and Rust both have smaller use cases compared to C and C++
(much less Python and Java, for that matter). And I know of really no
'money' project that relies yet on either. That does not mean I know of no
one using either; I know of projects using both (including a couple of my
own), but none that have done anything production or deployed
'mission-critical' SW with one or the other. Nor does that mean it has not
happened; it just means I have not been exposed to it.
I also am saying that, in my own personal opinion, I expect it to happen,
in particular with Go in userspace code - possibly having a chance to push
out Java and hopefully pushing out C++ a bit.
My response was to an earlier comment about C's popularity WRT C++. I
answered with my experience and widened it to suggest that maybe C++ was
not the guaranteed incumbent as the winner for production. What I did not
say then, but alluded to, was that nothing in nearly 70 years has
displaced Fortran, which >>is<< still the #1 language for
production codes (as you saw with the Archer statistics I pointed out).
Reality time ... Intel, IBM, *et al,* spend a lot of money making sure
that there are >>production quality<< Fortran compilers easily available.
Today's modern society depends on it, from weather prediction to energy,
to pharma, chemistry, and physics. As I have said here and in other
places, over my career Fortran has paid my and my peeps' salary. It is the
production enabler, and without a solid answer to having a Fortran
solution, you are unlikely to make too much progress, certainly in the HPC
space.
Let me take this in a slightly different direction. I tend to use
'follow the money' as a way to root out what people care about.
Where do firms spend money to create or purchase tools to help their staff?
The answer is in tools that give them a return that they can measure. So
using that rule: what programming languages have the largest ecosystems
of tools that help find performance and operational issues? Fortran, C, and
C++ have the largest that I know of. My guess would be that Java and maybe
JavaScript/PHP would be next, but I don't know of any.
If I look at Intel, where do we spend money on development tools? C/C++
and Fortran (which all use a common backend) are #1. Then we invest in
other versions of the same (GCC/LLVM) for particular things we care
about. After that, it's Python, and it used to be Java and maybe some
JavaScript. Why? Because ensuring that those ecosystems are solid on
devices that we make is good for us, even if it means we help some of our
competitors' devices also. But our investment helps us, and Fortran and
C/C++ are where people use our devices (and our most profitable versions in
particular), so it's in our own best interest to make sure there are tools
to bring out the best.
BTW: I might suggest you take a peek at where other firms do the same
thing; I think you'll find the 'follow the money' rule is helpful for
understanding what people care the most about.
Hi
I'm wondering if anybody knows what happened with this? (Besides the
fact it crashed and burnt. I mean, where are the source trees?)
lowendmac.com/2016/nutek-mac-clones/
It was a Macintosh clone that reverse-engineered the Mac interface and
appearance, using the X Window System as the GUI library - what in
Macs of that era was locked into the Mac system BIOS.
As such, it's an interesting use of the X Window System.
Wesley Parish
Cc: to COFF, as this isn't so Unix-y anymore.
On Tue, May 26, 2020 at 12:22 PM Christopher Browne <cbbrowne(a)gmail.com>
wrote:
> [snip]
> The Modula family seemed like the better direction; those were still
> Pascal-ish, but had nice intentional extensions so that they were not
> nearly so "impotent." I recall it being quite popular, once upon a time,
> to write code in Modula-2, and run it through a translator to mechanically
> transform it into a compatible subset of Ada for those that needed DOD
> compatibility. The Modula-2 compilers were wildly smaller and faster for
> getting the code working, you'd only run the M2A part once in a while
> (probably overnight!)
>
Wirth's languages (and books!!) are quite nice, and it always surprised and
kind of saddened me that Oberon didn't catch on more.
Of course Pascal was designed specifically for teaching. I learned it in
high school (at the time, it was the language used for the US "AP Computer
Science" course), but I was coming from C (with a little FORTRAN sprinkled
in) and found it generally annoying; I missed Modula-2, but I thought
Oberon was really slick. The default interface (which inspired Plan 9's
'acme') had this neat graphical sorting simulation: one could select
different algorithms and vertical bars of varying height were sorted into
ascending order to form a rough triangle; one could clearly see the
inefficiency of e.g. Bubble sort vs Heapsort. I seem to recall there was a
way to set up the (ordinarily randomized) initial conditions to trigger
worst-case behavior for quick.
I have a vague memory of showing it off in my high school CS class.
- Dan C.
Hi all, I have a strange question and I'm looking for pointers.
Assume that you can multiply two 8-bit values in hardware and get a 16-bit
result (e.g. ROM lookup table). It's straightforward to use this to multiply
two 16-bit values:
AABB *
CCDD
----
PPPP = BB*DD
QQQQ00 = BB*CC
RRRR00 = AA*DD
SSSS0000 = AA*CC
--------
32-bit result
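In code, that composition looks something like the following sketch (Go
purely for illustration here; mul8 stands in for the 8x8 -> 16-bit ROM
lookup):

    package main

    import "fmt"

    // mul8 stands in for the hardware's 8x8 -> 16-bit ROM lookup table.
    func mul8(x, y uint8) uint16 {
        return uint16(x) * uint16(y)
    }

    // mul16 composes a 16x16 -> 32-bit multiply from the four partial
    // products in the diagram above.
    func mul16(x, y uint16) uint32 {
        a, b := uint8(x>>8), uint8(x) // AA, BB
        c, d := uint8(y>>8), uint8(y) // CC, DD

        bd := uint32(mul8(b, d))       // PPPP
        bc := uint32(mul8(b, c)) << 8  // QQQQ00
        ad := uint32(mul8(a, d)) << 8  // RRRR00
        ac := uint32(mul8(a, c)) << 16 // SSSS0000

        return bd + bc + ad + ac // the sum cannot overflow 32 bits
    }

    func main() {
        fmt.Println(mul16(0xabcd, 0x1234) == 0xabcd*0x1234) // true
    }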
But if the hardware can only provide the low eight bits of the 8-bit by
8-bit multiply, is it still possible to do a 16-bit by 16-bit multiply?
Next question, is it possible to do 16-bit division when the hardware
can only do 8-bit divided by 8-bit. Ditto 16-bit modulo with only 8-bit
modulo?
Yes, I could sit down and nut it all out from scratch, but I assume that
somewhere this has already been done and I could use the results.
Thanks in advance for any pointers.
Warren
** Back story. I'm designing an 8-bit TTL CPU which has 8-bit multiply, divide
and modulo in a ROM table. I'd like to write subroutines to do 16-bit and
32-bit integer maths.
On Sun, May 17, 2020 at 12:24 PM Paul Winalski <paul.winalski(a)gmail.com>
wrote:
> On 5/16/20, Steffen Nurpmeso <steffen(a)sdaoden.eu> wrote:
> >
> > Why was there no byte or "mem" type?
>
> These days machine architecture has settled on the 8-bit byte as the
> unit for addressing, but it wasn't always the case. The PDP-10
> addressed memory in 36-bit units. The character manipulating
> instructions could deal with a variety of different byte lengths: you
> could store six 6-bit BCD characters per machine word,
Was this perhaps a typo for 9 4-bit BCD digits? I have heard that a reason
for the 36-bit word size of computers of that era was that the main
competition at the time was against mechanical calculators, which had
9-digit precision. 9*4=36, so 9 BCD digits could fit into a single word,
for parity with the competition.
6x6-bit data would certainly hold BAUDOT data, and I thought the Univac/CDC
machines supported a 6-bit character set? Does this live on in the Unisys
1100-series machines? I see some reference to FIELDATA online.
I feel like this might be drifting into COFF territory now; Cc'ing there.
> or five ASCII
> 7-bit characters (with a bit left over), or four 8-bit characters
> (ASCII plus parity, with four bits left over), or four 9-bit
> characters.
>
> Regarding a "mem" type, take a look at BLISS. The only data type that
> language has is the machine word.
>
> > +getfield(buf)
> > +char buf[];
> > +{
> > + int j;
> > + char c;
> > +
> > + j = 0;
> > + while((c = buf[j] = getc(iobuf)) >= 0)
> > + if(c==':' || c=='\n') {
> > + buf[j] =0;
> > + return(1);
> > + } else
> > + j++;
> > + return(0);
> > +}
> >
> > so here the EOF was different and char was signed 7-bit it seems.
>
> That makes perfect sense if you're dealing with ASCII, which is a
> 7-bit character set.
To bring it back slightly to Unix, when Mary Ann and I were playing around
with First Edition on the emulated PDP-7 at LCM+L during the Unix50 event
last USENIX, I have a vague recollection that the B routine for reading a
character from stdin was either `getchar` or `getc`. I had some impression
that this did some magic necessary to extract a character from half of an
18-bit word (maybe it just zeroed the upper half of a word or something).
If I had to guess, I imagine that the coincidence between "character" and
"byte" in C is a quirk of this history, as opposed to any special hidden
meaning regarding textual vs binary data, particularly since Unix makes no
real distinction between the two: files are just unstructured bags of
bytes, they're called 'char' because that was just the way things had
always been.
- Dan C.
On May 14, 2020, at 10:32 AM, Larry McVoy <lm(a)mcvoy.com> wrote:
> I'm being a whiney grumpy old man,
I’ve been one of those since I was, like, 20. I am finally growing into it. It’s kinda nice.
Adam
[redirecting to COFF]
On Wednesday, 15 April 2020 at 18:19:57 +1000, Dave Horsfall wrote:
> On Wed, 15 Apr 2020, Don Hopkins wrote:
>
>> I love how in a discussion of how difficult it was to publish a book on
>> Unix with the correct punctuation characters 42 years ago, we still
>> can???t even quote the title of the book in a discussion about Unix
>> without the punctuation characters degrading and mutating each round
>> trip.
>
> Well, I'm not the one here using Windoze...
Arguably Microsoft does it better than Unix. Most of the issues are
related to the character encoding. And as Don's headers say:
X-Mailer: Apple Mail (2.3608.60.0.2.5)
I agree with Don. I use mutt, which has many advantages, but sane
character encoding isn't one of them.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
So it could be the lack of televised sports getting to me in these shelter-in-place days, but, I mean, sure, I guess I’ll throw in some bucks for a pay-per-view of a Pike/Thompson cage match. FIGHT!
Followups set.
> On Apr 18, 2020, at 6:28 PM, Rob Pike <robpike(a)gmail.com> wrote:
> It wasn't my intention.
> On Sun, Apr 19, 2020 at 11:12 AM Ken Thompson <ken(a)google.com> wrote:
>>
>> you shouldn't have shut down this discussion.
>> On Sat, Apr 18, 2020 at 3:27 PM Rob Pike <robpike(a)gmail.com> wrote:
>>> ``because''.
So I imagine that most readers of this list have heard that a number of US
states are actively looking for COBOL programmers.
If you have not, the background is that, in the US, a number of
unemployment insurance systems have mainframe backends running applications
mostly written in COBOL. Due to the economic downturn as a result of
COVID-19, these systems are being overwhelmed with unprecedented numbers of
newly-unemployed people filing claims. The situation is so dire that the
Governor of New Jersey mentioned it during a press conference.
This has led to a number of articles in the popular press about this
situation, some rather sensational: "60 year old programming language
prevents people filing for unemployment!" E.g.,
https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-j…
On the other hand, some are rather more measured:
https://spectrum.ieee.org/tech-talk/computing/software/cobol-programmers-an…
I suspect the real problem is less "COBOL and mainframes" and more
organizational along the lines of lack of investment in training,
maintenance and modernization. I can't imagine that bureaucrats are
particularly motivated to invest in technology that mostly "just works."
But the news coverage has led to a predictable set of rebuttals from the
mainframe faithful on places like Twitter; they point out that COBOL has
been updated by recent standards in 2002 and 2014 and is being unfairly
blamed for the present failures, which arguably have more to do with
organizational issues than technology. However, the pendulum seems to have
swung too far with their arguments in that they're now asserting that COBOL
codebases are uniformly masterworks. I don't buy that.
I find all of this interesting. I don't know COBOL, nor all that much about
it, save for some generalities about its origin and Grace Hopper's
involvement in its creation. However, in the last few days I've read up on
it a bit and see very little to recommend it: the type and scoping rules
are a mess, things like the 'ALTER' statement and the ability to cascade
procedure invocations via the 'THRU' keyword seem like a recipe for
spaghetti code, and while they added an object system in 2002, it doesn't
seem to integrate with the rest of the language coherently and I don't see
it doing anything that can't be done in any other OO language. And of
course the syntax is abjectly horrible. All in all, it may not be the cause
of the current problems, but I don't know why anyone would be much of a fan
of it and unless you're already sitting on a mountain of COBOL code (which,
in fairness, many organizations in government, insurance and finance
are...) I wouldn't invest in it.
I read an estimate somewhere that there are something like 380 billion
lines of COBOL out there, and another 5 billion are written annually
(mostly by body shops in the BRIC countries?). That's a lot of code; surely
not all of it is good.
So....What do folks think? Is COBOL being unfairly maligned simply due to
age, or is it really a problem? How about continued reliance on IBM
mainframes: strategic assets or mistakes?
- Dan C.
(PS: I did look up the specs for the IBM z15. It's an impressive machine,
but without an existing mainframe investment, I wouldn't get one.)
Hello:
I apologize for the OT, but I'm desperately looking for the answer to a
question regarding BitKeeper and, since I heard about its virtues and
also about it becoming open source on this list, I thought you guys
wouldn't mind helping me find the answer to my problem.
I've been using Git for some time now, but after hearing here about BK,
I'm trying it out to see if I can change to it. At the moment I'm quite
happy with it and I very much like that it's smaller than Git.
In Git I use a remote SSH "original" repo instead of a public (HTTP)
service, since I want to have distributed copies (several workstations
and laptops) and a "non-interactive" copy on a server of mine. I'm not
sure I am making a lot of sense, since I'm not much of an expert on Git
either (I just use a few basic functions) and English isn't my first
language.
What I actually need/want is to create a remote SSH repo from my local
(working) repo. I haven't found how to do that either in the
documentation or the Users Forum. Alas, I haven't found a mailing list to
subscribe to and ask for help. That's why I'm asking you guys for help.
Again, sorry for the OT and thank you so much for reading this far. :)
Cheers,
Ángel
Way back then, when dinosaurs strode the earth and System/360 reigned
supreme, we were taught to slash our zeros and sevens (can't quite find
the glyphs for them right now) in order to distinguish them from Oscars
and Ones for the benefit of the keypunch girls (yes, really; I had the
hots for one of them at one time, but someone else took her) on those
green sheets.
In the meantime I also saw a slash through "Z" (Zulu, Zed, Zee) in order
to distinguish it from a "2" (FIGURES TWO); WTF? And you *never* slashed
your ones, lest thou ended up with the contents of an 029 chad-box over
thine head...
These days, I merely slash my sevens by habit and it doesn't even raise an
eyebrow; even Oz Pigeon Post's mail mangler seems to accept it[*].
What're other bods' experiences?
[*]
Don't even mention Redfern, OK? Just don't... And as for getting off at
Redfern, well, I did a few times when I had a contract job around there.
-- Dave, bored at home
I remember learning to slash my zeroes, put a small bar in my sevens and
zeds, and write a curly letter X.
This was when I studied Mathematics for a while at the University of
Technology in Delft, the Netherlands. I think it had to do with
avoiding misunderstandings in scribbled formulas, in a time when we still
did a lot of real handwriting of semester papers. This was around the
mid-'70s for me.
Anybody feel like a text/voice chat on the ClassicCmp Discord server in about 13 hours, say 2200 UTC?
#coff and the General voice channel.
I'll pop on for an hour but start whenever you feel like.
Cheers, Warren
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.