I work at an astronomy facility. I get to do some fun dumpster diving.
I recently pulled out of the trash a plugboard with a male and a
female D-Sub 52 connector: 3 rows of pins, 17-18-17. I took the
connectors off the board: there's nothing back there, so this thing only
ever existed so you could plug the random cable you found into it and its
friends to see what the cable fit.
I can't find much evidence that a 52-pin D-Sub ever existed.
Is this just Yet Another Physics Experiment thing where, hey, if your
instrument already costs three million dollars, what's a couple of grand
for machining custom connectors? Or was it once a thing?
(also posting to cc-talk)
Adam
Not UNIX, not 52-pin, but old, old and serial
Your mission, should you choose to accept it, is to save data from a
computer that should have died aeons ago
...
Tap into the serial line – what could be simpler?
Alas, the TI was smart enough to spot the absence of the rattly old
beast ("the software wouldn't print without some of the seldom-used
serial control lines functioning," explained Aaron) so the customer
was asked to bring in the printer as well.
https://www.theregister.co.uk/2020/02/24/who_me/
---------- Forwarded message ---------
From: Adam Thornton <athornton(a)gmail.com>
Date: Tue, Feb 11, 2020 at 5:56 PM
Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
To: Clem Cole <clemc(a)ccc.com>
As someone working in this area right now... yeah, and that’s why
traditional HPC centers do not deliver the services we want for projects
like the Vera Rubin Observatory’s Legacy Survey of Space and Time.
Almost all of our scientist-facing code is Python, though a lot of the
performance-critical stuff is implemented in C++ with Python bindings.
The incumbents are great at running data centers like they did in 1993.
That’s not the best fit for our current workload. It’s not generally the
compute that needs to be super-fast: it’s access to arbitrary slices
through really large data sets that is the interesting problem for us.
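(To make that access pattern concrete, here is a minimal sketch in Python/NumPy. It is only my illustration, with a made-up file name and catalog shape; it is not the Rubin/LSST software.)

    # Toy illustration only: pull an arbitrary slice out of a data set
    # larger than you want to hold in RAM.  A NumPy memory-map reads just
    # the rows/columns you touch from disk.
    import numpy as np

    FILE = "catalog.f8"              # hypothetical flat file of float64 values
    N_ROWS, N_COLS = 100_000, 32     # hypothetical catalog shape

    # Build a dummy file once so the example is self-contained.
    data = np.memmap(FILE, dtype="f8", mode="w+", shape=(N_ROWS, N_COLS))
    data[:] = np.random.default_rng(0).standard_normal((N_ROWS, N_COLS))
    data.flush()

    # A scientist-facing query then touches only the slice it needs:
    view = np.memmap(FILE, dtype="f8", mode="r", shape=(N_ROWS, N_COLS))
    slice_of_interest = view[25_000:25_100, 3:7]   # 100 rows, 4 columns
    print(slice_of_interest.mean())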
That’s not to say that LFortran isn’t cool. It really, really is, and
Ondřej Čertík has done amazing work in making modern Fortran run in a
Jupyter notebook, and the implications (LLVM becomes the ImageMagick of
compiled languages) are astounding.
But...HPC is no longer the cutting edge. We are seeing a Kuhnian paradigm
shift in action, and, sure, the old guys (and they are overwhelmingly guys)
who have tenure and get the big grants will never give up FORTRAN, which
after all was good enough for their grandpappy and is therefore good enough
for them. But they will retire. Scaling out is way way cheaper than
scaling up.
On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc(a)ccc.com> wrote:
> moving to COFF
>
> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
>
>> My general mood about the current standard way of nerd working is how
>> unimaginative and old-fashioned it feels.
>>
> ...
>>
>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>> "enough!".
>>
>> -rob
>>
>
> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> words. But I caution that just because something is old-fashioned does
> not necessarily make it wrong (much less bad).
>
> I ask you to take a look at the Archer statistics of code running in
> production (Archer is a large HPC site in Europe):
> http://archer.ac.uk/status/codes/
>
> I think there are similar stats available for places like CERN, LRZ, and
> some of the US labs, but I know of these so I point to them.
>
> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> C++ @ 8%, Python @ 1% and all the others at 1%.
>
> Why is that? The math has not changed ... and open up any of those codes
> and what do you see: solving systems of differential equations with linear
> algebra. It's the same math my mom did by hand as a 'computer' in the 1950s.
>
> There are no 'tensor flows' or ML searches running Spark in there. Sorry,
> Google/AWS et al. Nothing 'modern' and fresh -- just solid simple science
> being done by scientists who don't care about the computer or sexy new
> computer languages.
>
> IIRC, you trained as a physicist, so I think you understand their thinking. *They
> care about getting their science done.*
>
> By the way, a related thought comes from a good friend of mine from
> college who used to be the Chief Metallurgist for the US Gov (NIST in
> Colorado). He's back in the private sector now (because he could not
> stomach current American politics), but he made an important
> observation/comment to me a couple of years ago. They have 60+ years of
> metallurgical data that he and his peeps have been using with known
> Fortran codes. If we gave him new versions of those analytical programs
> now in your favorite new HLL - pick one - your Go (which I love), C++
> (which I loathe), DPC++, Rust, Python - whatever - the scientists would have
> to reconfirm previous results. They are not going to do that. It's not
> economical. They 'know' how the data works, the types of errors they
> have, how the programs behave, *etc*.
>
> So to me, the bottom line is that just because it's old-fashioned does not
> make it bad. I don't want to write an OS in Fortran-2018, but I can write a
> system that supports code compiled with my sexy new Fortran-2018 compiler.
>
> That is to say, the challenge for >>me<< is to build him a new
> supercomputer that can run those codes for him without changing what they
> are doing, and have them scale to 1M nodes, *etc*.
>
First, please continue this discussion on COFF (which has been CC'ed).
While Fortran is interesting to many, it is not a UNIX topic per se.
Also, as I have noted in other places, I work for Intel - these comments
are my own and I'm not trying to sell you anything. Just passing on 45+
years of programming experience.
On Mon, Feb 24, 2020 at 10:34 AM Adam Thornton <athornton(a)gmail.com> wrote:
> I would think that FORTRAN is likelier to be passed along as folk wisdom,
> with ancient PIs (uh, Principal Investigators, not the detective kind)
> thrusting a dog-eared FORTRAN IV manual at their new grad students and
> snarling "RTFM!", than taught in actual college courses.
>
FWIW: I was at CMU last week recruiting. Fortran, even at a leading CS
place like CMU, is hardly "folk wisdom". All the science PhDs (Chem, Mat
Sci, Bio, Physics) that I interviewed knew and used Fortran (and listed it
on their CVs) as the primary language for their science.
As I've quipped before, Fortran pays my own (and a lot of other people's)
salaries in the industry. Check out:
https://www.archer.ac.uk/status/codes/ Fortran is about 90% of the codes
running (FWIW: I have seen similar statistics from other large HPC sites -
you'll need to poke around).
While I do not write in it, I believe there are three reasons why these
statistics are true and *going to be true for a very long time*:
1. The math being used has not changed. Just open up the codes and look
at what they are doing. You will find that they are all solving
systems of partial differential equations using linear algebra (see the
movie "Hidden Figures"; a toy sketch of this step follows this list).
2. 50-75 years of data sets with known qualities and programs to work
with them. If you were able to replace the codes magically with something
'better' (from MATLAB to Julia or Python to Java), all their data would
have to be requalified (it is like the QWERTY keyboard - that ship
sailed years ago).
3. The *scientists want to do their science*; their work is about getting
their degree or prize. The computer and its programs *are a tool* for them
to look at data *to do their science*. They don't care, as long as they
get their work done.
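(Here is the toy sketch of point 1, in Python/NumPy. It is my illustration only, not anyone's production code: a PDE is discretized and the work collapses into one linear solve.)

    # Toy example: -u''(x) = 1 on (0,1) with u(0) = u(1) = 0, by finite
    # differences.  The PDE becomes the linear system A u = b.
    import numpy as np

    n = 99                      # interior grid points
    h = 1.0 / (n + 1)           # grid spacing
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    b = np.ones(n)              # constant source term

    u = np.linalg.solve(A, b)   # the linear-algebra step these codes live in
    # Exact solution is u(x) = x(1-x)/2, so the midpoint should be 0.125:
    print(u[n // 2])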
Besides Adam's mention of flang, there is, of course, gfortran; but there
are also commercial compilers available for use: Qualify for Free Software
| Intel® Software
<https://software.intel.com/en-us/articles/qualify-for-free-software> I
believe PGI offers something similar, but I have not checked in a while.
Most 'production' codes use a real compiler like Intel, PGI or Cray's.
FWIW: the largest number of LLVM developers are at Intel now. IMO,
while flang is cute, it will be a toy for a while, as the LLVM IL really
cannot handle Fortran easily. There is a huge project to put a number of
the learnings from the DEC GEM compilers into LLVM, and one piece is gutting
the internal IL and making it work for parallel architectures. The >>hope<<
of many of my peeps (still unproven) is that at some point the FOSS world
will produce a compiler as good as GEM or the current Intel icc/ifort set.
(Hence, Intel is forced to support 3 different compiler technologies
internally in the technical languages group).
Seen on one of our local "seminars" lists recently...
> Emerging hardware, such as non-volatile main memory (NVMM) [...]
> changes the way system software should be designed and implemented,
> because they are not just an enhanced version of existing devices,
> but provide new qualitative features and software interfaces.
Core store, mutter, mutter.
We used to regularly restart machines which had been turned off for a
while, and they would happily pick up where they left off. One PDP-8 was
happy to resume after several years of idleness.
Sorry, had to send that, mutter...
--
George D M Ross MSc PhD CEng MBCS CITP
University of Edinburgh, School of Informatics,
Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE
Mail: gdmr(a)inf.ed.ac.uk Voice: 0131 650 5147
PGP: 1024D/AD758CC5 B91E D430 1E0D 5883 EF6A 426C B676 5C2B AD75 8CC5
[ Moved to COFF ]
On Wed, 19 Feb 2020, Richard Salz wrote:
> He's a loon. Search for the techdirt.com articles.
>
> > On Wed, Feb 19, 2020, 7:07 PM Ed Carp <erc(a)pobox.com> wrote:
> > I've noticed that some guy named Dr. Shiva Ayyadurai is all over
> > Twitter, claiming that he is the inventor of email. He doesn't
> > look like he's nearly old enough. I thought it was Ray
> > Tomlinson. Looks like he's trying to create some press for his
> > Senate run.
> >
> > Anyone older than me here who can either confirm or deny?
> > Thanks!
Back when I was posting "On this day" events, I had this for Ray
Tomlinson for 23rd April:
Ray Tomlinson, computer pioneer, was born on this day in 1941. He is
credited with inventing this weird thing called "email" on the
ARPAnet, in particular the "@" sign to designate a remote host
(although some jerk -- his name is not important -- is claiming that
he was first).
-- Dave
Moving to COFF where this belongs…
Here is my basic issue. I'm not 'blind' as Larry says. I lived it and I
try to acknowledge who did what and why if I can. I try to remember that we
got here by a path and that path was hardly straight, but you don't get to
join the convoy late and then say -- hey, the journey began someplace else.
@FLAME(on)
Open/Free/Available/Access – whatever you want to call it, did not just pop
up in the late 1980s with the Free Software Foundation or in the 90s with
the Linux Foundation *et al*. The facts are that in the early years, a
computer customer got everything, including schematics, from the CPU
manufacturer.
The ‘culture’ described in Steven Levy’s 1984 book “Hackers” took off
at MIT, Stanford, CMU, *et al.* because everything *was available* and
people *shared things because it saved us all time and trouble*. In fact,
the name of the IBM user group was just that, SHARE.
IBM published patches to the OS or their compilers, in source, as
'PTFs' - Program Temporary Fixes. Each site might have modified things a
little (or a lot), so you got the PTF tape and looked at how the patch
affected you. That was my first job: supporting York APL/360 on TSS. (CMU had
TSS working before IBM did, so a lot of PTFs from IBM would be things we had
already dealt with.)
Certainly, when I started programming in the late 1960s, the idea of
proprietary SW had been around, but it was still somewhat constrained to
the commercial side (Banking/Insurance, *etc*. – where the real money
was). The research and university community (which of course DEC was
heavily part of) was very much "we are all in this together." Still, everyone
had sources, and we moved things back and forth via mag tape at DECUS
conferences or, eventually, over the ARPAnet.
At some point that started to change. Doug, Ken and others older than I
can probably tell you more than I can about that transition. But the
vendors started to lock up more and more of their IP. A user no longer
got a mag tape with the sources and you did not do a full system
generation. The end users/customers only got parts of the system, the rest
was binaries. Unless you paid huge fees, the source at best was available
on microfiche, and often you lacked important things needed to recreate the
binaries. Thus the concept of the closed or proprietary system started
to become the norm, which it had not been previously.
I remember, since CMU had VAX Serial #1 and a 'special' relationship with
DEC, we had VMS sources. One spring/summer we were doing a consulting job
(moving ISPS to the VAX for the Israeli government), and that was where I
realized they only had the code on fiche, and CMU was 'different.'
But here is the interesting thing: as the vendors started becoming less and
less 'open', *AT&T was required by the 1956 consent decree to be 'open' and
license its IP* on 'fair and reasonable' terms to all interested parties.
(Which they did, and the world got the transistor and UNIX as two of the
best examples.) So AT&T's UNIX behavior was the opposite of what the hardware
manufacturers were doing at the time!
The argument comes back to a few basic issues: what is ‘fair and
reasonable’, and ‘who gets to decide’ what is made available. As the
creators of some content started to close access to their ‘secret sauce’, a
tension could and did start to build between the creators and some users.
BTW, the other important thing to remember is that you needed a $100K-$250K
hunk of HW from DEC to use that ‘open’ IP from AT&T, and *the hardware
acquisition was the barrier to entry*, not the cost of the SW.
Folks, those of us that lived it know: UNIX was 100% open. Anyone could get a
license for it. The technologies that AT&T developed were also published
in the open literature, detailing how they were made and how they worked.
They did this originally because they were bound by the US Gov due to a case
that started in 1949 and was settled with that 1956 decree! The folks at AT&T
were extremely free to talk about it, and they did give away what they had.
The ‘sauce’ was never secret (and thus AT&T would famously lose when they
later tried to put the cat back in the bag in the AT&T *vs*. UCB/BSDi case).
The key is that during the PDP-11 and Vaxen times, the UNIX community all
had licenses, commercial or university. But soon the microprocessor
appeared, we started to form new firms, and with those sources we created a
new industry, the *Open Systems Industry*, with an organization called
/usr/group. This was all in the early 1980s (before FSF, much less Linux).
What was different here was that *we could all share* with other licensees
(and anyone could get a license if they >>wanted<< it).
But something interesting happened. These new commercial Open Systems folk
won the war with the proprietary vendors. They were still competing with
the old guard and they competed against each other (surprise/surprise – some
were the same folks who had been competing against each other previously,
now they were just using somewhat standard ammunition – UNIX and a cheap
processor).
Moreover, the new people with the UNIX technology (Sun, DEC, HP, Masscomp,
IBM *et al*) started to handle their own versions of UNIX just like they
handled their previous codes. They wanted to protect it.
And this is where the famous 'fair and reasonable' comes in. Who gets to
set what is fair? Certainly, a $150 fee to recover the cost of writing the
magtape (the IP was really free) seemed fair at the time – particularly
since you had to ‘cons up’ another $150K for that PDP-11.
Stallman, in particular, wants to go back to the old days, where he got
access to everything and he had his playground. To add insult to injury, at
the same time he was fighting the same war over some of MIT's ideas and the
LISP machine world. So his answer was to try to rewrite everything from
scratch and then try to give it away/get people to use it, but add a funny
clause that said you have to give it to anyone else that asked for it. He
still has a license, it just has different rules (I’ll not get into whether
this is fair or reasonable – but those were the rules FSF made). BTW: that
only works if you have something valuable (more in a minute).
Moore’s law starts driving the cost of the hardware down and at some point,
the computer to run UNIX costs $50K, then $10K, $5K, and even $1K. So now
the fees that AT&T is charging the commercial side can be argued (as Larry
and others have done so well) to be no longer ‘reasonable.’
At some point, FSF’s movement (IMO – after they got a compiler that was
‘good enough’ and that worked on ‘enough’ target ISAs) starts to take off. I
think this is the real 'Christensen disruption'. GCC was not as good as
Masscomp's or Sun's compilers for the 68k, or DEC's for the VAX, but it was
free. As I recall, Sun was charging for its compiler at the time (we did
manage to beat back the ex-DEC marketing types at Masscomp, and the C
compiler was free; Fortran and Pascal cost $s).
Even though gcc is not as good, it's good enough and people love it, so it
builds a new market (and gets better and better as more people invest in it
-- see Christensen's theory for why).
But this was at least 5 years *after* the Open Systems community had been
birthed. Sorry guys -- the term had been in use for a while to mean the
>>UNIX<< community, with its open interfaces and sharing of code. BTW: Linux
itself would not happen for another 5 years after that, and it was a couple
more years before the Linux Foundation, much less the community that has
grown around it.
But that’s my point… Please, at least here in the historic mailing lists,
start to admit and be proud that we are standing on people's shoulders, and
>>stop<< trying to step on people’s toes.
The current FOSS movement is just that – Free and Open. That’s cool –
that’s great. But recognize it started long before FSF or Linux or any of
that.
For a different time: the person I think should really be credited with the
start of the FOSS movement as we know it is the late Prof. Don Pederson. In
the late 1960s, he famously gave away his first ECAD program from UCB (which
I believe was called MOTIS – and would later beget SPICE). As ‘dop’ used to
tell his students (like me) back in the day: ‘*I always give away my
sources, because that way I go in the back door and get to see everything at
IBM/HP/Tektronix/AT&T/DEC etc. If I license and sell our code, I have to go
in the front door like any other salesman.*’
For the record, a few years later my other alma mater (CMU) was notorious
for licensing its work -- hence the SCRIBE debacle of the late 1970s and
much of the CMU SPICE project/Andrew results of the early 1980s - while MIT
gave everything away in Athena and more or less everything from the NU
projects. I notice that the things from CMU that lived the longest were
things that were given away without any restrictions... but I digress.
So... coming back to the UNIX side of the world. Pederson’s work would
create UCB’s ‘industrial liaison office’, which was the group that released
the original ‘Berkeley Software Distribution’ for UNIX (*a.k.a.* BSD).
They had a 10-year history of ‘giving away’ free software before UNIX came
along. They gave their UNIX code to anyone that asked for it. You just had
to prove you had a license from AT&T, but again, anyone could get that.
i.e. it was 'open source.'
moved to coff
On Tue, Feb 18, 2020 at 4:29 PM Wesley Parish <wobblygong(a)gmail.com> wrote:
> I don't recall ever seeing "open source" used as a description of the
> Unix "ecosystem" during the 90s.
>
Yes, that's my point. The term 'open' meant it was published and openly
available, so anyone could use it, i.e. UNIX.
Remember the Spec 1170 work of the 1990s -- define the 1170 interfaces
and >>publish<< them so anyone could write code to them.
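(A tiny sketch of my own to illustrate "write code to the published interfaces": it uses only calls that the published UNIX/POSIX interfaces define (pipe, fork, dup2, execvp, waitpid), so it should run unchanged on any conforming system.)

    # Toy illustration: program only against the published interface.
    import os

    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                    # child: run "ls -l" with stdout on the pipe
        os.close(r)
        os.dup2(w, 1)
        os.execvp("ls", ["ls", "-l"])
    else:                           # parent: read what the child wrote
        os.close(w)
        with os.fdopen(r) as f:
            listing = f.read()
        os.waitpid(pid, 0)
        print(len(listing.splitlines()), "entries")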
> It was in the air with the (minimal) charges Prentice-Hall charged for
> the Minix 0.x and 1.x disks and source; not dissimilar in that sense
> to the charges the FSF were charging for their tapes at the time.
>
Right... there were fees to write magtapes (or floppies)
Which comes back to my point... 'open source' was not a new idea. The whole
community is standing on the shoulders of the UNIX ecosystem that really
started to take off in the 1970s and 1980s. But the 'free' part goes back
even before UNIX.
We stood on the shoulders of things before us. There just was not (yet)
a name for what we were doing.
As Ted said, I'll give the Debian folks credit for naming it, but the idea
really, really goes back to the early manufacturers and the early community.
FSF was a reaction to the manufacturers taking away something that some
people thought was their 'birthright.'
One last reply here, but CCing COFF where this thread really belongs...
On Thu, Feb 13, 2020 at 12:34 PM Timothe Litt <litt(a)ieee.org> wrote:
> OTOH, and probably more consistent with your experience, card equipment was
> almost unheard of when the DEC HW ran Unix...
>
You're probably right about that, Tim, but my DEC world was mostly
TOPS/TENEX/ITS and UNIX. You would think otherwise, since a huge use of UNIX
systems was as RJE stations for IBM gear at AT&T; in fact, that was one of
the 'justifications' for PWB. I'm thinking of the machine rooms I saw in
MH, WH and IH, much less at DEC, Tektronix or in my university time. It's
funny, I do remember a lot of work to emulate card images and arguments
about the proper character set conversions, but I just don't remember seeing
actual card readers or punches on the PDP-11s, only on the IBM, Univac and
CDC systems.
As other people have pointed out, I'm sure they must have been around, but
my world did not have them.
> From: Clem Cole
> I just don't remember seeing actual card readers or punches on the
> PDP-11s
I'm not sure DEC _had_ a card punch for the PDP-11s. Readers, yes, the CR11:
https://gunkies.org/wiki/CR11_Card_Readers
but I don't think they had a punch (although there was one for the PDP-10
family, the CP10).
I think the CR11 must have been _relatively_ common, based on how many
readers and CR11 controller cards survive. Maybe not in computer science
installations, though... :-)
Noel