First please continue this discussion on COFF (which has been CC'ed).
While Fortran is interesting to many, it is not a UNIX topic per se.
Also, as I have noted in other places, I work for Intel - these comments
are my own and I'm not trying to sell you anything. Just passing on 45+
years of programming experience.
On Mon, Feb 24, 2020 at 10:34 AM Adam Thornton <athornton(a)gmail.com> wrote:
> I would think that FORTRAN is likelier to be passed around as folk wisdom
> and ancient PIs (uh, Primary Investigators, not the detective kind)
> thrusting a dog-eared FORTRAN IV manual at their new grad students and
> snarling "RTFM!" than as actual college courses.
>
FWIW: I was at CMU last week recruiting. Fortran, even at a leading CS
place like CMU, is hardly "folk wisdom". All the science PhDs (Chem, Mat
Sci, Bio, Physics) that I interviewed knew and used Fortran (and listed it
on their CVs) as the primary language for their science.
As I've quipped before, Fortran pays my salary (and a lot of other people's
in the industry). Check out https://www.archer.ac.uk/status/codes/ :
Fortran is about 90% of the codes running there (FWIW: I have seen similar
statistics from other large HPC sites - you'll need to poke around).
While I do not write in it, I believe there are three reasons why these
statistics are true and *going to be true for a very long time*:
1. The math being used has not changed. Just open up the codes and look
at what they are doing. You will find that they are all solving
systems of partial differential equations using linear algebra (see the
movie "Hidden Figures").
2. 50-75 years of data sets with known qualities and programs to work
with them. If you were able to replace the codes magically with something
'better' (from MATLAB to Julia or Python to Java), all their data would
have to be requalified (it is like the QWERTY keyboard - that ship
sailed years ago).
3. The *scientists want to do their science*, for their work, to get their
degree or prize. The computer and its programs *are a tool* for them to
look at data *to do their science*. They don't care, as long as they
get their work done.
Besides Adam's mention of flang, there is, of course, gfortran; but there
are also commercial compilers available for use: "Qualify for Free Software
| Intel® Software"
<https://software.intel.com/en-us/articles/qualify-for-free-software>. I
believe PGI offers something similar, but I have not checked in a while.
Most 'production' codes use a real compiler like Intel's, PGI's or Cray's.
FWIW: the largest number of LLVM developers is at Intel now. IMO,
while flang is cute, it will be a toy for a while, as the LLVM IL really
cannot handle Fortran easily. There is a huge project to put a number of
the learnings from the DEC GEM compilers into LLVM, and one piece is gutting
the internal IL and making it work for parallel architectures. The >>hope<<
of many of my peeps (still unproven) is that at some point the FOSS world
will produce a compiler as good as GEM or the current Intel icc/ifort set.
(Hence, Intel is forced to support 3 different compiler technologies
internally in the technical languages group.)
Seen on one of our local "seminars" lists recently...
> Emerging hardware, such as non-volatile main memory (NVMM) [...]
> changes the way system software should be designed and implemented,
> because they are not just an enhanced version of existing devices,
> but provide new qualitative features and software interfaces.
Core store, mutter, mutter.
We used to regularly restart machines which had been turned off for a
while, and they would happily pick up where they left off. One PDP-8 was
happy to resume after several years of idleness.
Sorry, had to send that, mutter...
--
George D M Ross MSc PhD CEng MBCS CITP
University of Edinburgh, School of Informatics,
Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE
Mail: gdmr(a)inf.ed.ac.uk Voice: 0131 650 5147
PGP: 1024D/AD758CC5 B91E D430 1E0D 5883 EF6A 426C B676 5C2B AD75 8CC5
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
[ Moved to COFF ]
On Wed, 19 Feb 2020, Richard Salz wrote:
> He's a loon. Search for the techdirt.com articles.
>
> > On Wed, Feb 19, 2020, 7:07 PM Ed Carp <erc(a)pobox.com> wrote:
> > I've noticed that some guy named Dr. Shiva Ayyadurai is all over
> > Twitter, claiming that he is the inventor of email. He doesn't
> > look like he's nearly old enough. I thought it was Ray
> > Tomlinson. Looks like he's trying to create some press for his
> > Senate run.
> >
> > Anyone older that me here that can either confirm or deny?
> > Thanks!
Back when I was posting "On this day" events, I had this for Ray
Tomlinson for 23rd April:
Ray Tomlinson, computer pioneer, was born on this day in 1941. He is
credited with inventing this weird thing called "email" on the
ARPAnet, in particular the "@" sign to designate a remote host
(although some jerk -- his name is not important -- is claiming that
he was first).
-- Dave
Moving to COFF where this belongs…
Here is my basic issue. I'm not 'blind' as Larry says. I lived it and I
try to acknowledge who did what and why if I can. I try to remember that we
got here by a path, and that path was hardly straight; but you don't get to
join the convoy late and then say: hey, the journey began someplace else.
@FLAME(on)
Open/Free/Available/Access – whatever you want to call it – did not just pop
up in the late 1980s with the Free Software Foundation or in the 90s with
the Linux Foundation *et al.* The facts are that in the early years, a
computer customer got everything, including schematics, from the CPU
manufacturer.
The 'culture' described in Levy's 1984 book "Hackers" took off
at MIT, Stanford, CMU, *et al.* because everything *was available* and
people *shared things because it saved us all time and trouble*. In fact,
the name of the IBM user group was just that: SHARE.
IBM published patches to the OS or their compilers, in source, as
'PTFs' (Program Temporary Fixes). Each site might have modified things a
little (or a lot), so you got the PTF tape and looked at how the patch
affected you. That was my first job: supporting York APL/360 on TSS. (CMU
had TSS working before IBM did, so a lot of PTFs from IBM were things we
had already dealt with.)
Certainly, when I started programming in the late 1960s, the idea of
proprietary SW had been around, but it was still somewhat constrained to
the commercial side (Banking/Insurance *etc.* – where the real money
was). The research and university community (which of course DEC was
heavily part of) was very much "we are all in this together." Still,
everyone had sources, and we moved things back and forth via mag tape at
DECUS conferences or eventually over the ARPAnet.
At some point that started to change. Doug, Ken and others older than I
can probably tell you more than I can about that transition. But the
vendors started to lock up more and more of their IP. A user no longer
got a mag tape with the sources, and you did not do a full system
generation. The end users/customers only got parts of the system; the rest
was binaries. Unless you paid huge fees, the source at best was available
on microfiche, and often you lacked important things needed to recreate the
binaries. Thus the concept of the closed or proprietary system started
to become the norm, which it had not been previously.
I remember, since CMU had VAX Serial #1 and a 'special' relationship with
DEC, we had VMS sources. One spring/summer we were doing a consulting job
(moving ISPS to the VAX for the Israeli government), and that was where I
realized they only had the code on fiche, and CMU was 'different.'
But here is the interesting thing: as the vendors started becoming less and
less 'open', *AT&T was required by the 1956 consent decree to be 'open' and
license its IP* on 'fair and reasonable terms' to all interested parties.
(Which they did, and the world got the transistor and UNIX as two of the
best examples.) So AT&T's UNIX behavior was the opposite of what the
hardware manufacturers were doing at the time!
The argument comes back to a few basic issues: what is 'fair and
reasonable', and who gets to decide what is made available? As the
creators of some content started to close access to the 'secret sauce', a
tension could and did start to build between the creators and some users.
BTW, the other important thing to remember is that you needed a $100K-$250K
hunk of HW from DEC to use that 'open' IP from AT&T, and *the hardware
acquisition was the barrier to entry*, not the cost of the SW.
Folks, those of us that lived it know: UNIX was 100% open. Anyone could get
a license for it. The technologies that AT&T developed were also published
in the open literature, detailing how they were made and how they worked.
They did this originally because they were bound by the US Gov due to a
case that started in 1949 and was settled with that 1956 decree! The folks
at AT&T were extremely free to talk about it, and they did give away what
they had. The 'sauce' was never secret (and thus AT&T would famously lose
its case when they later tried to put the cat back in the bag in the AT&T
*vs.* UCB/BSDi case).
The key is that during the PDP-11 and Vaxen times, the UNIX community all
had licenses, commercial or university. But soon the microprocessor
appeared, we started to form new firms, and with those sources we created a
new industry, the *Open Systems Industry*, with an organization called
/usr/group. This was all in the early 1980s (before the FSF, much less
Linux). What was different here was that *we could all share* with other
licensees (and anyone could get a license if they >>wanted<< it).
But something interesting happened. These new commercial Open Systems folk
won the war with the proprietary vendors. They were still competing with
the old guard, and they competed against each other (surprise, surprise – some
were the same folks who had been competing against each other previously;
now they were just using somewhat standard ammunition – UNIX and a cheap
processor).
Moreover, the new people with the UNIX technology (Sun, DEC, HP, Masscomp,
IBM *et al.*) started to handle their own versions of UNIX just like they
handled their previous codes. They wanted to protect it.
And this is where the famous 'fair and reasonable' comes in. Who gets to
set what is fair? Certainly, a $150 fee to recover the cost of writing the
magtape (the IP was really free) seemed fair at the time – particularly
since you had to 'cons up' another $150K for that PDP-11.
Stallman, in particular, wants to go back to the old days, where he got
access to everything and had his playground. To add insult to injury, he
was at the time fighting the same war over some of MIT's ideas and the LISP
machine world. So his answer was to try to rewrite everything from scratch
and then try to give it away/get people to use it, but add a funny clause
that said you have to give it to anyone else that asked for it. He still
has a license; he just has different rules (I'll not get into whether this
is fair or reasonable – but those were the rules the FSF made). BTW: that
only works if you have something valuable (more in a minute).
Moore's law starts driving the cost of the hardware down, and at some
point the computer to run UNIX costs $50K, then $10K, $5K, and even $1K. So
now the fees that AT&T is charging the commercial side are, it can be argued
(as Larry and others have argued so well), no longer 'reasonable.'
At some point, FSF's movement (IMO, after they got a compiler that was
'good enough' and that worked on 'enough' target ISAs) starts to take off. I
think this is the real 'Christensen disruption'. GCC was not as good as
Masscomp's or Sun's compilers for the 68k, or DEC's for the VAX, but it was
free. As I recall, Sun was charging for its compiler at the time (we did
manage to beat back the ex-DEC marketing types at Masscomp, and the C
compiler was free; Fortran and Pascal cost $s).
Even though gcc is not as good, it's good enough and people love it, so it
builds a new market (and gets better and better as more people invest in it
-- see Christensen's theory for why).
But this is at least 5 years *after* the Open Systems community had been
birthed. Sorry guys -- the term had been in use for a while to mean the
>>UNIX<< community with its open interfaces and sharing of code. BTW: Linux
itself would not happen for another 5 years after that, and it was a couple
more years before the Linux Foundation, much less the community that has
grown around it.
But that's my point… Please, at least here in the historic mailing lists,
start to admit and be proud that we are standing on people's shoulders, and
>>stop<< trying to step on people's toes.
The current FOSS movement is just that – Free and Open. That’s cool –
that’s great. But recognize it started long before FSF or Linux or any of
that.
For a different time: the person I think who should really be credited with
the start of the FOSS movement as we know it is the late Prof. Don
Pederson. In the late 1960s, he famously gave away his first ECAD program
from UCB (which I believe was called MOTIS – and would later beget SPICE).
As 'dop' used to tell his students (like me) back in the day: '*I always
give away my sources, because that way I go in the back door and get to
see everything* at IBM/HP/Tektronix/AT&T/DEC, etc. *If I license and
sell our code, I have to go in the front door like any other salesman.*'
For the record, a few years later my other alma mater (CMU) was notorious
for licensing its work -- hence the Scribe debacle of the late 1970s and
much of the CMU SPICE project/Andrew results of the early 1980s - while MIT
gave everything away in Athena and, even more so, everything from the NU
projects. I notice that the things that lived the longest from CMU were
things that were given away without any restrictions... but I digress.
So... coming back to the UNIX side of the world. Pederson's work would
create UCB's 'Industrial Liaison Office', which was the group that released
the original 'Berkeley Software Distribution' for UNIX (*a.k.a.* BSD).
They had a 10-year history of 'giving away' free software before UNIX came
along. They gave their UNIX code to anyone that asked for it. You just had
to prove you had a license from AT&T, but again, anyone could get that.
I.e., it was 'open source.'
moved to coff
On Tue, Feb 18, 2020 at 4:29 PM Wesley Parish <wobblygong(a)gmail.com> wrote:
> I don't recall ever seeing "open source" used as a description of the
> Unix "ecosystem" during the 90s.
>
Yes, that's my point. The term 'open' meant it was published and openly
available so anyone could use it, i.e. UNIX.
Remember the Spec 1170 work of the 1990s -- define the 1170 interfaces
and >>publish<< them so anyone could write code to them.
> It was in the air with the (minimal) charges Prentice-Hall charged for
> the Minix 0.x and 1.x disks and source; not dissimilar in that sense
> to the charges the FSF were charging for their tapes at the time.
>
Right... there were fees to write magtapes (or floppies).
Which comes back to my point... the concept of 'open source' was not new.
The whole community is standing on the shoulders of the UNIX ecosystem that
really started to take off in the 1970s and 1980s. But the 'free' part was
there even before UNIX.
We stood on the shoulders of things before us. There just was not (yet)
a name for what we were doing.
As Ted said, I'll give the Debian folks credit for naming it, but the idea
really, really goes back to the early manufacturers and the early community.
FSF was a reaction to the manufacturers taking away something that some
people thought was their 'birthright.'
One last reply here, but CCing COFF where this thread really belongs...
On Thu, Feb 13, 2020 at 12:34 PM Timothe Litt <litt(a)ieee.org> wrote:
> OTOH, and probably more consistent with your experience, card equipment was
>
> almost unheard of when the DEC HW ran Unix...
>
You're probably right about that, Tim, but my DEC world was mostly
TOPS/TENEX/ITS and UNIX. You would think there would have been card gear,
though, since a huge use of UNIX systems was as RJE front ends for IBM gear
at AT&T; in fact, that was one of the 'justifications' for PWB. I'm
thinking of the machine rooms I saw in MH, WH and IH, much less DEC,
Tektronix or my university time. It's funny, I do remember a lot of work to
emulate card images and arguments about the proper character-set
conversions, but I just don't remember seeing actual card readers or
punches on the PDP-11s, only on the IBM, Univac and CDC systems.
As other people have pointed out, I'm sure they must have been around, but
my world did not have them.
> From: Clem Cole
> I just don't remember seeing actual card readers or punches on the
> PDP-11s
I'm not sure DEC _had_ a card punch for the PDP11's. Readers, yes, the CR11:
https://gunkies.org/wiki/CR11_Card_Readers
but I don't think they had a punch (although there was one for the PDP-10
family, the CP10).
I think the CR11 must have been _relatively_ common, based on how many
readers and CR11 controller cards survive. Maybe not in computer science
installations, though... :-)
Noel
On Sunday, 9 February 2020 at 22:09:47 -0800, jason-tuhs(a)shalott.net wrote:
>
>>> All, I've also set this up to try out for the video chats:
>>> https://meet.tuhs.org/COFF
>>> Password to join is "unix" at the moment.
>
>> Just tried it out. On FreeBSD I get a blank grey screen. I could
>> only get something more on a Microsoft box, not quite what I'd want to
>> do. Is there some trick?
>
> * Install /usr/ports/net-im/jitsi. (Comment out the BROKEN line from the
> Makefile and "make install" should work as usual; the source can actually
> be fetched just fine...)
In fact, the package was indeed unfetchable, but the ports collection
had a cached version, which is what you got. But now I've brought it
up to date (only 2 years old rather than 4).
> * kldload cuse
>
> * Run firefox and surf to that URL.
I haven't found that necessary. In fact, installing jitsi doesn't
seem to be necessary. All you need is a more recent browser than the
antiques I was running.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA
All, I've also set this up to try out for the video chats:
https://meet.tuhs.org/COFF
Password to join is "unix" at the moment.
I just want to test it to confirm that it works; I'll be heading
out the door to go to the shops soon.
Cheers, Warren
Jon Steinhart wrote in
<202002120012.01C0CpEC3910426(a)darkstar.fourwinds.com>:
|Steffen Nurpmeso writes:
|> Of course you are right, you will likely need to focus your mind,
|> and that requires an intellectual context, knowledge, to base upon.
|
|Interesting that you mention this as I'm about to leave for a multi-day
|advanced yoga workshop. One of the things that I like about yoga is that
Then I wish you a good time, and a deep breath!
|you do have to learn to focus your mind, and it's amazingly difficult to
|be focused on something as seemingly simple as standing up straight. I
|don't think that it's reasonable to expect people to be able to focus
|without training. Can you imagine if a computer tried to follow all of
|your fleeting thoughts?
I feel clearly overrated. The last time I had such fleeting thoughts, I
know when it was: no good, I "collapsed with overflow" like Falken's
computer in WarGames. But I have the impression that "the only winning
move is not to play" is not very hip.
|In some respects, this takes me back to the early days of speech
|recognition. I remember people enthusiastically telling me how it would
|solve the problem of repetitive stress injuries. They were surprised when
|I pointed out that most people who use their voice in their work actually
|take vocal training; RSIs are not uncommon among performers.
|
|So really, what problem are we trying to solve here? I would claim that
|the problem is signal-to-noise ratio degradation that's a result of too
|many people "learning to code" who have never learned to think. Much like
|I feel that it became harder to find good music when MIDI was invented
|because there was all of a sudden a lot more noise masquerading as music.
I am chewing on that one. You were lucky to have lived in times
with great classical music artists as well as a tremendous flurry
of styles, ideas, etc. otherwise. In the 60s and 70s and even the
first half of the 80s so much happened, and not only in music.
Just take psychological treatment: before, there were lobotomies and
electric shocks, and learned people stood on that ground solid
as rocks, but then it exploded.
Today the situation is really bad. And that "everyone is an
artist" was surely as naive as "everyone shall learn coding".
But I have spent long hours in MIDI piano rolls, and I think you
are right. Unfortunately.
|I'm reminded of a Usenix panel session that I moderated on the future of
|window systems a long time ago. Rob was on the panel as was some guy whose
|name I can't remember from Silicon Graphics. The highlight of the
|presentation was when Robin asked the question "So, if I understand what
|the SGI person is saying, it doesn't matter how ugly your shirt is, you
|can always cover it up with a nice jacket...." While she was asking the
|question Rob anticipated the rest of the question and started unbuttoning
|his shirt.
|
|So maybe I'm just an old-school minimalist, but I think that the biggest
|problem that needs solving is good low-level abstractions that are simple
|and work and don't have to be papered over with layer upon layer on top of
|them. I just find myself without the patience to learn all of the magic
|incantations of the package of the week.
I like that.
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
> From: Clem Cole
> Noel's email has much wisdom. New is not necessarily better and old
> fashioned is not always a bad thing.
For those confused by the reference, it's to an email that didn't go to the
whole list (I was not sure if people would be interested):
>> One of my favourite sayings (original source unknown; I saw it in
>> "Shockwave Rider"): "There are two kinds of fool. One says 'This is
>> old, and therefore good'; the other says 'This is new, and therefore
>> better'."
Noel
moving to COFF
On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
> My general mood about the current standard way of nerd working is how
> unimaginative and old-fashioned it feels.
>
...
>
> But I'm a grumpy old man and getting far off topic. Warren should cry,
> "enough!".
>
> -rob
>
@Rob - I hear you, and I'm sure there is a solid amount of wisdom in your
words. But I caution that just because something is old-fashioned does
not necessarily make it wrong (much less bad).
I ask you to take a look at the statistics of code running in production at
Archer (a large HPC site in Europe):
http://archer.ac.uk/status/codes/
I think there are similar stats available for places like CERN, LRZ, and
the US labs, but I know of these so I point to them.
Please note that Fortran is #1 (about 80%) followed by C @ about 10%, C++ @
8%, Python @ 1% and all the others at 1%.
Why is that? The math has not changed ... and open up any of those codes
and what do you see: solving systems of differential equations with linear
algebra. It's the same math that human 'computers' did by hand in the 1950s.
There are no 'tensor flows' or ML searches running Spark in there. Sorry,
Google/AWS et al. Nothing 'modern' and fresh -- just solid simple science
being done by scientists who don't care about the computer or sexy new
computer languages.
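
To make that concrete, here is a minimal sketch (in Python/NumPy, purely
illustrative and not taken from any production code) of the pattern those
codes embody: discretize a differential equation, then hand the resulting
linear system to a solver. The equation, grid size and right-hand side are
all invented for the example.

    # Illustrative sketch only: solve the 1D Poisson problem -u''(x) = f(x)
    # on (0, 1) with u(0) = u(1) = 0 by finite differences plus linear algebra.
    # This is the shape of the computation, not any real HPC code.
    import numpy as np

    n = 99                        # number of interior grid points
    h = 1.0 / (n + 1)             # grid spacing
    x = np.linspace(h, 1.0 - h, n)
    f = np.sin(np.pi * x)         # arbitrary right-hand side

    # Tridiagonal second-difference matrix A, so that A @ u approximates -u''.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    u = np.linalg.solve(A, f)     # the "linear algebra" step

    # For this f the exact solution is sin(pi*x)/pi**2; check the error.
    print("max error:", np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))

The production codes on that list do the same kind of thing, in Fortran, at
vastly larger scale and with far more sophisticated discretizations and
solvers.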
IIRC, you trained as a physicist, so I think you understand their thinking. *They
care about getting their science done.*
By the way, a related thought comes from a good friend of mine from college
who used to be the Chief Metallurgist for the US Gov (NIST in Colorado).
He's back in the private sector now (because he could not stomach current
American politics), but he made an important observation/comment to me a
couple of years ago. They have 60+ years of metallurgical data that he
and his peeps have been using with known Fortran codes. If we gave him
new versions of those analytical programs now, in your favorite new HLL -
pick one: Go (which I love), C++ (which I loathe), DPC++, Rust, Python -
whatever, the scientists would have to reconfirm previous results. They
are not going to do that. It's not economical. They 'know' how the data
works, the types of errors they have, how the programs behave, *etc.*
So to me, the bottom line is that just because it's old-fashioned does not
make it bad. I don't want to write an OS in Fortran 2018, but I can write a
system that supports code compiled with my sexy new Fortran 2018 compiler.
That is to say, the challenge for >>me<< is to build him a new
supercomputer that can run those codes for him, not change what they are
doing, and have them scale to 1M nodes, *etc.*
Took this to coff since it's really hardware and non-Unix...
On 2/8/20 1:59 PM, Noel Chiappa wrote:
> > From: Dave Horsfall<dave(a)horsfall.org>
>
> > [ Getting into COFF territory, I think ]
>
> In all fairness, the entire field didn't really appreciate the metastability
> issue until the LINC guys at WUSTL did a big investigation of it, and then
> started a big campaign to educate everyone about it - it wasn't DEC being
> particularly clueless.
>
>
> > Hey, if the DEC marketoids didn't want 3rd-party UNIBUS implementations
> > then why was it published?
>
> Well, exactly - but it's useful to remember the differing situation for DEC
> from 1970 (first PDP-11's) and later.
>
> In 1970 DEC was mostly selling to scientists/engineers, who wanted to hook up
> to some lab equipment they'd built, and OEM's, who often wanted to use a mini
> to control some value-added gear of their own devising. An open bus was really
> necessary for those markets. Which is why the 1970 PDP-11/20 manual goes into
> a lot of detail on how to interface to the PDP-11's UNIBUS.
>
> Later, of course, they were in a different business model.
>
> Noel
My old Field Service memory is that DEC never really went after Unibus
interfaces, and the spec was open. It was connections to the big old
Massbus, for things like tapes and disks, that they kept closed and used
patent protection on, along with the SBI and the later VAX BI bus. DEC
was the only maker of the BIIC chip for the VAXBI, and they wouldn't sell
it to competitors...
Braegan (may be a spelling error) made interfaces to connect Calcomp
hard disks to the PDP-11s on a Massbus. IIRC they were shut down hard
with legal action. I had a customer with a Unisys (formerly RCA)
Spectra 70 system that had Braegan Calcomp drives with an Eatontown, NJ
based Diva Disk controller. My tech school instructor, in his pre-DEC
career, worked for Diva Disk as an engineer.
Systems Industries (later EMC) cloned the Massbus adapter on the SBI
bus and didn't directly share the bus or controller with DEC-sold disk
drives, so the SI-9400 showed up on DEC 11/780s (and I think they had an
11/70 controller as well). DEC, IIRC, went after them about using
the SBI backplane interconnect.
A Google search turned up this note about EMC memory boards in Vaxes, but
it also mentions DEC patent suits against people who used the Massbus. I
don't remember that for Unibus devices like the controllers from Emulex
and others (until they tried to deal with the VAX BI bus -- DEC chip
only -- or the MSCP disk subsystems).
Like you say, different time, different business model. Many inside DEC
wanted them to OEM-sell VAX chips like they did the PDP-11 LSI/F11/J11
chips. There are a number of DECcies who feel that attitude came over
with the influx of IBMers and others who came to DEC in the VAX period
to sell into the data centers.
They were really protecting the "family-, er, crown jewels" back then, to
the company's detriment.
Old Computerworld and Datamation adverts along with PR releases are what
I find when searching, unfortunately. Here's a suit against EMC --
which cloned DEC memory products and interfaced to the SBI 11/78x bus.
https://books.google.com/books?id=0sNDKMzgG8gC&pg=RA1-PA70&dq=DEC%2BMassbus…
Along with the DIVA Computroller V there's another picture at the left
of the page with a different emulating controller.
Here's a legal CDC 9766 (I think) on a Plessey controller that plugged
into an RH70 but didn't use the actual DEC Massbus (probably the CDC A
and B SMD cables... Storage Module Device, IIRC):
https://books.google.com/books?id=-Nentjp6qSMC&pg=RA1-PA66&dq=eatontown,+nj…
DEC even took the Emulex controllers on service contract in the late 80's.
Bill
[x-posting to COFF]
Idea: anybody interested in a regular video chat? I was thinking of
one that progresses(*) through three different timezones (Asia/Aus/NZ,
then the Americas, then Europe/Africa) so that everybody should be
able to get to two of the three timezones.
(* like a progressive dinner)
30-60 minutes each one, general old computing. Perhaps a guest speaker
now and then with a short presentation. Perhaps a theme now and then.
Perhaps just chew the fat, shoot the breeze as well.
Platform: Zoom or I'd be happy to set up a private Jitsi instance.
Something else?
How often: perhaps weekly or fortnightly through the three timezones,
so it would cycle back every three or six weeks.
Comments, suggestions?!
Cheers, Warren
Moving to COFF to avoid the wrath of wkt.
On Friday, 7 February 2020 at 18:54:33 -0500, Richard Salz wrote:
> BDS C stood for Brain-Damaged Software, it was the work of one guy (Leor
> Zolman). I think it was used to build the Mark of the Unicorn stuff
> (MINCE, Mince is not complete emacs, and Scribble, a scribe clone).
Correct. That's how I came in contact with it (and Emacs, for that
matter).
Greg
On Saturday, 8 February 2020 at 9:37:22 +1100, Dave Horsfall wrote:
> On Fri, 7 Feb 2020, Greg 'groggy' Lehey wrote:
>
>> But over the years I've been surprised how many people have been fooled.
>
> I'm sure that we've all pulled pranks like that. My favourite was piping
> the output of "man" (a shell script on that system) through "Valley Girl"
> (where each "!" was followed e.g. by "Gag me with a spoon!" etc).
>
> Well, $BOSS came into the office after a "heavy" night, and did something
> like "man uucp", not quite figuring out what was wrong; I was summoned
> shortly afterwards, as I was the only possible culprit...
That brings back another recollection, not Unix-related.
In about 1978 I was getting fed up with the lack of clear text error
messages from Tandem's Guardian operating system. A typical message
might be
FILE SYSTEM ERROR 011
Yes, Tandem didn't use leading 0 to indicate octal. This basically
meant ENOENT, but it was all that the end user saw. By chance I had
been hacking in the binaries and found ways to catch such messages and
put them through a function which converted them into clear text
messages. For reasons that no longer make sense to me, I stored the
texts in an external file, which required a program to update it.
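In today's terms, the idea amounts to a tiny lookup layer. A hypothetical
sketch in Python (the file name, format and wording are invented for
illustration, and have nothing to do with the actual Guardian internals)
would look something like this:

    # Hypothetical sketch: map numeric file-system error codes to clear
    # text kept in an external file, instead of showing the raw number.
    # The file format here is invented: each line is "<code> <message>".
    ERROR_TEXT_FILE = "error_texts.txt"   # e.g. a line reading "11 File not found"

    def load_error_texts(path=ERROR_TEXT_FILE):
        texts = {}
        with open(path) as fh:
            for line in fh:
                code, _, text = line.strip().partition(" ")
                if code.isdigit():
                    texts[int(code)] = text
        return texts

    def explain(code, texts):
        # Fall back to the old-style numeric message if the code is unknown.
        return texts.get(code, "FILE SYSTEM ERROR %03d" % code)

    # texts = load_error_texts()
    # print(explain(11, texts))    # -> whatever the external file says for 11

Keeping the texts in an editable external file is, of course, exactly what
made the prank below possible.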
Early one morning I was playing around with this, and for the fun of
it I changed the text for error 11 from "File not found" to "Please
enter FUP PURGE ! *" (effectively rm -f *).
I was still giggling about this when the project manager came to me
and said "Mr. Lehey, I think I've done something silly".
Thank God for backups! We were in a big IBM shop, and the operators
religiously ran a backup every night. Nothing lost.
Greg
On Fri, 7 Feb 2020, Rudi Blom wrote:
>>> Regarding NASA's Tidbinbilla Tracking Station, someone suggested to me
>>> they might have had MODCOMPs
>>
>> Dunno about Tidbinbilla, but Parkes ("The Dish") has a roomful of Linux
>> boxen; I didn't have time to enquire further.
>
> The question was
>
> "Does anyone on this list know anyone who worked at a tracking station
> during the 60s and 70s? They might be able to help fill in the details."
>
> Maybe MODCOMP, but at THAT time for sure no Linux.
I didn't say there was.... Where did you get that idea?
-- Dave
>From: Dave Horsfall <dave(a)horsfall.org>
>To: Computer Old Farts Followers <coff(a)tuhs.org>
>Date: Fri, 7 Feb 2020 07:04:43 +1100 (EST)
>Subject: Re: [COFF] How much Fortran?
>On Thu, 6 Feb 2020, Rudi Blom wrote:
>
>>Regarding NASA's Tidbinbilla Tracking Station, someone suggested to me they might have had MODCOMPs
>
>Dunno about Tidbinbilla, but Parkes ("The Dish") has a roomful of Linux boxen; I didn't have time to enquire further.
>-- Dave
The question was
"Does anyone on this list know anyone who worked at a tracking station
during the 60s and 70s? They might be able to help fill in the
details."
Maybe MODCOMP, but at THAT time for sure no Linux.
Cheers
Regarding NASA's Tidbinbilla Tracking Station, someone suggested to me
they might have had MODCOMPs:
https://en.wikipedia.org/wiki/MODCOMP
Cheers,
uncle rubl
===========
From: Wesley Parish <wobblygong(a)gmail.com>
To: Computer Old Farts Followers <coff(a)tuhs.org>
Date: Tue, 4 Feb 2020 14:25:25 +1300
Subject: Re: [COFF] How much Fortran?
My thoughts exactly. I was once lucky enough to visit NASA's
Tidbinbilla Tracking Station in the ACT just a few miles out of
Canberra c. 1976 or 77, and they had some sizeable minicomputers in
their computer room. (How many I don't know.) I imagine they would've
been used to record the transmissions on tape and do some preliminary
processing, before sending the tapes to NASA HQ in the States for
storage and further analysis.
I think what NASA did with their early probes would've made Real
Programmers (TM) sit up and gasp. :)
Does anyone on this list know anyone who worked at a tracking station
during the 60s and 70s? They might be able to help fill in the
details.
Wesley Parish
I recall reading in an old manpage that the (patented) set-uid bit was to
solve the MOO problem. I've searched around, but cannot find anything
relevant. Anyone know?
-- Dave
I despair of getting an attachment through. Let's hope a link survives.
https://www.dropbox.com/s/2lfmrkp34j9z68n/Huffman.jpg?dl=0
The NYC Math Museum (MoMath) had/has an origami exhibit. Seems David
Huffman was interested in origami as well as compression.
=> coff since it's non-Unix
On 2020-Jan-22 13:42:44 -0500, Noel Chiappa <jnc(a)mercury.lcs.mit.edu> wrote:
>Pretty interesting machine, if you study its instruction set, BTW; with no
>stack, subroutines are 'interesting'.
"no stack" was fairly standard amongst early computers. Note the the IBM
S/360 doesn't have a stack..
The usual approach to subroutines was to use some boilerplate as part of the
"call" or function prologue that stashed the return address in a known
location (storing it in the word before the function entry, or patching the
"return" branch, were common approaches). Of course this made recursion
"hard" (re-entrancy typically wasn't a requirement), and Fortran and Cobol (at
least of that vintage) normally didn't support recursion for that reason.
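A toy sketch of why that breaks recursion, in Python (the routine name and
the "addresses" are invented; this models the single-link-word convention
rather than any particular machine): each subroutine has exactly one slot
for its return address, so a recursive call overwrites the outer call's
saved return.

    # Toy model of "one return-address slot per subroutine" linkage.
    # The slot is a single global, standing in for the fixed memory word
    # the calling sequence would have stored the return address into.
    sub_return_slot = None

    def sub(caller_addr, depth):
        global sub_return_slot
        sub_return_slot = caller_addr     # "store return address" on entry
        if depth > 0:
            sub(9999, depth - 1)          # recursive call clobbers the slot
        # "return": branch back to whoever sub_return_slot now names
        print("depth %d returns to %d" % (depth, sub_return_slot))

    sub(1000, 2)   # outer caller is 1000, yet every level "returns" to 9999

A stack gives each activation its own slot, which is what later hardware
and languages added; without it, Fortran and Cobol compilers of that era
simply disallowed recursion.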
--
Peter Jeremy
Moving to COFF
On Tue, Jan 21, 2020 at 12:53 PM Jon Forrest <nobozo(a)gmail.com> wrote:
> As I remember the Z8000 was going to be the great white hope that
> would continue Zilog's success with the Z80 into modern times.
> But, it obviously didn't happen.
>
> Why?
>
A really good question. I will offer my opinion as someone that lived
through the times.
The two contemporary chips of that time were the Intel 8086 and Z8000.
Certainly, between those two, the Zilog chip was a better chip from a
software standpoint. The funny part was that Moto had been pushing the
6809 against those two. The story of the IBM/PC and Moto is infamous.
Remember, the 68K was a skunkworks project and was not something they were
talking about.
Why IBM picked the 8086/8088 over the Z8000 I do not know. I'm >>guessing<< total
system cost and maybe vendor preference. The team that developed the PC
had been using the 8085 for another project, so the jump of vendors to
Zilog would have been more difficult (Moto and IBM corporate had been tight
for years; MECL was designed by Moto for IBM for the System/360). I do
know they picked Intel over the 6809, even though they had the X-series
device in Yorktown (just like we had it at Tektronix) and had wanted to use
what would become the 68000.
In the end, other than Forest's scheme, none of them could do VM without a
lot of help. If I had not known about the X-series chip (or had been given
a couple of them), I think Roger and I would have used the Z8000 for
Magnolia. But I know Roger and I liked it better, as did most of our
peeps in Tek Labs at the time. IIRC, our thinking was that the Z8000 had an
"almost" good enough instruction set, but since many of the processor's
addressing modes were missing on some/most of the instructions, it made
writing a compiler more difficult (Bill Wulf used to describe this as an
'irregular' instruction set). And even though the address space was large,
you still had to contend with a non-linear segmented address scheme.
So I think once the 68000 came on the scene for real, it had the advantage
of the best instruction set (all instructions worked as expected; it was
symmetric) and looked pretty darned near like a PDP-11. The large linear
address space was a huge win, and even though it was built as a 16-bit chip
internally (i.e. a 16-bit barrel shifter, needing 2 ticks for all 32-bit
ops), all the registers were defined as 32 bits wide. I think we saw it
as becoming a full 32-bit device the soonest and with the least issues.
Hi
This isn't precisely Unix-related, but I'm wondering about the Third
Ronnie's SDI's embedded systems. Is there anyone alive who knows just
what they were? I'm also wondering, since the "Star Wars" program
seemed to go off the boil at the end of the "Cold War", and the
embedded systems were made with the US taxpayer's dollar, whether or
not they are now public domain - since iirc, US federal law mandates
that anything made with the taxpayer's dollar is owned by the taxpayer
and is thus in the public domain. I'm wondering about starting a
Freedom of Information request to find all of that out, but I don't
quite know how to go about it. (FWVLIW, I'm a fan of outer space
exploration (and commercial use) and a trove of realtime, embedded
source code dealing with satellites would be a treasure indeed. It'd
raise the bar and lower the cost of entry into that market.)
Also, more Unixy: what was the status of the POSIX realtime standards at
the time, and what precise relation did they have to Unix?
Thanks
Wesley Parish