On Thu, 19 Mar 2020, Mike Markowski wrote:
>> I've been using my trusty HP-42S for so long that I can hardly remember
>> how to use a "normal" calculator :-)
>
> When my classmate's calculator died during an engineering exam, he asked
> if he could borrow my spare. I handed him my HP 32s and after a minute
> he whispered, "Where's the equals key?" He gave my calculator back.
> :-)
I did that to a financial controller in a previous life; she was not
amused... Hey, it was the only calculator that I had! I could see her
helplessly looking for the "=" key, then I took pity on her.
-- Dave
+COFF
On 3/20/20 8:03 AM, Noel Chiappa wrote:
> Maybe I'm being clueless/over-asking, but to me it's appalling that
> any college student (at least all who have _any_ math requirement at
> all; not sure how many that is) doesn't know how an RPN calculator
> works.
I'm sure that there are some people, though maybe not in the corpus you
mention, who have zero clue how an RPN calculator works.  But I would
expect anybody with a little gumption to be able to poke a few buttons
and probably figure out the basic operation, or ask if they are genuinely
confused.
> It's not exactly rocket science, and any reasonably intelligent
> high-schooler should get it extremely quickly; just tell them it's
> just a representational thing, number number operator instead of
> number operator number.
I agree that RPN is not rocket science.  And for basic single-operation
equations, I think that it's largely interchangeable with infix notation.
However, my experience is that as the number of operations goes up, RPN
can become more difficult to use.  This is likely a mental shortcoming on
my part.  But it is something that does take noticeable mental effort for
me to do.
For example, let's start with the Pythagorean theorem:
a² + b² = c²
This is relatively easy to enter in infix notation on a typical
scientific calculator.
However, I have to stop and think about how to enter this on an RPN
calculator. I'll take a swing at this, but I might get it wrong, and I
don't have anything handy to test at the moment.
[a] [enter]
[a] [enter]
[multiply]
[b] [enter]
[b] [enter]
[multiply]
[add]
[square root] # to solve for c
(12 keys)
Conversely, infix notation for comparison (on most algebraic calculators
you need [equals] before taking the final square root):
[a]
[square]
[plus]
[b]
[square]
[equals]
[square root]
(7 keys)
As I type this, I realize that I'm using a higher-order operation
(square) in infix that I am not using in RPN.  But that probably speaks
to my ignorance of RPN.
I also realize that this equation does a poor job of demonstrating what
I'm trying to convey -- or perhaps what I'm trying to convey is
incorrect.  I had to arrange the different sub-parts of the equation so
that their results ended up together on the stack for them to be the
targets of the operation.  I believe this (re)arrangement of the
equation is where most of my mental load / objection comes from with
RPN.  I feel like I have to process the equation before I can tell the
calculator to compute the result for me.  I don't feel like I have this
burden with infix notation.
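For what it's worth, one cheap way to check my keystrokes is dc(1), the
RPN calculator that ships with Unix.  A minimal sketch with a=3 and b=4
(in dc, 'v' is square root, 'p' prints the top of the stack, and 'd'
duplicates it):

   $ echo '3 3 * 4 4 * + v p' | dc
   5
   $ echo '3 d* 4 d* + v p' | dc
   5

The second form uses 'd*' (duplicate, then multiply) as the RPN analog of
the [square] key, which closes most of the keystroke gap above.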
Aside: I firmly believe that computers are supposed to do our bidding,
not the other way around. s/computers/calculators/
> I know it's not a key intellectual skill, but it does seem to me to
> be part of common intellectual heritage that everyone should know,
> like musical scales or poetry rhyming. Have you ever considered
> taking two minutes (literally!) to cover it briefly, just 'someone
> tried to borrow my RPN calculator, here's the basic idea of how they
> work'?
I'm confident that 80% of people, and more of the corpus you describe,
could use an RPN calculator to do simple equations.  But I would not be
surprised if many found the re-arrangement of equations into RPN-friendly
form enough of a burden that they would simply forgo the RPN calculator
for simpler arithmetic operations.
I think some of it is a mental trade-off: which has more mental load,
doing the annoying arithmetic or re-arranging the equation to use RPN?
I believe that for the simpler arithmetic operations, RPN is going to be
more difficult.
All of this being said, I'd love to have someone lay out points and / or
counterpoints to my understanding.
--
Grant. . . .
unix || die
Moving to COFF ...
On Fri, Mar 20, 2020 at 1:24 PM Grant Taylor via TUHS <tuhs(a)minnie.tuhs.org>
wrote:
> Would you humor me with an example of what you mean by "thinking on the
> fly"? Either I'm not understanding you or we think differently.
>
I'll take a stab at it in a minute.
But first, I never cared either way.  In college, I had an SR-50 and my GF
had an HP-45.  I would say that among my EE friends we were probably split
50/50 between TI and HP.  Generally, the RPN-centric crew were fiercely
loyal, as in the editor wars, but everyone would grab whichever calculator
was nearest when we all were working a problem set; and I knew a couple of
folks that hated RPN too.
It's possible it was because of my then-undiagnosed dyslexia, but I would
grab the closest calculator, pause to see which it was, and then start
entering things as needed.  But like Jon -- if I had the TI in my hands, I
found myself copying the equation, trying to pay attention to which
button I was pressing to check for any keystroke entry errors.  Both types
had all of the same math functions, so there was little difference in the
number of strokes, other than not needing parentheses on the HP, and in
how you entered the calculation.  With the HP, I was more aware of the
equation I was calculating, because I had to make sure I entered it in the
proper order to get the right answer.  In my case, I was probably a tad
more careful because I was being forced to think in terms of precedence -
but I was thinking about the equation.  Whereas with the TI I was just
hitting the buttons per the equation on the paper.  I typed a tad faster
on the TI than on the HP because I was not thinking as much, but ... I
probably made more typing errors there because I thought less about what
I was doing.
Clem
Aside: I'm sending this reply to TUHS, where the message that I'm
replying to came from.  But I suspect that it should migrate to COFF,
which I'm CCing.
On 3/20/20 5:48 AM, paul(a)guertin.net wrote:
> I teach math in college, and I use an RPN calculator as well (it's
> just easier).
Would you please elaborate on "it's just easier"?
I'm asking from a point of genuine curiosity.  I've heard many say that
RPN is easier, or that it takes fewer keys, or is otherwise superior to
infix notation.  But many of the conversations end up devolving into
religious-war-like comments about preferences, despite starting with
honest, open-minded intentions.  (I hope this one doesn't similarly devolve.)
I've heard that there are fewer keys to press for RPN, but the example
equations presented have been effectively the same.
I've heard that RPN is mentally easier.  But I apparently don't know
enough RPN to think in it natively and evaluate that for myself.
I dabble with RPN, including keeping my main calculator app on my smart
phone in RPN mode.
So I am genuinely interested in understanding why you say that RPN is
just easier.
> Sometimes, during an exam, a student who forgot to bring their
> calculator will ask if they can borrow mine. I always say "sure, but
> you'll regret it" and hand them the calculator. After wasting one or
> two minutes, they give it back.
~chuckle~
> (Note that I always make sure no calculator is needed for my exams,
> but it's department policy to authorise non programmable calculators,
> and it seems to reassure students to have the calculator on the desk,
> so I don't mind.)
ACK
--
Grant. . . .
unix || die
As many of you may be aware, Bruce D. Evans <bde(a)freebsd.org> died in
mid-December. I am currently looking through his digital estate on
behalf of his family and the FreeBSD Project.
I have discovered that he kept an extensive collection of 5¼" floppy
disks.  I haven't looked through them, but they appear to include
things like OS-9 and Hitachi Peach files (and presumably Minix stuff,
though I haven't found any of his Minix work).  He also has a
selection of newsletters from an Australian Peach users group.  Is
there any interest in this material from a historical perspective?
--
Peter Jeremy
On 3/8/20 9:39 PM, Warner Losh wrote:
> floppy controller supports the full range of crazy that once roamed
> the earth
Does anyone have any knee-jerk reaction to the idea of putting a 5¼"
floppy drive on a USB-to-Floppy (nominally 3½") adapter?
Do I want to avoid tilting at this windmill?
Am I better off installing the 5¼" floppy inside the computer and
connecting directly to the motherboard?
I only want to pull files off of 5¼" disks.  At most I'll want to
dd the disks to an image.
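The imaging step itself should just be something like this (a sketch,
assuming the drive shows up as /dev/fd0 -- the device name is a guess and
will vary by OS and by adapter):

   dd if=/dev/fd0 of=disk1.img bs=512 conv=noerror,sync

where conv=noerror,sync tells dd to keep reading past bad sectors and to
pad short reads, so the image stays sector-aligned.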
That being said, I wonder if I should also be collecting any different
types of images. (This may mean mobo instead of USB.)
Thank you for any pro-tips that you can provide.
--
Grant. . . .
unix || die
Moving to COFF ....
On Tue, Mar 17, 2020 at 10:58 AM Larry McVoy <lm(a)mcvoy.com> wrote:
> As much as I don't care for Forth, man do I wish it had become the standard
> for boot proms, it might not be my cup of tea but I could make it do what
> I needed it to do.
Amen bro... Sun did a nice job on that.  Although the Alpha boot ROMs
were pretty good too.  At least they were UNIX-like and were extensible like
the Sun boot ROMs.  HP's were better than a PC BIOS, but they were pretty
awful.
> Can't say the same for UEFI, I disable that crap.
>
Well, it beats the crap out of IBM's BIOS, but that bar is very low. UEFI
was sort of a 'camel' (a horse designed by committee) and too many people
peed on it. Intel created EFI to try to fix BIOS and then people went
nuts. Apple's version is the best of them, but as you say, they all suck
if you have seen anything better. A big problem IMO is that EFI tried to
be somewhat compatible. In the end, they were not, so you got the worst of
both (new interfaces and legacy functionality).
Server systems that support IPMI have Minix under the covers in a
coprocessor; using a coprocessor is also how Apple runs UEFI.  With
IPMI, it is sort of sad that more of it is not really exposed, but you need
the added cost of the coprocessor.  Plus it adds a new security domain,
which many people complain about.  I try to know as little about it as
possible to get my work done, but exposing more of that interface might help.
Hi all, I'm looking for an interactive tool to help students learn the
Unix/Linux command line. I still remember the "learn" tool. Is there an
equivalent for current systems?
I have tried to forward-port the old learn sources to current Linux but
my patience ran out :-)
Thanks in advance for any tips/pointers.
Cheers, Warren
Given the recent discussion of pipes and networking ... I'm passing this
along for those that might not have seen it.
---------- Forwarded message ---------
From: Jack Haverty via Internet-history
Date: Tue, Mar 10, 2020 at 1:30 PM
Subject: Re: [ih] NCP and TCP implementations
To: *Internet-History*
The first TCP implementation for Unix was done in PDP-11 assembly
language, running on a PDP-11/40 (with way too little memory or address
space). It was built using code fragments excerpted from the LSI-11
TCP implementation provided by Jim Mathis, which ran under SRI's
home-built OS. Jim's TCP was all written in PDP-11 assembler. The code
was cross-compiled (assembled) on a PDP-10 Tenex system, and downloaded
over a TTY line to the PDP-11/40. That was much easier and faster than
doing all the implementation work on the PDP-11.
The code architecture involved putting TCP itself at the user level,
communicating with its "customers" using Unix InterProcess
Communications facilities (Rand "Ports"). It would have been
preferable to implement TCP within the Unix kernel, but there was simply
not enough room due to the limited address space available on the 11/40
model. Later implementations of TCP, on larger machines with twice the
address space, were done in the kernel. In addition to the Berkeley BSD
work, I remember Gurwitz, Wingfield, Nemeth, and others working on TCP
implementation for the PDP-11/70 and Vax.
The initial Unix TCP implementation was for TCP version 2 (2.5 IIRC), as
was Jim's LSI-11 code. This 2.5 implementation was one of the players
in the first "TCP Bakeoff" organized by Jon Postel and carried out on a
weekend at ISI before the quarterly Internet meeting. The PDP-11/40 TCP
was modified extensively over the next year or so as TCP advanced
through 2.5, 2.5+, 3, and eventually stabilized at TCP4 (which it seems
we still have today, 40+ years later!)
The Unix TCP implementation required a small addition to the Unix kernel
code, to add the "await" and "capac" system calls. Those calls were
necessary to enable the implementation of user-level code where the
traditional Unix "pipeline" model of programming
(input->process->process...->output) was inadequate for use in
multi-computer programming (such as FTP, Telnet, etc., - anywhere where
more than one computer was involved).
The code to add those new system calls was written in C, as was almost
all of the Unix OS itself. The new system calls added the functionality
of "non-blocking I/O" which did not previously exist. It involved very
few lines of code, since there wasn't room for very many more
instructions, and even so it required finding more space by shortening
many of the kernel error messages to save a few bytes here and there.
Randy Rettberg and I did that work, struggling to understand how Unix
kernel internals worked, since neither of us had ever worked with Unix
before even as a user. We did not try to "get it right" by making
significant changes to the basic Unix architecture. That came later
with the Berkeley and Gurwitz efforts. The PDP-11/40 was simply too
constrained to support such changes, and our mission was to get TCP
support on the machine, rather than develop the OS.
I think I speak authoritatively here, since I wrote and debugged that
first Unix TCP code. I still have an old, yellowing listing of that
first Unix TCP.
FWIW, if there's interest in why certain languages were chosen, there's
a very simple explanation of why the Unix implementation was done in
assembler rather than C, the native language of Unix. First, Jim
Mathis' code was in assembler, so it was easy to extract large chunks
and paste them into the Unix assembler implementation. Second, and
probably most important, was that I was very accustomed to writing
assembler code and working at the processor instruction level. But I
barely knew C existed, and was certainly not proficient in it, and we
needed the TCP working fast for use in other projects. The choice was
very pragmatic, not based at all on technical issues of languages or
superiority of any architecture.
/Jack Haverty
On 3/9/20 11:14 PM, vinton cerf via Internet-history wrote:
> Steve Kirsch asks in what languages NCP and TCP were written.
>
> Stanford's first TCP implementation was done in BCPL by Richard Karp.
> Another version was written for PDP-11/23 by Jim Mathis but not clear in
> what language. Tenex was probably done in C at BBN. Was 360 done in PL/1??
> Dave Clark did one for IBM PC (assembly language/??)
>
> Other recollections much appreciated.
>
> vint
--
Internet-history mailing list
Internet-history(a)elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history
I started using 'cpio' in the '80s and still use it, especially for
transferring files and complete directories between various UNIX
versions like SCO UNIX 3.2V4.2, Tru64, and HP-UX 11i.
The main option I use with cpio is (of course) "-c", and only occasionally "-u".
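For reference, a typical pair of invocations looks like this (a sketch;
option details vary a little between cpio implementations):

   find . -depth -print | cpio -oc > /tmp/tree.cpio
   cpio -icdu < /tmp/tree.cpio

-o/-i select copy-out/copy-in, -c writes the portable ASCII header format
(which is what makes the archives readable across all those UNIX
versions), -d creates directories as needed, and -u overwrites existing
files unconditionally.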
Hi,
while exploring the gopherspace (YES! Still existing,
growing community) I found this gopher page:
gopher://pdp11.tk/1
which can be reached with Lynx for example.
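For example (assuming your Lynx build was compiled with gopher support):

   lynx gopher://pdp11.tk/1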
Unfortunately I cannot evaluate the items there, but
maybe it is worth a look by someone knowledgeable.
Cheers!
mcc
I work at an astronomy facility. I get to do some fun dumpster diving.
I recently have pulled out of the trash a plugboard with a male and a
female D-Sub 52 connector. 3 rows of pins, 17-18-17. I took the
connectors off the board: there's nothing back there, so this thing only
ever existed so you could plug the random cable you found into it and its
friends to see what the cable fit.
I can't find much evidence that a 52-pin D-Sub ever existed.
Is this just Yet Another Physics Experiment thing where, hey, if your
instrument already costs three million dollars, what's a couple of grand
for machining custom connectors? Or was it once a thing?
(also posting to cc-talk)
Adam
Not UNIX, not 52-pin, but old, old and serial
Your mission, should you choose to accept it, is to save data from a
computer that should have died aeons ago
...
Tap into the serial line – what could be simpler?
Alas, the TI was smart enough to spot the absence of the rattly old
beast ("the software wouldn't print without some of the seldom-used
serial control lines functioning," explained Aaron) so the customer
was asked to bring in the printer as well.
https://www.theregister.co.uk/2020/02/24/who_me/
---------- Forwarded message ---------
From: Adam Thornton <athornton(a)gmail.com>
Date: Tue, Feb 11, 2020 at 5:56 PM
Subject: Re: [COFF] Old and Tradition was [TUHS] V9 shell
To: Clem Cole <clemc(a)ccc.com>
As someone working in this area right now ... yeah, and that’s why
traditional HPC centers do not deliver the services we want for projects
like the Vera Rubin Observatory’s Legacy Survey of Space and Time.
Almost all of our scientist-facing code is Python, though a lot of the
performance-critical stuff is implemented in C++ with Python bindings.
The incumbents are great at running data centers like they did in 1993.
That’s not the best fit for our current workload. It’s not generally the
compute that needs to be super-fast: it’s access to arbitrary slices
through really large data sets that is the interesting problem for us.
That’s not to say that LFortran isn’t cool.  It really, really is, and
Ondřej Čertík has done amazing work in making modern Fortran run in a
Jupyter notebook, and the implications (LLVM becomes the ImageMagick of
compiled languages) are astounding.
But...HPC is no longer the cutting edge. We are seeing a Kuhnian paradigm
shift in action, and, sure, the old guys (and they are overwhelmingly guys)
who have tenure and get the big grants will never give up FORTRAN which
after all was good enough for their grandpappy and therefore good enough
for them. But they will retire. Scaling out is way way cheaper than
scaling up.
On Tue, Feb 11, 2020 at 11:41 AM Clem Cole <clemc(a)ccc.com> wrote:
> moving to COFF
>
> On Tue, Feb 11, 2020 at 5:00 AM Rob Pike <robpike(a)gmail.com> wrote:
>
>> My general mood about the current standard way of nerd working is how
>> unimaginative and old-fashioned it feels.
>>
> ...
>>
>> But I'm a grumpy old man and getting far off topic. Warren should cry,
>> "enough!".
>>
>> -rob
>>
>
> @Rob - I hear you and I'm sure there is a solid amount of wisdom in your
> words.  But I caution that just because something is old-fashioned does
> not necessarily make it wrong (much less bad).
>
> I ask you to take a look at the Archer statistics on code running in
> production (Archer is a large HPC site in Europe):
> http://archer.ac.uk/status/codes/
>
> I think there are similar stats available for places like CERN, LRZ, and
> some of the US labs, but I know of these, so I point to them.
>
> Please note that Fortran is #1 (about 80%) followed by C @ about 10%,
> C++ @ 8%, Python @ 1% and all the others at 1%.
>
> Why is that?  The math has not changed ... open up any of those codes
> and what do you see: solving systems of differential equations with linear
> algebra.  It's the same math my mother did by hand as a 'computer' in the 1950s.
>
> There are no 'tensor flows' or ML searches running Spark in there.  Sorry,
> Google/AWS et al.  Nothing 'modern' and fresh -- just solid simple science
> being done by scientists who don't care about the computer or sexy new
> computer languages.
>
> IIRC, you trained as a physicist, so I think you understand their thinking. *They
> care about getting their science done.*
>
> By the way, a related thought comes from a good friend of mine from
> college who used to be the Chief Metallurgist for the US Gov (NIST in
> Colorado).  He's back in the private sector now (because he could not
> stomach current American politics), but he made an important
> observation/comment to me a couple of years ago.  They have 60+ years of
> metallurgical data that he and his peeps have been using with known
> Fortran codes.  If we gave him new versions of those analytical programs
> now in your favorite new HLL - pick one - your Go (which I love), C++
> (which I loathe), DPC++, Rust, Python - whatever, the scientists would have
> to reconfirm previous results.  They are not going to do that.  It's not
> economical.  They 'know' how the data works, the types of errors they
> have, how the programs behave, *etc*.
>
> So to me, the bottom line is that just because it's old-fashioned does not
> make it bad.  I don't want to write an OS in Fortran-2018, but I can build
> a system that supports code compiled with my sexy new Fortran-2018 compiler.
>
> That is to say, the challenge for >>me<< is to build him a new
> supercomputer that can run those codes for him, not change what they are
> doing, and have them scale to 1M nodes, *etc*.
>
> _______________________________________________
> COFF mailing list
> COFF(a)minnie.tuhs.org
> https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff
>
First, please continue this discussion on COFF (which has been CC'ed).
While Fortran is interesting to many, it is not a UNIX topic per se.
Also, as I have noted in other places, I work for Intel - these comments
are my own and I'm not trying to sell you anything. Just passing on 45+
years of programming experience.
On Mon, Feb 24, 2020 at 10:34 AM Adam Thornton <athornton(a)gmail.com> wrote:
> I would think that FORTRAN is likelier to be passed around as folk wisdom
> and ancient PIs (uh, Principal Investigators, not the detective kind)
> thrusting a dog-eared FORTRAN IV manual at their new grad students and
> snarling "RTFM!" than as actual college courses.
>
FWIW: I was at CMU last week recruiting.  Fortran, even at a leading CS
place like CMU, is hardly "folk wisdom".  All the science PhDs (Chem, Mat
Sci, Bio, Physics) that I interviewed knew and used Fortran (and listed
it on their CVs) as the primary language for their science.
As I've quipped before, Fortran pays my salary (and a lot of other
people's in the industry).  Check out:
https://www.archer.ac.uk/status/codes/  Fortran is about 90% of the codes
running there (FWIW: I have seen similar statistics from other large HPC
sites - you'll need to poke).
While I do not write in it, I believe there are three reasons why these
statistics are true and *going to be true for a very long time*:
   1. The math being used has not changed.  Just open up the codes and look
   at what they are doing.  You will find that they are all solving
   systems of partial differential equations using linear algebra (-- see
   the movie: "Hidden Figures").
   2. 50-75 years of data sets with known qualities, and programs to work
   with them.  If you were able to replace the codes magically with
   something 'better' (from MATLAB to Julia or Python to Java), all their
   data would have to be requalified (it is like the QWERTY keyboard - that
   ship sailed years ago).
   3. The *scientists want to do their science* to get their degree or
   prize.  The computer and its programs *are a tool* for them to look at
   data *to do their science*.  They don't care, as long as they get their
   work done.
Besides Adam's mention of flang, there is, of course, gfortran; but there
are also commercial compilers available for use: Qualify for Free Software
| Intel® Software
<https://software.intel.com/en-us/articles/qualify-for-free-software>  I
believe PGI offers something similar, but I have not checked in a while.
Most 'production' codes use a real compiler like Intel's, PGI's or Cray's.
FWIW: the largest number of LLVM developers are at Intel now.  IMO,
while flang is cute, it will be a toy for a while, as the LLVM IL really
cannot handle Fortran easily.  There is a huge project to put a number of
the learnings from the DEC GEM compilers into LLVM, and one piece is
gutting the internal IL and making it work for parallel architectures.
The >>hope<< of many of my peeps (still unproven) is that at some point
the FOSS world will produce a compiler as good as GEM or the current Intel
icc/ifort set.  (Hence, Intel is forced to support 3 different compiler
technologies internally in the technical languages group.)
Seen on one of our local "seminars" lists recently...
> Emerging hardware, such as non-volatile main memory (NVMM) [...]
> changes the way system software should be designed and implemented,
> because they are not just an enhanced version of existing devices,
> but provide new qualitative features and software interfaces.
Core store, mutter, mutter.
We used to regularly restart machines which had been turned off for a
while, and they would happily pick up where they left off. One PDP-8 was
happy to resume after several years of idleness.
Sorry, had to send that, mutter...
--
George D M Ross MSc PhD CEng MBCS CITP
University of Edinburgh, School of Informatics,
Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE
Mail: gdmr(a)inf.ed.ac.uk Voice: 0131 650 5147
PGP: 1024D/AD758CC5 B91E D430 1E0D 5883 EF6A 426C B676 5C2B AD75 8CC5
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
[ Moved to COFF ]
On Wed, 19 Feb 2020, Richard Salz wrote:
> He's a loon. Search for the techdirt.com articles.
>
> > On Wed, Feb 19, 2020, 7:07 PM Ed Carp <erc(a)pobox.com> wrote:
> > I've noticed that some guy named Dr. Shiva Ayyadurai is all over
> > Twitter, claiming that he is the inventor of email. He doesn't
> > look like he's nearly old enough. I thought it was Ray
> > Tomlinson. Looks like he's trying to create some press for his
> > Senate run.
> >
> > Anyone older than me here that can either confirm or deny?
> > Thanks!
Back when I was posting "On this day" events, I had this for Ray
Tomlinson for 23rd April:
Ray Tomlinson, computer pioneer, was born on this day in 1941. He is
credited with inventing this weird thing called "email" on the
ARPAnet, in particular the "@" sign to designate a remote host
(although some jerk -- his name is not important -- is claiming that
he was first).
-- Dave
Moving to COFF where this belongs…
Here is my basic issue.  I'm not 'blind', as Larry says.  I lived it, and I
try to acknowledge who did what and why if I can.  I try to remember that
we got here by a path, and that path was hardly straight; but you don't get
to join the convoy late and then say -- hey, the journey began someplace
else.
@FLAME(on)
Open/Free/Available/Access – whatever you want to call it, did not just pop
up in the late 1980s with the Free Software Foundation, or in the 90s with
the Linux Foundation *et al*.  The facts are that in the early years, a
computer customer got everything, including schematics, from the CPU
manufacturer.
The ‘culture’ described in Levy’s 1984 book “Hackers” took off
at MIT, Stanford, CMU, *et al.* because everything *was available* and
people *shared things because it saved us all time and trouble*.  In fact,
the name of the IBM user group was just that: SHARE.
IBM published patches to the OS or their compilers, in source, as
'PTFs' - 'program temporary fixes'.  Each site might have modified things a
little (or a lot), so you got the PTF tape and looked at how the patch
affected you.  That was my first job, supporting York APL/360 on TSS.  (CMU
had TSS working before IBM did, so a lot of PTFs from IBM would be things
we had already dealt with.)
Certainly, when I started programming in the late 1960s, the idea of
proprietary SW had been around, but it was still somewhat constrained to
the commercial side (banking/insurance *etc*... - the parts where the real
money was).  The research and university community (which of course DEC
was heavily a part of) was very much "we are all in this together".
Everyone still had sources, and we moved things back and forth via mag tape
at DECUS conferences or, eventually, the ARPAnet.
At some point that started to change.  Doug, Ken and others older than I am
can probably tell you more than I can about that transition.  But the
vendors started to lock up more and more of their IP.  A user no longer
got a mag tape with the sources, and you did not do a full system
generation.  The end users/customers only got parts of the system; the
rest was binaries.  Unless you paid huge fees, the source at best was
available on microfiche, and often you lacked important things needed to
recreate the binaries.  Thus the closed or proprietary system started to
become the norm, where it had not been previously.
I remember, since CMU had VAX Serial #1 and a 'special' relationship with
DEC, we had VMS sources.  One spring/summer we were doing a consulting job
(moving ISPS to the VAX for the Israeli Government), and that was where I
realized they only had the code on fiche, and CMU was 'different.'
But here is the interesting thing: as the vendors started becoming less and
less 'open', *AT&T was required by the 1956 consent decree to be 'open' and
license its IP* for ‘fair and reasonable terms’ to all interested parties.
(Which they did, and the world got the transistor and UNIX as two of the
best examples.)  So AT&T's UNIX behavior was the opposite of what the
hardware manufacturers were doing at the time!!!
The argument comes back to a few basic issues: what is ‘fair and
reasonable’, and ‘who gets to decide’ what is made available.  As the
creators of some content started to close access to the ‘secret sauce’, a
tension could and did start to build between the creators and some users.
BTW, the other important thing to remember is that you needed a $100K-$250K
hunk of HW from DEC to use that ‘open’ IP from AT&T, and *the hardware
acquisition was the barrier to entry*, not the cost of the SW.
Folks, those of us that lived it know: UNIX was 100% open.  Anyone could
get a license for it.  The technologies that AT&T developed were also
published in the open literature, detailing how they were made and how they
worked.  They did this originally because they were bound by the US Gov due
to a case that started in 1949 and was settled with that 1956 decree!  The
folks at AT&T were extremely free to talk about it, and they did give away
what they had.  The ‘sauce’ was never secret (and thus AT&T would famously
lose when they later tried to put the cat back in the bag in the AT&T
*vs.* UCB/BSDi case).
The key is that during the PDP-11 and Vaxen times, the UNIX community all
had licenses, commercial or university.  But soon the microprocessor
appeared, we started to form new firms, and with those sources we created a
new industry, the *Open Systems Industry*, with an organization called
/usr/group.  This is all in the early 1980s (before FSF, much less Linux).
What was different here was that *we could all share* between other
licensees (and anyone could get a license if they >>wanted<< it).
But something interesting happened.  These new commercial Open Systems folk
won the war with the proprietary vendors.  They were still competing with
the old guard, and they competed against each other (surprise/surprise –
some were the same folks who had been competing against each other
previously; now they just were using somewhat standard ammunition – UNIX
and a cheap processor).
Moreover, the new people with the UNIX technology (Sun, DEC, HP, Masscomp,
IBM *et al*) started to handle their own versions of UNIX just like they
handled their previous codes.  They wanted to protect it.
And this is where the famous 'fair and reasonable' comes in.  Who gets to
set what is fair?  Certainly, a $150 fee to recover the cost of writing the
magtape (the IP was really free) seemed fair at the time – particularly
since you had to ‘cons up’ another $150K for that PDP-11.
Stallman, in particular, wanted to go back to the old days, where he got
access to everything and he had his playground.  To add insult to injury,
he was at the time fighting the same war over some of MIT's ideas and the
LISP machine world.  So his answer was to try to rewrite everything from
scratch and then try to give it away / get people to use it, but with a
funny clause added that said you have to give it to anyone else that asks
for it.  He still has a license, he just has different rules (I’ll not open
the question of whether this is fair or reasonable – but those were the
rules FSF made).  BTW: that only works if you have something valuable (more
in a minute).
Moore’s law starts driving the cost of the hardware down, and at some
point the computer to run UNIX costs $50K, then $10K, $5K, and even $1K.
So now the fees that AT&T is charging the commercial side, it can be argued
(as Larry and others have so well), are no longer ‘reasonable.’
At some point, FSF’s movement (IMO – after they got a compiler that was
‘good enough’ and that worked on ‘enough’ target ISAs) starts to take off.
I think this is the real 'Christensen disruption'.  GCC was not as good as
Masscomp's or Sun's compilers for the 68k, or DEC's for the VAX, but it was
free.  As I recall, Sun was charging for its compiler at the time (we did
manage to beat back the ex-DEC marketing types at Masscomp, and the C
compiler was free; Fortran and Pascal cost $s).
Even though gcc was not as good, it was good enough and people loved it, so
it built a new market (and got better and better as more people invested in
it -- see Christensen's theory for why).
But this was at least 5 years *after* the Open Systems Community had been
birthed.  Sorry guys -- the term had been in use for a while to mean the
>>UNIX<< community, with its open interfaces and sharing of code.  BTW:
Linux itself would not happen for another 5 years after that, and it was a
couple more years before the Linux Foundation, much less the community that
has grown around it.
But that’s my point… Please, at least here in the historic mailing lists,
start to admit and be proud that we are standing on people's shoulders, and
>>stop<< trying to step on people’s toes.
The current FOSS movement is just that – Free and Open. That’s cool –
that’s great. But recognize it started long before FSF or Linux or any of
that.
For a different time: the person I think should really be credited as
the start of the FOSS movement as we know it is the late Prof. Don
Pederson.  In the late 1960s, he famously gave away his first ECAD program
from UCB (which I believe was called MOTIS – and would later beget SPICE).
As ‘dop’ used to tell his students (like me) back in the day: ‘*I always
give away my sources, because that way I go in the back door and get to
see everything at IBM/HP/Tektronix/AT&T/DEC etc.  If I license and sell
our code, I have to go in the front door like any other salesman.*’
For the record, a few years later my other alma mater (CMU) was notorious
for licensing its work -- hence the SCRIBE debacle of the late 1970s and
much of the CMU SPICE project/Andrew results of the early 1980s - while MIT
gave everything away in Athena and more or less everything from the NU
projects.  I notice that the things that lived the longest from CMU were
things that were given away without any restrictions... but I digress.
So... coming back to the UNIX side of the world.  Pederson’s work would
create UCB’s ‘industrial liaison office’, which was the group that released
the original ‘Berkeley Software Distribution’ for UNIX (*a.k.a.* BSD).
They had a 10-year history of ‘giving away’ free software before UNIX came
along.  They gave their UNIX code to anyone that asked for it.  You just
had to prove you had a license from AT&T, but again, anyone could get that.
I.e., it was 'open source.'
moved to coff
On Tue, Feb 18, 2020 at 4:29 PM Wesley Parish <wobblygong(a)gmail.com> wrote:
> I don't recall ever seeing "open source" used as a description of the
> Unix "ecosystem" during the 90s.
>
Yes, that's my point.  The term 'open' meant it was published, openly
available, anyone could use it ... i.e. UNIX.
Remember the Spec 1170 work in the 1990s -- define the 1170 interfaces
and >>publish<< them so anyone could write code to them.
> It was in the air with the (minimal) charges Prentice-Hall charged for
> the Minix 0.x and 1.x disks and source; not dissimilar in that sense
> to the charges the FSF were charging for their tapes at the time.
>
Right... there were fees to write magtapes (or floppies).
Which comes back to my point... 'open source' was not a new idea.  The
whole community is standing on the shoulders of the UNIX ecosystem that
really started to take off in the 1970s and 1980s.  But the 'free' part
came from even before UNIX.
We stood on the shoulders of things before us.  There just was not (yet)
a name for what we were doing.
As Ted said, I'll give the Debian folks credit for naming it, but the idea
really, really goes back to the early manufacturers and the early community.
FSF was a reaction to the manufacturers taking away something that some
people thought was their 'birthright.'
One last reply here, but CCing COFF where this thread really belongs...
On Thu, Feb 13, 2020 at 12:34 PM Timothe Litt <litt(a)ieee.org> wrote:
> OTOH, and probably more consistent with your experience, card equipment
> was almost unheard of when the DEC HW ran Unix...
>
You're probably right about that, Tim, but the DEC world was mostly
TOPS/TENEX/ITS and UNIX.  You would think cards would have been common,
though, since a huge usage of UNIX systems was as RJE front ends for IBM
gear at AT&T; in fact, that was one of the 'justifications' for PWB.  I'm
thinking of the machine rooms I saw in MH, WH and IH, much less at DEC,
Tektronix or in my university time.  It's funny, I do remember a lot of
work to emulate card images, and arguments about the proper character set
conversions, but I just don't remember seeing actual card readers or
punches on the PDP-11s, only on the IBM, Univac and CDC systems.
As other people have pointed out, I'm sure they must have been around, but
my world did not have them.
> From: Clem Cole
> I just don't remember seeing actual card readers or punches on the
> PDP-11s
I'm not sure DEC _had_ a card punch for the PDP11's. Readers, yes, the CR11:
https://gunkies.org/wiki/CR11_Card_Readers
but I don't think they had a punch (although there was one for the PDP-10
family, the CP10).
I think the CR11 must have been _relatively_ common, based on how many
readers and CR11 controller cards survive. Maybe not in computer science
installations, though... :-)
Noel
On Sunday, 9 February 2020 at 22:09:47 -0800, jason-tuhs(a)shalott.net wrote:
>
>>> All, I've also set this up to try out for the video chats:
>>> https://meet.tuhs.org/COFF
>>> Password to join is "unix" at the moment.
>
>> Just tried it out. On FreeBSD I get a blank grey screen. I could
>> only get something more on a Microsoft box, not quite what I'd want to
>> do. Is there some trick?
>
> * Install /usr/ports/net-im/jitsi. (Comment out the BROKEN line from the
> Makefile and "make install" should work as usual; the source can actually
> be fetched just fine...)
In fact, the package was indeed unfetchable, but the ports collection
had a cached version, which is what you got. But now I've brought it
up to date (only 2 years old rather than 4).
> * kldload cuse
>
> * Run firefox and surf to that URL.
I haven't found that necessary. In fact, installing jitsi doesn't
seem to be necessary. All you need is a more recent browser than the
antiques I was running.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA