While the idea of small tools that do one job well is the core tenet of
what I think of as the UNIX philosophy, this goes a bit beyond UNIX, so I
have moved this discussion to COFF and BCCing TUHS for now.
The key is that not all "bloat" is the same (really) - or maybe one person's
bloat is another person's preference. That said, NIH leads to pure bloat
with little to recommend it, while multiple offerings are a choice. Maybe
the difference between the two is just one person's view over another's.
On Fri, May 10, 2024 at 6:08 AM Rob Pike <robpike@gmail.com> wrote:
> Didn't recognize the command, looked it up. Sigh.
>
Like Rob -- this was a new one for me, too.
I looked, and it is on the SYS3 tape; see:
https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/man/man1/nl.1
> pr -tn <file>
>
> seems sufficient for me, but then that raises the question of your
> question.
>
Agreed, that has been burned into the ROMs in my fingers since the
mid-1970s 😀
BTW: SYS3 has pr(1) with both switches too (more in a minute)
> I've been developing a theory about how the existence of something leads
> to things being added to it that you didn't need at all and only thought of
> when the original thing was created.
>
That is a good point, and I generally agree with you.
> Bloat by example, if you will. I suspect it will not be a popular theory,
> however accurately it may describe the technological world.
>
Of course, sometimes the new features >>are<< easier (more natural *for
some people*). And herein lies the core problem. The bloat is often
repetitive, and I suggest that it is often implemented in the wrong place -
and usually for the wrong reasons.
Bloat comes about because somebody thinks they need some feature and
probably doesn't understand that it is already there or how they can use
it. But even if they do know about it, their tool must be set up to exploit
it - so they reinvent it anyway. GUI-based tools are notorious for this
failure. Everyone seems to have a built-in (unique) editor, or a private
way to set up configuration options et al. But ... that walled garden is
comfortable for many users and >>can be<< useful sometimes.
Long ago, UNIX programmers learned that looking for $EDITOR in the
environment was way better than creating one. Configuration was ASCII
text, stored in /etc for system-wide settings and in dot files in the home
directory for users.
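The whole convention fits in a few lines of shell. A hypothetical sketch (the tool name "mytool" and the fallback to vi are my inventions, not any standard):

```shell
#!/bin/sh
# Honor the user's $EDITOR instead of shipping a private editor;
# fall back to vi if it is unset or empty (the fallback is a choice,
# not a standard).
edit_file() {
    "${EDITOR:-vi}" "$1"
}

# Configuration as plain ASCII text: system-wide in /etc,
# per-user in a dot file in $HOME. The 'mytoolrc' name is hypothetical.
CONF="$HOME/.mytoolrc"
[ -f "$CONF" ] || CONF=/etc/mytoolrc
```

Any tool written this way composes with whatever editor the user already knows, which is exactly the point.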
But it also means the >>output<< of each tool needs to be usable as input
by the others [*i.e.*, docx or xlsx files are a no-no].
For example, for many things on my Mac, I do use the GUI-based tools --
there is no doubt they are better integrated with the core Mac system >>for
some tasks.<< But only if I obey a set of rules Apple decrees. For
instance, this email reader is easier much of the time than MH (or the HM
front end, for that matter), which I used for probably 25-30 years. But on
my Mac, I always have 4 or 5 iterm2(1) open running zsh(1) these days. And,
much of my typing (and everything I do as a programmer) is done in the shell
(including a simple text editor, not an 'IDE'). People who love IDEs swear
by them -- I'm just not impressed - there is nothing they do for me that
makes things easier, and I would have to learn yet another scheme.
That said, sadly, Apple is forcing me to learn yet another debugger since
none of the traditional UNIX-based ones still work on the M1-based systems.
But at least LLDB is in the same key as sdb/dbx/gdb *et al*., so it is a
PITA but not a huge thing since, in the end, LLDB is still based on the
UNIX idea of a single well-designed, task-specific tool for each job, each
able to work with the others.
FWIW: I was recently a tad gob-smacked to rediscover the core idea of UNIX
and its tools, which I have taken as a given since the 1970s.
It turns out that I've been helping the PiDP-10 users (all of the
PiDPs are cool, BTW). Before I saw UNIX, I was paid to program a PDP-10. In
fact, my first UNIX job was helping move programs from the 10 to the UNIX.
Thus ... I had been thinking that doing a little PDP-10 hacking shouldn't
be too hard, just dusting off some of that old knowledge. Some of it has,
of course, come back. But daily, I am discovering that small things that
are so natural with a few simple tools can be hard on those systems.
I am realizing (rediscovering) that the "build it into my tool" was the
norm in those days. So instead of a pr(1) command, there was a tool that
created output for the lineprinter. You give it a file, and it is its job to
figure out what to do with it, so it has its set of features (switches) -
so "bloat" is that each tool (like many current GUI tools) has private ways
of doing things. If the maker of tool X decided to support some idea,
there was no guarantee they would do it the way tool Y did. The problem, of
course, was that tools X and Y had to 'know about' each type of file (in
IBM terms, use its "access method"). Yes, the engineers at DEC, in their
wisdom, tried to "standardize" those access methods/switches/features >>if
you implemented them<< -- but not all tools did.
This leads me back to the question Rob raises. Years ago, I got into an
argument with Dave Cutler RE: UNIX *vs.* VMS. Dave's #1 complaint about
UNIX in those days was that it was not "standardized." Every program was
different, and more to Dave's point, there was no attempt to make switches
or errors the same [getopt(3) had been introduced but was not being used by
most applications]. He hated that tar/tp used "keys" and tools like cpio
used switches. Dave hated that I/O was so simple - in his world, all user
programs should use his RMS access methods, of course [1]. VMS, TOPS, *etc.*,
tried to maintain a system-wide error scheme, and users could look things
like errors up in a system DB by error number, *etc*. Simply put, VMS is
very "top-down."
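For what it's worth, the standardization Dave wanted did eventually take hold on the UNIX side through getopt(3) and, in the shell, the POSIX getopts builtin. A minimal sketch, with the -n and -o option letters invented purely for illustration:

```shell
#!/bin/sh
# Sketch of standardized switch parsing with the POSIX getopts
# builtin; the option letters here are made up for the example.
parse_args() {
    numbering=no
    outfile=""
    OPTIND=1
    while getopts "no:" opt "$@"; do
        case "$opt" in
            n) numbering=yes ;;      # boolean flag
            o) outfile="$OPTARG" ;;  # switch that takes an argument
            *) return 2 ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "numbering=$numbering outfile=$outfile args=$*"
}

parse_args -n -o out.txt file1 file2
```

Every tool that uses getopt/getopts gets the same switch grammar for free, which is precisely the consistency VMS tried to impose top-down.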
My point with Dave was that by being "bottom-up," the best ideas in UNIX
were able to rise. And yes, it did mean some rough edges and repeated
implementations of the same idea. But UNIX offered a choice, and while Rob
and I find pr -tn perfectly acceptable, thank you, clearly someone
else desired the features that nl provides. The folks that put together
System 3 offered both solutions and let the user choose.
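The overlap (and the difference) is easy to see side by side; as far as I recall, nl's default is to number only non-blank lines, which may be exactly the feature someone wanted:

```shell
# Two tools, one job: number the lines of a file.
printf 'alpha\n\nbeta\n' > /tmp/demo.txt

pr -tn /tmp/demo.txt    # numbers every line, blank or not
nl /tmp/demo.txt        # default: numbers only the non-blank lines
nl -ba /tmp/demo.txt    # -ba numbers all lines, much like pr -tn
```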
This, of course, counts as bloat, but maybe that type of bloat is not so bad?
My own thinking is this - get things down to the basics and simplest
primitives and then build back up. It's okay to offer choices, as long as
the foundation is simple and clean. To me, bloat becomes an issue when you
do the same thing over and over again, particularly because you can not
utilize what is there already. The worst example is NIH - which happens way
more than it should.
I think the kind of bloat that GUI tools and TOPS et al. created forces
recreation, not reuse. But offering choice, at the expense of multiple
tools that do the same things strikes me as reasonable/probably a good
thing.
1.] BTW: One of my favorite DEC stories WRT VMS engineering has to do
with the RMS I/O system. Supporting C on VMS was a bit of a PITA.
Eventually, the VMS engineers added Stream I/O - which simplified the C
runtime, but it was also made available for all technical languages.
Fairly soon after it was released, the DEC Marketing folks discovered
almost all new programs, regardless of language, had started to use Stream
I/O and many older programs were being rewritten by customers to use it. In
fact, inside of DEC itself, the languages group eventually rewrote things
like the FTN runtime to use streams, making it much smaller/easier to
maintain. My line in the old days: "It's not so bad that every I/O call
has to offer 1000 options; it's that Dave has to check each one for every
I/O." It's a classic example of how you can easily build RMS I/O out of
stream-based I/O, but the other way around is much harder. My point here is
to *use the right primitives*. RMS may have made it easier to build RDB,
but it impeded everything else.
Sorry for the dual list post; I don't know who monitors COFF, the proper place for this.
There may be a good timeline of the early decades of Computer Science and its evolution at Universities in some countries, but I'm missing it.
Doug McIlroy lived through all this, I hope he can fill in important gaps in my little timeline.
It seems from the 1967 letter, defining the field was part of the zeitgeist leading up to the NATO conference.
1949 ACM founded
1958 First ‘freshman’ computer course in USA, Perlis @ CMU
1960 IBM 1400 - affordable & ‘reliable’ transistorised computers arrived
1965 MIT / Bell / General Electric begin Multics project.
CMU establishes Computer Sciences Dept.
1967 “What is Computer Science” letter by Newell, Perlis, Simon
1968 “Software Crisis” and 1st NATO Conference
1969 Bell Labs withdraws from Multics
1970 GE sells computer business, including Multics, to Honeywell
1970 PDP-11/20 released
1974 Unix issue of CACM
=========
The arrival of transistorised computers - cheaper, more reliable, smaller & faster - was a trigger for the accelerated uptake of computers.
The IBM 1400-series was offered for sale in 1960, becoming the first (large?) computer to sell 10,000 units - a marker of both effective marketing & sales and attractive pricing.
The 360-series, IBM’s “bet the company” machine, was in full development when the 1400 was released.
=========
Attached is a text file, a reformatted version of a 1967 letter to ’Science’ by Allen Newell, Alan J. Perlis, and Herbert A. Simon:
"What is computer science?”
<https://www.cs.cmu.edu/~choset/whatiscs.html>
=========
A 1978 masters thesis on Early Australian Computers (back to the 1950's, mainly 1960's) cites a 17 June 1960 CSIRO report estimating
1,000 computers in the US and 100 in the UK, with no estimate mentioned for Western Europe.
The thesis has a long discussion of what to count as a (digital) ‘computer’ -
sources used different definitions, resulting in very different numbers,
making it difficult to reconcile early estimates, especially across continents & countries.
Reverse estimating to 1960 from the “10,000” NATO estimate of 1968, with a 1- or 2-year doubling time,
gives a range of 200-1,000, including the “100” in the UK.
Licklider and later directors of ARPA’s IPTO threw millions into Computing research in the 1960’s, funding research and University groups directly.
[ UCB had many projects/groups funded, including the CSRG creating BSD & TCP/IP stack & tools ]
Obviously there was more to the "Both sides of the Atlantic" argument of E.W. Dijkstra and Alan Kay - funding and numbers of installations were very different.
The USA had a substantially larger installed base of computers, even per person,
and with more university graduates trained in programming, a higher take-up in the private sector, not just the public sector and defence, was possible.
=========
<https://www.acm.org/about-acm/acm-history>
In September 1949, a constitution was instituted by membership approval.
————
<https://web.archive.org/web/20160317070519/https://www.cs.cmu.edu/link/inst…>
In 1958, Perlis began teaching the first freshman-level computer programming course in the United States at Carnegie Tech.
In 1965, Carnegie Tech established its Computer Science Department with a $5 million grant from the R.K. Mellon Foundation. Perlis was the first department head.
=========
From the 1968 NATO report [pg 9 of pdf ]
<http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF>
Helms:
In Europe alone there are about 10,000 installed computers — this number is increasing at a rate of anywhere from 25 per cent to 50 per cent per year.
The quality of software provided for these computers will soon affect more than a quarter of a million analysts and programmers.
d’Agapeyeff:
In 1958 a European general purpose computer manufacturer often had less than 50 software programmers,
now they probably number 1,000-2,000 people; what will be needed in 1978?
_Yet this growth rate was viewed with more alarm than pride._ (comment)
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
... every memory location is a stack.
I'm thinking of a notation like:
A => CADR (yes, I'm a LISP fan)
A' => A[0]
A'' => A[-1]
etc.
Not quite APL\360, but I guess pretty close :-)
-- Dave
Has there ever been a full implementation of PL/I? It seems akin to
solving the halting problem...
Yes, I've used PL/I in my CompSci days, and was told that IBM had trademarked
everything from /I to /C :-)
-- Dave, who loved PL/360
Good morning, I was reading recently on the earliest days of COBOL and from what I could gather, the picture (heh) regarding compilers looks something like this:
- RCA introduced the "Narrator" compiler for the 501[1] in November, 1960 (although it had been demonstrated in August of that year)
- Sperry Rand introduced a COBOL compiler running under the BOSS operating system[2] sometime in late 1960 or early 1961 (according to Wikipedia which then cites [3]).
First, I had some trouble figuring out the loading environment on the RCA 501, would this have been a "set the start address on the console, load cards/PPT, and go" sort of setup, or was there an OS/monitor involved?
And for the latter, there is a UNIVAC III COBOL Programmer's Guide (U-3389) cited in [2] but thus far I haven't found this online. Does anyone know if COBOL was also on the UNIVAC II (or I) and/or if any distinct COBOL documents from this era of UNIVAC survive anywhere?
- Matt G.
[1] - http://bitsavers.org/pdf/rca/501/RCA501_COBOL_Narrator_Dec60.pdf
[2] - http://bitsavers.org/pdf/univac/univac3/U-3521_BOSS_III_Jan63.pdf
[3] - Williams, Kathleen Broome (10 November 2012). Grace Hopper: Admiral of the Cyber Sea.
[ moved to coff ]
On Thu, Mar 14, 2024 at 7:49 AM Theodore Ts'o <tytso@mit.edu> wrote:
> On Thu, Mar 14, 2024 at 11:44:45AM +1100, Alexis wrote:
> >
> > i basically agree. i won't dwell on this too much further because i
> > recognise that i'm going off-topic, list-wise, but:
> >
> > i think part of the problem is related to different people having
> > different preferences around the interfaces they want/need for
> > discussions. What's happened is that - for reasons i feel are
> > typically due to a lock-in-oriented business model - many discussion
> > systems don't provide different interfaces/'views' to the same
> > underlying discussions. Which results in one community on platform X,
> > another community on platform Y, another community on platform Z
> > .... Whereas, for example, the 'Rocksolid Light' BBS/forum software
> > provides a Web-based interface to an underlying NNTP-based system,
> > such that people can use their NNTP clients to engage in forum
> > discussions. i wish this sort of approach was more common.
>
> This is a bit off-topic, and so if we need to push this to a different
> list (I'm not sure COFF is much better?), let's do so --- but this is
> a conversation which is super-important to have. If not just for Unix
> heritage, but for the heritage of other collective systems-related
> projects, whether they be open source or proprietary.
>
> A few weeks ago, there were people who showed up on the git mailing
> list requesting that discussion of the git system move from the
> mailing list to using a "forge" web-based system, such as github or
> gitlab. Their reason was that there were tons of people who think
> e-mail is so 1970's, and that if we wanted to be more welcoming to the
> up-and-coming programmers, we should meet them where they are. The
> obvious observations of how github was proprietary, and locking up our
> history there might be contra-indicated was made, and the problem with
> gitlab is that it doesn't have a good e-mail gateway, and while we
> might be disenfranchising the young'uns by not using some new-fangled
> web interface, disenfranchising the existing base of expertise was
> an even worse idea.
>
> The best that we have today is lore.kernel.org, which is used by both
> the Linux Kernel and the git development communities. It uses
> public-inbox to archive the mailing list traffic, and it can be
> accessed via threaded e-mail interface, as well as via NNTP. There
> are also tools for subscribing to messages that match a filtering
> criteria, as well as tools for extracting patches plus code review
> sign-offs into a form that can be easily consumed by git.
>
email based flows are horrible. Absolutely the worst. They are impossible
to manage. You can't subscribe to them w/o insane email filtering rules,
you can't discover patches or lost patches easily. There's no standard way
to do something as simple as say 'never mind'. There's no easy way
to follow much of the discussion or find it after the fact if your email was
filtered off (ok, yea, there kinda is, if you know which archives to troll
through).
As someone who recently started contributing to QEMU I couldn't get over
how primitive the email interaction was. You magically have to know who
to CC on the patches. You have to look at the maintainers file, which is
often stale, and many of the people you CC never respond. If a patch is
dropped or overlooked, it's up to me to nag people to please take a look.
There's
no good way for me to find stuff adjacent to my area (it can be done, but
it takes a lot of work).
So you like it because you're used to it. I'm firmly convinced that the
email workflow works only because of the 30 years of tooling, workarounds,
extra scripts, extra tools, cult knowledge, and any number of other "living
with the poo, so best polish it up" projects. It's horrible. It's like
everybody has collective Stockholm syndrome.
The people begging for a forge don't care what the forge is. Your
philosophical objections to one are blinding you to things like self-hosted
gitea, gitlab, and gerrit, which are light years ahead of this insane
workflow.
I'm no spring chicken (I sent you patches, IIRC, when you and Bruce were
having the great serial port bake off). I've done FreeBSD for the past 30
years and we have none of that nonsense. The tracking isn't as exacting as
Linux, sure, I'll grant. The code review tools we've used over the years
are good enough, but everybody that's used them has ideas to make them
better. We even accept pull requests from github, but our source of truth
is away from github. We've taken an all-of-the-above approach and it makes
the project more approachable. In addition, I can land reviewed and tested
code in FreeBSD in under an hour (including the review and acceptance
testing process). This makes it way more efficient for me to do things in
FreeBSD than in QEMU, where the turnaround time is days, where I have to
wait for the one true pusher to get around to my pull request, and where I
have to go through weeks-long processes to get things done (and I've
graduated to maintainer status).
So the summary might be that email is so 1970s, but the real problem with
it is that it requires a huge learning curve. It's not something any sane
person would design from scratch today; it has all these rules you have to
cope with, many unwritten. You have to hope that the right people didn't
screw up their email filters. You have to wait days or weeks for an answer,
and the enthusiasm to contribute dies in that time. A quick turnaround time
is essential for driving enthusiasm for new committers in the community.
It's one of my failings in running FreeBSD's github experiment: it takes me
too long to land things, even though we've gone from years to get an answer
to days to weeks....
I studied the Linux approach when FreeBSD was looking to improve its git
workflow. And none of the current developers think it's a good idea. In
fact, I got huge amounts of grief, death threats, etc. for even suggesting
it. Everybody thought, to a person, that as bad as our hodge-podge of
bugzilla, phabricator and cruddy git push hooks is, it was light years
ahead of Linux's system and allowed us to move more quickly and produced
results that were good enough.
So, respectfully, I think Linux has succeeded despite its tooling, not
because of it. Other factors have made it successful. The heroics that are
needed to make it work are possible only because there's a huge supply of
effort that can be wasted and inefficiently deployed and still meet the
needs of the project.
Warner
On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote:
>
> I eventually reverted back to Linux because it was clear that the
> user community was getting much larger, I was using it
> professionally at work and there was just a larger range of
> applications available. Lately, I find myself getting tired of the
> bloat and how big and messy and complicated it has all gotten.
> Thinking of looking for something simpler and was just wondering
> what do other old timers use for their primary home computing needs?
I'm surprised how few of the responders use BSD. My machines all
(currently) run FreeBSD, with the exception of a Microsoft box
(distress.lemis.com) that I use remotely for photo processing. I've
tried Linux (used to work developing Linux kernel code), but I
couldn't really make friends with it. It sounds like our reasons are
similar.
More details:
1977-1984: CP/M, 86-DOS
1984-1990: MS-DOS
1991-1992: Inactive UNIX
1992-1997: BSD/386, BSD/OS
1997-now: FreeBSD
Greg
--
Sent from my desktop computer.
Finger grog@lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
This is UNIX history, but since the Internet's history and Unix history are
so intertwined, I'm going to risk the wrath of the IH moderators to try to
explain, as I was one of the folks who was at the table in those times
and participated in my small way in both events: the birth of the Internet
and the spreading of the UNIX IP.
More details can be found in a paper I did a few years ago:
https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-…
[If you cannot find it and are interested send me email off list and I'll
forward it].
And ... if people want to continue this discussion -- please, please, move
it to the more appropriate COFF mailing list:
https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in
this reply.
On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history <
internet-history@elists.isoc.org> wrote:
> Forwarded for Barbara
>
> > I will admit your response is confusing me. My post only concerns what
> I think I remember as a problem in getting BSD UNIX, in particular the
> source code. Nothing about getting something we wanted to use on a
> hardware platform from one of the commercial vendors. We needed the BSD
> source but got hung up.
>
Let me see if I can explain better ...
Assuming you were running BSD UNIX on a Vax, your team would have needed
two things:
- an AT&T license for 32/V [Research Version 7 -- ported to a Vax/780 at
AT&T], and
- a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of
the University of CA.
The first license gave your team a few core rights from AT&T:
1. the right to run UNIX binaries on a single CPU (which was named in
your license)
2. the right to look at and modify the sources,
3. the right to create derivative works from the AT&T IP, and
4. the right to exchange your derivative works with other people that
held a similar license from AT&T.
[AT&T had been forced to allow this access (license) to their IP under the
rules of the 1956 consent decree - see the paper for more details; but
remember, as part of the consent decree that allowed it a legal monopoly
on the phone system, AT&T had to make its IP available to the US Gov --
which I'm guessing is the crux of Barbara's question/observation].
For not-for-profits (University/Research), a small fee was allowed to be
charged (order of 1-2 hundred $s) to process the paperwork and copy the mag
tape. But their IP came without any warranty, and you had to hold AT&T
harmless if you used it. In those days, we referred to this as *the UNIX IP
was abandoned on your doorstep.* BTW: This license allowed the research
sites to move AT&T derivative work (binaries) within their site freely.
Still, if you look at the license carefully, most had a restriction
(often/usually ignored at the universities) that the sources were supposed
to only be available on the original CPU named in their specific license.
Thus, if you had a University license, no fees were charged to run the
AT&T IP on other CPUs --> however, the licensees were not allowed to use it
for "commercial" uses at the University [BTW: this clause was often
ignored, although a group of us CMU hackers in the late 1970s famously
went on strike until the University obtained at least one commercial
license]. The agreement was that a single CPU should be officially bound
for all commercial use for that institution. I am aware that Case-Western
got a similar license soon after CMU did (their folks found out about
the CMU strike/license). But I do not know if MIT, Stanford, or UCB
officials came clean on that part and paid for a commercial license
(depending on the type of license, its cost was the order of $20K-25K for
the first CPU and on the order of $7K-10K for each CPU afterward - each of
these "additional CPUs" could also have the sources, but each was named in
an appendix to the license with AT&T). I believe that some of the larger
state schools like Penn State, Rutgers, Purdue, and UW started to follow
that practice by the time Unix started to spread around each campus.
That said, a different license for UNIX-based IP could be granted by the
Regents of the University of CA and managed by its 'Industrial
Liaison Office' at UCB (the 'ILO' - the same folks that brought you
licenses for tools like SPICE, SPLICE, MOTIS, *et al*). This license gave
the holder
the right to examine and use the UCB's derivative works on anything as long
as you acknowledged that you got that from UCB and held the
Regents blameless [we often called this the 'dead-fish license' -- *you
could make a chip, make a computer, or even wrap dead-fish in it.* But you
had to say you started with something from the Regents, but they were not
to be blamed for what you did with it].
The Regents were exercising rights 3 and 4 from AT&T. Thus, a team who
wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a*. BSD)
needed to demonstrate that they held the appropriate license from AT&T
[send a copy of the signature page from your license to the ILO] before UCB
would release the bits. The ILO also charged a small processing fee on the
order of $1K. [The original BSD is unnumbered, although most refer to
it today as 1BSD to differentiate it from later BSD releases for UNIX].
Before I go on, in those times, the standard way we operated was that you
needed to have a copy of someone else's signature page to share things. In
what would later become USENIX (truth here - I'm an ex-president of the
same), you could only get invited and come to a conference if you were
licensed from AT&T. That was not a big deal. We all knew each other.
FWIW: at different times in my career, I have had a hanging file in a
cabinet with copies of a number of these pages from different folks with
whom I would share mag tapes (remember, this is pre-Internet, and many of
the folks using UNIX were not part of the ARPAnet).
However, the song has other verses that make this a little confusing.
If your team obtained a* commercial use license* from AT&T, they could
further obtain a *commercial redistribution license*. This was initially
granted for the Research Seventh Edition. It was later rewritten (with the
business terms changing each time) for what would eventually be called
System III[1], and then the different System V releases. The price of the
redistribution license for V7 was $150K, plus a sliding scale for each CPU
that ran the AT&T IP, depending on the number of CPUs you needed. With
this, the single-CPU restriction on the sources was removed.
So ... if you had a redistribution license, you could also get a license
from the Regents, and as long as you obeyed their rules, you could sell a
copy of UNIX to run on any licensed target. Traditionally, the license was
part of the same purchase when the hardware came from a firm like DEC, IBM,
Masscomp, *etc*. However, separate SW licenses were sold via firms such as
Microsoft and Mt. Xinu. The purchaser of *a binary license* from one of
those firms did not have the right to do anything but use the AT&T
derivative work. If your team had a binary license, you could not obtain
any of the BSD distributions until the so-called 'NET2' BSD release [and
I'm going to ignore the whole AT&T/BSDi/Regents case here as it is not
relevant to Barbara's question/comment].
So the question is, how did a DoD contractor, be it BBN, Ford Aerospace,
SRI, etc., originally get access to UNIX IP? Universities and traditional
research teams could get a research license. Commercial firms like DEC
needed a commercial license. Folks with DoD contracts were in a hazy
area. The original V5 commercial license was written for Rand, a DoD
contractor. However, as discussed here in the IH mailing list and
elsewhere, some places like BBN had access to the core UNIX IP as part of
their DoD contracts. I believe Ford Aerospace was working together with
AT&T as part of another US Gov project - which is how UNIX got there
originally (Ford Aero could use it for that project, but not the folks at
Ford Motors, for instance).
The point is, if you accessed the *IP indirectly* like that, then
your site probably did not have a negotiated license with a signature page
to send to someone.
@Barbara, I can not say for sure, but if this was either a PDP-11 or a VAX
and you wanted one of the early BSDs, I guess/suspect that maybe your team
was dealing with an indirect path to AT&T licensing -- your site license
might have come from a US Gov contract, not directly. So trying to get a
BSD tape directly from the ILO might have been more difficult without a
signature page.
So, rolling back to the original question: you could get access to the BSD
sources, but you had to demonstrate to the ILO folks in UCB's Cory Hall
that you were legally allowed access to the AT&T IP in source code form.
That demonstration was traditionally fulfilled with a xerographic copy of
the signature page for your institution, which the ILO kept on file. That
said, if you had legal access to the AT&T IP by indirect means, I do not
know how the ILO completed that check or what they needed to protect the
Regents.
Clem
1.] What would be called System III from a marketing standpoint was
originally developed as PWB 3.0. This was the system a number of firms,
including my own, were discussing with AT&T at the famous meetings at
'Ricky's Hyatt' during the price (re)negotiations after the original V7
redistribution license.