Good evening, I recently came into possession of a "Draft Proposed American National Standard for BASIC" circa September 15, 1980, 6 years prior to the publication of the Full BASIC standard.
I did some brief searching around, checked bitsavers, archive.org, but I don't happen to see this archived anywhere. Is anyone aware of such a scan? If not, I'll be sure to add it to my scan docket...which is a bit slow these days due to a beautiful spring and much gardening...but not forgotten!
Flipping through this makes me wonder how things might have been different if this dropped in 1980 rather than 1986.
- Matt G.
The Newcastle Connection, aka Unix United, was an early experiment in
transparent networking: see <
https://web.archive.org/web/20160816184205/http://www.cs.ncl.ac.uk/research…>
for a high-level description. A name of the form "/../host/path"
represented a file or device on a remote host in a fully transparent way.
This was layered on V7 at the libc level, so that the kernel did not need
to be modified (though the shell did, since it was not libc-based at the
time). MUNIX was an implementation of the same idea using System V as the
underlying system.
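To make the naming concrete, a hedged illustration (the host and path names
here are made up, and assume both machines belong to the same Unix United
system):

    # Remote files are just paths that climb "above" the local root,
    # so ordinary tools work on them unchanged:
    diff /../host1/usr/alice/notes /../host2/usr/bob/notes
    cat /../host2/dev/lp      # even remote devices are reachable this way

Because the naming is handled below the tools, no rdiff, rcat, etc. are
needed.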
This appears to be a VHS vs. Betamax battle: NFS was not transparent, but
Sun had far more marketing clout. However, the Newcastle Connection
required a single uid space (as far as I can tell), which may also have
been a (perceived) institutional barrier.
On Thu, May 16, 2024 at 3:34 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi,
>
> I've set ‘mail-followup-to: coff(a)tuhs.org’.
>
> > > Every so often I want to compare files on remote machines, but all
> > > I can do is to fetch them first (usually into /tmp); I'd like to do
> > > something like:
> > >
> > > rdiff host1:file1 host2:file2
> > >
> > > Breathes there such a beast?
>
> No, nor should there. It would be slain lest it beget rcmp, rcomm,
> rpaste, ...
>
> > > Think of it as an extension to the Unix philosophy of "Everything
> > > looks like a file"...
>
> Then make remote files look local as far as their access is concerned.
> Ideally at the system-call level. Less ideal, at libc.a.
>
> > Maybe
> >
> > diff -u <(ssh host1 cat file1) <(ssh host2 cat file2)
>
> This is annoyingly noisy if the remote SSH server has sshd_config(5)'s
> ‘Banner’ set which spews the contents of a file before authentication,
> e.g. the pointless
>
> This computer system is the property of ...
>
> Disconnect NOW if you have not been expressly authorised to use this
> system. Unauthorised use is a criminal offence under the Computer
> Misuse Act 1990.
>
> Communications on or through ...uk's computer systems may be
> monitored or recorded to secure effective system operation and for
> other lawful purposes.
>
> It appears on stderr so doesn't upset the diff but does clutter.
> And discarding stderr is too sloppy.
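> A workaround sketch, for what it's worth - assuming an OpenSSH client
> where dropping the log level below its default of INFO suppresses the
> pre-auth banner without hiding real errors:
>
>     # rdiff host1:file1 host2:file2 - diff two remote files over ssh.
>     # Needs a shell with process substitution (bash, ksh93, zsh).
>     rdiff() {
>         diff -u \
>             <(ssh -o LogLevel=ERROR "${1%%:*}" cat "${1#*:}") \
>             <(ssh -o LogLevel=ERROR "${2%%:*}" cat "${2#*:}")
>     }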
>
> --
> Cheers, Ralph.
>
While the idea of small tools that do one job well is the core tenet of
what I think of as the UNIX philosophy, this goes a bit beyond UNIX, so I
have moved this discussion to COFF and BCCing TUHS for now.
The key is that not all "bloat" is the same (really) - or maybe one person's
bloat is another person's preference. That said, NIH leads to pure bloat
with little to recommend it, while multiple offerings are a choice. Maybe
the difference between the two is just one person's view over another's.
On Fri, May 10, 2024 at 6:08 AM Rob Pike <robpike(a)gmail.com> wrote:
> Didn't recognize the command, looked it up. Sigh.
>
Like Rob -- this was a new one for me, too.
I looked, and it is on the SYS3 tape; see:
https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/man/man1/nl.1
> pr -tn <file>
>
> seems sufficient for me, but then that raises the question of your
> question.
>
Agreed, that has been burned into the ROMs in my fingers since the
mid-1970s 😀
BTW: SYS3 has pr(1) with both switches too (more in a minute)
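For anyone who wants to see the overlap, a quick sketch (exact numbering
format and spacing differ between implementations):

    $ printf 'alpha\nbeta\ngamma\n' > demo.txt
    $ pr -t -n demo.txt    # -t drops the header/trailer, -n numbers lines
    $ nl -b a demo.txt     # nl numbering all lines gives much the same thing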
> I've been developing a theory about how the existence of something leads
> to things being added to it that you didn't need at all and only thought of
> when the original thing was created.
>
That is a good point, and I generally agree with you.
> Bloat by example, if you will. I suspect it will not be a popular theory,
> however accurately it may describe the technological world.
>
Of course, sometimes the new features >>are<< easier (more natural *for
some people*). And herein lies the core problem. The bloat is often
repetitive, and I suggest that it is often implemented in the wrong place -
and usually for the wrong reasons.
Bloat comes about because somebody thinks they need some feature and
probably doesn't understand that it is already there or how they can use
it. But if they did know about it, their tool could be set up to exploit
it - so they would not need to reinvent it. GUI-based tools are notorious for this
failure. Everyone seems to have a built-in (unique) editor, or a private
way to set up configuration options et al. But ... that walled garden is
comfortable for many users and >>can be<< useful sometimes.
Long ago, UNIX programmers learned that looking for $EDITOR in the
environment was way better than creating one. Configuration was kept as ASCII
text, stored in /etc for system-wide settings and in dot files in the home
directory for users. But it also means the >>output<< of each tool needs to be
usable by the others (*i.e.*, docx or xlsx files are a no-no).
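The convention itself is tiny - a sketch, where the scratch-file handling
and the ed(1) fallback are just illustrative:

    # Defer to the user's editor; fall back to ed(1) only if EDITOR is unset.
    tmpfile=$(mktemp)
    "${EDITOR:-ed}" "$tmpfile"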
For example, for many things on my Mac, I do use the GUI-based tools --
there is no doubt they are better integrated with the core Mac system >>for
some tasks.<< But only if I obey a set of rules Apple decrees. For
instance, this email reader is easier much of the time than MH (or the HM
front end, for that matter), which I used for probably 25-30 years. But on
my Mac, I always have 4 or 5 iterm2(1) windows open running zsh(1) these days.
And much of my typing (and everything I do as a programmer) is done in the
shell (including a simple text editor, not an 'IDE'). People who love IDEs
swear by them -- I'm just not impressed; there is nothing they do for me that
makes my work easier, and I would have to learn yet another scheme.
That said, sadly, Apple is forcing me to learn yet another debugger since
none of the traditional UNIX-based ones still work on the M1-based systems.
But at least LLDB is in the same key as sdb/dbx/gdb *et al*., so it is a
PITA but not a huge thing; in the end, LLDB is still based on the UNIX
idea of a single, well-designed, task-specific tool for each job, and such
tools can work with each other.
FWIW: I was recently a tad gob-smacked by the core idea of UNIX and its
tools, which I have taken for a fact since the 1970s.
It turns out that I've been helping with the PiDP-10 users (all of the
PiDPs are cool, BTW). Before I saw UNIX, I was paid to program a PDP-10. In
fact, my first UNIX job was helping move programs from the 10 to the UNIX.
Thus ... I had been thinking that a little PDP-10 hacking shouldn't
be too hard - just dust off some of that old knowledge. Some of it has,
of course, come back. But daily, I am discovering that small things which are
so natural with a few simple tools can be hard on those systems.
I am realizing (rediscovering) that the "build it into my tool" was the
norm in those days. So instead of a pr(1) command, there was a tool that
created output to the lineprinter. You give it a file, and it is its job to
figure out what to do with it, so it has its set of features (switches) -
so "bloat" is that each tool (like many current GUI tools) has private ways
of doing things. If the maker of tool X decided to support some idea, there
was no guarantee they would do it the way tool Y did. The problem, of course,
was that tools X and Y
had to 'know about' each type of file (in IBM terms, use its "access
method"). Yes, the engineers at DEC, in their wisdom, tried to
"standardize" those access methods/switches/features >>if you implemented
them<< -- but they are not all there in every tool.
This leads me back to the question Rob raises. Years ago, I got into an
argument with Dave Cutler RE: UNIX *vs.* VMS. Dave's #1 complaint about
UNIX in those days was that it was not "standardized." Every program was
different, and more to Dave's point, there was no attempt to make switches
or errors the same (getopt(3) had been introduced but was not being used by
most applications). He hated that tar/tp used "keys" and tools like cpio
used switches. Dave hated that I/O was so simple - in his world, all user
programs should, of course, use his RMS access methods [1]. VMS, TOPS, *etc.*,
tried to maintain a system-wide error scheme, and users could look up things
like errors in a system DB by error number, *etc*. Simply put, VMS is
very "top-down."
My point with Dave was that by being "bottom-up," the best ideas in UNIX
were able to rise. And yes, it did mean some rough edges and repeated
implementations of the same idea. But UNIX offered a choice, and while Rob
and I find pr -tn perfectly acceptable, thank you, clearly someone
else desired the features that nl provides. The folks that put together
System 3 offered both solutions and let the user choose.
This, of course, comes across as bloat, but maybe that is a type of bloat that is not so bad?
My own thinking is this - get things down to the basics and the simplest
primitives and then build back up. It's okay to offer choices, as long as
the foundation is simple and clean. To me, bloat becomes an issue when you
do the same thing over and over again, particularly because you can not
utilize what is there already. The worst example is NIH - which happens way
more than it should.
I think the kind of bloat that GUI tools and TOPS et al. created forces
recreation, not reuse. But offering choice, at the expense of multiple
tools that do the same things, strikes me as reasonable and probably a good
thing.
1.] BTW: One of my favorite DEC stories WRT to VMS engineering has to do
with the RMS I/O system. Supporting C using VMS was a bit of a PITA.
Eventually, the VMS engineers added Stream I/O - which simplified the C
runtime, but it was also made available for all technical languages.
Fairly soon after it was released, the DEC Marketing folks discovered
almost all new programs, regardless of language, had started to use Stream
I/O and many older programs were being rewritten by customers to use it. In
fact, inside of DEC itself, the languages group eventually rewrote things
like the FTN runtime to use streams, making it much smaller/easier to
maintain. My line in the old days: "It's not so bad that every I/O call has to
offer 1000 options; it's that Dave has to check each one on every I/O." It's a
classic example of how you can easily build RMS I/O out of stream-based
I/O, but the other way around is much harder. My point here is to *use
the right primitives*. RMS may have made it easier to build RDB, but it
impeded everything else.
Sorry for the dual-list post; I don’t know who monitors COFF, the proper place for this.
There may be a good timeline of the early decades of Computer Science and its evolution at universities in some countries, but I’m missing it.
Doug McIlroy lived through all this, I hope he can fill in important gaps in my little timeline.
It seems from the 1967 letter, defining the field was part of the zeitgeist leading up to the NATO conference.
1949 ACM founded
1958 First ‘freshman’ computer course in USA, Perlis @ CMU
1960 IBM 1400 - affordable & ‘reliable’ transistorised computers arrived
1965 MIT / Bell / General Electric begin Multics project.
CMU establishes Computer Sciences Dept.
1967 “What is Computer Science” letter by Newell, Perlis, Simon
1968 “Software Crisis” and 1st NATO Conference
1969 Bell Labs withdraws from Multics
1970 GE sells its computer business, including Multics, to Honeywell
1970 PDP-11/20 released
1974 Unix issue of CACM
=========
The arrival of transistorised computers - cheaper, more reliable, smaller & faster - was a trigger for the accelerated uptake of computers.
The IBM 1400-series was offered for sale in 1960, becoming the first (large?) computer to sell 10,000 units - a marker of both effective marketing & sales and attractive pricing.
The 360-series, IBM’s “bet the company” machine, was in full development when the 1400 was released.
=========
Attached is a text file, a reformatted version of a 1967 letter to ’Science’ by Allen Newell, Alan J. Perlis, and Herbert A. Simon:
"What is computer science?”
<https://www.cs.cmu.edu/~choset/whatiscs.html>
=========
A 1978 master's thesis on Early Australian Computers (back to the 1950’s, mainly 1960’s) cites a 17 June 1960 CSIRO report estimating
1,000 computers in the US and 100 in the UK, with no estimate mentioned for Western Europe.
The thesis has a long discussion of what to count as a (digital) ‘computer’ -
sources used different definitions, resulting in very different numbers,
making it difficult to reconcile early estimates, especially across continents & countries.
Reverse estimating to 1960 from the “10,000” NATO estimate of 1968, with a 1- or 2-year doubling time,
gives a range of 200-1,000, including the “100” in the UK.
Licklider and later directors of ARPA’s IPTO threw millions into Computing research in the 1960’s, funding research and University groups directly.
[ UCB had many projects/groups funded, including the CSRG creating BSD & TCP/IP stack & tools ]
Obviously there was more to the “Both sides of the Atlantic” argument of E.W. Dijkstra and Alan Kay - funding and numbers of installations were very different.
The USA had a substantially larger installed base of computers, even per person,
and with more university graduates trained in programming, a higher take-up in private sector, not just the public sector and defence, was possible.
=========
<https://www.acm.org/about-acm/acm-history>
In September 1949, a constitution was instituted by membership approval.
————
<https://web.archive.org/web/20160317070519/https://www.cs.cmu.edu/link/inst…>
In 1958, Perlis began teaching the first freshman-level computer programming course in the United States at Carnegie Tech.
In 1965, Carnegie Tech established its Computer Science Department with a $5 million grant from the R.K. Mellon Foundation. Perlis was the first department head.
=========
From the 1968 NATO report [pg 9 of pdf ]
<http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF>
Helms:
In Europe alone there are about 10,000 installed computers — this number is increasing at a rate of anywhere from 25 per cent to 50 per cent per year.
The quality of software provided for these computers will soon affect more than a quarter of a million analysts and programmers.
d’Agapeyeff:
In 1958 a European general purpose computer manufacturer often had less than 50 software programmers,
now they probably number 1,000-2,000 people; what will be needed in 1978?
_Yet this growth rate was viewed with more alarm than pride._ (comment)
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
... every memory location is a stack.
I'm thinking of a notation like:
A => CADR (yes, I'm a LISP fan)
A' => A[0]
A'' => A[-1]
etc.
Not quite APL\360, but I guess pretty close :-)
-- Dave
Has there ever been a full implementation of PL/I? It seems akin to
solving the halting problem...
Yes, I've used PL/I in my CompSci days, and was told that IBM had trademarked
everything from /I to /C :-)
-- Dave, who loved PL/360
Good morning, I was reading recently on the earliest days of COBOL and from what I could gather, the picture (heh) regarding compilers looks something like this:
- RCA introduced the "Narrator" compiler for the 501[1] in November, 1960 (although it had been demonstrated in August of that year)
- Sperry Rand introduced a COBOL compiler running under the BOSS operating system[2] sometime in late 1960 or early 1961 (according to Wikipedia which then cites [3]).
First, I had some trouble figuring out the loading environment on the RCA 501: would this have been a "set the start address on the console, load cards/PPT, and go" sort of setup, or was there an OS/monitor involved?
And for the latter, there is a UNIVAC III COBOL Programmer's Guide (U-3389) cited in [2] but thus far I haven't found this online. Does anyone know if COBOL was also on the UNIVAC II (or I) and/or if any distinct COBOL documents from this era of UNIVAC survive anywhere?
- Matt G.
[1] - http://bitsavers.org/pdf/rca/501/RCA501_COBOL_Narrator_Dec60.pdf
[2] - http://bitsavers.org/pdf/univac/univac3/U-3521_BOSS_III_Jan63.pdf
[3] - Williams, Kathleen Broome (10 November 2012). Grace Hopper: Admiral of the Cyber Sea.
[ moved to coff ]
On Thu, Mar 14, 2024 at 7:49 AM Theodore Ts'o <tytso(a)mit.edu> wrote:
> On Thu, Mar 14, 2024 at 11:44:45AM +1100, Alexis wrote:
> >
> > i basically agree. i won't dwell on this too much further because i
> > recognise that i'm going off-topic, list-wise, but:
> >
> > i think part of the problem is related to different people having
> > different preferences around the interfaces they want/need for
> > discussions. What's happened is that - for reasons i feel are
> > typically due to a lock-in-oriented business model - many discussion
> > systems don't provide different interfaces/'views' to the same
> > underlying discussions. Which results in one community on platform X,
> > another community on platform Y, another community on platform Z
> > .... Whereas, for example, the 'Rocksolid Light' BBS/forum software
> > provides a Web-based interface to an underlying NNTP-based system,
> > such that people can use their NNTP clients to engage in forum
> > discussions. i wish this sort of approach was more common.
>
> This is a bit off-topic, and so if we need to push this to a different
> list (I'm not sure COFF is much better?), let's do so --- but this is
> a conversation which is super-important to have. If not just for Unix
> heritage, but for the heritage of other collective systems-related
> projects, whether they be open source or proprietary.
>
> A few weeks ago, there were people who showed up on the git mailing
> list requesting that discussion of the git system move from the
> mailing list to using a "forge" web-based system, such as github or
> gitlab. Their reason was that there were tons of people who think
> e-mail is so 1970's, and that if we wanted to be more welcoming to the
> up-and-coming programmers, we should meet them where they were at. The
> obvious observations were made that github is proprietary and that
> locking up our history there might be contra-indicated, and the problem
> with gitlab is that it doesn't have a good e-mail gateway; and while we
> might be disenfranchising the young'uns by not using some new-fangled
> web interface, disenfranchising the existing base of expertise was an
> even worse idea.
>
> The best that we have today is lore.kernel.org, which is used by both
> the Linux Kernel and the git development communities. It uses
> public-inbox to archive the mailing list traffic, and it can be
> accessed via threaded e-mail interface, as well as via NNTP. There
> are also tools for subscribing to messages that match a filtering
> criteria, as well as tools for extracting patches plus code review
> sign-offs into a form that can be easily consumed by git.
>
email based flows are horrible. Absolutely the worst. They are impossible
to manage. You can't subscribe to them w/o insane email filtering rules,
you can't discover patches or lost patches easily. There's no standard way
to do something as simple as say 'never mind'. There's no easy way
to follow much of the discussion or find it after the fact if your email was
filtered off (ok, yea, there kinda is, if you know which archives to troll
through).
As someone who recently started contributing to QEMU I couldn't get over
how primitive the email interaction was. You magically have to know who
to CC on the patches. You have to look at the maintainers file, which is
often
stale and many of the people you CC never respond. If a patch is dropped
or overlooked, it's up to me to nag people to please take a look. There's
no good way for me to find stuff adjacent to my area (it can be done, but
it takes a lot of work).
So you like it because you're used to it. I'm firmly convinced that the
email workflow works only because of the 30 years of tooling, workarounds,
extra scripts, extra tools, cult knowledge, and any number of other "living
with the poo, so best polish up" projects. It's horrible. It's like
everybody has collective Stockholm syndrome.
The people begging for a forge don't care what the forge is. Your
philosophical objections to one are blinding you to things like self-hosted
gitea, gitlab, and gerrit, which are light years ahead of this insane
workflow.
I'm no spring chicken (I sent you patches, IIRC, when you and Bruce were
having the great serial port bake off). I've done FreeBSD for the past 30
years and we have none of that nonsense. The tracking isn't as exacting as
Linux, sure, I'll grant. The code review tools we've used over the years
are good enough, but everybody that's used them has ideas to make them
better. We even accept pull requests from github, but our source of truth
is away from github. We've taken an all-of-the-above approach and it makes
the project more approachable. In addition, I can land reviewed and tested
code in FreeBSD in under an hour (including the review and acceptance
testing process). This makes it way more efficient for me to do things in
FreeBSD than in QEMU, where the turnaround time is days, where I have to
wait for the one true pusher to get around to my pull request, and where I
have to go through weeks-long processes to get things done (and I've
graduated to maintainer status).
So the summary might be that email is so 1970s, but the real problem with
it is that it requires a huge learning curve. But really, it's not
something any sane person would design from scratch today; it has all
these rules you have to cope with, many unwritten. You have to hope that
the right people didn't screw up their email filters. You have to wait
days or weeks for an answer, and the enthusiasm to contribute dies in that
time.
A quick turnaround time is essential for driving enthusiasm for new
committers in the community. It's one of my failings in running FreeBSD's
github experiment: it takes me too long to land things, even though we've
gone from years to get an answer to days to weeks....
I studied the Linux approach when FreeBSD was looking to improve its git
workflow. And none of the current developers think it's a good idea. In
fact, I got huge amounts of grief, death threats, etc. for even suggesting
it. Everybody thought, to a person, that as bad as our hodge-podge of
bugzilla, phabricator and cruddy git push hooks is, it was lightyears
ahead of Linux's system and allowed us to move more quickly and produce
results that were good enough.
So, respectfully, I think Linux has succeeded despite its tooling, not
because of it. Other factors
have made it successful. The heroics that are needed to make it work are
possible only
because there's a huge supply that can be wasted and inefficiently deployed
and still meet
the needs of the project.
Warner
On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote:
>
> I eventually reverted back to Linux because it was clear that the
> user community was getting much larger, I was using it
> professionally at work and there was just a larger range of
> applications available. Lately, I find myself getting tired of the
> bloat and how big and messy and complicated it has all gotten.
> Thinking of looking for something simpler and was just wondering
> what do other old timers use for their primary home computing needs?
I'm surprised how few of the responders use BSD. My machines all
(currently) run FreeBSD, with the exception of a Microsoft box
(distress.lemis.com) that I use remotely for photo processing. I've
tried Linux (used to work developing Linux kernel code), but I
couldn't really make friends with it. It sounds like our reasons are
similar.
More details:
1977-1984: CP/M, 86-DOS
1984-1990: MS-DOS
1991-1992: Interactive UNIX
1992-1997: BSD/386, BSD/OS
1997-now: FreeBSD
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
This is UNIX history, but since the Internet's history and Unix history are
so intertwined, I'm going to risk the wrath of the IH moderators to try to
explain, as I was one of the folks who was at the table in those times
and participated in my small way in both events: the birth of the Internet
and the spreading of the UNIX IP.
More details can be found in a paper I did a few years ago:
https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-…
[If you cannot find it and are interested send me email off list and I'll
forward it].
And ... if people want to continue this discussion -- please, please, move
it to the more appropriate COFF mailing list:
https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in
this reply.
On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history <
internet-history(a)elists.isoc.org> wrote:
> Forwarded for Barbara
>
> > I will admit your response is confusing me. My post only concerns what
> I think I remember as a problem in getting BSD UNIX, in particular the
> source code. Nothing about getting something we wanted to use on a
> hardware platform from one of the commercial vendors. We needed the BSD
> source but got hung up.
>
Let me see if I can explain better ...
Assuming you were running BSD UNIX on a Vax, your team would have needed
two things:
- an AT&T license for 32/V [the port of Research Version 7 to a Vax/780 at
AT&T], and
- a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of
the University of CA.
The first license gave your team a few core rights from AT&T:
1. the right to run UNIX binaries on a single CPU (which was named in
your license)
2. the right to look at and modify the sources,
3. the right to create derivative works from the AT&T IP, and
4. the right to exchange your derivative works with other people that
held a similar license from AT&T.
[AT&T had been forced to allow this access (license) to their IP under the
rules of the 1956 consent decree - see the paper for more details, but
remember, as part of the consent decree allowing it to have a legal monopoly
on the phone system, AT&T had to make its IP available to the US Gov --
which I'm guessing is the crux of Barbara's question/observation].
For not-for-profits (University/Research), a small fee was allowed to be
charged (order of 1-2 hundred $s) to process the paperwork and copy the mag
tape. But their IP came without any warranty, and you had to hold AT&T
harmless if you used it. In those days, we referred to this as *the UNIX IP
was abandoned on your doorstep.* BTW: This license allowed the research
sites to move AT&T derivative work (binaries) within their site freely.
Still, if you look at the license carefully, most had a restriction
(often/usually ignored at the universities) that the sources were supposed
to only be available on the original CPU named in their specific license.
Thus, if you held a University license, no fees were charged to run the
AT&T IP on other CPUs --> however, the licensees were not allowed to use it
for "commercial" users at the University [BTW: this clause was often
ignored, although a group of us CMU hackers in the late 1970s famously
went on strike until the University obtained at least one commercial
license]. The agreement was that a single CPU should be officially bound
for all commercial use for that institution. I am aware that Case-Western
got a similar license soon after CMU did (their folks found out about
the CMU strike/license). But I do not know if MIT, Stanford, or UCB
officials came clean on that part and paid for a commercial license
(depending on the type of license, its cost was on the order of $20K-25K for
the first CPU and on the order of $7K-10K for each CPU afterward - each of
these "additional CPUs" could also have the sources, but had to be named in an
appendix to the license with AT&T). I believe that some of the larger
state schools like Penn State, Rutgers, Purdue, and UW started to follow
that practice by the time Unix started to spread around each campus.
That said, a different license for UNIX-based IP could be granted by the
Regents of the University of CA and managed by its 'Industrial
Liaison Office' at UCB (the 'IOL' - the same folks that brought licenses
for tools like SPICE, SPLICE, MOTIS, *et al*). This license gave the holder
the right to examine and use the UCB's derivative works on anything as long
as you acknowledged that you got that from UCB and held the
Regents blameless [we often called this the 'dead-fish license' -- *you
could make a chip, make a computer, or even wrap dead-fish in it.* But you
had to say you started with something from the Regents, but they were not
to be blamed for what you did with it].
The Regents were exercising rights 3 and 4 from AT&T. Thus, a team who
wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a*. BSD)
needed to demonstrate that they held the appropriate license from AT&T
[send a copy of the signature page from your license to the IOL] before UCB
would release the bits. There was also a small processing fee payable to the
IOL, on the order of $1K. [The original BSD is unnumbered, although most refer to
it today as 1BSD to differentiate it from later BSD releases for UNIX].
Before I go on, in those times, the standard way we operated was that you
needed to have a copy of someone else's signature page to share things. In
what would later become USENIX (truth here - I'm an ex-president of the
same), you could only get invited and come to a conference if you were
licensed from AT&T. That was not a big deal. We all knew each other.
FWIW: at different times in my career, I have had a hanging file in a
cabinet with copies of a number of these pages from different folks, with
whom I would share mag tapes (remember this is pre-Internet, and many of
the folks using UNIX were not part of the ARPAnet).
However, the song has other verses that make this a little confusing.
If your team obtained a *commercial use license* from AT&T, they could
further obtain a *commercial redistribution license*. This was initially
granted for the Research Seventh Edition. It was later rewritten (with the
business terms changing each time) for what would eventually be called
System III[1], and then the different System V releases. The price of the
redistribution license for V7 was $150K, plus a sliding scale per CPU on which
you ran the AT&T IP, depending on the number of CPUs you needed. With this, the
single-CPU restriction on the sources was removed.
So ... if you had a redistribution license, you could also get a license
from the Regents, and as long as you obeyed their rules, you could sell a
copy of UNIX to run on any licensed target. Traditionally, hardware was
part of the same purchase when purchased from a firm like DEC, IBM,
Masscomp, *etc*. However, separate SW licenses were sold via firms such as
Microsoft and Mt. Xinu. The purchaser of a *binary license* from one of
those firms did not have the right to do anything but use the AT&T
derivative work. If your team had a binary license, you could not obtain
any of the BSD distributions until the so-called 'NET2' BSD release [and
I'm going to ignore the whole AT&T/BSDi/Regents case here as it is not
relevant to Barbara's question/comment].
So the question is, how did a DoD contractor, be it BBN, Ford Aerospace,
SRI, etc., originally get access to UNIX IP? Universities and traditional
research teams could get a research license. Commercial firms like DEC
needed a commercial license. Folks with DoD contracts were in a hazy
area. The original v5 commercial licensee was written for Rand, a DoD
contractor. However, as discussed here in the IH mailing list and
elsewhere, some places like BBN had access to the core UNIX IP as part of
their DoD contracts. I believe Ford Aerospace was working together with AT&T
as part of another US Gov project - which is how UNIX got there
originally (Ford Aero could use it for that project, but not the folks at
Ford Motors, for instance).
The point is, if you accessed the *IP indirectly* like that, then
your site probably did not have a negotiated license with a signature page
to send to someone.
@Barbara, I can not say for sure, but if this was either a PDP-11 or a VAX
and you wanted one of the BSDs, I guess/suspect that maybe your team was
dealing with an indirect path to AT&T licensing -- your site license might
have come from a US Gov contract, not directly. So trying to get a BSD tape
directly from the IOL might have been more difficult without a signature
page.
So, rolling back to the original question: you could get access to the BSD
sources, but you had to demonstrate to the IOL folks in UCB's Cory Hall that you were
legally allowed access to the AT&T IP in source code form. That
demonstration was traditionally fulfilled with a xerographic copy of the
signature page for your institution, which the IOL kept on file. That
said, if you had legal access to the AT&T IP by indirect means, I do not
know how the IOL completed that check or what they needed to protect the
Regents.
Clem
1.] What would be called System III from a marketing standpoint was
originally developed as PWB 3.0. This was the system a number of firms,
including my own, were discussing with AT&T at the famous meetings at
'Ricky's Hyatt' during the price (re)negotiations after the original V7
redistribution license.
On 3/7/24, Tom Lyon <pugs78(a)gmail.com> wrote:
> For no good reason, I've been wondering about the early history of C
> compilers that were not derived from Ritchie, Johnson, and Snyder at Bell.
> Especially for x86. Anyone have tales?
> Were any of those compilers ever used to port UNIX?
>
[topic of interest to COFF, as well, I think]
DEC's Ultrix for VAX and MIPS used off-the-shelf Unix cc. I don't
recall what they used for Alpha.
The C compiler for VAX/VMS was written by Dave Cutler's team at
DECwest in Seattle. The C front end generated intermediate language
(IL) for Cutler's VAX Code Generator (VCG), which was designed to be a
common back end for DEC's compilers for VAX/VMS. His team also
licensed the Freiburghouse PL/I front end (commercial version of a
PL/I compiler originally done for Multics) and modified it to generate
VCG IL. The VCG was also the back end for DEC's Ada compiler. VCG
was superseded by the GEM back end, which supported Alpha and Itanium.
A port of GEM to x86 was in progress at the time Compaq sold off the
Alpha technology (including GEM and its C and Fortran front ends) to
Intel.
Currently I've only got an older laptop at home still running Windows 10
Pro. Mostly I use a company provided HP ProBook 440 G7 with Windows 11 Pro.
I installed WSL2 to run Ubuntu 20.04 if only because I wanted to mount UFS
ISO images 😊
Still employed I have access to lots of UNIX servers, SCO UNIX 3.2V4.2 on
Intel based servers, Tru64 on AlphaServers, HP-UX 11.23/11.31 on Itanium
servers. There's an rx-server rx2660 I can call my own but even in a
testroom I can hear it. Reluctant to take home. My electricity bill would
also explode I think.
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
Dropped TUHS; added COFF.
> * Numerous editors show up on different systems, including STOPGAP on
> the MIT PDP6, eventually SOS, TECO, EMACs, etc., and most have some
> concept of a 'line of text' to distinguish from a 'card image.'
I'd like to expand on this, since I never heard about STOPGAP or SOS on the
MIT PDP-6/10 computers. TECO was ported over to the 6 only a few weeks
after delivery, and that seems to have been the major editor ever since.
Did you think of the SAIL PDP-6?
> From: Clem Cole
> the idea of a text editor existed long before Ken's version of QED,
> much less, ed(1). Most importantly, Ken's QED came after the original
> QED, which came after other text editors.
Yes; some of the history is given here:
An incomplete history of the QED Text Editor
https://www.bell-labs.com/usr/dmr/www/qed.html
Ken would have run into the original on the Berkeley Time-Sharing System; he
apparently wrote the CTSS one based on his experience with the one on the BTSS.
Oddly enough, CTSS seems to have not had much of an editor before. The
Programmer's Guide has an entry for 'Edit' (Section AH.3.01), but 'edit file'
seems to basically do a (in later terminology) 'cat >> file'. Section AE
seems to indicate that most 'editing' was done by punching new cards on a
key-punch!
The PDP-1 was apparently similar, except that it used paper tape. Editing
paper tapes was difficult enough that Dan Murphy came up with TECO - original
name 'Tape Editor and Corrector':
https://opost.com/tenex/anhc-31-4-anec.pdf
> Will had asked -- how did people learn to use reg-ex?
I learned it from reading the 'sh' and 'ed' V6 man pages.
The MIT V6 systems had TECO (with a ^R mode even), but I started out with ed,
since it was more like editors I had previously used.
Noel
Might interest the bods here too...
-- Dave
---------- Forwarded message ----------
From: Paul Ruizendaal
To: "tuhs(a)tuhs.org" <tuhs(a)tuhs.org>
Subject: [TUHS] RIP Niklaus Wirth, RIP John Walker
Earlier this year two well known computer scientists passed away.
On New Year’s Day it was Niklaus Wirth aged 90. A month later it was John Walker aged 75. Both have some indirect links to Unix.
For Wirth the link is that a few sources claim that Plan 9 and the Go language are in part influenced by the design ideas of Oberon, the language and the OS. Maybe others on this list know more about those influences.
For Walker, the link is via the company that he was running as a side-business before he got underway with AutoCAD: https://www.fourmilab.ch/documents/marinchip/
In that business he was selling a 16-bit system for the S-100 bus, based around the TI9900 CPU (which from a programmer perspective is quite similar to a PDP11). For that system he wrote a Unix-like operating system around 1978-1980, called NOS/MT. He had never worked with Unix, but had studied the BSTJ issues about it. It was fully written in assembler.
The design was rather unique, maybe inspired by Heinz Lycklama’s “Satellite Processor” paper in BSTJ 57-6. It has a central microkernel that handles message exchange, process scheduling and memory management. Each system call is a message. However, the system call message is then passed on to a privileged “fat kernel” process that handles it. The idea was to provide multiprocessor and network transparency: the microkernel could decide to run processes on other boards in the same rack or on remote systems over a network. Also the kernel processes could be remote. Hence its name “Network Operating System / Multitasking” or “NOS/MT”.
The system calls are pretty similar to Unix. The file system is implemented very similar to Unix (with i-nodes etc.), with some notable differences (there are file locking primitives and formatting a disk is a system call). File handles are not shareable, so special treatment for stdin/out/err is hardcoded. Scheduling and memory management are totally different -- unsurprising as in both cases it reflects the underlying hardware.
Just as NOS/MT was getting into a usable state, John decided to pivot to packaged software including a precursor of what would become the AutoCAD package. What was there worked and saw some use in the UK and Denmark in the 1980’s -- there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it. I guess it will remain one of those many “what if” things in computer history.
A friend of mine has a DECmate-II word processor. It is in perfect
working order except for one thing. The field encoding the current
date/time has overflowed. It is impossible to set a date/time in the
21st century.
He says that the software in question is a version of WPS-8 for the
PDP-8. It should be possible to fix the date/time problem by dumping
the DECmate's ROM and disassembling the code. It ought not be too
hard to locate the date/time encode/decode routine and come up with a
fix to the time epoch problem.
Is anyone out there familiar with the DECmate-II software? Or, even
better, knows how to get its source code?
Advice greatly appreciated.
-Paul W.
I've just uploaded a couple new items to archive.org that
folks may find interesting:
https://archive.org/details/5ess-2000-switch-es5431-office-data-base-1998
https://archive.org/details/5ess-2000-switch-es5432-system-analysis-1998
Linked above are the ES5431 (Office Data Base Installation)
and ES5432 (System Analysis) training CDs as produced by
Bell Laboratories (Lucent era) for the 5ESS-2000 switch.
Among other things, these CDs contain a 5ESS simulator which
you can see a screenshot of here:
https://www.classicrotaryphones.com/forum/index.php?topic=28001.msg269652#m…
I was able to successfully get it to run on Windows 98 SE in
a virtual machine, although I did break one rule of archiving
optical media in that I didn't take iso rips. I intend to
throw an old FreeBSD hard disk in that computer sometime soon
and do some proper rips with dd(1). In the meantime,
this means using the above archives presents only a partial
experience in that the Training section of the software
appears to depend on the original discs being inserted.
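When I do get to that, the plan is roughly the following sketch - the
FreeBSD device name, block size, and output filename are assumptions for a
typical data CD, not something I've run against these discs yet:

    # Rip the disc in the first CD drive to an ISO image on FreeBSD.
    dd if=/dev/cd0 of=es5431.iso bs=2048 conv=noerror,sync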
In any case, the simulator interests me greatly. I intend
to do a little digging around in it as time goes on to see
if there may be traces of 3B20 emulation or DMERT in the
guts. I'm not holding my breath, but who knows. Either way,
it'll be interesting to play with. Thus far I've only
verified the simulator launches, but have done nothing
with it yet. Picked up Steele's Common Lisp (2nd Edition)
in the same eBay session so time will be split between this,
learning Lisp, and plenty of other little oddball projects
I have going, but if I find anything interesting I'll be
sure to share.
Given that Nokia is shedding 5ESS stuff pretty heavily right
now (or so I've heard) I have to wonder if more of this
stuff will start popping up in online market places. Word
over on the telephone forum is that some folks in Nokia
do have an interest in preserving 5ESS knowledge and
materials but are getting the expected apathy and lack of
engagement from higher ups. Hopefully this at least means
Nokia doesn't mind too much this stuff getting archived if
they don't have to do any of the footwork :)
- Matt G.
On Wed Feb 23 16:33, 1994, I turned on the web service on my machine
"minnie", originally minnie.cs.adfa.edu.au, now minnie.tuhs.org (aka
www.tuhs.org) The web service has been running continuously for thirty
years, except for occasional downtimes and hardware/software upgrades.
I think this makes minnie one of the longest running web services
still in existence :-)
For your enjoyment, I've restored a snapshot of the web site from
around mid-1994. It is visible at https://minnie.tuhs.org/94Web/
Some hyperlinks are broken.
## Web Logs
The web logs show me testing the service locally on Feb 23 1994,
with the first international web fetches on Feb 26:
```
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:13 1994] GET / HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:18 1994] GET /BSD.html HTTP/1.0
sparcserve.cs.adfa.oz.au [Wed Feb 23 16:33:20 1994] GET /Images/demon1.gif HTTP/1.0
...
estcs1.estec.esa.nl [Sat Feb 26 01:48:21 1994] GET /BSD-info/BSD.html HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:48:30 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
estcs1.estec.esa.nl [Sat Feb 26 01:49:46 1994] GET /BSD-info/cdrom.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:20 1994] GET /BSD-info/BSD.html HTTP/1.0
shazam.cs.iastate.edu [Sat Feb 26 06:31:24 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:04 1994] GET /BSD-info/BSD.html HTTP/1.0
dem0nmac.mgh.harvard.edu [Sat Feb 26 06:32:10 1994] GET /BSD-info/Images/demon1.gif HTTP/1.0
```
## Minnie to This Point
Minnie originally started life in May 1991 as an FTP server running KA9Q NOS
on an IBM XT with a 30M RLL disk, see https://minnie.tuhs.org/minannounce.txt
By February 1994 Minnie was running FreeBSD 1.0e on a 386DX25 with 500M
of disk space, 8M of RAM and a 10Base2 network connection. I'd received a copy
of the BSDisc Vol.1 No.1 in December 1993. According to the date on the file
`RELNOTES.FreeBSD` on the CD, FreeBSD 1.0e was released on Oct 28 1993.
## The Web Server
I'd gone to a summer conference in Canberra in mid-February 1994 (see
pg. 29 of https://www.tuhs.org/Archive/Documentation/AUUGN/AUUGN-V15.1.pdf
and https://minnie.tuhs.org/94Web/Canberra-AUUG/cauugs94.html, 10am)
and I'd seen the Mosaic web browser in action. With FreeBSD running on
minnie, it seemed like a good idea to set up a web server on her.
NCSA HTTPd server v1.1 had been released at the end of Jan 1994, see
http://1997.webhistory.org/www.lists/www-talk.1994q1/0282.html
It was the obvious choice to be the web server on minnie.
## Minnie from Then to Now
You can read more about minnie's history and her hardware/software
evolution here: https://minnie.tuhs.org/minnie.html
I obtained the "tuhs.org" domain in May 2000 and switched minnie's
domain name from "minnie.cs.adfa.edu.au" to "minnie.tuhs.org".
Cheers!
Warren
P.S. I couldn't wait until Friday to post this :-)
After learning of Dave's death, a professor I very much enjoyed as a U
of Delaware EE student, I came across this page
https://www.eecis.udel.edu/~mills/gallery/gallery9.html
This reminds me of his lectures, the occasional 90 degree turn into who
knows what, but guaranteed to be interesting. And if anyone has a UDel
Hollerith card they're willing to part with, please get in touch. I
have none. :-(
Mike Markowski
Howdy folks, just finished an exciting series of repairs and now have a DEC VT100 plumbed into a Western Electric Data Set 103J. I was able to supply an answer tone (~2250Hz) at which point the modem began transmitting data. I could then pull the answer tone down and the connection remained, with keypresses on the VT100 properly translating to noise on the line.
Really all I have left is to see if it can do the real thing. I'm keeping an eye out for another such modem but in the meantime, is anyone aware of any 300-baud systems out there in the world that are currently accepting dial-ins? I don't have POTS at home but they do at my music practice space and if there is such a machine out there, I kinda wanna take my terminal and modem down there and see if I can straight up call a computer over this thing.
I've got other experiments planned too like just feeding it 300-baud modem noise to see if I get the proper text on the screen, that sort of thing, but figured this would be an interesting possibility to put feelers out for.
On that same note, if I get another modem and a stable POTS number to expose it via, I'm considering offering the same, a 300-baud UNIX-y system folks can just call and experiment with (realistically probably a SimH machine with the pty/tty socat'd together)
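(For the curious, the plumbing I have in mind is roughly the sketch below;
the serial device, SimH TCP port, and line speed are assumptions, not a
tested configuration:)

    # Bridge the serial port the modem answers on to a SimH machine
    # assumed to have a terminal line attached to localhost port 2323.
    socat /dev/ttyU0,raw,echo=0,b300 TCP:localhost:2323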
- Matt G.
Seen on TUHS...
-- Dave
---------- Forwarded message ----------
Date: Sat, 20 Jan 2024 12:27:41 +1000
From: George Michaelson <ggm(a)algebras.org>
To: The Eunuchs Hysterical Society <TUHS(a)tuhs.org>
Subject: [TUHS] (Off topic) Dave Mills
Dave Mills, of fuzzball and ntp fame, one time U Delaware died on the 17th
of January.
He was an interesting, entertaining, prolific and rather idosyncratic
emailer. Witty and informative.
G
Not really UNIX -- so I'm BCC TUHS and moving to COFF
On Tue, Jan 9, 2024 at 12:19 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
> On the subject of troff origins, in a world where troff didn't exist, and
> one purchases a C/A/T, what was the general approach to actually using the
> thing? Was there some sort of datasheet the vendor supplied that the end
> user would have to program a driver around, or was there any sort of
> example code or other materials provided to give folks a leg up on using
> their new, expensive instrument? Did they have any "packaged bundles" for
> users of prominent systems such as 360/370 OSs or say one of the DEC OSs?
>
Basically, the phototypesetter part was turnkey with a built-in
minicomputer with a paper tape unit, later a micro and a floppy disk as a
cost reduction. The preparation for the typesetter was often done
independently, but often the vendor offered some system to prepare the PPT
or Floppy. Different typesetter vendors targeted different parts of the
market, from small local independent newspapers (such as the one my sister
and her husband owned and ran in North Andover MA for many years), to
systems that Globe or the Times might. Similarly, books and magazines
might have different systems (IIRC the APS-5 was originally targeted for
large book publishers). This was all referred to as the 'pre-press'
industry and there were lots of players in different parts.
Large firms that produced documentation, such as DEC, AT&T *et al*., and
even some universities, might own their own gear, or they might send it out
to be set.
The software varied greatly, depending on the target customer. For
instance, by the early 80s, the Boston Globe's input system was still
terrible - even though the computers had gotten better. I had a couple of
friends working there, and they used to b*tch about it. But big newspapers
(and I expect many other large publishers) were often heavy union shops on
the back end (layout and presses), so the editors just wanted to set strips
of "column wide" text as the layout was manual. I've forgotten the name of
the vendor of the typesetter they used, but it was one of the larger firms
-- IIRC, it had a DG Nova in it. My sister used CompuGraphic Gear, which
was based on 8085's. She had two custom editing stations and the
typesetter itself (it sucked). The whole system was under $35K in
late-1970s money - but targeted to small newspapers like hers. In the
mid-1980s, I got her a Masscomp at a reduced price and put 6 Wyse-75
terminals on it, so she could have her folks edit their stories with vi,
run spell, and some of the other UNIX tools. I then reverse-engineered the
floppy enough to split out the format she wanted for her stories -- she
used a manual layout scheme. She still had to use the custom stuff for
headlines and some other parts, but it was a load faster and more parallel
(for instance, we wrote an awk script to generate the School Lunch menus,
which they published each week).
[TUHS bcc, moved to COFF]
On Thursday, January 4th, 2024 at 10:26 AM, Kevin Bowling <kevin.bowling(a)kev009.com> wrote:
> For whatever reason, intel makes it difficult to impossible to remove
> the ME in later generations.
Part of me wonders if the general computing industry is starting to cheat off of the smartphone sector's homework, this phenomenon where whole critical components of a hardware device you literally own are still heavily controlled and provisioned by the vendor unless you do a whole bunch of tinkering to break through their stuff and "root" your device. That I can fully pay for and own a "computer" and I am not granted full root control over that device is one of the key things that keeps "smart" devices besides my work issued mobile at arms length.
For me this smells of the same stuff, they've gotten outside of the lane of *essential to function* design decisions and instead have now put in a "feature" that you are only guaranteed to opt out of by purchasing an entirely different product. In other words, the only guaranteed recourse if a CPU has something like this going on is to not use that CPU, rather than as the device owner having leeway to do what you want. Depends on the vendor really, some give more control than others, but IMO there is only one level of control you give to someone who has bought and paid for a complete device: unlimited. Anything else suggests they do not own the device, it is a permanently leased product that just stops requiring payments after a while, but if I don't get the keys, I don't consider myself to own it, I'm just borrowing it, kinda like how the Bell System used to own your telephone no matter how many decades it had been sitting on your desk.
My two cents, much of this can also be said of BIOS, UEFI, anything else that gets between you and the CPUs reset vector. Is it a nice option to have some vendor provided blob to do your DRAM training, possibly transition out of real mode, enumerate devices, whatever. Absolutely, but it's nice as an *option* that can be turned off should I want to study and commit to doing those things myself. I fear we are approaching an age where the only way you get reset vector is by breadboarding your own thing. I get wanting to protect users from say bricking the most basic firmware on a board, but if I want to risk that, I should be completely free to do so on a device I've fully paid for. For me the key point of contention is choice and consent. I'm fine having this as a selectable option. I'm not fine with it becoming an endemic "requirement." Are we there yet? Can't say, I don't run anything serious on x86-family stuff, not that ARM and RISC-V don't also have weird stuff like this going on. SBI and all that are their own wonderful kettle of fish.
BTW sorry that's pretty rambly, the lack of intimate user control over especially smart devices these days is one of the pillars of my gripes with modern tech. Only time will tell how this plays out. Unfortunately the general public just isn't educated enough (by design, not their own fault) on their rights to really get a big push on a societal scale to change this. People just want I push button I get Netflix, they'll happily throw all their rights in the garbage over bread and circuses....but that ain't new...
- Matt G.