[ moved to coff ]
On Thu, Mar 14, 2024 at 7:49 AM Theodore Ts'o <tytso(a)mit.edu> wrote:
> On Thu, Mar 14, 2024 at 11:44:45AM +1100, Alexis wrote:
> >
> > i basically agree. i won't dwell on this too much further because i
> > recognise that i'm going off-topic, list-wise, but:
> >
> > i think part of the problem is related to different people having
> > different preferences around the interfaces they want/need for
> > discussions. What's happened is that - for reasons i feel are
> > typically due to a lock-in-oriented business model - many discussion
> > systems don't provide different interfaces/'views' to the same
> > underlying discussions. Which results in one community on platform X,
> > another community on platform Y, another community on platform Z
> > .... Whereas, for example, the 'Rocksolid Light' BBS/forum software
> > provides a Web-based interface to an underlying NNTP-based system,
> > such that people can use their NNTP clients to engage in forum
> > discussions. i wish this sort of approach was more common.
>
> This is a bit off-topic, and so if we need to push this to a different
> list (I'm not sure COFF is much better?), let's do so --- but this is
> a conversation which is super-important to have. If not just for Unix
> heritage, but for the heritage of other collective systems-related
> projects, whether they be open source or proprietary.
>
> A few weeks ago, there were people who showed up on the git mailing
> list requesting that discussion of the git system move from the
> mailing list to using a "forge" web-based system, such as github or
> gitlab. Their reason was that there were tons of people who think
> e-mail is so 1970's, and that if we wanted to be more welcoming to the
> up-and-coming programmers, we should meet them where they are. The
> obvious observation was made that github is proprietary, and locking up
> our history there might be contra-indicated; the problem with gitlab is
> that it doesn't have a good e-mail gateway, and while we might be
> disenfranchising the young'uns by not using some new-fangled web
> interface, disenfranchising the existing base of expertise was an even
> worse idea.
>
> The best that we have today is lore.kernel.org, which is used by both
> the Linux Kernel and the git development communities. It uses
> public-inbox to archive the mailing list traffic, and it can be
> accessed via threaded e-mail interface, as well as via NNTP. There
> are also tools for subscribing to messages that match filtering
> criteria, as well as tools for extracting patches plus code review
> sign-offs into a form that can be easily consumed by git.
>
Email-based flows are horrible. Absolutely the worst. They are impossible
to manage. You can't subscribe to them w/o insane email filtering rules,
you can't discover patches or lost patches easily, and there's no standard
way to do something as simple as say 'never mind'. There's no easy way to
follow much of the discussion, or to find it after the fact if your email
was filtered off (ok, yea, there kinda is, if you know which archives to
troll through).

As someone who recently started contributing to QEMU, I couldn't get over
how primitive the email interaction was. You magically have to know who to
CC on the patches. You have to look at the MAINTAINERS file, which is often
stale, and many of the people you CC never respond. If a patch is dropped
or overlooked, it's up to me to nag people to please take a look. There's
no good way for me to find stuff adjacent to my area (it can be done, but
it takes a lot of work).

So you like it because you're used to it. I'm firmly convinced that the
email workflow works only because of 30 years of tooling, workarounds,
extra scripts, extra tools, cult knowledge, and any number of other
"living with the poo, so best polish it up" projects. It's horrible. It's
like everybody has collective Stockholm syndrome.

The people begging for a forge don't care what the forge is. Your
philosophical objections to one are blinding you to things like
self-hosted gitea, gitlab, and gerrit, which are light years ahead of this
insane workflow.
I'm no spring chicken (I sent you patches, IIRC, when you and Bruce were
having the great serial port bake-off). I've done FreeBSD for the past 30
years and we have none of that nonsense. The tracking isn't as exacting as
Linux's, sure, I'll grant. The code review tools we've used over the years
are good enough, but everybody that's used them has ideas to make them
better. We even accept pull requests from github, but our source of truth
is away from github. We've taken an all-of-the-above approach and it makes
the project more approachable. In addition, I can land reviewed and tested
code in FreeBSD in under an hour (including the review and acceptance
testing process). This makes it way more efficient for me to do things in
FreeBSD than in QEMU, where the turnaround time is days, where I have to
wait for the one true pusher to get around to my pull request, and where I
have to go through weeks-long processes to get things done (and I've
graduated to maintainer status).
So the summary might be that email is so 1970s, but the real problem with
it is that it requires a huge learning curve. It's really not something any
sane person would design from scratch today: it has all these rules you
have to cope with, many unwritten. You have to hope that the right people
didn't screw up their email filters. You have to wait days or weeks for an
answer, and the enthusiasm to contribute dies in that time. A quick
turnaround time is essential for driving enthusiasm for new committers in
the community. It's one of my failings in running FreeBSD's github
experiment: it takes me too long to land things, even though we've gone
from years to get an answer to days to weeks....
I studied the Linux approach when FreeBSD was looking to improve its git
workflow. And none of the current developers think it's a good idea. In
fact, I got huge amounts of grief, death threats, etc., for even suggesting
it. Everybody thought, to a person, that as bad as our hodge-podge of
bugzilla, phabricator, and cruddy git push hooks is, it was light years
ahead of Linux's system and allowed us to move more quickly and produce
results that were good enough.

So, respectfully, I think Linux has succeeded despite its tooling, not
because of it. Other factors have made it successful. The heroics that are
needed to make it work are possible only because there's a huge supply of
developer effort that can be wasted and inefficiently deployed and still
meet the needs of the project.
Warner
On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote:
>
> I eventually reverted to Linux because it was clear that the
> user community was getting much larger, I was using it
> professionally at work and there was just a larger range of
> applications available. Lately, I find myself getting tired of the
> bloat and how big and messy and complicated it has all gotten.
> Thinking of looking for something simpler and was just wondering
> what do other old timers use for their primary home computing needs?
I'm surprised how few of the responders use BSD. My machines all
(currently) run FreeBSD, with the exception of a Microsoft box
(distress.lemis.com) that I use remotely for photo processing. I've
tried Linux (used to work developing Linux kernel code), but I
couldn't really make friends with it. It sounds like our reasons are
similar.
More details:
1977-1984: CP/M, 86-DOS
1984-1990: MS-DOS
1991-1992: Inactive UNIX
1992-1997: BSD/386, BSD/OS
1997-now: FreeBSD
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
This is UNIX history, but since the Internet's history and Unix history are
so intertwined, I'm going to risk the wrath of the IH moderators to try to
explain, as I was one of the folks who was at the table in those times
and participated in my small way in both events: the birth of the Internet
and the spreading of the UNIX IP.
More details can be found in a paper I did a few years ago:
https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-…
[If you cannot find it and are interested, send me email off-list and I'll
forward it.]
And ... if people want to continue this discussion -- please, please, move
it to the more appropriate COFF mailing list:
https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in
this reply.
On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history <
internet-history(a)elists.isoc.org> wrote:
> Forwarded for Barbara
>
> > I will admit your response is confusing me. My post only concerns what
> I think I remember as a problem in getting BSD UNIX, in particular the
> source code. Nothing about getting something we wanted to use on a
> hardware platform from one of the commercial vendors. We needed the BSD
> source but got hung up.
>
Let me see if I can explain better ...
Assuming you were running BSD UNIX on a Vax, your team would have needed
two things:
- an AT&T license for 32/V [Research Version 7 -- port to a Vax/780 at
AT&T], and
- a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of
the University of CA.
The first license gave your team a few core rights from AT&T:
1. the right to run UNIX binaries on a single CPU (which was named in
your license),
2. the right to look at and modify the sources,
3. the right to create derivative works from the AT&T IP, and
4. the right to exchange your derivative works with other people that
held a similar license from AT&T.
[AT&T had been forced to allow this access (license) to their IP under the
rules of the 1956 consent decree - see the paper for more details, but
remember, as part of the consent decree that allowed it to keep a legal
monopoly on the phone system, AT&T had to make its IP available to the US
Gov -- which I'm guessing is the crux of Barbara's question/observation].
For not-for-profits (University/Research), a small fee was allowed to be
charged (order of 1-2 hundred $s) to process the paperwork and copy the mag
tape. But their IP came without any warranty, and you had to hold AT&T
harmless if you used it. In those days, we referred to this as *the UNIX IP
was abandoned on your doorstep.* BTW: This license allowed the research
sites to move AT&T derivative work (binaries) within their site freely.
Still, if you look at the license carefully, most had a restriction
(often/usually ignored at the universities) that the sources were supposed
to only be available on the original CPU named in their specific license.
Thus, if you had a University license, no fees were charged to run the
AT&T IP on other CPUs --> however, the licensees were not allowed to use it
for "commercial" uses at the University [BTW: this clause was often
ignored, although a group of us CMU hackers in the late 1970s famously
went on strike until the University obtained at least one commercial
license]. The agreement was that a single CPU should be officially bound
for all commercial use for that institution. I am aware that Case Western
got a similar license soon after CMU did (their folks found out about
the CMU strike/license). But I do not know if MIT, Stanford, or UCB
officials came clean on that part and paid for a commercial license
(depending on the type of license, its cost was on the order of $20K-25K
for the first CPU and on the order of $7K-10K for each CPU afterward - each
of these "additional CPUs" could also have the sources - but each was named
in an appendix to the license with AT&T). I believe that some of the larger
state schools like Penn State, Rutgers, Purdue, and UW started to follow
that practice by the time Unix started to spread around each campus.
That said, a different license for UNIX-based IP could be granted by the
Regents of the University of CA and managed by UCB's 'Industrial Liaison
Office' (the 'ILO' - the same folks that brought you licenses for tools
like SPICE, SPLICE, MOTIS, *et al*). This license gave the holder the
right to examine and use UCB's derivative works on anything as long as
you acknowledged that you got that from UCB and held the Regents
blameless [we often called this the 'dead-fish license' -- *you could make
a chip, make a computer, or even wrap dead fish in it.* You had to say you
started with something from the Regents, but they were not to be blamed
for what you did with it].
The Regents were exercising rights 3 and 4 from AT&T. Thus, a team that
wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a*. BSD)
needed to demonstrate that they held the appropriate license from AT&T
[send a copy of the signature page from your license to the ILO] before UCB
would release the bits. There was also a small processing fee to the ILO,
on the order of $1K. [The original BSD is unnumbered, although most refer
to it today as 1BSD to differentiate it from later BSD releases for UNIX].
Before I go on, in those times, the standard way we operated was that you
needed to have a copy of someone else's signature page to share things. In
what would later become USENIX (truth here - I'm an ex-president of the
same), you could only get invited and come to a conference if you were
licensed from AT&T. That was not a big deal. We all knew each other.
FWIW: at different times in my career, I have had a hanging file in a
cabinet with copies of a number of these pages from different folks with
whom I would share mag tapes (remember, this is pre-Internet, and many of
the folks using UNIX were not part of the ARPAnet).
However, the song has other verses that make this a little confusing.
If your team obtained a *commercial use license* from AT&T, they could
further obtain a *commercial redistribution license*. This was initially
granted for the Research Seventh Edition. It was later rewritten (with the
business terms changing each time) for what would eventually be called
System III[1], and then the different System V releases. The price of the
redistribution license for V7 was $150K, plus a sliding scale for each CPU
you ran the AT&T IP on, depending on the number of CPUs you needed. With
this, the single-CPU restriction on the sources was removed.
So ... if you had a redistribution license, you could also get a license
from the Regents, and as long as you obeyed their rules, you could sell a
copy of UNIX to run on any licensed target. Traditionally, the software was
part of the same purchase as the hardware when bought from a firm like DEC,
IBM, Masscomp, *etc*. However, separate SW licenses were sold via firms
such as Microsoft and Mt. Xinu. The purchaser of *a binary license* from
one of those firms did not have the right to do anything but use the AT&T
derivative work. If your team had only a binary license, you could not
obtain any of the BSD distributions until the so-called 'NET2' BSD release
[and I'm going to ignore the whole AT&T/BSDi/Regents case here as it is not
relevant to Barbara's question/comment].
So the question is, how did a DoD contractor, be it BBN, Ford Aerospace,
SRI, etc., originally get access to UNIX IP? Universities and traditional
research teams could get a research license. Commercial firms like DEC
needed a commercial license. Folks with DoD contracts were in a hazy
area. The original v5 commercial license was written for Rand, a DoD
contractor. However, as discussed here in the IH mailing list and
elsewhere, some places like BBN had access to the core UNIX IP as part of
their DoD contracts. I believe Ford Aerospace was working together with
AT&T as part of another US Gov project - which is how UNIX got there
originally (Ford Aero could use it for that project, but not the folks at
Ford Motors, for instance).
The point is, if you accessed the *IP indirectly* like that, then
your site probably did not have a negotiated license with a signature page
to send to someone.
@Barbara, I cannot say for sure, but if this was either a PDP-11 or a VAX
and you wanted one of the early BSDs, I guess/suspect that maybe your team
was dealing with an indirect path to AT&T licensing -- your site license
might have come from a US Gov contract, not directly. So trying to get a
BSD tape directly from the ILO might have been more difficult without a
signature page.
So, rolling back to the original question: you could get access to the BSD
sources, but you had to demonstrate to the ILO folks in UCB's Cory Hall
that you were legally allowed access to the AT&T IP in source code form.
That demonstration was traditionally fulfilled with a xerographic copy of
the signature page for your institution, which the ILO kept on file. That
said, if you had legal access to the AT&T IP by indirect means, I do not
know how the ILO completed that check or what they needed to protect the
Regents.
Clem
[1] What would be called System III from a marketing standpoint was
originally developed as PWB 3.0. This was the system a number of firms,
including my own, were discussing with AT&T at the famous meetings at
'Ricky's Hyatt' during the price (re)negotiations after the original V7
redistribution license.
On 3/7/24, Tom Lyon <pugs78(a)gmail.com> wrote:
> For no good reason, I've been wondering about the early history of C
> compilers that were not derived from Ritchie, Johnson, and Snyder at Bell.
> Especially for x86. Anyone have tales?
> Were any of those compilers ever used to port UNIX?
>
[topic of interest to COFF, as well, I think]
DEC's Ultrix for VAX and MIPS used off-the-shelf Unix cc. I don't
recall what they used for Alpha.
The C compiler for VAX/VMS was written by Dave Cutler's team at
DECwest in Seattle. The C front end generated intermediate language
(IL) for Cutler's VAX Code Generator (VCG), which was designed to be a
common back end for DEC's compilers for VAX/VMS. His team also
licensed the Freiburghouse PL/I front end (commercial version of a
PL/I compiler originally done for Multics) and modified it to generate
VCG IL. The VCG was also the back end for DEC's Ada compiler. VCG
was superseded by the GEM back end, which supported Alpha and Itanium.
A port of GEM to x86 was in progress at the time Compaq sold off the
Alpha technology (including GEM and its C and Fortran front ends) to
Intel.
Currently I've only got an older laptop at home, still running Windows 10
Pro. Mostly I use a company-provided HP ProBook 440 G7 with Windows 11 Pro.
I installed WSL2 to run Ubuntu 20.04, if only because I wanted to mount UFS
ISO images 😊
Still being employed, I have access to lots of UNIX servers: SCO UNIX
3.2V4.2 on Intel-based servers, Tru64 on AlphaServers, and HP-UX
11.23/11.31 on Itanium servers. There's an rx2660 Itanium server I can call
my own, but even in a test room I can hear it, so I'm reluctant to take it
home. My electricity bill would also explode, I think.
Cheers,
uncle rubl
--
The more I learn the better I understand I know nothing.
Dropped TUHS; added COFF.
> * Numerous editors show up on different systems, including STOPGAP on
> the MIT PDP6, eventually SOS, TECO, EMACs, etc., and most have some
> concept of a 'line of text' to distinguish from a 'card image.'
I'd like to expand on this, since I never heard about STOPGAP or SOS on the
MIT PDP-6/10 computers. TECO was ported over to the 6 only a few weeks
after delivery, and that seems to have been the major editor ever since.
Were you thinking of the SAIL PDP-6?
> From: Clem Cole
> the idea of a text editor existed long before Ken's version of QED,
> much less, ed(1). Most importantly, Ken's QED came after the original
> QED, which came after other text editors.
Yes; some of the history is given here:
An incomplete history of the QED Text Editor
https://www.bell-labs.com/usr/dmr/www/qed.html
Ken would have run into the original on the Berkeley Time-Sharing System; he
apparently wrote the CTSS one based on his experience with the one on the BTSS.
Oddly enough, CTSS seems not to have had much of an editor before. The
Programmer's Guide has an entry for 'Edit' (Section AH.3.01), but 'edit file'
seems to basically do a (in later terminology) 'cat >> file'. Section AE
seems to indicate that most 'editing' was done by punching new cards on a
key-punch!
The PDP-1 was apparently similar, except that it used paper tape. Editing
paper tapes was difficult enough that Dan Murphy came up with TECO - original
name 'Tape Editor and Corrector':
https://opost.com/tenex/anhc-31-4-anec.pdf
> Will had asked -- how did people learn to use reg-ex?
I learned it from reading the 'sh' and 'ed' V6 man pages.
The MIT V6 systems had TECO (with a ^R mode even), but I started out with ed,
since it was more like editors I had previously used.
Noel
Might interest the bods here too...
-- Dave
---------- Forwarded message ----------
From: Paul Ruizendaal
To: "tuhs(a)tuhs.org" <tuhs(a)tuhs.org>
Subject: [TUHS] RIP Niklaus Wirth, RIP John Walker
Earlier this year two well-known computer scientists passed away.
On New Year’s Day it was Niklaus Wirth, aged 90. A month later it was John Walker, aged 75. Both have some indirect links to Unix.
For Wirth the link is that a few sources claim that Plan 9 and the Go language are in part influenced by the design ideas of Oberon, the language and the OS. Maybe others on this list know more about those influences.
For Walker, the link is via the company that he was running as a side-business before he got underway with AutoCAD: https://www.fourmilab.ch/documents/marinchip/
In that business he was selling a 16-bit system for the S-100 bus, based around the TI9900 CPU (which from a programmer's perspective is quite similar to a PDP-11). For that system he wrote a Unix-like operating system around 1978-1980, called NOS/MT. He had never worked with Unix, but had pored over the BSTJ issues about it. It was fully written in assembler.
The design was rather unique, maybe inspired by Heinz Lycklama’s “Satellite Processor” paper in BSTJ 57-6. It has a central microkernel that handles message exchange, process scheduling and memory management. Each system call is a message. However, the system call message is then passed on to a privileged “fat kernel” process that handles it. The idea was to provide multiprocessor and network transparency: the microkernel could decide to run processes on other boards in the same rack or on remote systems over a network. Also the kernel processes could be remote. Hence its name “Network Operating System / Multitasking” or “NOS/MT”.
The system calls are pretty similar to Unix. The file system is implemented very similar to Unix (with i-nodes etc.), with some notable differences (there are file locking primitives and formatting a disk is a system call). File handles are not shareable, so special treatment for stdin/out/err is hardcoded. Scheduling and memory management are totally different -- unsurprising as in both cases it reflects the underlying hardware.
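To make the "each system call is a message" idea a bit more concrete, here is a minimal C sketch of the general pattern as I read the description above. To be clear, this is emphatically not NOS/MT code (that was TI9900 assembler, and its real message format is not shown here); every name in the sketch is hypothetical, and the "delivery" below is just a direct function call, whereas in NOS/MT the microkernel could route the message to a privileged kernel process on another board or another machine.

/* Illustrative sketch only: a "system call" built as a message that a
 * kernel server process handles.  All names are hypothetical. */
#include <stdio.h>
#include <string.h>

enum nos_call { NOS_OPEN, NOS_READ, NOS_CLOSE };

struct nos_msg {
    enum nos_call call;        /* which system call is requested     */
    char          path[64];    /* call-specific argument (for open)  */
    long          result;      /* filled in by the kernel server     */
};

/* Stand-in for the privileged "fat kernel" process that receives the
 * message and does the real work. */
static void kernel_server(struct nos_msg *m)
{
    switch (m->call) {
    case NOS_OPEN:
        printf("kernel: open(\"%s\")\n", m->path);
        m->result = 3;          /* pretend file handle */
        break;
    default:
        m->result = -1;
        break;
    }
}

/* Stand-in for the user-side stub: building a message and handing it to
 * the microkernel for delivery.  Here delivery is a plain function call;
 * in the real system it could cross boards or machines, which is where
 * the claimed multiprocessor/network transparency comes from. */
static long sys_open(const char *path)
{
    struct nos_msg m = { .call = NOS_OPEN };
    strncpy(m.path, path, sizeof m.path - 1);
    m.path[sizeof m.path - 1] = '\0';
    kernel_server(&m);          /* "send" and "wait for reply" */
    return m.result;
}

int main(void)
{
    printf("handle = %ld\n", sys_open("/etc/motd"));
    return 0;
}

The point of the structure is that the caller only knows how to build and send a message, not which CPU will service it, which is what makes the multiprocessor and network transparency goals plausible.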
Just as NOS/MT was getting into a usable state, John decided to pivot to packaged software including a precursor of what would become the AutoCAD package. What was there worked and saw some use in the UK and Denmark in the 1980’s -- there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it. I guess it will remain one of those many “what if” things in computer history.