On Wed, Jul 10, 2024 at 9:54 PM John R Levine <johnl(a)taugh.com> wrote:
> On Wed, 10 Jul 2024, Dan Cross wrote:
> > It's not clear to me why you suggest with such evident authority that
> > Knuth was referring only to serialized instruction emulation and not
> > something like JIT'ed code; true, he doesn't specify one way or the
> > other, but I find it specious to conclude that that implies the
> > technique wasn't already in use, or at least known.
>
> The code on pages 205 to 211 shows an instruction by instruction
> interpreter. I assume Knuth knew about JIT compiling since Lisp systems
> had been doing it since the 1960s, but that's not what this section of the
> book is about.
Sure. But we're trying to date the topic here; my point is that
JITing was well known, and simulation was similarly well known; we
know when work on those books started; it doesn't seem that odd to me
that combining the two would be known around that time as well.
> One of the later volumes of TAOCP was supposed to be about
> compiling, but it seems unlikely he'll have time to write it.
Yes; volumes 5, 6 and 7 are to cover parsing, languages, and compilers
(more or less respectively). Sadly, I suspect you are right that it's
unlikely he will have time to write them.
> >> We've been discussing batch or JIT translation of code which gives
> >> much better performance without a lot of hardware help.
> >
> > JIT'd performance of binary transliteration is certainly going to be
> > _better_ than strict emulation, but it is unlikely to be _as good_ as
> > native code.
>
> Well, sure, except in odd cases like the Vax compiler and reoptimizer
> someone mentioned a few messages back.
I think the point about the VAX compiler is that it's an actual
compiler and that the VAX MACRO-32 _language_ is treated as a "high
level" programming language, rather than as a macro assembly language.
That's not doing binary->binary translation, that's doing
source->binary compilation. It's just that, in this case, the source
language happens to look like assembler for an obsolete computer.
- Dan C.
On Sat, Jul 13, 2024 at 1:35 PM John R Levine <johnl(a)taugh.com> wrote:
> On Sat, 13 Jul 2024, Dan Cross wrote:
> > Honeywell was doing it with their "Liberator" software on the
> > Honeywell 200 computer in, at least, 1966:
> > https://bitsavers.org/pdf/honeywell/series200/charlie_gibbs/012_Series_200_…
> > (See the section on, "Conversion Compatibility."). Given that that
> > document was published in February of 1966, it stands to reason work
> > started on that earlier, in at least 1965 if not before ...
>
> Good thought. Now that you mention it, I recall that there were a lot of
> Autocoder to X translators, where X was anything from another machine
> to Cobol. Of course I can't find any of them now but they must have been
> around the same time.
>
> R's,
> John
>
> PS: For you young folks, Autocoder was the IBM 1401 assembler. There were
> other Autocoders but that was by far the most popular because the 1401 was
> the most popular computer of the late 1950s.
Oops, it appears that I inadvertently forgot to Cc: COFF in my earlier
reply to John. Mea culpa.
For context, here's my complete earlier message; the TL;DR is that
Honeywell was doing binary translation from the 1401 to the H-200
sometime in 1965 or earlier; possibly as early as 1963, according to
some sources.
-->BEGIN<--
Honeywell was doing it with their "Liberator" software on the
Honeywell 200 computer in, at least, 1966:
https://bitsavers.org/pdf/honeywell/series200/charlie_gibbs/012_Series_200_…
(See the section on, "Conversion Compatibility."). Given that that
document was published in February of 1966, it stands to reason work
started on that earlier, in at least 1965 if not before (how much
earlier is unclear). According to Wikipedia, that machine was
introduced in late 1963; it's unclear whether the Liberator software
was released at the same time, however. Ease of translation of IBM
1401 instructions appears to have been a design goal. At least some
sources suggest that Liberator shipped with the H-200 in 1963
(https://ibm-1401.info/1401-Competition.html#UsingLib)
It seemed like what Doug was describing earlier was still
source->binary translation, using some clever macro packages.
-->END<--
- Dan C.
(Let me try sending this again, now that I'm a member of the list.)
Another example of operator-typing in BLISS, of more use in a kernel
than floating point, is in the relational operators: GTR
(greater-than) for signed comparison, GTRU for unsigned comparison, and
GTRA for address comparison (where the number of bits in an address is
less than the number of bits in a machine word), and similarly for the
other five relations.
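A rough C illustration of the distinction (hypothetical values; the point
is that in BLISS the signed/unsigned/address choice is carried by the
operator, whereas in C it is carried by the operand types):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0xFFFFFFF0u;   /* as a signed 32-bit value this is -16 */
    uint32_t b = 0x00000010u;   /* 16 */

    /* Unsigned comparison (what BLISS GTRU would do): 0xFFFFFFF0 > 0x10 */
    printf("unsigned: %d\n", a > b);                     /* prints 1 */

    /* Signed comparison (what BLISS GTR would do): -16 > 16 is false */
    printf("signed:   %d\n", (int32_t)a > (int32_t)b);   /* prints 0 */

    /* GTRA would compare only the address-sized low-order bits, useful
       where an address is narrower than a machine word. */
    return 0;
}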
On 7/9/24 13:18, Paul Winalski wrote:
> expression-1<offset-expr, size-expr, padding-expr>
> [...] padding-expr controls the value used to pad the high order
> bits: if even, zero-padded, if odd, one-padded.
>
> I always wondered how this would work on the IBM S/360/370
> architecture. It is big-endian and bit 0 of a machine word is the
> most significant bit, not the least significant as in DEC's architectures.
Offset and Size, taken as numbers of bits in a value (machine word) rather
than bit numbers, work just fine for any architecture. The PDP-10 and other
DEC architectures before the PDP-11 were word-addressed with bit 0 at
the high-order end.
The optional 3rd parameter is actually 0 for unsigned (zero) extension
and 1 for signed (highest order bit in the extracted field) extension.
I don't think signed extension is widely used, but it depends on the
data structure you're using.
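A minimal C sketch of that extraction, assuming the DEC convention of
counting the offset from the low-order bit, with the third parameter
selecting zero (0) or sign (1) extension of the extracted field:

#include <stdint.h>
#include <stdio.h>

/* Extract 'size' bits starting 'offset' bits above the low end of 'word';
   ext = 0 zero-extends the field, ext = 1 sign-extends from its top bit. */
static uint32_t extract(uint32_t word, unsigned offset, unsigned size, int ext)
{
    uint32_t mask  = (size < 32) ? ((1u << size) - 1u) : ~0u;
    uint32_t field = (word >> offset) & mask;
    if (ext && (field & (1u << (size - 1))))   /* top bit of the field set? */
        field |= ~mask;                        /* replicate it upward */
    return field;
}

int main(void)
{
    /* Roughly what a BLISS fetch of X<4, 8, 1> vs. X<4, 8, 0> would yield
       for a word containing 0x00008A30 (hypothetical value). */
    printf("%08x\n", extract(0x00008A30u, 4, 8, 1));   /* ffffffa3 */
    printf("%08x\n", extract(0x00008A30u, 4, 8, 0));   /* 000000a3 */
    return 0;
}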
When verifying that, I found something I did not remember, that in
BLISS-16 and -32 (and I would guess also -64), but not -36 (the
word-addressed PDP-10), one could declare 8-bit signed and unsigned data:
OWN
X: BYTE SIGNED,
Y: BYTE;
So the concepts of 'type' in BLISS, at least regarding data size and
representation, can get a little complicated (not to be confused with
COMPLEX :-) ).
--------
An aside re: bit twiddling from CMU and hardware description languages:
Note that the ISP/ISPL/ISPS machine description language(s) from books
by Gordon Bell et al. used the following syntax for a bit or a bit field
of a register:
REG<BIT_NR>
REG<HIGH_BIT_NR:LOW_BIT_NR>
REG<BIT_NR,BIT_NR,...>
(',...' is meta syntax.) Sign extension was handled by a unary operator
because the data were all bit vectors, instead of values as in BLISS, so
the width (in bits) of an expression was known. The DECSIM logic
simulator inherited this syntax. Brackets were used for memory
addresses, so you might have M[0]<0:2> for the first 3 bits of the first
word in memory. I still find it the clearest syntax, but then it is
what I used for many years. (Sorry, VHDL and Verilog, although you won
due to the idea back in the day that internally-developed VLSI CAD
software was to be kept internal.)
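For comparison with the BLISS offset/size form, here is a small C sketch of
reading REG<first:last> when bits are numbered from the high-order end
(bit 0 = most significant), as in ISP and on the PDP-10; the 32-bit
register width is an assumption of the example:

#include <stdint.h>
#include <stdio.h>

#define WIDTH 32                 /* register width assumed for this sketch */

/* Extract bits first..last (inclusive) of 'reg', numbering the most
   significant bit as 0, e.g. bits_msb0(r, 0, 2) is REG<0:2>. */
static uint32_t bits_msb0(uint32_t reg, unsigned first, unsigned last)
{
    unsigned nbits = last - first + 1;
    unsigned shift = WIDTH - 1 - last;   /* distance of 'last' from the LSB */
    uint32_t mask  = (nbits < WIDTH) ? ((1u << nbits) - 1u) : ~0u;
    return (reg >> shift) & mask;
}

int main(void)
{
    /* M[0]<0:2> -- the three high-order bits of a 32-bit word. */
    printf("%x\n", bits_msb0(0xE0000001u, 0, 2));   /* prints 7 */
    return 0;
}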
- Aron
[TUHS to Bcc:, +COFF]
On Wed, Jul 10, 2024 at 5:26 PM John Levine <johnl(a)taugh.com> wrote:
> It appears that Noel Chiappa <jnc(a)mercury.lcs.mit.edu> said:
> > > From: Dan Cross
> >
> > > These techniques are rather old, and I think go back much further than
> > > we're suggesting. Knuth mentions nested translations in TAOCP ..
> > > suggesting the technique was well-known as early as the mid-1960s.
>
> Knuth was talking about simulating one machine on another, interpreting
> one instruction at a time. As he notes, the performance is generally awful,
> although IBM did microcode emulation of many of their second generation
> machines on S/360 which all (for business reasons) ran faster than the
> real machines. Unsurprisingly, you couldn't emulate a 7094 on anything
> smaller than a 360/65.
It's not clear to me why you suggest with such evident authority that
Knuth was referring only to serialized instruction emulation and not
something like JIT'ed code; true, he doesn't specify one way or the
other, but I find it specious to conclude that that implies the
technique wasn't already in use, or at least known. But certainly by
then JIT'ing techniques for "interpreted" programming languages were
known; it doesn't seem like a great leap to extend that to binary
translation. Of course, that's speculation on my part, and I could
certainly be wrong.
> We've been discussing batch or JIT translation of code which gives
> much better performance without a lot of hardware help.
JIT'd performance of binary transliteration is certainly going to be
_better_ than strict emulation, but it is unlikely to be _as good_ as
native code. Indeed, this is still an active area of research; e.g.,
luajit; https://www.mattkeeter.com/blog/2022-10-04-ssra/ (disclaimer:
Matt's a colleague of mine), etc.
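For concreteness, here is a toy C sketch of the "one instruction at a time"
style of simulation being contrasted with batch or JIT translation; the
machine is entirely made up, and the point is only the fetch/decode/dispatch
cost paid on every simulated instruction:

#include <stdint.h>

/* A toy target machine: four registers and a handful of opcodes. */
enum { OP_HALT, OP_LOAD_IMM, OP_ADD, OP_JMP };

struct vm { uint32_t pc, reg[4]; };

/* Classic instruction-at-a-time interpretation: fetch, decode, execute,
   repeat.  Every simulated instruction pays the full dispatch cost, which
   is why such simulators are slow next to translating whole blocks of
   guest code into native instructions ahead of time (or just in time). */
static void interpret(struct vm *m, const uint32_t *mem)
{
    for (;;) {
        uint32_t insn = mem[m->pc++];
        uint32_t op  = insn >> 24;
        uint32_t a   = (insn >> 16) & 0xff;
        uint32_t b   = (insn >> 8) & 0xff;
        uint32_t imm = insn & 0xff;
        switch (op) {
        case OP_HALT:     return;
        case OP_LOAD_IMM: m->reg[a] = imm;        break;
        case OP_ADD:      m->reg[a] += m->reg[b]; break;
        case OP_JMP:      m->pc = imm;            break;
        }
    }
}

int main(void)
{
    /* reg0 = 2; reg1 = 3; reg0 += reg1; halt */
    const uint32_t prog[] = {
        (OP_LOAD_IMM << 24) | (0 << 16) | 2,
        (OP_LOAD_IMM << 24) | (1 << 16) | 3,
        (OP_ADD      << 24) | (0 << 16) | (1 << 8),
        (OP_HALT     << 24),
    };
    struct vm m = { 0, { 0, 0, 0, 0 } };
    interpret(&m, prog);
    return m.reg[0] == 5 ? 0 : 1;
}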
- Dan C.
On Wednesday, July 10th, 2024 at 4:00 PM, John Levine <johnl(a)taugh.com> wrote:
> It appears that Al Kossow aek(a)bitsavers.org said:
>
> > On 7/10/24 1:53 PM, Dan Cross wrote:
> >
> > > The idea of writing simulators for machines clearly dates to before
> > > (or near) the beginning of TAOCP.
>
>
> Sure, but the topic of interest here is compiling machine code from one
> machine to another. You know, like Rosetta does for x86 code running on
> my MacBook (obUnix: whose OS is descended from FreeBSD and Mach and does
> all the Posix stuff) which has an M2 ARM chip.
>
> We know that someone did it in 1967 from 709x to GE 635, which I agree
> was quite a trick since really the only thing the two machines had in
> common was a 36 bit word size. I was wondering if anyone did machine
> code translation as opposed to instruction at a time simulation before that.
>
Attempting once again to COFF this thread as I am quite interested in the
discussion of this sort of emulation/simulation matter outside of the
confines of UNIX history as well.
To add to the discussion, while not satisfying the question of "where did
this sort of thing begin", the 3B20 was another machine that provided some
means of emulating another architecture via microcode, although what I know
about this is limited to discussions about emulating earlier ESS machines
to support existing telecom switching programs. I've yet to find any
literature suggesting this was ever used to emulate other general-purpose
computers such as IBM, DEC, etc. but likewise no suggestion that it *couldn't*
be used this way.
- Matt G.
All, just a friendly reminder to use the TUHS mailing list for topics
related to Unix, and to switch over to the COFF mailing list when the
topic drifts away from Unix. I think a couple of the current threads
ought to move over to the COFF list.
Thanks!
Warren
I don't know of any OSes that use floating point. But the IBM operating
systems for S/360/370 did use packed decimal instructions in a few places.
This was an issue for the System/360 model 44. The model 44 was
essentially a model 40 but with the (much faster) model 65's floating point
hardware. It was intended as a reduced-cost high-performance technical
computing machine for small research outfits.
To keep the cost down, the model 44 lacked the packed decimal arithmetic
instructions, which of course are not needed in HPTC. But that meant that
off-the-shelf OS/360 would not run on the 44. It had its own OS called
PS/44.
IIRC VAX/VMS ran into similar issues when the microVAX architecture was
adopted. To save on chip real estate, microVAX did not implement packed
decimal, the complicated character string instructions, H-floating point,
and some other exotica (such as CRC) in hardware. They were emulated by
the OS. For performance reasons it behooved one to avoid those data types
and instructions on later VAXen.
I once traced a severe performance problem to a subroutine where there were
only a few instructions that weren't generating emulator faults. The
culprit was the oddball conversion semantics of PL/I, which caused what
should have been D-float arithmetic to be done in 15-digit packed decimal.
Once I fixed that the program ran 100 times faster.
-Paul W.
On Mon, Jul 8, 2024 at 9:04 PM Aron Insinga <aki(a)insinga.com> wrote:
> I found it sad, but the newest versions of the BLISS compilers do not
> support using it as an expression language. The section bridging pp
> 978-979 (as published) of Brender's history is:
>
> "The expression language characteristic was often highly touted in the
> early years of BLISS. While there is a certain conceptual elegance that
> results, in practice this characteristic is not exploited much.
> The most common applications use the if-then-else expression, for
> example, in something like the maximum calculation illustrated in Figure 5.
> Very occasionally there is some analogous use of a case expression.
> Examples using loops (taking advantage of the value of leave), however,
> tend not to work well on human factors grounds: the value computed tends to
> be visually lost in the surrounding control constructs and too far removed
> from where it will be used; an explicit assignment to a temporary variable
> often seems to work better.
> On balance, the expression characteristic of BLISS was not terribly
> important."
>
> Ron Brender is correct. All of the software development groups at DEC had
programming style guidelines and most of those frowned on the use of BLISS
as an expression language. The issue is maintainability of the code. As
Brender says, a human factors issue.
> Another thing that I always liked (but is still there) is the ease of
> accessing bit fields with V<FOO_OFFSET, FOO_SIZE> which was descended from
> BLISS-10's use of the PDP-10 byte pointers. [Add a dot before V to get an
> rvalue.] (Well, there was this logic simulator which really packed data
> into bit fields of blocks representing gates, events, etc....)
>
> Indeed. BLISS is the best bit-banging language around. The field
reference construct is a lot more straightforward than the and/or bit masks
in most languages. In full the construct is:
expression-1<offset-expr, size-expr, padding-expr>
expression-1 is a BLISS value from which the bits are to be extracted.
offset-expr is start of the field to be extracted (bit 0 being the low bit
of the value) and size-expr is the number of bits to be extracted. The
value of the whole mess is a BLISS value with the extracted field in the
low-order bits. padding-expr controls the value used to pad the high order
bits: if even, zero-padded, if odd, one-padded.
I always wondered how this would work on the IBM S/360/370 architecture.
It is big-endian and bit 0 of a machine word is the most significant bit,
not the least significant as in DEC's architectures.
-Paul W.
[redirecting this to COFF]
On Mon, Jul 8, 2024 at 5:40 PM Aron Insinga <aki(a)insinga.com> wrote:
>
> When DEC chose an implementation language, they knew about C but it had
> not yet escaped from Bell Labs. PL/I was considered, but there were
> questions of whether or not it would be suitable for a minicomputer. On
> the other hand, by choosing BLISS, DEC could start with the BLISS-11
> cross compiler running on the PDP-10, which is described in
> https://en.wikipedia.org/wiki/The_Design_of_an_Optimizing_Compiler
> BLISS-11
> and DEC's Common BLISS had changes necessitated by different
> word lengths and architectures, including different routine linkages
> such as INTERRUPT, access to machine-specific operations such as INSQTI,
> and multiple-precision floating point operations using builtin functions
> which used the addresses of data instead of the values.
>
> In order to port VMS to new architectures, DEC/HP/VSI retargeted and
> ported the BLISS compilers to new architectures.
>
> There have in general been two approaches to achieving language
portability (machine independence).
One of them is to provide only abstract data types and operations on them
and to completely hide the machine implementation. PL/I and especially Ada
use this approach.
BLISS does the exact opposite. It takes the least common denominator. All
machine architectures have machine words and ways to pick them apart.
BLISS has only one data type--the word. It provides a few simple
arithmetic and logical operations and also syntax for operating on
contiguous sets of bits within a word. More complicated things such as
floating point are done by what look like routine calls but are actually
implemented in the compiler.
BLISS is also a true, full-blown expression language. Statement constructs
such as if/then/else have a value and can be used in expressions. In C
terminology, everything in BLISS is an lvalue. A semicolon terminates an
expression and throws its value away.
BLISS is also unusual in that it has an explicit fetch operator, the dot
(.). The assignment expression (=) has the semantics "evaluate the
expression to the right of the equal sign and then store that value in the
location specified by the expression to the left of the equal sign".
Supposing that a and b are identifiers for memory locations, the expression:
a = b;
means "place b (the address of a memory location) at the location given by
a (also a memory location)". This is the equivalent of:
a = &b;
in C. To get C's version of "a = b;" in BLISS you need an explicit fetch
operator:
a = .b;
Forgetting to use the fetch operator is probably the most frequent error
made by new BLISS programmers familiar with more conventional languages.
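A rough way to see this from the C side (purely illustrative): treat every
BLISS name as standing for an address, with the dot as an explicit
dereference.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Model the BLISS names a and b: each names a word of storage. */
    intptr_t a_storage = 0, b_storage = 42;
    intptr_t *a = &a_storage, *b = &b_storage;

    *a = *b;            /* BLISS  a = .b;  -- fetch b's contents, store in a */
    printf("a = .b gives %ld\n", (long)*a);       /* 42 */

    *a = (intptr_t)b;   /* BLISS  a = b;   -- store the *address* of b in a,
                           i.e. C's  a = &b;  */
    printf("a = b  gives the address %p\n", (void *)*a);
    return 0;
}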
DEC used four dialects of BLISS as their primary software development
language: BLISS-16, BLISS-32, BLISS-36, and BLISS-64, the numbers
indicating the BLISS word size in bits. BLISS-16 targeted the PDP-11 and
BLISS-36 the PDP-10. DEC did implementations of BLISS-32 for VAX, MIPS,
and x86. BLISS-64 was targeted to both Alpha and Itanium. VSI may have a
version of BLISS-64 that generates x86-64 code.
-Paul W.
I moved this to COFF since it's a TWENEX topic. Chet Ramey pointed folks
at a wonderful set of memories from Dan Murphy WRT the development of
TENEX, which later became TOPS-20. But one comment caught me as particularly
wise and should be understood and digested by all:
*"... a complex, complicated design for some facility in a software or
hardware product is usually not an indication of great skill and maturity
on the part of the designer. Rather, it is typically evidence of lack of
maturity, lack of insight, lack of understanding of the costs of
complexity, and failure to see the problem in its larger context."*
All, recently I saw on Bruce Schneier "Cryptogram" blog that he has had
to change the moderation policy due to toxic comments:
https://www.schneier.com/blog/archives/2024/06/new-blog-moderation-policy.h…
So I want to take this opportunity to thank you all for your civility
and respect for others on the TUHS and COFF lists. The recent systemd
and make discussions have highlighted significant differences between
people's experiences and opinions. Nonetheless, apart from a few pointed
comments, the discussions have been polite and informative.
These lists have been in use for decades now and, thankfully, I've
only had to unsubscribe a handful of people for offensive behaviour.
That's a testament to the calibre of people who are on the lists.
Cheers and thank you again,
Warren
P.S. I'm a happy Devuan (non-systemd) user for many years now.
[Moved to COFF. Mercifully this really has nothing to do with Unix]
On Wednesday, 19 June 2024 at 22:09:11 -0700, Luther Johnson wrote:
> On 06/19/2024 10:01 PM, Scot Jenkins via TUHS wrote:
>> "Greg A. Woods" <woods(a)robohack.ca> wrote:
>>
>>> I will not ever allow cmake to run, or even exist, on the machines I
>>> control...
>>
>> How do you deal with software that only builds with cmake (or meson,
>> scons, ... whatever the developer decided to use as the build tool)?
>> What alternatives exist short of reimplementing the build process in
>> a standard makefile by hand, which is obviously very time consuming,
>> error prone, and will probably break the next time you want to update
>> a given package?
>>
>> If there is some great alternative, I would like to know about it.
>
> I just avoid tools that build with CMake altogether; I look for
> alternative tools. The tool has already told me what I can expect from
> a continued relationship by its use of CMake ...
That's fine if you have the choice. I use Hugin
(https://hugin.sourceforge.io/), a panorama stitcher, and the authors
have made the decision to use cmake. I don't see any useful
alternative to Hugin, so I'm stuck with cmake.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
Sort of between kernel and user mode, Unix [zillion trademarks etc] never
used it, but did RSX-11?
I used the latter long enough to hate it until Edition 5 arrived at UNSW,
and I still remember being blown away by the fact that there was nothing
privileged about the Shell :-)
-- Dave
This report [link at end] about a security issue with VMware vSphere, stemming from the design/architecture, resonated with me and the recent TUHS “Unix Philosophy” thread.
Many of the criticisms of Unix relate to not understanding its purpose and design criteria:
a platform on which to develop (other) Software, which implies ‘running, profiling, testing & debugging’ that code.
Complaining that Unix tools/utilities are terse and arcane for non-developers & testers, needing a steep Learning Curve,
is the same as complaining a large truck doesn’t accelerate or corner like a sports car.
Plan 9, by the same core team twenty years later, addresses the same problems with modern hardware & graphics, including with Networking.
The system they developed in 1990 would’ve been proof against both vSphere attacks because of its security-by-design:
No ‘root’ user, hence no ’sudo’
and no complex, heavyweight RPC protocol with security flaws, instead the simple, lightweight & secure 9P protocol.
It seems Eric Raymond’s exposition on the “Unix Philosophy” is the basis of much of the current understanding / view.
In the ESR & other works cited on Wikipedia, I see a lot about “Userland” approaches,
nothing about the Kernel, Security by Design and innovations like ’shells’, ‘pipes’ and the many novel standard tools, that is,
being able to Reuse standard tools and ’stand on the shoulders of giants’ [ versus constantly Reinventing the Wheel, poorly ].
ESR was always outside CSRC and, from his resume, not involved with Unix until 1983 at best.
He’s certainly been a mover & shaker in the Linux and associated (GNU led) Open Source community.
<http://catb.org/~esr/resume.html>
ESR baldly states "The Unix philosophy is not a formal design method”,
which isn’t strictly untrue, but highly misleading IMHO.
Nor is the self-description by members of CSRC as having “good taste” a full and enlightening description of their process.
There’s not a general appreciation, even in Research & Academic circles, that “Software is a Performance Discipline”,
in the same way as Surgery, Rocketry, Aviation, Music, Art and physical disciplines (dance, gymnastics, even rock climbing) are “Performance” based.
It requires both Theory and Practice.
If an educator hasn’t worked on at least one 1M LOC system, how can they teach “Programming in the Large”, the central problem of Software Engineering?
[ an aside: the problem “golang” addressed was improving Software Engineering, not simply a language & coding. ]
There’s a second factor common to all high-performance disciplines, and it is
why flying has become cheaper, safer and faster since the first jets & crashes of the 1950s:
- good professionals deliberately improve, by learning from mistakes & failures and (perhaps) adopting better practices,
- great professionals don’t just ‘improve’, they actively examine how & why they created Errors, Faults & Failures and detect / remove root causes.
The CSRC folk used to hate Corporate attempts at Soft Skills courses, calling them “Charm School”.
CSRC's deliberate and systematic learning, adaption and improvement wasn’t accidental or incidental,
it was the same conscious approach used by Fairchild in its early days, the reason it quickly became the leader in Silicon devices, highly profitable, highly valued.
Noyce & Moore, and I posit CSRC too, applied the Scientific Method to themselves and their practices, not just their research field.
IMO, this is what made CSRC unique - they were active Practitioners, developing high-quality, highly-performant code, as well as being astute Researchers,
providing quantifiably better solutions with measurable improvements, not prototypes or partial demonstrators.
Gerard Holzmann’s 1127 Alumni page shows the breadth & depth of talent that worked at CSRC.
The group was unusually productive and influential. [ though I’ve not seen a ‘collected works’ ]
<http://spinroot.com/gerard/1127_alumni.html>
CSRC/1127 had a very strong culture and a very deliberate, structured ‘process’
that naturally led to a world-changing product in 1974 from only ~30 man-years of effort, a minor effort in Software Projects:
perfective “iterative design”, rigorous testing, code quality via a variation of pair-programming,
collaborative design with group consultation / discussion,
and above all “performant” code - based first on ‘correct’ and ’secure’,
backed by Doug McIlroy’s insistence on good documentation for everything.
[ It’s worth noting that in the original paper on the “Waterfall” development process, it isn’t “Once & Done”, it’s specifically “do it twice”. ]
[ the Shewhart Cycle, promoted by Deming, Plan - Do - Check - Act, was well known in Engineering circles, known to be very Effective ]
Unix - the kernel & device drivers, the filesystem, the shell, libraries, userland and standard tools - wasn’t done in a hurry between 1969 & 1974’s CACM article.
It was written and rewritten many times - far more than the ‘versions’, derived from the numbering of the manuals, might suggest.
Ken’s comment on one of his most productive days, “throwing away 1,000 lines of code”,
demonstrates this dynamic environment dominated by trials, redesign and rewriting - backed by embedded ‘instrumentation’ (profiling).
Ken has also commented he had to deliberately forget all his code at one point (maybe after 1974 or 77).
He was able to remember every line of code he’d written, in every file & program.
I doubt that was an innate skill, even if so, it would’ve improved by deliberate practice, just as in learning to play a musical instrument.
There’s a lot of research in Memory & Recall, all of which documents ‘astonishing’ performance by ‘ordinary’ people, with a only little tuition and deliberate practice.
CSRC had a scientific approach to software design and coding, unlike any I’ve seen in commercial practice, academic research or promoted “Methodologies”.
There’s a casual comment by Dennis in “Evolution of Unix”, 1979, about rewriting the kernel, improving its organisation and adding multiprogramming.
By one person in months. A documented, incontestable level of productivity, 100x-1000x that of programmers practising mainstream “methodologies”.
Surely that performance alone would’ve been worthy of intensive study as the workforce & marketplace implications are profound.
<https://www.bell-labs.com/usr/dmr/www/hist.pdf>
Perhaps the most important watershed occurred during 1973, when the operating system kernel was rewritten in C.
… The success of this effort convinced us that C was useful as a nearly universal tool for systems programming, instead of just a toy for simple applications.
The CSRC software evolution methodology is summed by perfectly in Baba Brinkman’s Evolution Rap:
"Performance, Feedback, Revision”
<https://www.youtube.com/watch?v=gTXVo0euMe4>
Website: <https://bababrinkman.com/>
ABC Science Show, 2009, 54 min audio, no transcript
This is the performance Baba gave at the Darwin Festival in Cambridge England, July 2009.
<https://www.abc.net.au/listen/programs/scienceshow/the-rap-guide-to-evoluti…>
Ken also commented that they divided up the coding work, seemingly informally but in a disciplined way,
so that there was only ever one time they created the same file. [ "mis-coordination of work”, Turing Award speech ]
To prove they had well-defined coding / naming standards and followed them, the two 20-line files were identical…
———————
There’s a few things with the “Unix Philosophy” that are critical and not included in the commentaries I’ve read or seen quoted:
- The Unix kernel was ‘conservative’, not inventive or novel.
It deliberately used only known, proven solutions, with a focus on small, correct, performant. “Just Worked”, not “Worked, Just”.
Swapping was used, while Virtual Memory was not implemented because they didn’t know of a definitive solution.
They avoided the “Second System Effect” - showing how clever they were - working as professional engineers producing a robust, reliable, secure system.
- Along with Unix (kernel, fsys, userland), CSRC developed a high-performance high-quality Software Development culture and methodology,
The two are inseparable, IMO.
- Professionals do not, can not, write non-trivial code in a “One and Done” manner. Professional quality code takes time and always evolves.
It takes significant iterative improvement, including redesign, to develop large systems,
with sufficient security, reliability, maintainability and performance.
[ Despite 60 years of failed “Big Bang” projects using “One & Done”, Enterprises persist with this idiocy, wasting billions every year ]
- Unix was developed to provide CSRC with a great environment for their own work. It never attempted to be more, but has been applied ‘everywhere’.
Using this platform, members of the team developed a whole slew of important and useful tools,
now taken as a given in Software Development: editors, type settings, ‘diff’ and Version Control, profile, debug, …
This includes the computer Language Tools, now core to every language & system.
- Collaboration and Sharing, both ways, was central to the Unix Philosophy developed at CSRC.
Both within the team, within Bell Labs and other Unix installations, notably USENIX & UCB and its ARPA-IPTO funded CSRG.
The world of Software and Code Development is clearly in two Eras, “Before Unix” and “After”.
Part of this is “Open Source”, not just shared source targeted for a single platform & environment, but source code mechanically ported to new platforms.
This was predicated on the original CSRC / Bell Labs attitude of Sharing the Source…
Source was shared in & out,
directly against the stance of the Legal Dept, intent on tightly controlling all Intellectual Property with a view to extracting “revenue streams” from clients.
Later events proved CSRC’s “Source Code Sharing” was far more powerful and profitable than a Walled Garden approach, endlessly reinventing the wheel & competing, not cooperating with others.
Senior Management and the old school lawyers arguably overestimated their marketing & product capability
and wildly underestimated the evolution of computing, failing to completely understand the PC era, with Bill Gates’ admonishment,
“You guys don’t get it, it’s all about Volume”.
In 1974, Unix was described publicly in CACM.
In 1977, USG (and later Unix System Labs) was formed to work on and sell Unix commercially, locking down the I.P., with no free source code.
In 1984, AT&T ‘de-merged’, keeping Bell Labs, USL and Western Electric - all the hardware and software to “Rule the World” and beat IBM.
In 1994, AT&T gave up being the new IBM and sold its hardware and software divisions.
In 2005, AT&T was bought by one of its spinoffs, SBC (Southwestern Bell),
who’d understood Mobile Telephony (passing on to customers savings from new technology), merged and rebranded themselves as “AT&T”.
The “Unix Wars” of the 1990s saw vendors buy AT&T licenses and confuse “Point of Difference” with “Different & Incompatible”.
They attempted Vendor lock-in, a monopoly tactic to create captive markets that could be gouged.
This failed for two reasons, IMO:
- the software (even binaries) and tools were all portable, so the barriers to exit were low.
- Unix wasn’t the only competitor
Microsoft used C to write Windows NT and Intel-based hardware to undercut Unix Servers & Workstations by 10x.
Bill Gates understood ‘Volume’ and the combined AT&T and Unix vendors didn’t.
================
VMware by Broadcom warns of two critical vCenter flaws, plus a nasty sudo bug
<https://www.theregister.com/2024/06/18/vmware_criticial_vcenter_flaws/>
VMware's security bulletin describes both of the flaws as "heap-overflow vulnerabilities in the implementation of the DCE/RPC protocol” …
DCE/RPC (Distributed Computing Environment/Remote Procedure Calls)
is a means of calling a procedure on a remote machine as if it were a local machine – just the ticket when managing virtual machines.
================
CHM, 2019
<https://computerhistory.org/blog/the-earliest-unix-code-an-anniversary-sour…>
As Ritchie would later explain:
“What we wanted to preserve was not just a good environment to do programming, but a system around which a fellowship could form.
We knew from experience that the essence of communal computing, as supplied from remote-access, time-shared machines,
is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”
================
Ken Thompson, 1984 Turing Award paper
Reflections on Trusting Trust
To what extent should one trust a statement that
a program is free of Trojan horses?
Perhaps it is more important to trust the people who wrote the software.
That brings me to Dennis Ritchie.
Our collaboration has been a thing of beauty.
In the ten years that we have worked together, I can recall only one case of mis-coordination of work.
On that occasion, I discovered that we both had written the same 20-line assembly language program.
I compared the sources and was astounded to find that they matched character-for-character.
The result of our work together has been far greater than the work that we each contributed.
================
The Art of Unix Programming
by ESR
<http://www.catb.org/~esr/writings/taoup/html/index.html>
Basics of the Unix Philosophy
<http://www.catb.org/~esr/writings/taoup/html/ch01s06.html>
================
Wiki
ESR
<https://en.wikipedia.org/wiki/Eric_S._Raymond>
Unix Philosophy
<https://en.wikipedia.org/wiki/Unix_philosophy>
================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
There seems to be some confusion, but I've heard from enough sources now
that I believe it to be confirmed. Notably, faculty at UMich EECS have
shared that it was passed to them internally.
RIP Lynn Conway, architect of the VLSI revolution and long-time
transgender activist. She apparently died from heart failure; she was
86. http://www.myhusbandbetty.com/wordPressNEW/2024/06/11/lynn-conway-january-2…
- Dan C.
Could interest a few OFs here... I've used the -8 and of course the -11,
but not the -10 so I may as well start now.
-- Dave
---------- Forwarded message ----------
From: A fellow geek
To: Dave Horsfall <dave(a)horsfall.org>
Subject: PiDP-10 — The MagPi magazine
RasPi is now masquerading as a PDP-10…
https://magpi.raspberrypi.com/articles/pidp-10
Explore this gift article from The New York Times. You can read it for free
without a subscription.
C. Gordon Bell, Creator of a Personal Computer Prototype, Dies at 89
It cost $18,000 when it was introduced in 1965, but it bridged the world
between room-size mainframes and the modern desktop.
https://www.nytimes.com/2024/05/21/technology/c-gordon-bell-dead.html?unloc…
Good evening, I recently came into possession of a "Draft Proposed American National Standard for BASIC" circa September 15, 1980, 6 years prior to the publication of the Full BASIC standard.
I did some brief searching around, checked bitsavers, archive.org, but I don't happen to see this archived anywhere. Is anyone aware of such a scan? If not, I'll be sure to add it to my scan docket...which is a bit slow these days due to a beautiful spring and much gardening...but not forgotten!
Flipping through this makes me wonder how things might have been different if this dropped in 1980 rather than 1986.
- Matt G.
The Newcastle Connection, aka Unix United, was an early experiment in
transparent networking: see <
https://web.archive.org/web/20160816184205/http://www.cs.ncl.ac.uk/research…>
for a high-level description. A name of the form "/../host/path"
represented a file or device on a remote host in a fully transparent way.
This was layered on V7 at the libc level, so that the kernel did not need
to be modified (though the shell did, since it was not libc-based at the
time). MUNIX was an implementation of the same idea using System V as the
underlying system.
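A very rough C sketch of that kind of libc-level interception, with
hypothetical names (nc_open, remote_open); the real Newcastle Connection
code was of course far more involved:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Stand-in for the user-level code that would ship the request to the
   named host over the network; purely hypothetical, so this just prints. */
static int remote_open(const char *host, const char *path, int flags)
{
    (void)flags;
    fprintf(stderr, "would open %s on host %s\n", path, host);
    return -1;
}

/* The idea: interpose on the file-access routines in libc so that names of
   the form /../host/path are routed to a remote host, while everything
   else falls through to the local system unchanged. */
static int nc_open(const char *path, int flags)
{
    if (strncmp(path, "/../", 4) == 0) {
        const char *host  = path + 4;
        const char *slash = strchr(host, '/');
        if (slash != NULL) {
            char hostname[64];
            size_t n = (size_t)(slash - host);
            if (n >= sizeof hostname)
                n = sizeof hostname - 1;
            memcpy(hostname, host, n);
            hostname[n] = '\0';
            return remote_open(hostname, slash, flags);
        }
    }
    return open(path, flags);               /* local case: ordinary open(2) */
}

int main(void)
{
    nc_open("/../host1/etc/motd", O_RDONLY);    /* remote form */
    nc_open("/etc/motd", O_RDONLY);             /* local form  */
    return 0;
}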
This appears to be a VHS vs. Betamax battle: NFS was not transparent, but
Sun had far more marketing clout. However, the Newcastle Connection
required a single uid space (as far as I can tell), which may also have
been a (perceived) institutional barrier.
On Thu, May 16, 2024 at 3:34 AM Ralph Corderoy <ralph(a)inputplus.co.uk>
wrote:
> Hi,
>
> I've set ‘mail-followup-to: coff(a)tuhs.org’.
>
> > > Every so often I want to compare files on remote machines, but all
> > > I can do is to fetch them first (usually into /tmp); I'd like to do
> > > something like:
> > >
> > > rdiff host1:file1 host2:file2
> > >
> > > Breathes there such a beast?
>
> No, nor should there. It would be slain lest it beget rcmp, rcomm,
> rpaste, ...
>
> > > Think of it as an extension to the Unix philosophy of "Everything
> > > looks like a file"...
>
> Then make remote files look local as far as their access is concerned.
> Ideally at the system-call level. Less ideal, at libc.a.
>
> > Maybe
> >
> > diff -u <(ssh host1 cat file1) <(ssh host2 cat file2)
>
> This is annoyingly noisy if the remote SSH server has sshd_config(5)'s
> ‘Banner’ set which spews the contents of a file before authentication,
> e.g. the pointless
>
> This computer system is the property of ...
>
> Disconnect NOW if you have not been expressly authorised to use this
> system. Unauthorised use is a criminal offence under the Computer
> Misuse Act 1990.
>
> Communications on or through ...uk's computer systems may be
> monitored or recorded to secure effective system operation and for
> other lawful purposes.
>
> It appears on stderr so doesn't upset the diff but does clutter.
> And discarding stderr is too sloppy.
>
> --
> Cheers, Ralph.
>
While the idea of small tools that do one job well is the core tenet of
what I think of as the UNIX philosophy, this goes a bit beyond UNIX, so I
have moved this discussion to COFF, BCCing TUHS for now.
The key is that not all "bloat" is the same (really)—or maybe one person's
bloat is another person's preference. That said, NIH leads to pure bloat
with little to recommend it, while multiple offerings are a choice. Maybe
the difference between the two is one person's view over another.
On Fri, May 10, 2024 at 6:08 AM Rob Pike <robpike(a)gmail.com> wrote:
> Didn't recognize the command, looked it up. Sigh.
>
Like Rob -- this was a new one for me, too.
I looked, and it is on the SYS3 tape; see:
https://www.tuhs.org/cgi-bin/utree.pl?file=SysIII/usr/src/man/man1/nl.1
> pr -tn <file>
>
> seems sufficient for me, but then that raises the question of your
> question.
>
Agreed, that has been burned into the ROMs in my fingers since the
mid-1970s 😀
BTW: SYS3 has pr(1) with both switches too (more in a minute)
> I've been developing a theory about how the existence of something leads
> to things being added to it that you didn't need at all and only thought of
> when the original thing was created.
>
That is a good point, and I generally agree with you.
> Bloat by example, if you will. I suspect it will not be a popular theory,
> however accurately it may describe the technological world.
>
Of course, sometimes the new features >>are<< easier (more natural *for
some people*). And herein lies the core problem. The bloat is often
repetitive, and I suggest that it is often implemented in the wrong place -
and usually for the wrong reasons.
Bloat comes about because somebody thinks they need some feature and
probably doesn't understand that it is already there or how they can use
it. But since they do know about it, their tool must be set up to exploit it - so
they do not need to reinvent it. GUI-based tools are notorious for this
failure. Everyone seems to have a built-in (unique) editor, or a private
way to set up configuration options et al. But ... that walled garden is
comfortable for many users and >>can be<< useful sometimes.
Long ago, UNIX programmers learned that looking for $EDITOR in the
environment was way better than creating one. Configuration was ASCII
text, stored in /etc for system-wide settings and dot files in the home for users.
But it also means the >>output<< of each tool needs to be usable by the
others [*i.e.*, docx or xlsx files are a no-no].
For example, for many things on my Mac, I do use the GUI-based tools --
there is no doubt they are better integrated with the core Mac system >>for
some tasks.<< But only if I obey a set of rules Apple decrees. For
instance, this email reader is easier much of the time than MH (or the HM
front end, for that matter), which I used for probably 25-30 years. But on
my Mac, I always have 4 or 5 iterm2(1) open running zsh(1) these days. And,
much of my typing (and everything I do as a programmer) is done in the shell
(including a simple text editor, not an 'IDE'). People who love IDEs swear
by them -- I'm just not impressed - there is nothing they do for me that
makes it easier, and I have learned yet another scheme.
That said, sadly, Apple is forcing me to learn yet another debugger since
none of the traditional UNIX-based ones still work on the M1-based systems.
But at least LLDB is in the same key as sdb/dbx/gdb *et al*., so it is a
PITA but not a huge thing as, in the end, LLDB is still based on the UNIX
idea of a single well-designed, task-specific tool to do each
job, and those tools can work with each other.
FWIW: I was recently a tad gob-smacked by the core idea of UNIX and its
tools, which I have taken for a fact since the 1970s.
It turns out that I've been helping the PiDP-10 users (all of the
PiDPs are cool, BTW). Before I saw UNIX, I was paid to program a PDP-10. In
fact, my first UNIX job was helping move programs from the 10 to UNIX.
Thus ... I had been thinking that a little PDP-10 hacking to dust off some
of that old knowledge shouldn't be too hard. Some of it has,
of course, come back. But daily, I am discovering that small things that are
so natural with a few simple tools can be hard on those systems.
I am realizing (rediscovering) that "build it into my tool" was the
norm in those days. So instead of a pr(1) command, there was a tool that
created output for the lineprinter. You give it a file, and it is its job to
figure out what to do with it, so it has its set of features (switches) -
so "bloat" is that each tool (like many current GUI tools) has private ways
of doing things. If the maker of tool X decided to support some idea, they
would do it like tool Y. The problem, of course, was that tools X and Y
had to 'know about' each type of file (in IBM terms, use its "access
method"). Yes, the engineers at DEC, in their wisdom, tried to
"standardize" those access methods/switches/features >>if you implemented
them<< -- but they are not all there.
This leads me back to the question Rob raises. Years ago, I got into an
argument with Dave Cutler RE: UNIX *vs.* VMS. Dave's #1 complaint about
UNIX in those days was that it was not "standardized." Every program was
different, and more to Dave's point, there was no attempt to make switches
or errors the same [getopt(3) had been introduced but was not being used by
most applications]. He hated that tar/tp used "keys" and tools like cpio
used switches. Dave hated that I/O was so simple - in his world all user
programs should use his RMS access method of course [1]. VMS, TOPS, *etc.*,
tried to maintain a system-wide error scheme, and users could look things
like errors up in a system DB by error number, *etc*. Simply put, VMS is
very "top-down."
My point with Dave was that by being "bottom-up," the best ideas in UNIX
were able to rise. And yes, it did mean some rough edges and repeated
implementations of the same idea. But UNIX offered a choice, and while Rob
and I find pr -tn perfectly acceptable, thank you, clearly someone
else desired the features that nl provides. The folks that put together
System 3 offered both solutions and let the user choose.
This, of course, comes across as bloat, but maybe that type of bloat is not so bad?
My own thinking is this - get things down to the basics and simplest
primitives and then build back up. It's okay to offer choices, as long as
the foundation is simple and clean. To me, bloat becomes an issue when you
do the same thing over and over again, particularly because you can not
utilize what is there already; the worst example is NIH - which happens way
more than it should.
I think the kind of bloat that GUI tools and TOPS et al. created forces
recreation, not reuse. But offering choice at the expense of multiple
tools that do the same things strikes me as reasonable/probably a good
thing.
1.] BTW: One of my favorite DEC stories WRT VMS engineering has to do
with the RMS I/O system. Supporting C using VMS was a bit of a PITA.
Eventually, the VMS engineers added Stream I/O - which simplified the C
runtime, but it was also made available for all technical languages.
Fairly soon after it was released, the DEC Marketing folks discovered
almost all new programs, regardless of language, had started to use Stream
I/O and many older programs were being rewritten by customers to use it. In
fact, inside of DEC itself, the languages group eventually rewrote things
like the FTN runtime to use streams, making it much smaller/easier to
maintain. My line in the old days: "It's not so bad that every I/O call has
to offer 1000 options, it's that Dave has to check each one for every I/O. It's a
classic example of how you can easily build RMS I/O out of stream-based
I/O, but the other way around is much harder." My point here is to *use
the right primitives*. RMS may have made it easier to build RDB, but it
impeded everything else.
Sorry for the dual list post; I don’t know who monitors COFF, the proper place for this.
There may be a good timeline of the early decades of Computer Science and its evolution at Universities in some countries, but I’m missing it.
Doug McIlroy lived through all this, I hope he can fill in important gaps in my little timeline.
It seems from the 1967 letter, defining the field was part of the zeitgeist leading up to the NATO conference.
1949 ACM founded
1958 First ‘freshman’ computer course in USA, Perlis @ CMU
1960 IBM 1400 - affordable & ‘reliable’ transistorised computers arrived
1965 MIT / Bell / General Electric begin Multics project.
CMU establishes Computer Sciences Dept.
1967 “What is Computer Science” letter by Newell, Perlis, Simon
1968 “Software Crisis” and 1st NATO Conference
1969 Bell Labs withdraws from Multics
1970 GE sells its computer business, including Multics, to Honeywell
1970 PDP-11/20 released
1974 Unix issue of CACM
=========
The arrival of transistorised computers - cheaper, more reliable, smaller & faster - was a trigger for the accelerated uptake of computers.
The IBM 1400-series was offered for sale in 1960, becoming the first (large?) computer to sell 10,000 units - a marker of both effective marketing & sales and attractive pricing.
The 360-series, IBM’s “bet the company” machine, was in full development when the 1400 was released.
=========
Attached is a text file, a reformatted version of a 1967 letter to ’Science’ by Allen Newell, Alan J. Perlis, and Herbert A. Simon:
"What is computer science?”
<https://www.cs.cmu.edu/~choset/whatiscs.html>
=========
A 1978 masters thesis on Early Australian Computers (back to 1950’s, mainly 1960’s) cites a 17 June 1960 CSIRO report estimating
1,000 computers in the US and 100 in the UK. With no estimate mentioned for Western Europe.
The thesis has a long discussion of what to count as a (digital) ‘computer’ -
sources used different definitions, resulting in very different numbers,
making it difficult to reconcile early estimates, especially across continents & countries.
Reverse estimating to 1960 from the “10,000” NATO estimate of 1968, with a 1- or 2-year doubling time,
gives a range of 200-1,000, including the “100” in the UK.
Licklider and later directors of ARPA’s IPTO threw millions into Computing research in the 1960’s, funding research and University groups directly.
[ UCB had many projects/groups funded, including the CSRG creating BSD & TCP/IP stack & tools ]
Obviously there was more to the “Both sides of the Atlantic” argument of E.W. Dijkstra and Alan Kay - funding and numbers of installations were very different.
The USA had a substantially larger installed base of computers, even per person,
and with more university graduates trained in programming, a higher take-up in private sector, not just the public sector and defence, was possible.
=========
<https://www.acm.org/about-acm/acm-history>
In September 1949, a constitution was instituted by membership approval.
————
<https://web.archive.org/web/20160317070519/https://www.cs.cmu.edu/link/inst…>
In 1958, Perlis began teaching the first freshman-level computer programming course in the United States at Carnegie Tech.
In 1965, Carnegie Tech established its Computer Science Department with a $5 million grant from the R.K. Mellon Foundation. Perlis was the first department head.
=========
From the 1968 NATO report [pg 9 of pdf ]
<http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF>
Helms:
In Europe alone there are about 10,000 installed computers — this number is increasing at a rate of anywhere from 25 per cent to 50 per cent per year.
The quality of software provided for these computers will soon affect more than a quarter of a million analysts and programmers.
d’Agapeyeff:
In 1958 a European general purpose computer manufacturer often had less than 50 software programmers,
now they probably number 1,000-2,000 people; what will be needed in 1978?
_Yet this growth rate was viewed with more alarm than pride._ (comment)
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
... every memory location is a stack.
I'm thinking of a notation like:
A => CADR (yes, I'm a LISP fan)
A' => A[0]
A'' => A[-1]
etc.
Not quite APL\360, but I guess pretty close :-)
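For concreteness, a toy C sketch of one reading of that notation (an
assumption on my part: each location keeps a short history, a store pushes,
and the primes index back into the history):

#include <stdio.h>

#define DEPTH 8                       /* how much history each location keeps */

/* One "memory location" that remembers its previous values. */
struct cell {
    int hist[DEPTH];
    int top;                          /* index of the current value */
};

static void store(struct cell *c, int v)          /* A = v  (pushes) */
{
    c->top = (c->top + 1) % DEPTH;
    c->hist[c->top] = v;
}

static int fetch(const struct cell *c, int back)  /* A'  -> fetch(&a, 0)
                                                     A'' -> fetch(&a, -1) */
{
    return c->hist[(c->top + back + DEPTH) % DEPTH];
}

int main(void)
{
    struct cell a = { {0}, 0 };
    store(&a, 1);
    store(&a, 2);
    printf("A'  = %d\n", fetch(&a, 0));    /* 2, the current value  */
    printf("A'' = %d\n", fetch(&a, -1));   /* 1, the previous value */
    return 0;
}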
-- Dave
Has there ever been a full implementation of PL/I? It seems akin to
solving the halting problem...
Yes, I've used PL/I in my CompSci days, and was told that IBM had trademarked
everything from /I to /C :-)
-- Dave, who loved PL/360
Good morning, I was reading recently on the earliest days of COBOL and from what I could gather, the picture (heh) regarding compilers looks something like this:
- RCA introduced the "Narrator" compiler for the 501[1] in November, 1960 (although it had been demonstrated in August of that year)
- Sperry Rand introduced a COBOL compiler running under the BOSS operating system[2] sometime in late 1960 or early 1961 (according to Wikipedia which then cites [3]).
First, I had some trouble figuring out the loading environment on the RCA 501: would this have been a "set the start address on the console, load cards/PPT, and go" sort of setup, or was there an OS/monitor involved?
And for the latter, there is a UNIVAC III COBOL Programmer's Guide (U-3389) cited in [2] but thus far I haven't found this online. Does anyone know if COBOL was also on the UNIVAC II (or I) and/or if any distinct COBOL documents from this era of UNIVAC survive anywhere?
- Matt G.
[1] - http://bitsavers.org/pdf/rca/501/RCA501_COBOL_Narrator_Dec60.pdf
[2] - http://bitsavers.org/pdf/univac/univac3/U-3521_BOSS_III_Jan63.pdf
[3] - Williams, Kathleen Broome (10 November 2012). Grace Hopper: Admiral of the Cyber Sea.
[ moved to coff ]
On Thu, Mar 14, 2024 at 7:49 AM Theodore Ts'o <tytso(a)mit.edu> wrote:
> On Thu, Mar 14, 2024 at 11:44:45AM +1100, Alexis wrote:
> >
> > i basically agree. i won't dwell on this too much further because i
> > recognise that i'm going off-topic, list-wise, but:
> >
> > i think part of the problem is related to different people having
> > different preferences around the interfaces they want/need for
> > discussions. What's happened is that - for reasons i feel are
> > typically due to a lock-in-oriented business model - many discussion
> > systems don't provide different interfaces/'views' to the same
> > underlying discussions. Which results in one community on platform X,
> > another community on platform Y, another community on platform Z
> > .... Whereas, for example, the 'Rocksolid Light' BBS/forum software
> > provides a Web-based interface to an underlying NNTP-based system,
> > such that people can use their NNTP clients to engage in forum
> > discussions. i wish this sort of approach was more common.
>
> This is a bit off-topic, and so if we need to push this to a different
> list (I'm not sure COFF is much better?), let's do so --- but this is
> a conversation which is super-important to have. If not just for Unix
> heritage, but for the heritage of other collective systems-related
> projects, whether they be open source or proprietary.
>
> A few weeks ago, there were people who showed up on the git mailing
> list requesting that discussion of the git system move from the
> mailing list to using a "forge" web-based system, such as github or
> gitlab. Their reason was that there were tons of people who think
> e-mail is so 1970's, and that if we wanted to be more welcoming to the
> up-and-coming programmers, we should meet them where they were at. The
> obvious observations that github was proprietary, and that locking up our
> history there might be contra-indicated, were made, and the problem with
> gitlab is that it doesn't have a good e-mail gateway, and while we
> might be disenfranchising the young'uns by not using some new-fangled
> web interface, disenfranchising the existing base of expertise was an
> even worse idea.
>
> The best that we have today is lore.kernel.org, which is used by both
> the Linux Kernel and the git development communities. It uses
> public-inbox to archive the mailing list traffic, and it can be
> accessed via threaded e-mail interface, as well as via NNTP. There
> are also tools for subscribing to messages that match a filtering
> criteria, as well as tools for extracting patches plus code review
> sign-offs into a form that can be easily consumed by git.
>
email based flows are horrible. Absolutely the worst. They are impossible
to manage. You can't subscribe to them w/o insane email filtering rules,
you can't discover patches or lost patches easily. There's no standard way
to do something as simple as say 'never mind'. There's no easy way
to follow much of the discussion or find it after the fact if your email was
filtered off (ok, yea, there kinda is, if you know which archives to troll
through).
As someone who recently started contributing to QEMU I couldn't get over
how primitive the email interaction was. You magically have to know who
to CC on the patches. You have to look at the maintainers file, which is
often
stale and many of the people you CC never respond. If a patch is dropped,
or overlooked it's up to me to nag people to please take a look. There's
no good way for me to find stuff adjacent to my area (it can be done, but
it takes a lot of work).
So you like it because you're used to it. I'm firmly convinced that the
email
workflow works only because of the 30 years of tooling, workarounds, extra
scripts, extra tools, cult knowledge, and any number of other "living with
the
poo, so best polish up" projects. It's horrible. It's like everybody has
collective
Stockholm syndrome.
The people begging for a forge don't care what the forge is. Your
philosophical
objections to one are blinding you to things like self-hosted gitea,
gitlab, or gerrit,
which are light years ahead of this insane workflow.
I'm no spring chicken (I sent you patches, IIRC, when you and bruce were
having
the great serial port bake off). I've done FreeBSD for the past 30 years
and we have
none of that nonsense. The tracking isn't as exacting as Linux's, sure, I'll
grant. The
code review tools we've used over the years are good enough, but everybody
that's used them has ideas to make them better. We even accept pull requests
from github, but our source of truth is away from github. We've taken an
all-of-the-above approach and it makes the project more approachable. In
addition,
I can land reviewed and tested code in FreeBSD in under an hour (including
the
review and acceptance testing process). This makes it way more efficient
for me
to do things in FreeBSD than in QEMU, where the turnaround time is days,
where
I have to wait for the one true pusher to get around to my pull request,
where I have
to go through weeks-long processes to get things done (and I've graduated to
maintainer status).
So the summary might be that email is so 1970s, but the real problem with it is
that it requires a huge learning curve. But really, it's not something any
sane person would
design from scratch today; it has all these rules you have to cope with,
many unwritten.
You have to hope that the right people didn't screw up their email filters.
You have to
wait days or weeks for an answer, and the enthusiasm to contribute dies in
that time.
A quick turnaround time is essential for driving enthusiasm for new
committers in the
community. It's one of my failings in running FreeBSD's github experiment:
it takes me
too long to land things, even though we've gone from years to get an answer
to days to
weeks....
I studied the linux approach when FreeBSD was looking to improve its git
workflow. And
none of the current developers think it's a good idea. In fact, I got huge
amounts of grief,
death threats, etc for even suggesting it. Everybody thought, to a person,
that as bad
as our hodge-podge of bugzilla, phabricator and cruddy git push hooks is, it
was light years
ahead of Linux's system and allowed us to move more quickly and produced
results that
were good enough.
So, respectfully, I think Linux has succeeded despite its tooling, not
because of it. Other factors
have made it successful. The heroics that are needed to make it work are
possible only
because there's a huge supply that can be wasted and inefficiently deployed
and still meet
the needs of the project.
Warner