I don't know of any OSes that use floating point. But the IBM operating
systems for S/360/370 did use packed decimal instructions in a few places.
This was an issue for the System/360 model 44. The model 44 was
essentially a model 40 but with the (much faster) model 65's floating point
hardware. It was intended as a reduced-cost high-performance technical
computing machine for small research outfits.
To keep the cost down, the model 44 lacked the packed decimal arithmetic
instructions, which of course are not needed in HPTC. But that meant that
off-the-shelf OS/360 would not run on the 44. It had its own OS called
PS/44.
IIRC VAX/VMS ran into similar issues when the MicroVAX architecture was
adopted. To save on chip real estate, MicroVAX did not implement packed
decimal, the complicated character string instructions, H-floating point,
and some other exotica (such as CRC) in hardware. They were emulated by
the OS. For performance reasons it behooved one to avoid those data types
and instructions on later VAXen.
I once traced a severe performance problem to a subroutine where there were
only a few instructions that weren't generating emulator faults. The
culprit was the oddball conversion semantics of PL/I, which caused what
should have been D-float arithmetic to be done in 15-digit packed decimal.
Once I fixed that the program ran 100 times faster.
-Paul W.
On Mon, Jul 8, 2024 at 9:04 PM Aron Insinga <aki(a)insinga.com> wrote:
> I found it sad, but the newest versions of the BLISS compilers do not
> support using it as an expression language. The section bridging pp
> 978-979 (as published) of Brender's history is:
>
> "The expression language characteristic was often highly touted in the
> early years of BLISS. While there is a certain conceptual elegance that
> results, in practice this characteristic is not exploited much.
> The most common applications use the if-then-else expression, for
> example, in something like the maximum calculation illustrated in Figure 5.
> Very occasionally there is some analogous use of a case expression.
> Examples using loops (taking advantage of the value of leave), however,
> tend not to work well on human factors grounds: the value computed tends to
> be visually lost in the surrounding control constructs and too far removed
> from where it will be used; an explicit assignment to a temporary variable
> often seems to work better.
> On balance, the expression characteristic of BLISS was not terribly
> important."
>
Ron Brender is correct. All of the software development groups at DEC had
programming style guidelines and most of those frowned on the use of BLISS
as an expression language. The issue is maintainability of the code. As
Brender says, a human factors issue.
> Another thing that I always liked (but which is still there) is the ease of
> accessing bit fields with V<FOO_OFFSET, FOO_SIZE> which was descended from
> BLISS-10's use of the PDP-10 byte pointers. [Add a dot before V to get an
> rvalue.] (Well, there was this logic simulator which really packed data
> into bit fields of blocks representing gates, events, etc....)
>
Indeed. BLISS is the best bit-banging language around. The field
reference construct is a lot more straightforward than the and/or bit masks
in most languages. In full the construct is:
expression-1<offset-expr, size-expr, padding-expr>
expression-1 is a BLISS value from which the bits are to be extracted.
offset-expr is the start of the field to be extracted (bit 0 being the low bit
of the value) and size-expr is the number of bits to be extracted. The
value of the whole mess is a BLISS value with the extracted field in the
low-order bits. padding-expr controls the value used to pad the high order
bits: if even, zero-padded, if odd, one-padded.
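A rough C sketch of these semantics, for concreteness (the helper name, the 32-bit word size, and the exact treatment of padding are my illustrative assumptions, not anything taken from a BLISS compiler):

```c
#include <stdint.h>

/* Illustrative model of BLISS  expression-1<offset, size, padding>:
   extract `size` bits starting at bit `offset` (bit 0 = low bit),
   then pad the high-order bits with zeros (even pad) or ones (odd). */
static uint32_t bliss_field(uint32_t value, unsigned offset,
                            unsigned size, unsigned pad)
{
    uint32_t mask  = (size >= 32) ? ~0u : ((1u << size) - 1u);
    uint32_t field = (value >> offset) & mask;
    if (pad & 1)               /* odd padding-expr: one-fill high bits  */
        field |= ~mask;
    return field;              /* even padding-expr: zero-fill (no-op) */
}
```

So `bliss_field(x, 8, 4, 0)` would model `X<8, 4, 0>`, leaving bits 8-11 of `x` in the low four bits of the result.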
I always wondered how this would work on the IBM S/360/370 architecture.
It is big-endian and bit 0 of a machine word is the most significant bit,
not the least significant as in DEC's architectures.
-Paul W.
[redirecting this to COFF]
On Mon, Jul 8, 2024 at 5:40 PM Aron Insinga <aki(a)insinga.com> wrote:
>
> When DEC chose an implementation language, they knew about C but it had
> not yet escaped from Bell Labs. PL/I was considered, but there were
> questions of whether or not it would be suitable for a minicomputer. On
> the other hand, by choosing BLISS, DEC could start with the BLISS-11
> cross compiler running on the PDP-10, which is described in
> https://en.wikipedia.org/wiki/The_Design_of_an_Optimizing_Compiler
> BLISS-11
> and DEC's Common BLISS had changes necessitated by different
> word lengths and architectures, including different routine linkages
> such as INTERRUPT, access to machine-specific operations such as INSQTI,
> and multiple-precision floating point operations using builtin functions
> which used the addresses of data instead of the values.
>
> In order to port VMS to new architectures, DEC/HP/VSI retargeted and
> ported the BLISS compilers to new architectures.
>
There have in general been two approaches to achieving language
portability (machine independence).
One of them is to provide only abstract data types and operations on them
and to completely hide the machine implementation. PL/I and especially Ada
use this approach.
BLISS does the exact opposite. It takes the least common denominator. All
machine architectures have machine words and ways to pick them apart.
BLISS has only one data type--the word. It provides a few simple
arithmetic and logical operations and also syntax for operating on
contiguous sets of bits within a word. More complicated things such as
floating point are done by what look like routine calls but are actually
implemented in the compiler.
BLISS is also a true, full-blown expression language. Statement constructs
such as if/then/else have a value and can be used in expressions. In C
terminology, everything in BLISS is an lvalue. A semicolon terminates an
expression and throws its value away.
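For readers more familiar with C, the nearest analogue is the conditional operator; a small sketch (the BLISS fragment in the comment is indicative syntax only, not checked against a real compiler):

```c
/* In BLISS, if-then-else is itself an expression with a value;
   C offers this only through the ?: operator, sketched here. */
static int max_expr(int a, int b)
{
    /* BLISS (roughly):  max = (IF .a GTR .b THEN .a ELSE .b); */
    return (a > b) ? a : b;
}
```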
BLISS is also unusual in that it has an explicit fetch operator, the dot
(.). The assignment expression (=) has the semantics "evaluate the
expression to the right of the equal sign and then store that value in the
location specified by the expression to the left of the equal sign".
Supposing that a and b are identifiers for memory locations, the expression:
a = b;
means "place b (the address of a memory location) at the location given by
a (also a memory location)". This is the equivalent of:
a = &b;
in C. To get C's version of "a = b;" in BLISS you need an explicit fetch
operator:
a = .b;
Forgetting to use the fetch operator is probably the most frequent error
made by new BLISS programmers familiar with more conventional languages.
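One way to model the dot in C, under the assumption (from the description above) that a bare BLISS name denotes an address; the helper names here are invented for illustration:

```c
#include <stdint.h>

/* BLISS:  a = b;   -- stores b's *address* into the word named a */
static void store_name(uintptr_t *a, uintptr_t *b)
{
    *a = (uintptr_t)b;
}

/* BLISS:  a = .b;  -- the dot fetches b's *value* first */
static void store_fetch(uintptr_t *a, uintptr_t *b)
{
    *a = *b;
}
```

Omitting the dot is thus the moral equivalent of writing `a = &b;` in C when you meant `a = b;`.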
DEC used four dialects of BLISS as their primary software development
language: BLISS-16, BLISS-32, BLISS-36, and BLISS-64, the numbers
indicating the BLISS word size in bits. BLISS-16 targeted the PDP-11 and
BLISS-36 the PDP-10. DEC did implementations of BLISS-32 for VAX, MIPS,
and x86. BLISS-64 was targeted to both Alpha and Itanium. VSI may have a
version of BLISS-64 that generates x86-64 code.
-Paul W.
I moved this to COFF since it's a TWENEX topic. Chet Ramey pointed folks
at a wonderful set of memories from Dan Murphy WRT the development of
TENEX, which later became TOPS-20. But one comment caught me as particularly
wise and should be understood and digested by all:
*"... a complex, complicated design for some facility in a software or
hardware product is usually not an indication of great skill and maturity
on the part of the designer. Rather, it is typically evidence of lack of
maturity, lack of insight, lack of understanding of the costs of
complexity, and failure to see the problem in its larger context."*
All, recently I saw on Bruce Schneier's "Cryptogram" blog that he has had
to change the moderation policy due to toxic comments:
https://www.schneier.com/blog/archives/2024/06/new-blog-moderation-policy.h…
So I want to take this opportunity to thank you all for your civility
and respect for others on the TUHS and COFF lists. The recent systemd
and make discussions have highlighted significant differences between
people's experiences and opinions. Nonetheless, apart from a few pointed
comments, the discussions have been polite and informative.
These lists have been in use for decades now and, thankfully, I've
only had to unsubscribe a handful of people for offensive behaviour.
That's a testament to the calibre of people who are on the lists.
Cheers and thank you again,
Warren
P.S. I'm a happy Devuan (non-systemd) user for many years now.
[Moved to COFF. Mercifully this really has nothing to do with Unix]
On Wednesday, 19 June 2024 at 22:09:11 -0700, Luther Johnson wrote:
> On 06/19/2024 10:01 PM, Scot Jenkins via TUHS wrote:
>> "Greg A. Woods" <woods(a)robohack.ca> wrote:
>>
>>> I will not ever allow cmake to run, or even exist, on the machines I
>>> control...
>>
>> How do you deal with software that only builds with cmake (or meson,
>> scons, ... whatever the developer decided to use as the build tool)?
>> What alternatives exist short of reimplementing the build process in
>> a standard makefile by hand, which is obviously very time consuming,
>> error prone, and will probably break the next time you want to update
>> a given package?
>>
>> If there is some great alternative, I would like to know about it.
>
> I just avoid tools that build with CMake altogether, I look for
> alternative tools. The tool has already told me, what I can expect from
> a continued relationship, by its use of CMake ...
That's fine if you have the choice. I use Hugin
(https://hugin.sourceforge.io/) a panorama stitcher, and the authors
have made the decision to use cmake. I don't see any useful
alternative to Hugin, so I'm stuck with cmake.
Greg
--
Sent from my desktop computer.
Finger grog(a)lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program
reports problems, please read http://lemis.com/broken-MUA.php
Sort of between kernel and user mode: Unix [zillion trademarks etc] never
used it, but did RSX-11?
I used the latter long enough to hate it until Edition 5 arrived at UNSW,
and I still remember being blown away by the fact that there was nothing
privileged about the Shell :-)
-- Dave
This report [link at end] about a security issue with VMware vSphere, stemming from the design/architecture, resonated with me and the recent TUHS “Unix Philosophy” thread.
Many of the criticisms of Unix relate to not understanding its purpose and design criteria:
A platform on which to develop (other) Software. Which implies ‘running, profiling, testing & debugging’ that code.
Complaining that Unix tools/utilities are terse and arcane for non-developers & testers, needing a steep Learning Curve,
is the same as complaining a large truck doesn’t accelerate or corner like a sports car.
Plan 9, by the same core team twenty years later, addresses the same problems with modern hardware & graphics, including with Networking.
The system they developed in 1990 would’ve been proof against both vSphere attacks because of its security-by-design:
No ‘root’ user, hence no ’sudo’
and no complex, heavyweight RPC protocol with security flaws, instead the simple, lightweight & secure 9P protocol.
It seems Eric Raymond’s exposition on the “Unix Philosophy” is the basis of much of the current understanding / view.
In the ESR & other works cited on Wikipedia, I see a lot about “Userland” approaches,
nothing about the Kernel, Security by Design and innovations like ’shells’, ‘pipes’ and the many novel standard tools - that is,
being able to Reuse standard tools and ’stand on the shoulders of giants’ [ versus constantly Reinventing the Wheel, poorly ]
ESR was always outside CSRC and from his resume, not involved with Unix until 1983 at best.
He’s certainly been a mover & shaker in the Linux and associated (GNU led) Open Source community.
<http://catb.org/~esr/resume.html>
ESR baldly states "The Unix philosophy is not a formal design method”,
which isn’t strictly untrue but is highly misleading, IMHO.
Nor is the self-description by members of CSRC as having “good taste” a full and enlightening description of their process.
There’s not a general appreciation, even in Research & Academic circles, that “Software is a Performance Discipline”,
in the same way as Surgery, Rocketry, Aviation, Music, Art and physical disciplines (dance, gymnastics, even rock climbing) are “Performance” based.
It requires both Theory and Practice.
If an educator hasn’t worked on at least one 1M LOC system, how can they teach “Programming in the Large”, the central problem of Software Engineering?
[ an aside: the problem “golang” addressed was improving Software Engineering, not simply a language & coding. ]
There’s a second factor common to all high-performance disciplines,
why flying has become cheaper, safer and faster since the first jets & crashes of the 1950’s:
- good professionals deliberately improve, by learning from mistakes & failures and (perhaps) adopting better practices,
- great professionals don’t just ‘improve’, they actively examine how & why they created Errors, Faults & Failures and detect / remove root causes.
The CSRC folk used to hate Corporate attempts at Soft Skills courses, calling them “Charm School”.
CSRC's deliberate and systematic learning, adaptation and improvement wasn’t accidental or incidental,
it was the same conscious approach used by Fairchild in its early days, the reason it quickly became the leader in Silicon devices, highly profitable, highly valued.
Noyce & Moore, and I posit CSRC too, applied the Scientific Method to themselves and their practices, not just to their research field.
IMO, this is what made CSRC unique - they were active Practitioners, developing high-quality, highly-performant code, as well as being astute Researchers,
providing quantifiably better solutions with measurable improvements, not prototypes or partial demonstrators.
Gerard Holzmann’s 1127 Alumni page shows the breadth & depth of talent that worked at CSRC.
The group was unusually productive and influential. [ though I’ve not seen a ‘collected works’ ]
<http://spinroot.com/gerard/1127_alumni.html>
CSRC/1127 had a very strong culture and a very deliberate, structured ‘process’
that naturally led to a world-changing product in 1974 from only ~30 man-years of effort, a minor effort in Software Projects:
perfective “iterative design”, rigorous testing, code quality via a variation of pair-programming,
collaborative design with group consultation / discussion
and above all “performant” code - based first on ‘correct’ and ’secure’,
backed by Doug McIlroy’s insistence on good documentation for everything.
[ It’s worth noting that in the original paper on the “Waterfall” development process, it isn’t “Once & Done”, it’s specifically “do it twice” ]
[ the Shewhart Cycle, promoted by Deming, Plan - Do - Check - Act, was well known in Engineering circles, known to be very Effective ]
Unix - the kernel & device drivers, the filesystem, the shell, libraries, userland and standard tools - wasn’t done in a hurry between 1969 & 1974’s CACM article.
It was written and rewritten many times - far more than the ‘versions’, derived from the numbering of the manuals, might suggest.
Ken’s comment on one of his most productive days, “throwing away 1,000 lines of code”,
demonstrates this dynamic environment dominated by trials, redesign and rewriting - backed by embedded ‘instrumentation’ (profiling).
Ken has also commented he had to deliberately forget all his code at one point (maybe after 1974 or 77).
He was able to remember every line of code he’d written, in every file & program.
I doubt that was an innate skill, even if so, it would’ve improved by deliberate practice, just as in learning to play a musical instrument.
There’s a lot of research in Memory & Recall, all of which documents ‘astonishing’ performance by ‘ordinary’ people, with only a little tuition and deliberate practice.
CSRC had a scientific approach to software design and coding, unlike any I’ve seen in commercial practice, academic research or promoted “Methodologies”.
There’s a casual comment by Dennis in “Evolution of Unix”, 1979, about rewriting the kernel, improving its organisation and adding multiprogramming.
By one person, in months. A documented, incontestable level of productivity: 100x-1000x that of programmers practising mainstream “methodologies”.
Surely that performance alone would’ve been worthy of intensive study as the workforce & marketplace implications are profound.
<https://www.bell-labs.com/usr/dmr/www/hist.pdf>
Perhaps the most important watershed occurred during 1973, when the operating system kernel was rewritten in C.
… The success of this effort convinced us that C was useful as a nearly universal tool for systems programming, instead of just a toy for simple applications.
The CSRC software evolution methodology is summed up perfectly in Baba Brinkman’s Evolution Rap:
"Performance, Feedback, Revision”
<https://www.youtube.com/watch?v=gTXVo0euMe4>
Website: <https://bababrinkman.com/>
ABC Science Show, 2009, 54 min audio, no transcript
This is the performance Baba gave at the Darwin Festival in Cambridge England, July 2009.
<https://www.abc.net.au/listen/programs/scienceshow/the-rap-guide-to-evoluti…>
Ken also commented that they divided up the coding work, seemingly informally but in a disciplined way,
so that there was only ever one time they created the same file. [ "mis-coordination of work”, Turing Award speech ]
To prove they had well-defined coding / naming standards and followed them, the two 20-line files were identical…
———————
There’s a few things with the “Unix Philosophy” that are critical and not included in the commentaries I’ve read or seen quoted:
- The Unix kernel was ‘conservative’, not inventive or novel.
It deliberately used only known, proven solutions, with a focus on small, correct, performant. “Just Worked”, not “Worked, Just”.
Swapping was used, while Virtual Memory was not implemented, because they didn’t know of a definitive solution.
They avoided the “Second System Effect” - showing how clever they were - working as professional engineers producing a robust, reliable, secure system.
- Along with Unix (kernel, fsys, userland), CSRC developed a high-performance high-quality Software Development culture and methodology,
The two are inseparable, IMO.
- Professionals do not, cannot, write non-trivial code in a “One and Done” manner. Professional-quality code takes time and always evolves.
It takes significant iterative improvement, including redesign, to develop large systems,
with sufficient security, reliability, maintainability and performance.
[ Despite 60 years of failed “Big Bang” projects using “One & Done”, Enterprises persist with this idiocy, wasting billions every year ]
- Unix was developed to provide CSRC with a great environment for their own work. It never attempted to be more, but has been applied ‘everywhere’.
Using this platform, members of the team developed a whole slew of important and useful tools,
now taken as a given in Software Development: editors, typesetting, ‘diff’ and Version Control, profiling, debugging, …
This includes the computer Language Tools, now core to every language & system.
- Collaboration and Sharing, both ways, was central to the Unix Philosophy developed at CSRC.
Both within the team, within Bell Labs and other Unix installations, notably USENIX & UCB and its ARPA-IPTO-funded CSRG.
The world of Software and Code Development is clearly in two Eras, “Before Unix” and “After”.
Part of this is “Open Source”, not just shared source targeted for a single platform & environment, but source code mechanically ported to new platforms.
This was predicated on the original CSRC / Bell Labs attitude of Sharing the Source…
Source was shared in & out,
directly against the stance of the Legal Dept, intent on tightly controlling all Intellectual Property with a view of extracting “revenue streams” from clients.
Later events proved CSRC’s “Source Code Sharing” was far more powerful and profitable than a Walled Garden approach of endlessly reinventing the wheel and competing, not cooperating, with others.
Senior Management and the old-school lawyers arguably overestimated their marketing & product capability,
wildly underestimated the evolution of computing, and completely failed to understand the PC era, prompting Bill Gates’ admonishment:
“You guys don’t get it, it’s all about Volume”.
In 1974, Unix was described publicly in CACM.
In 1977, USG (then later Unix System Labs) was formed to work on and sell Unix commercially, locking down the I.P., with no free source code.
In 1984, AT&T ‘de-merged’, keeping Bell Labs, USL and Western Electric - all the hardware and software needed to “Rule the World” and beat IBM.
In 1994, AT&T gave up being the new IBM and sold its hardware and software divisions.
In 2004, AT&T was bought by one of its spinoffs, SBC (Southwestern Bell),
who’d understood Mobile Telephony (passing on to customers the savings from new technology), then merged and rebranded itself as “AT&T”.
In the “Unix Wars” of the 1990’s, vendors bought AT&T licenses and confused “Point of Difference” with “Different & Incompatible”.
They attempted Vendor lock-in, a monopoly tactic to create captive markets that could be gouged.
This failed for two reasons, IMO:
- the software (even binaries) and tools were all portable, the barriers to exit were low.
- Unix wasn’t the only competitor
Microsoft used C to write Windows NT and Intel-based hardware to undercut Unix Servers & Workstations by 10x.
Bill Gates understood ‘Volume’ and the combined AT&T and Unix vendors didn’t.
================
VMware by Broadcom warns of two critical vCenter flaws, plus a nasty sudo bug
<https://www.theregister.com/2024/06/18/vmware_criticial_vcenter_flaws/>
VMware's security bulletin describes both of the flaws as "heap-overflow vulnerabilities in the implementation of the DCE/RPC protocol” …
DCE/RPC (Distributed Computing Environment/Remote Procedure Calls)
is a means of calling a procedure on a remote machine as if it were a local machine – just the ticket when managing virtual machines.
================
CHM, 2019
<https://computerhistory.org/blog/the-earliest-unix-code-an-anniversary-sour…>
As Ritchie would later explain:
“What we wanted to preserve was not just a good environment to do programming, but a system around which a fellowship could form.
We knew from experience that the essence of communal computing, as supplied from remote-access, time-shared machines,
is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.”
================
Ken Thompson, 1984 Turing Award paper
Reflections on Trusting Trust
To what extent should one trust a statement that a program is free of Trojan horses?
Perhaps it is more important to trust the people who wrote the software.
That brings me to Dennis Ritchie.
Our collaboration has been a thing of beauty.
In the ten years that we have worked together, I can recall only one case of mis-coordination of work.
On that occasion, I discovered that we both had written the same 20-line assembly language program.
I compared the sources and was astounded to find that they matched character-for-character.
The result of our work together has been far greater than the work that we each contributed.
================
The Art of Unix Programming
by ESR
<http://www.catb.org/~esr/writings/taoup/html/index.html>
Basics of the Unix Philosophy
<http://www.catb.org/~esr/writings/taoup/html/ch01s06.html>
================
Wiki
ESR
<https://en.wikipedia.org/wiki/Eric_S._Raymond>
Unix Philosophy
<https://en.wikipedia.org/wiki/Unix_philosophy>
================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
There seems to be some confusion, but I've heard enough sources now
that I believe it to be confirmed. Notably, faculty at UMich EECS have
shared that it was passed to them internally.
RIP Lynn Conway, architect of the VLSI revolution and long-time
transgender activist. She apparently died from heart failure; she was
86. http://www.myhusbandbetty.com/wordPressNEW/2024/06/11/lynn-conway-january-2…
- Dan C.
Could interest a few OFs here... I've used the -8 and of course the -11,
but not the -10 so I may as well start now.
-- Dave
---------- Forwarded message ----------
From: A fellow geek
To: Dave Horsfall <dave(a)horsfall.org>
Subject: PiDP-10 — The MagPi magazine
RasPi is now masquerading as a PDP-10…
https://magpi.raspberrypi.com/articles/pidp-10