> From: Dan Cross
> These techniques are rather old, and I think go back much further than
> we're suggesting. Knuth mentions nested translations in TAOCP ..
> suggesting the technique was well-known as early as the mid-1960s.
I'm not sure what exactly you're referring to with "[t]hese techniques"; I
gather you are talking about the low-level mechanisms used to implement
'system calls'? If so, please permit me to ramble for a while, and revise
your time-line somewhat.
There are two reasons one needs 'system calls': low-level 'getting there from
here' (which I'll explain below), and 'switching operating/protection
domains' (roughly, from 'user' to 'kernel').
In a batch OS which only runs in a single mode, one _could_ just use regular
subroutine calls to call the 'operating system', to 'get there from here'.
The low-level reason not to do this is that one would need the addresses of
all the routines in the OS (which could change over time). If one instead
used permanent numbers to identify system calls, and had some sort of 'system
call' mechanism (an instruction - although it could be a subroutine call to a
fixed location in the OS), one wouldn't need the addresses. But this is just
low level mechanistic stuff. (I have yet to research whether any early OS's
did use subroutine calls as their interface.)
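To make the 'numbers instead of addresses' point concrete, here is a minimal C
sketch (mine, not anything from an actual early OS; the call numbers and names
are invented): user code reaches the OS through one fixed entry point and
identifies the service by a permanent number, so the OS routines themselves can
move around freely between builds.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define SYS_WRITE 4            /* permanent call numbers, stable across OS rebuilds */
    #define SYS_EXIT  1

    /* The one thing user programs must know: the OS entry point.  In a real
     * system this would be a trap instruction or a jump to a fixed location;
     * here it is just a function with a switch on the call number. */
    static long os_entry(int callno, intptr_t a0, intptr_t a1, intptr_t a2)
    {
        switch (callno) {
        case SYS_WRITE:
            return (long)fwrite((const void *)a1, 1, (size_t)a2, stdout);
        case SYS_EXIT:
            return (long)a0;       /* a real OS would not return here */
        default:
            return -1;             /* unknown call number */
        }
    }

    int main(void)
    {
        const char *msg = "hello via a numbered call\n";
        /* The caller never needs the address of the write routine itself,
         * only the agreed-upon number SYS_WRITE. */
        os_entry(SYS_WRITE, 0, (intptr_t)msg, (intptr_t)strlen(msg));
        return 0;
    }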
The 'switching operating/protection domains' is more fundamental - and
unavoidable. Obviously, it wouldn't have been needed before there were
machines/OS's that operated in multiple modes. I don't have time to research
which was the _first_ machine/OS to need this, but clearly CTSS was an early
one. I happen to have a CTSS manual (two, actually :-), and in CTSS a system
call was:
        TSX     NAMEI, 4
         ..
         ..
NAMEI:  TIA     =HNAME
where 'NAME' "is the BCD name of a legitimate supervisor entry point", and
the 'TIA' instruction may be "usefully (but inexactly) read as Trap Into A
core" (remember that in CTSS, the OS lived in the A core). (Don't ask me what
HNAME is, or what a TSX instruction does! :-)
So that would have been 1963, or a little earlier. By 1965 (see the 1965 Fall
Joint Computer Conference papers:
https://multicians.org/history.html
for more), MIT had already moved on to the idea of using subroutine calls
that could cross protection domain boundaries for 'system calls', for
Multics. The advantage of doing that is that if the machine has a standard
way of passing arguments to subroutines, you natively/naturally get arguments
to system calls.
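As a rough illustration of that argument-passing point (my sketch, not anything
from the Multics papers; the 'gate' name and its signature are invented): when
the supervisor is entered by an ordinary subroutine call that happens to cross
a protection boundary, a kernel service looks to the caller like any library
routine, and its arguments travel by the normal calling convention instead of
being hand-packed into registers around a trap instruction.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical 'gate' into the supervisor.  In a real system the call
     * itself would switch protection domains; here it is a plain function
     * so the sketch compiles and runs. */
    static long gate_read(int stream, char *buf, long nchars)
    {
        const char *msg = "data from the supervisor\n";
        long n = (long)strlen(msg);
        if (n > nchars)
            n = nchars;
        memcpy(buf, msg, (size_t)n);
        (void)stream;              /* unused in this sketch */
        return n;
    }

    int main(void)
    {
        char buf[128];
        /* Arguments are passed exactly as to any other subroutine. */
        long n = gate_read(0, buf, (long)sizeof buf);
        fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }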
Noel
All, just a friendly reminder to use the TUHS mailing list for topics
related to Unix, and to switch over to the COFF mailing list when the
topic drifts away from Unix. I think a couple of the current threads
ought to move over to the COFF list.
Thanks!
Warren
> In order to port VMS to new architectures, DEC/HP/VSI ...
> turned the VAX MACRO assembly language (in which
> some of the VMS operating system was written) into a
> portable implementation language by 'compiling' the
> high-level CISC VAX instructions (and addressing modes)
> into sequences of RISC instructions.
Clem Pease did the same thing to port TMG from IBM 7000-series machines to
the GE 600 series for Multics, circa 1967. Although both architectures had
36-bit words, it was a challenge to adequately emulate IBM's accumulator,
which supported 38-bit sign-magnitude addition, 37-bit twos-complement and
36-bit ones-complement.
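For a feel of why that emulation was fiddly, here is a small C sketch (mine,
only an approximation, and covering just the 36-bit ones-complement case the
note mentions): adding two 36-bit ones-complement values in a wider host
register means masking to 36 bits and folding the carry out of bit 35 back
into the low end (the 'end-around carry'), a detail a plain twos-complement
add on the target machine does not give you for free.

    #include <stdint.h>
    #include <stdio.h>

    #define MASK36 ((uint64_t)0xFFFFFFFFFULL)   /* low 36 bits */

    /* 36-bit ones-complement addition emulated in a 64-bit register. */
    static uint64_t add36_ones(uint64_t a, uint64_t b)
    {
        uint64_t sum = (a & MASK36) + (b & MASK36);
        if (sum > MASK36)                /* carry out of bit 35...        */
            sum = (sum + 1) & MASK36;    /* ...wraps back in (end-around) */
        return sum;
    }

    int main(void)
    {
        uint64_t minus_one = MASK36 ^ 1; /* ones-complement -1: all ones except bit 0 */
        /* 5 + (-1) should print 4 */
        printf("%llu\n", (unsigned long long)add36_ones(5, minus_one));
        return 0;
    }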
Doug
I’ve never heard of a Computer Science or Software Engineering program
that included a ‘case study’ component, especially for Software Development & Projects.
MBA programs feature an emphasis on real-world ‘case studies’, learning from successes & failures,
so that students have a chance of not falling into the same traps.
Creating Unix V6, because it profoundly changed computing & development,
would seem an obvious Case Study for many aspects of Software, Coding and Projects.
There have been many descriptive treatments of Initial Unix,
but I’ve never seen a Case Study,
with explicit lessons drawn, possibly leading to metrics to evaluate Project progress & the coding process.
Developers of Initial Unix were arguably 10x-100x more productive than those of IBM OS/360, a ‘best practice’ development at the time,
so what CSRC did differently is worth close examination.
I’ve not seen an examination of the role of the ‘capability’ of individual contributors, the collaborative, collegial work environment
and the ‘context’: a well-funded organisation not dictating deadlines or product specifications for researchers.
USG, then USL, worked under ’normal commercial’ management pressure for deadlines, features and specifications.
The CSRC/1127 group did have an explicit approach & principles for what they did and how they worked,
publishing a number of books & papers on them - nothing they thought or did is secret or unavailable for study.
Unix & Unix tools were deliberately built with explicit principles, such as “Less is More”.
Plan 9 was also built on explicit Design principles.
The two most relevant lessons I draw from Initial Unix are:
- the same as Royce's original “Software Waterfall” paper,
“build one to throw away” [ albeit, many, many iterations of the kernel & other code ]
- Writing Software is part Research, part Creative ‘Art’:
It’s Done when it's Done, invention & creation can’t be timetabled.
For the most reliable, widely used Open Source projects,
the “Done when it’s Done” principle is universally demonstrated.
I’ve never seen a large Open Source project succeed when attempting to use modern “Project Management” techniques.
These Initial Unix lessons, if correct and substantiated, should cause a revolution in the teaching & practice
of Professional Programming, i.e. Programming In the Large, for both CS & SW.
There are inherent contradictions within the currently taught Software Project Management Methodologies:
- Individual capability & ability is irrelevant
The assumption is ‘programmers’ are fungible/ identical units - all equally able to solve any problem.
Clearly incorrect: course evaluations / tests demonstrate at least a 100x variance in ability in every software dimension.
- Team environment, rewards & penalties and corporate context are irrelevant,
Perverse incentives are widely evident, the cause of many, or all, “Death Marches”.
- The “Discovery & Research Phases” of a project are timetabled, an impossibility.
Any suggestions for Case Studies gratefully accepted.
===========
Professions & Professionals must learn over time:
there’s a negative aspect (don’t do this) and positive aspect (do this) for this deliberate learning & improvement.
Negatives are “Don’t Repeat, or Allow, Known Errors, Faults & Failures”
plus in the Time Dimension, “Avoid Delays, Omissions and Inaction”.
Positives are what’s taught in Case Studies in MBA courses:
use techniques & approaches known to work.
Early Unix, from inception to CACM papers, 1969 to 1974, took probably 30 man-years,
and produced a robust, performant and usable system for its design target, “Software Development”.
Compare this directly with Fred Brooks’ IBM OS/360 effort around 5 years earlier, which consumed 3,000-4,000 man-years,
was known for bugs and poor & inconsistent code quality, needed large resources to run and was, politely, non-performant.
This was a commercial O/S, built by a capable, experienced engineering organisation, betting their company on it,
who assigned their very best to the hardware & software projects. It was “Best of Breed” then, possibly also now.
MULTICS had multiple business partners, without the same, single focus or commercial imperative.
I don’t believe it’s comparable to either system.
Initial Unix wasn’t just edit, compile & run, but filesystems, libraries, debugging & profiling tools, language & compiler construction tools, ‘man’ pages, document prep (nroff/troff) and 'a thousand' general tools leveraging shell / pipe.
This led directly to modern toolchains, config, make & build systems, Version Control, packaging systems, and more.
Nothing of note is built without using descendants or derivatives of these early toolchains.
All this is wrapped around by many Standards, necessary for portable systems, even based on the same platform, kernel and base system.
The “Tower of Babel” problem is still significant & insurmountable at times, even in C-to-C & Linux-to-Linux migration,
but without POSIX/IEEE standards the “Software Tarpit” and “Desert of Despair” would’ve been unsolvable.
The early Unix system proved adaptable and extensible to many other environments, well beyond “Software Development”.
===========
[ waterfall model ]
Managing the development of large software systems: concepts and techniques
W. W. Royce, 1970 [ free access ]
<https://dl.acm.org/doi/10.5555/41765.41801>
STEP3: DO IT TWICE, pg 334
After documentation, the second most important criterion for success revolves around whether the product is totally original.
If the computer program in question is being developed for the first time,
arrange matters so that the version finally delivered to the customer for operational deployment
is actually the second version insofar as critical design/operations areas are concerned.
===========
Plan 9, Design
<https://9p.io/sys/doc/9.html>
The view of the system is built upon three principles.
First, resources are named and accessed like files in a hierarchical file system.
Second, there is a standard protocol, called 9P, for accessing these resources.
Third, the disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.
The unusual properties of Plan 9 stem from the consistent, aggressive application of these principles.
===========
Escaping the software tar pit: model clashes and how to avoid them
Barry Boehm, 1999 [ free access ]
<https://dl.acm.org/doi/abs/10.1145/308769.308775#>
===========
Mythical Man-Month, The: Essays on Software Engineering,
Anniversary Edition, 2nd Edition
Fred Brooks
Chapter 1. The Tar Pit
Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it.
===========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I suppose this question is best directed at Rob or Doug, but just might as well ask at large: Given AT&T's ownership of Teletype and the involvement (AFAIK) of Teletype with other WECo CRT terminals (e.g. Dataspeed 40 models), was there any direct involvement of folks from the Teletype side of things in the R&D on the Jerq/Blit/DMD? Or was this terminal pure Bell Labs?
- Matt G.
Good afternoon, I was wondering if anyone currently on list is in possession of a copy of the UNIX Programmer's Manual for Program Generic Issue 3 from March of 1977. This is the version nestled between Issue 2 which Al Kossow has preserved here:
http://www.bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_1…
and the MERT 0 release provided by I believe Heinz Lycklama here:
https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/
If I might make a request of anyone having such a copy, could I trouble you for at the very least scans of the lines(V), getty(VIII), and init(VIII) pages? I can't 100% confirm the presence of the first page, but the instructions for replacing PG3 pages with MERT0 pages above indicate a page called lines was present in section V of the PG3 manual, and there is a fragment of a lines(5) page in the CB-UNIX 2.3 manual here:
https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/man/man5/lines.5.pdf
In short, lines there appears to be a predecessor to inittab(5) and, if the references in CB and USG PG3 are the same, points to the earliest appearance in the wild of System V-style init in PG3 all the way back in 1977. Granted we don't have earlier CB-UNIX literature to wholly confirm whether this started in PG3 or some pre-'77 issue of CB-UNIX, but I'm quite interested in seeing how these relate.
Thanks for any help!
- Matt G.
> From: Steve Jenkin
> C wasn't the first standardised coding language, FORTRAN & COBOL at
> least were before it
There were a ton; Algol-60 is the most important one I can think of.
(I was thinking that Algol-60 was probably an important precursor to BCPL,
which was the predecessor to C, but Richards' first BCPL paper:
https://dl.acm.org/doi/10.1145/1476793.1476880
"BCPL: A tool for compiler writing and system programming" doesn't call it
out, only CPL. However, CPL does admit its debt to Algol-60: "CPL is to a
large extent based on ALGOL 60".)
Noel
Howdy,
I now have this pictured 3B21D in my facility
http://kev009.com/wp/2024/07/Lucent-5ESS-Rescue/
It will be a moment before I can start work on documentation of the
3B21D and DMERT/UNIX-RTR but wanted to share the news.
Regards,
Kevin
So I'm doing a little bit of the footwork in prep for analyzing manual differences between Research, Program Generic, and PWB/UNIX, and found something interesting.
The LIL Programming Language[1] was briefly available as a user-supported section 6 command on Fifth Edition (1974) UNIX, appearing as a page but not even making it into the TOC. It was gone as quickly as it appeared in the Research stream, not surviving into V6.
However, with Al Kossow's provided Program Generic Issue 2 (1976) manual[2] as well as notes in the MERT Issue 0 (1977) manual [3], it appears that LIL was quite supported in the USG Program Generic line, making it into section 1 of Issue 2 and staying there through to Issue 3. lc(1) happens to be one of the pages excised in the transformation from PG Issue 3 to MERT Issue 0.
This had me curious, so I went looking around the extant V5 sources and couldn't find anything resembling the LIL compiler. Does anyone know if this has survived in any other fashion? Additionally, does anyone have any recollection of whether LIL was in significant use at USG-supported UNIX sites, or if it somehow made it into section 1 and spread around due to the state of its use in Research at the time USG sampled the userland?
Finally, one little tidbit from P.J. Plauger's paper[1] stuck out to me: "...the resulting language was used to help write a small PDP-11/10 operating system at Bell Labs." Does anyone have any information about this operating system, whether it was a LIL experiment or something purpose-driven and used in its own right after creation?
[1] - http://www.ultimate.com/phil/lil/
[2] - http://bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_19760…
[3] - https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Pgs%2001-…'s%20Manual%20for%20MERT.pdf
> Peter J Denning in 2008 wrote about reforming CACM in 1982/83. [ extract at end ]
> <https://cacm.acm.org/opinion/dja-vu-all-over-again/>
That "accomplishment" drove me away from the ACM. I hope the following
explanation does not set off yet another long tangential discussion.
The CACM had been the prime benefit of ACM membership. It carried generally
accessible technical content across the whole spectrum of computing. The
"Journal for all Members" (JAM) reform resulted in such content being
thinly spread over several journals. To get the perspective that CACM had
offered, one would have had to subscribe to and winnow a mountain of
specialist literature--assuming the editors of these journals would accept
some ACM-style articles.
I had been an active member of ACM, having served as associate editor of
three journals, member of the publications planning committee, national
lecturer, and Turing-Award chairman. When the JAM reform cut off my window
into the field at large, I quit the whole organization.
With the advent of WWW, the ACM Digital Library overcame the need to
subscribe to multiple journals for wide coverage. Fortunately I had
institutional access to that. I rejoined ACM only after its decision to
allow free access to all 20th century content in the DL. This
public-spirited action more than atoned for the damage of the JAM reform
and warranted my support.
I have been happy to find that the current CACM carries one important
technical article in each issue and a couple of interesting columnists
among the generally insipid JAM content. And I am pleased by the news that
ACM will soon also give open access to most 21st-century content.
Doug