Please excuse the wide distribution, but I suspect this will have general
interest in all of these communities due to the loss of the LCM+Labs.
The good folks from SDF.org are trying to create the Interim Computer
Museum:
https://icm.museum/join.html
As Lars pointed out in an earlier message to COFF, there is a 1-hour
presentation on the plans for the ICM.
https://toobnix.org/w/ozjGgBQ28iYsLTNbrczPVo
FYI: The yearly (Bootstrap) subscription is $36
They need the money to try to keep some of these systems online and
available. The good news is that it looks like many of the assets, such as
Miss Piggy, the Multics work, the Toads, and others, from the old LCM are
going to be headed to a new home.
Just sharing a copy of the Roff Manual that I had forgotten I scanned a little while back:
https://archive.org/details/roff_manual
This appears to be the UNIX complement to the S/360 version of the paper backed up by Doug here: https://www.cs.dartmouth.edu/~doug/roff71/roff71.pdf
From what I can tell, this predates both 1973's V3 and the 1971 S/360 version of the paper, putting it somewhere prior to 1971. For instance, it is missing the .ar, .hx, .ig, .ix, .ni, .nx, .ro, .ta, and .tc requests found in V3. The .ar, .ro, and .ta requests pop up in the S/360 paper; the rest are in the V3 manpage (prior manpages don't list the request summary).
If anyone has some authoritative date information I can update the archive description accordingly.
Finally, this may well be missing the last page: the Page offset, Merge patterns, and Envoi sections of Doug's paper are not reflected here. At the very least, though, the .mg request is not in this paper, so the Merge patterns section probably wasn't there anyway.
- Matt G.
I had meant to copy TUHS on this.
On Wed, Jul 17, 2024 at 2:41 PM Tom Lyon <pugs78(a)gmail.com> wrote:
> I got excited by your mention of a S/360 version, but Doug's link talks
> about the GECOS version (GE/Honeywell hardware).
>
> Princeton had a S/360 version at about that time, it was a re-write of a
> version for the IBM 7094 done by Kernighan after spending a summer at MIT
> with CTSS and RUNOFF. I'm very curious whether the Princeton S/360 version
> spread to other locations. Found this article in the Daily Princetonian
> about the joy and history of ROFF.
> https://photos.app.goo.gl/zMWV1GRLZdNBUuP36
>
> On Wed, Jul 17, 2024 at 1:51 PM segaloco via TUHS <tuhs(a)tuhs.org> wrote:
>
>> Just sharing a copy of the Roff Manual that I had forgotten I scanned a
>> little while back:
>>
>> https://archive.org/details/roff_manual
>>
>> This appears to be the UNIX complement to the S/360 version of the paper
>> backed up by Doug here:
>> https://www.cs.dartmouth.edu/~doug/roff71/roff71.pdf
>>
>> From the best I could tell, this predates both 1973's V3 and the 1971
>> S/360 version of the paper, putting it somewhere prior to 1971. For
>> instance, it is missing the .ar, .hx, .ig, .ix, .ni, .nx, .ro, .ta, and .tc
>> requests found in V3. The .ar and .ro, and .ta requests pop up in the
>> S/360 paper, the rest are in the V3 manpage (prior manpages don't list the
>> request summary).
>>
>> If anyone has some authoritative date information I can update the
>> archive description accordingly.
>>
>> Finally, this very well could be missing the last page, the Page offset,
>> Merge patterns, and Envoi sections of Doug's paper are not reflected here,
>> although at the very least, the .mg request is not in this paper so the
>> Merge patterns section probably wasn't there anyway.
>>
>> - Matt G.
>>
>
> Yeah, but if you do that you have to treat the places
> acquired in the Louisiana Purchase differently because
> they switched in 1582. And Puerto Rico. Bleh.
Then there are all the German city states. And the
shifting borders of Poland. (cal -s country) is a mighty
low-res "solution" to the Julian/Gregorian problem.
Doug
The manpage for "cal" used to have the comment "Try September 1752" (and
yes, I know why); it's no longer there, so when did it disappear? The
SysV fun police?
I remember it in Ed5 and Ed6, but can't remember when I last saw it.
Thanks.
-- Dave
A few folks on the PIDP mailing lists asked me to put scans of the cards I
have on-line. I included TUHS as some of the new to the PDP-11 folks might
find these interesting also.
I also included a few others from other folks. Note my scans are in 3
formats (JPG, TIFF, PDF) as each has advantages. Pick which one you
prefer. I tried to scan at as high a resolution as I could in case
someone wants to try to print them later.
I may try adding some of my other cards, such as my microprocessor and IBM
collections, in the future.
https://drive.google.com/open?id=13dPAlRMQEwNvPwLXwlOC5Q_ZrQp4IpkJ&usp=driv…
I subscribe to the TUHS mailing list, delivered in digest form. I do not
remember having subscribed to COFF, and am not aware of how to do so. Yet
COFF messages come in my TUHS digest. How does COFF differ from TUHS and how
does one subscribe to it (if at all)?
Doug
Just had a quick look at 'man cal' on the Unixes I've got at hand. Here's a
'cut and paste' of the relevant parts.
SCO UNIX 3.2V4.2
Limitations
Note that ``cal 84'' refers to the year 84, not 1984.
The calendar produced is the Gregorian calendar from September 14 1752
onward. Dates up to and including September 2 1752 use the Julian calendar.
(England and her colonies switched from the Julian to the Gregorian
calendar in September 1752, at which time eleven days were excised from
the year. To see the result of this switch, try cal 9 1752.)
Digital UNIX 4.0g
DESCRIPTION
The cal command writes to standard output a Gregorian calendar for the
specified year or month.
For historical reasons, the cal command's Gregorian calendar is
discontinuous. The display for September 1752 (cal 9 1752) jumps from
Wednesday the 2nd to Thursday the 14th.
--
The more I learn the better I understand I know nothing.
All ...
----- Forwarded message from Poul-Henning Kamp -----
Subject: DKUUG, EUUG and 586 issues (3700+ pages) of Unigram-X 1988…1996
(Please forward to the main TUHS list if you think it is warranted)
A brief intro: Datamuseum.dk is a volunteer-run effort to collect,
preserve and present "The Danish IT-history".
UNIX is part of that history, but our interest is seen through the
dual prisms of "Danish IT-History" and the computers in our collection.
My own personal UNIX interest is of course much deeper and broader,
which is why I'm sending this email.
Recently we helped clean out the basement under the Danish Unix
User's Group (DKUUG), which is winding down, and we hauled off a lot
of stuff, including much EUUG (European Unix Users Group) material.
As I feed it through the scanner, the EUUG-newsletters will appear here:
https://datamuseum.dk/wiki/Bits:Keyword/PERIODICALS/EUUG-NEWSLETTER
And proceedings from EUUG conferences (etc.) will appear here:
https://datamuseum.dk/wiki/Bits:Keyword/DKUUG/EUUG
I also found four boxes full of "Unigram-X" newsletters.
Unigram-X was a newsletter, published weekly out of London. A
typical issue was two yellow A3 sheets folded, or if news was
slight, a folded A3 with an A4 insert.
… and that is just about all I know about it.
But whoever wrote it, they clearly had an amazing Rolodex.
In total there are a tad more than 3700 pages of real-time news and
gossip about the UNIX world from 1986 to 1996.
It's not exactly core material for datamuseum.dk, but it is a
goldmine for UNIX history, so I have spent two full days scanning
and getting all the pages sorted, flipped and split into
one-year-per-pdf files.
I should warn that neither the raw material nor the scan is perfect,
but this is it, unless somebody else feels like going through it again.
(The paper stays in our collection, no rush.)
I need to go through and check for pages being upside down or out
of order, before I ingest the PDFs into the Datamuseum.dk bitarchive,
but here is a preview:
https://phk.freebsd.dk/misc/unigram_x_1986_0034_0058.pdf
https://phk.freebsd.dk/misc/unigram_x_1987_0059_0108.pdf
https://phk.freebsd.dk/misc/unigram_x_1988_0109_0159.pdf
https://phk.freebsd.dk/misc/unigram_x_1989_0160_0211.pdf
https://phk.freebsd.dk/misc/unigram_x_1990_0212_0262.pdf
https://phk.freebsd.dk/misc/unigram_x_1991_0263_0313.pdf
https://phk.freebsd.dk/misc/unigram_x_1992_0314_0365.pdf
https://phk.freebsd.dk/misc/unigram_x_1993_0366_0416.pdf
https://phk.freebsd.dk/misc/unigram_x_1994_0417_0467.pdf
https://phk.freebsd.dk/misc/unigram_x_1995_0468_0518.pdf
https://phk.freebsd.dk/misc/unigram_x_1996_0519_0616.pdf
My ulterior motives for this preview are several:
If you find any out-of-order or rotated pages, please let me know.
It's not a complete collection, the following issues are missing:
1…33 35 39…49 86…87 105 138 229 321 400 405…406 496 498
507 520 523…524 527…528 548 613 615 617…
It would be nice to fill the holes.
As a matter of principle, we do not store OCR'ed PDF's in the
datamuseum.dk bitarchive[1], and what with me still suffering from
a job etc, I do not have the time to OCR 3700+ pages under any
circumstances.
But even the most crude and buggy OCR reading would be a great
resource to grep(1), so I'm hoping somebody else might find
the time and inclination?
And a "best-of-unigram-x" page on the TUHS wiki may be warranted,
because there are some seriously great nuggets in there :-)
Enjoy,
Poul-Henning
[1] I'm not entertaining any arguments about this: We're trying
to align with best practice in historical collection world.
The argument goes: Unless the OCR is perfect, people will do
a text-search, not find stuff, and conclude it is not there.
Such interpretations of artifacts belong in peer-reviewed papers,
so there is a name of who to blame or praise, and so that they
can be debated & revised etc.
[2] The PDF's are archive-quality, you can extract the raw images
from them, for instance with XPDF's "pdfimages" program.
----- End forwarded message -----
> From: Dan Cross
> These techniques are rather old, and I think go back much further than
> we're suggesting. Knuth mentions nested translations in TAOCP ..
> suggesting the technique was well-known as early as the mid-1960s.
I'm not sure what exactly you're referring to with "[t]hese techniques"; I
gather you are talking about the low-level mechanisms used to implement
'system calls'? If so, please permit me to ramble for a while, and revise
your time-line somewhat.
There are two reasons one needs 'system calls'; low-level 'getting there from
here' (which I'll explain below), and 'switching operating/protection
domains' (roughly, from 'user' to 'kernel').
In a batch OS which only runs in a single mode, one _could_ just use regular
subroutine calls to call the 'operating system', to 'get there from here'.
The low-level reason not to do this is that one would need the addresses of
all the routines in the OS (which could change over time). If one instead
used permanent numbers to identify system calls, and had some sort of 'system
call' mechanism (an instruction - although it could be a subroutine call to a
fixed location in the OS), one wouldn't need the addresses. But this is just
low level mechanistic stuff. (I have yet to research to see if any early OS's
did use subroutine calls as their interface.)
The 'switching operating/protection domains' is more fundamental - and
unavoidable. Obviously, it wouldn't have been needed before there were
machines/OS's that operated in multiple modes. I don't have time to research
which was the _first_ machine/OS to need this, but clearly CTSS was an early
one. I happen to have a CTSS manual (two, actually :-), and in CTSS a system
call was:
TSX NAMEI, 4
..
..
NAMEI: TIA =HNAME
where 'NAME' "is the BCD name of a legitimate supervisor entry point", and
the 'TIA' instruction may be "usefully (but inexactly) read as Trap Into A
core" (remember that in CTSS, the OS lived in the A core). (Don't ask me what
HNAME is, or what a TSX instruction does! :-)
So that would have been 1963, or a little earlier. By 1965 (see the 1965 Fall
Joint Computer Conference papers:
https://multicians.org/history.html
for more), MIT had already moved on to the idea of using subroutine calls
that could cross protection domain boundaries for 'system calls', for
Multics. The advantage of doing that is that if the machine has a standard
way of passing arguments to subroutines, you natively/naturally get arguments
to system calls.
Noel
All, just a friendly reminder to use the TUHS mailing list for topics
related to Unix, and to switch over to the COFF mailing list when the
topic drifts away from Unix. I think a couple of the current threads
ought to move over to the COFF list.
Thanks!
Warren
> In order to port VMS to new architectures, DEC/HP/VSI ...
> turned the VAX MACRO assembly language (in which
> some of the VMS operating system was written) into a
> portable implementation language by 'compiling' the
> high-level CISC VAX instructions (and addressing modes)
> into sequences of RISC instructions.
Clem Pease did the same thing to port TMG from IBM 7000-series machines to
the GE 600 series for Multics, circa 1967. Although both architectures had
36-bit words, it was a challenge to adequately emulate IBM's accumulator,
which supported 38-bit sign-magnitude addition, 37-bit twos-complement and
36-bit ones-complement.
Doug
I’ve never heard of a Computer Science or Software Engineering program
that included a ‘case study’ component, especially for Software Development & Projects.
MBA programs feature an emphasis on real-world ‘case studies’, to learn from successes & failures,
to give students the possibility of not falling into the same traps.
Creating Unix V6, because it profoundly changed computing & development,
would seem an obvious Case Study for many aspects of Software, Coding and Projects.
There have been many descriptive treatments of Initial Unix,
but I’ve never seen a Case Study,
with explicit lessons drawn, possibly leading to metrics to evaluate Project progress & the coding process.
Developers of Initial Unix arguably were 10x-100x more productive than those of IBM OS/360, a ‘best practice’ development at the time,
so what CSRC did differently is worth close examination.
I’ve not seen an examination of the role of the ‘capability’ of individual contributors, the collaborative, collegial work environment
and the ‘context’, a well-funded organisation not dictating deadlines or product specifications for researchers.
USG, then USL, worked under ’normal commercial’ management pressure for deadlines, features and specifications.
The CSRC/1127 group did have an explicit approach & principles for what they did and how they worked,
publishing a number of books & papers on them - nothing they thought or did is secret or unavailable for study.
Unix & Unix tools were deliberately built with explicit principles, such as “Less is More”.
Plan 9 was also built on explicit Design principles.
The two most relevant lessons I draw from Initial Unix are:
- the same as Royce's original “Software Waterfall” paper,
“build one to throw away” [ albeit, many, many iterations of the kernel & other code ]
- Writing Software is part Research, part Creative ‘Art’:
It’s Done when it's Done, invention & creation can’t be timetabled.
For the most reliable, widely used Open Source projects,
the “Done when it’s Done” principle is universally demonstrated.
I’ve never seen a large Open Source project succeed when attempting to use modern “Project Management” techniques.
These Initial Unix lessons, if correct and substantiated, should cause a revolution in the teaching & practice
of Professional Programming, i.e. Programming In the Large, for both CS & SW.
There are inherent contradictions within the currently taught Software Project Management Methodologies:
- Individual capability & ability is irrelevant
The assumption is ‘programmers’ are fungible/ identical units - all equally able to solve any problem.
Clearly incorrect: course evaluations / tests demonstrate at least a 100x variance in ability in every software dimension.
- Team environment, rewards & penalties and corporate context are irrelevant.
Perverse incentives are widely evident, the cause of many, or all, “Death Marches”.
- The “Discovery & Research Phases” of a project are timetabled, an impossibility.
Any suggestions for Case Studies gratefully accepted.
===========
Professions & Professionals must learn over time:
there’s a negative aspect (don’t do this) and positive aspect (do this) for this deliberate learning & improvement.
Negatives are “Don’t Repeat, or Allow, Known Errors, Faults & Failures”
plus in the Time Dimension, “Avoid Delays, Omissions and Inaction”.
Positives are what’s taught in Case Studies in MBA courses:
use techniques & approaches known to work.
Early Unix, from inception to CACM papers, 1969 to 1974, took probably 30 man-years,
and produced a robust, performant and usable system for its design target, “Software Development”.
This is in direct comparison to Fred Brooks’ IBM OS/360 effort around 5 years before, which consumed 3,000-4,000 man-years
and was known for bugs, poor & inconsistent code quality, needed large resources to run and was, politely, non-performant.
This was a commercial O/S, built by a capable, experienced engineering organisation, betting their company on it,
who assigned their very best to the hardware & software projects. It was “Best of Breed” then, possibly also now.
MULTICS had multiple business partners, without the same, single focus or commercial imperative.
I don’t believe it’s comparable to either system.
Initial Unix wasn’t just edit, compile & run, but filesystems, libraries, debugging & profiling tools, language & compiler construction tools, ‘man’ pages, document prep (nroff/troff) and 'a thousand' general tools leveraging shell / pipe.
This led directly to modern toolchains, config, make & build systems, Version Control, packaging systems, and more.
Nothing of note is built without using descendants or derivatives of these early toolchains.
All this is wrapped around by many Standards, necessary for portable systems, even based on the same platform, kernel and base system.
The “Tower of Babel” problem is still significant & insurmountable at times, even in C-C & Linux-Linux migration,
but without POSIX/IEEE standards the “Software Tarpit” and "Desert of Despair” would’ve been unsolvable.
The early Unix system proved adaptable and extensible to many other environments, well beyond “Software Development”.
===========
[ waterfall model ]
Managing the development of large software systems: concepts and techniques
W. W. Royce, 1970 [ free access ]
<https://dl.acm.org/doi/10.5555/41765.41801>
STEP3: DO IT TWICE, pg 334
After documentation, the second most important criterion for success revolves around whether the product is totally original.
If the computer program in question is being developed for the first time,
arrange matters so that the version finally delivered to the customer for operational deployment
is actually the second version insofar as critical design/operations areas are concerned.
===========
Plan 9, Design
<https://9p.io/sys/doc/9.html>
The view of the system is built upon three principles.
First, resources are named and accessed like files in a hierarchical file system.
Second, there is a standard protocol, called 9P, for accessing these resources.
Third, the disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space.
The unusual properties of Plan 9 stem from the consistent, aggressive application of these principles.
===========
Escaping the software tar pit: model clashes and how to avoid them
Barry Boehm, 1999 [ free access ]
<https://dl.acm.org/doi/abs/10.1145/308769.308775#>
===========
Mythical Man-Month, The: Essays on Software Engineering,
Anniversary Edition, 2nd Edition
Fred Brooks
Chapter 1. The Tar Pit
Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it.
===========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I suppose this question is best directed at Rob or Doug, but just might as well ask at large: Given AT&T's ownership of Teletype and the involvement (AFAIK) of Teletype with other WECo CRT terminals (e.g. Dataspeed 40 models), was there any direct involvement of folks from the Teletype side of things in the R&D on the Jerq/Blit/DMD? Or was this terminal pure Bell Labs?
- Matt G.
Good afternoon, I was wondering if anyone currently on list is in possession of a copy of the UNIX Programmer's Manual for Program Generic Issue 3 from March of 1977. This is the version nestled between Issue 2 which Al Kossow has preserved here:
http://www.bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_1…
and the MERT 0 release provided by I believe Heinz Lycklama here:
https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/
If I might make a request of anyone having such a copy, could I trouble you for at the very least scans of the lines(V), getty(VIII), and init(VIII) pages? I can't 100% confirm the presence of the first page, but the instructions for replacing PG3 pages with MERT0 pages above indicate a page called lines was present in section V of the PG3 manual, and there is a fragment of a lines(5) page in the CB-UNIX 2.3 manual here:
https://www.tuhs.org/Archive/Distributions/USDL/CB_Unix/man/man5/lines.5.pdf
In short, lines there appears to be a predecessor to inittab(5) and, if the references in CB and USG PG3 are the same, points to the earliest appearance in the wild of System V-style init in PG3 all the way back in 1977. Granted we don't have earlier CB-UNIX literature to wholly confirm whether this started in PG3 or some pre-'77 issue of CB-UNIX, but I'm quite interested in seeing how these relate.
Thanks for any help!
- Matt G.
> From: Steve Jenkin
> C wasn't the first standardised coding language, FORTRAN & COBOL at
> least were before it
There were a ton; Algol-60 is the most important one I can think of.
(I was thinking that Algol-60 was probably an important precursor to BCPL,
which was the predecessor to C, but Richards' first BCPL paper:
https://dl.acm.org/doi/10.1145/1476793.1476880
"BCPL: A tool for compiler writing and system programming" doesn't call it
out, only CPL. However, CPL does admit its debt to Algol-60: "CPL is to a
large extent based on ALGOL 60".)
Noel
Howdy,
I now have this pictured 3B21D in my facility
http://kev009.com/wp/2024/07/Lucent-5ESS-Rescue/
It will be a moment before I can start work on documentation of the
3B21D and DMERT/UNIX-RTR but wanted to share the news.
Regards,
Kevin
So I'm doing a little bit of the footwork in prep for analyzing manual differences between Research, Program Generic, and PWB/UNIX, and found something interesting.
The LIL Programming Language[1] was briefly available as a user-supported section 6 command on Fifth Edition (1974) UNIX, appearing as a page but not even making it into the TOC. It was gone as quickly as it appeared in the Research stream, not surviving into V6.
However, with Al Kossow's provided Program Generic Issue 2 (1976) manual[2] as well as notes in the MERT Issue 0 (1977) manual [3], it appears that LIL was quite supported in the USG Program Generic line, making it into section 1 of Issue 2 and staying there through to Issue 3. lc(1) happens to be one of the pages excised in the transformation from PG Issue 3 to MERT Issue 0.
This had me curious, so I went looking around the extant V5 sources and couldn't find anything resembling the LIL compiler. Does anyone know if this has survived in any other fashion? Additionally, does anyone have any recollection of whether LIL was in significant use at USG-supported UNIX sites, or whether it made it into section 1 and spread around simply due to its state of use in Research at the time USG snapshotted the userland?
Finally, one little tidbit from P.J. Plauger's paper[1] stuck out to me: "...the resulting language was used to help write a small PDP-11/10 operating system at Bell Labs." Does anyone have any information about this operating system, whether it was a LIL experiment or something purpose-driven and used in its own right after creation?
[1] - http://www.ultimate.com/phil/lil/
[2] - http://bitsavers.org/pdf/att/unix/6th_Edition/UNIX_Programmers_Manual_19760…
[3] - https://www.tuhs.org/Archive/Documentation/Manuals/MERT_Release_0/Pgs%2001-…'s%20Manual%20for%20MERT.pdf
> Peter J Denning in 2008 wrote about reforming CACM in 1982/83. [ extract
at end ]
> <https://cacm.acm.org/opinion/dja-vu-all-over-again/>
That "accomplishment" drove me away from the ACM. I hope the following
explanation does not set off yet another long tangential discussion.
The CACM had been the prime benefit of ACM membership. It carried generally
accessible technical content across the whole spectrum of computing. The
"Journal for all Members" (JAM) reform resulted in such content being
thinly spread over several journals. To get the perspective that CACM had
offered, one would have had to subscribe to and winnow a mountain of
specialist literature--assuming the editors of these journals would accept
some ACM-style articles.
I had been an active member of ACM, having served as associate editor of
three journals, member of the publications planning committee, national
lecturer, and Turing-Award chairman. When the JAM reform cut off my window
into the field at large, I quit the whole organization.
With the advent of WWW, the ACM Digital Library overcame the need to
subscribe to multiple journals for wide coverage. Fortunately I had
institutional access to that. I rejoined ACM only after its decision to
allow free access to all 20th century content in the DL. This
public-spirited action more than atoned for the damage of the JAM reform
and warranted my support.
I have been happy to find that the current CACM carries one important
technical article in each issue and a couple of interesting columnists
among the generally insipid JAM content. And I am pleased by the news that
ACM will soon also give open access to most 21st-century content.
Doug
Found these two.
Anyone seen others?
I bought this book soon after it was published.
It’s a detailed study of some major IT projects, but doesn’t draw “lessons” & rules like I’d expect of an MBA Case Study.
Why information systems fail: a case study approach
February 1993
<https://dl.acm.org/doi/book/10.5555/174553>
> On 5 Jul 2024, at 09:31, Lawrence Stewart <stewart(a)serissa.com> wrote:
>
> A quick search also shows a number of software engineering case study books.
================
Case Study Research in Software Engineering: Guidelines and Examples
April 2012
<https://dl.acm.org/doi/book/10.5555/2361717>
Based on their own experiences of in-depth case studies of software projects in international corporations,
in this book the authors present detailed practical guidelines on
the preparation, conduct, design and reporting of case studies of software engineering.
This is the first software-engineering-specific book on the case study research method.
================
Case studies for software engineers
May 2006
<https://dl.acm.org/doi/10.1145/1134285.1134497>
The topic of this full-day tutorial was the correct use and interpretation of case studies as an empirical research method.
Using an equal blend of lecture and discussion, it gave attendees
a foundation for conducting, reviewing, and reading case studies.
There were lessons for software engineers as researchers who
conduct and report case studies, reviewers who evaluate papers,
and practitioners who are attempting to apply results from papers.
The main resource for the course was the book
Case Study Research: Design and Methods by Robert K. Yin.
This text was supplemented with positive and negative examples from the literature.
================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Links for those who’ve not read these articles. Open access, downloadable PDF’s.
================
Peter J Denning in 2008 wrote about reforming CACM in 1982/83. [ extract at end ]
<https://cacm.acm.org/opinion/dja-vu-all-over-again/>
The space shuttle primary computer system
Sept 1984
<https://dl.acm.org/doi/10.1145/358234.358246>
The TWA reservation system
July 1984
<https://dl.acm.org/doi/abs/10.1145/358105.358192>
================
After serving as Editor-in-Chief, in 1993 Denning established "The Center for the New Engineer" (CNE)
<http://www.denninginstitute.com/cne/cne-aug93.pdf>
Great Principles of Computing, paper
<https://denninginstitute.com/pjd/PUBS/ENC/gp08.pdf>
Website
<https://denninginstitute.com/pjd/GP/GP-site/welcome.html>
================
Denning, 2008
Another major success was the case studies conducted by Alfred Spector and David Gifford of MIT,
who visited project managers and engineers at major companies and interviewed them about their projects,
producing no-holds-barred pieces.
This section was wildly popular among the readers.
Unfortunately, the labor-intensive demands of the post got the best of them after three years, and we were not able to replace them.
Also by that time, companies were getting more circumspect about discussing failures and lessons learned in public forums.
================
> On 5 Jul 2024, at 09:31, Lawrence Stewart <stewart(a)serissa.com> wrote:
>
> Alright, apologies for being late.
>
> Back in 1984, David Gifford and Al Spector started a series of case studies for CACM.
> I think only two were published, on the TWA reservation system and on the Space Shuttle primary computer.
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin