> From: Steve Jenkin
> An unanswered question about Silicon Valley is:
> Why did it happen in California, and why has it not been successfully
> cloned elsewhere?
One good attempt at answering this is in "Making Silicon Valley: Innovation
and the Growth of High Tech, 1930-1970", by Christophe Lecuyer; it's also a
very good history of the early Silicon Valley (before the mid-1960's).
Most of it's available online, at Google:
https://books.google.com/books?id=5TgKinNy5p8C
I have neither the time nor energy to comment in detail on your very detailed
post, but I think Lecuyer would mostly agree with your points.
> It wasn't just AT&T, IBM & DEC that got run over by commodity DRAM &
> CPU's, it was the entire Minicomputer Industry, effectively extinct by
> 1995.
Same thing for the work-station industry (with Sun being merely the most
notable example). I have a tiny bit of second-hand personal knowledge in this
area; my wife works for NASA, as a structural engineer, and they run a lot of
large computerized mathematical models. In the 70's, they were using CDC
7600's; they moved along through various things as technology changed (IIRC,
at one point they had SGI machines). These days, they seem to mostly be using
high-end personal computers for this.
Some specialized uses (various forms of CAD) I guess still use things that
look like work-stations, but I expect they are stock personal computers
with special I/O (very large displays, etc).
So I guess now there are just supercomputers (themselves mostly built out of
large numbers of commodity CPUs), and laptops. Well, there is also cloud
computing, which is huge, but that also just uses lots of commodity CPUs.
Noel
Rik,
Thanks for the Real World ‘ground truth’!
You’ve woken me up to when the Mythical Golden Years had evaporated.
I’ve had my head buried in historical commentary, mainly the 1950’s & 1960’s.
Before Silicon Valley got rich :(
The net effect of people dealing in Real Estate for 150 years isn’t ‘cheap housing’.
Thanks very much for the correction.
For my own reference, I should really say ‘once cheap real estate’ or ‘historically cheap’.
From the 1850’s to 1900, land was exceedingly cheap in The Wild West, but it hasn’t been cheap for the last 50 years :(
cheers
steve j
> On 19 May 2025, at 09:05, Rik Farrow <rik(a)rikfarrow.com> wrote:
>
> Hi Steve:
>
> Nice analysis, although I would disagree with you on one point:
>
> > Other people point to the climate, cheap Real Estate, lots of jobs, business opportunities, good pay and other factors…
>
> Cheap Real Estate? I was living near Washington DC in 1978 when a friend told me about the "Gold Coast". I asked her why they called California that, and she said because it was so expensive to live there.
>
> I moved there in 1979, and lived there until 1991, when my wife and I decided to move to a less expensive and crowded area in Arizona (Sedona). We had a family of four, plus needed extra rooms for my home office and her art studio, and have been priced out of living within a couple of hours drive of San Francisco.
>
> We really liked living there, and found people to be generally outgoing, good at communicating, cooperative but also very competitive. It was shocking to move to Arizona, with a much slower pace, but also people who were less friendly to strangers and less cooperative in general. Still, I certainly appreciated being in a place where I could hike and mountain bike, where in Marin County, north of SF, there would actually be cops with radar guns who would ticket bicyclists for coming around a corner on a dirt road faster than 5 mph! And once we had moved north of Marin, the opportunities for being alone in nature were low. The land was private and posted.
>
> So, no cheap Real Estate, not since the mid-70s. Instead, incredible home price inflation (now common where we live), and crazy-heavy traffic on top of that.
>
> Rik Farrow
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I'm curious if anyone has the scoop on this. To my knowledge the 1984
/usr/group standard constitutes the earliest attempt at a vendor-neutral UNIX
standard. AT&T then comes along in 1985 with the first issue of the SVID, based
largely on SVR2 from what I know.
What I'm not getting a good read on is whether the SVID was literally a direct
response from AT&T to the creation of the /usr/group standard or if there was
already an impetus in AT&T's sphere of influence to produce such a definitive
document. In either case, AT&T does list the /usr/group standard as an
influence, but doesn't go into the detail of "we made this because /usr/group's
standard exists" or "we made this ourselves and oh /usr/group also happens to
have a standard."
Even outside of this, did AT&T maintain anything comparable to the SVID in prior
years or was the manual essentially the interface definition?
Thanks for any recollections!
- Matt G.
responding to the SVID thread:
<https://www.tuhs.org/mailman3/hyperkitty/list/tuhs@tuhs.org/message/UFYAOAV…>
This is my timeline for AT&T :
1956 - Consent decrees for IBM & AT&T, barring each from entering the other’s business.
1964 - AT&T engages with MULTICS, to build an “Information and Computing Service”, supplying the customer ‘outlets’
1969 - AT&T / Bell Labs withdraws from MULTICS
1974 - UNIX announced in CACM
1984 - AT&T de-merges [0] into 7 Regional Operators, keeping “long lines” and hardware/software development & manufacture.
1991 - IBM declares first loss, increasing losses for another two years.
1994 - Unix sold to Novell, hardware to NCR
2005 - AT&T acquired by SBC [0], a former Baby Bell that’d understood the Mobile phone market & pricing.
Both IBM & AT&T were aware of “Silicon Valley” and the rapid evolution of microelectronics.
AT&T even considered a 1966 Terman proposal, post-Stanford, to create “Silicon Valley East”.
AT&T validly reasoned they couldn’t create it: the needed culture of Cooperation & Collaboration didn’t exist around them.
Gordon Bell at DEC certainly understood the changes in technology
and admitted “he didn’t believe it” [ i.e. predictions from his own models ].
It wasn’t just AT&T, IBM & DEC that got run over by commodity DRAM & CPU’s,
it was the entire Minicomputer Industry, effectively extinct by 1995. [3]
One of the causes of AT&T’s management failure was the “East Coast” management culture,
documented by Tom Wolfe in 1983 [2] in “The Tinkerings of Robert Noyce”.
What Wolfe seems to have missed is the “East Coast” focus on high Gross Margins (50%-90% for product lines, at both IBM & AT&T),
compared to the Silicon Valley focus on “Volume” with implied modest Gross Margins: deliberately sharing cost-savings with Customers.
An unanswered question about Silicon Valley is:
Why did it happen in California, and why has it not been successfully cloned elsewhere?
There is something different / special about California: for over a century it has diversified its economy,
consistently grown its population faster than other US states,
and grown its economy faster than any other US state or nation. [ 7 ]
I’ve not seen any definitive explanation of the Californian Miracle.
It’s incontrovertible in the long-run numbers, before & after Silicon Chips,
and presumably due to multiple reinforcing factors that create & maintain exceptional industries,
i.e. virtuous circles, like Silicon Valley remaining an economic powerhouse,
even as ’technology’ has evolved from Silicon devices to chips, to software, to Internet Services.
The Silicon Revolution didn’t just crush computer companies - it broke other long-term successful innovators:
Kodak invented the Digital Camera in 1975,
only to be forced into bankruptcy in 2012 after a century plus of operations,
continuing after significant restructuring.
<https://en.wikipedia.org/wiki/Kodak#Bankruptcy>
===================
The SVID thread touches on the reasons that AT&T failed:
<https://www.tuhs.org/mailman3/hyperkitty/list/tuhs@tuhs.org/message/UFYAOAV…>
Clem Cole was in the room when Bill Gates said “it’s all about volume” [ implying modest margins ]
Others mention ‘lawyers’ came to dominate the firm, making poor decisions.
Rob Pike has previously mentioned on-list that a lawyer killed the music, in PAC format, that they’d arranged to put on the Plan 9 distribution CD.
<https://www.tuhs.org/mailman3/hyperkitty/list/tuhs@tuhs.org/message/NNKHNKQ…>
Rachel Chalmers [4] in 1999 quotes Dennis Ritchie on the struggles to get the Version 6 code and Lions Commentary released.
The lawyers and Corporate Culture were still hanging on 20 years later, wanting to block public release because they could, not for business reasons.
In 1956, AT&T accounted for ~2% of US GDP, equivalent to ~$500B in sales today [1],
comparable to Apple’s 2024 revenue of $391B.
Tom Wolfe [2] wrote a piece on Noyce at Fairchild, then Intel, describing an “East Coast Management Style” of opulence, self-indulgence and aggrandisement of ’the ruling class’. Wolfe is excoriating about “East Coast” management, though doesn’t examine the Silicon Valley / California approach in any detail.
Noyce & Moore left Fairchild in 1968, with the invention of the Self-Aligned Silicon Gate for MOSFETs, to pursue devices made of large, regular arrays of transistors,
and, quite particularly, to run the business without interference from the East Coast, able to reinvest their profits and grow the business as fast as they could.
Fairchild had passed over Noyce in favour of Lester Hogan from Motorola [5] and his management team.
Hogan was given a record remuneration package to move to Fairchild, but didn’t save the business.
Intel has lasted, quickly growing to be a significant force and holding a lead.
Fairchild never recovered after Noyce & Moore left and it sputtered out,
underlining the importance of management style in Semiconductor businesses.
Fairchild Semiconductor had, for over a decade, grown to be the dominant silicon device manufacturer,
despite the parent company consistently using their profits to invest in vanity projects with low returns or losses.
In 1963, Fairchild had announced UHF TV transistors for “$1.05” ( ~$11 now ) after the FCC added the UHF band to broadcast TV.
To compete with Vacuum tubes, given for nothing to TV manufacturers, Fairchild had to drop prices ten-fold ( vs mil.spec devices )
[ Valve manufacturers made their money on replacement valves, not on original parts. ]
Importantly, Fairchild’s price was below cost at the time.
The engineers who pitched this to Noyce knew they’d sell millions and be able to quickly reduce production costs to make large profits.
Noyce & Moore seem to have used their Maths ability to solve a business / economics problem:
How to optimise long-run revenue and profits
in a dynamic, high CapEx / R&D / Non-Recurring Expenditure technology field?
Their answer is what we’ve christened “Moore’s Law”:
a) run R&D as hard as you can, shortening the ‘generation’ period to 2-3 years, well below the usual 5-10 years for “cost recovery”
b) pass on 100% of cost savings to customers, relying on “Elasticity of Demand” to drive Volumes and increase total revenue.
I assume they understood enough Game Theory to know once they adopted a “high volume / low-cost / rapid generations” strategy,
they’d force all manufacturers in the industry to follow or be left behind.
Kenneth Flamm [6], an economist, has written about the sustained “100% pass-through rate” of Silicon Semiconductors until at least 2010.
I’ve not seen him or others discuss the origins of this strategy, unique to Semiconductors.
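As a rough cross-check of the 20%-30% annual decline figure in the Flamm excerpt at [6] below (my arithmetic, not his), assuming ~2x transistor density per process node, roughly flat wafer-processing cost, and a pass-through rate of 1:

  % Back-of-the-envelope arithmetic behind the 20-30% figure
  % (assumptions: ~2x density per node, flat wafer cost, pass-through = 1)
  \[
    \text{annual cost decline} \;=\; 1 - \left(\tfrac{1}{2}\right)^{1/T}
    \;\approx\;
    \begin{cases}
      29\% & T = 2\ \text{years per node}\\
      21\% & T = 3\ \text{years per node}
    \end{cases}
  \]
  \[
    \frac{dP}{dC} = 1
    \;\Longrightarrow\;
    \text{annual price decline} \approx 20\%\text{--}30\%
  \]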
Gordon Bell [3] noted the impact of the CMOS CPU revolution created by Silicon Valley.
In 1984, there were 92 US Minicomputer Companies using discrete or Small Scale IC’s.
In 1990, only 4 survived:
Data General, HP, DEC and IBM
[ Bell in [3] notes their fates out to 2002 ]
IBM declared its first loss [$2.6B] in 1991, deepening to ~$5B and then ~$8B in 1992 & 1993
- seemingly without warning, they were caught by the changes in the market as commodity chips surpassed everything.
IBM knew microprocessors were rapidly evolving, and knew its Corporate Culture was antithetical to developing a product quickly & cheaply,
so it developed the IBM PC in 1981 in Boca Raton, Florida - away from corporate headquarters in Armonk & breaking their traditional fully internal development and high gross margin model.
During the 1970’s and 1980’s, IBM accounted for over 50% of Industry revenue - bigger than everyone else combined.
That they weren’t able to adapt and respond to the challenges they clearly saw speaks of management failure.
Like AT&T, they saw technology evolving and correctly forecast its impact on their ’traditional’ business lines,
but were unable to change their corporate culture to adapt.
=================================================================================
References
=================================================================================
[0] <https://en.wikipedia.org/wiki/Breakup_of_the_Bell_System>
1984: Southwestern Bell Corporation (known as SBC Communications after 1995)
In 2005, SBC acquired former parent AT&T Corporation and took the AT&T name, becoming AT&T Inc.
===================
[1] How Antitrust Enforcement Can Spur Innovation: Bell Labs and the 1956 Consent Decree
<https://assets.aeaweb.org/asset-server/files/13359.pdf>
As described in Section I, the Bell System was the dominant provider of telecommunications services in the United States.
In terms of assets, AT&T was by far the largest private corporation in the world in 1956. AT&T, together with all companies in the Bell system,
employed 746,000 people with a
total revenue of $5.3 billion or
1.9% of the U.S. GDP at the time (Antitrust Subcommittee, 1959; Temin and Galambos, 1987).
===================
[2] THE TINKERINGS OF ROBERT NOYCE
How the sun rose on the Silicon Valley
Tom Wolfe, Dec 1983
<http://classic.esquire.com/the-tinkerings-of-robert-noyce/>
<https://web.stanford.edu/class/e145/2007_fall/materials/noyce.html>
A certain instinct Noyce had about this new industry and the people who worked in it began to take on the outlines of a concept.
Corporations in the East adopted a feudal approach to organization, without even being aware of it.
There were kings and lords, and there were vassals, soldiers, yeomen, and serfs, with layers of protocol and perquisites,
such as the car and driver, to symbolize superiority and establish the boundary lines.
Back east the CEOs had offices with carved paneling, fake fireplaces, escritoires, bergères, leather-bound books, and dressing rooms, like a suite in a baronial manor house.
Fairchild Semiconductor needed a strict operating structure, particularly in this period of rapid growth, but it did not need a social structure.
In fact, nothing could be worse.
Noyce realized how much he detested the eastern corporate system of class and status with its endless gradations,
topped off by the CEOs and vice-presidents who conducted their daily lives as if they were a corporate court and aristocracy.
He rejected the idea of a social hierarchy at Fairchild.
===================
[3] Rise and Fall of Minicomputers
Decline of the Classic Minicomputer (1985-1995)
Gordon Bell
2013
<https://ethw.org/Rise_and_Fall_of_Minicomputers#Decline_of_the_Classic_Mini…>
While the demise of classic minicomputers was clear by 1985, companies continued offering them until the early 1990s,
when the firms went bankrupt or were acquired by more astute competitors. (See Table 1)
Wang declared bankruptcy in 1992.
Compaq bought DEC in 1998, and HP acquired Compaq in 2002.
EMC turned Data General into a data storage business in 1999.
Table 1
<https://ethw.org/File:Bell-MinicomputerTable.JPG>
92 U.S. minicomputer companies, 1968-1985
(created by the author with help from colleagues over many years)
The following list includes all firms making general and special-purpose minicomputers for real-time processing, communications, business etc., sold to end users through OEMs, and bundled for process control and testing.
It does not include scores of military, AT&T, European, and Japanese computers and processing systems developed for niche markets.
49 started up and retained autonomy or died
2 grew at significant rates and continued to grow:
Data General (1969), Prime (1972)
8 grew at diminished or declining rates, or found small niches
39 ceased to manufacture
10 merged with larger companies
8 existing computer companies built minicomputers
2 grew rapidly:
Digital Equipment Corporation (1960), IBM (1965)
25 existing companies built minicomputers for their own use
1 formed a division, Dymec, around an acquisition and grew rapidly:
HP (1966)
===================
[4] Code Critic
John Lions wrote the first, and perhaps only, literary criticism of Unix, sparking one of open source's first legal battles.
Rachel Chalmers
November 30, 1999
<https://www.salon.com/test2/1999/11/30/lions_2/>
"By the time the seventh edition system came out, the company had begun to worry more about the intellectual property issues and 'trade secrets' and so forth," Ritchie explains.
"There was somewhat of a struggle between us in the research group who saw the benefit in having the system readily available,
and the Unix Support Group ...
Even though in the 1970s Unix was not a commercial proposition,
USG and the lawyers were cautious.
At any rate, we in research lost the argument.”
———
Ritchie never lost hope that the Lions books could see the light of day.
He leaned on company after company.
"This was, after all, 25-plus-year-old material, but when they would ask their lawyers,
they would say that they couldn't see any harm at first glance,
but there was a sort of 'but you never know ...' attitude, and they never got the courage to go ahead," he explains.
Finally, at SCO, Ritchie hit paydirt.
He already knew Mike Tilson, an SCO executive.
With the help of his fellow Unix gurus Peter Salus and Berny Goodheart, Ritchie brought pressure to bear.
"Mike himself drafted a 'grant of permission' letter," says Ritchie,
"'to save the legal people from doing the work!'"
Research, at last, had won.
===================
[5] Wikipedia on Fairchild
<https://en.wikipedia.org/wiki/Fairchild_Semiconductor>
Sherman Fairchild hired Lester Hogan, who was the head of Motorola semiconductor division.
Hogan proceeded to hire another hundred managers from Motorola to entirely displace the management of Fairchild.
The loss of these iconic executives, coupled with Hogan's displacement of Fairchild managers
demoralized Fairchild and prompted the entire exodus of employees to found new companies.
===================
[6] Moore's Law and the Economics of Semiconductor Price Trends
Flamm, 2018, NBER
<https://www.nber.org/system/files/working_papers/w24553/w24553.pdf>
———
Flamm posits semiconductors [ DRAM & Microprocessors at least ] have maintained a 100% cost “pass-through rate” since the 1960’s.
He’s been an expert witness for courts many times, as well as writing reports for the NBER and publishing academic papers.
———
In short, if the historic pattern of 2-3 year technology node introductions,
combined with a long run trend of wafer processing costs increasing very slowly were to have continued indefinitely,
a minimum floor of perhaps a 20 to 30 percent annual decline in quality-adjusted costs for manufacturing electronic circuits would be predicted,
due solely to these “Moore’s Law” fabrication cost reductions.
On average, over long periods, the denser, “shrink” version of the same chip design fabricated year earlier
would be expected to cost 20 to 30 percent less to manufacture,
purely because of the improved manufacturing technology.
How would reductions in production cost translate into price declines?
One very simple way to think about it would be in terms of a “pass-through rate,”
defined as dP/dC (incremental change in price per incremental change in production cost).
The pass-through rate for an industry-wide decline in marginal cost is equal to one in a perfectly competitive industry with constant returns to scale,
but can exceed or fall short of 1 in imperfectly competitive industries.
Assuming the perfectly competitive case as a benchmark for long-run pass-through in “relatively competitive” semiconductor product markets,
this would then imply an expectation of 20-30% annual declines in price, due solely to Moore’s Law.
To summarize these results, then,
though there are substantial differences in the magnitude of declines across different time periods and data sources,
all of the various types of price indexes constructed concur in showing substantially higher rates of decline in microprocessor price prior to 2004,
a stop-and-start pattern after 2004,
and a dramatically lower rate of decline since 2010.
Taken at face value, this creates a new puzzle.
Even if the rate of innovation had slowed in general for microprocessors,
if the underlying innovation in semiconductor manufacturing technology has continued at the late 1990s pace
(i.e., a new technology node every two years and roughly constant wafer processing costs in the long run),
then manufacturing costs would continue to decline at a 30 percent annual rate,
and the rates of decline in processor price that are being measured now fall well short of that mark.
Either the rate of innovation in semiconductor manufacturing must also have declined,
or the declining manufacturing costs are no longer being passed along to consumers to the same extent, or both.
The semiconductor industry and engineering consensus seems to be that the pace of innovation derived from continuing feature-size scaling in semiconductor manufacturing has slowed markedly.
—————————
[ submission to a court case ]
Assessment of the Impact of Overcharges on Canadian Direct and Indirect Purchasers of DRAM and Products Containing DRAM
Submitted to: The Honourable Ian Binnie
28 June 2013
<https://www.cfmlawyers.ca/wp-content/uploads/2012/05/DRAM-Ross-Distribution…>
VI. Pass-Through in this Case
48. Based upon my review of the information described above, I am of the view that there would likely be
a very high degree of pass-through in DRAM distribution channels, all the way down to final consumers.
This conclusion is based principally on a review of the structure of the various markets together with an application of standard economic theory.
I also draw on pass-through analyses done by a number of experts in the U.S. action, recognizing that they are clearly contradictory on some key points.
Evidence from the U.S. Case
49. Let me begin with a brief review of the work done by economists for various parties in the American action.
a. In his report Michael Harris (July 10, 2007) estimated pass-through from top to bottom rather than just at one stage.
That is, he looked to see if increases in the price of base DRAM were passed all the way down the distribution channels and increased computer prices.
In one test he estimated that there was more than 100% pass-through of base DRAM price increased to final computer purchasers.
In a second test he studied the effect of increases in spot market DRAM prices on aftermarket DRAM prices.
Again, he found more than 100% pass-through.21
b. Professor Kenneth Flamm, in his report (December 15, 2010) provided estimates of the pass-through
from retailers to final consumers using data from four major U.S. retailers
(Best Buy, Office Depot, CDW, and Tech Depot) for various categories of computer products (desktops, notebooks, servers, memory, printers and graphics).
Most pass-through estimates were in a range between 90% and 113%,
the desktop and notebook rates were all within a few points above or below 100% with one exception.
===================
[7] Links on California, population and economic growth from 1900 to 2000
—————————
Wikipedia
<https://en.wikipedia.org/wiki/Economy_of_California>
The economy of the State of California is the largest in the United States [ in 2024 ].
It is the largest sub-national economy in the world.
If California was an independent nation, it would rank as the fourth largest economy in the world in nominal terms,
behind Germany and ahead of Japan.
As of 2024, California is home to the highest number of Fortune 500 companies of any U.S. state.
As both the most populous US state and one of the most climatologically diverse states
the economy of California is varied, with many sizable sectors.
—————————
California since c. 1900
<https://www.britannica.com/place/California-state/California-since-c-1900>
population in 1900: 2 M.
now largest US state by population & GNP.
—————————
California’s Economy. [ fact sheet 2025 ]
<https://www.ppic.org/publication/californias-economy/>
California is an economic powerhouse, nationally and globally.
• In 2023, California’s gross domestic product (GDP) was about $3.9 trillion,
comprising 14% of national GDP ($27.7 trillion).
Texas and New York are the next largest state economies, at 9% and 8%, respectively.
• California’s economy ranks fifth internationally, behind the US, China, Germany, and Japan.
On a per capita basis, California’s GDP is greater than that of all of these countries.
• California’s economy has grown relatively slowly in recent years, averaging 2.3% per year between 2020 and 2023,
compared to 3.9% on average over the previous four years.
By comparison, Florida (4.6%) and Texas (3.9%) grew faster than California since 2020.
• Over the long term, California’s economy has grown faster than the nation overall
(111% vs 75% over the past 25 years) and faster than other large states except for Texas (128%).
On a per capita basis, California’s economic growth outpaces all other large states over the long term.
Growth in jobs and businesses have powered the state’s economy.
• California’s labor market grew by 4.2 million jobs (30%) between 1998 and the second quarter of 2024;
over the same 25-year period, the number of businesses with employees grew more than 72%.
Both outpaced population growth (18%), leading to robust gains in economic output.
—————————
California’s Population [ fact sheet 2025 ]
<https://www.ppic.org/publication/californias-population/>
One in eight US residents lives in California.
• With just over 39 million people (according to July 2024 estimates),
California is the nation’s most populous state -
its population is much larger than that of second-place Texas (31.3 million)
and third-place Florida (23.4 million).
—————————
===================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
> From: Lyndon Nerenberg
> I think AT&T's real problem was that the post-1984 UNIX business was
> run by the lawyers.
This is a common problem in many companies: executives who do not have deep
knowledge of the company's fundamental business, but rather in some support
area, rise to the top of the business, and proceed to make bad decisions
about that business (through lack of deep understanding of the business'
fundamentals). The exact same problem arises not just with support functions,
but also with the belief in 'pure managers'.
The biggest recent example of this problem is Boeing; the people running a
business that is fundamentally an engineering business made some bad
decisions in areas that were fundamentally engineering. Car companies have
displayed this disorder too.
Which is not to say that such people _can't_ be effective leaders of such
organizations; the classic example is James Webb, who 'ran' NASA from 1961 to
1968; he was originally a lawyer. I say 'ran' NASA because Webb was smart
enough to understand the limits of his expertise, and he delegated all the
technical decisions to his chief deputy, Hugh Dryden - who had started his
career as an aeronautical scientist (a Ph.D in physics and mathematics).
Noel
I just heard that, after ATC'25, USENIX will be sunsetting the annual
technical conference: this will apparently be the last one.
I can't find any reference for it, though, and the web site mentions
ATC'26 in Seattle?
- Dan C.
> From: Jackson Helie G
> I was wondering if anyone knows Ken Thompson's email address? I was
> wondering if he has any more archives of Unix from 1972 and before.
He does read this list, and very occasionally posts to it.
But there's no point bothering him, to ask; anything he had, he turned over
many years ago.
(To the point that people have recently been poring through the 'trash' bits
in the "s1-bits" and s2-bit" tapes, IIRC, for lack of anything else to look
at. Look for "A Census of /etc and /sys Prior to V4" in the TUHS archive to
see some discussion of some of this work, by Matt Gilmore. I think someone else
was working on it too, but I couldn't find it; I'm not up for looking through
TUHS archives for it.)
Noel
> From: Al Kossow
> What was the name of the system(s)?
From an early 'hosts' file:
HOST MIT-RTS, [CHAOS 470,LCS 10/11],SERVER,UNIX,PDP11,[RTS,DSSR]
I'd rather depend on that, than on my memory! (The names at the end are
aliases.)
While I was looking for a really early host file (I recall our early TFTP
command had one; I don't think it was built into the command, but it might
have been),
I found a /jnk directory in MIT-CSR's root; it had a lot of
interesting stuff in it. One particularly interesting one was 'kov':
The idea of this kernel overlay scheme is to increase the amount of code
that can be included in the UNIX kernel. This is done by reserving one of
the I space segmentation registers (the highest free, usually KISA5 for
non-split systems) and changing its value dynamically so as to map in the
appropriate code as needed. I chose to use only one page register (limiting
each KOV to 4Kw), in order to minimize the mapping overhead.
I wonder if this is an early predecessor to the overlay stuff in BSD 2.9 (and
later BSD's)? That stuff is all poorly documented, and I'm not up for poring
through both to compare them. This one was all done by Ken Harrenstien (KLH).
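For anyone who hasn't stared at PDP-11 memory management lately, here's a
minimal sketch (my reconstruction, not KLH's code; the register address and
all names are assumptions) of what a call into such a kernel overlay looks
like: one kernel I-space page address register is reserved as a 4KW window,
and a small wrapper remaps it around each call into overlaid code.

/*
 * Hypothetical sketch of a kernel-overlay call, following the scheme
 * quoted above: one I-space page address register (KISA5 here) is
 * reserved as a 4KW window and remapped around each call into overlaid
 * code.  The register address and all names are my assumptions, not
 * taken from the MIT-CSR sources.
 */
#include <stdint.h>

#define KISA5 (*(volatile uint16_t *)0172352)	/* assumed I/O-page address */

struct kov {
	uint16_t par;		/* page-address value that maps this overlay */
};

/* Call 'func' in overlay 'ov', restoring the previous mapping afterwards. */
static int
kov_call(struct kov *ov, int (*func)(void))
{
	uint16_t saved = KISA5;	/* save whatever the window maps now */
	int r;

	KISA5 = ov->par;	/* map the overlay's 4KW of code in */
	r = func();		/* call into the overlaid code */
	KISA5 = saved;		/* restore the previous mapping */
	return r;
}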
Noel
Casey Henderson-Ross is the ED of USENIX. Since the list would not accept
her message, she asked me, as a former President, to send this to folks on
the TUHS mailing list.
-------------------------------
Folks,
As you may already be aware, today we made an important announcement about
the USENIX Annual Technical Conference via our mailing list. We want to
share this news with you directly as well in case you do not currently
receive USENIX email. Please read the statement in its entirety here:
https://www.usenix.org/blog/usenix-atc-announcement
If you don't know me, I've served on the USENIX staff since 2002 and have
had the privilege of serving as Executive Director since 2012. USENIX and
the Annual Technical Conference are both dear to me as I know they are to
many of you. This is difficult news to share even as we're grateful to be
celebrating 50 years of USENIX this year.
I hope that you'll share your memories via the form mentioned in the
statement and that we'll also see many of you at USENIX ATC '25 in Boston
in July:
https://www.usenix.org/conference/atc25
If you'd like to contribute to activities there, please contact me
directly.
Thanks to all of you for honoring the history of UNIX and for your support
of USENIX and ATC (in its different names and forms) over the decades.
Best,
Casey Henderson-Ross
> From: Thalia Archibald
> I'm working on building a decompiler from PDP-11 assembly to C to ease
> studying old pre-C Unix sources. To start, I'm translating V5 `as` to
> period-idiomatic C
That's going to be a real trick; 'as' was written in PDP-11 assembler:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/as
Noel
> From: Tom Teixeira
> before the RTS group at Project MAC
(Called the DSSR group at that point in time, if anyone wants to look
anything up.)
> I got a BCPL compiler from somewhere and made some enhancements - such
> as direct support for external variables and routines using a linker
> rather than the pure BCPL global array.
This entire toolchain, including all the source, has been preserved; the BCPL
compiler, the linker ('bind', itself written in BCPL), and a number of tools
(including an extra one, 'ldrel', written by the CSR group on the 5th floor).
I have in the past put up some pieces for download, but I never got around to
doing the whole thing, as there didn't seem to be any interest.
We (CSR) started working with the toolchain because it included support for
MACRO-11 (the BCPL compiler could produce either MACRO-11 or UNIX assembler
source as output). That toolchain used a relocatable format, '.rel':
http://ana-3.lcs.mit.edu/~jnc/tech/unix/man5/rel.5
that I think was based on one DEC had done.
CSR was working with a semi-real-time operating system, called 'MOS':
https://gunkies.org/wiki/MOS_operating_system
for the PDP-11, that we'd gotten from SRI. (MOS was used a lot in early work
on the Internet; e.g. all the BBN 'core' routers used in the early Internet
used it, after the very first ones [written in BCPL, oddly enough], which had
used ELF:
https://gunkies.org/wiki/ELF_operating_system
but BBN soon switched to MOS.) MOS was written in MACRO-11; we started
working with MOS with the same toolchain SRI had used for it, which ran on
TOPS-20. I soon decided I'd rather work on our group's PDP-11, initially a
PDP-11/40 running a clone of the DSSR system (the first UNIX at MIT). So,
we started working with the MACRO-11, and bind, on UNIX.
I then decided I'd rather write code for our PDP-11 packet switches in this
nifty language called C, used on UNIX, which nobody else knew about (at that
point). The first step was to write 'ldrel', to convert a.out files (produced
by the C compiler) to .rel files, so 'bind' could work with them.
After a while, we decided we'd rather use 'ld' as our linker (I think because
it supported demand loading from binary libraries, so we didn't have to
manually specify every file to link). The DSSR group had long ago written
'relld', which converted .rel files (produced by MACRO-11) to a.out files, so
that was pretty easy.
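(Tangentially, for anyone who wants to poke at these conversions today:
below is a minimal, purely illustrative sketch of reading the V6-era a.out
header that 'ldrel' and 'relld' had to translate to and from. The eight-word
header layout is the standard one of that period; the .rel-emitting half is
omitted - see the rel.5 man page linked above for that format - and the
program itself is my invention, not any of the MIT tools.)

/*
 * Illustrative only: read and print the eight-word header of a V6-era
 * a.out file, the format 'ldrel' consumed and 'relld' produced.  The
 * struct matches the documented header of that period; everything else
 * is made up for this sketch.
 */
#include <stdio.h>
#include <stdint.h>

struct exec {			/* V6/V7 a.out header: eight 16-bit words */
	uint16_t a_magic;	/* 0407 normal, 0410 read-only text, 0411 split I/D */
	uint16_t a_text;	/* text segment size, bytes */
	uint16_t a_data;	/* initialized data size, bytes */
	uint16_t a_bss;		/* uninitialized data size, bytes */
	uint16_t a_syms;	/* symbol table size, bytes */
	uint16_t a_entry;	/* entry point */
	uint16_t a_unused;	/* unused */
	uint16_t a_flag;	/* non-zero if relocation info suppressed */
};

int
main(int argc, char **argv)
{
	struct exec hdr;
	FILE *fp;

	if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL)
		return 1;
	if (fread(&hdr, sizeof hdr, 1, fp) != 1) {
		fclose(fp);
		return 1;
	}
	printf("magic %o text %u data %u bss %u syms %u entry %o\n",
	       hdr.a_magic, hdr.a_text, hdr.a_data, hdr.a_bss,
	       hdr.a_syms, hdr.a_entry);
	/* A converter would now copy text+data and translate the a.out
	   relocation records into .rel records for 'bind' (or back). */
	fclose(fp);
	return 0;
}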
There seems to be a version of the BCPL compiler which produces 8080 code.
It looks like that pre-dates the 8086 C compiler from MIT (the 8080 C
compiler was not done at MIT).
(I just ran across a 6502 emulator written by Rae McLellan of DSSR; he and
they seem to have done a lot of work with various micros. It may not have all
the pieces it needs to work, though; it seems to need
"/usr/simulators/s6502/s6502.body" from the DSSR machine.)
Pity I didn't save a dump of that machine too; I was once looking for the
source of the Algol interpreter, written on the Delphi machine, and made to
run under UNIX, and I asked Steve Ward if any dumps of the DSSR/RTS machine
still existed, and he thought not. So we have the binary of that Algol
interpreter, but not the source. Of course, it was probably written in
MACRO-11, so a disassembler would produce a reasonable facsimile.
And I should ask Lars if the backup tapes of the DSSR/RTS machine went to the
MIT archives - they might well have done, and Prof. Ward just didn't know
that.
Noel
I checked Dennis M. Ritchie's "Users' Reference to B" and found an example
of implementing a B program at the bottom of the manual. It said that bc
generates intermediate code suitable for ba, and then ba generates assembly
code. So, I am curious about what the intermediate code generated by bc is?
Hello everyone,
I'm working on building a decompiler from PDP-11 assembly to C to ease studying
old pre-C Unix sources. To start, I'm translating V5 `as` to period-idiomatic C
and have finished most of pass 1. Then I'll port it to Rust with a design better
suited to static analysis, while keeping exact fidelity to the original. I'll
do the same for `cc` and `ld`, after.
I stumbled upon Warren's disaout[0] today, which made me wonder:
What tools have people used for reverse engineering Unix assembly sources or a.out
binaries? Things like disassemblers or decompilers.
I assume there are some versions of programs which are now only extant as
binaries? Are there enough such binaries that were written in C to warrant
writing a decompiler that understands the specific codegen of `cc` to improve
accuracy? For now, I'm focusing on decompiling hand-written assembly, but I'm
keeping this case in mind.
Thanks!
Thalia
[0]: https://github.com/DoctorWkt/unix-jun72/tree/master/tools/disaout
Hi All.
In a book I'm updating, I have the following references for
Unix security.
1. Practical UNIX & Internet Security, 3rd edition, by Simson Garfinkel,
Gene Spafford, and Alan Schwartz, O’Reilly & Associates, Sebastopol,
CA, USA, 2003. ISBN-10: 0-596-00323-4, ISBN-13: 978-0596003234.
2. Building Secure Software: How to Avoid Security Problems the Right Way,
by John Viega and Gary McGraw. Addison-Wesley, Reading, Massachusetts,
USA, 2001. ISBN-10: 0-201-72152-X, ISBN-13: 978-0201721522.
3. “Setuid Demystified,” by Hao Chen, David Wagner, and Drew
Dean. Proceedings of the 11th USENIX Security Symposium, August 5–9,
2002. http://www.cs.berkeley.edu/~daw/papers/setuid-usenix02.pdf.
One of my reviewers asked if these weren't "dusty references".
So, before I just refer to them as "classics", can anyone recommend
more recent books? Feel free to answer in private.
Thanks,
Arnold
> From: Clem Cole
> The first "C" compiler wa an ephemeral state some time in the process
> of its evolution. Dennis started with his B implementation and began to
> add features he needed
See:
The Development of the C Language
https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist.html
for detail on the evolution.
Noel
Oh, sorry guys, please forgive me for the off-topic question. I was
wondering if anyone knows Ken Thompson's email address? I was wondering if
he has any more archives of Unix from 1972 and before.
I received this earlier today, and wondered if cross-posting it would be
appropriate. Here goes...
Rik
---------- Forwarded message ---------
From: Casey Henderson-Ross <casey.henderson(a)usenix.org>
Date: Tue, May 6, 2025 at 2:12 PM
Subject: An Announcement about the USENIX Annual Technical Conference
To: Rik Farrow <rik(a)rikfarrow.com>
Read on for important news.
Hello, Rik,
USENIX celebrates its 50th anniversary in 2025. We celebrate decades of
innovations, experiments, and gatherings of the advanced computing system
community. And in the spirit of our ever-evolving community, field, and
industry, we announce the bittersweet conclusion of our longest-running
event, the USENIX Annual Technical Conference in July 2025, following
USENIX ATC '25.
Since USENIX's inception in 1975, it has been a key gathering place for
innovators in the advanced computing systems community. The early days of
meetings evolved into the two annual conferences, the USENIX Summer and
Winter Conferences, which in 1995 merged into the single Annual Technical
Conference that has continued to evolve and serve thousands of our
constituents for 30 years.
For the past two decades, as more USENIX conferences have joined the USENIX
calendar by focusing on specific topics that grew out of ATC itself,
attendance at ATC has steadily decreased to the point where there is no
longer a critical mass of researchers and practitioners joining us. Thus,
after many years of experiments to adapt this conference to the
ever-changing tech landscape and community, the USENIX Board of Directors
has made the difficult decision to sunset USENIX ATC.
USENIX ATC in its many iterations has been the home of an incredible list
of "firsts" in our industry:
- In 1979, ONYX, the first attempt at genuine UNIX hardware, was
announced.
- In 1982, DEC unveiled the creation of its UNIX product.
- In 1983, Eric Allman presented the first paper on Sendmail, "Mail
Systems and Addressing in 4.2BSD."
- In 1985, Sun Microsystems presented the first paper on NFS, "Design
and Implementation of the Sun Network Filesystem."
- In 1988, the first light on Kerberos and the X Window system was
presented.
- In 1989, Tom Christiansen made his first Perl presentation as an
Invited Talk.
- In 1990, John Ousterhout presented Tcl.
- In 1995, the first talk on Oak (later JAVA) was given as a
Work-in-Progress report.
- In 1998, Miguel de Icaza presented "The GNOME Desktop Environment."
These examples represent just a few of the many contributions presented at
USENIX ATC over the years, with hundreds of papers that account for
thousands of citations of work that the community has presented, discussed,
learned from, and built upon as the community evolved.
Part of that evolution involved the continued development of
subcommunities, as has been the case with USENIX Security, which began as a
workshop in 1988 and has since grown to an annual symposium and the largest
of our events in terms of both papers published and number of attendees
annually, with 417 papers and 1,015 attendees at USENIX Security '24. The
LISA (Large Installation System Administration) Conference, which also
started as a workshop in 1987, grew in a similar fashion to its peak of
over 1,000 attendees, but like USENIX ATC declined as its community changed
until its own sunset in 2021.
Papers on file and storage systems that would have previously been
presented at USENIX ATC in the early 2000s began to find homes at FAST when
it was founded in 2002. Networked systems papers started flowing to NSDI in
2003. As the biennial OSDI continued to grow alongside ACM's SOSP, OSDI
became annual in 2021 and SOSP followed suit, providing the community with
additional venues. While landmark moments in our community have continued
at USENIX ATC, many others have also occurred at these other USENIX
conferences, as showcased in the USENIX Test of Time Awards
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>
and the ACM SIGOPS Hall of Fame Awards
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
which celebrate influential works presented at both SOSP and OSDI. Although
numerous papers continue to be submitted to USENIX ATC with significant
research being reviewed, accepted, and presented, the community has
evolved, and now attends conferences other than USENIX ATC. From 1,698
attendees in San Diego in 2000, ATC attendance dwindled to 165 attendees in
Santa Clara in 2024—even as we had over 4,000 people attend all USENIX
conferences in 2024.
USENIX recognizes the pivotal role that USENIX ATC has played in the
shaping of the Association itself as well as the lives and careers of its
many attendees and members. We also realize that change is inevitable, and
all good things must come to an end: if it weren't for ATC being a "victim
of its own great success"—a foundry of so many other successful conferences
and workshops—USENIX would never have grown and expanded so much over the
decades. Thus our hearts are heavy as we celebrate ATC's history and legacy
alongside the evolution of its younger siblings and face the financial
realities of the post-pandemic world and volatile global economy. USENIX's
resources to support its conferences and communities are not unlimited,
particularly as we maintain our commitment to open-access publications that
are free for our authors to publish. We have been reporting about these
challenges via our Annual Meeting and subsequent reports for the past
several years (2024
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
2023
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
2022
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
2021
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
2020
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>).
We deeply appreciate the financial support we have received from our
community in the form of donations, memberships, and conference
registrations. However, we continue to operate at a deficit and ATC
continues to shrink. In making this hard choice, accepting the reality in
front of us that encourages us to innovate in a different direction under
challenging circumstances, we seek to embody the values of this community
that was founded on curiosity, persistence, and collaboration.
As we celebrate 50 years of both USENIX and ATC in its varying forms, we
look towards the future of this vibrant community in the form of the many
conferences that ATC continues to help create in its image: welcoming,
thoughtful environments for the presentation of innovative work that fellow
conference attendees help push forward. We look forward to honoring ATC's
legacy alongside USENIX's history and its future in Boston in July of 2025.
We would love to hear memories of your experience at the USENIX Annual
Technical Conference over the years. Please submit your thoughts
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>
in words, video, or both by *Monday, June 2*. We will share your memories
both at USENIX ATC '25 and afterwards. We hope that you will join us at USENIX
ATC '25
<https://s.usenix.org/acton/ct/2452/s-0514-2505/Bct/l-sf-cl-7018Y000001JtF6Q…>,
which will include both a celebration of USENIX's 50th anniversary on the
evening of *Monday, July 7*, and a tribute to USENIX ATC on the
evening of *Tuesday,
July 8*.
Best Regards,
Casey Henderson-Ross
Executive Director
USENIX Association
[looping TUHS back in since I'm correcting a message I sent there]
Hi Dave,
At 2025-05-06T08:36:55-0500, Dave Kemper wrote:
> On Fri, May 2, 2025 at 7:35 PM G. Branden Robinson
> <g.branden.robinson(a)gmail.com> wrote:
> > I guess another way of saying this is that, as I conceive it, a line
> > that is "adequately full" contributes to the page's uniformity of
> > grayness by definition.
>
> For an example of less-than-ideal results if this is _not_ considered
> the case (groff's behavior before this change), see
> http://savannah.gnu.org/bugs/?60673#comment0 (the initial report that
> precipitated the commit Doug is commenting on).
Yes. In my reply to Doug I incorrectly characterized the resolution of
this bug as a "2023" change of mine, but I actually landed the change in
2021. It simply took until 2023 to appear in a released _groff_.
To make this message more TUHS-o-riffic, let's observe how that input formats
using DWB 3.3 troff and Heirloom Doctools troff (descended from Solaris troff,
descended from DWB 2.0 troff [I think]), both of which descend from
Kernighan's device-independent troff circa 1980.
$ DWBHOME=. ./bin/nroff groff-60673.roff | cat -s
While the example in bug 57836's original report is somewhat
contrived and a bit of an edge case in real life, there turns out
to be a more innate bug in grotty's balancing algorithm. As
mentioned before (and easily observable), when grotty adds spaces
to a line in the process of justifying it, the algorithm it
utilizes adds spaces from opposite ends of each line. But when it
adds this space, it does not take into account lines with no
adjustment at all required. Therefore if space only need be added
to every other line of the text, all the space ends up being
added to the same end of the line, degrading the uniform grayness
of the output, as can be seen in this example. There is one
fairly simple way to address this: grotty shouldn't "count" lines
that don't need to be adjusted; instead, it should apply the
alternation pattern only to those lines that do need adjustment.
$ ./bin/nroff groff-60673.roff | cat -s
While the example in bug 57836's original report is somewhat
contrived and a bit of an edge case in real life, there turns out
to be a more innate bug in grotty's balancing algorithm. As
mentioned before (and easily observable), when grotty adds spaces
to a line in the process of justifying it, the algorithm it
utilizes adds spaces from opposite ends of each line. But when it
adds this space, it does not take into account lines with no
adjustment at all required. Therefore if space only need be added
to every other line of the text, all the space ends up being
added to the same end of the line, degrading the uniform grayness
of the output, as can be seen in this example. There is one
fairly simple way to address this: grotty shouldn't "count" lines
that don't need to be adjusted; instead, it should apply the
alternation pattern only to those lines that do need adjustment.
They are the same, and differ from groff 1.22.4 and earlier only in that
they adjust spaces starting from the right end of the line instead of
the left.
At the risk of tooting our own horn, here's how groff 1.23.0+ handles
the same input.
$ ~/groff-1.23.0/bin/nroff groff-60673.roff | cat -s
While the example in bug 57836’s original report is somewhat
contrived and a bit of an edge case in real life, there turns out
to be a more innate bug in grotty’s balancing algorithm. As
mentioned before (and easily observable), when grotty adds spaces
to a line in the process of justifying it, the algorithm it
utilizes adds spaces from opposite ends of each line. But when it
adds this space, it does not take into account lines with no
adjustment at all required. Therefore if space only need be added
to every other line of the text, all the space ends up being
added to the same end of the line, degrading the uniform grayness
of the output, as can be seen in this example. There is one
fairly simple way to address this: grotty shouldn’t "count" lines
that don’t need to be adjusted; instead, it should apply the
alternation pattern only to those lines that do need adjustment.
Three observations:
1. One can find the input at Dave's URL above.
2. The input disables inter-sentence spacing.
3. The adjustment algorithm is a property not of grotty(1) (the output
driver), but of GNU troff itself.
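For the curious, here is a minimal sketch of that "alternate ends, but only
count adjusted lines" idea, in C rather than GNU troff's actual C++ (so it
illustrates the algorithm, not groff's code; see env.cpp for the real thing):

/*
 * Illustration only -- not GNU troff's implementation.  Extra spaces
 * needed to fill a line out to the full measure are handed out across
 * its word gaps, starting from alternating ends of the line; the
 * alternation flag is toggled only for lines that actually needed
 * adjustment, which is the behavioral fix discussed above.
 */
#include <stddef.h>

static int from_right = 1;	/* which end gets the leftover spaces first */

/* gaps[i]: spaces already in word gap i; nextra: spaces still needed. */
void
adjust_line(int *gaps, size_t ngaps, int nextra)
{
	size_t i;

	if (nextra <= 0 || ngaps == 0)
		return;			/* unadjusted line: leave the flag alone */

	for (i = 0; i < (size_t)nextra; i++) {
		size_t g = i % ngaps;
		if (from_right)
			g = ngaps - 1 - g;	/* distribute from the right end */
		gaps[g]++;
	}
	from_right = !from_right;	/* alternate only across adjusted lines */
}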
Regards,
Branden
Aharon Robbins:
So, before I just refer to them as "classics", can anyone recommend
more recent books? Feel free to answer in private.
===
`Unix security' is not the most-specific of terms. Can
you give more context?
Norman Wilson
Toronto ON
> From: Clem Cole
> Yes, that was one of the RTS compilers for the NU machine. John Romkey
> may have done it, as he was the primary person behind PCIP
I decided to poke around in the 'MIT-CSR' dump, since that was the machine
the PC/IP project started on, to see what I could find. Hoo boy! What an
adventure!
In the PC/IP area, I found a 'c86' directory - but it was almost empty. It
did have a shell file, 'grab', which contained:
tftp -g $1 xx "PS:<Wayne>$1"
and a 'graball' file which called 'grab' for the list of compiler source
files. ('xx' was MIT-XX, the TOPS-20 main time-sharing machine of LCS.)
So I did a Web search for Wayne Gramlich (with whom I hadn't communicated in
many decades), and he popped right up. (Amazing thing, this Internet thingy.
Who'd have ever thought, back in the day, that it would turn into what it
did? Well, probably John Brunner, whom I (sadly) never met, who was there
before any of us.)
I took a chance, and called his number, and he was there, and we had a long
chat. He absolutely didn't do it, although he wrote the loader the project
used ('l68', the source for which I did find.) He's virtually certain Romkey
didn't (which would have been my guess too; Romkey was like a sophomore when
the project started). His best (_very_ faded) memory was that they started off
with a commercial compiler. (But see below.)
That leaves several mysteries. 1) Why would a commercial compiler not come
with a linker? 2) Why did people who wanted to work with the PC/IP source
need a Bell license?
I did some more poking, and the list of files for the 86 compiler, from
'graball':
trees.c optim.c pftn.c code.c local.c scan.c xdefs.c
table.c reader.c local2.c order.c match.c allo.c comm1.c
manifest mfile1 common macdefs mfile2 mac2defs
matched the file names from 'pcc', as given in "A Tour Through the Portable C
Compiler":
https://maibriz.de/unix/ultrix/_root/porttour.pdf
(in section "The Source Files"). So whether the 86 compiler was done at MIT
(by someone in RTS), or at a company, it was definitely a 'pcc' descendant.
(Possibly adding to the confusion, we had some other C compilers for various
ISA's in that project [building networking software for various
micro-computers], including an 8080 C compiler from Whitesmiths, Ltd, which I
have also found. It's possible that Wayne's vague memory of a commercial
compiler is of that one?)
I really should reach out to Romkey and Bridgham, to see what they remember.
Later today.
Whether the main motivation for keeping the compiler source on XX was i)
because disk space was short on CSR (we had only a hand-me-down pair of
CalComp Model 215 drives - capacity 58 Mbytes per drive!); ii) to prevent
version skew; or iii) because it was a commercial compiler, and we had to
protect the source (e.g. we didn't have the source to the 8080 compiler, only
the object modules), I have no idea.
> Anyway the MIT RTS folks made hardware and PCC back ends for the 68K,
> Z8000 and 8086. I believe that each had separate assemblers, tjt who
> sometimes reads this list might know more, as he wrote the 68K assembler.
There is an 'a86' directory on CSR, but it too is empty, except for a 'grab'
command file. That contains only:
tftp -g $1 xx "PS:<novick>$1"
I have no memory of who 'novick' might have been. A Web search for 'novick
mit lcs' didn't turn anything up. (I wonder if it might have been Carol
Novitsky; she was in our group at LCS, and I have a vague memory of her being
associated with the networking software for micro-computers project.)
Anyway, it probably doesn't matter; the c86 'grab' referred to Wayne, but he
didn't write c86; 'novick' might not have written a86.
Something else to ask Romkey and Bridgham about.
Noel
Branden,
> The relevant function fits on one screen, if your terminal window is at
> least 36 lines high. :) (Much of it is given over to comments.)
> https://git.savannah.gnu.org/cgit/groff.git/tree/src/roff/troff/env.cpp?id=…
Actually there's still another function, spread_space that contains
the inner R-L and L-R loops. The whole thing has become astonishingly
complicated compared to what I remember as a few (carefully crafted)
lines of code in the early roff. I admire your intrepid forays into
the groff woods, of which this part must be among the less murky.
Doug
So part of Western Electric/AT&Ts developer support for the WE32x00 CPU line was
the WE321SB VME-bus single-board computer. The official operating system for
this was UNIX System V/VME. This version is referenced in several document
catalogues and literature surrounding this VME module. I was curious if anyone
on list happens to know of any surviving System V/VME artifacts floating around
out there. All I've been able to find are references to the system in other
WE32x00 and UNIX documentation.
Thanks for any info!
- Matt G.
> From: Tom Lyon <pugs78(a)gmail.com>
>
> I was pleased to learn that the first port of S to UNIX was on the
> Interdata 8/32, which I had my part in enabling.
I would love to hear more about the Interdata port and what
happened with it afterwards. Interdata seems to have disappeared
into the dustbin of history. And Unix on it apparently never
got out of Bell Labs; I don't think the code for it is in the
TUHS archives.
Was the Interdata system in use at Bell Labs for actual work once
the port was complete?
ISTR there was a meeting with Interdata about changes in the architecture
that Bell Labs wanted, that Interdata didn't want to make. What
was the full story?
Any other info would be welcome.
Thanks,
Arnold