Hello, today I received in the mail a book I ordered, apparently written by one of the engineers at Sega responsible for their line of consoles. It's all in Japanese, but based on the little I can read plus the tables in the text, it appears to be fairly technical and thorough. I'm excited to start translating it and see what lies within.
In any case, it got me thinking about what company this book might keep as far as Japanese literature concerning computing history there, or even just significant literature in general regarding Japanese computer history. While we are more familiar with IBM, DEC, workstations, minis, etc., the Japanese market had its own spate of different systems, such as NEC's various "PCs" (not PC compatibles: the PC-60, PC-88, PC-98), the Sharp X68000, MSX(2), etc., and then of course Nintendo, Sega, NEC, Hudson, and the arcade board manufacturers. My general experience is that Japanese companies are significantly more tight-lipped about everything than those in the U.S. and other English-speaking countries, going so far as to require employees to use pseudonyms in any sort of credits to prevent potential poaching. As such, first-party documentation for much of this stuff is incredibly difficult to come by, and secondary materials, memoirs, and the like are, in my experience at least, virtually non-existent. However, that is also from my perspective here across the seas, trying to research an obscure, technical subject in my non-native tongue. Anyone here have a particular eye for Japanese computing? If so, I'd certainly be interested in some discussion; it doesn't need to be on-list either.
- Matt G.
Howdy folks, just wanted to share a tool I wrote up today in case it might be useful for someone else: https://gitlab.com/segaloco/dis65
This has probably been done before, but this is a bare-bones, one-pass MOS 6500 disassembler that does nothing more than convert bytes to mnemonics and operands, so no labeling, no origins, etc. My rationale: as I work on my Dragon Quest disassembly, there are times I have to pop a couple of bytes through the disassembler again because something got misaligned or hit some other weird issue. My disassembler throughout the project has been da65, which does all the labeling and origin stuff but, as such, requires a lot of seeking and isn't really amenable to a pipeline. That has required me to do something like:
printf "\xAD\xDE\xEF\xBE" > temp.bin && da65 temp.bin && rm temp.bin
to get the assembly equivalent of 0xDEADBEEF.
Enter my tool; it enables stuff like:
printf "\xAD\xDE\xEF\xBE" | dis65
instead. A longer term plan is to then write a second pass that can then do all the more sophisticated stuff without having to bury the mnemonic generation down in there somewhere, plus that second pass could then be architecture-agnostic to a high degree.
Anywho, feel free to do what you want with it; it's BSD licensed. One "bug" I need to address is that all byte values are presented as unsigned, but in the case of indirects and a few other circumstances it would make more sense for them to be signed. I probably won't jump on that ASAP, but know that it's a coming improvement. While common in disassemblers, I have no intention of adding things like printing the binary bytes next to the opcodes. This also doesn't support any of the undocumented opcodes, although it should be trivial to add them if needed. I went with lower-case since my assembler supports it, but you should have a fine time piping into tr(1) if you need all caps for an older assembler.
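For example, something like this should do it (untested, but it's plain POSIX tr):

printf "\xAD\xDE\xEF\xBE" | dis65 | tr a-z A-Z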
- Matt G.
C, BLISS, BCPL, and the like were hardly the only systems programming
languages that targeted the PDP-11. I knew about many system programming
languages of those times and used all three of these, plus a few others,
such as PL/360, which Wirth created at Stanford in the late 1960s to
develop the Algol-W compiler. Recently, I was investigating something
about early systems programming languages, and a couple of questions came
to me that I could use some help finding answers to (see below).
In 1971, R.D. Russell of CERN wrote a child of Wirth's PL/360 called PL-11:
Programming Language for the DEC PDP-11 Computer
<https://cds.cern.ch/record/880468/files/CERN-74-24.pdf> in Fortran IV. It
supposedly ran on CERN's IBM 360 as a cross-compiler and was hosted on
DOS-11 and later RSX. [It seems very 'CARD' oriented if you look at the
manual - which makes sense, given the time frame]. I had heard about it
once before but knew little. So, I started to dig.
If I understand some of the history correctly, PL-11 was created/developed
for a real-time test jig that CERN needed. BLISS-11, while available in
limited cases, required a PDP-10 to cross-compile, so it was not
considered (I've stated earlier that some poor marketing choices at DEC
hurt BLISS's ability to spread). Anyway, a friend at CERN later in the
70s/80s told me that as soon as UNIX made it onto the scene there, being
interactive and more accessible, C quickly became the preferred systems
programming language. However, a BCPL that had come from somewhere in
the UK was also kicking around.
So, some questions WRT PL-11:
1. Does anyone here know any (more) of the story -- Why/How?
2. Do you know if the FORTRAN source survives?
3. Did anything interesting/lasting get written using it?
Tx
Clem
I thought folks on COFF and TUHS (Bcc'ed) might find this interesting.
Given the overlap between SDF and LCM+L, I wonder what this may mean
for the latter.
- Dan C.
---------- Forwarded message ---------
From: SDF Membership <membership(a)sdf.org>
Date: Thu, Aug 24, 2023 at 9:10 PM
Subject: [SDF] Computer Museum
To:
We're in the process of opening a computer museum in the Seattle area
and are holding our first public event on September 30th - October 1st.
The museum features interactive exhibits of various vintage computers
with a number of systems remotely accessible via telnet/ssh for those
who are unable to visit in person.
If this interests you, please consider replying with comments and
taking the ASCII survey below. You can mark an X for what interests you.
I would like to know more about:
[ ] visiting the museum
[ ] how to access the remote systems
[ ] becoming a regular volunteer or docent
[ ] restoring and maintaining various vintage systems
[ ] curation and exhibit design
[ ] supporting the museum with an annual membership
[ ] supporting the museum with an annual sponsorship
[ ] funding the museum endowment
[ ] day to day administration and operations
[ ] hosting an event or meetup at the museum
[ ] teaching at the museum
[ ] donating an artifact
Info on our first public event can be found at https://sdf.org/icf
Good morning folks, I'm hoping to pick some brains on something that is troubling me in my search for some historical materials.
Was there some policy prior to mass PDF distribution whereby standards bodies like ANSI only printed copies of standards "to order," or something like that? What has me asking is that when looking for programming materials from before PDF distribution would've taken over, there's a dearth of actual ANSI print publications. I've only come across one actual print standard in all my history of searching, a copy of Fortran 77 which I guard religiously. Compare this with PALLETS' worth, like I'm talking warehouse-wholesale levels, of secondary sources for the same things. I could *drown* in all the secondary COBOL 74 books I see all over the place, but I've never seen a blip of a suggestion of a whisper of an auction of someone selling a legitimate copy of ANSI X3.23-1974. It feels like searching for a copy of the Christian Bible and literally all I can find are self-help books and devotional readers from random followers. Are the standards really that scarce, or would most owners back in the day have thrown them in the wood chipper when the next edition dropped, leading to an artificial narrowing of the number of physical specimens still extant?
To summarize: why do print copies of primary standards from the elden days of computing seem like cryptids, while one can flatten oneself into a pancake under the mountains upon mountains of derivative materials out there? Why is filtered material infinitely more common than the literal rule of law governing these languages? For instance, the closest thing to the legitimate ANSI C standard, a world-changing document, that I can find is the "annotated" version, which thankfully is the full text, but blown up to twice the thickness just to include commentary. My bookshelf is starting to run out of room to accommodate noise like that when there are nice, succinct, "the final answer" documents that take up much less space but seem to virtually not exist...
- Matt G.
I was wondering if anyone close to Early Unix and Bell Labs would offer some comments on the
evolution of Unix and the quality of decisions made by AT&T senior managers.
Tom Wolfe did an interesting piece on Fairchild / Silicon Valley,
where he highlights the difference between SV’s management style
and the “East Coast” Management style.
[ Around 2000, “Silicon Valley” changed from being ‘chips & hardware’ to ’software’ & systems ]
[ with chip making, every new generation / technology step resets competition, monopolies can’t be maintained ]
[ Microsoft showed that Software is the opposite. Vendor Lock-in & monopolies are common, even easy for aggressive players ]
Noyce & Moore ran Fairchild Semiconductor, but Fairchild Camera & Instrument was ‘East Coast’
or “Old School” - extracting maximum profit.
It seems to me, an outsider, that AT&T management saw how successful Unix was
and decided they could apply their size, “marketing knowhow” and client lists
to becoming a big player in Software & Hardware.
This appears to be the reason for the 1984 divestiture.
In another decade, they gave up and got out of Unix.
Another decade on, AT&T was bought by one of the Baby Bells, SBC.
SBC had understood that the future growth market for telephony was "Mobile",
and instead of "Traditional" Telco pricing - "What the market will bear", plus requiring Gross Margins over 90% -
SBC adopted more of a Silicon Valley pricing approach: modest Gross Margins
and high "pass through" rates, handing most or all cost reductions on to customers.
If you're in a Commodity market, passing on cost savings to customers is "Profit Maximising".
It isn't because Commodity markets are highly competitive, but because Volumes drive profit,
and lower prices stimulate demand / Volumes. [ Price Elasticity of Demand ]
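[ A worked example with invented numbers: if elasticity = %ΔQ / %ΔP = -2,
a 10% price cut lifts Volume by 20%, so revenue changes by a factor of
0.90 x 1.20 = 1.08, i.e. up 8% despite the lower price. ]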
Kenneth Flamm has written a lot on “Pass Through” in Silicon Chip manufacture.
Just to close the loop, Bell Labs, around 1966, hired Fred Terman, ex-Dean of Engineering at Stanford,
to write a proposal for a "Silicon Valley East".
AT&T management were fully aware of California, and perhaps saw it as a long-term threat.
How could they replicate in New Jersey the powerhouse of innovation that was happening in California?
Many places in many countries looked at this and a few even tried.
Apparently South Korea is the only attempt that did reasonably well.
I haven't included links, but Gordon Bell, known for formulating a law of computer 'classes',
forecast early that MOS/CMOS chips would overtake Bipolar - used by Mainframes - in speed.
It gave a way to use all the transistors on a chip that Moore's Law would provide,
and with CPUs on a few chips, or just one, the price of systems would plummet.
He forecast the cutover in 1985 and was right.
The MIPS R2000 blazed past every other chip the year it was released.
And of course, the folk at MIPS understood that building their own O/S, tools, libraries, etc.
was a fool's errand - they had Unix experience and ported a version.
By 1991, IBM was almost the Last Man Standing of the original 1970s "IBM & the BUNCH",
and their mainframe revenues had collapsed. In 1991 and 1992, IBM racked up the largest
corporate losses in US history to that time, then managed to survive.
Linux has, in my mind, proven the original mid-1970s position of CSRC/1127
that Software has to be ‘cheap’, even ‘free’
- because it’s a Commodity and can be ’substituted’ by others.
=================================
1956 - AT&T / IBM Consent decree: 'no computers, no software'
1974 - CACM article, CSRC/1127 in Software Research, no commercial Software allowed
1984 - AT&T divested, doing commercial Software & Computers
1985 - MIPS R2000, 2x throughput at the same clock speed. Faster than bipolar; CMOS CPUs soon overtook ECL
1994 - AT&T sells Unix
1996 - "Tri-vestiture", Bell Labs sold to Lucent, some staff to AT&T Research
2005 - SBC buys AT&T, long-lines + 4 Baby Bells
=================================
Code Critic
John Lions wrote the first, and perhaps only, literary criticism of Unix, sparking one of open source's first legal battles.
Rachel Chalmers
November 30, 1999
https://www.salon.com/test2/1999/11/30/lions_2/
"By the time the seventh edition system came out, the company had begun to worry more about the intellectual property issues and trade secrets and so forth," Ritchie explains.
"There was somewhat of a struggle between us in the research group who saw the benefit in having the system readily available,
and the Unix Support Group ...
Even though in the 1970s Unix was not a commercial proposition,
USG and the lawyers were cautious.
At any rate, we in research lost the argument."
This awkward situation lasted nearly 20 years.
Even as USG became Unix System Laboratories (USL) and was half divested to Novell,
which in turn sold it to the Santa Cruz Operation (SCO),
Ritchie never lost hope that the Lions books could see the light of day.
He leaned on company after company.
"This was, after all, 25-plus-year-old material, but when they would ask their lawyers,
they would say that they couldn't see any harm at first glance,
but there was a sort of 'but you never know ...' attitude, and they never got the courage to go ahead," he explains.
Finally, at SCO [ by July 1996 ], Ritchie hit paydirt.
He already knew Mike Tilson, an SCO executive.
With the help of his fellow Unix gurus Peter Salus and Berny Goodheart, Ritchie brought pressure to bear.
"Mike himself drafted a 'grant of permission' letter," says Ritchie,
"'to save the legal people from doing the work!'"
Research, at last, had won.
=================================
Tom Wolfe, Esquire, 1983, on Bob Noyce:
The Tinkerings of Robert Noyce | Esquire | December 1983
http://classic.esquire.com/the-tinkerings-of-robert-noyce/
=================================
Special Places
IEEE Spectrum Magazine
May 2000
Robert W. Lucky (Bob Lucky)
https://web.archive.org/web/20030308074213/http://www.boblucky.com/reflect/…
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=803583
Why does place matter? Why does it matter where we live and work today when the world is so connected that we're never out of touch with people or information?
The problem is, even if they get da Vinci, it won't work.
There's just something special about Florence, and it doesn't travel.
Just as in this century many places have tried to build their own Silicon Valley.
While there have been some successes in Boston, Research Triangle Park,
Austin, and Cambridge in the U.K., to name a few significant places, most
attempts have paled in comparison to the Bay Area prototype.
In the mid-1960s New Jersey brought in Fred Terman, the Dean at Stanford and architect of Silicon Valley, and commissioned him to start a Silicon Valley East.
[ Terman retired from Stanford in 1965 ]
=================================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
[TUHS to Bcc]
On Wed, Feb 1, 2023 at 3:23 PM Douglas McIlroy
<douglas.mcilroy(a)dartmouth.edu> wrote:
> > In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes
>
> I don't know any Unix examples, but DTSS (Dartmouth Time Sharing
> System) "communication files" were used for the purpose. For a fuller
> story see https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
Interesting. This is now being discussed on the Multicians list (which
had a DTSS emulator! Done for use by SIPB). Warren Montgomery
discussed communication files under DTSS for precisely this kind of
thing; apparently he had a chess program he may have run under them.
Barry Margolin responded that he wrote a multiuser chat program using
them on the DTSS system at Grumman.
Margolin suggests a modern Unix-ish analogue may be pseudo-ttys, which
came up here earlier (I responded pointing to your wonderful note
linked above).
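As a concrete modern illustration (my own sketch, not from the thread): socat
can conjure a connected pty pair, e.g.

socat -d -d pty,raw,echo=0 pty,raw,echo=0

socat prints the names of the two ptys it allocates; one process can then
write moves into one side while a partner reads them from the other, much
as the chess and chat programs did with communication files.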
> > This is probably a bit more Plan 9-ish than UNIX-ish
>
> So it was with communication files, which allowed IO system calls to
> be handled in userland. Unfortunately, communication files were
> complicated and turned out to be an evolutionary dead end. They had
> had no ancestral connection to successors like pipes and Plan 9.
> Equally unfortunately, 9P, the very foundation of Plan 9, seems to
> have met the same fate.
I wonder if there was an analogy to multiplexed files, which I admit
to knowing very little about. A cursory glance at mpx(2) on 7th
Edition at least suggests some surface similarities.
- Dan C.