Warren Toomey via TUHS <tuhs(a)tuhs.org> once said:
> The history of Unix is not just of the technology, but also of the
> people involved, their successes and travails, and the communities
> that they built. Mary Ann referenced a book about her life and
> journey in her e-mail's .sig. She is a very important figure in the
> history of Unix and I think her .sig is entirely relevant to TUHS.
Are you fine with everyone advertising whatever views
and products they want in their signatures or would I
have to be a very important figure?
If I want to say, for example, that the vast amount of
software related to Unix that came out of Berkeley was
so harmful it should have a retroactive Prop 65 label,
would that be okay to have in my signature?
Cheers,
Anthony
The vast amount of software related to Unix that came
out of Berkeley was so harmful it should have a
retroactive Prop 65 label.
[Quote from some person completely unrelated to Unix.]
[A link to buy my children's picture book about the
tenuous connection between Unix and the NATO terror
bombing of Yugoslavia, direct from Jeff Beelzebub's
bookstore.]
End of signature.
Sorry Warren, I couldn't help myself. I was "triggered"
just like Dan Cross, previously in that thread, who could
not stay silent.
"Silence is violence, folx."
- Sus of size (a.k.a. postmodernist Porky Pig)
Sent to COFF as Dan should have done.
Howdy all, I've made some mention of this in the past but it's finally at a point I feel like formally plugging it. For the past few years I've been tinkering on a disassembly of Dragon Quest as originally released on the Famicom (NES) in Japan. The NA release had already gotten this treatment[1] but I wanted to produce something a little more clean and separated: https://gitlab.com/segaloco/dq1disasm
Another reason for this project is that I've longed for a Lions' Commentary on UNIX-quality analysis of a scrollplane/sprite-driven console video game, as I find analysis of this type of hardware platform generally lacking outside of random, chance blog posts and incredibly specialized communities where it takes forever to find the information you're seeking. In the longer term I intend to produce a commentary on this codebase covering all the details of how it makes the Famicom tick.
So a few of the difficulties involved:
The Famicom is 6502-based, with 32768 bytes of ROM address space directly accessible to the CPU and another 8192 bytes of ROM address space accessible through the Picture Processing Unit, or PPU. The latter is typically graphics ROM displayed directly by the PPU, but data can be copied to CPU memory space as well. To achieve larger sizes, bank swapping is widely employed, with scores of different bank-swapping configurations in use. Dragon Quest presents an interesting scenario in that the Japanese and North American releases use *different* mapper schemes, so while much of the code is the same, some parts were apples to oranges between the existing and the new disassembly. Where this has been challenging (and it's a challenge many other Famicom disassembly projects haven't taken on) is getting a build that resembles a typical software product, i.e. things nicely split out with little concern as to what bank they're going to wind up on. My choice of linker facilitates this by being able to define separate output binaries for different memory segments, so I can produce these banks as distinct binaries, and their population is simply a matter of segment assignment. Moving items between banks then becomes simply a matter of changing the segment and adding/removing a bank-in before calling out. I don't know that I've ever seen a Famicom disassembly do this, but I also can't say I've gone looking heavily.
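To make that concrete, here's a minimal sketch of the linker side of the idea, assuming a cc65-style toolchain (ca65/ld65); the names, addresses, and sizes are placeholders rather than the repository's actual layout. Each memory area writes its contents to its own flat binary, so moving a routine to a different bank is mostly a matter of changing the segment it's assembled into:

cat > banks.cfg << 'EOF'
MEMORY {
    # two swappable 16 KiB banks and one fixed bank, each emitted as its own file
    PRG0: start = $8000, size = $4000, fill = yes, file = "prg0.bin";
    PRG1: start = $8000, size = $4000, fill = yes, file = "prg1.bin";
    PRGF: start = $C000, size = $4000, fill = yes, file = "prgf.bin";
}
SEGMENTS {
    BANK0: load = PRG0, type = ro;
    BANK1: load = PRG1, type = ro;
    FIXED: load = PRGF, type = ro;
}
EOF
# then something like: ld65 -C banks.cfg <object files>  (exact invocation depends on the setup)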
Another challenge has been the language, and not just because it's Japanese and not my native tongue. Old Japanese computing is largely Shift JIS, whereas we now have UTF-8. Well, Dragon Quest uses neither of these: the string format is instead indices into the table of kana character tiles, in the order those tiles sit in memory. What this means is that to actually manage strings effectively (and I haven't done this yet), one needs a to-fro encoder that can convert between UTF-8 Japanese and the particular positioning of the various characters. This says nothing of the difficulty of then translating the game. Most Japanese games of this era used exclusively kana, for two reasons. First, these are games for children, so too much complicated kanji and you've locked out your target audience. The other reason is space: it would be impossible to fit all the necessary kanji in memory, even as 8x8-pixel tiles (the main graphic primitive on these chips). Plus, even if you could, an 8x8-pixel tile is hardly enough resolution to read many kanji, so they'd likely need 16x16, quadrupling the space requirement if you didn't recycle quadrant radicals. In any case, what this means is that all of the strings translate to strictly hiragana or katakana, which are Japanese, but not as easily intelligible, since the groupings of kana often have to be interpreted by context clues rather than having an exact reading. The good news, though, is that again these are games for children, so the vocabulary shouldn't be too complicated.
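As a toy illustration of the to-fro encoder idea, a sketch in shell, assuming GNU grep in a UTF-8 locale: "kana.tbl" is a hypothetical file listing one kana per line in the same order the character tiles sit in ROM, so a character's tile index is just its line number minus one (a real encoder would also have to handle dakuten, punctuation, control codes, and the reverse direction):

# print the tile index of each kana in the string, one per line
printf 'すらいむ' | grep -o . | while read ch; do
    n=$(grep -n -x -F -- "$ch" kana.tbl | cut -d: -f1)
    printf '%d\n' $((n - 1))
done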
Endianness occasionally presents some problems, one of which I suspect exists because the chip designers didn't talk to each other... The PPU exposes a register you write two bytes to: the high byte and then the low byte of the address at which the next value written to the data register will land. The problem is that this is a 6502-driven system, so when grabbing a word from memory the high byte is the second one. This means that every word has to be sent to the PPU in reverse order when selecting an address. To my knowledge this PPU drew on some arcade hardware but was otherwise designed strictly for this console, so why they didn't present an address register in the CPU's byte order I'll never know. It tripped me up early on, but I worked past it.
Finally, back to the mappers: since a "binary" was really a gaggle of potentially overlapping ROM banks, there wasn't really a single ROM image format used by Nintendo back in the day. Rather, you sent along your banks, your layout, and what mapper hardware you needed, and that last step was a function of cartridge fab, not some software step. Well, the iNES format is an accommodation for that, presenting the string "NES" as a magic number in a 16-byte header that also contains a few bytes describing the physical cartridge: what mapper hardware, how many banks, what kind, and the particular jumpers that influence things like nametable mirroring. Countless disassemblies out there are built under the (incorrect) assumption that the iNES binary format is how the binary objects are supposed to be built, but this is simply not the case, and forcing this "false structure" actually makes analysis of the layout and organization of the original project more difficult. What I opted for instead is using the above-described mechanism of linker scripts defining individual binaries, in tandem with two nice old tools: printf(1) and cat(1). When all my individual ROM banks are built, I simply use printf to spit out the 16 bytes of an iNES header that match my memory particulars and then cat it all together, as the iNES format is simply that header followed by all of the PRG and CHR ROM banks in that order. The end result is a build that is agnostic (rightfully so) of the fact that it is becoming an iNES ROM, which is a community invention from years after the Famicom was in active use.
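For the curious, a minimal sketch of that printf(1)/cat(1) step; the bank counts, flag bytes, and file names here are illustrative placeholders, not the actual Dragon Quest values:

# 16-byte iNES header: 4-byte magic, PRG bank count (16 KiB units),
# CHR bank count (8 KiB units), two flag bytes (mapper/mirroring), zero padding
{
    printf 'NES\032'
    printf '\004\002'
    printf '\020\000'
    dd if=/dev/zero bs=1 count=8 2>/dev/null
} > header.bin
# the image is just the header followed by the PRG banks then the CHR banks
cat header.bin prg0.bin prg1.bin prg2.bin prg3.bin chr0.bin > game.nes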
Note that this is still an ongoing project. I don't consider it "done" but it is more usable than not now. Some things to look out for:
Any relative positioning of specific graphic data in CHR banks, outside that which has been extracted into specific files, is not dynamic, meaning moving a tileset around in CHR will garble the display of the associated objects.
A few songs (credits, dungeons, title screen) do not have every pointer massaged out of them yet, so if their addresses change, those songs at the very least won't sound right, and in extreme circumstances playing them with a shift in pointers could crash the game.
Much of the binary data is still BLOB-ified, hence not each and every pointer has been fixed up yet. This will be slow going, and there may be a few parts that wind up involving some scripting to keep them all dynamic (for instance, maps include block IDs and such, which are ultimately indices into a table, but the maps make more sense to keep as binary data, so perhaps a patcher is needed to "filter" a map and assign final block IDs; hard to say).
Also, while some data is very heavily coupled to the engine, such as music, and has accordingly been disassembled into audio-engine commands, other data, like graphic tiles, is more generic to the hardware platform and has no particular dependencies on the code. Such items can exist as pure BLOBs without worry of any pointers buried within or ID values that need to track indices in a particular table. These flat binary tiles and other comparable components are therefore *not* included in this repository, as they are the copyright of their respective owners. However, I have provided a script that allows you to extract these binary components should you have a clean copy of Dragon Quest for the Famicom as an iNES ROM. The script is all dd(1) calls based on the iNES ROM itself, but there should be enough notes contained within if a bank-based extraction is necessary. YMMV using the script on anything other than the commercial release; it is only intended to bootstrap things and will not properly extract the assets from arbitrary builds.
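For a flavor of what that extraction script does, a hypothetical pair of calls; the offsets and sizes here are placeholders, and the real values live in the script itself:

# skip the 16-byte iNES header and pull out the first 16 KiB PRG bank
dd if=dq1.nes of=prg0.bin bs=16 skip=1 count=1024
# CHR data sits after all the PRG banks; e.g. with four 16 KiB PRG banks:
dd if=dq1.nes of=chr0.bin bs=16 skip=4097 count=512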
If anyone has any questions or comments, feel free to contact me off-list, or on-list if there is worthwhile group discussion to be had. Either way, enjoy!
- Matt G.
[1] - https://github.com/nmikstas/dragon-warrior-disassembly
Hello, today I received in the mail a book I ordered, apparently written by one of the engineers at Sega responsible for their line of consoles. It's all in Japanese, but based on the little I know plus the tables in the text, it appears to be fairly technical and thorough. I'm excited to start translating it and see what lies within.
In any case, it got me thinking about what company this book might have as far as Japanese literature concerning computing history there goes, or even just significant literature in general regarding Japanese computer history. While we are more familiar with IBM, DEC, workstations, minis, etc., the Japanese market had its own spate of different systems, such as NEC's various "PCs" (not PC-compats: PC-68, PC-88, PC-98), the Sharp X68000, MSX(2), etc., and then of course Nintendo, Sega, NEC, Hudson, and the arcade board manufacturers. My general experience is that Japanese companies are significantly more tight-lipped about everything than those in the U.S. and other English-speaking countries, going so far as to require employees to use pseudonyms in any sort of credits to prevent potential poaching. As such, first-party documentation for much of this stuff is incredibly difficult to come by, and secondary materials, memoirs, and the like are, in my experience at least, virtually non-existent. However, that is also from my perspective here across the seas, trying to research an obscure, technical subject in my non-native tongue. Anyone here have a particular eye for Japanese computing? If so, I'd certainly be interested in some discussion; it doesn't need to be on-list either.
- Matt G.
Howdy folks, just wanted to share a tool I wrote up today in case it might be useful for someone else: https://gitlab.com/segaloco/dis65
This has probably been done before, but this is a bare-bones, one-pass MOS 6500 disassembler that does nothing more than convert bytes to mnemonics and parameters, so no labeling, no origins, etc. My rationale: as I work on my Dragon Quest disassembly, there are times I have to pop a couple of bytes through the disassembler again because something got misaligned or there's some other weird issue. My disassembler throughout the project has been da65, which does all the labeling and origin stuff but, as such, requires a lot of seeking and isn't really amenable to a pipeline, which has required me to do something like:
printf "\xAD\xDE\xEF\xBE" > temp.bin && da65 temp.bin && rm temp.bin
to get the assembly equivalent of 0xDEADBEEF.
Enter my tool; it enables stuff like:
printf "\xAD\xDE\xEF\xBE" | dis65
instead. A longer-term plan is to then write a second pass that can do all the more sophisticated stuff without having to bury the mnemonic generation down in there somewhere; plus, that second pass could be architecture-agnostic to a high degree.
Anywho, feel free to do what you want with it; it's BSD licensed. One "bug" I need to address is that all byte values are presented as unsigned, but in the case of indirects and a few other circumstances it would make more sense for them to be signed. I probably won't jump on that ASAP, but know that's a coming improvement. While it's common in disassemblers, I have no intention of adding things like printing the binary bytes next to the opcodes. Also, this doesn't support any of the undocumented opcodes, although it should be trivial to add them if needed. I went with lower case since my assembler supports it, but you should have a fine time piping into tr(1) if you need all caps for an older assembler.
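For instance, something along these lines (a hypothetical pipeline, following the examples above):

printf "\xAD\xDE\xEF\xBE" | dis65 | tr '[:lower:]' '[:upper:]'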
- Matt G.
C, BLISS, BCPL, and the like were hardly the only systems programming
languages that targeted the PDP-11. I knew about many system programming
languages of those times and used all three of these, plus a few others,
such as PL/360, which Wirth created at Stanford in the late 1960s to
develop the Algol-W compiler. Recently, I was investigating something
about early systems programming languages, and a couple of questions came
to me that I could use some help finding answers to (see below).
In 1971, RD Russell of CERN wrote a child of Wirth's PL/360 called PL-11:
Programming Language for the DEC PDP-11 Computer
<https://cds.cern.ch/record/880468/files/CERN-74-24.pdf> in Fortran IV. It
supposedly ran on CERN's IBM 360 as a cross-compiler and was hosted on
DOS-11 and later RSX. [It seems very 'CARD' oriented if you look at the
manual - which makes sense, given the time frame]. I had once before heard
about it but knew little. So, I started to dig a little.
If I understand some of the history correctly, PL-11 was created/developed
for a real-time test jig that CERN needed. While BLISS-11 was available
in limited cases, it required a PDP-10 to cross-compile, so it was not
considered (I've stated earlier that some poor marketing choices at DEC
hurt BLISS's ability to spread). Anyway, a friend at CERN later in the
70s/80s told me they thought that as soon as UNIX, being interactive and
more accessible, made it onto the scene there, C was quickly preferred
as the systems programming language of choice. However, a BCPL
that had come from somewhere in the UK was also kicking around.
So, some questions WRT PL-11:
1. Does anyone here know any (more) of the story -- Why/How?
2. Do you know if the FORTRAN source survives?
3. Did anything interesting/lasting get written using it?
Tx
Clem
I thought folks on COFF and TUHS (Bcc'ed) might find this interesting.
Given the overlap between SDF and LCM+L, I wonder what this may mean
for the latter.
- Dan C.
---------- Forwarded message ---------
From: SDF Membership <membership(a)sdf.org>
Date: Thu, Aug 24, 2023 at 9:10 PM
Subject: [SDF] Computer Museum
To:
We're in the process of opening a computer museum in the Seattle area
and are holding our first public event on September 30th - October 1st.
The museum features interactive exhibits of various vintage computers
with a number of systems remotely accessible via telnet/ssh for those
who are unable to visit in person.
If this interests you, please consider replying with comments and
take the ascii survey below. You can mark an X for what interests you.
I would like to know more about:
[ ] visiting the museum
[ ] how to access the remote systems
[ ] becoming a regular volunteer or docent
[ ] restoring and maintaining various vintage systems
[ ] curation and exhibit design
[ ] supporting the museum with an annual membership
[ ] supporting the museum with an annual sponsorship
[ ] funding the museum endowment
[ ] day to day administration and operations
[ ] hosting an event or meet up at museum
[ ] teaching at the museum
[ ] donating an artifact
Info on our first public event can be found at https://sdf.org/icf
Good morning folks, I'm hoping to pick some brains on something that is troubling me in my search for some historical materials.
Was there some policy, prior to mass PDF distribution, among standards bodies like ANSI of only printing copies of standards "to order" or something like that? What has me asking is that when looking for programming materials from before PDF distribution would've taken over, there's a dearth of actual ANSI print publications. I've only come across one actual print standard in all my history of searching, a copy of Fortran 77 which I guard religiously. Compare this with PALLETS' worth, like I'm talking warehouse-wholesale levels, of secondary sources for the same things. I could *drown* in all the secondary COBOL 74 books I see all over the place, but I've never seen a blip of a suggestion of a whisper of an auction of someone selling a legitimate copy of ANSI X3.23-1974. It feels like searching for a copy of the Christian Bible and literally all I can find are self-help books and devotional readers from random followers. Are the standards really that scarce, or are they something that most owners back in the day would've thrown in the wood chipper when the next edition dropped, leading to an artificial narrowing of the number of physical specimens still extant?
To summarize, why do print copies of primary standards from the olden days of computing seem like cryptids, while one can flatten themselves into a pancake under the mountains upon mountains of derivative materials out there? Why is filtered material infinitely more common than the literal rule of law governing the languages? For instance, the closest thing to the legitimate ANSI C standard, a world-changing document, that I can find is the "annotated" version, which thankfully is the full text, but blown up to twice the thickness just to include commentary. My bookshelf is starting to run out of room to accommodate noise like that when there are nice succinct "the final answer" documents that take up much less space but seem to virtually not exist...
- Matt G.
I was wondering if anyone close to Early Unix and Bell Labs would offer some comments on the
evolution of Unix and the quality of decisions made by AT&T senior managers.
Tom Wolfe did an interesting piece on Fairchild / Silicon Valley,
where he highlights the difference between SV’s management style
and the “East Coast” Management style.
[ Around 2000, “Silicon Valley” changed from being ‘chips & hardware’ to ’software’ & systems ]
[ with chip making, every new generation / technology step resets competition, monopolies can’t be maintained ]
[ Microsoft showed that Software is the opposite. Vendor Lock-in & monopolies are common, even easy for aggressive players ]
Noyce & Moore ran Fairchild Semiconductor, but Fairchild Camera & Instrument was ‘East Coast’
or “Old School” - extracting maximum profit.
It seems to me, an outsider, that AT&T management saw how successful Unix was
and decided they could apply their size, “marketing knowhow” and client lists
to becoming a big player in Software & Hardware.
This appears to be the reason for the 1984 divestiture.
In another decade, they gave up and got out of Unix.
Another decade on, AT&T had one of the Baby Bells, SBC, buy it.
SBC had understood the future growth markets for telephony was “Mobile”
and instead of “Traditional” Telco pricing, “What the market will bear” plus requiring Gross Margins over 90%,
SBC adopted more of a Silicon Valley pricing approach - modest Gross Margins
and high “pass through” rates - handing most/all cost reductions on to customers.
If you’re in a Commodity market, passing on cost savings to customers is “Profit Maximising”.
It isn’t because Commodity markets are highly competitive, but because Volumes drive profit,
and lower prices stimulate demand / Volumes. [ Price Elasticity of Demand ]
Kenneth Flamm has written a lot on “Pass Through” in Silicon Chip manufacture.
Just to close the loop, Bell Labs, around 1966, hired Fred Terman, ex-Dean of Stanford,
to write a proposal for “Silicon Valley East”.
The AT&T management were fully aware of California and perhaps it was a long term threat.
How could they replicate in New Jersey the powerhouse of innovation that was happening in California?
Many places in many countries looked at this and a few even tried.
Apparently South Korea is the only attempt that did reasonably well.
I haven’t included links, but Gordon Bell, known for formulating a law of computer ‘classes’,
did forecast early that MOS/CMOS chips would overtake Bipolar - used by Mainframes - in speed.
It gave a way to use all those transistors on a chip that Moore’s Law would provide,
and with CPU’s in a few, or one, chip, the price of systems would plummet.
He forecast the cutover in 1985 and was right.
The MIPS R2000 blazed past every other chip the year it was released.
And of course, the folk at MIPS understood that building their own O/S, tools, libraries etc
was a fool’s errand - they had Unix experience and ported a version.
By 1991, IBM was almost the Last Man Standing of the original 1970’s “IBM & the BUNCH”,
and their mainframe revenues collapsed. In 1991 and 1992, IBM racked up the largest
corporate losses in US history up to that time, then managed to survive.
Linux has, in my mind, proven the original mid-1970’s position of CSRC/1127
that Software has to be ‘cheap’, even ‘free’
- because it’s a Commodity and can be ’substituted’ by others.
=================================
1956 - AT&T / IBM Consent decree: 'no computers, no software’
1974 - CACM article, CSRC/1127 in Software Research, no commercial Software allowed
1984 - AT&T divested, doing commercial Software & Computers
1994 - AT&T Sells Unix
1996 - “Tri-vestiture", Bell Labs sold to Lucent, some staff to AT&T Research.
2005 - SBC buys AT&T, long-lines + 4 baby bells
1985 - MIPS R2000, x2 throughput at same clock speed. Faster than bipolar, CMOS CPU's soon overtook ECL
=================================
Code Critic
John Lions wrote the first, and perhaps only, literary criticism of Unix, sparking one of open source's first legal battles.
Rachel Chalmers
November 30, 1999
https://www.salon.com/test2/1999/11/30/lions_2/
"By the time the seventh edition system came out, the company had begun to worry more about the intellectual property issues and trade secrets and so forth," Ritchie explains.
"There was somewhat of a struggle between us in the research group who saw the benefit in having the system readily available,
and the Unix Support Group ...
Even though in the 1970s Unix was not a commercial proposition,
USG and the lawyers were cautious.
At any rate, we in research lost the argument."
This awkward situation lasted nearly 20 years.
Even as USG became Unix System Laboratories (USL) and was half divested to Novell,
which in turn sold it to the Santa Cruz Operation (SCO),
Ritchie never lost hope that the Lions books could see the light of day.
He leaned on company after company.
"This was, after all, 25-plus-year-old material, but when they would ask their lawyers,
they would say that they couldn't see any harm at first glance,
but there was a sort of 'but you never know ...' attitude, and they never got the courage to go ahead," he explains.
Finally, at SCO [ by July 1996 ], Ritchie hit paydirt.
He already knew Mike Tilson, an SCO executive.
With the help of his fellow Unix gurus Peter Salus and Berny Goodheart, Ritchie brought pressure to bear.
"Mike himself drafted a 'grant of permission' letter," says Ritchie,
"'to save the legal people from doing the work!'"
Research, at last, had won.
=================================
Tom Wolfe, Esquire, 1983, on Bob Noyce:
The Tinkerings of Robert Noyce | Esquire | DECEMBER 1983.webarchive
http://classic.esquire.com/the-tinkerings-of-robert-noyce/
=================================
Special Places
IEEE Spectrum Magazine
May 2000
Robert W. Lucky (Bob Lucky)
https://web.archive.org/web/20030308074213/http://www.boblucky.com/reflect/…
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=803583
Why does place matter? Why does it matter where we live and work today when the world is so connected that we're never out of touch with people or information?
The problem is, even if they get da Vinci, it won't work.
There's just something special about Florence, and it doesn't travel.
Just as in this century many places have tried to build their own Silicon Valley.
While there have been some successes in
Boston,
Research Triangle Park, Austin, and
Cambridge in the U.K.,
to name a few significant places, most attempts have paled in comparison to the Bay Area prototype.
In the mid-1960s New Jersey brought in Fred Terman, the Dean at Stanford and architect of Silicon Valley, and commissioned him to start a Silicon Valley East.
[ Terman retired from Stanford in 1965 ]
=================================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
[TUHS to Bcc]
On Wed, Feb 1, 2023 at 3:23 PM Douglas McIlroy
<douglas.mcilroy(a)dartmouth.edu> wrote:
> > In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes
>
> I don't know any Unix examples, but DTSS (Dartmouth Time Sharing
> System) "communication files" were used for the purpose. For a fuller
> story see https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf
Interesting. This is now being discussed on the Multicians list (which
had a DTSS emulator! Done for use by SIPB). Warren Montgomery
discussed communication files under DTSS for precisely this kind of
thing; apparently he had a chess program he may have run under them.
Barry Margolin responded that he wrote a multiuser chat program using
them on the DTSS system at Grumman.
Margolin suggests a modern Unix-ish analogue may be pseudo-ttys, which
came up here earlier (I responded pointing to your wonderful note
linked above).
> > This is probably a bit more Plan 9-ish than UNIX-ish
>
> So it was with communication files, which allowed IO system calls to
> be handled in userland. Unfortunately, communication files were
> complicated and turned out to be an evolutionary dead end. They had
> had no ancestral connection to successors like pipes and Plan 9.
> Equally unfortunately, 9P, the very foundation of Plan 9, seems to
> have met the same fate.
I wonder if there was an analogy to multiplexed files, which I admit
to knowing very little about. A cursory glance at mpx(2) on 7th
Edition at least suggests some surface similarities.
- Dan C.
Tom,
it stands up very well, 1977 to 2023.
> On 5 Aug 2023, at 13:46, Tom Lyon <pugs78(a)gmail.com> wrote:
>
> Here's my summer activity report on my work porting V6 code to the Interdata, working closely under Steve and Dennis. I left before the nasty bug was discovered. (I think).
> https://akapugsblog.files.wordpress.com/2018/05/inter-unix_portability.pdf
--
So I've been studying the Interdata 32-bit machines a bit more closely lately, and I'm wondering if someone who was there at the time has the scoop on what happened to them. The Wikipedia article gives some good info on their history, but not really anything about, say, failed follow-ons that tanked their market, significant reasons for avoidance, or anything like that. I also find myself wondering why Bell didn't do anything with the Interdata work after it springboarded further portability efforts, while several other little streams, even unreleased ones like the S/370 and 8086 ports, seemed to stick around internally for longer. Were Interdata machines problematic in some sort of way, or was it merely fate, with more popular minis from DEC simply squeezing them out of the market? Part of my interest too comes from what influence the legacy of Interdata may have had on Perkin-Elmer, as I've worked with Perkin-Elmer analytical equipment several times in the chemistry side of my career and am curious whether I was ever operating some vague descendant of Interdata designs in the embedded controllers in, say, one of my mass specs back in the day.
- Matt G.
P.S. I'm looking for more general history, hence COFF, but towards a more UNIXy end: if there's any sort of missing scoop on the life and times of the Bell Interdata 8/32 port, for instance whether it ever saw literally any production use in the System or was only ever on the machines being used for the portability work, I'm sure that could benefit from a CC to TUHS if that history winds up in this thread.
So as I was searching around for literature I came across someone selling a 2 volume set of Inferno manuals. I had never seen print manuals so decided to scoop them up, thinking they'd fit nicely with a 9front manual I just ordered too.
That said, I hate to just grab a book for it to sit on my shelf, so I want to explore Inferno once I've got the literature in hand. Does anyone here know the best way of VMing Inferno these days? Can I just expect to find a copy of distribution media somewhere that'll work in VirtualBox or QEMU, or is there some particular "path of righteousness" I need to follow to successfully land in an Inferno environment?
Second, and I hope I don't spin up a debate with this, but is this something worth investing good time in getting familiar with? I certainly don't hear as much about Inferno as I do about Plan 9, but it does feel like one of the little puzzle pieces in the bigger picture of systems theory and development. Have there been any significant Inferno-adjacent developments or use cases in recent (past 10-15) years?
- Matt G.
I don't know if a thousand users ever logged in there at one time, but
they do tend to have a lot of simultaneous logins.
On Mon, Mar 13, 2023 at 6:16 PM Peter Pentchev <roam(a)ringlet.net> wrote:
>
> On Wed, Mar 08, 2023 at 02:52:43PM -0500, Dan Cross wrote:
> > [bumping to COFF]
> >
> > On Wed, Mar 8, 2023 at 2:05 PM ron minnich <rminnich(a)gmail.com> wrote:
> > > The wheel of reincarnation discussion got me to thinking:
> [snip]
> > > The evolution of platforms like laptops to becoming full distributed systems continues.
> > > The wheel of reincarnation spins counter clockwise -- or sideways?
> >
> > About a year ago, I ran across an email written a decade or more prior
> > on some mainframe mailing list where someone wrote something like,
> > "wow! It just occurred to me that my Athlon machine is faster than the
> > ES/3090-600J I used in 1989!" Some guy responded angrily, rising to
> > the wounded honor of IBM, raving about how preposterous this was
> > because the mainframe could handle a thousand users logged in at one
> > time and there's no way this Linux box could ever do that.
> [snip]
> > For that matter, a
> > thousand users probably _could_ telnet into the Athlon system. With
> > telnet in line mode, it'd probably even be decently responsive.
>
> sdf.org (formerly sdf.lonestar.org) comes to mind...
>
> G'luck,
> Peter
>
> --
> Peter Pentchev roam(a)ringlet.net roam(a)debian.org pp(a)storpool.com
> PGP key: http://people.FreeBSD.org/~roam/roam.key.asc
> Key fingerprint 2EE7 A7A5 17FC 124C F115 C354 651E EFB0 2527 DF13
Howdy folks, I wanted to get some thoughts and experiences with regard to what sort of EOL handling of mainframe/mini hardware was typical. Part of this is to inform what to look for, and where, when it comes to old hardware.
So the details may differ with the era, but what I'm curious about is this: back in the day, when a mainframe or mini was essentially decommissioned, what was more likely to be done with the central unit, and with the peripherals if they weren't forward compatible with that user's new system?
Were machines typically offloaded for money to smaller ops, or was it more common to simply dispose of/recycle the components? As a more pointed example, if you worked in a shop that had IBM S/3x0, PDPs, or larger 3B hardware, what was the protocol for getting rid of those machines when they fell out of use? Were most machines "disposed of" whole, or was it very typical to part them out first, meaning most machines that reached EOL simply don't exist anymore: they weren't moved on as a unit; rather, they're any number of independent parts floating around anywhere from individual collections to slowly decaying in a landfill somewhere.
My fear is that the latter was more common, as that's what I've seen in my lab days; old instrumentation wasn't just auctioned off or otherwise gotten rid of complete. We'd typically part the things out, resulting in the chassis and some of the paneling going in one waste stream, unsalvageable parts like burnt-out boards going in another, and anything reusable like ribbon cables and controller boards being stashed to replace parts on their siblings in the lab. I dunno if this is apples to oranges though, because the main instruments I'm thinking of, the HP/Agilent 5890, 6890, and 7890 series, had different lifespan expectations than computing systems had, and share a lot more of the under-the-hood components like solenoids and gas tubing systems, so that may not be a good comparison; it's just the closest one I have from my own personal experience.
Thoughts?
- Matt G.
> From: Greg 'groggy' Lehey
> Interdata had instruction sets that were close to the IBM instruction
> set, but my recollection was that they were different enough that IBM
> software wouldn't run on them.
Bitsavers doesn't have a wealth of Interdata documentation, but there is some:
http://bitsavers.org/pdf/interdata/32bit/
Someone who's familiar with the 360 instruction set should be able to look
at e.g.:
http://bitsavers.org/pdf/interdata/32bit/8-32/8-32_Brochure_1977.pdf
and see how compatible it is.
Noel
Hi all, I'm looking for a 16-bit big-endian Unix-like development
environment with a reasonably new C compiler and a symbolic debugger.
And/or, a libc with source code suitable for a 16-bit big-endian environment.
Rationale: I've designed and built a 6809 single board computer (SBC) with
8K ROM, 2K I/O space for a UART and block storage, and 56K RAM. It's a
big-endian platform and the C compiler has 16-bit ints by default. I've
been able to take the filesystem code from XV6 and get it to fit into
the ROM with a hundred bytes spare. The available Unix-like system calls are:
dup, read, write, close, fstat, link,
unlink, open, mkdir, chdir, exit, spawn
and the spawn is like exec(). There is no fork() and no multitasking.
I've got many of the existing XV6 userland programs to run along with a
small shell that can do basic redirection.
Now I'm trying to bring up a libc on the platform. I'm currently trying
the libc from FUZIX but I'm not wedded to it, so alternative libc
recommendations are most welcome.
There's no debugging environment on this SBC. I do have a simulator that
matches the hardware, but I can only breakpoint at addresses and single-step
instructions. It makes debugging pretty tedious! So I was thinking of
using an existing Unix-like platform to bring up the libc. That way, if
there are bugs, I can use an existing symbolic debugger on the platform.
I could use 2.11BSD as the dev platform but it's little-endian; I'm worried
that there might be endian issues that stop me finding bugs that will arise
on the 16-bit 6809 platform.
As for which libc: I looked at the 2.11BSD libc/include and there's so
much stuff I don't need (networking etc.) that it's hard to winnow down
to just what I need. The FUZIX libc looks good. I just came across Elks
and that might be a possible alternative. Are there others to consider?
Anyway, thanks in advance for your suggestions.
Cheers, Warren
References:
XV6: https://github.com/mit-pdos/xv6-public
FUZIX: https://github.com/EtchedPixels/FUZIX/tree/master/Library
Elks: https://github.com/jbruchon/elks/tree/master/libc
Good afternoon or whichever time of day you find yourself in. I come to you today in my search for some non-UNIX materials for a change. The following have been on my search list lately in no particular priority:
- Standards:
COBOL 68
C 89
C++ 98
Minimal BASIC 78
Full BASIC 87
SQL (any rev)
ISO 9660 (CD FS, any rev)
ISO 5807 (Flow Charts, any rev)
- Manuals:
PDP-11/20 Processor Handbook
(EAE manual too if it's separate)
WE32000 and family literature
GE/Honeywell mainframe and G(E)COS documents
The IBM 704 FORTRAN Manual (The -original- FORTRAN book)
The Codasyl COBOL Report (The -original- COBOL book)
Any Interdata 7 or 8/32 documentation (or other Interdata stuff really)
The TI TMS9918 manual
The Philips "Red Book" CDDA standard
BSP 502-503-101 (the 2500 and 2554 reference), either on its own or, if it's part of one, the Bell System Practices issue containing it
If any of these are burning a hole in your bookshelf and you'd like to sell them off, just let me know, I'll take em off your hands and make it worth your while. I'm not hurting for any of them, but rather, I see an opportunity to get things on my shelf that may facilitate expansion of some of my existing projects in new directions in the coming years.
Also, I'm in full understanding of the rarity of some of these materials and would like to stress my interest in quality reference material. Of course, that's not to dismiss legitimate valuation, rather, simply to inform that I intend to turn no profit from these materials, and wherever they wind up after their (hopefully very long) tenure in my library will likely have happened via donation.
- Matt G.
P.S. On that last note, does anyone know if a CHM registration of an artifact[1] means they truly have a physical object in a physical archive somewhere? That's one of the sorts of things I intend to look into in however many decades fate gives me til I need to start thinking about it.
[1] - https://www.computerhistory.org/collections/catalog/102721523
> From: Matt G.
> PDP-11/20 Processor Handbook
> (EAE manual too if it's separate)
Yes and no. There are separate manuals for the EAE (links here:
https://gunkies.org/wiki/KE11-A_Extended_Arithmetic_Element
https://gunkies.org/wiki/KE11-B_Extended_Arithmetic_Element
the -B is the same to program as the -A; its implementation is just a single
board, though) but the -11/20 processor handbook (the second version; the one
dated 1972) does have a chapter (Chapter 8; Part I) on the EAE.
(For no reason I can understand, neither the -11/05 nor the -11/04 processor
handbook covers the EAE, even though neither one has the EIS, and if you need
multiply/etc in hardware on either one, the EAE is your only choice).
Noel
Hi,
I'd like some thoughts ~> input on extended regular expressions used
with grep, specifically GNU grep -e / egrep.
What are the pros / cons to creating extended regular expressions like
the following:
^\w{3}
vs:
^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)
Or:
[ :[:digit:]]{11}
vs:
( 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31)
(0|1|2)[[:digit:]]:(0|1|2|3|4|5)[[:digit:]]:(0|1|2|3|4|5)[[:digit:]]
I'm currently eliding the 61st (60) second, the 32nd day, and February
having fewer days, all for simplicity.
For matching patterns like the following in log files?
Mar  2 03:23:38
I'm working on organically training logcheck to match known good log
entries. So I'm *DEEP* in the bowels of extended regular expressions
(GNU egrep) that runs over all logs hourly. As such, I'm interested in
making sure that my REs are both efficient and accurate or at least not
WILDLY badly structured. The pedantic part of me wants to avoid
wildcard type matches (\w), even if they are bounded (\w{3}), unless it
truly is for unpredictable text.
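For what it's worth, one way I'd compare the two styles against real logs is just to count matches with both forms (a sketch assuming GNU grep; the tightened day/hour/minute ranges and the log path are mine, not from logcheck):

# explicit alternation form
grep -Ec '^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) ( [1-9]|[12][0-9]|3[01]) ([01][0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]' /var/log/syslog
# looser character-class form
grep -Ec '^\w{3} [ :[:digit:]]{11}' /var/log/syslog

If both report the same count, the looser form isn't letting anything extra through on that particular log.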
I'd appreciate any feedback and recommendations from people who have
been using and / or optimizing (extended) regular expressions for longer
than I have been using them.
Thank you for your time and input.
--
Grant. . . .
unix || die
What struck me reading this is the estimated price (~$10K) to build an Alto; elsewhere I've seen $12K and 80 built in the first run.
[ a note elsewhere says $4,000 on 128KB of RAM. 4k-bit or 16-kbit chips? unsure ]
I believe the first "PDP-11” bought by 127 at Bell Labs was ~$65k fully configured (Doug M could confirm or not), although the disk drive took some time to come.
Later, that model was called PDP-11/20.
Why the price difference?
PARC was doing DIY - it’s parts only, not a commercial production run with wages, space, tooling & R+D costs and marketing/sales to be amortised,
with an 80%+ Gross Margin required, as per DEC.
Why didn’t Bell Labs build their own “Personal Computer” like PARC?
They had the need, the vision, the knowledge & expertise.
I’d suggest three reasons:
- The Consent Decree. AT&T couldn’t get into the Computer Market, only able to build computers for internal use.
They didn’t need GUI PC’s to run telephone exchanges.
- Bell Labs management:
they’d been burned by MULTICS and, rightly, refused the CSRC a PDP-10 in 1969.
- Nobody ’needed’ to save money building another DIY low-performance device.
A home-grown supercomputer maybe :)
It’s an accident of history that PARC could’ve, but didn’t, port Unix to the Alto in 1974.
By V7 in 1978, my guess is it was too late because both sides had locked in ‘commercial’ positions and for PARC to rewrite code wasn’t justified: “If it ain’t Broke”…
Porting Unix before 1974 was possible:
PARC are sure to have had close contact with UC Berkeley and the hardware/software groups there.
Then 10 years later both Apple and Microsoft re-invent Graphical computing using commodity VLSI cpu’s.
Which was exactly the technology innovation path planned by Alan Kay in 1970:
build today what’ll be cheap hardware in 10 years and figure out how to use it.
Ironic that in 1994 there was the big Apple v Microsoft lawsuit over GUI’s & who owned what I.P.
Xerox woke up midway through and filed their own infringement suit, and lost.
[ dismissed because approx they'd waited too long ]
<https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Microsoft_Corp.>
==============
PDF:
<http://bwl-website.s3-website.us-east-2.amazonaws.com/38a-WhyAlto/Acrobat.p…>
Other formats:
<http://bwl-website.s3-website.us-east-2.amazonaws.com/38a-WhyAlto/Abstract.…>
==============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
> From: steve jenkin
> What struck me reading this is the estimated price (~$10K) to build an
> .. [ a note elsewhere says $4,000 on 128KB of RAM. 4k-bit or 16-kbit
> chips? unsure ]
16K (4116) - at least, in the Alto II I have images of. Maxc used 1103's
(1K), but they were a few years before the Alto.
> I believe the first "PDP-11" bought by 127 at Bell Labs was ~$65k fully
> configured
I got out my August 1971 -11/20 price sheet, and that sounds about right. The
machine had "24K bytes of core memory .. and a disk with 1K blocks (512K
bytes ... a single .5 MB disk .. every few hours' work by the typists meant
pushing out more information onto DECtape, because of the very small disk."
("The Evolution of the Unix Time-sharing System"):
11,450 Basic machine CPU + 8KB memory
6,000 16KB memory (maybe 7,000, if MM11-F)
4,000 TC11 DECtape controller
4,700 TU56 DECtape transport
5,000 RF11 controller
9,000 RS11 drive
3,900 PC11 paper tape
-------
44,050
(Although Bell probably got a discount?)
The machine later had an RK03:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V1/u0.s
but that wasn't there initially (they are 2.4MB, larger than the stated
disk); it cost 5,900 (RK11 controller) + 9,000 (RK03 drive).
Also, no signs of the KE11-A in the V1 code (1,900 when it eventually
appeared). The machine had extra serial lines (on DC11's), but they weren't
much; 750 per line.
> Why the price difference?
Memory was part of it. The -11/20 used core; $9,000 for the memory alone.
Also, the machine was a generation older, the first DEC machine built out of
IC's - all SSI. (It wasn't micro-coded; rather, a state machine. Cheap PROM
and SRAM didn't exist yet.)
Noel
So this evening I've been tinkering with a WECo 2500 I've been using for playing with telecom stuff, admiring the quality of the DTMF module, and it got me thinking: gee, this same craftsmanship would make for some very nice arcade buttons. That then had me pondering the breadth of the Bell System's capabilities and the unique needs of the video game industry in the early '80s.
In many respects, the combination of Western Electric and Bell Laboratories could've been a hotbed of video game console and software development, what with WECo's capability to produce hardware such as coin slots, buttons, wiring harnesses for all sorts of equipment, etc. and then of course the software prowess of the Labs.
Was there, to anyone here's knowledge, any serious consideration of this market by Bell? The famous story of UNIX's origins includes Space Travel, and from the very first manual, games of various kinds have accompanied UNIX wherever it goes. It seems that, out of most companies, the Bell System would've been very well poised, what with their own CPU architecture and other fab operations, manufacturing and distribution chains, and so on. There's a looooot of R&D that companies such as Atari and Nintendo had to engage in that the Bell System had years if not decades of expertise in. Would anti-trust stuff have come into play in that regard? Bell couldn't compete in the computer market, and I suppose it would depend on the legal definitions applicable to video game hardware and software at the time.
In any case, the undercurrent here is that the 2500 is a fine telephone; if the same minds behind some of this WECo hardware had gone into video gaming, I wonder how different things would've turned out.
- Matt G.