> From: Paul Winalski
> The 1403 attached to S/360/370 via a byte multiplexer channel ...
> The question is, did they have a way to attach the 1403 to any of their
> computer systems?
There's a thing called a DX11:
https://gunkies.org/wiki/DX11-B_System_360/370_Channel_to_PDP-11_Unibus_Int…
which attaches a "selector, multiplexer or block multiplexer channel" to a
UNIBUS machine, which sounds like it could support the "byte multiplexer
channel"?
The DX11 brochure only mentions that it can be "programmed to emulate a 2848,
2703 or 3705 control unit" - i.e. look like a peripheral to an IBM CPU;
whether it could look like an IBM CPU to an IBM peripheral, I don't know.
(I'm too lazy to look at the documentation; it seems to be all there, though.)
Getting from the UNIBUS to the -10, there were off-the-shelf boxes: the DL10
for the KA10 and KI10 CPUs, and a DTE20 on a KL10.
It all probably needed some coding, though.
Noel
There's been a discussion recently on TUHS about the famous IBM 1403
line printer. It's strayed pretty far off-topic for TUHS so I'm
continuing the topic here in COFF.
DEC marketed its PDP-10 computer systems as their solution for
traditional raised-floor commercial data centers, competing directly
with IBM System 360/370. DEC OEMed a lot of data center peripherals
such as card readers/punches, line printers, 9-track magtape drives,
and disk drives for their computers, but their main focus was low cost
vs. heavy duty. Not really suitable for the data center world.
So DEC OEMed several high-end data center peripherals for use on big,
commercial PDP-10 computer systems. For example, the gold standard
for 9-track tape drives in the IBM world was tape drives from Storage
Technology Corporation (STC). DEC designed an IBM selector
channel-to-MASSBUS adapter that allowed one to attach STC tape drives
to a PDP-10. AFAIK this was never offered on the PDP-11, VAX, or any
other of DEC's computer lines. They had similar arrangements for
lookalikes for IBM high-performance disk drives.
Someone on TUHS recalled seeing an IBM 1403 or similar line printer on
a PDP-10 system. The IBM 1403 was certainly the gold standard for
line printers in the IBM world and was arguably the best impact line
printer ever made. It was still highly sought after in the 1970s,
long after the demise of the 1950s-era IBM 1400 computer system it was
designed to be a part of. Anyone considering a PDP-10 data center
solution would ask about line printers and, if they were from the IBM
world, would prefer a 1403.
The 1403 attached to S/360/370 via a byte multiplexer channel, so one
would need an adapter that looked like a byte multiplexer channel on
one end and could attach to one of DEC's controllers at the other end
(something UNIBUS-based, most likely).
We know DEC did this sort of thing for disks and tapes. The question
is, did they have a way to attach the 1403 to any of their computer
systems?
-Paul W.
> the first DEC machine with an IC processor was the -11/20, in 1970
Clem has reminded me that the first was the PDP-8/I-L (the second was a
cost-reduced version of the -I), from 1968. The later, and much more common,
PDP-8/E-F-M, were contemporaneous with the -11/20.
Oh well, only two years; doesn't really affect my main point. Just about
'blink and you'll miss them'!
Noel
> From: Bakul Shah
> Now I'd probably call them kernel threads as they don't have a separate
> address space.
Makes sense. One query about stacks, and blocking, there. Do kernel threads,
in general, have per-thread stacks; so that they can block (and later resume
exactly where they were when they blocked)?
That was the thing that, I think, made kernel processes really attractive as
a kernel structuring tool; you get code like this (from V6):
swap(rp->p_addr, a, rp->p_size, B_READ);
mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);
The call to swap() blocks until the I/O operation is complete, whereupon that
call returns, and away one goes. Very clean and simple code.
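A user-space analogy in Python (threads instead of kernel processes; not how V6 actually works, just the shape of the idea): because each thread has its own stack, it can block partway through a function and later pick up exactly where it was, the way swap() does above.

```python
# User-space analogy for the per-thread-stack point above: the worker
# blocks partway through a function (like swap() waiting on I/O) and
# resumes exactly where it was, because its state lives on its own stack.
import threading

io_done = threading.Event()
log = []

def swap_in(name):
    log.append(name + ": start I/O")
    io_done.wait()                   # blocks here; the stack preserves our place
    log.append(name + ": I/O complete, continuing")

t = threading.Thread(target=swap_in, args=("swapper",))
t.start()
io_done.set()                        # the "interrupt handler" signals completion
t.join()
print(log)
```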
Use of a kernel process probably makes the BSD pageout daemon code fairly
straightforward, too (well, as straightforward as anything done by Berzerkly
was :-).
Interestingly, other early systems don't seem to have thought of this
structuring technique. I assumed that Multics used a similar technique to
write 'dirty' pages out, to maintain a free list. However, when I looked in
the Multics Storage System Program Logic Manual:
http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageS…
Multics just writes dirty pages as part of the page fault code: "This
starting of writes is performed by the subroutine claim_mod_core in
page_fault. This subroutine is invoked at the end of every page fault." (pg.
8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to
complete dealing with a page fault.)
It makes sense to have a kernel process do this; having the page fault code
do it just makes that code more complicated. (The code in V6 to swap
processes in and out is beautifully simple.) But it's apparently only obvious
in retrospect (like many brilliant ideas :-).
Noel
So Lars Brinkhoff and I were chatting about daemons:
https://gunkies.org/wiki/Talk:Daemon
and I pointed out that in addition to 'standard' daemons (e.g. the printer
spooler daemon, email daemon, etc, etc) there are some other things that are
daemon-like, but are fundamentally different in major ways (explained later
below). I dubbed them 'system processes', but I'm wondering if anyone knows if
there is a standard term for them? (Or, failing that, if they have a
suggestion for a better name?)
Early UNIX is one of the first systems to have one (process 0, the "scheduling (swapping)
process"), but the CACM "The UNIX Time-Sharing System" paper:
https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
doesn't even mention it, so no guidance there. Berkeley UNIX also has one,
mentioned in "Design and Implementation of the Berkeley Virtual Memory
Extensions to the UNIX Operating System":
http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
where it is called the "pageout daemon". ("During system initialization, just
before the init process is created, the bootstrapping code creates process 2
which is known as the pageout daemon. It is this process that .. writ[es]
back modified pages. The process leaves its normal dormant state upon being
waken up due to the memory free list size dropping below an upper
threshold.") However, I think there are good reasons to dis-favour the term
'daemon' for them.
For one thing, typical daemons look (to the kernel) just like 'normal'
processes: their object code is kept in a file, and is loaded into the
daemon's process when it starts, using the same mechanism that 'normal'
processes use for loading their code; daemons are often started long after
the kernel itself is started, and there is usually not a special mechanism in
the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init'
process, not the kernel); daemons interact with the kernel through system
calls, just like 'ordinary' processes; the daemon's process runs in 'user'
CPU mode (using the same standard memory mapping mechanisms as any other
user process).
'System processes' do none of these things: their object code is linked into
the monolithic kernel, and is thus loaded by the bootstrap; the kernel
contains special provision for starting the system process, which start as
the kernel is starting; they don't do system calls, just call kernel routines
directly; they run in kernel mode, using the same memory mapping as the
kernel itself; etc, etc.
Another important point is that system processes are highly intertwined with
the operation of the kernel; without the system process(es) operating
correctly, the operation of the system will quickly grind to a halt. The loss
of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
system will keep running indefinitely. Not so for the swapping process, or
the pageout daemon.
Anyway, is there a standard term for these things? If not, a better name than
'system process'?
Noel
For the benefit of Old Farts around here, I'd like to share the good
word that an ITS 138 listing from 1967 has been discovered. A group of
volunteers is busy transcribing the photographed pages to text.
Information and link to the data:
https://gunkies.org/wiki/ITS_138
This version is basically what ITS first looked like when it went into
operation at the MIT AI lab. It's deliciously arcane and primitive.
Mass storage is on four DECtape drives, no disk here. User stations
consist of five teletypes and four GE Datanet 760 CRT consoles (46
columns, 26 lines). The number of system calls is a tiny subset of what
would be available later.
There are more listings from 1967-1969 for DDT, TECO, LISP, etc. Since
they are fan-fold listings, scanning is a bit tricky, so a more
labor-intensive photographing method is used.
Hello everyone, I was wondering if anyone is aware of any surviving technical diagrams/schematics for the WECo 321EB or WECo 321DS WE32x00 development systems? Bitsavers has an AT&T Data Book from 1987 detailing pin maps, registers, etc. of 32xxx family ICs and then another earlier manual from 1985 that seems to be more focused on a technical overview of the CPU specifically. Both have photographs and surface level block diagrams, but nothing showing individual connections, which bus leads went where, etc. While the descriptions should be enough, diagrams are always helpful.
In any case, I've recently ordered a 32100 CPU and 32101 MMU I saw sitting on eBay to see what I can do with some breadboarding and some DRAM/DMA controllers from other vendors, was thinking of referring to any available design schematics of the 321 development stuff for pointers on integrations. Either way, i'm glad the data books on the hardware have been preserved, that gives me a leg up.
Thanks for any insights!
- Matt G.
Good day everyone, I thought I'd share a new project I've been working on since it is somewhat relevant to old and obscure computing stuff that hasn't gotten a lot of light shed on it.
https://gitlab.com/segaloco/doki
After the link is an in-progress disassembly of Yume Kojo: Doki Doki Panic for the Famicom Disk System, known better in the west as the engine basis for Super Mario Bros. 2 for the NES (the one with 4 playable characters, pick-and-throw radishes, etc.)
What inspired me to start on this project is the Famicom Disk System is painfully under-documented, and what is out there is pretty patchy. Unlike with its parent console, no 1st party development documentation has been archived concerning the Disk System, so all that is known about its programming interfaces has been determined from disassemblies of boot ROMs and bits and pieces of titles over the years. The system is just that, a disk drive that connects to the Famicom via a special adapter that provides some RAM, additional sound functionality, and some handling for matters typically controlled by the cartridge (background scroll-plane mirroring and saving particularly.) The physical disk format is based on Mitsumi's QuickDisk format, albeit with the casing extended in one dimension as to provide physical security grooves that, if not present, will prevent the inserted disk from booting. The hardware includes a permanently-resident boot ROM which maps to 0xE000-0xFFFF (and therefore provides the 6502 vectors). This boot ROM in turn loads any files from the disk that match a specified pattern in the header to header-defined memory ranges and then acts on a secondary vector table at 0xDFFA (really 0xDFF6, the disk system allows three separate NMI vectors which are selected from by a device register.) The whole of the standard Famicom programming environment applies, although the Disk System adds an additional bank of device registers in a reserved memory area and exposes a number of "syscalls" (really just endpoints in the 0xE000-0xFFFF range, it's unknown at present to what degree these entries/addresses were documented to developers.)
I had to solve a few interesting challenges in this process since this particular area gets so little attention. First, I put together a utility and supporting library to extrapolate info from the disk format. Luckily the header has been (mostly) documented, and I was able to document a few consistencies between disks to fill in a few of the parts that weren't as well documented. In any case, the results of that exercise are here: https://gitlab.com/segaloco/fdschunk. One of the more interesting matters is that the disk creation and write dates are stored not only in BCD, but the year is not always Gregorian. Rather, many titles reflect instead the Japanese period at the time the title was released. For instance, the Doki Doki Panic image I'm using as a reference is dated (YY/MM/DD) "61/11/26", which is preposterous - the Famicom was launched in 1983 - but applying this knowledge of the Showa period, the date is really "86/11/26", which makes much more sense. This is one of those things I run into studying Japanese computing history from time to time; I'm sure the same applies to earlier computing in other non-western countries. We're actually headed for a "2025-problem" with this time-keeping, as that is when the Showa calendar rolls over. No ROMs have been recovered from disk writer kiosks employed by Nintendo in the 80s, so it is unknown what the official hardware that applies these timestamps does when that counter rolls over. I've just made the assumption that it should roll back to 00, but there is no present way to prove this. The 6502 implementation in the Famicom (the Ricoh 2A03) omitted the 6502 BCD mode, so this was likely handled either in software or perhaps a microcontroller ROM down inside the disk drives themselves.
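The date conversion described above is simple enough to sketch. A minimal Python sketch of my reading of the scheme (Showa 1 = 1926, so Showa N = 1925 + N; the function names are mine, not from any official documentation):

```python
# Sketch of the FDS date handling described above: the year byte is BCD,
# counted in the Showa era (Showa 1 = 1926). This is my reading of the
# on-disk convention, not anything from official Nintendo documentation.
def bcd_to_int(b):
    return (b >> 4) * 10 + (b & 0x0F)

def showa_bcd_year_to_gregorian(b):
    return 1925 + bcd_to_int(b)   # Showa N = 1925 + N

# The "61/11/26" stamp on the Doki Doki Panic image:
print(showa_bcd_year_to_gregorian(0x61))  # → 1986
```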
I then had to solve the complementary problem: how do I put a disk image back together according to specs that aren't currently accessible? Well, to do that, I first chopped the headers off of every first-party Nintendo image I had in my archive and compared them in a table. I divided them into two groups: pristine images that represent original pressings in a Nintendo facility, and "dirty" images that represent a rewrite of a disk at one of the disk kiosks (mind you, Nintendo distributed games both ways, you could buy a packaged copy or you could bring a rewritable disk to a kiosk and "download" a new game.) My criterion for categorization was whether the disk create and modify times were equal or not. This allowed me to get a pretty good picture of what headers getting pumped out of the factory look like, and how they change when the disk is touched by a writer kiosk. I then took the former configuration and wrote a couple tools to consume a very spartan description of the variable pieces and produce the necessary images: https://gitlab.com/segaloco/misc/-/tree/master/fds. These tools, bintofdf and fdtc, apply a single file header to a disk file and create a "superblock" for a disk side respectively. I don't know what the formal terms are, they may be lost to time, but superblock hopefully gets the point across, albeit it's not an exact analog to UNIX filesystems. Frankly I can't find anything regarding what filesystem this might be based on, if at all, or if it is an entirely Nintendo-derived format. In any case, luckily the header describing a file is self-contained on that file, and then the superblock only needs to know how many files are present, so the two steps can be done independently. The result is a disk image, stamped with the current Showa BCD date, that is capable of booting on the system. The only thing I don't add that "pure" disks contain is the CRCs of the files.
On a physical disk, these header blocks also contain CRCs of the data they describe, these, by convention, are omitted from disk dumps. I'm actually not entirely sure why, but I imagine emulator writers just omit the CRC check as well, so it doesn't matter to folks just looking to play a game.
Finally, there's the matter of disparate files which may or may not necessarily be sitting in memory at runtime. Luckily the linker script setup in cc65 (the compiler suite I'm using) is pretty robust, and just like my Dragon Quest disassembly (which is made up of swappable banks) I was able to use the linker system to produce all of the necessary files in isolation, rather than having to get creative with orgs and compilation order to cobble something together that worked. This allows the code to be broken down into its logical structure rather than just treating a whole disk side as if it was one big binary with .org commands all over the place.
Anywho, I don't intend on a rolling update to this email or anything, but if this is something that piques anyone's interest and you'd like to know more, feel free to shoot me a direct reply. I'd be especially interested in any stories or info regarding Mitsumi QuickDisk, as one possibility is that Nintendo's format is derived from something of their own, with reserved/undefined fields redefined for Nintendo's purposes. That said, it's just a magnetic disk, I would be surprised if a single filesystem was enforced in all implementations.
Thanks for following along!
- Matt G.
P.S. As always contributions to anything I'm working on are welcome and encouraged, so if you have any input and have a GitLab account, feel free to open an issue, fork and raise a PR, etc.
> From: Dan Cross
> This is long, but very interesting: https://spectrum.ieee.org/xerox-parc
That is _very_ good, and I too recommend it.
Irritatingly, for such an otherwise-excellent piece, it contains two glaring,
minor errors: "information-processing techniques office" should be
'Information Processing Techniques Office' (its formal name; it's not a
description); "the 1,103 dynamic memory chips used in the MAXC design" -
that's the Intel 1103 chip.
> Markov's book, "What the Dormouse Said" ... goes into great detail
> about the interplay between Engelbart's group at SRI and PARC. It's a
> very interesting read; highly recommended.
It is a good book; it goes a long way into explaining why the now-dominant form
of computer user experience appeared on the West coast, and not the East.
One big gripe about it; it doesn't give enough space to Licklider, who more
than anyone had the idea that computers were a tool for _all_ information
(for everyone, from all walks of life), not just number crunching (for
scientists and engineers). Everyone and everything in Dormouse is a
descendant of his. Still, we have Mitchell Waldrop's "Dream Machine", which
does an excellent job of telling his story.
(Personal note: I am sad and ashamed to admit that for several years I had
the office literally right next door to his - and I had no idea who he
was! This is kind of like a young physicist having the office right next
door to Einstein, and not knowing who _he_ was! I can only say that the
senior people in my group didn't make much of Lick; which didn't help.)
Still, get "Dream Machine".
Noel
> From: Larry McVoy
> And the mouse unless my boomer memory fails me.
I think it might have; I'm pretty sure the first mice were done by
Engelbart's group at ARC (but I'm too lazy to check). ISTR that they were
used in the MOAD.
PARC's contribution to mice was the first decent mouse. I saw an ARC mouse at
MIT (before we got our Altos), and it was both large, and not smooth to use;
it was a medium-sized box (still one hand, though) with two large wheels
(with axes 90 degrees apart), so moving it sideways, you had to drag the
up/down wheel sideways (and vice versa).
PARC's design (the inventor is known; I've forgotten his name) with the large
ball bearing, rotation of which was detected by two sensors, was _much_
better, and remained the standard until the invention of the optical mouse
(which was superior because the ball mouse picked up dirt, and had to be
cleaned out regularly).
PARC's other big contribution was the whole network-centric computing model,
with servers and workstations (the Alto). Hints of both of those existed
before, but PARC's unified implementation of both (and in a way that made
them cheap enough to deploy them widely) was a huge jump forward.
Although 'personal computers' had a long (if now poorly remembered) history
at that point (including the LINC, and ARC's station), the Alto showed what
could be done when you added a bit-mapped display to which the CPU had direct
access, and deployed a group of them in a network/server environment; having
so much computing power available, on an individual basis, that you could
'light your cigar with computes' radically changed everything.
Noel
An Old Farts Question, but answers unrestricted :)
In the late 1990s I inherited a web hosting site running a number of 300MHz SPARC SUNs.
Probably 32-bit, didn’t notice then :)
Some were multi-CPU, with asymmetric memory [non-uniform memory access (CC-NUMA)].
We had RAID-5 on a few, probably a hardware controller with Fibre Channel SCSI disks.
LAN ports 100Mbps, IIRC. Don’t think we had 1Gbps switches.
Can’t recall how much RAM or the size of the RAID-5 volume.
I managed to borrow from SUN a couple of drives for 2-3 months & filled all the drive bays for ‘busy time'.
With 300MB drives, at most we had a few GB.
Don’t know the cost of the original hardware - high six or seven figures.
A single additional board with extra CPU’s & DRAM for one machine was A$250k, IIRC.
TB storage & zero ’seek & latency’ with SSD are now cheap and plentiful,
even using “All Flash” Enterprise Storage & SAN’s.
Storage system performance is now 1000x or more, even for a cheap M.2 SSD.
Pre-2000, a 'large' RAID was measured in GB.
Where did all this new ‘important’ data come from?
Raw CPU speed was once the Prime System Metric, based on an assumption of ‘balanced’ systems.
IO performance and Memory size needed to match the CPU throughput for a desired workload,
not be the “Rate Limiting Step”, because CPU’s were very expensive and their capacity couldn’t be ‘wasted’.
I looked at specs/ benchmarks of the latest R-Pi 5 and it might be ~10,000x cheaper than the SUN machines
while maybe 10x faster.
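Putting those two rough numbers together (both are my guesses, so this is illustrative arithmetic only), price-performance improves by their product:

```python
# Rough arithmetic on the estimates above: if the R-Pi 5 is ~10,000x
# cheaper and ~10x faster than the old SUNs, its price-performance is
# better by the product of the two ratios. Both inputs are guesses.
cost_ratio = 10_000                  # old cost / new cost
speed_ratio = 10                     # new speed / old speed
price_performance_gain = cost_ratio * speed_ratio
print(price_performance_gain)        # → 100000, i.e. ~100,000x
```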
I never knew the webpages/ second my machines provided,
I had to focus on Application throughput & optimising that :-/
I was wondering if anyone on-list has tracked the Cost/ Performance of systems over the last 25 years.
With Unix / Linux, we really can do “Apples & Apples” comparisons now.
I haven’t done the obvious Internet searches, any comments & pointers appreciated.
============
Raspberry Pi 5 revealed, and it should satisfy your need for speed
No longer super-cheap, but boasts better graphics and swifter storage
<https://www.theregister.com/2023/09/28/raspberry_pi_5_revealed/>
~$150 + PSU & case, cooler.
Raspberry Pi 5 | Review, Performance & Benchmarks
<https://core-electronics.com.au/guides/raspberry-pi/raspberry-pi-5-review-p…>
Benchmark Table
<https://core-electronics.com.au/media/wysiwyg/tutorials/Jaryd/pi-les-go/Ben…>
[ the IO performance is probably to SD-Card ]
64-bit, 4-core, 2.4GHz,
1GB / 2GB / 4GB / 8GB DRAM
800MHz VideoCore GPU = 2x 4K displays @ 60Hz
single-lane PCI Express 2.0 [ for M.2 SSD ]
2x four-lane 1.5Gbps MIPI transceivers [ camera & display ]
2x USB 3.0 ports,
"RP1 chip reportedly allows for simultaneous 5-gigabit throughput on both the USB 3.0s now."
2x USB 2.0 ports,
1x Gigabit Ethernet,
27W USB-C Power + active cooler (fan)
============
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
Bell Labs Dept 1127 / CSRC qualifies as “Very High Performing” to me (is there a better name?)
Before that, John von Neumann and his team were outstanding in the field.
DARPA, under Licklider then Bob Taylor & Ivan Sutherland and more people I don’t know,
went on to fund game-changing technologies, such as TCP/IP, including over Wireless and Satellite links.
Engelbart’s Augmentation Research Centre was funded by DARPA, producing NLS, the "oN-Line System”.
Taylor founded Xerox PARC, taking many of Engelbart’s team when the ARC closed.
PARC invented so many things, it’s hard to list…
Ethernet, Laser printers, GUI & Windowing System, Object Oriented (? good ?), what became ’the PC'
Evans & Sutherland similarly defined the world of Graphics for many years.
MIPS Inc created the first commercial RISC processor with a small team, pioneering the use of 3rd-party "fabs".
At 200MHz, it was twice the speed of competitors.
Seymour Cray and his small team built (with ECL) the fastest computers for a decade.
I heard that CDC produced a large, slow Operating System, so Cray went and wrote a better one “in a weekend”.
A hardware & software whizz.
I’ve not intended to leave any of the "Hot Spots” out.
While MIT did produce some good stuff, I don’t see it as “very high performing”.
Happy to hear disconfirming opinion.
What does this have to do with now?
Google, AWS and Space-X have redefined the world of computing / space in the last 10-15 years.
They've become High Performing “Hot Spots”, building technology & systems that out-perform everyone else.
Again, not intentionally leaving out people, just what I know without deeply researching.
================
Is this a topic that’s been well addressed? If so, sorry for wasting time.
Otherwise, would appreciate pointers & comments, especially if anyone has created a 'definitive' list,
which would imply some criteria for admission.
================
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
I just realised that in HP-UX there are lots of filesets with various
language messages files and manpages (japanese, korean and chinese).
Normally I don't install these. Therefore I also have no idea what the
format is.
If you're interested I could install a few and mail you a bundle. Just let
me know.
Take care,
uncle rubl
--
The more I learn the better I understand I know nothing.
Subject doesn't roll off the tongue like the song, but hey, I got a random thought today and I'd be interested in experiences. I get where this could be a little...controversial, so no pressure to reply publicly or at all.
Was it firmly held lore from the earliest days to keep the air as clean as possible in computer rooms in the earlier decades of computing? What has me asking is I've seen before photos from years past in R&D and laboratory settings where whoever is being photographed is happy dragging away on a cigarette (or...) whilst surrounded by all sorts of tools, maybe chemicals, who knows. It was a simpler time, and rightfully so those sorts of lax attitudes have diminished for the sake of safety. Still I wonder, was the situation the same in computing as photographic evidence has suggested it is in other such technical settings? Did you ever have to deal with a smoked out server room that *wasn't* because of thermal issues with the machinery?
I hope this question is fine by the way, it's very not tech focused but I also have a lot of interest in the cultural shifts in our communities over the years. Thanks as always folks for being a part of one of the greatest stories still being told!
- Matt G.
Good morning, I am going to pick up a few Japanese computing books to get more familiar with translating technical literature and figured I'd see if anyone here has any of these before I go buying from randos on eBay (Not sure all of these exist in Japanese):
The C Programming Language (Either Edition)
The C++ Programming Language
Any AT&T/USL System V Docs
The UNIX System (Bourne)
John Lions's Commentary
Any Hardware Docs from Japanese shops (Sony, NEC, Sharp, JVC, etc) that have English counterparts (e.g. MSX architecture docs, PC-*8 hardware stuff)
Thanks all!
- Matt G.
P.S. Even less likely but any of the above in Chinese I would be interested in as well. Many Kanji and Hanzi overlap in meaning so while it may be like trying to read Chaucer with no knowledge of antiquated English, translation between Hanzi and English may help the Kanji situation along too.
Warren Toomey via TUHS <tuhs(a)tuhs.org> once said:
> The history of Unix is not just of the technology, but also of the
> people involved, their successes and travails, and the communities
> that they built. Mary Ann referenced a book about her life and
> journey in her e-mail's .sig. She is a very important figure in the
> history of Unix and I think her .sig is entirely relevant to TUHS.
Are you fine with everyone advertising whatever views
and products they want in their signatures or would I
have to be a very important figure?
If I want to say, for example, that the vast amount of
software related to Unix that came out of Berkeley was
so harmful it should have a retroactive Prop 65 label,
would that be okay to have in my signature?
Cheers,
Anthony
The vast amount of software related to Unix that came
out of Berkeley was so harmful it should have a
retroactive Prop 65 label.
[Quote from some person completely unrelated to Unix.]
[A link to buy my children's picture book about the
tenuous connection between Unix and the NATO terror
bombing of Yugoslavia, direct from Jeff Beelzebub's
bookstore.]
End of signature.
Sorry Warren, I couldn't help myself. I was "triggered"
just like Dan Cross, previously in that thread, who could
not stay silent.
"Silence is violence, folx."
- Sus of size (a.k.a. postmodernist Porky Pig)
Sent to COFF as Dan should have done.
Howdy all, I've made some mention of this in the past but it's finally at a point I feel like formally plugging it. For the past few years I've been tinkering on a disassembly of Dragon Quest as originally released on the Famicom (NES) in Japan. The NA release had already gotten this treatment[1] but I wanted to produce something a little more clean and separated: https://gitlab.com/segaloco/dq1disasm
Another reason for this project is I've longed for a Lions's Commentary on UNIX-quality analysis of a scrollplane/sprite-driven console video game, as I generally find analysis of this type of hardware platform lacking outside of random, chance blog posts and incredibly specialized communities where it takes forever to find information you're seeking. In the longer term I intend to produce a commentary on this codebase covering all the details of how it makes the Famicom tick.
So a few of the difficulties involved:
The Famicom is 6502-based with 32768 bytes of ROM address space directly accessible and another 2048 bytes of ROM accessible through the Picture Processing Unit or PPU. The latter is typically graphics ROM being displayed directly by the PPU but data can be copied to CPU memory space as well. To achieve higher sizes, bank swapping is widely employed, with scores of different bank swapping configurations in place. Dragon Quest presents an interesting scenario in that the Japanese and North American releases use *different* mapper schemes, so while much of the code is the same, some parts were apples to oranges between the existing and new disassembly. Where this has been challenging (and a challenge many other Famicom disassemblers haven't taken) is getting a build that resembles a typical software product, i.e. things nicely split out with little concern as to what bank they're going to wind up on. My choice of linker facilitates this by being able to define separate output binaries for different memory segments, so I can produce these banks as distinct binaries and the population thereof is simply based on segment. Moving items between banks then becomes simply changing the segment and adding/removing a bank-in before calling out. I don't know that I've ever seen a Famicom disassembly do this but I also can't say I've gone looking heavily.
Another challenge has been the language, and not necessarily just because it's Japanese and not my native language. So old Japanese computing is largely in ShiftJIS whereas we now have UTF-8. Well, Dragon Quest uses neither of these as they instead use indices into a table of kana character tiles, as they are ordered in memory, as the string format. What this means is to actually effectively manage strings, and I haven't done this yet, is one needs a to-fro encoder that can convert between UTF-8 Japanese and the particular positioning of their various characters. This says nothing of the difficulty then of translating the game. Most Japanese games of this era used exclusively kana for two reasons: These are games for children, so too much complicated Kanji and you've locked out your target audience. The other reason is space; it would be impossible to fit all the necessary Kanji in memory, even as 8x8 pixel tiles (the main graphic primitive on these chips.) Plus, even if you could, an 8x8 pixel tile is hardly enough resolution to read many Kanji, so they'd likely need 16x16, quadrupling the space requirement if you didn't recycle quadrant radicals. In any case, what this means is all of the strings are going to translate to strictly hiragana or katakana strings, which are Japanese, but not as easily intelligible as the groupings of kana then have to be interpreted into their meanings by context clues often times rather than having an exact definition. The good news though is again these are games for children, so the vocabulary shouldn't be too complicated.
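The to-fro encoder described above boils down to a lookup table between tile indices and characters. A minimal Python sketch of the mechanism; the actual tile ordering in Dragon Quest's CHR ROM is game-specific, so the indices in this tiny table are hypothetical, purely to show the round trip:

```python
# Sketch of a to-fro string encoder as described above. The real
# index-to-kana mapping follows the tile order in the game's CHR ROM;
# the three entries here are hypothetical placeholders.
TILE_TO_KANA = {0x00: "あ", 0x01: "い", 0x02: "う"}   # hypothetical indices
KANA_TO_TILE = {kana: tile for tile, kana in TILE_TO_KANA.items()}

def decode(tile_bytes):
    """Tile-index string (as stored in ROM) -> UTF-8 Japanese text."""
    return "".join(TILE_TO_KANA[b] for b in tile_bytes)

def encode(text):
    """UTF-8 Japanese text -> tile-index string for the ROM."""
    return bytes(KANA_TO_TILE[ch] for ch in text)

assert decode(encode("あいう")) == "あいう"   # round trip holds
```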
Endianness occasionally presents some problems, one of which I suspect arose because the chip designers didn't talk to each other. The PPU exposes a register you write two bytes to: the high and then low byte of the address at which the next value written to the data register will be placed. Problem is, this is a 6502-driven system, which is little-endian, so when grabbing a word from memory the high byte is the second one. This means that every word has to be sent to the PPU in reverse when selecting an address. This PPU to my knowledge draws on some arcade hardware but was otherwise designed strictly for this console, so why they didn't present an address register in the same endianness I'll never know. It tripped me up early on, but I worked past it.
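In assembly terms, the dance looks something like this ("addr" is a hypothetical two-byte variable holding the target address; $2006 is the PPU's address register):

```asm
; Selecting a PPU address: $2006 wants the HIGH byte first,
; the reverse of how the little-endian 6502 stores a 16-bit word.
        lda addr+1      ; high byte (second byte in memory)
        sta $2006       ; first write to the PPU address register
        lda addr        ; low byte (first byte in memory)
        sta $2006       ; second write completes the address
```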
Finally, back to the mappers: since a "binary" was really a gaggle of potentially overlapping ROM banks, there wasn't really a single ROM image format used by Nintendo back in the day. Rather, you sent along your banks, layout, and what mapper hardware you needed, and that last step was a function of cartridge fab, not some software step. The iNES format is an accommodation for that, presenting the string "NES" as a magic number in a 16-byte header that also contains a few bytes describing the physical cartridge: what mapper hardware, how many banks, what kind, and particular jumpers that influence things like nametable mirroring. Countless disassemblies out there are built under the (incorrect) assumption that the iNES binary format is how the binary objects are supposed to build, but this is simply not the case, and forcing this "false structure" actually makes analysis of the layout and organization of the original project more difficult. What I opted for instead is using the above-described mechanism of linker scripts defining individual binaries, in tandem with two nice old tools: printf(1) and cat(1). When all my individual ROM banks are built, I simply use printf to spit out the 16 bytes of an iNES header matching my memory particulars and then cat it all together, as the iNES format is simply that header followed by all of the PRG and CHR ROM banks in that order. The end result is a build that is agnostic (rightfully so) of the fact that it is becoming an iNES ROM, which is a community invention from years after the Famicom was in active use.
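A minimal sketch of that final step (the bank counts, filenames, and flag bytes here are hypothetical stand-ins, not Dragon Quest's actual particulars; octal escapes keep the printf portable):

```shell
# Stand-in banks so the example is runnable: 2 x 16 KiB PRG, 1 x 8 KiB CHR.
dd if=/dev/zero of=prg.bin bs=16384 count=2 2>/dev/null
dd if=/dev/zero of=chr.bin bs=8192  count=1 2>/dev/null

# 16-byte iNES header: magic "NES\032", PRG bank count, CHR bank count,
# then flags and padding (all zero here: mapper 0, horizontal mirroring).
printf 'NES\032\002\001\000\000\000\000\000\000\000\000\000\000' > header.bin

# The iNES format is just header, PRG banks, CHR banks, in that order.
cat header.bin prg.bin chr.bin > game.nes
```

Nothing upstream of the cat needs to know an iNES ROM is being produced.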
Note that this is still an ongoing project. I don't consider it "done" but it is more usable than not now. Some things to look out for:
Any relative positioning of specific graphic data in CHR banks, outside that which has been extracted into specific files, is not dynamic, meaning moving a tileset around in CHR will garble the display of the associated object.
A few songs (credits, dungeons, title screen) do not have every pointer massaged out of them yet, so if their addresses change, those songs at the very least won't sound right, and in extreme circumstances playing them with a shift in pointers could crash the game.
Much of the binary data is still BLOB-afied, hence not each and every pointer being fixed. This will be slow moving, and there may be a few parts that wind up involving some scripting to keep them all dynamic (for instance, maps include block IDs and such, which are ultimately indices into a table, but the maps make more sense to keep as binary data, so perhaps a patcher is needed to "filter" a map and assign final block IDs; hard to say.)
Also, while some data is very heavily coupled to the engine, such as music, and has accordingly been disassembled into audio-engine commands, other data, like graphic tiles, is more generic to the hardware platform and has no particular dependencies on the code. Such items can exist as pure BLOBs without worry of any pointers buried within or ID values that need to track indices in a particular table. These flat binary tiles and other comparable components are therefore *not* included in this repository, as they are the copyright of their respective owners. However, I have provided a script that allows you to extract these binary components should you have a clean copy of Dragon Quest for the Famicom as an iNES ROM. The script is all dd(1) calls based on the iNES ROM itself, but there should be enough notes contained within if a bank-based extraction is necessary. YMMV using the script on anything other than the commercial release; it is only intended to bootstrap things and will not properly extract the assets from arbitrary builds.
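This is not the project's actual script, but the general dd-on-an-iNES-ROM technique it uses looks like the following sketch: skip the 16-byte header, then carve out fixed-size banks (a synthetic stand-in ROM is built first so the commands are runnable; the bank sizes assume 16 KiB PRG banks):

```shell
# Build a fake ROM to operate on: 16-byte header + two 16 KiB PRG banks.
printf '0123456789abcdef' > rom.nes
dd if=/dev/zero bs=16384 count=2 2>/dev/null >> rom.nes

# Extract each PRG bank: the header is 16 bytes, so bank n starts at
# byte offset 16 + n * 16384.
dd if=rom.nes of=prg0.bin bs=1 skip=16    count=16384 2>/dev/null
dd if=rom.nes of=prg1.bin bs=1 skip=16400 count=16384 2>/dev/null
```

The real script's offsets follow the commercial release's bank layout, which is why it won't line up against arbitrary builds.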
If anyone has any questions/comments feel free to contact me off list, or if there is worthwhile group discussion to be had, either way, enjoy!
- Matt G.
[1] - https://github.com/nmikstas/dragon-warrior-disassembly
Hello, today I received in the mail a book I ordered, apparently written by one of the engineers at Sega responsible for their line of consoles. It's all in Japanese, but based on the little I know plus the tables in the text, it appears to be fairly technical and thorough. I'm excited to start translating it and to see what lies within.
In any case, it got me thinking about what company this book might keep as far as Japanese literature on computing history goes, or even just significant literature in general regarding Japanese computer history. While we are more familiar with IBM, DEC, workstations, minis, etc., the Japanese market had its own spate of different systems, such as NEC's various "PCs" (not PC-compats: PC-68, PC-88, PC-98), the Sharp X68000, MSX(2), etc., and then of course Nintendo, Sega, NEC, Hudson, and the arcade board manufacturers. My general experience is that Japanese companies are significantly more tight-lipped about everything than those in the U.S. and other English-speaking countries, going so far as to require employees to use pseudonyms in any sort of credits to prevent potential poaching. As such, first-party documentation for much of this stuff is incredibly difficult to come by, and secondary materials, memoirs, and such are, in my experience at least, virtually non-existent. That said, this is also from my perspective here across the seas, trying to research an obscure, technical subject in my non-native tongue. Anyone here have a particular eye for Japanese computing? If so, I'd certainly be interested in some discussion; it doesn't need to be on list either.
- Matt G.
Howdy folks, just wanted to share a tool I wrote up today in case it might be useful for someone else: https://gitlab.com/segaloco/dis65
This has probably been done before, but this is a bare-bones, one-pass MOS 6500 disassembler that does nothing more than convert bytes to mnemonics and operands: no labeling, no origins, etc. My rationale: as I work on my Dragon Quest disassembly, there are times I have to pop a couple of bytes through the disassembler again because something got misaligned or some other weird issue cropped up. My disassembler throughout the project has been da65, which does all the labeling and origin stuff but, as such, requires a lot of seeking and isn't really amenable to a pipeline. That has required me to do something like:
printf "\xAD\xDE\xEF\xBE" > temp.bin && da65 temp.bin && rm temp.bin
to get the assembly equivalent of 0xDEADBEEF.
Enter my tool, which enables stuff like:
printf "\xAD\xDE\xEF\xBE" | dis65
instead. A longer-term plan is to then write a second pass that can do all the more sophisticated stuff without having to bury the mnemonic generation down in there somewhere; plus, that second pass could then be architecture-agnostic to a high degree.
Anywho, feel free to do what you want with it; it's BSD-licensed. One "bug" I need to address is that all byte values are presented as unsigned, but in the case of indirects and a few other circumstances it would make more sense for them to be signed. Probably won't jump on that ASAP, but know that's a coming improvement. While common in disassemblers, I have no intention of adding things like printing the binary bytes next to the opcodes. Also, this doesn't support any of the undocumented opcodes, though it should be trivial to add them if needed. I went with lower case since my assembler supports it, but you should have a fine time piping into tr(1) if you need all caps for an older assembler.
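For one concrete signed case: 6502 relative branch operands are two's-complement bytes, so a raw $FE really means a displacement of -2. A sketch of the conversion the fix needs (hypothetical helper, not dis65's actual code):

```shell
# Interpret an unsigned byte (0-255) as a signed 8-bit value,
# e.g. for rendering branch displacements: 254 ($FE) -> -2.
signed() {
    if [ "$1" -ge 128 ]; then
        echo $(( $1 - 256 ))
    else
        echo "$1"
    fi
}

signed 254   # a branch displacement of $FE
signed 16    # small positive values pass through unchanged
```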
- Matt G.
C, BLISS, BCPL, and the like were hardly the only systems programming
languages that targeted the PDP-11. I knew about many system programming
languages of those times and used all three of these, plus a few others,
such as PL/360, which Wirth created at Stanford in the late 1960s to
develop the Algol-W compiler. Recently, I was investigating something
about early systems programming languages, and a couple of questions came
to me that I could use some help finding answers to (see below).
In 1971, RD Russell of CERN wrote a child of Wirth's PL/360 called PL-11:
Programming Language for the DEC PDP-11 Computer
<https://cds.cern.ch/record/880468/files/CERN-74-24.pdf> in Fortran IV. It
supposedly ran on CERN's IBM 360 as a cross-compiler and was hosted on
DOS-11 and later RSX. [It seems very 'CARD' oriented if you look at the
manual - which makes sense, given the time frame]. I had once before heard
about it but knew little. So, I started to dig a little.
If I understand some of the history correctly, PL-11 was created/developed
for a real-time test jig that CERN needed. While BLISS-11 was available in
limited cases, it required a PDP-10 to cross-compile, so it was not
considered (I've stated earlier that some poor marketing choices at DEC
hurt BLISS's ability to spread). Anyway, a friend at CERN later in the
70s/80s told me they thought that as soon as UNIX, being interactive and
more accessible, made it onto the scene there, C quickly became the
preferred systems programming language of choice. However, a BCPL that
had come from somewhere in the UK was also kicking around.
So, some questions WRT PL-11:
1. Does anyone here know any (more) of the story -- Why/How?
2. Do you know if the FORTRAN source survives?
3. Did anything interesting/lasting get written using it?
Tx
Clem
I thought folks on COFF and TUHS (Bcc'ed) might find this interesting.
Given the overlap between SDF and LCM+L, I wonder what this may mean
for the latter.
- Dan C.
---------- Forwarded message ---------
From: SDF Membership <membership(a)sdf.org>
Date: Thu, Aug 24, 2023 at 9:10 PM
Subject: [SDF] Computer Museum
To:
We're in the process of opening a computer museum in the Seattle area
and are holding our first public event on September 30th - October 1st.
The museum features interactive exhibits of various vintage computers
with a number of systems remotely accessible via telnet/ssh for those
who are unable to visit in person.
If this interests you, please consider replying with comments and
take the ascii survey below. You can mark an X for what interests you.
I would like to know more about:
[ ] visiting the museum
[ ] how to access the remote systems
[ ] becoming a regular volunteer or docent
[ ] restoring and maintaining various vintage systems
[ ] curation and exhibit design
[ ] supporting the museum with an annual membership
[ ] supporting the museum with an annual sponsorship
[ ] funding the museum endowment
[ ] day to day administration and operations
[ ] hosting an event or meet up at museum
[ ] teaching at the museum
[ ] donating an artifact
Info on our first public event can be found at https://sdf.org/icf
Good morning folks, I'm hoping to pick some brains on something that is troubling me in my search for some historical materials.
Was there some policy, prior to mass PDF distribution, with standards bodies like ANSI that they only printed copies of standards "to order" or something like that? What has me asking is that when looking for programming materials from before PDF distribution would have taken over, there's a dearth of actual ANSI print publications. I've come across only one actual print standard in all my history of searching, a copy of Fortran 77 which I guard religiously. Compare this with PALLETS'-worth, like I'm talking warehouse-wholesale levels, of secondary sources for the same things. I could *drown* in all the secondary COBOL 74 books I see all over the place, but I've never seen a blip of a suggestion of a whisper of an auction of someone selling a legitimate copy of ANSI X3.23-1974. It feels like searching for a copy of the Christian Bible and literally all I can find are self-help books and devotional readers from random followers. Are the standards really that scarce, or are they something most owners back in the day would have thrown in the wood chipper when the next edition dropped, leading to an artificial narrowing of the number of physical specimens still extant?
To summarize: why do print copies of primary standards from the elden days of computing seem like cryptids, while one could flatten oneself into a pancake under the mountains upon mountains of derivative materials out there? Why is filtered material infinitely more common than the literal rule of law governing the languages? For instance, the closest thing to the legitimate ANSI C standard, a world-changing document, that I can find is the "annotated" version, which thankfully is the full text, but blown up to twice the thickness just to include commentary. My bookshelf is starting to run out of room to accommodate noise like that when there are nice succinct "final answer" documents that take up much less space but seem to virtually not exist...
- Matt G.