FYI: Tim was Mr. 36-bit kernel and I/O system until he moved to the Vax and
later Alpha (and Intel).
The CMU device he refers to was the XGP, a Xerox long-distance fax
(LDX). Stanford and MIT would get them too, shortly thereafter.
---------- Forwarded message ---------
From: Timothe Litt
Date: Thu, Dec 21, 2023 at 1:52 PM
Subject: Re: Fwd: [COFF] IBM 1403 line printer on DEC computers?
To: Clem Cole
I don't recall ever seeing a 1403 on a DECsystem-10 or DECSYSTEM-20. I
suppose someone could have connected one to a Systems Concepts channel...
or the DX20 Massbus -> IBM MUX/SEL channel used for the STC TU70/1/2
tape drives and the RP20 (= STC 8650) disk drives. (A KMC11-based device.) Not
sure why anyone would.
Most of the DEC printers on the -10/20 were Dataproducts buy-outs, and were
quite competent. 1,000 - 1,250 LPM. Earlier, we also bought from MDS and
Anelex; good performance (1,000 LPM), but needed more TLC from FS. The
majority were drum printers; the LP25 was a band printer, and lighter duty
(~300LPM).
Traditionally, we had long-line interfaces to allow all the dust and mess
to be located outside the machine room. Despite filters, dust doesn't go
well with removable disk packs. ANF-10 (and eventually DECnet) remote
stations provided distributed printing.
CMU had a custom interface to some XeroX printer - that begat Scribe.
The LN01 brought laser printing - light duty, but was nice for those
endless status reports and presentations. I think the guts were Canon -
but in any case a Japanese buyout. Postscript. Networked.
For high volume printing internally, we used XeroX laser printers when they
became available. Not what you'd think of today - these are huge,
high-volume devices. Bigger than the commercial copiers you'd see in print
shops. (Perhaps interestingly, they internally used PDP-11s running
RSX-11M.) Networked, not direct attach. They also were popular in IBM shops.
We eventually released the software to drive them (DQS) as part of GALAXY.
The TU7x were solid drives - enough so that the SDC used them for making
distribution tapes. The copy software managed to keep 8 drives spinning at
125/200 ips - which was non-trivial on TOPS-20.
The DX20/TX0{2,3}/TU7x *was* eventually made available for VAX - IIRC as
part of the "Migration" strategy to keep customers when the -10/20 were
killed. I think CSS did the work on that for the LCG PL. Tapes only - I
don't think anyone wanted the disks by then - we had cheaper dual-porting
via the HSC/CI, and larger disks.
The biggest issue for printers on VAX was the omission of VFU support.
Kinda hard to print paychecks and custom forms without it - especially if
you're porting COBOL from the other 3-letter company. Technically, the
(Unibus) LP20 could have been used, but wasn't. CSS eventually solved
that with some prodding from Aquarius - I pushed that among other high-end
I/O requirements.
On 21-Dec-23 12:29, Clem Cole wrote:
Tim - care to take a stab at this?
> From: Paul Winalski
> The 1403 attached to S/360/370 via a byte multiplexer channel ...
> The question is, did they have a way to attach the 1403 to any of their
> computer systems?
There's a thing called a DX11:
https://gunkies.org/wiki/DX11-B_System_360/370_Channel_to_PDP-11_Unibus_Int…
which attaches a "selector, multiplexer or block multiplexer channel" to a
UNIBUS machine, which sounds like it could support the "byte multiplexer
channel"?
The DX11 brochure only mentions that it can be "programmed to emulate a 2848,
2703 or 3705 control unit" - i.e. look like a peripheral to an IBM CPU;
whether it could look like an IBM CPU to an IBM peripheral, I don't know.
(I'm too lazy to look at the documentation; it seems to be all there, though.)
Getting from the UNIBUS to the -10, there were off-the-shelf boxes: the DL10
for the KA10 and KI10 CPUs, and the DTE20 on a KL10.
It all probably needed some coding, though.
Noel
There's been a discussion recently on TUHS about the famous IBM 1403
line printer. It's strayed pretty far off-topic for TUHS so I'm
continuing the topic here in COFF.
DEC marketed its PDP-10 computer systems as their solution for
traditional raised-floor commercial data centers, competing directly
with IBM System 360/370. DEC OEMed a lot of data center peripherals
such as card readers/punches, line printers, 9-track magtape drives,
and disk drives for their computers, but their main focus was low cost
vs. heavy duty. Not really suitable for the data center world.
So DEC OEMed several high-end data center peripherals for use on big,
commercial PDP-10 computer systems. For example, the gold standard
for 9-track tape drives in the IBM world was tape drives from Storage
Technology Corporation (STC). DEC designed an IBM selector
channel-to-MASSBUS adapter that allowed one to attach STC tape drives
to a PDP-10. AFAIK this was never offered on the PDP-11, the VAX, or any
other of DEC's computer lines. They had similar arrangements for
lookalikes for IBM high-performance disk drives.
Someone on TUHS recalled seeing an IBM 1403 or similar line printer on
a PDP-10 system. The IBM 1403 was certainly the gold standard for
line printers in the IBM world and was arguably the best impact line
printer ever made. It was still highly sought after in the 1970s,
long after the demise of the 1950s-era IBM 1400 computer system it was
designed to be a part of. Anyone considering a PDP-10 data center
solution would ask about line printers and, if they were from the IBM
world, would prefer a 1403.
The 1403 attached to S/360/370 via a byte multiplexer channel, so one
would need an adapter that looked like a byte multiplexer channel on
one end and could attach to one of DEC's controllers at the other end
(something UNIBUS-based, most likely).
We know DEC did this sort of thing for disks and tapes. The question
is, did they have a way to attach the 1403 to any of their computer
systems?
-Paul W.
> the first DEC machine with an IC processor was the -11/20, in 1970
Clem has reminded me that the first was the PDP-8/I-L (the second was a
cost-reduced version of the -I). The later, and much more common,
PDP-8/E-F-M were contemporaneous with the -11/20.
Oh well, only two years; doesn't really affect my main point. Just about
'blink and you'll miss them'!
Noel
> From: Bakul Shah
> Now I'd probably call them kernel threads as they don't have a separate
> address space.
Makes sense. One query about stacks, and blocking, there. Do kernel threads,
in general, have per-thread stacks; so that they can block (and later resume
exactly where they were when they blocked)?
That was the thing that, I think, made kernel processes really attractive as
a kernel structuring tool; you get code like this (from V6):
swap(rp->p_addr, a, rp->p_size, B_READ);
mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);
The call to swap() blocks until the I/O operation is complete, whereupon that
call returns, and away one goes. Very clean and simple code.
Use of a kernel process probably makes the BSD pageout daemon code fairly
straightforward, too (well, as straightforward as anything done by Berzerkly
was :-).
Interestingly, other early systems don't seem to have thought of this
structuring technique. I assumed that Multics used a similar technique to
write 'dirty' pages out, to maintain a free list. However, when I looked in
the Multics Storage System Program Logic Manual:
http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageS…
Multics just writes dirty pages as part of the page fault code: "This
starting of writes is performed by the subroutine claim_mod_core in
page_fault. This subroutine is invoked at the end of every page fault." (pg.
8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to
complete dealing with a page fault.)
It makes sense to have a kernel process do this; having the page fault code
do it just makes that code more complicated. (The code in V6 to swap
processes in and out is beautifully simple.) But it's apparently only obvious
in retrospect (like many brilliant ideas :-).
Noel
So Lars Brinkhoff and I were chatting about daemons:
https://gunkies.org/wiki/Talk:Daemon
and I pointed out that in addition to 'standard' daemons (e.g. the printer
spooler daemon, email daemon, etc, etc) there are some other things that are
daemon-like, but are fundamentally different in major ways (explained later
below). I dubbed them 'system processes', but I'm wondering if anyone knows if
there is a standard term for them? (Or, failing that, if they have a
suggestion for a better name?)
Early UNIX is one of the first systems to have one (process 0, the "scheduling (swapping)
process"), but the CACM "The UNIX Time-Sharing System" paper:
https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf
doesn't even mention it, so no guidance there. Berkeley UNIX also has one,
mentioned in "Design and Implementation of the Berkeley Virtual Memory
Extensions to the UNIX Operating System":
http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf
where it is called the "pageout daemon". ("During system initialization, just
before the init process is created, the bootstrapping code creates process 2
which is known as the pageout daemon. It is this process that .. writ[es]
back modified pages. The process leaves its normal dormant state upon being
waken up due to the memory free list size dropping below an upper
threshold.") However, I think there are good reasons to dis-favour the term
'daemon' for them.
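The wakeup-on-threshold behavior quoted above can be sketched like this (a minimal simulation of the idea, not the BSD code; the names and numbers are made up):

```python
# Minimal sketch of a pageout-daemon-style loop: dormant until the free
# list drops below a threshold, then it "writes back" modified pages to
# refill the free list.
FREE_THRESHOLD = 4

free_list = list(range(2))                              # too few free frames
modified = [{"page": n, "dirty": True} for n in range(10)]

def pageout_daemon():
    # Runs when free frames fall below the threshold.
    while len(free_list) < FREE_THRESHOLD and modified:
        page = modified.pop()
        page["dirty"] = False                           # simulate write-back
        free_list.append(page["page"])

pageout_daemon()
print(len(free_list))  # 4
```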
For one thing, typical daemons look (to the kernel) just like 'normal'
processes: their object code is kept in a file, and is loaded into the
daemon's process when it starts, using the same mechanism that 'normal'
processes use for loading their code; daemons are often started long after
the kernel itself is started, and there is usually not a special mechanism in
the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init'
process, not the kernel); daemons interact with the kernel through system
calls, just like 'ordinary' processes; the daemon's process runs in 'user'
CPU mode (using the same standard memory mapping mechanisms, just like
blah-blah).
'System processes' do none of these things: their object code is linked into
the monolithic kernel, and is thus loaded by the bootstrap; the kernel
contains special provision for starting the system process, which starts as
the kernel is starting; they don't do system calls, just call kernel routines
directly; they run in kernel mode, using the same memory mapping as the
kernel itself; etc, etc.
Another important point is that system processes are highly intertwined with
the operation of the kernel; without the system process(es) operating
correctly, the operation of the system will quickly grind to a halt. The loss
of 'ordinary' daemons is usually not fatal; if the email daemon dies, the
system will keep running indefinitely. Not so for the swapping process, or
the pageout daemon.
Anyway, is there a standard term for these things? If not, a better name than
'system process'?
Noel
For the benefit of Old Farts around here, I'd like to share the good
word that an ITS 138 listing from 1967 has been discovered. A group of
volunteers is busy transcribing the photographed pages to text.
Information and link to the data:
https://gunkies.org/wiki/ITS_138
This version is basically what ITS first looked like when it went into
operation at the MIT AI lab. It's deliciously arcane and primitive.
Mass storage is on four DECtape drives, no disk here. Users stations
consist of five teletypes and four GE Datanet 760 CRT consoles (46
columns, 26 lines). The number of system calls is a tiny subset of what
would be available later.
There are more listings from 1967-1969 for DDT, TECO, LISP, etc. Since
they are fan-fold listings, scanning is a bit tricky, so a more labor-
intensive photographing method is used.
Hello everyone, I was wondering if anyone is aware of any surviving technical diagrams/schematics for the WECo 321EB or WECo 321DS WE32x00 development systems? Bitsavers has an AT&T Data Book from 1987 detailing pin maps, registers, etc. of 32xxx family ICs, and then another, earlier manual from 1985 that seems to be more focused on a technical overview of the CPU specifically. Both have photographs and surface-level block diagrams, but nothing showing individual connections, which bus leads went where, etc. While the descriptions should be enough, diagrams are always helpful.
In any case, I've recently ordered a 32100 CPU and 32101 MMU I saw sitting on eBay to see what I can do with some breadboarding and some DRAM/DMA controllers from other vendors; I was thinking of referring to any available design schematics of the 321 development stuff for pointers on integration. Either way, I'm glad the data books on the hardware have been preserved; that gives me a leg up.
Thanks for any insights!
- Matt G.
Good day everyone, I thought I'd share a new project I've been working on since it is somewhat relevant to old and obscure computing stuff that hasn't gotten a lot of light shed on it.
https://gitlab.com/segaloco/doki
After the link is an in-progress disassembly of Yume Kojo: Doki Doki Panic for the Famicom Disk System, known better in the west as the engine basis for Super Mario Bros. 2 for the NES (the one with 4 playable characters, pick-and-throw radishes, etc.)
What inspired me to start on this project is that the Famicom Disk System is painfully under-documented, and what is out there is pretty patchy. Unlike with its parent console, no first-party development documentation has been archived concerning the Disk System, so all that is known about its programming interfaces has been determined from disassemblies of boot ROMs and bits and pieces of titles over the years.
The system is just that: a disk drive that connects to the Famicom via a special adapter that provides some RAM, additional sound functionality, and some handling for matters typically controlled by the cartridge (background scroll-plane mirroring and saving in particular). The physical disk format is based on Mitsumi's QuickDisk format, albeit with the casing extended in one dimension to provide physical security grooves that, if not present, will prevent the inserted disk from booting.
The hardware includes a permanently resident boot ROM which maps to 0xE000-0xFFFF (and therefore provides the 6502 vectors). This boot ROM in turn loads any files from the disk that match a specified pattern in the header to header-defined memory ranges, and then acts on a secondary vector table at 0xDFFA (really 0xDFF6; the disk system allows three separate NMI vectors, which are selected by a device register). The whole of the standard Famicom programming environment applies, although the Disk System adds an additional bank of device registers in a reserved memory area and exposes a number of "syscalls" (really just entry points in the 0xE000-0xFFFF range; it's unknown at present to what degree these entries/addresses were documented to developers).
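The vector-table arithmetic described above works out like this (a sketch of my reading of the layout; how the device register value maps to a slot is an assumption on my part, not documented behavior):

```python
# Three 2-byte NMI vectors sit just below the boot ROM's own 6502
# vectors: 0xDFF6, 0xDFF8, 0xDFFA. Which slot the hardware uses when
# the device register selects slot N is my assumption, not documented.
VECTOR_BASE = 0xDFF6

def nmi_vector_addr(slot):
    # slot 0..2 selects one of the three NMI vectors
    assert 0 <= slot <= 2
    return VECTOR_BASE + 2 * slot

print([hex(nmi_vector_addr(s)) for s in range(3)])
# ['0xdff6', '0xdff8', '0xdffa']
```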
I had to solve a few interesting challenges in this process, since this particular area gets so little attention. First, I put together a utility and supporting library to extrapolate info from the disk format. Luckily the header has been (mostly) documented, and I was able to document a few consistencies between disks to fill in a few of the parts that weren't as well documented. In any case, the results of that exercise are here: https://gitlab.com/segaloco/fdschunk.
One of the more interesting matters is that the disk creation and write dates are stored not only in BCD, but the year is not always Gregorian. Rather, many titles instead reflect the Japanese period at the time the title was released. For instance, the Doki Doki Panic image I'm using as a reference is dated (YY/MM/DD) "61/11/26", which is preposterous - the Famicom was launched in 1983 - but applying this knowledge of the Showa period, the date is really "86/11/26", which makes much more sense. This is one of those things I run into studying Japanese computing history from time to time; I'm sure the same applies to earlier computing in other non-Western countries.
We're actually headed for a "2025 problem" with this time-keeping, as that is when the Showa calendar rolls over. No ROMs have been recovered from the disk writer kiosks employed by Nintendo in the 80s, so it is unknown what the official hardware which applies these timestamps does when that counter rolls over. I've just made the assumption that it should roll back to 00, but there is no present way to prove this. The 6502 implementation in the Famicom (the Ricoh 2A03) omitted the 6502 BCD mode, so this was likely handled either in software or perhaps in a microcontroller ROM down inside the disk drives themselves.
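The Showa-to-Gregorian fix described above is simple arithmetic once the BCD byte is decoded (Showa year 1 = 1926, so Gregorian = 1925 + N); here is a small sketch of it:

```python
# Sketch of the Showa-era date conversion: decode the BCD year byte,
# then apply the Showa offset (Showa 1 = 1926, so Gregorian = 1925 + N).
def bcd_to_int(b):
    # e.g. 0x61 -> 61
    return (b >> 4) * 10 + (b & 0x0F)

def showa_to_gregorian(bcd_year):
    return 1925 + bcd_to_int(bcd_year)

print(showa_to_gregorian(0x61))  # 1986 - matches "61/11/26" -> "86/11/26"
```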
I then had to solve the complementary problem: how do I put a disk image back together according to specs that aren't currently accessible? Well, to do that, I first chopped the headers off of every first-party Nintendo image I had in my archive and compared them in a table. I divided them into two groups: pristine images that represent original pressings in a Nintendo facility, and "dirty" images that represent a rewrite of a disk at one of the disk kiosks (mind you, Nintendo distributed games both ways: you could buy a packaged copy, or you could bring a rewritable disk to a kiosk and "download" a new game.) My criterion for categorization was whether the disk create and modify times were equal or not. This allowed me to get a pretty good picture of what headers coming out of the factory look like, and how they change when the disk is touched by a writer kiosk.
I then took the former configuration and wrote a couple of tools to consume a very spartan description of the variable pieces and produce the necessary images: https://gitlab.com/segaloco/misc/-/tree/master/fds. These tools, bintofdf and fdtc, apply a single file header to a disk file and create a "superblock" for a disk side, respectively. I don't know what the formal terms are - they may be lost to time - but superblock hopefully gets the point across, albeit it's not an exact analog to UNIX filesystems. Frankly, I can't find anything regarding what filesystem this might be based on, if at all, or if it is an entirely Nintendo-derived format. In any case, luckily the header describing a file is self-contained in that file, and the superblock only needs to know how many files are present, so the two steps can be done independently. The result is a disk image, stamped with the current Showa BCD date, that is capable of booting on the system. The only thing I don't add that "pure" disks contain are CRCs of the files.
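The categorization rule above boils down to one comparison; a sketch (the field names here are hypothetical placeholders, not the real header layout):

```python
# Sketch of the pristine-vs-rewritten rule: a disk whose create and
# modify dates match is treated as a factory pressing, otherwise as a
# kiosk rewrite. Field names are hypothetical, not the real header layout.
def classify(header):
    if header["created"] == header["modified"]:
        return "pristine"
    return "rewritten"

disks = [
    {"created": "61/11/26", "modified": "61/11/26"},  # factory pressing
    {"created": "61/11/26", "modified": "62/03/01"},  # kiosk rewrite
]
print([classify(d) for d in disks])  # ['pristine', 'rewritten']
```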
On a physical disk, these header blocks also contain CRCs of the data they describe, these, by convention, are omitted from disk dumps. I'm actually not entirely sure why, but I imagine emulator writers just omit the CRC check as well, so it doesn't matter to folks just looking to play a game.
Finally, there's the matter of disparate files which may or may not necessarily be sitting in memory at runtime. Luckily the linker script setup in cc65 (the compiler suite I'm using) is pretty robust, and just like my Dragon Quest disassembly (which is made up of swappable banks), I was able to use the linker system to produce all of the necessary files in isolation, rather than having to get creative with orgs and compilation order to cobble together something that worked. This allows the code to be broken down into its logical structure, rather than treating a whole disk side as if it were one big binary with .org commands all over the place.
Anywho, I don't intend on a rolling update to this email or anything, but if this is something that piques anyone's interest and you'd like to know more, feel free to shoot me a direct reply. I'd be especially interested in any stories or info regarding Mitsumi QuickDisk, as one possibility is that Nintendo's format is derived from something of Mitsumi's own, with reserved/undefined fields redefined for Nintendo's purposes. That said, it's just a magnetic disk; I would be surprised if a single filesystem was enforced in all implementations.
Thanks for following along!
- Matt G.
P.S. As always contributions to anything I'm working on are welcome and encouraged, so if you have any input and have a GitLab account, feel free to open an issue, fork and raise a PR, etc.