Friend asked an odd question:
Were VAXen ever used to send/receive faxes large-scale? What software was
used and how was it configured?
Was any of this run on any of the UCB VAXen?
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 2015-01-06 23:56, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Tue, Jan 6, 2015 at 5:45 PM, Noel Chiappa<jnc(a)mercury.lcs.mit.edu>
> wrote:
>
>> >I have no idea why DEC didn't put it in the 60 - probably helped kill that
>> >otherwise interesting machine, with its UCS, early...
>> >
> ?"Halt and confuse ucode" had a lot to do with it IMO.
>
> FYI: The 60 set the record of going from production to "traditional
> products" faster than anything else in DEC's history. As I understand
> it, the 11/60 was expected to be a business system and run RSTS. Why the WCS
> was put in, I never understood, other than I expect the price of static RAM
> had finally dropped and DEC was buying it in huge quantities for the
> Vaxen. The argument was that they could update the ucode cheaply in the
> field (which to my knowledge they never did). But I asked that question
> many years ago of one of the HW managers, who explained to me that it was
> felt separate I/D was not needed for the targeted market and would have
> somehow increased cost. I don't understand why it would have cost any
> more but I guess it was late.
No, field upgrade of microcode cannot have been it. The WCS for the
11/60 was an option. Very few machines actually had it. It was for
writing your own extra microcode as an addition to the architecture.
The basic microcode for the machine was in ROM, just like on all the
other PDP-11s. And DEC sold a compiler and other programs required to
develop microcode for the 11/60. Not that I know of anyone who had them.
I've "owned" four PDP-11/60 systems in my life. I still have a set of
boards for the 11/60 CPU, but nothing else left around.
The 11/60 was, by the way, not the only PDP-11 with WCS. The 11/03 (if I
remember right) also had such an option. Obviously the microcode was not
compatible between the two machines, so you couldn't move it over from
one to the other.
Also, reportedly, someone at DEC implemented a PDP-8 on the 11/60,
making it the fastest PDP-8 ever manufactured. I probably have some
notes about it somewhere, but I'd have to do some serious searching if I
were to dig that up.
But yes, the 11/60 went from product to "traditional" extremely fast.
Split I/D space was one omission from the machine, but even more serious
was the decision to only do 18-bit addressing on it. That killed it very
fast.
Someone else mentioned the MFPI/MFPD instructions as a way of getting
around the I/D restrictions. As far as I can tell, they can be used to
read/write instruction space. I would
assume that any OS would set both current and previous mode to user when
executing in user space.
The documentation certainly claims they will work. I didn't think of
those previously, but they would allow you to read/write to instruction
space even when you have split I/D space enabled.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Yep, the only time this was ever truly useful was so you could put an
> a.out directly into the boot block I think.
Well, sort of. If you had non-position-independent code, it would blow out
(it would be off by 020). Also, some bootstraps (e.g. the RL, IIRC) were so
close to 512. bytes long that the extra 020 was a problem. And it was so easy
to strip off:
dd if=a.out of=fooboot bs=1 skip=16
I'm not sure that anything actually used the fact that 407 was 'br .+020', by
the V6 era; I think it was just left over from older Unixes (where it was not
in fact stripped on loading). Not just on executables running under Unix; the
boot-loader also stripped it, so it wasn't even necessary to strip the a.out
header off /unix.
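
For reference, here is a sketch of the 16-byte header being discussed, written
with modern fixed-width types; the field names follow the later a.out.h, and the
exact V6 declaration may differ slightly.

#include <stdint.h>

/* Sketch of the 16-byte PDP-11 a.out header.  The magic number 0407 is
 * literally the PDP-11 instruction "br .+020", i.e. a branch over these
 * eight words, which is why an a.out could once be executed from its
 * first byte. */
struct exec {
	int16_t  a_magic;	/* 0407 normal, 0410 read-only text, 0411 split I/D */
	uint16_t a_text;	/* size of text segment in bytes */
	uint16_t a_data;	/* size of initialized data */
	uint16_t a_bss;		/* size of uninitialized data */
	uint16_t a_syms;	/* size of symbol table */
	uint16_t a_entry;	/* entry point */
	uint16_t a_unused;
	uint16_t a_flag;	/* relocation info stripped if set */
};
/* sizeof(struct exec) == 16 == 020 bytes, which is why the
 * "dd ... bs=1 skip=16" above strips exactly the header. */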
Noel
On 2015-01-06 20:57, Milo Velimirović <milov(a)cs.uwlax.edu> wrote:
> Bringing a conversation back online.
> On Jan 6, 2015, at 6:22 AM,arnold(a)skeeve.com wrote:
>
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind?)
> (Or even earlier than '81.) How did pdp11 UNIXes handle per-process memory? It's suggested above that there was a 50-50 split of the 64KB address space between instructions and data. My own recollection is that you got any combination of instruction and data space that was <64KB. This would also be subject to the limits of the pdp11 memory management unit.
>
> Anyone have a definitive answer or pointer to appropriate man page or source code?
You are conflating two things. :-)
A standard PDP-11 has 64KB of virtual memory space. This can be divided
any way you want between data and code.
Later model PDP-11 processors had a hardware feature called split I/D
space. This meant that you could have one 64KB virtual memory space for
instructions, and one 64KB virtual memory space for data.
(This also means that the text you quoted was incorrect when it stated
that you had 32KB to start with. It was/is 32 Kwords, i.e. 64KB.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2015-01-06 22:59, random832(a)fastmail.us wrote:
> On Tue, Jan 6, 2015, at 15:20, Johnny Billquist wrote:
>> >Later model PDP-11 processors had a hardware feature called split I/D
>> >space. This meant that you could have one 64KB virtual memory space for
>> >instructions, and one 64KB virtual memory space for data.
> Was it possible to read/write to the instruction space, or execute the
> data space? From what I've seen, the calling convention for PDP-11 Unix
> system calls read their arguments from directly after the trap
> instruction (which would mean that the C wrappers for the system calls
> would have to write their arguments there, even if assembly programs
> could have them hardcoded.)
Nope. A process cannot read or write to instruction space, nor can it
execute from data space.
It's inherent in the MMU. All references related to the PC will be done
from I-space, while everything else will be done through D-space.
So the MMU has two sets of page registers. One set maps I-space, and
another maps D-space. Of course, you can have them overlap, in which
case you get the traditional appearance of older models.
The versions of Unix I am aware of push arguments on the stack. But the
kernel can remap memory, and so can of course read the instruction
space. The user program itself, however, would not be able to
write anything after the trap instruction.
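
To make the two-register-set point concrete, here is a toy C sketch of the
address translation. It is only a sketch under simplifying assumptions: real
PARs hold base addresses in 64-byte clicks, there is a separate register set
per CPU mode, and the PDR length/access checks are omitted entirely.

#include <stdint.h>
#include <stdio.h>

enum ref_kind { REF_INSTRUCTION, REF_DATA };

struct mmu {
	uint32_t ipar[8];	/* I-space page bases (bytes, simplified) */
	uint32_t dpar[8];	/* D-space page bases */
	int split_id;		/* 0: D references fall back to the I-space set */
};

static uint32_t translate(const struct mmu *m, uint16_t va, enum ref_kind kind)
{
	unsigned page = va >> 13;	/* top 3 bits select one of 8 8KB pages */
	unsigned off  = va & 017777;	/* 13-bit byte offset within the page */
	const uint32_t *par =
	    (kind == REF_INSTRUCTION || !m->split_id) ? m->ipar : m->dpar;
	return par[page] + off;
}

int main(void)
{
	struct mmu m = { .ipar = { 0200000 }, .dpar = { 0400000 }, .split_id = 1 };
	/* The same virtual address maps to different physical memory depending
	 * on whether it is fetched as an instruction or accessed as data. */
	printf("%o %o\n",
	       (unsigned)translate(&m, 0100, REF_INSTRUCTION),
	       (unsigned)translate(&m, 0100, REF_DATA));
	return 0;
}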
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Clem Cole <clemc(a)ccc.com>
> Depends the processor. For the 11/45 class processors, you had a 17th
> address bit, which was the I/D choice. For the 11/40 class you shared
> the instructions and data space.
To be exact, the 23, 24, 34, 35/40 and 60 all had a single shared space.
(I have no idea why DEC didn't put it in the 60 - probably helped kill that
otherwise interesting machine, with its UCS, early...). The 44, 45/50/55, 70,
73, 83/84, and 93/94 had split.
> From: random832(a)fastmail.us
> the calling convention for PDP-11 Unix system calls read their
> arguments from directly after the trap instruction (which would mean
> that the C wrappers for the system calls would have to write their
> arguments there, even if assembly programs could have them hardcoded.)
Here's the code for a typical 'wrapper' (this is V6, not sure if V7 changed
the trap stuff):
_lseek:
	jsr	r5,csv			/ standard C prologue: save registers
	mov	4(r5),r0		/ first argument (fd) is passed in r0
	mov	6(r5),0f		/ copy the remaining arguments into the
	mov	8(r5),0f+2		/ argument words following the 'sys lseek'
	mov	10.(r5),0f+4		/ stored below, in the data segment
	sys	indir; 9f		/ indirect system call: execute the sys at 9:
	bec	1f			/ carry clear means success
	jmp	cerror			/ carry set: set errno and return -1
1:
	jmp	cret			/ standard C epilogue: restore registers
	.data
9:
	sys	lseek; 0:..; ..; ..	/ the real sys instruction plus three argument words (at 0:)
Note the switch to data space for storing the arguments (at the 0: label
hidden in the line of data), and the 'indirect' system call.
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Some access at the kernel level can be done with MFPI and MPFD
> instructions.
Unless you hacked your hardware, in which case it was possible from user mode
too... :-)
I remember how freaked out we were when we tried to use MFPI to read
instruction space, and it didn't work, whereupon we consulted the 11/45
prints, only to discover that DEC had deliberately made it not work!
> From: Ronald Natalie <ron(a)ronnatalie.com>
> After the changes to the FS, you'd get missing blocks and a few 0-0
> inodes (or ones where the link count was higher than the actual number of links). These
> while wasteful were not going to cause problems.
It might be worth pointing out that due to the way pipes work, if a system
crashed with pipes open, even (especially!) with the disk perfectly sync'd,
you'll be left with 0-0 inodes. Although as you point out, those were merely
crud, not potential sources of file-rot.
Noel
Apparently the message I was replying to was off-list, but it seems like
a waste to have typed all this out (including answering my own question)
and have it not go to the list.
On Tue, Jan 6, 2015, at 17:35, random832(a)fastmail.us wrote:
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/factor.s
> wrchar:
> mov r0,ch
> mov $1,r0
> sys write; ch; 1
> rts r5
>
> Though it looks like the C wrappers use the "indirect" system call which
> reads a "fake" trap instruction from the data segment. Looking at the
> implementation of that, my question is answered:
>
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/sys/trap.c
> if (callp == sysent) { /* indirect */
> a = (int *)fuiword((caddr_t)(a));
> pc++;
> i = fuword((caddr_t)a);
> a++;
> if ((i & ~077) != SYS)
> i = 077; /* illegal */
> callp = &sysent[i&077];
> fetch = fuword;
> } else {
> pc += callp->sy_narg - callp->sy_nrarg;
> fetch = fuiword;
> }
>
> http://minnie.tuhs.org/TUHS/Archive/PDP-11/Trees/V7/usr/man/man2/indir.2
> The main purpose of indir is to allow a program to
> store arguments in system calls and execute them
> out of line in the data segment.
> This preserves the purity of the text segment.
>
> Note also the difference between V2 and V5 libc - clearly support for
> split I/D machines was added some time in this interval.
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V2/lib/write.s
> http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/source/s4/write.s
Quoting Dan Cross <crossd(a)gmail.com>:
> On Tue, Jan 6, 2015 at 12:33 PM, Johnny Billquist <bqt(a)update.uu.se> wrote:
>
>> On 2015-01-06 17:56, Dan Cross wrote:
>>>
>>> I believe that Mary Ann is referring to repeatedly looking up
>>> (presumably different) elements in the entry. Assuming that e.g. `vi`
>>> looks up O(n) elements, where $n$ is the number of elements, doing a
>>> linear scan for each, you'd end up with quadratic behavior.
>>>
>>
>> Assuming that you'd look up all the elements of the termcap entry at
>> startup, and did each one from scratch, yes.
>
>
> Yes. Isn't that exactly what Mary Ann said was happening? :-)
Yes
> But that would beg the question, why is vi doing a repeated scan of the
>> terminal entry at startup, if not to find all the capabilities and store
>> this somewhere? And if doing a look for all of them, why not scan the
>> string from start to finish and store the information as it is found? At
>> which point we move from quadratic to linear time.
>>
>
> I don't think she said it did things intelligently, just that that's how it
> did things. :-)
>
> But now we're getting into the innards of vi, which I never looked at
> anyway, and I guess is less relevant in this thread anyway.
vi does indeed look up all the various capabilities it will need,
once, when it starts up. It uses the documented interface, which
is tgetent followed by tgetstr/tgetnum/tgetflag for each capability.
tgetent did a linear search.
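
A minimal sketch of that startup pattern, using the classic termcap calls named
above (the capability codes shown are just the usual examples): with the old
library, each tget*() call rescanned the entry text from the beginning, so n
lookups over an entry whose length grows with n cost O(n^2) overall.

#include <stdio.h>
#include <stdlib.h>
#include <termcap.h>	/* link with -ltermcap (or -lcurses/-lncurses) */

int main(void)
{
	char entry[1024];		/* termcap entries were limited to 1KB */
	char strbuf[1024], *area = strbuf;
	char *term = getenv("TERM");

	if (term == NULL || tgetent(entry, term) != 1) {
		fprintf(stderr, "no termcap entry\n");
		return 1;
	}

	/* Each of these walks the entry from the beginning looking for its
	 * two-character code; vi did dozens of such lookups at startup. */
	char *cl = tgetstr("cl", &area);	/* clear screen */
	char *cm = tgetstr("cm", &area);	/* cursor motion */
	int co = tgetnum("co");			/* columns */
	int am = tgetflag("am");		/* automatic margins */

	printf("co=%d am=%d cl=%s cm=%s\n", co, am,
	       cl ? "yes" : "no", cm ? "yes" : "no");
	return 0;
}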
>> The short of it (from what I got out of it) is that the move from termcap
>> to terminfo was mostly motivated by the attribute names moving away from
>> fixed two-character names.
>>
>> A secondary motivation would be performance, but I don't really buy that
>> one. Since we only moved to terminfo on systems with plenty of memory,
>> performance of termcap could easily be on par anyway.
>>
>
> I tend to agree with you and I'll go one further: it seems that frequently
> we tend to identify a problem and then go to 11 with the "solution." I can
> buy that termcap performance was an issue; I don't know that going directly
> to hashed terminfo files was the optimal solution. A dbm file of termcap
> data and a hash table in whatever library parsed termcap would go a long
> way towards fixing the performance issues. Did termcap have to be
> discarded just to add longer names? I kind of tend to doubt it, but I
> wasn't there and don't know what the design criteria were, so my
> very-much-after-the-fact second guessing is just that.
It's been 30+ years, so the memory is a little fuzzy. But as I recall,
I did measure the performance and that's how I discovered that the
quadratic algorithm was causing a big performance hit on the hardware
available at the time (I think I was on a VAX 11/750, this would have
been about 1982.)
I was making several improvements at the same time. The biggest one
was rewriting curses to improve the screen update optimization, so it
would use insert/delete line/character on terminals supporting it.
Cleaning up the mess termcap had become (the format had become horrible
to update, and I was spending a lot of time making updates with all
the new terminals coming out) and improving startup time (curses also
had to read in a lot of attributes) were part of an overall cleanup.
IIRC, there may also have been some restrictions on parameters to string
capabilities that needed to be generalized.
Hashing could have been done differently, using DBM or some other method.
In fact, I'd used DBM to hash /etc/aliases in delivermail years earlier
(I have an amusing story about the worlds slowest email I'll tell some
other time) but at the time, it seemed best to break with termcap
and go with a cleaner format.
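
As a sketch of that alternative (keep the termcap text format, but index it
with DBM), using the ndbm interface; the database file name and the sample
vt100 entry here are illustrative only, not anything that shipped.

#include <ndbm.h>	/* link against the system's ndbm/gdbm-compat library */
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

static datum mkdatum(const char *s, size_t len)
{
	datum d;
	d.dptr = (void *)s;
	d.dsize = len;
	return d;
}

int main(void)
{
	DBM *db = dbm_open("termcap", O_RDWR | O_CREAT, 0644);
	if (db == NULL)
		return 1;

	const char *name  = "vt100";
	const char *entry =
	    "vt100|dec vt100:am:co#80:li#24:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:";

	/* Build step: one record per terminal name (aliases would get their
	 * own keys pointing at the same entry text). */
	dbm_store(db, mkdatum(name, strlen(name)),
	              mkdatum(entry, strlen(entry) + 1), DBM_REPLACE);

	/* Lookup step: replaces the linear scan of /etc/termcap with one
	 * hashed fetch. */
	datum val = dbm_fetch(db, mkdatum(name, strlen(name)));
	printf("%s\n", val.dptr ? (char *)val.dptr : "not found");

	dbm_close(db);
	return 0;
}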
On 2015-01-01 17:11, Mary Ann Horton<mah(a)mhorton.net> wrote:
>
> The move was from termcap to terminfo. Termlib was the library for termcap.
Doh! Thanks for the correction. Finger fart.
> There were two problems with termcap. One was that the two-character
> name space was running out of room, and the codes were becoming less and
> less mnemonic.
Ah. Yes, that could definitely be a problem.
> But the big motivator was performance. Reading in a termcap entry from
> /etc/termcap was terribly slow. First you had to scan all the way
> through the (ever-growing) termcap file to find the correct entry. Once
> you had it, every tgetstr (etc) had to scan from the beginning of the
> entry, so starting up a program like vi involved quadratic performance
> (time grew as the square of the length of the termcap entry.) The VAX
> 11/780 was a 1 MIPS processor (about the same as a 1 MHz Pentium) and
> was shared among dozens of timesharing users, and some of the other
> machines of the day (750 and 730 Vaxen, PDP-11, etc.) were even slower.
> It took forever to start up vi or more or any of the termcap-based
> programs people were using a lot.
Hum. That seems like it would be more of an implementation issue. Why
wouldn't you just cache all the entries for the terminal once and for
all? terminfo never came to 16-bit systems anyway, so we're talking
about systems with plenty of memory. Caching the terminal information
would not be a big memory cost.
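
Concretely, the caching being suggested could look something like this sketch
(the parsing and the fixed-size table are purely illustrative, not the real
library): scan the already-fetched entry once, left to right, remembering each
capability as it is found, so later lookups never rescan the text.

#include <stdio.h>
#include <string.h>

struct cap { char code[3]; const char *value; };	/* "xx" -> value text */

#define MAXCAPS 128
static struct cap table[MAXCAPS];
static int ncaps;

/* One linear pass over ":co#80:cl=\E[H\E[J:am:..."-style text.  strtok()
 * writes into the buffer and the cached value pointers point into it, so
 * the entry buffer must stay alive. */
static void cache_entry(char *entry)
{
	char *p = strchr(entry, ':');	/* skip the terminal-name field */
	if (p == NULL)
		return;
	for (p = strtok(p + 1, ":"); p != NULL && ncaps < MAXCAPS;
	     p = strtok(NULL, ":")) {
		if (strlen(p) < 2)
			continue;
		struct cap *c = &table[ncaps++];
		c->code[0] = p[0];
		c->code[1] = p[1];
		c->code[2] = '\0';
		c->value = p + 2;	/* "", "#80", or "=...": decoded on use */
	}
}

/* Lookups now touch the small cached table, not the entry text. */
static const char *lookup(const char *code)
{
	for (int i = 0; i < ncaps; i++)
		if (strcmp(table[i].code, code) == 0)
			return table[i].value;
	return NULL;
}

int main(void)
{
	char entry[] = "vt100|dec vt100:am:co#80:li#24:cl=\\E[H\\E[J:";
	cache_entry(entry);
	printf("co -> %s\n", lookup("co") ? lookup("co") : "missing");
	return 0;
}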
Thanks for the insight.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol