Hi folks,
I was digging around trying to figure out which Unixes would run on a
PDP-11 with QBUS. It seems that the very early stuff like v5 was
strictly UNIBUS and that the first version of Unix that supported QBUS
was v7m (please correct me if this is wrong).
I was thinking that the MicroPDP-11s were all QBUS and that it would
be easier to run a Unix on a MicroPDP because they are the most
compact. So I figured I would try to obtain a Unix v7m distribution
tape image. I see the Jean Huens files on tuhs but I'm not sure what
to do with them.
I have hopes to eventually run a Unix on real hardware but for now I'm
going to stick with simh. It seems like DEC just didn't make a desktop
that could run Bell Labs Unix, e.g. we can't just grab a DEC Pro-350
and stick Unix v7 on it. Naturally I'll still have fun checking out
Unix v5 on the emulator but it would be nice to eventually run a Unix
with all the source code at hand on a real machine.
Mark
Many Q-bus devices were indeed programmed exactly as if
on a UNIBUS. This isn't surprising: Digital wanted their
own operating systems to port easily as well.
That won't help make UNIX run on a Pro-350 or Pro-380,
though. Those systems had standard single-chip PDP-11
CPUs (F11, like that in the 11/23, for the 350; J11,
like that in the 11/73, for the 380), but they didn't
have a Q-bus; they used the CTI (`computing terminal
interconnect'), a bus used only for the Pro-series
systems. DEC's operating systems wouldn't run on
the Pro either without special hacks. I think the
P/OS, the standard OS shipped with those systems, was
a hacked-up RSX-11M. I don't know whether there was
ever an RT-11 for the Pro. There were UNIX ports but
they weren't just copies of stock V7.
I vaguely remember, from my days at Caltech > 30 years
ago, helping someone get a locally-hacked-up V7
running on an 11/24, the same as an 11/23 except it
has a UNIBUS instead of a Q-bus. I don't think they
chose the 11/24 over the 11/23 to make it easier to
get UNIX running; probably it had something to do with
specific peripherals they wanted to use. It was a
long time ago and I didn't keep notebooks back then,
so the details may be unrecoverable.
Norman Wilson
Toronto ON
>> the downstream process is in the middle of a read call (waiting for
>> more data to be put in the pipe), and it has already computed a pointer
>> to the pipe's inode, and it's looping waiting for that inode to have
>> data.
> I think it would be necessary to make non-trivial adjustments to the
> pipe and file reading/writing code to make this work; either i) some
> sort of flag bit to say 'you've been spliced, take appropriate action'
> which the pipe code would have to check on being woken up, and then
> back out to let the main file reading/writing code take another crack
> at it
> ...
> I'm not sure I want to do the work to make this actually work - it's
> not clear if anyone is really that interested? And it's not something
> that I'm interested in having for my own use.
So I decided that it was silly to put all that work into this, and not get it
to work. I did 'cut a corner', by not handling the case where it's the first
or last process which is bailing (which requires a file-pipe splice, not a
pipe-pipe; the former is more complex); i.e. I was just doing a 'working proof
of concept', not a full implementation.
I used the 'flag bit on the inode' approach; the pipe-pipe case could be dealt
with entirely inside pipe.c/readp(). Here's the added code in readp() (at the
loop start):
if ((ip->i_flag & ISPLICE) != 0) {
        closei(ip, 0);
        ip = rp->f_inode;
}
It worked first time!
In more detail, I had written a 'splicetest' program that simply passed input
to its output, looking for a line with a single keyword ("stop"); at that
point, it did a splice() call and exited. When I did "cat input | splicetest
| cat > output", with appropriate test data in "input", all of the test data
(less the "stop" line) appeared in the output file!
For the first time (AFAIK) a process successfully departed a pipeline, which
continued to operate! So it is do-able. (If anyone has any interest in the
code, let me know.)
Noel
Hi folks,
I've been typing sync;sync at the shell prompt, then hitting ctrl-e to
get out of simh, to shut down v5 and v6 unix.
So far this has worked fairly well but I was wondering if there might
be a better way to do a shutdown on early unix.
There's a piece of code for Unix v7 that I came across for doing a shutdown:
http://www.maxhost.org/other/shutdown.c
It doesn't work on pre-v7 unix, but maybe it could be modified to work?
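For the pre-v7 case, the essential steps are small enough to sketch. This is modern C, not the V5/V6 dialect, and the dry_run flag is purely an illustration device; a real version would run as root and end by halting the machine rather than returning:

```c
#include <signal.h>
#include <unistd.h>

/* Sketch of a minimal pre-V7 shutdown: signal every other process,
 * flush the buffer cache, and give the I/O time to drain.  The exact
 * kill(-1, ...) semantics and calls available differ on V5/V6. */
int quiesce(int dry_run)
{
	if (!dry_run)
		kill(-1, SIGKILL);	/* as root: kill all other processes */
	sync();				/* queue dirty blocks to the drivers */
	sleep(1);			/* let the scheduled I/O complete */
	sync();				/* by now effectively a no-op */
	return 0;
}
```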
Mark
> the downstream process is in the middle of a read call (waiting for
> more data to be put in the pipe), and it has already computed a pointer
> to the pipe's inode, and it's looping waiting for that inode to have
> data.
> So now I have to regroup and figure out how to deal with that. My most
> likely approach is to copy the inode data across
So I've had a good look at the pipe code, and it turns out that the simple
hack won't work, for two reasons.
First, the pipe on the _other_ side of the middle process is _also_ probably
in the middle of a write call, and so you can't snarf its inode out from
underneath it. (This whole problem reminds me of 'musical chairs' - I just
want the music to stop so everything will go quiet so I can move things
around! :-)
Second, if the process that wants to close down and do a splice is either the
start or end process, its neighbour is going to go from having a pipe to
having a plain file - and the pipe code knows the inode for a pipe has two
users, etc.
So I think it would be necessary to make non-trivial adjustments to the pipe
and file reading/writing code to make this work; either i) some sort of flag
bit to say 'you've been spliced, take appropriate action' which the pipe code
would have to check on being woken up, and then back out to let the main file
reading/writing code take another crack at it, or ii) perhaps some sort of
non-local goto to forcefully back out the call to readp()/writep(), back to
the start of the read/write sequence.
(Simply terminating the read/write call will not work, I think, because that
will often, AFAICT, return with 0 bytes transferred, which will look like an
EOF, etc; so the I/O will have to be restarted.)
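As an aside, option ii) has precedent: the V6 kernel's savu(u.u_qsav) mechanism already does a non-local transfer to back out of interrupted sleeps. A user-space toy of the same shape, using setjmp/longjmp and with the 'inode' reduced to an integer, might look like:

```c
#include <setjmp.h>

/* Toy model of option ii): the blocked reader is backed out of
 * readp() with a non-local goto, and the whole read is restarted on
 * the new (spliced) source.  setjmp/longjmp stands in for the
 * kernel's savu/aretu; the 'inode' here is just an integer. */
jmp_buf restart_read;
int spliced;		/* set when the upstream pipe has been spliced */
int source = 1;		/* which "inode" we are currently reading */

int readp_model(void)
{
	if (spliced) {
		spliced = 0;
		source = 2;			/* pick up the new inode... */
		longjmp(restart_read, 1);	/* ...and back out to retry */
	}
	return source;				/* "data" from the current source */
}

int read_with_restart(void)
{
	setjmp(restart_read);		/* top of the read/write sequence */
	return readp_model();
}
```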
I'm not sure I want to do the work to make this actually work - it's not
clear if anyone is really that interested? And it's not something that I'm
interested in having for my own use.
Anyway, none of this is in any way a problem with the fundamental service
model - it's purely kernel implementation issues.
Noel
Ok, this is cheating a bit, but I was wondering if I could possibly
compile my unix v6 version of unirubik, which has working file IO,
and run it under unix v5.
At first I couldn't figure out how to send a binary from unix v6 to
unix v5 but I did some experimenting and found:
tp m1r unirubik
which would output unirubik to mag tape #1 and
tp m1x unirubik
which would input unirubik from mag tape #1.
I don't know what cc does exactly but I thought "well if it compiles
to PDP-11 machine code and it's statically linked it could work". And
it actually does work!
I still want to try to get unirubik to compile under Unix v5 cc but
it's interesting that a program that uses iolib functions can work
under unix v5.
Mark
> From: Norman Wilson <norman(a)oclsc.org>
> I believe that when sync(2) returned, all unflushed I/O had been queued
> to the device driver, but not necessarily finished
Yes. I have just looked at update() (the internal version of 'sync') again,
and it does three things: writes out super-blocks, any modified inodes, and
(finally) any cached disk blocks (in that order).
In all three cases, the code calls (either directly or indirectly) bwrite(),
the exact operation of which (wait for completion, or merely schedule the
operation) on any given buffer depends on the flag bits on that buffer.
In at least one of the cases (the third), it sets the 'ASYNC' bit on the
buffer, i.e. it doesn't wait for the I/O to complete, merely schedules it.
For the first two, though, it looks like it probably waits.
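The three passes, and the synchronous-vs-scheduled distinction in bwrite(), can be modeled in miniature; the structures and flag value below are simplified stand-ins, not the real V6 declarations:

```c
/* Toy model of update()/bwrite(): super-blocks and modified inodes
 * are written synchronously, cached data blocks are marked ASYNC and
 * merely scheduled.  B_ASYNC's value here is illustrative, not the
 * real buf.h constant. */
#define B_ASYNC	01

struct buf { int b_flags; int started; int waited; };

void bwrite_model(struct buf *bp)
{
	bp->started = 1;		/* hand the buffer to the driver */
	if ((bp->b_flags & B_ASYNC) == 0)
		bp->waited = 1;		/* iowait(): sleep until the I/O is done */
}

void update_model(struct buf *sb, struct buf *ino, struct buf *data)
{
	bwrite_model(sb);		/* pass 1: super-blocks, synchronous */
	bwrite_model(ino);		/* pass 2: modified inodes, synchronous */
	data->b_flags |= B_ASYNC;	/* pass 3: cached blocks, scheduled only */
	bwrite_model(data);
}
```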
> so the second sync was just a time-filling no-op. If all the disks were
> in view, it probably sufficed just to watch them until all the lights
> ... had stopped blinking.
Yes. If the system is single-user and you say 'sync', then wait a bit for
the I/O to complete, any later syncs won't actually do anything.
I don't know of any programmatic way to make sure that all the disk I/O has
completed (although obviously one could be written); even the 'unmount' call
doesn't check to make sure all the I/O is completed (it just calls update()).
Watching the lights was as good as anything.
> I usually typed sync three or four times myself.
I usually just type it once, wait a moment, and then halt the machine. I've
never experienced disk corruption from so doing.
With modern ginormous disk caches, you might have to wait more than a moment,
but we're talking older machines here...
Noel
After a day and an evening of fighting with modern hardware,
the modern tangle that passes for UNIX nowadays, and modern
e-merchandising, I am too lazy to go look up the details.
But as I remember it, two syncs was indeed probably enough.
I believe that when sync(2) returned, all unflushed I/O had
been queued to the device driver, but not necessarily finished,
so the second sync was just a time-filling no-op. If all the
disks were in view, it probably sufficed just to watch them
until all the lights (little incandescent bulbs in those days,
not LEDs) had stopped blinking.
I usually typed sync three or four times myself. It gave me
a comfortable feeling (the opposite of a syncing feeling, I
suppose). I still occasionally type `sync' to the shell as
a sort of comfort word while thinking about what I'm going
to do next. Old habits die hard.
(sync; sync; sync)
Norman Wilson
Toronto ON
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Process A spawns process B, which reads stdin with buffering. B gets
> all it deserves from stdin and exits. What's left in the buffer,
> intended for A, is lost.
Ah. Got it.
The problem is not with buffering as a generic approach; the problem is that
you're trying to use a buffering package intended for simple, straightforward
situations in one which doesn't fall into that category! :-)
Clearly, either B has to i) be able to put back data which was not for it
('ungets' as a system call), or ii) not read the data that's not for it - but
that may be incompatible with the concept of buffering the input (depending
on the syntax, and thus on the ability to predict the approach of the data B
wants, the only way to avoid the need for ungetc() might be to read a byte at
a time).
If B and its upstream (U) are written together, that could be another way to
deal with it: if U knows where B's syntactic boundaries are, it can give it
advance warning, and B could then use a non-trivial buffering package to do
the right thing. E.g. if U emits 'records' with a header giving the record
length X, B could tell its buffering package 'don't read ahead more than X
bytes until I tell you to go ahead with the next record'.
Of course, that's not a general solution; it only works with prepared U's.
Really, the only general, efficient way to deal with that situation that I can
see is to add 'ungets' to the operating system...
Noel