Arrow keys? Vi does arrow keys? But then I'd have to move my hand from home.
That's not vi.
--
Ed Skinner, ed(a)flat5.net, http://www.flat5.net/
-------- Original message --------
From: Doug McIlroy <doug(a)cs.dartmouth.edu>
Date: 08/02/2014 8:38 AM (GMT-07:00)
To: tuhs@minnie.tuhs.org,lm@mcvoy.com
Subject: Re: [TUHS] Unix taste
> So Doug, ed? Or what?
Yes, ed for small things. It loads instantly and works in the
current window without disturbing it. And it has been ingrained
in my fingers since Multics days.
But for heavy duty work, I use sam, in Windows as well as Linux.
Sam marries ed to screen editing much more cleanly than vi.
It has recursive global commands and infinite undo. Like qed
(whence came ed's syntax) and Larry's xvi it can work on
several files (or even several areas in one file) at once.
I would guess that a vi adept would miss having arrow keys
as well as the mouse, but probably not much else. Sam offers
one answer for my question about examples of taste reining
in featurism during the course of Unix evolution.
Doug
_______________________________________________
TUHS mailing list
TUHS(a)minnie.tuhs.org
https://minnie.tuhs.org/mailman/listinfo/tuhs
> From: Dave Horsfall <dave(a)horsfall.org>
> I recall that there were other differences as well, but only minor. My
> paper in AUUGN titled "Unix on the LSI-11/23" reveals all
> about porting V6 to the thing.
I did a google for that, but couldn't find it. Is it available anywhere
online? (I'd love to read it.) I seem to recall vaguely that the AUUGN stuff was
online, but if so, I'm not sure why the search didn't turn it up.
> I vaguely remember that the LTC had to be disabled during the boot
> process, for example, with an external switch.
I think you might be right, which means the simulated 11/23 I tested on
wasn't quite right - but keep reading!
I remember being worried about this when I started doing the V6 11/23 version
a couple of months back, because I remembered the 11/03's didn't have a
programmable clock, just a switch. So I was reading through the 11/23
documentation (I had used 11/23s, but on this point my memory had faded),
trying to see if they too did not have a programmable clock.
As best I can currently make out, the answer is 'yes/no, depending on the
exact model'! E.g. the 11/23-PLUS _does_ seem to have a programmable clock
(see pg. 610 of the 1982 edition of "microcomputers and memories"), but the
base 11/23 _apparently_ does not.
Anyway, the simulated 11/23 (on Ersatz11) does have the LTC (I just checked,
and 'lks' contains '0177546', so it thinks it has one :-).
But this will be easy to code around; if no line clock is found (in main.c),
I'd probably set 'lks' to point somewhere harmless (054, say - I'm using
050/052 to hold the pointer to the CSW, and the software CSW if there isn't a
hardware one). That way I can limit the changes to be in main.c, I won't have
to futz with clock.c too.
Noel
PS: On at least the 11/40 (and maybe the /45 too), the line clock was an
option! It was a single-height card, IIRC.
> From: Mark Longridge <cubexyz(a)gmail.com>
> I was digging around trying to figure out which Unixes would run on a
> PDP-11 with QBUS. It seems that the very early stuff like v5 was
> strictly UNIBUS and that the first version of Unix that supported QBUS
> was v7m (please correct me if this is wrong).
That may or may not be true; let me explain. The 11/23 is almost
indistinguishable, in programming terms, from an 11/40. There is only one
very minor difference (which UNIX would care about) that I know of - the
11/23 does not have a hardware switch register.
Yes, UNIBUS devices can't be plugged into a QBUS, and vice versa, _but_ i)
there are programming-compatible QBUS versions of many UNIBUS devices, and ii)
there were UNIBUS-QBUS converters which actually allowed a QBUS processor to
have UNIBUS peripherals.
So I don't know which version of Unix was the first to run on an 11/23 - but it
could have been almost any.
It is quite possible to run V6 on an 11/23, provided you make a very small
number of very minor changes, to avoid use of the CSWR. I have done this, and
run V6 on a simulated 11/23 (I have a short note explaining what one needs to
do, if anyone is interested.) Admittedly, this is not the same as running it
on a real 11/23, but I see no reason the latter would not be doable.
I had started in on the work needed to get V6 running on a real 11/23, which
was the (likely) need to load Unix into the machine over a serial line. WKT
has done this for V7:
http://www.tuhs.org/Archive/PDP-11/Tools/Tapes/Vtserver/
but it needs a little tweaking for V6; I was about to start in on that.
> I have hopes to eventually run a Unix on real hardware
As do a lot of us... :-)
> It seems like DEC just didn't make a desktop that could run Bell Labs
> Unix, e.g. we can't just grab a DEC Pro-350 and stick Unix v7 on it.
I'm not sure about that; I'd have to check into the Pro-350. If it has memory
mapping, it should not be hard.
Also, even if it doesn't have memory mapping, there was a Mini-Unix done for
PDP-11's without memory mapping; I can dig up some URLs if you're interested.
The feeling is, I gather, very similar.
> it would be nice to eventually run a Unix with all the source code at
> hand on a real machine.
Having done that 'back in the day', I can assure you that it doesn't feel
that different from the simulated experience (except that the latter is
noticeably faster :-).
In fact, even if/when I do have a real 11, I'll probably still mostly use the
simulator, for a variety of reasons; e.g. the ability to edit source with a
nice modern editor, etc, etc is just too nice to pass up! :-)
Noel
Hi folks,
I was digging around trying to figure out which Unixes would run on a
PDP-11 with QBUS. It seems that the very early stuff like v5 was
strictly UNIBUS and that the first version of Unix that supported QBUS
was v7m (please correct me if this is wrong).
I was thinking that the MicroPDP-11's were all QBUS and that it would
be easier to run a Unix on a MicroPDP because they are the most
compact. So I figured I would try to obtain a Unix v7m distribution
tape image. I see the Jean Huens files on tuhs but I'm not sure what
to do with them.
I have hopes to eventually run a Unix on real hardware but for now I'm
going to stick with simh. It seems like DEC just didn't make a desktop
that could run Bell Labs Unix, e.g. we can't just grab a DEC Pro-350
and stick Unix v7 on it. Naturally I'll still have fun checking out
Unix v5 on the emulator but it would be nice to eventually run a Unix
with all the source code at hand on a real machine.
Mark
Many Q-bus devices were indeed programmed exactly as if
on a UNIBUS. This isn't surprising: Digital wanted their
own operating systems to port easily as well.
That won't help make UNIX run on a Pro-350 or Pro-380,
though. Those systems had standard single-chip PDP-11
CPUs (F11, like that in the 11/23, for the 350; J11,
like that in the 11/73, for the 380), but they didn't
have a Q-bus; they used the CTI (`computing terminal
interconnect'), a bus used only for the Pro-series
systems. DEC's operating systems wouldn't run on
the Pro either without special hacks. I think the
P/OS, the standard OS shipped with those systems, was
a hacked-up RSX-11M. I don't know whether there was
ever an RT-11 for the Pro. There were UNIX ports but
they weren't just copies of stock V7.
I vaguely remember, from my days at Caltech > 30 years
ago, helping someone get a locally-hacked-up V7
running on an 11/24, the same as an 11/23 except it
has a UNIBUS instead of a Q-bus. I don't think they
chose the 11/24 over the 11/23 to make it easier to
get UNIX running; probably it had something to do with
specific peripherals they wanted to use. It was a
long time ago and I didn't keep notebooks back then,
so the details may be unrecoverable.
Norman Wilson
Toronto ON
>> the downstream process is in the middle of a read call (waiting for
>> more data to be put in the pipe), and it has already computed a pointer
>> to the pipe's inode, and it's looping waiting for that inode to have
>> data.
> I think it would be necessary to make non-trivial adjustments to the
> pipe and file reading/writing code to make this work; either i) some
> sort of flag bit to say 'you've been spliced, take appropriate action'
> which the pipe code would have to check on being woken up, and then
> back out to let the main file reading/writing code take another crack
> at it
> ...
> I'm not sure I want to do the work to make this actually work - it's
> not clear if anyone is really that interested? And it's not something
> that I'm interested in having for my own use.
So I decided that it was silly to put all that work into this, and not get it
to work. I did 'cut a corner', by not handling the case where it's the first
or last process which is bailing (which requires a file-pipe splice, not a
pipe-pipe; the former is more complex); i.e. I was just doing a 'working proof
of concept', not a full implementation.
I used the 'flag bit on the inode' approach; the pipe-pipe case could be dealt
with entirely inside pipe.c/readp(). Here's the added code in readp() (at the
loop start):
	if ((ip->i_flag & ISPLICE) != 0) {
		closei(ip, 0);
		ip = rp->f_inode;
	}
It worked first time!
In more detail, I had written a 'splicetest' program that simply passed input
to its output, looking for a line with a single keyword ("stop"); at that
point, it did a splice() call and exited. When I did "cat input | splicetest
| cat > output", with appropriate test data in "input", all of the test data
(less the "stop" line) appeared in the output file!
For the first time (AFAIK) a process successfully departed a pipeline, which
continued to operate! So it is do-able. (If anyone has any interest in the
code, let me know.)
Noel
Hi folks,
I've been typing sync;sync at the shell prompt then hitting ctrl-e to
get out of simh to shutdown v5 and v6 unix.
So far this has worked fairly well but I was wondering if there might
be a better way to do a shutdown on early unix.
There's a piece of code for Unix v7 that I came across for doing a shutdown:
http://www.maxhost.org/other/shutdown.c
It doesn't work on pre-v7 unix, but maybe it could be modified to work?
Mark
> the downstream process is in the middle of a read call (waiting for
> more data to be put in the pipe), and it has already computed a pointer
> to the pipe's inode, and it's looping waiting for that inode to have
> data.
> So now I have to regroup and figure out how to deal with that. My most
> likely approach is to copy the inode data across
So I've had a good look at the pipe code, and it turns out that the simple
hack won't work, for two reasons.
First, the pipe on the _other_ side of the middle process is _also_ probably
in the middle of a write call, and so you can't snarf its inode out from
underneath it. (This whole problem reminds me of 'musical chairs' - I just
want the music to stop so everything will go quiet so I can move things
around! :-)
Second, if the process that wants to close down and do a splice is either the
start or end process, its neighbour is going to go from having a pipe to
having a plain file - and the pipe code knows the inode for a pipe has two
users, etc.
So I think it would be necessary to make non-trivial adjustments to the pipe
and file reading/writing code to make this work; either i) some sort of flag
bit to say 'you've been spliced, take appropriate action' which the pipe code
would have to check on being woken up, and then back out to let the main file
reading/writing code take another crack at it, or ii) perhaps some sort of
non-local goto to forcefully back out the call to readp()/writep(), back to
the start of the read/write sequence.
(Simply terminating the read/write call will not work, I think, because that
will often, AFAICT, return with 0 bytes transferred, which will look like an
EOF, etc; so the I/O will have to be restarted.)
I'm not sure I want to do the work to make this actually work - it's not
clear if anyone is really that interested? And it's not something that I'm
interested in having for my own use.
Anyway, none of this is in any way a problem with the fundamental service
model - it's purely kernel implementation issues.
Noel
Ok, this is cheating a bit but I was wondering if I could possibly
compile my unix v6 version of unirubik which has working file IO and
run it under unix v5.
At first I couldn't figure out how to send a binary from unix v6 to
unix v5 but I did some experimenting and found:
tp m1r unirubik
which would output unirubik to mag tape #1 and
tp m1x unirubik
which would input unirubik from mag tape #1.
I don't know what cc does exactly but I thought "well if it compiles
to PDP-11 machine code and it's statically linked it could work". And
it actually does work!
I still want to try to get unirubik to compile under Unix v5 cc but
it's interesting that a program that uses iolib functions can work
under unix v5.
Mark
> From: Norman Wilson <norman(a)oclsc.org>
> I believe that when sync(2) returned, all unflushed I/O had been queued
> to the device driver, but not necessarily finished
Yes. I have just looked at update() (the internal version of 'sync') again,
and it does three things: writes out super-blocks, any modified inodes, and
(finally) any cached disk blocks (in that order).
In all three cases, the code calls (either directly or indirectly) bwrite(),
the exact operation of which (wait for completion, or merely schedule the
operation) on any given buffer depends on the flag bits on that buffer.
In at least one of the cases (the third), it sets the 'ASYNC' bit on the buffer,
i.e. it doesn't wait for the I/O to complete, merely schedules it. For the
first two, though, it looks like it probably waits.
> so the second sync was just a time-filling no-op. If all the disks were
> in view, it probably sufficed just to watch them until all the lights
> ... had stopped blinking.
Yes. If the system is single-user, and you say 'sync', if you wait a bit for
the I/O to complete, any later syncs won't actually do anything.
I don't know of any programmatic way to make sure that all the disk I/O has
completed (although obviously one could be written); even the 'unmount' call
doesn't check to make sure all the I/O is completed (it just calls update()).
Watching the lights was as good as anything.
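(For what it's worth, later systems did grow such a programmatic check, at least per file: fsync(2) does not return until that file's queued blocks have reached the device. A minimal modern sketch, nothing V6 had; the helper name is mine:)

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write data to path and do not report success until the kernel
 * says the blocks have reached the device: fsync(2) blocks until
 * the queued I/O for this one file has completed. */
int write_durably(const char *path, const char *data)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return -1;
	if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
```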
> I usually typed sync three or four times myself.
I usually just type it once, wait a moment, and then halt the machine. I've
never experienced disk corruption from so doing.
With modern ginormous disk caches, you might have to wait more than a moment,
but we're talking older machines here...
Noel
After a day and an evening of fighting with modern hardware,
the modern tangle that passes for UNIX nowadays, and modern
e-merchandising, I am too lazy to go look up the details.
But as I remember it, two syncs was indeed probably enough.
I believe that when sync(2) returned, all unflushed I/O had
been queued to the device driver, but not necessarily finished,
so the second sync was just a time-filling no-op. If all the
disks were in view, it probably sufficed just to watch them
until all the lights (little incandescent bulbs in those days,
not LEDs) had stopped blinking.
I usually typed sync three or four times myself. It gave me
a comfortable feeling (the opposite of a syncing feeling, I
suppose). I still occasionally type `sync' to the shell as
a sort of comfort word while thinking about what I'm going
to do next. Old habits die hard.
(sync; sync; sync)
Norman Wilson
Toronto ON
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> Process A spawns process B, which reads stdin with buffering. B gets
> all it deserves from stdin and exits. What's left in the buffer,
> intended for A, is lost.
Ah. Got it.
The problem is not with buffering as a generic approach; the problem is that
you're trying to use a buffering package intended for simple,
straightforward situations in one which doesn't fall into that category! :-)
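Doug's scenario is easy to reproduce on any current system; here is a sketch (file name and helper name are mine) measuring how far stdio's read-ahead drags the underlying file descriptor past what the program logically consumed - exactly the data a second consumer of that descriptor would never see:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* After reading just one line via stdio, report how many bytes the
 * underlying file descriptor has advanced beyond what was logically
 * consumed - i.e. how much would be lost to a process A that
 * inherits the descriptor after this one exits. */
long overread_after_one_line(const char *path)
{
	FILE *f = fopen(path, "r");
	char line[64];
	long fd_off, consumed;

	if (f == NULL || fgets(line, sizeof line, f) == NULL)
		return -1;
	fd_off = lseek(fileno(f), 0, SEEK_CUR);	/* where the fd really is */
	consumed = (long)strlen(line);		/* what we actually used */
	fclose(f);
	return fd_off - consumed;		/* buffered, hence "lost" */
}
```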
Clearly, either B has to i) be able to put back data which was not for it
('ungets' as a system call), or ii) not read the data that's not for it - but
that may be incompatible with the concept of buffering the input (depending
on the syntax, and thus the ability to predict where the data B wants
ends, the only way to avoid the need for ungetc() might be to read a byte at
a time).
If B and its upstream (U) are written together, that could be another way to
deal with it: if U knows where B's syntactic boundaries are, it can give it
advance warning, and B could then use a non-trivial buffering package to do
the right thing. E.g. if U emits 'records' with a header giving the record
length X, B could tell its buffering package 'don't read ahead more than X
bytes until I tell you to go ahead with the next record'.
Of course, that's not a general solution; it only works with prepared U's.
Really, the only general, efficient way to deal with that situation that I can
see is to add 'ungets' to the operating system...
Noel
>> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
>> The spec below isn't hard: just hook two buffer chains together and
>> twiddle a couple of file descriptors.
> In thinking about how to implement it, I was thinking that if there was
> any buffered data in an output pipe, that the process doing the
> splice() would wait (inside the splice() system call) on all the
> buffered data being read by the down-stream process.
> ...
> As a side-benefit, if one adopted that line, one wouldn't have to deal
> with the case (in the middle of the chain) of a pipe-pipe splice with
> buffered data in both pipes (where one would have to copy the data
> across); instead one could just use the exact same code for both cases
So a couple of days ago I suffered a Big Hack Attack and actually wrote the
code for splice() (for V6, of course :-).
It took me a day or so to get 'mostly' running. (I got tripped up by pointer
arithmetic issues in a number of places, because V6 declares just about
_everything_ to be "int *", so e.g. "ip + 1" doesn't produce the right value
for sleep() if ip is declared to be "struct inode *", which is what I did
automatically.)
My code only had one real bug so far (I forgot to mark the user's channels as
closed, which resulted in their file entries getting sub-zero usage counts
when the middle (departing) process exited).
However, now I have run across a real problem: I was just copying the system
file table entry for the middle process' input channel over to the entry for
the downstream's input (so further reads on its part would read the channel
the middle process used to be reading). Copying the data from one entry to
another meant I didn't have to go chase down file table pointers in the other
process' U structure, etc.
Alas, this simple approach doesn't work.
Using the approach I outlined (where the middle channel waits for the
downstream pipe to be empty, so it can discard it and do the splice by
copying the file table entries) doesn't work, because the downstream process
is in the middle of a read call (waiting for more data to be put in the
pipe), and it has already computed a pointer to the pipe's inode, and it's
looping waiting for that inode to have data.
So now I have to regroup and figure out how to deal with that. My most likely
approach is to copy the inode data across (so I don't have to go mess with the
downstream process to get it to go look at another inode), but i) I want to
think about it a bit first, and ii) I have to check that it won't screw
anything else up if I move the inode data to another slot.
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I was wondering if there might be a better way to do a shutdown on
> early unix.
Not really; I don't seem to recall our having one on the MIT V6 machine.
(We did add a 'reboot' system call so we could reboot the machine without
having to take the elevator up to the machine room [the console was on our
floor, and the reboot() call just jumped into the hardware bootstrap], but in
the source it doesn't even bother to do an update(). Well, I shouldn't say
that: I only have the source for the kernel, which doesn't; I don't at the
moment have access to the source for the rest of the system - although I do
have some full dump tapes, once I can work out how to read them. Anyway, so
maybe the user command for rebooting the system did a sync() first.)
I suppose you could set the switch register to 173030 and send a 'kill -1 1',
which IIRC kills off all shells except the one on the console, but somehow
I doubt you're running multi-user anyway... :-)
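(Absent a real shutdown command, the whole ritual fits in a few lines; a sketch, with sleep(3) standing in for "wait a moment" and the actual halt left to the console or simulator:)

```c
#include <unistd.h>

/* Flush-before-halt for early Unix: sync(2) only schedules the
 * writes, it does not wait for them, so pause before halting. */
int quiesce(void)
{
	sync();		/* queue superblocks, inodes, dirty blocks */
	sleep(1);	/* give the disk a moment to drain the queue */
	sync();		/* by now, effectively a no-op */
	return 0;
}
```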
Noel
>> the cp command seems different from all other versions, I'm not sure I
>> understand it so I used the mv command instead which worked as expected.
>
> I'm intrigued; in what way is it different?
It seems that one must first cp a file to another file then do a mv to
actually put it into a different directory:
e.g. while in /usr/src
as ctr0.s
cp a.out ctr0.o
mv ctr0.o /usr/lib
...rather than trying to just "cp a.out /usr/lib/ctr0.o"
Mark
Yes, an evil necessary to get things going.
The very definition of original sin.
Doug
Larry McVoy wrote:
>>>> For stdio, of course, one would need fsplice(3), which must flush the
>>>> in-process buffers--penance for stdio's original sin of said buffering.
>>> Err, why is buffering data in the process a sin? (Or was this just a
>>> humorous aside?)
>> Process A spawns process B, which reads stdin with buffering. B gets
>> all it deserves from stdin and exits. What's left in the buffer,
>> intended for A, is lost. Sinful.
> It really depends on what you want. That buffering is a big win for
> some use cases. Even on today's processors reading a byte at a time via
> read(2) is costly. Like 5000x more costly on the laptop I'm typing on:
> Err, why is buffering data in the process a sin? (Or was this just a
> humorous aside?)
Process A spawns process B, which reads stdin with buffering. B gets
all it deserves from stdin and exits. What's left in the buffer,
intended for A, is lost. Sinful.
> From: Doug McIlroy <doug(a)cs.dartmouth.edu>
> The spec below isn't hard: just hook two buffer chains together and
> twiddle a couple of file descriptors.
How amusing! I was about to send a message with almost the exact same
description - it even had the exact same syntax for the splice() call! A
couple of points from my thoughts which were not covered in your message:
In thinking about how to implement it, I was thinking that if there was any
buffered data in an output pipe, that the process doing the splice() would
wait (inside the splice() system call) on all the buffered data being read by
the down-stream process.
The main point of this is for the case where the up-stream is the head of the
chain (i.e. it's reading from a file), where one more or less has to wait,
because one will want to set the down-streams' file descriptor to point to
the file - but one can't really do that until all the buffered data was
consumed (else it will be lost - one can't exactly put it into the file :-).
As a side-benefit, if one adopted that line, one wouldn't have to deal with
the case (in the middle of the chain) of a pipe-pipe splice with buffered
data in both pipes (where one would have to copy the data across); instead
one could just use the exact same code for both cases, and in that case the
wait would be until the down-stream pipe can simply be discarded.
One thing I couldn't decide is what to do if the upstream is a pipe with
buffered data, and the downstream is a file - does one discard the buffered
data, write it to the file, abort the system call so the calling process can
deal with the buffered data, or what? Perhaps there could be a flag argument
to control the behaviour in such cases.
Speaking of which, I'm not sure I quite grokked this:
> If file descriptor fd0 is associated with a pipe and fd1 is not, then
> fd1 is updated to reflect the effect of buffered data for fd0, and the
> pipe's other descriptor is replaced with a duplicate of fd1.
But what happens to the data? Is it written to the file? (That's the
implication, but it's not stated directly.)
> The same statement holds when "fd0" is exchanged with "fd1" and "write"
> is exchanged with "read".
Ditto - what happens to the data? One can't simply stuff it into the input
file? I think the 'wait in the system call until it drains' approach is
better.
Also, it seemed to me that the right thing to do was to bash the entry in the
system-wide file table (i.e. not the specific pointers in the u area). That
would automatically pick up any children.
Finally, there are 'potential' security issues (I say 'potential' because I'm
not sure they're really problems). For instance, suppose that an end process
(i.e. reading/writing a file) has access to that file (e.g. because it
executed a SUID program), but its neighbour process does not. If the end
process wants to go away, should the neighbour process be allowed access to
the file? A 'simple' implementation would do so (since IIRC file permissions
are only checked at open time, not read/write time).
I don't pretend that this is a complete list of issues - just what I managed
to think up while considering the new call.
> For stdio, of course, one would need fsplice(3), which must flush the
> in-process buffers--penance for stdio's original sin of said buffering.
Err, why is buffering data in the process a sin? (Or was this just a
humorous aside?)
Noel
Larry wrote in separate emails
> If you really think that this could be done I'd suggest trying to
> write the man page for the call.
> I already claimed splice(2) back in 1998; the Linux guys did
> implement part of it ...
I began to write the following spec without knowing that Linux had
appropriated the name "splice" for a capability that was in DTSS
over 40 years ago under a more accurate name, "copy". The spec
below isn't hard: just hook two buffer chains together and twiddle
a couple of file descriptors. For stdio, of course, one would need
fsplice(3), which must flush the in-process buffers--penance for
stdio's original sin of said buffering.
Incidentally, the question is not abstract. I have code that takes
quadratic time because it grows a pipeline of length proportional
to the input, though only a bounded number of the processes are
usefully active at any one time; the rest are cats. Splicing out
the cats would make it linear. Linear approaches that don't need
splice are not nearly as clean.
Doug
SPLICE(2)
SYNOPSIS
int splice(int fd0, int fd1);
DESCRIPTION
Splice connects the source for a reading file descriptor fd0
directly to the destination for a writing file descriptor fd1
and closes both fd0 and fd1. Either the source or the destination
must be another process (via a pipe). Data buffered for fd0 at
the time of splicing follows such data for fd1. If both source
and destination are processes, they become connected by a pipe. If
the source (destination) is a process, the file descriptor
in that process becomes write-only (read-only).
If file descriptor fd0 is associated with a pipe and fd1 is not,
then fd1 is updated to reflect the effect of buffered data for fd0,
and the pipe's other descriptor is replaced with a duplicate of fd1.
The same statement holds when "fd0" is exchanged with "fd1" and
"write" is exchanged with "read".
Splice's effect on any file descriptor propagates to shared file
descriptors in all processes.
NOTES
One file must be a pipe lest the spliced data stream have no
controlling process. It might seem that a socket would suffice,
ceding control to a remote system; but that would allow the
uncontrolled connection file-socket-socket-file.
The provision about a file descriptor becoming either write-only or
read-only sidesteps complications due to read-write file descriptors.
> From: Dave Horsfall <dave(a)horsfall.org>
> crt0.s -> C Run Time (support). It jiggers the stack pointer in some
> obscure manner
It's the initial startup; it sets up the arguments into the canonical C form,
and then calls main(). (It does not set up the initial stack frame; the
canonical call to CSV from inside main() will do that.) Here are the exact details:
On an exec(), once the exec() returns, the arguments are available at the
very top of memory: the arguments themselves are at the top, as a sequence of
zero-terminated byte strings. Below them is an array of word pointers to the
arguments, with a -1 in the last entry. (I.e. if there are N arguments, the
array of pointers has N+1 entries, with the last being -1.) Below that is a
word containing the size of that array (i.e. N+1).
The Stack Pointer register points to that count word; all other registers
(including the PC) are cleared.
All CRT0.s does is move that argument count word down one location on the
stack, adjust the SP to point to it, and put a pointer to the argument
pointer table in the now-free word (between the argument count, and the first
element of the argument pointer table). Hence the canonical C main() argument
list of:
int argc;
char **argv;
If/when main() returns, it takes the return value (passed in r0) and calls
exit() with it. (If using the stdio library, that exit() flushes the buffers
and closes all open files.) Should _that_ return, it does a 'sys exit'.
There are two variant forms: fcrt0.s arranges for the floating point
emulation to be loaded, and hooked up; mcrt0.s (much more complicated)
arranges for process monitoring to be done.
Noel
Hi folks,
Yes I have managed to compile Hello World on v1/v2.
The cp command seems different from all other versions; I'm not sure I
understand it, so I used the mv command instead, which worked as
expected.
I had to "as crt0.s" and put crt0.o in /usr/lib and then it compiled
without issue.
Is the kernel in /etc? I saw a core file in /etc that looked like it
would be about the right size. No unix file in the root directory,
which surprised me.
At least I know what crt0.s does now. I guess a port of unirubik to
v1/v2 is in the cards (maybe).
Mark
Hi folks,
I'm interested in comparing notes with C programmers who have written
programs for Unix v5, v6 and v7.
Also I'm interested to know if there's anything similar to the scanf
function for unix v5. Stdio and iolib I know well enough to do file IO
but v5 predates iolib.
Back in 1988 I tried to write a universal rubik's cube program which I
called unirubik and after discovering TUHS I tried to backport it to
v7 (which was easy) and v6 (which was a bit harder) and now I'm trying
to backport it to v5. The v5 version currently doesn't have any
file IO capability as yet. Here are a few links to the various
versions:
http://www.maxhost.org/other/unirubik.c.v7
http://www.maxhost.org/other/unirubik.c.v6
http://www.maxhost.org/other/unirubik.c.v5
Also I've compiled the file utility from v6 in v5 and it seemed to
work fine. Once I got /dev/mt0 working for unix v5 (thanks to Warren's
help) I transferred the binary for the paging utility pg into it. This
version of pg I believe was from 1BSD.
I did some experimenting with math functions which can be seen here:
http://www.maxhost.org/other/math1.c
This will compile on unix v5.
My initial impression of Unix v5 was that it was a primitive and
almost unusable version of Unix, but now that I understand it a bit
better, it seems a fairly complete system. I'm a bit foggy on what the
memory limits are with v5 and v6. Unix v7 seems to run under simh
emulating a PDP-11/70 with 2 megabytes of ram (any more than that and
the kernel panics).
Also I'd be interested in seeing the source code for Ken Thompson's
APL interpreter for Unix v5. I know it does exist as it is referenced
in the Unix v5 manual. The earliest version I could find was dated Oct
1976 and I've written some notes on it here:
http://apl.maxhost.org/getting-apl-11-1976-to-work.txt
Ok, that's about it for now. Is there any chance of going further back
to v4, v3, v2 etc?
Mark
Here's the e-mail that I sent on to Mark in the hope that it would
give him enough information to get his 5th Edition kernel working
with a tape device. He has also now joined the list. Welcome aboard, Mark.
Warren
----- Forwarded message from Warren Toomey <wkt(a)tuhs.org> -----
On Thu, Jul 10, 2014 at 05:56:04PM -0400, Mark Longridge wrote:
> There was no m40.s in v5 so I substituted mch.s for m40.s and that
> seemed to create a kernel and it booted but I can't access /dev/mt0.
Mark, glad to hear you were able to rebuild the kernel. I've never tried
on 5th Edition. Just reading through the 6th Edition docs, it says this:
-----
Next you must put in all of the special files in the
directory /dev using mknod(VIII). Print the configuration
file c.c created above. This is the major device switch of
each device class (block and character). There is one line
for each device configured in your system and a null line
for place holding for those devices not configured. The
block special devices are put in first by executing the
following generic command for each disk or tape drive. (Note
that some of these files already exist in the directory
/dev. Examine each file with ls(I) with the -l flag to see
if the file should be removed.)
/etc/mknod /dev/NAME b MAJOR MINOR
The NAME is selected from the following list:
c.c   NAME   device
rf    rf0    RS fixed head disk
tc    tap0   TU56 DECtape
rk    rk0    RK03 RK05 moving head disk
tm    mt0    TU10 TU16 magtape
rp    rp0    RP moving head disk
hs    hs0    RS03 RS04 fixed head disk
hp    hp0    RP04 moving head disk
The major device number is selected by counting the line
number (from zero) of the device's entry in the block
configuration table. Thus the first entry in the table
bdevsw would be major device zero.
The minor device is the drive number, unit number or
partition as described under each device in section IV. The
last digit of the name (all given as 0 in the table above)
should reflect the minor device number. For tapes where the
unit is dial selectable, a special file may be made for each
possible selection.
The same goes for the character devices. Here the
names are arbitrary except that devices meant to be used for
teletype access should be named /dev/ttyX, where X is any
character. The files tty8 (console), mem, kmem, null are
already correctly configured.
The disk and magtape drivers provide a ‘raw’ interface
to the device which provides direct transmission between the
user’s core and the device and allows reading or writing
large records. The raw device counts as a character device,
and should have the name of the corresponding standard block
special file with ‘r’ prepended. Thus the raw magtape files
would be called /dev/rmtX.
When all the special files have been created, care
should be taken to change the access modes (chmod(I)) on
these files to appropriate values.
-----
Looking at the c.c generated, it has:
int (*bdevsw[])()
{
    &nulldev, &nulldev, &rkstrategy, &rktab,
    &tmopen,  &tmclose, &tmstrategy, &tmtab,    /* 1 */
    &nulldev, &tcclose, &tcstrategy, &tctab,
    0
};
int (*cdevsw[])()
{
    &klopen,  &klclose, &klread,  &klwrite, &klsgtty,
    &nulldev, &nulldev, &mmread,  &mmwrite, &nodev,
    &nulldev, &nulldev, &rkread,  &rkwrite, &nodev,
    &tmopen,  &tmclose, &tmread,  &tmwrite, &nodev, /* 3 */
    &dcopen,  &dcclose, &dcread,  &dcwrite, &dcsgtty,
    &lpopen,  &lpclose, &nodev,   &lpwrite, &nodev,
    0
};
Following on from the docs, you should be able to make the /dev/mt0
device file by doing:
/etc/mknod /dev/mt0 b 1 0
And possibly also:
/etc/mknod /dev/rmt0 c 3 0
Cheers,
Warren