> From: Clem Cole <clemc(a)ccc.com>
> Once people started to partition them, then all sorts of new things
> occurred and I think that's when the idea of a dedicated swap partition came
> up. I've forgotten if that was a BSDism or UNIX/TS.
Well, vanilla V6 had i) partitioned drives (that was the only way to support
an RP03), and ii) the swap device in the c.c config file. That's all you need
to put swap in its own partition. (One of the other MIT-LCS groups with V6
did that; we didn't, because it was better to swap off the RK, which did
multi-block transfers in a single I/O operation.)
> As I recall in V6 and I think V7, the process was first placed in the
> swap image before the exec (or at least space reserved for it).
As best I understand it, the way fork worked in V6 was that if there was not
enough memory for an in-core fork (in which the entire memory of the process
was simply copied), the process's image was written out to swap, and that
swapped-out image assumed the identity of the child.
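From memory, the relevant tail of newproc() (in ken/slp.c) goes roughly like
this -- a paraphrase in modernized C, not the exact V6 source, so don't quote
me on the details:

    /* child proc entry 'rpp' is set up; 'rip' is the parent,
       'a1' its core address, 'n' its size in core clicks */
    if ((a2 = malloc(coremap, n)) == NULL) {
        /* No room for an in-core copy: mark the parent SIDL, point
           the child at the parent's core image, save the registers,
           and swap 'the child' out -- the swapped-out image becomes
           the child. */
        rip->p_stat = SIDL;
        rpp->p_addr = a1;
        savu(u.u_ssav);
        xswap(rpp, 0, 0);
        rpp->p_flag |= SSWAP;
        rip->p_stat = SRUN;
    } else {
        /* Enough free core: just copy the parent's image. */
        rpp->p_addr = a2;
        while (n--)
            copyseg(a1++, a2++);
    }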
But this is kind of confusing, because when I was bringing up V6 under the
Ersatz11 simulator, I had a problem where the swapper (process 0) was trying
to fork (with the child becoming 1 and running /etc/init), and it did the
'swap out' thing. But there was a ton of free memory at that point, so... why
was it doing a swap? Eh, something to investigate sometime...
Noel
> From: Jacob Ritorto
> I'm having trouble understanding how to get my swap configured. Since
> rl02s are so little, the MAKE file in /dev doesn't partition them into
> a, b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses
> only 8500 of its 10000 blocks, so what would presumably be intended as
> swap space does exist. Swap is usually linked to the b partition,
> right? So how do I create this b partition on an rl02?
I don't know how the later systems work, but in V6, the swap device, and the
start block / # of blocks are specified in the c.c configuration file (i.e.
they are compiled into the system). So you can take one partition, and by
specifying less than the full size to 'mkfs', you can use the end of the
partition for swap space (which is presumably what's happening with /dev/rl0
here).
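For concreteness, the relevant lines of a V6 conf/c.c look something like
this (old-style C initializers, no '='; the device and block numbers here are
purely an example -- check your own configuration):

    int rootdev {(0<<8)|0};  /* major 0, minor 0: root on RK0 */
    int swapdev {(0<<8)|0};  /* swap on the same drive */
    int swplo   4000;        /* first block of the swap area; cannot be zero */
    int nswap   872;         /* size of the swap area in blocks */

With numbers like those, you give 'mkfs' a size of no more than swplo blocks,
and the tail of the partition is left for swap.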
Noel
Dave Horsfall:
I also wrote a paper on a "bad block" system, where something like inum
"-1" contained the list of bad sectors, but never saw it through.
====
During the file system change from V6 to V7, the i-number of
the root changed from 1 to 2, and i-node 1 became unused.
At least some versions of the documentation (I am too harried
to look it up at the moment) claimed i-node 1 was reserved
for holding bad blocks, to keep them out of the free list,
but that the whole thing was unimplemented.
I vaguely remember implementing that at some point: writing
a tool to add named sectors to i-node 1. Other tools needed
fixing too, though: dump, I think, still treated i-node 1
as an ordinary file, and tried to dump the contents of
all the bad blocks, more or less defeating the purpose.
I left all that behind when I moved to systems with MSCP disks,
having written my own driver code that implemented DEC's
intended port/class driver split, en passant learning how
to inform the disk itself of a bad block so it would hide it
from the host.
I'd write more but I need to go down to the basement and look
at one of my modern* 3TB SATA disks, which is misbehaving
even though modern disks are supposed to be so much better ...
Norman Wilson
Toronto ON
* Not packaged in brass-bound leather like we had in the old days.
You can't get the wood, you know.
what about using another minor device? Is xp0d mapped elsewhere?
Since it's a BSD, won't it try by default to read a partition
table from the first few sectors of the disk?
Norman Wilson
Toronto ON
Hi,
I'm having trouble understanding how to get my swap configured. Since
rl02s are so little, the MAKE file in /dev doesn't partition them into a,
b, c, etc. However, when MAKE makes the /dev/rl0 device, it uses only 8500
of its 10000 blocks, so what would presumably be intended as swap space
does exist. Swap is usually linked to the b partition, right? So how do I
create this b partition on an rl02? Or am I getting this horribly wrong?
thx
jake
Hi,
I'm running 2.9BSD on a pdp11/34 with an Emulex sc21 controller to some
Fuji160 disks. Booting with root on RL02 for now, but want to eventually
have the whole system on the Fujis and disconnect the rl02s.
While the previous owner of the disks appears to have suffered a
headcrash near cylinder 0, I'm having an impressive degree of success
writing to other parts of the disk.
However, when I try to mkfs, I can see the heads trying to write on the
headcrashed part of the disk. (Nice having those plexiglass covers!)
Is there a way to tell mkfs (or perhaps some other program) to not try to
write on the damaged cylinders?
thx
jake
So, I have a chance to buy a copy of a Version 5 manual, but it will cost a
lot. I looked, and the Version 5 manual doesn't appear to be online. So while
I would normally pass at that price, it might be worth it for me to buy it,
and scan it to make it available.
But, I looked in the "FAQ on the Unix Archive and Unix on the PDP-11", and it
says:
5th Edition has its on-line manual pages missing. ... Fortunately, we do
have paper copies of all the research editions of UNIX from 1st to 7th
Edition, and these will be scanned in and OCR'd.
Several questions: First, when it says "we do have paper copies of all the
research editions of UNIX", I assume it means 'we do have paper copies of
_the manuals for_ all the research editions of UNIX', not 'we do have paper
copies of _the source code for_ all the research editions of UNIX'?
Second, if it is 'manuals', did the scan/OCR thing ever happen, or is it
likely to anytime in the moderate future (next couple of years)?
Third, would a scanned (which I guess we could OCR) version of this manual be
of much use (it would not, after all, be the NROFF source, although probably
a lot of the commands will be identical to the V6 ones, for which we do have
the NROFF)?
Advice, please? Thanks!
Noel
> From: Tom Ivar Helbekkmo
> There was no fancy I/O order juggling, so everything was written in the
> same chronological order as it was scheduled.
> ...
> What this means is that the second sync, by waiting for its own
> superblock writes, will wait until all the inode and file data flushes
> scheduled by the first one have completed.
Ah, I'm not sure this is correct. Not all disk drivers handled requests in a
'first-come, first-served' order (i.e. where a request for block X, which was
scheduled before a request for block Y, actually happened before the
operation on block Y). It all depends on the particular driver; some drivers
(e.g. the RP driver) re-organized the waiting request queue to optimize head
motion, using a so-called 'elevator algorithm'.
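The guts of that re-ordering is just a sorted insertion into the driver's
request queue by block number. A hedged sketch of the idea, in modern C (an
illustration only, not the code from any actual V6 driver, which also has to
worry about the current head position, the two sweep directions, interrupt
exclusion, etc.):

    #include <stddef.h>

    struct buf {
        int        b_blkno;   /* physical block number of the request */
        struct buf *av_forw;  /* next request in the driver's queue */
    };

    /* Insert bp into the queue in ascending block order, so the heads
       tend to sweep across the disk instead of seeking back and forth. */
    void
    disksort(struct buf **head, struct buf *bp)
    {
        struct buf *p;

        if (*head == NULL || bp->b_blkno < (*head)->b_blkno) {
            bp->av_forw = *head;
            *head = bp;
            return;
        }
        for (p = *head; p->av_forw != NULL; p = p->av_forw)
            if (bp->b_blkno < p->av_forw->b_blkno)
                break;
        bp->av_forw = p->av_forw;
        p->av_forw = bp;
    }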
(PS: For a good time, do "dd if=/dev/[large_partition] of=/dev/null" on a
running system with such a disk, and a lot of users on - the system will
apparently come to a screeching halt while the 'up' pass on the disk
completes... I found this out the hard way, needless to say! :-)
Since the root block is block 1 in the partition, one might think that, even
with an elevator algorithm, writing it would more or less guarantee that all
other pending operations had completed (since it could only happen at the end
of a 'down' pass); _but_ the elevator algorithm works in terms of actual
physical block numbers, so blocks in another, lower partition might still
remain to be written.
But now that I think about it a bit, if such blocks existed, that partition's
super-block would also need to be written, so when that one completed, the
disk queue would be empty.
But the point remains - because there's no guarantee of _overall_ disk
operation ordering in V6, scheduling a disk request and waiting for it to
complete does not guarantee that all previously-requested disk operations
will have completed before it does.
I really think the whole triple-sync thing is mythology. Look through the V6
documentation: although IIRC there are instructions on how to shut the system
down, triple-sync isn't mentioned. We certainly never used it at MIT (and I
still don't), and I've never seen a problem with disk corruption _when the
system was deliberately shut down_.
Noel
Yo Jacob,
I'm ex-sun but I don't know too much about Illumos. Care to give us
the summary of why I might care about it?
On Wed, Dec 31, 2014 at 01:16:00AM -0500, Jacob Ritorto wrote:
> Hey, thanks, Derrik.
> I don't mess with Linux much (kind of an Illumos junkie by trade ;), but
> I bet gcc would. I did out of curiosity do it with the Macintosh cc (Apple
> LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)) and it throws
> warnings about our not type-defining functions because you're apparently
> supposed to do this explicitly these days, but it dutifully goes on to
> assume int and compiles our test K&R stuff mostly fine. It does
> unfortunately balk pretty badly at the naked returns we initially had,
> though. Wish it didn't because it strikes me as being beautifully simple..
>
> thx again for the encouragement!
> jake
>
>
> On Wed, Dec 31, 2014 at 1:02 AM, Derrik Walker v2.0 <dwalker(a)doomd.net>
> wrote:
>
> > On Wed, 2014-12-31 at 00:44 -0500, Jacob Ritorto wrote:
> >
> > >
> > > P.S. if anyone's bored enough, you can check out what we're up to at
> > > https://github.com/srphtygr/dhb. I'm trying to get my 11yo kid to
> > > spend a little time programming rather than just playing video games
> > > when he's near a computer. He's actually getting through this stuff
> > > and is honestly interested when he understands it and sees it work --
> > > and he even spotted a bug before me this afternoon! Feel free to
> > > raise issues, pull requests, etc. if you like -- I'm putting him
> > > through the git committing and pair programming paces, so outside
> > > interaction would be kinda fun :)
> > >
> > >
> > > P.P.S. We're actually using 2.11bsd after all..
> > >
> > I'm curious, will gcc on a modern Linux system compile K&R c?
> >
> > Maybe when I get a little time, I might try to see if I can compile it
> > on a modern Fedora 21 system with gcc.
> >
> > BTW: Great job introducing him to such a classic environment. A few
> > years ago, my now 18 year old had expressed some interest in graphics
> > programming and was in awe over an SGI O2 I had at the time, so I got
> > him an Indy. He played around with a bit of programming, but
> > unfortunately, he lost interest.
> >
> > - Derrik
> >
> >
--
---
Larry McVoy                lm at mcvoy.com             http://www.mcvoy.com/lm
> when you - say - run less to display a file, it switches to a dedicated
> region in the terminal memory buffer while printing its output, then
> restores the buffer to back where you were to begin with when you exit
> the pager
Sorry for veering away from Unix history, but this pushed one of the hottest
of my buttons. Less is the epitome of modern Unix decadence. Besides the
maddening behavior described above, why, when all screens have a scroll bar,
should a pager do its own scrolling? But for a quantitative measure of
decadence, try less --help | wc. It takes excess to a level not dreamed of
in Pike's classic critique, "cat -v considered harmful".
Doug
Hi all, I came across this last week:
http://svnweb.freebsd.org/
It's a Subversion VCS of all the CSRG releases. I'm not sure if it
has been mentioned here before.
Cheers, Warren
<much discussion about quadratic search removed>
All I remember (and still support to this day) is that I’ve got a TERMCAP='string' in my login scripts to set termcap to the specific terminal I’m logging in with.
Long ago this made things much faster. Today I think that it is just a holdover that I’m not changing due to inertia, rather than any real need for it.
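For the record, the shape of it is something like this in .profile (the entry
here is an abbreviated vt100 example, not a complete or literal one):

    TERM=vt100
    TERMCAP='d0|vt100:am:co#80:li#24:cl=50\E[H\E[J:cm=5\E[%i%d;%dH:up=2\E[A:nd=2\E[C:'
    export TERM TERMCAP

The point being that anything using termcap finds the whole entry already in
the environment and never has to open and search /etc/termcap.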
David
—
David Barto
david(a)kdbarto.org
> Noel Chiappa
> The change is pretty minor: in this piece of code:
>
> case reboot:
> termall();
> execl(init, minus, 0);
> reset();
>
> just change the execl() line to say:
>
> execl(init, init, 0);
I patched init in v5 and now ps shows /etc/init as expected, even
after going from multi to single to multi mode.
Looks like init.c was the same in v5 and v6.
> Noel Chiappa:
> Just out of curiosity, why don't you set the SR before you boot the machine?
> That way it'll come up single user, and you can icheck/dcheck before you go
> to multi-user mode. I prefer doing it that way, there's as little as possible
> going on, in case the disk is slightly 'confused', so less chance any bit-rot
> will spread...
I actually do file system checks on v5 as it's the early unix I use the most:
check -l /dev/rk0
check -u /dev/rk0
same for rk1, rk2.
The v5 manual entry for check references the 'restor' command,
although the man page for that is missing.
Your idea of starting up in single user mode is a good one although
I'm not sure if it's necessary to check the file system on each boot
up. I've been running this disk image of v5 for about two years and no
blow-ups as yet. I also keep various snapshots of v5, v6 and v7 disk
images for safety reasons.
And there are text files of all the source code changes I've made, so
if disaster strikes I can redo it all.
Mark
> From: Clem Cole
> ps "knew" about some kernel data structures and had to compiled with
> the same sources that your kernel used if you want all the command
> field in particular to look reasonable.
Not just the command field!
The real problem was that all the parameters (e.g. NPROC) were not stored in
the kernel anywhere, so if you wanted to have one copy of the 'ps' binary
which would work on two different machines (but which were running the same
version of the kernel)... rotsa ruck.
I have hacked my V6 to include lines like:
int ninode NINODE;
int nfile NFILE;
int nproc NPROC;
etc so 'ps' can just read those variables to find the table sizes in the
running kernel. (Obviously if you modify a table format, then you're SOL.)
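The idea is just to look those symbols up in the kernel's name list and read
the values out of /dev/kmem; in outline (a modern-C sketch, not the actual
code -- ksym() here is a stand-in for whatever symbol-table lookup you use,
and the leading-underscore names are the usual C-compiler convention):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper: returns the kernel address of a symbol,
       e.g. by searching the symbol table of /unix. */
    extern long ksym(const char *name);

    static int
    kval(int kmem, long addr)
    {
        int val = 0;

        lseek(kmem, addr, SEEK_SET);
        read(kmem, &val, sizeof val);
        return val;
    }

    int
    main(void)
    {
        int kmem = open("/dev/kmem", O_RDONLY);

        if (kmem < 0)
            return 1;
        printf("nproc  = %d\n", kval(kmem, ksym("_nproc")));
        printf("ninode = %d\n", kval(kmem, ksym("_ninode")));
        printf("nfile  = %d\n", kval(kmem, ksym("_nfile")));
        return 0;
    }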
> From: Ronald Natalie
> The user structure of the currently running process is the only one
> that is guaranteed to be in memory ... For any processes that were swapped
> out you couldn't read the user structure, so things that were stored there were
> often unavailable (particularly the command name).
Well, 'ps' (even the V6 stock version) was actually prepared to poke around
on the swap device to look at the images of swapped-out processes. And the
command name didn't come from the U area (it wasn't saved there in stock V6),
'ps' actually had to look on the top of the user stack (which is why it
wasn't guaranteed to be accurate - the user process could smash that).
> From: Clem cole
> IIRC we had a table of sleep addresses so that ps could print the
> actual thing you were waiting for not just an address.
I've hacked my copy of 'ps' to grovel around in the system's symbol table,
and print 'wchan' symbolically. E.g. here's some of the output of 'ps' on
my system:
TTY  F S UID PID PPID  PRI NIC CPU TIM ADDR  SZ TXT WCHAN    COMMAND
 ?: SL S   0   0    0 -100   0  -1 127 1676  16     runout   <swapper>
 ?:  L W   0   1    0   40   0   0 127 1774  43   0 proc+26  /etc/init
 ?:  L W   0  11    1   90   0   0 127 2405  37     tout     /etc/update
 8:  L W   0  12    1   10   0   0 127 2772  72   2 kl11     -
 a:  L W   0  13    1   40   0   0 127 3122  72   2 proc+102 -
 a:  L R   0  22   13  100   0  10   0 3422 138   3          ps axl
 b:  L W   0  14    1   10   0   0 127 2120  41   1 dz11+40  - 4
Noel
> From: Noel Chiappa
> For some reason, the code for /etc/init .. bashes the command line so
> it just has '-' in it, so it looks just like a shell.
BTW, that may be accidental, not a deliberate choice - e.g. someone copied
another line of code which exec'd a shell, and didn't change the second arg.
> I fixed my copy so it says "/etc/init", or something like that. ... I
> can upload the 'fixed' code tomorrow.
The change is pretty minor: in this piece of code:
case reboot:
termall();
execl(init, minus, 0);
reset();
just change the execl() line to say:
execl(init, init, 0);
>> I'm not sure if unix of the v6 or v5 era was designed to go from multi
>> user to single user mode and then back again.
> I seem to recall there's some issue, something like in some cases
> there's an extra shell left running attached to the console
So the bug is that in going from single-user to multi-user, by using "kill -1
1" in single-user with the switch register set for multi-user, it doesn't
kill the running single-user shell on the console. The workaround to that bug
which I use is to set the CSWR and then ^D the running shell.
The code in init.c isn't quite as clean/clear as would be optimal
(which is part of why I haven't tried to fix the above bug), but _in general_
it does support going back and forth.
> From: Ronald Natalie
> our init checked the switch register to determine whether to bring up
> single or multiuser
I think that's standard from Bell, actually.
> I believe our system shut down if you did kill -1 1 (HUP to init).
The 'stock' behaviour is that when that happens, it checks the switch
register, and there are three options (the code is a little hard to follow,
but I'm pretty sure this is right):
- if it's set for single-user, it shuts down all the other children, and
brings up a console shell; when that shell exits, it goes on to the next case;
- if it's set for 'reboot', it just shuts down all children, and restarts
the init process (presumably so one can switch to a new version of init
without restarting the whole machine);
- if it's not set for either, it re-reads /etc/ttys, and for any lines which
have switched state in that file, it starts/kills the process listening to
that line (this allows one to add/drop lines dynamically).
> From: Clem Cole
> it's probably worth digging up the v6 version of fsck.
That's on that MIT V6 tape, too. Speaking of which, time to write the code to
grok the tape...
Noel
> From: Mark Longridge <cubexyz(a)gmail.com>
> I've finally managed to get Unix v5 and v6 to go into single user mode
> while running under simh.
> ...
> dep system sr 173030 (simh command)
Just out of curiosity, why don't you set the SR before you boot the machine?
That way it'll come up single user, and you can icheck/dcheck before you go
to multi-user mode. I prefer doing it that way, there's as little as possible
going on, in case the disk is slightly 'confused', so less chance any bit-rot
will spread...
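I.e. something along these lines at the sim> prompt, before starting the
system (the same deposit syntax you used; the boot device here is just an
example):

    sim> dep system sr 173030
    sim> boot rk0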
> Now I'm in multi-user mode .. but then if I do a "ps -alx" I get:
>
> TTY  F S UID PID  PRI ADDR  SZ WCHAN COMMAND
>  ?:  3 S   0   0 -100 1227   2  5676 ????
>  ?:  1 W   0   1   40 1324   6  5740 -
> The ps command doesn't show the /etc/init process explicitly, although
> I'm pretty sure it is running.
No, it's there: the second line (PID 1). For some reason, the code for
/etc/init (in V6 at least, I don't know anything about V5) bashes the command
line so it just has '-' in it, so it looks just like a shell.
I fixed my copy so it says "/etc/init", or something like that. The machine
my source is on is powered down at the moment; if you want, I can upload the
'fixed' code tomorrow.
> I'm not sure if unix of the v6 or v5 era was designed to go from multi
> user to single user mode and then back again.
I seem to recall there's some issue, something like in some cases there's an
extra shell left running attached to the console, but I don't recall the
details (too lazy to look for the note I made about the bug; I can look it up
if you really want to know).
> Would it be safer to just go to single user and then shut it down?
I don't usually bother; I just log out all the shells except the one on the
console, so the machine is basically idle; then do a 'sync', and shortly
after that completes, I just halt the machine.
Noel
adding the list back
On Tue, Jan 6, 2015 at 10:42 AM, Michael Kerpan <mjkerpan(a)kerpan.com> wrote:
> This is a cool development. Does this code build into a working version of
> Coherent or is this mainly useful to study? Either way, it should be
> interesting to look at the code for a clone specifically aimed at low-end
> hardware.
>
> Mike
>
Ok, I've finally managed to get Unix v5 and v6 to go into single user
mode while running under simh.
I boot up unix as normal, that is to say in multi-user mode.
Then a ctrl-e and
dep system sr 173030 (simh command)
then c to continue execution of the operating system and finally "kill -1 1".
This gets me from multi user mode to single user mode. I can also go
back to multi user mode with:
ctrl-e and dep system sr 000000
then once again c to continue execution of the operating system and "kill -1 1".
Now I'm in multi-user mode, and I can telnet in as another user, so it
seems to be working; but then if I do a "ps -alx" I get:
TTY  F S UID PID  PRI ADDR  SZ WCHAN COMMAND
 ?:  3 S   0   0 -100 1227   2  5676 ????
 ?:  1 W   0   1   40 1324   6  5740 -
 8:  1 W   0  51   40 2456  19  5766 -
 ?:  1 W   0  55   10 1377   6 42066 -
 ?:  1 W   0   5   90 1734   5  5440 /etc/update
 ?:  1 W   0  32   10 2001   6 42126 -
 ?:  1 W   0  33   10 2054   6 42166 -
 ?:  1 W   0  34   10 2127   6 42226 -
 ?:  1 W   0  35   10 2202   6 42266 -
 ?:  1 W   0  36   10 2255   6 42326 -
 ?:  1 W   0  37   10 2330   6 42366 -
 ?:  1 W   0  38   10 2403   6 42426 -
 8:  1 R   0  59  104 1472  17       ps alx
The ps command doesn't show the /etc/init process explicitly, although
I'm pretty sure it is running. I'm not sure if unix of the v6 or v5
era was designed to go from multi user to single user mode and then
back again. Would it be safer to just go to single user and then shut
it down?
Mark
Friend asked an odd question:
Were VAXen ever used to send/receive faxes large-scale? What software was
used and how was it configured?
Was any of this run on any of the UCB VAXen?
--
Cory Smelosky
http://gewt.net Personal stuff
http://gimme-sympathy.org Projects
On 2015-01-06 23:56, Clem Cole<clemc(a)ccc.com> wrote:
>
> On Tue, Jan 6, 2015 at 5:45 PM, Noel Chiappa<jnc(a)mercury.lcs.mit.edu>
> wrote:
>
>> >I have no idea why DEC didn't put it in the 60 - probably helped kill that
>> >otherwise interesting machine, with its UCS, early...
>> >
> ?"Halt and confuse ucode" had a lot to do with it IMO.
>
> FYI: The 60 set the record of going from production to "traditional
> products" faster than? anything else in DEC's history. As I understand
> it, the 11/60 was expected to a business system and run RSTS. Why the WCS
> was put in, I never understood, other than I expect the price of static RAM
> had finally dropped and DEC was buying it in huge quantities for the
> Vaxen. The argument was that they could update the ucode cheaply in the
> field (which to my knowledge they never did). But I asked that question
> many years ago of one of the HW managers, who explained to me that it was
> felt separate I/D was not needed for the targeted market and would have
> somehow increased cost. I don't understand why it would have cost any
> more but I guess it was late.
No, field upgrade of microcode cannot have been it. The WCS for the
11/60 was an option. Very few machines actually had it. It was for
writing your own extra microcode as an addition to the architecture.
The basic microcode for the machine was in ROM, just like on all the
other PDP-11s. And DEC sold a compiler and other programs required to
develop microcode for the 11/60. Not that I know of anyone who had them.
I've "owned" four PDP-11/60 systems in my life. I still have a set of
boards for the 11/60 CPU, but nothing else left around.
The 11/60 was, by the way, not the only PDP-11 with WCS. The 11/03 (if I
remember right) also had such an option. Obviously the microcode was not
compatible between the two machines, so you couldn't move it over from
one to the other.
Also, reportedly, someone at DEC implemented a PDP-8 on the 11/60,
making it the fastest PDP-8 ever manufactured. I probably have some
notes about it somewhere, but I'd have to do some serious searching if I
were to dig that up.
But yes, the 11/60 went from product to "traditional" extremely fast.
Split I/D space was one omission from the machine, but even more serious
was the decision to only do 18-bit addressing on it. That killed it very
fast.
Someone else mentioned the MFPI/MFPD instructions as a way of getting
around the I/D restrictions. As far as I can tell, it is possible to use
them to read/write instruction space on such a machine. I would
assume that any OS would set both current and previous mode to user when
executing in user space.
The documentation certainly claims they will work. I didn't think of
those previously, but they would allow you to read/write to instruction
space even when you have split I/D space enabled.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
> From: Ronald Natalie <ron(a)ronnatalie.com>
> Yep, the only time this was ever trully useful was so you could put an
> a.out directly into the boot block I think.
Well, sort of. If you had non position-independent code, it would blow out
(it would be off by 020). Also, some bootstraps (e.g. the RL, IIRC) were so
close to 512. bytes long that the extra 020 was a problem. And it was so easy
to strip off:
dd if=a.out of=fooboot bs=1 skip=16
I'm not sure that anything actually used the fact that 407 was 'br .+020', by
the V6 era; I think it was just left over from older Unixes (where it was not
in fact stripped on loading). Not just on executables running under Unix; the
boot-loader also stripped it, so it wasn't even necessary to strip the a.out
header off /unix.
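For reference, the header being stripped there is just eight 16-bit words --
hence the 16 in the dd above. Roughly, in the later <a.out.h> style (layout
from memory, so treat it as a sketch):

    struct exec {
        int a_magic;   /* 0407 = 'br .+020'; 0410 = pure text; 0411 = split I/D */
        int a_text;    /* size of text segment, bytes */
        int a_data;    /* size of initialized data, bytes */
        int a_bss;     /* size of uninitialized data, bytes */
        int a_syms;    /* size of symbol table, bytes */
        int a_entry;   /* entry point */
        int a_unused;  /* not used */
        int a_flag;    /* set if relocation info has been stripped */
    };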
Noel
On 2015-01-06 20:57, Milo Velimirović<milov(a)cs.uwlax.edu> wrote:
> Bringing a conversation back online.
> On Jan 6, 2015, at 6:22 AM,arnold(a)skeeve.com wrote:
>
>>> >>Peter Jeremy scripsit:
>>>> >>>But you pay for the size of $TERMCAP in every process you run.
>> >
>> >John Cowan<cowan(a)mercury.ccil.org> wrote:
>>> >>A single termcap line doesn't cost that much, less than a KB in most cases.
>> >
>> >In 1981 terms, this has more weight. On a non-split I/D PDP-11 you only
>> >have 32KB to start with. (The discussion a few weeks ago about cutting
>> >yacc down to size comes to mind?)
> (Or even earlier than '81.) How did pdp11 UNIXes handle per-process memory? It's suggested above that there was a 50-50 split of the 64KB address space between instructions and data. My own recollection is that you got any combination of instruction and data space that was <64KB. This would also be subject to the limits of the pdp11 memory management unit.
>
> Anyone have a definitive answer or pointer to appropriate man page or source code?
You are conflating two things. :-)
A standard PDP-11 has 64Kb of virtual memory space. This can be divided
any way you want between data and code.
Later model PDP-11 processors had a hardware feature called split I/D
space. This meant that you could have one 64Kb virtual memory space for
instructions, and one 64Kb virtual memory space for data.
(This also means that the text you quoted was incorrect, as it stated
that you had 32Kb; it was/is actually 32 Kwords.)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
On 2015-01-06 22:59, random832(a)fastmail.us wrote:
> On Tue, Jan 6, 2015, at 15:20, Johnny Billquist wrote:
>> >Later model PDP-11 processors had a hardware feature called split I/D
>> >space. This meant that you could have one 64Kb virtual memory space for
>> >instructions, and one 64Kb virtual memory space for data.
> Was it possible to read/write to the instruction space, or execute the
> data space? From what I've seen, the calling convention for PDP-11 Unix
> system calls reads their arguments from directly after the trap
> instruction (which would mean that the C wrappers for the system calls
> would have to write their arguments there, even if assembly programs
> could have them hardcoded.)
Nope. A process cannot read or write to instruction space, nor can it
execute from data space.
It's inherent in the MMU. All references related to the PC will be done
from I-space, while everything else will be done through D-space.
So the MMU has two sets of page registers. One set maps I-space, and
another maps D-space. Of course, you can have them overlap, in which
case you get the traditional appearance of older models.
The versions of Unix I am aware of push arguments on the stack. But of
course, the kernel can remap memory, and so can read the
instruction space. But the user program itself would not be able to
write anything after the trap instruction.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol