Born on this day in 1930, Edsger Dijkstra gave us ALGOL, the basis of all decent
programming languages today, and the entire concept of structured
programming. Ah, well I remember the ALGOL W compiler on the 360...
There's a beaut article here:
https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF
And remember, testing can show the presence of errors, but not their
absence...
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
I'm curious whether some UNIX folks here can enlighten me. I sometimes
answer things on Quora, and a few years ago the following question was
posted:
What does "Experience working in Unix/Linux environments" mean when given
as a required skill in company interviews? What do they want from us?
<https://www.quora.com/What-does-Experience-working-in-Unix-Linux-environmen…>
Why would this be considered a spam violation? I was notified today that
it had been flagged as one.
It all depends on the job and the specific experience the hiring managers want
to see. The #1 thing I believe they will be looking for is someone who does
not need a GUI to be useful. If you are a simple user, it means you are
comfortable in a text-based, shell environment and are at least familiar
with, if not proficient with, UNIX tools such as ed, vi or emacs, grep,
tail, head, sed, awk, cut, join, tr, etc. You should be able to use one or
more of the Bourne Shell/Korn Shell/Bash family or the C shell. You should be
familiar with the UNIX file tree, its basic layout, and its protection
scheme. It helps if you understand the differences between the BSD, SysV, Mac
OS X, and Linux layouts; but for many jobs in the UNIX community that may not
be required. You should also understand how to use the Unix man command to
get information on the tools you are using; i.e. you should have read,
if not own, a copy of Kernighan and Pike's The Unix Programming Environment
(Prentice-Hall Software Series)
<https://www.amazon.com/Unix-Programming-Environment-Prentice-Hall-Software/…>
and be proficient in the material of the first four chapters. If the job
requires writing scripts, you should be able to write shell scripts (i.e.
program the shell) using Bourne Shell syntax, i.e. Chapter 5 (Shell
Programming).
If you are a programmer, then you need to be used to the UNIX/Linux
toolchains and probably should not require an IDE. Again, as a programmer,
knowledge of, if not proficiency with, at least one source code control
system such as SCCS, RCS, CVS, SVN, Mercurial, git or the like is needed.
Kernighan and Pike's Chapters 6-8 should be common knowledge. But to be
honest, you should also be familiar with the contents of, if not own and
keep a copy on your desk of, W. Richard Stevens' Advanced Programming in
the UNIX Environment, 3rd Edition (with Stephen A. Rago)
<https://www.amazon.com/Advanced-Programming-UNIX-Environment-3rd/dp/0321637…>
(aka APUE).
If you are a system administrator, then the required set of tools gets much
larger, and knowing the different ways to "generate" (build) a system is a
good idea besides; though fewer of the tools are for user maintenance. In
this case you should be familiar with, again if not own and have a copy on
your desk of, Evi Nemeth's UNIX and Linux System Administration Handbook,
4th Edition (with Garth Snyder, Trent R. Hein, and Ben Whaley)
<https://www.amazon.com/UNIX-Linux-System-Administration-Handbook/dp/0131480…>
which is and has been one of, if not the, best UNIX admin works for many,
many years.
Updated 05/07/18: to explain that I am not shilling for anyone. I am trying
to honestly answer the question and make helpful recommendations on how to
learn what the person asked about, to help them be better equipped to be
employed in the Unix world. I used Amazon's URLs because they are global and
easy to use as a reference, but I am not suggesting you purchase from them.
In fact, if you can borrow a copy from your library to start, that might be
a good idea.
Grant Taylor:
(Maybe the 3' pipe wrench has something to do with it.)
=======
Real pipes don't need wrenches. Maybe those in Windows do.
Norman Wilson
Toronto ON
Hi all,
Forgive this cross-post from cctalk, if you're seeing this message twice. TUHS seems like a very appropriate list for this question.
I'm experimenting with setting up UUCP and Usenet on a cluster of 3B2/400s, and I've quickly discovered that while it's trivial to find old source code for Usenet (B News and C News), it's virtually impossible to find source code for old news *readers*.
I'm looking especially for nn, which was my go-to at the time. The oldest version I've found so far is nn 6.4, which is too big to compile on a 3B2/400. If I could get my hands on 6.1 or earlier, I think I'd have a good chance.
I also found that trn 3.6 from 1994 works well enough, though it is fairly bloated. Earlier versions of that might be better.
Does anyone have better Google-fu than I do? Or perhaps you've got earlier sources squirreled away?
As an aside: If you were active on Usenet in 1989, what software were you using?
-Seth
--
Seth Morabito
web(a)loomcom.com
Hi,
There's a unix "awesome list". It mentions TUHS's wiki, as well as this quote:
"This is the Unix philosophy: Write programs that do one thing and do
it well. Write programs to work together. Write programs to handle
text streams, because that is a universal interface." - Douglas
McIlroy, former head of Bell Labs Computing Sciences Research Center
https://github.com/sirredbeard/Awesome-UNIX
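(As a small illustrative sketch of that philosophy - mine, not from the quote
above - here is a minimal C filter: it reads text on stdin, writes transformed
text on stdout, and so composes with other tools in a pipeline. The file name
in the comment is just an example.)

/* lower.c - a minimal Unix-style filter: read text on stdin, write the
 * lowercased text on stdout.  Because it speaks plain text on the standard
 * streams, it composes with other tools, e.g.:
 *     grep -i unix notes.txt | ./lower | sort | uniq -c
 */
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c;

    while ((c = getchar()) != EOF)
        putchar(tolower(c));
    return 0;
}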
On 08.05.18 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > My point being that ... pages are invisible to the process; segments are
> > very visible. And here we talk from a hardware point of view.
>
> So you're saying 'segmentation means instructions explicitly include segment
> numbers, and the address space is a two-dimensional array', or 'segmentation
> means pointers explicitly include segment numbers', or something like that?
Not really. I'm trying to understand your argument.
You said:
"BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user
(except
for setting the total size of the process), whereas segmentation is
explicitly
visible to the user."
And then you used MERT as an example of this.
My point then is, how is MERT any different from mmap() under Unix?
Would you then say that the paging is visible under Unix, meaning that
this is then segmentation?
In my view, you are talking about a software concept. And as such, it
has no bearing on whether a machine has pages or segments, as that is a
hardware thing and distinction, while anything done as a service by the
OS is a completely different, and independent, question.
> I'm more interested in the semantics that are provided, not bits in
> instructions.
Well, if we talk semantics instead of hardware, then you can just
say that any machine is segmented, and you can say that any machine
has pages. Because I can certainly make it appear both ways from the
software point of view for applications running under an OS.
And I can definitely do that on a PDP-11. The OS can force pages to
always be 8K in size, and the OS can (as done by lots of OSes) provide a
mechanism that gives you something you call segments.
> It's true that with a large address space, one can sort of simulate
> segmentation. To me, machines which explicitly have segment numbers in
> instructions/pointers are one end of a spectrum of 'segmented machines', but
> that's not a strict requirement. I'm more concerned about how they are used,
> what the system/user gets.
So, again. Where does mmap() put you then?
And, just to point out the obvious, any machine with pages has a page
table, and the page table entry is selected based on the high bits of
the virtual address. Exactly the same as on the PDP-11. The only
difference is the number of pages, and the fact that a page on the
PDP-11 does not have a fixed length, but can be terminated earlier if wanted.
So, pages are explicitly numbered in pointers on any machine with pages.
> Similarly for paging - fixed sizes (or a small number of sizes) are part of
> the definition, but I'm more interested in how it's used - for demand loading,
> and to simplify main memory allocation purposes, etc.
I don't get it. So, in which way are you still saying that a PDP-11
doesn't have pages?
> >> the semantics available for - and_visible_ to - the user are
> >> constrained by the mechanisms of the underlying hardware.
>
> > That is not the same thing as being visible.
>
> It doesn't meet the definition above ('segment numbers in
> instructions/pointers'), no. But I don't accept that definition.
I'm trying to find out what your definition is. :-)
And if it is consistent and makes sense... :-)
> > All of this is so similar to mmap() that we could in fact be having this
> > exact discussion based on mmap() instead .. I don't see you claiming
> > that every machine use a segmented model
>
> mmap() (and similar file->address space mapping mechanisms, which a bunch of
> OS's have supported - TENEX/TOPS-20, ITS, etc) are interesting, but to me,
> orthogonal - although it clearly needs support from memory management hardware.
Can you explain how mmap() is any different from the service provided by
MERT?
> And one can add 'sharing memory between two processes' here, too; very similar
> _mechanisms_ to mmap(), but different goals. (Although I suppose two processes
> could map the same area of a file, and that would give them IPC mapping.)
That is how a single copy of a shared library gets shared under Unix.
Exactly what happens if you modify the memory depends on what flags you
give to mmap().
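(A hedged sketch of the mmap() behaviour being discussed - my illustration,
not anything from MERT or the thread: two processes mapping the same file
region with MAP_SHARED see one copy of the memory, while MAP_PRIVATE would
give each a copy-on-write view. The file name is made up and assumed to be
at least 4096 bytes long.)

/* Hedged sketch: map a file and share the mapping with a child process.
 * With MAP_SHARED, stores made by the child are visible to the parent
 * (and end up in the file); with MAP_PRIVATE they would be copy-on-write
 * and private to each process. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.dat", O_RDWR);        /* hypothetical file, >= 4096 bytes */
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                           /* child writes into the mapping */
        strcpy(p, "hello from the child");
        _exit(0);
    }
    wait(NULL);                                  /* parent sees the child's store */
    printf("parent reads: %s\n", p);

    munmap(p, 4096);
    close(fd);
    return 0;
}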
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I started with roff (the simplest but utterly frozen) and moved up to nroff. It was a few years later that I was involved with a project to make a CAT phototypesetter emulator for the Versatec printer-plotter (similar to the BSD vcat, which we had not seen yet). My friend George Toth went to the Naval Research Laboratory and printed out the entire typeface on their CAT on transparent film. Then he set out to figure out a way to digitize it.
Well, next door to the EE building (where the UNIX work took place at JHU) was the biophysics department. They had a Scanning Transmission Electron Microscope there, quite an impressive machine. The front end of the thing was a PDP-11/20 with some digital-to-analog converters and vice versa, and a frame buffer. The software would control the positioning of the beam and read back how much came through the material and was detected. Essentially, you were making a raster picture of the sample in the microscope.
George comes up with this great idea. He takes a regular oscilloscope. He takes the deflection wires from the 11/20 off the microscope and puts them into the X and Y amplifiers of the scope. He then puts a photomultiplier tube in the shell of an old scope camera. He'd cut out a single character and tape it to the front of the scope and hang the camera on it. He'd fire up the microscope software and tell it to scan the sample. It would then put the image in the frame buffer. We'd pull the microscope RK05 pack out and boot miniunix and read the data from the frame buffer (why we didn't just write software to drive the A/D from miniunix I do not recall).
Eventually, George gets everything scanned in and cleaned up. It worked somewhat adequately.
Another amusing feature was that Michael John Muuss (my mentor) wrote a macro package, tmac.jm. Some people were somewhat peeved that we now had a "nroff -mjm" option.
Years later, after ditroff was in vogue, my boss was always after me to switch to some modern document prep system (FrameMaker or the like). On one rough job I told him I'd do it, but that I didn't have time to learn FrameMaker.
I'd write one page of this proposal, print it, and then go on. My boss would proof it and then my coworker would come behind me and make the corrections. I ended up rewriting a million-dollar (a lot of money back in 1989 or so) proposal in two days, complete with 25 pages of narrative and maybe 50 pages of TBL-based tables showing compliance with the RFP. We won that contract and got several follow-ons.
Years later I was reading a published book. I noted little telltale bumps on the top of some of the tables. I wrote the author: "Did you use tbl and pic to typeset this book?" Sure enough, he had. But it was way after I thought anybody was still using such technology. Of course, I was happy when Springer-Verlag suddenly learned how to typeset books. I had a number of their texts in college that didn't even look like they'd put a new ribbon in the typewriter when setting the book.
> From: Clem Cole
> I agree with Tannenbaum that uKernel's make more sense to me in the long
> run - even if they do cost something in speed
There's a certain irony in people complaining that ukernels have more
overhead - while at the same time mindlessly, and almost universally,
propagating such pinheaded computing hogs as '_mandating_ https for everything
under the sun' (even things as utterly pointless to protect as looking at
Wikipedia articles on mathematics), while simultaneously letting
Amazon/Facebook/Google do the 'all your data are belong to us' number; the
piling of Pelion upon Ossa in all the active content (page after page of
JavaScript, etc) in many (most?) modern Web sites that does nothing more than
provide 'eye candy'; etc, etc.
Noel
> https://en.wikipedia.org/wiki/TeX#Pronunciation_and_spelling
Yes, TeX is supposed to be pronounced as Germans do Bach. And
Knuth further recommends that the name be typeset as a logo with
one letter off the base line. Damned if an awful lot of people,
especially LaTeX users, don't follow his advice. I've known
and admired Knuth for over 50 years, but part ways with him
on this. If you use the ready-made LaTeX logo in running text,
so should you also use flourished cursive for Coca-Cola and
Ford; and back in the day, discordantly slanted letters for
Holiday Inn. It's mad and it's a pox on the page.
Doug
> From: Johnny Billquist
> My point being that ... pages are invisible to the process; segments are
> very visible. And here we talk from a hardware point of view.
So you're saying 'segmentation means instructions explicitly include segment
numbers, and the address space is a two-dimensional array', or 'segmentation
means pointers explicitly include segment numbers', or something like that?
I seem to recall machines where that wasn't so, but I don't have the time to
look for them. Maybe the IBM System/38? The two 'spaces' in the KA10/KI10,
although a degenerate case (fewer even than the PDP-11), are one example.
I'm more interested in the semantics that are provided, not bits in
instructions.
It's true that with a large address space, one can sort of simulate
segmentation. To me, machines which explicitly have segment numbers in
instructions/pointers are one end of a spectrum of 'segmented machines', but
that's not a strict requirement. I'm more concerned about how they are used,
what the system/user gets.
Similarly for paging - fixed sizes (or a small number of sizes) are part of
the definition, but I'm more interested in how it's used - for demand loading,
and to simplify main memory allocation purposes, etc.
>> the semantics available for - and _visible_ to - the user are
>> constrained by the mechanisms of the underlying hardware.
> That is not the same thing as being visible.
It doesn't meet the definition above ('segment numbers in
instructions/pointers'), no. But I don't accept that definition.
> All of this is so similar to mmap() that we could in fact be having this
> exact discussion based on mmap() instead .. I don't see you claiming
> that every machine use a segmented model
mmap() (and similar file->address space mapping mechanisms, which a bunch of
OS's have supported - TENEX/TOPS-20, ITS, etc) are interesting, but to me,
orthogonal - although it clearly needs support from memory management hardware.
And one can add 'sharing memory between two processes' here, too; very similar
_mechanisms_ to mmap(), but different goals. (Although I suppose two processes
could map the same area of a file, and that would give them IPC mapping.)
Noel
> From: Johnny Billquist
>> "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
>> words long [which can grow in increments of 32 words]"
> But then it is not actually giving programs direct access and
> manipulation of the hardware. It is a software construct and service
> offered by the OS, and the OS might fiddly around with various hardware
> to give this service.
I don't understand how this is significant: most time-sharing OS's don't give
the users access to the memory management control hardware?
> So the hardware is totally invisible after all.
Not quite - the semantics available for - and _visible_ to - the user are
constrained by the mechanisms of the underlying hardware.
Consider a machine with a KT11-B - it could not provide support for very small
segments, or be able to adjust the segment size with such small quanta. On the
other side, the KT11-B could support starting a 'software segment' at any
512-byte boundary in the virtual address space, unlike the KT11-C which only
supports 8KB boundaries.
Noel
> From: Johnny Billquist
>> in MERT 'segments' (called that) were a basic system primitive, which
>> users had access to.
> the OS gives you some construct which can easily be mapped on to the
> hardware.
Right. "A logical segment is a piece of contiguous memory, 32 to 32K 16-bit
words long ... Associated with each segment are an internal segment
identifier and an optional global name." So it's clear how that maps onto the
PDP-11 memory management hardware - and a MERT 'segment' might use more than
one 'chunk'.
>> I understand your definitions, and like breaking things up into
>> 'virtual addressing' (which I prefer as the term, see below),
>> 'non-residence' or 'demand loaded', and 'paging' (breaking into
>> smallish, equal-sized chunks), but the problem with using "virtual
>> memory" as a term for the first is that to most people, that term
>> already has a meaning - the combination of all three.
Actually, after some research, it turns out to be only the first two. But I
digress...
> It's actually not my definition. Demand paging is a term that have been
> used for this for the last 40 years, and is not something there is much
> contention about.
I wasn't talking about "demand paging", but rather your use of the term
"virtual memory":
>>> Virtual memory is just *virtual* memory. It's not "real" or physical
>>> in the sense that it has a dedicated location in physical memory
>>> ... Instead, each process has its own memory, which might be mapped
>>> somewhere in physical memory, but it might also not be. And one
>>> processes address 0 is not the same as another processes address
>>> 0. They both have the illusion that they have the full memory address
>>> range to them selves, unaware of the fact that there are many
>>> processes who also have that same illusion.
I _like_ having an explicit term for the _concept_ you're describing there; I
just had a problem with the use of the _term_ "virtual memory" for it - since
that term already has a different meaning to many people.
Try Googling "virtual memory" and you turn up things like this: "compensate
for physical memory shortages by temporarily transferring data from RAM to
disk". Which is why I proposed calling it "virtual addressing" instead.
> I must admit that I'm rather surprised if the term really is unknown to
> you.
No, of course I am familiar with "demand paging".
Anyway, this conversation has been very helpful in clarifying my thinking
about virtual memory/paging. I have updated the CHWiki article based on it:
http://gunkies.org/wiki/Virtual_memory
including the breakdown into three separate (but related) concepts: i) virtual
addressing, ii) demand loading, and iii) paging. I'd be interested in any
comments people have.
> Which also begs the question - was there also a RK11-A?
One assumes there must have been RK11-A's and -B's, otherwise they wouldn't
have gotten to RK11-C... :-)
I have no idea if both existed in physical form (one might have been just a
design exercise). However, the photo of the non-RK11-C indicator panel
confirms that at least one of them was actually implemented.
> And the "chunks" on a PDP-11, running Unix, RSX or RSTS/E, or something
> similar is also totally invisible.
Right, but not under MERT - although there, clearly, a single 'software' segment
might use more than one set of physical 'chunks'.
Actually, Unix is _somewhat_ similar, in that processes always have separate
stack and text/data 'areas' (they don't call them 'segments', as far as I
could see) - and separate text and data 'areas' too, when pure code is in
use; and any area might use more than one 'chunk'.
The difference is that Unix doesn't support 'segments' as an OS primitive, the
way MERT does.
Noel
> That would be a pretty ugly way to look at the world.
'Beauty is in the eye of the beholder', and all that! :-)
> Not to mention that one segment silently slides over into the next one
> if it's more than 8K.
Again, precedent; IIRC, on the GE-645 Multics, segments were limited to 2^N-1 pages,
precisely because otherwise incrementing an inter-segment pointer could march off
the end of one, and into the next! (The -645 was implemented as a 'bag on the side'
of the non-segmented -635, so things like this were somewhat inevitable.)
> wouldn't you say that the "chunks" on a PDP-11 are invisible to the
> user? Unless you are the kernel of course. Or run without protection.
No, in MERT 'segments' (called that) were a basic system primitive, which
users had access to. (Very cool system! I really need to get moving on trying
to recover that...)
> *Demand* paging is definitely a separate concept from virtual memory.
Hmmm. I understand your definitions, and like breaking things up into 'virtual
addressing' (which I prefer as the term, see below), 'non-residence' or
'demand loaded', and 'paging' (breaking into smallish, equal-sized chunks),
but the problem with using "virtual memory" as a term for the first is that to
most people, that term already has a meaning - the combination of all three.
(I have painful memories of this sort of thing - the term 'locator' was
invented after we gave up trying to convince people one could have a network
architecture in which not all packets contained addresses. That caused a major
'does not compute' fault in most people's brains! And 'locator' has since been
perverted from its original definition. But I digress.)
> There is no real connection between virtual memory and memory
> protection. One can exist with or without the other.
Virtual addressing and memory protection; yes, no connection. (Although the
former will often give you the latter - if process A can't see, or name,
process B's memory, it can't damage it.)
> Might have been just some internal early attempt that never got out of DEC?
Could be; something similar seems to have happened to the 'RK11-B':
http://gunkies.org/wiki/RK11_disk_controller
>> I don't have any problem with several different page sizes, _if it makes
>> engineering sense to support them_.
> So, would you then say that such machines do not have pages, but have
> segments?
> Or where do you draw the line? Is it some function of how many different
> sized pages there can be before you would call it segments? ;-)
No, the number doesn't make a difference (to me). I'm trying to work out what
the key difference is; in part, it's that segments are first-class objects
which are visible to the user; paging is almost always hidden under the
sheets.
But not always; some OS's allow processes to share pages, or to map file pages
into address spaces, etc. Which does make it complex to separate the two..
Noel
> From: Johnny Billquist
> This is where I disagree. The problem is that the chunks in the PDP-11
> do not describe things from a zero offset, while a segment do. Only
> chunk 0 is describing addresses from a 0 offset. And exactly which chunk
> is selected is based on the virtual address, and nothing else.
Well, you have something of a point, but... it depends on how you look at it.
If you think of a PDP-11 address as holding two concatenated fields (3 bits of
'segment' and 13 bits of 'offset within segment'), not so much.
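(To make that concrete, a small sketch of my own, not from the message: the
PDP-11 MMU picks one of the eight Active Page Registers from the top 3 bits
of the 16-bit virtual address, and the remaining 13 bits are the displacement
within that 8KB chunk.)

/* Hedged sketch: how a PDP-11 MMU splits a 16-bit virtual address into
 * an Active Page Register number (high 3 bits) and a displacement within
 * that 8KB page/segment (low 13 bits). */
#include <stdio.h>

int main(void)
{
    unsigned short va = 0157776;            /* a 16-bit virtual address (octal) */
    unsigned apr  = (va >> 13) & 07;        /* which of the 8 APRs is selected  */
    unsigned disp = va & 017777;            /* offset within that 8KB chunk     */

    printf("va %06o -> APR %o, displacement %06o\n", va, apr, disp);
    return 0;
}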
IIRC there are other segmented machines that do things this way - I can't
recall the details of any off the top of my head. (Well, there's the KA10/KI10,
with their writeable/write-protected 'chunks', but that's a bit of a
degenerate case. I'm sure there is some segmented machine that works that way,
but I can't recall it.)
BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user (except
for setting the total size of the process), whereas segmentation is explicitly
visible to the user.
I think there is at least one PDP-11 OS which makes the 'chunks' visible to
the user - MERT (and speaking of which, I need to get back on my project of
trying to track down source/documentation for it).
> Demand paging really is a separate thing from virtual memory. It's a
> very bad thing to try and conflate the two.
Really? I always worked on the basis that the two terms were synonyms - but
I'm very open to the possibility that there is a use to having them have
distinct meanings.
I see a definition of 'virtual memory' below, but what would you use for
'paging'?
Now that I think about it, there are actually _three_ concepts: 'virtual
memory', as you define it; what I will call 'non-residence' - i.e. a process
can run without _all_ of its virtual memory being present in physical memory;
and 'paging' - which I would define as 'use fixed-size blocks'. (The third is
more of an engineering thing, rather than high-level architecture, since it
means you never have to 'shuffle' core, as systems that used variable-sized
things seem to.)
'Non-residence' is actually orthogonal to 'paging'; I can imagine a paging
system which didn't support non-residence, and vice versa (either swapping
the entire virtual address space, or doing it a segment at a time if the
system has segments).
> There is nothing about virtual memory that says that you do not have to
> have all of your virtual memory mapped to physical memory when the
> process is running.
True.
> Virtual memory is just *virtual* memory. It's not "real" or physical in
> the sense that it has a dedicated location in physical memory, which
> would be the same for all processes talking about that memory
> address. Instead, each process has its own memory, which might be mapped
> somewhere in physical memory, but it might also not be.
OK so far.
> each process would have to be aware of all the other processes that use
> memory, and make sure that no two processes try to use the same memory,
> or chaos ensues.
There's also the System 360 approach, where processes share a single address
space (physical memory - no virtual memory on them!), but it uses protection
keys on memory 'chunks' (not sure of the correct IBM term) to ensure that one
process can't tromp on another's memory.
>> a memory management device for the PDP-11 which provided 'real' paging,
>> the KT11-B?
> have never read any technical details. Interesting read.
Yes, we were lucky to be able to retrieve detailed info on it! A PDP-11/20
sold on eBay with a fairly complete set of KT11-B documentation, and allegedly
a "KT11-B" as well, but alas, it turned out to 'only' be an RK11-C. Not that
RK11-C's aren't cool, but on the 'cool scale' they are like 3, whereas a
KT11-B would have been, like, 17! :-) Still, we managed to get the KT11-B
'manual' (such as it is) and prints online.
I'd love to find out equivalent detail for the KT11-A, but I've never seen
anything on it. (And I've appealed before for the KS11, which an early PDP-11
Unix apparently used, but no joy.)
> But how do you then view modern architectures which have different sized
> pages? Are they no longer pages then?
Actually, there is precedent for that. The original Multics hardware, the
GE-645, supported two page sizes. That was dropped in later machines (the
Honeywell 6000's) since it was decided that the extra complexity wasn't worth
it.
I don't have any problem with several different page sizes, _if it makes
engineering sense to support them_. (I assume that the rationale for their
re-introduction is that in the age of 64-bit machines, page tables for very
large 'chunks' can be very large if pages of ~1K or so are used, or something
like that.)
It does make real memory allocation (one of the advantages of paging) more
difficult, since there would now be small and large page frames. Although I
suppose it wouldn't be hard to coalesce them, if there are only two sizes, and
one's a small power-of-2 multiple of the other - like 'fragments' in the
Berkeley Fast File System for BSD4.2.
I have a query, though - how does a system with two page sizes know which to
use? On Multics (and probably on the x86), it's a per-segment attribute. But
on a system with a large, flat address space, how does the system know which
parts of it are using small pages, and which large?
Noel
> From: Johnny Billquist
> Gah. If I were to try and collect every copy made, it would be quite a
> collection.
Well, just counting the 'processor handbooks' (the little paperback things), I have
about 30. (If you add devices, that probably doubles it.) I think my
collection is complete.
> So there was a total change in terminology early in the 11/45 life, it
> would appear. I wonder why. ... I probably would not blame some market
> droids.
I was joking, but also serious. I really do think it was most likely
marketing-driven. (See below for why I don't think it was engineering-driven,
which leaves....)
I wonder if there's anything in the DEC archives (a big chunk of which are now
at the CHM) which would shed any light? Some of the archives are online there,
e.g.:
http://www.bitsavers.org/pdf/dec/pdp11/memos/
but it seems to be mostly engineering (although there's some that would be
characterized as marketing).
> one of the most important differences between segmentation and pages are
> that with segmentation you only have one contiguous range of memory,
> described by a base and a length register. This will be a contiguous
> range of memory both in virtual memory, and in physical memory.
I agree completely (although I extend it to multiple segments, each of which
has the characteristics you describe).
Which is why I think the original DEC nomenclature for the PDP-11's memory
management was more appropriate - the description above is _exactly_ the
functionality provided for each of the 8 'chunks' (to temporarily use a
neutral term) of PDP-11 address space, which don't quack like most other
'pages' (to use the 'if it quacks like a duck' standard).
One query I have comes from the usual goal of 'virtual memory' (which is the
concept most tightly associated with 'pages'), which is to allow a process to
run without all of its pages in physical memory.
I don't know much about PDP-11 DEC OS's, but do any of them do this? (I.e.
allow partial residency.) If not, that would be ironic (in view of the later
name) - and, I think, evidence that the PDP-11 'chunks' aren't really pages.
BTW, did you know that prior to the -11/45, there was a memory management
device for the PDP-11 which provided 'real' paging, the KT11-B? More here:
http://gunkies.org/wiki/KT11-B_Paging_Option
I seem to recall some memos in the memo archive that discussed it; I _think_
they mentioned why they decided not to go that way in doing memory management
for the /45, but I forget the details. (Maybe the performance hit of keeping
the page tables in main memory was significant?)
> With segmentation you cannot have your virtual memory split up and
> spread out over physical memory.
Err, Multics did that; the process' address space was split up into many
segments (a true 2-dimensional naming system, with 18 bits of segment number),
which were then split up into pages, for both virtual memory ('not all
resident'), and for physical memory allocation.
Although I suppose one could view that as two separate, sequential steps -
i.e. i) the division into segments, and ii) the division of segments into
pages. In fact, I take this approach in describing the Multics memory system,
since it's easier to understand as two independent things.
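(A schematic, hedged sketch of those two independent steps - my illustration,
with made-up field widths and structures, not actual Multics formats: the
segment number first selects a per-segment page table, and the page number
within the segment then selects a physical frame.)

/* Schematic, hedged sketch of two-step (segment, then page) address
 * translation.  Structures and field widths are illustrative only. */
#include <stdio.h>

#define PAGE_SIZE 1024u                         /* illustrative page size */

struct page_table { unsigned frame[16]; };      /* page number -> frame number */
struct segment    { struct page_table *pt; unsigned npages; };

static unsigned translate(const struct segment *segs, unsigned segno, unsigned offset)
{
    const struct segment *s = &segs[segno];     /* step 1: pick the segment */
    unsigned pageno = offset / PAGE_SIZE;       /* step 2: pick the page within it */
    unsigned within = offset % PAGE_SIZE;

    if (pageno >= s->npages)                    /* would be a fault in a real system */
        return ~0u;
    return s->pt->frame[pageno] * PAGE_SIZE + within;
}

int main(void)
{
    struct page_table pt = { { 7, 3, 12, 5 } }; /* made-up frame numbers */
    struct segment segs[1] = { { &pt, 4 } };

    /* (segment 0, offset 2500) is page 2 of the segment, i.e. frame 12 */
    printf("physical address: %u\n", translate(segs, 0, 2500));
    return 0;
}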
> You can also have "holes" in your memory, with pages that are invalid,
> yet have pages higher up in your memory .. Something that is impossible
> with segmentation, since you only have one set of registers for each
> memory type (at most) in a segmented memory implementation.
You seem to be thinking of segmentation a la Intel 8086, which is a hack they
added to allow use of more memory (although I suspect that PDP-11 chunks were
a hack of a similar flavour).
At the time we are speaking of, the Intel 8086 did not exist (it came along
quite a few years later). The systems which supported segmentation, such as
Multics, the Burroughs 5000 and successors, etc had 'real' segmentation, with
a full two-dimensional naming system for memory. (Burroughs 5000 segment
numbers were 10 bits wide.)
> I mean, when people talk about segmented memory, what most everyone
> today thinks of is the x86 model, where all of this certainly is true.
It's also (IMNSHO) irrelevant to this. Intel's brain-damage is not the
entirety of computer science (or shouldn't be).
(BTW, later Intel xx86 machines did allow you to have 'holes' in segments, via
the per-segment page tables.)
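(For contrast, a hedged sketch of what the real-mode 8086 'segments' mentioned
above actually amount to - my illustration: the segment value is just shifted
left four bits and added to the 16-bit offset to form a 20-bit physical
address, which widens the address space but gives nothing like a
two-dimensional naming scheme.)

/* Hedged sketch of 8086 real-mode addressing: physical = segment * 16 + offset,
 * a 20-bit address built from two 16-bit values - the "hack to allow use of
 * more memory" mentioned above, not segmentation in the Multics or Burroughs
 * sense. */
#include <stdio.h>

int main(void)
{
    unsigned segment = 0x1234;
    unsigned offset  = 0x0010;
    unsigned long physical = ((unsigned long)segment << 4) + offset;

    printf("%04X:%04X -> physical %05lX\n", segment, offset, physical);
    return 0;
}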
> it would be very wrong to call what the PDP-11 have segmentation
The problem is that what PDP-11 memory management does isn't really like
_either_ segmentation, or paging, as practised in other machines. With only 8
chunks, it's not like Multics etc, which have very large address spaces split
up into many segments. (And maybe _that_'s why the name was changed - when
people heard 'segments' they thought 'lots of them'.)
However, it's not like paging on effectively all other systems with paging,
because in them paging's used to provide virtual memory (in the sense of 'the
process runs with pages missing from real memory'), and to make memory
allocation simple by use of fixed-size page frames.
So any name given PDP-11 'chunks' is going to have _some_ problems. I just
think 'segmentation' (as you defined it at the top) is a better fit than the
alternative...
Noel
Depending on the system, ps may or may not need to be setuid in order to work for non-root users.
Ping needs to be setuid because it uses raw sockets which are restricted (much like opening listens on low number ports) in many systems.
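(A hedged sketch, mine rather than from the message above: opening the raw
ICMP socket is the privileged step, so an ordinary user running this gets
EPERM - which is why ping is traditionally installed setuid root, or, on
Linux, granted CAP_NET_RAW.)

/* Hedged sketch: an unprivileged process normally cannot open a raw ICMP
 * socket; the call fails with EPERM unless the program runs as root (or,
 * on Linux, has been granted CAP_NET_RAW).  That is why ping is installed
 * setuid root on traditional systems. */
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s < 0)
        printf("raw socket refused: %s\n", strerror(errno));   /* EPERM when unprivileged */
    else
        printf("raw socket opened: fd %d (we must be privileged)\n", s);
    return 0;
}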
> From: William Corcoran
> I think it's a bit more interesting to uncover why rm does not remove
> directories by default thereby obviating the need for rmdir
On early PDP-11 Unixes, 'rm' is an ordinary program, and 'rmdir' is
setuid-root, since it has to do special magic (writing into directory files,
etc). Given that, it made sense to have 'rm' run with the least amount of
privilege needed to do its job.
Noel
> From: Johnny Billquist
> For 1972 I only found the 11/40 handbook.
I have a spare copy of the '72 /45 handbook; send me your address, and I'll
send it along. (Every PDP-11 fan should have a copy of every edition of every
model's handbooks... :-)
In the meantime, I'm too lazy to scan the whole thing, but here's the first
page of Chapter 6 from the '72:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/jpg/tmp/PDP11145ProcHbook72pg6-1.j…
> went though the 1972 Maintenance Reference Manual for the 11/45. That
> one also says "page". :-)
There are a few remaining relics of the 'segment' phase, e.g. here:
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
which has this comment:
/ turn on segmentation
Also, if you look at the end, you'll see SSR0, SSR1 etc (as per the '72
handbook), instead of the later SR0, SR1, etc.
Noel
> From: Paul Winalski <paul.winalski(a)gmail.com>
> Regarding the Winchester code name, I've argued about this with Clem
> before. Clem claims that the code name refers to various advances in
> disk technology first released in the 3330's disk packs. Wikipedia and
> my own memory agree with you that Winchester referred to the 3340.
And you believe anything in Wikipedia? If so, I have a bridge to sell you! :-)
But, in this case, it's correct. According to "IBM's 360 and Early 370
Systems" (Pugh, Johnson and Palmer - a very good book, BTW), pg. 507, the
first Winchester was the 3340. The confusion comes from the fact that it had
two spindles, each of 30MB capacity, making it a so-called "30-30" system -
that being the name of Winchester's rifle.
Noel
> From: Johnny Billquist
>> Well, the 1972 edition of the -11/45 processor handbook
^^
> It would be nice if you actually could point out where this is the
> case. I just went through that 1973 PDP-11/45 handbook
^^
Yes, the '73 one (red/purple cover) had switched. It's only the '72 one
(red/white cover) that says 'segments'.
Noel
Blimey, but I nearly missed this one (I was sick in bed).
On this day in 1981, some little company called Xerox PARC introduced
something called a "mouse" (mostly because it has a tail), but I'm
struggling to find more information about it; wasn't there a photo of a
big boxy device?
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
On 2018-04-26 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > if you hadn't had the ability for them to be less than 8K, you wouldn't
> > even try that argument.
>
> Well, the 1972 edition of the -11/45 processor handbook called them segments..:-)
I think we had this argument before as well. It would be nice if you
actually could point out where this is the case. I just went through
that 1973 PDP-11/45 handbook, and all it says is "page" everywhere I look.
I also checked the 1972 PDP-11/40 handbook, and except for one mention
of "segment" in the introduction part of the handbook, where it is not even
clear whether it actually refers specifically to the MMU capabilities, that
handbook also uses the word "page" everywhere.
I also checked the PDP-11/20 handbook, but that one does not even cover
any MMU, so no mention of either "page" or "segment" can be found.
> I figure some marketing droid found out that 'paging' was the new buzzword, and
> changed the name...:-) :-)
Somehow I doubt it, but looking forward to your references... :-)
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I am sure I remember a machine which had this (which would have been running a BSD 4.2 port). Is my memory right, and what was it for (something related to swap?)?
It is stupidly hard to search for (or, alternatively, there are just no hits and the memory is false).
--tim
> From: Dave Horsfall <dave(a)horsfall.org>
> I am constantly bemused by the number of "setuid root" commands, when a
> simple "setgid whatever" will achieve the same task.
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/ken/sys4.c
/*
* Unlink system call.
*/
unlink()
{ ...
if((ip->i_mode&IFMT)==IFDIR && !suser())
goto out;
For many things, yes. Not in this particular case.
Noel
Hello all,
I recently wrote a 3B2/400 simulator on the SIMH platform. It emulates the core system board and peripherals quite well, but I am now turning my attention to emulating the 3B2 IO expansion boards. The first board I've emulated is the PORTS 4-port serial card, which came together fairly easily because I have the full source code for the SVR3 driver.
Other cards, though, are more challenging because I do not have source code for them. I would like to emulate the following two cards:
* The CTC cartridge tape controller
* The NI 10base5 Ethernet controller
Of these two, I have partial source code for the CTC driver (ct.c, ct.h, ct_lla.h, ct_deps.h), but I am missing a core file (ct_lla.c) that would greatly help explain what's going on. And I have NO source code at all for the NI driver.
There was a source code package for the NI driver called "nisrc", probably distributed on tape or floppy, but I have never seen it.
If you or anyone you know happens to have these source packages and a way to get at them, could you please let me know? I would be grateful.
-Seth
--
Seth Morabito
web(a)loomcom.com