Sorry if this has been asked before, but:
"Welcome to Eighth Edition Unix. You may be sure that it
is suitably protected by ironclad licences, contractual agreements,
living wills, and trade secret laws of every sort. A trolley car is
certain to grow in your stomach if you violate the conditions
under which you got this tape. Consult your lawyer in case of any doubt.
If doubt persists, consult our lawyers.
Please commit this message to memory. If this is a hardcopy terminal,
tear off the paper and affix it to your machine. Otherwise
take a photo of your screen. Then delete /etc/motd.
Thank you for choosing Eighth Edition Unix. Have a nice day."
Was this one person or a group effort? It's wonderful.
six years later…
A note for the list:
Warren (in, IMHO, a stroke of genius) changed the repo from xv6-minix to xv6-freebsd.
<https://github.com/DoctorWkt/xv6-freebsd>
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
At the risk of releasing more heat than light (so if you feel
compelled to flame by this message, please reply just to me
or to COFF, not to TUHS):
The fussing here over Python reminds me very much of Unix in
the 1980s. Specifically wars over editors, and over BSD vs
System V vs Research systems, and over classic vs ISO C, and
over embracing vendor-specific features (CSRG and USG counting
as vendors as well as Digital, SGI, Sun, et al) vs sticking
to the common core. And more generally whether any of the
fussing is worth it, and whether adding stuff to Unix is
progress or just pointless complication.
Speaking as an old fart who no longer gets excited about this
stuff except where it directly intersects something I want to
do, I have concluded that nobody is entirely right and nobody
is entirely wrong. Fancy new features that are there just to
be fancy are usually a bad idea, especially when they just copy
something from one system to a completely different one, but
sometimes they actually add something. Sometimes something
brand new is a useful addition, especially when its supplier
takes the time and thought to fit cleanly into the existing
ecosystem, but sometimes it turns out to be a dead end.
Personal taste counts, but never as much as those of us
brandishing it like to think.
To take Python as an example: I started using it about fifteen
years ago, mostly out of curiosity. It grew on me, and these
days I use it a lot. It's the nearest thing to an object-
oriented language that I have ever found to be usable (but I
never used the original Smalltalk and suspect I'd have liked
that too). It's not what I'd use to write an OS, nor to do
any sort of performance-limited program, but computers and
storage are so fast these days that that rarely matters to me.
Using white space to denote blocks took a little getting used
to, but only a little; no more than getting used to typing
if ...: instead of if (...). The lack of a C-style for loop
occasionally bothers me, but iterating over lists and sets
handles most of the cases that matter, and is far less cumbersome.
It's a higher-level language than C, which means it gets in the
way of some things but makes a lot of other things easier. It
turns out the latter is more significant than the former for
the things I do with it.
The claim that Python doesn't have printf (at least since ca. 2.5,
when I started using it) is just wrong:
print 'pick %d pecks of %s' % (n, fruit)
is just a different spelling of
printf("pick %d pecks of %s\n", n, fruit)
except that sprintf is no longer a special case (and snprintf
becomes completely needless). I like the modern
print(f'pick {n} pecks of {fruit}')
even better; f strings are what pushed me from Python 2 to
Python 3.
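To spell out the sprintf point with a minimal Python 3 sketch (the
values here are made up): formatting is an ordinary expression that
yields a string, so you can keep the result or print it, and no
separate sprintf or snprintf interface is needed.

    # Both spellings build the same string; formatting is just an
    # expression, so the sprintf/snprintf special cases disappear.
    n, fruit = 3, 'quinces'
    s1 = 'pick %d pecks of %s' % (n, fruit)   # the % spelling
    s2 = f'pick {n} pecks of {fruit}'         # the f-string spelling
    assert s1 == s2
    print(s1)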
I really like the way modules work in Python, except the dumbass
ways you're expected to distribute a program that is broken into
modules of its own. As a low-level hacker I came up with my own
way to do that (assembling them all into a single Python source
file in which each module is placed as a string, evaled and
inserted into the module table, and then the starting point
called at the end; all using documented, stable interfaces,
though they changed from 2 to 3; program actually written as
a collection of individual source files, with a tool of my
own--written in Python, of course--to create the single file
which I can then install where I need it).
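For readers who want to picture that trick, here is a minimal sketch
(not the actual tool; the module name and source are invented) of
carrying a module as a string inside one file, exec'ing it into a
fresh module object, and registering it in the module table, all via
documented, stable interfaces:

    import sys, types

    # Source of a bundled module, carried as a string in the one file.
    UTIL_SRC = "def greet(name):\n    return 'hello, ' + name\n"

    mod = types.ModuleType('util')   # create an empty module object
    exec(UTIL_SRC, mod.__dict__)     # run the source in its namespace
    sys.modules['util'] = mod        # register it in the module table

    import util                      # ordinary imports now find it
    print(util.greet('world'))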
I have for some years had my own hand-crafted idiosyncratic
program for reading mail. (As someone I know once said,
everybody writes a mailer; it's simple and easy and makes
them feel important. But in this case I was doing it just
for myself and for the fun of it.) The first edition was
written 20 years ago in C. I rewrote it about a decade ago
in Python. It works fine; can now easily deal with on-disk
or IMAP4 or POP3 mailboxes, thanks both to modules as a
concept and to convenient library modules to do the hard work;
and updating in the several different work and home environments
where I use it no longer requires recompiling (and the source
code need no longer worry about the quirks of different
compilers and libraries); I just copy the single executable
source-code file to the different places it needs to run.
For me, Python fits into the space between shell scripts and
awk on one hand, and C on the other, overlapping some of the
space of each.
But personal taste is relevant too. I didn't know whether I'd
like Python until I'd tried it for a few real programs (and
even then it took a couple of years before I felt like I'd
really figured out how to use it). Just as I recall, about
45 years ago, moving from TECO (in which I had been quite
expert) to ed and later the U of T qed and quickly feeling
that ed was better in nearly every way; a year or so later,
trying vi for a week and giving up in disgust because it just
didn't fit my idea of how screen editors should work; falling
in love with jim and later sam (though not exclusively, I still
use ed/qed daily) because they got the screen part just right
even if their command languages weren't quite such a good match
for me.
And I have never cottoned on to Perl for, I suspect, the same
reason I'd be really unhappy to go back to TECO. Tastes
evolve. I bet there's a lot of stuff I did in the 1980s that
I'd do differently could I have another go at it.
The important thing is to try new stuff. I haven't tried Go
or Rust yet, and I should. If you haven't given Python or
Perl a good look, you should. Sometimes new tools are
useless or cumbersome, sometimes they are no better than
what you've got now, but sometimes they make things easier.
You won't know until you try.
Here endeth today's sermon from the messy office, which I ought
to be cleaning up, but preaching is more fun.
Norman Wilson
Toronto ON
I’m re-reading Brian Kernighan’s book on Early Unix (‘Unix: A History & Memoir’)
and he mentions the (on disk) documentation that came with Unix - something that made it stand out, even for some decades.
Doug McIlroy has commented on v2-v3 (1972-73?) being an extremely productive year for Ken & Dennis.
As well as code, they wrote papers and man pages, and probably more.
I’ve never heard anyone mention keyboard skills with the people of the CSRC - does anyone know?
There’s at least one Internet meme that highly productive coders necessarily have good keyboard skills,
which leads to also producing documentation or, at least, not avoiding it entirely, as often happens commercially.
Underlying this is something I once caught as a random comment:
The commonality of skills between Writing & Coding.
Does anyone have any good refs for this crossover?
Is it a real effect, or a biased view?
That great programmers are also “good writers”:
takes time & focus, clarity of vision, deliberate intent and many revisions, chopping away the cruft that isn’t “the thing” and “polishing”, not rushing it out the door.
Ken is famous for his brevity and succinct statements.
Not sure if that’s a personal preference, a mastered skill or “economy in everything”.
steve j
=========
A Research UNIX Reader: Annotated Excerpts from the Programmer's Manual, 1971-1986
M.D. McIlroy
<https://www.cs.dartmouth.edu/~doug/reader.pdf>
<https://archive.org/details/a_research_unix_reader/page/n13/mode/2up>
pg 10
3.4. Languages
CC (v2 page 52)
V2 saw a burst of languages:
a new TMG,
a B that worked in both core-resident and software-paged versions,
the completion of Fortran IV (Thompson and Ritchie), and
Ritchie's first C, conceived as B with data types.
In that furiously productive year Thompson and Ritchie together
wrote and debugged about
100,000 lines of production code.
=========
Programming's Dirtiest Little Secret
Wednesday, September 10, 2008
<http://steve-yegge.blogspot.com/2008/09/programmings-dirtiest-little-secret…>
It's just simple arithmetic. If you spend more time hammering out code, then in order to keep up, you need to spend less time doing something else.
But when it comes to programming, there are only so many things you can sacrifice!
You can cut down on your documentation.
You can cut down on commenting your code.
You can cut down on email conversations and
participation in online discussions, preferring group discussions and hallway conversations.
And... well, that's about it.
So guess what non-touch-typists sacrifice?
All of it, man.
They sacrifice all of it.
Touch typists can spot an illtyperate programmer from a mile away.
They don't even have to be in the same room.
For starters, non-typists are almost invisible.
They don't leave a footprint in our online community.
=========
--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA
mailto:sjenkin@canb.auug.org.au http://members.tip.net.au/~sjenkin
The discussion about the 3B2 triggered another question in my head: what were the earliest multi-processor versions of Unix and how did they relate?
My current understanding is that the earliest one is a dual-CPU VAX system with a modified 4BSD done at Purdue. This would have been late 1981, early 1982. I think one CPU was acting as master and had exclusive kernel access, while the other CPU would only run user-mode code.
Then I understand that Keith Kelleman spent a lot of effort to make Unix run on the 3B2 in an SMP setup, essentially going through the source, finding all critical sections, and surrounding those with spinlocks. This would be around 1983, and became part of SVr3. I suppose that the “spl()” calls only protected critical sections shared between the main thread and interrupt sequences, so a manual review was necessary to consider each kernel data structure for parallel access issues in the case of two CPUs.
Any other notable work in this area prior to 1985?
How was the SMP implementation in SVr3 judged back in its day?
Paul
I do not know whether it is right, and i am surely not the right
person to mourn in public, but i wanted to forward this.
I only spoke to him once, when he mailed me in private, but i could not
help him with supporting Uganda, even though i admired what he was
doing, and often looked on.
Local capacity building, and (mutual) respect for local culture
and practice, despite all sharp spikes and edges the storm causes
over time, that is surely the way to do it right.
--- Forwarded from Bram Moolenaar <Bram(a)Moolenaar.net> ---
Sender: vim_announce(a)googlegroups.com
To: vim_announce(a)googlegroups.com
Subject: Message from the family of Bram Moolenaar
Message-Id: <20230805121930.4AA8F1C0A68(a)moolenaar.net>
Date: Sat, 5 Aug 2023 13:19:30 +0100 (WEST)
From: Bram Moolenaar <Bram(a)Moolenaar.net>
Reply-To: vim_announce(a)googlegroups.com
List-ID: <vim_announce.googlegroups.com>
Dear all,
It is with a heavy heart that we have to inform you that Bram Moolenaar passed away on 3 August 2023.
Bram was suffering from a medical condition that progressed quickly over the last few weeks.
Bram dedicated a large part of his life to VIM and he was very proud of the VIM community that you are all part of.
We as family are now arranging the funeral service of Bram which will take place in The Netherlands and will be held in the Dutch language. The exact date, time and place are still to be determined.
Should you wish to attend his funeral then please send a message to funeralbram(a)gmail.com. This email address can also be used to get in contact with the family regarding other matters, bearing in mind the situation we are in right now as family.
With kind regards,
The family of Bram Moolenaar
--
--
You received this message from the "vim_announce" maillist.
For more information, visit http://www.vim.org/maillist.php
---
You received this message because you are subscribed to the Google Groups "vim_announce" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vim_announce+unsubscribe(a)googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/vim_announce/20230805121930.4AA8F1C0A68%4….
-- End forward <20230805121930.4AA8F1C0A68(a)moolenaar.net>
--steffen
|
|Der Kragenbaer, The moon bear,
|der holt sich munter he cheerfully and one by one
|einen nach dem anderen runter wa.ks himself off
|(By Robert Gernhardt)
Doug McIlroy:
This reminds me of how I agonized over Mike Lesk's refusal to remove
remote execution from uucp.
====
Uux, the remote-execution mechanism I remember from uucp, had
rather better utility than the famous Sendmail back-door: it
was how uucp carried mail, by sending a file to be handed to
mailer on the remote system. It was clearly dangerous if
the remote site accepted any command, but as shipped in V7
only a short list of remote commands was allowed: mail rmail
lpr opr fsend fget. (As uucp was used to carry other things
like netnews, the list was later extended by individual sites,
and eventually moved to a file so reconfiguration needn't
recapitulate compilation).
Not the safest of mechanisms, but at least in V7 it had a use
other than Mike fixing your system for you.
Is there some additional history here? e.g. was the list of
permitted commands added after arguments about safety, or
some magic command that let Mike in removed? Or was there a
different remote-execution back door I don't remember and don't
see in a quick look at uuxqt.c?
Norman Wilson
Toronto ON
> From: Will Senn
> when did emacs arrive in unix and was it a full-fledged text editor
> when it came or was it sitting on top of some other subsystem
Montgomery Emacs was the first I knew of; it started on PDP-11 UNIX.
According to:
https://github.com/larsbrinkhoff/emacs-history/blob/sources/docs/Montgomery…
Montgomery Emacs started in 1980 or so; here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/emacs/emacs.doc
is a manual from May, 1981.
It had pretty full EMACS functionality, but the editor was not written in an
interpreted extension language of any kind (like the original, and like the
much later GNU Emacs); it was written in C. It did have macros for extensions, but they
were written in Emacs commands, so, like the TECO that the original was
written in, their source looks kind of like line noise. (Does anyone young
even know what line noise looks like any more? I feel so old - and I'm a
youngster compared to McIlroy!)
> Was TECO ever on unix?
I don't think it was widespread, but there was a TECO on the PDP-11 UNIXes at
MIT; until Montgomery Emacs arrived, it was the primary editor used on those
machines.
Not that most people used TECO commands for editing; early on, they added '^R
mode' to the UNIX TECO, similar to the one on ITS TECO, and a macro package
was written for it (in TECO - so again, the source looks like line noise);
the command set was like a stripped down EMACS - about a dozen command
characters total; see the table about a page down here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/teco/help
All the source, and documentation, such as it is, is available here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/teco/
but don't even think about running it. It's written in MACRO-11, and it used
a version of that hacked at MIT to run on UNIX. To build new versions of
that, you need a special linker - written in BCPL. So you also need the UNIX
BCPL compiler.
Noel
> From: Rob Pike
> There was a guy in production at Google using Unix TECO as his main
> editor when I joined in 2002.
Do you happen to know which version it was, or what it was written in?
It must have been _somebody_'s re-implementation, but I wonder who or where
(or why :-).
Noel
Steve, thank you for the recollections - that is precisely the sort of story I was hoping to hear regarding your Interdata work. I had found myself quite curious why it would have wound up on a shelf after the work involved, and that makes total sense. That's a shame too; it sounds like the 8/32 could've picked up quite some steam, especially beating the VAX to the punch as a UNIX platform. But hey, it's a good thing so much else precipitated from your work!
Also, those sorts of microarchitectural bugs keep me up at night. For all the good in RISC-V there are also now maaaaany fabs with more license than ever to pump out questionable ICs. Combine that with questionable boards with strange bus architectures, and gee our present time sure does present ripe opportunities to experiment with tackling those sorts of problems in software. Can't say I've had the pleasure but it would be nice to still be able to fix stuff with a wire wrap in the field...
- Matt G.
P.S. TUHS cc as promised, certainly relevant information re: Interdata 8/32 UNIX.
------- Original Message -------
On Friday, August 4th, 2023 at 6:17 PM, scj(a)yaccman.com <scj(a)yaccman.com> wrote:
> Sorry for the year's delay in responding... I wrote the compiler for the Interdata, and Dennis and I did much of the debugging. The Interdata had much easier addressing for storage: the IBM machine made you load a register, and then you had a limited offset from that register that you could use. I think IBM was 10 bits, maybe 12. But all of it way too small to run megabyte-sized programs. The Interdata allowed a larger memory offset and pretty well eliminated the offsets as a problem. I seem to recall some muttering from Dennis and Ken about the I/O structure, which was apparently somewhat strange but much less weird than the IBM.
>
> Also, IBM and Interdata were big-endian, and the PDP was little-endian. This gave Dennis and Ken some problems since it was easy to get the wrong endianness, which blew gaskets when executed or copied into the file system. Eventually, we got the machine running, and it was quite nice: true 32-bit computing, and reasonably fast once we got the low-level quirks out (including a famous run-in with the "you are not expected to understand this" code in the kernel, which, it turned out, was a prophecy that came true). On the whole, the project was so successful that we set up a high-level meeting with Interdata to demo and discuss cooperation. And then "the bug" hit. The machine would be running fine, and then Blam! it had leapt into low memory and aborted with no hint as to what or where the fault was.
>
> We finally tracked down the problem. The Interdata was a microcode machine. And older Unix system calls would return -1 if they failed. In V7, we fixed this to return 0, but there was still a lot of user code that used the old convention. When the Interdata saw a request to load -1 it first noticed that the integer load was not on an address divisible by 4, and jumped to a location in the microcode and executed a couple of microinstructions. But then it also noticed that the address was out of range and entered the microcode again, overwriting the original address that caused the problem and freezing the machine with no indication of where the problem was. It took us only a day or two to see what the problem was, and it was hardware, and they would need to fix it. We had our meeting with Interdata, gave a pretty good sales pitch on Unix, and then said that the bug we had found was fatal and needed to be fixed or the deal was off. The bottom line, they didn't want to fix the bug in the hardware. They did come out with a Unix port several years later, but I was out of the loop for that one, and the Vax (with the UCB paging code) had become the machine of choice...
>
> ---
>
> On 2023-07-25 16:23, segaloco via COFF wrote:
>
>> So I've been studying the Interdata 32-bit machines a bit more closely lately and I'm wondering if someone who was there at the time has the scoop on what happened to them. The Wikipedia article gives some good info on their history but not really anything about, say, failed follow-ons that tanked their market, significant reasons for avoidance, or anything like that. I also find myself wondering why Bell didn't do anything with the Interdata work after springboarding further portability efforts while several other little streams, even those unreleased like the S/370 and 8086 ports seemed to stick around internally for longer. Were Interdata machines problematic in some sort of way, or was it merely fate, with more popular minis from DEC simply spacing them out of the market? Part of my interest too comes from what influence the legacy of Interdata may have had on Perkin-Elmer, as I've worked with Perkin-Elmer analytical equipment several times in the chemistry-side of my career and am curious if I was ever operating some vague descendent of Interdata designs in the embedded controllers in say one of my mass specs back when.
>>
>> - Matt G.
>>
>> P.S. Looking for more general history hence COFF, but towards a more UNIXy end, if there's any sort of missing scoop on the life and times of the Bell Interdata 8/32 port, for instance, whether it ever saw literally any production use in the System or was only ever on the machines being used for the portability work, I'm sure that could benefit from a CC to TUHS if that history winds up in this thread.
> Most of the time I'd rather not have to care whether the thing
> I'm printing is a string, or a pointer, or an integer, or whatever:
> I just want to see its value.
> Go has %v for exactly this. It's very nice for debugging.
Why so verbose? In Basic, PRINT required no formatting directives at all.
Doug
> From: Clem Cole
> first two subsystems for the 11 that ran out of text space were indeed
> vi and Pascal subsystems
Those were at Berkeley. We've established that S-I&D were in V6 when it was
released in May, 1975 - so my question is 'what was Bell doing in 1975 that
needed more than 64KB?'
The kernel, yeah, it could definitely use S-I&D on a larger system
(especially when you remember that stock V6 didn't play any tricks with
overlays, and also dedicated one segment - the correct term, used in the 1972
-11/45 processor manual - to the user structure, and one to the I/O page,
limiting the non-S-I&D kernel to 48KB). But what user commands?
It happens that I have a complete dump of one of the MIT systems, so I had a
look to see what _we_ were running S-I&D on. Here's the list from /bin (for
some reason that machine doesn't have a /usr/bin):
a68
a86
c86
emacs
lisp
ndd
send
teco
The lisp wasn't a serious use; I think the only thing we ever used it for was
'doctor'. So, two editors, a couple of language tools, an email tool (not
sure why that one got it - maybe for creating large outgoing messages). (The
ndd is probably to allow the biggest possible buffers.)
Nothing in /etc, and in /lib, just lint1 and lint2 (lint, AFAICT, post-dates
V6). Not a lot.
So now I'm really curious what Bell was using S-I&D for. (If I weren't lazy,
I'd pull the V6 distro - which is only available as RK images, and individual
files, alas - and look in /bin and everywhere and see if I can find anything.
I suspect not, though.)
Anyone have any guesses/suggestions? Maybe some custom applications?
Noel
Does anyone know if there are any surviving examples of SVR2 for the PDP-11? Various SVR2 manuals still make mention of the assembler, linker, etc. and the pdp11 variable is present in machid(1)*. And on the note of the later years of the PDP-11, was there any hope for SVR3 on the PDP? I presume the introduction of demand paging was the end of things but I would be curious for anyone's recollections on the final years of System V on the PDP-11.
- Matt G.
P.S. *interesting little 3B5 side note, found as I was checking references: machid(1) in the "System V" branded manual from the initial System V commercial release mentions the pdp11, vax, and u3b machines, the latter being the 3B20S. However, the "Release 5.0" branded manuals also make mention of the u3b5 machine, the 3B5. The System V manuals are from January 1983 and the Release 5.0 manuals are from June 1982. There isn't an earlier reference to cite, as machid(1) was introduced in Release 5.0, at least from the literature I have available. The 4.x series ran on at least the PDP-11, VAX, and 3B20S computers, matching those listed in the System V manual. From what I have available, initial 3B5 literature was distributed in the form of small binders a little different from the grey-on-black 1984 DEC Processor SVR2 binders, possibly right on the cusp of the split of p_man from u_man, as this 3B5 User's Manual that I have contains sections 1-6 rather than just 1 and 6. They're dark grey with a large orange square in the middle (I believe I've sent a photograph of the manual before).
As a longtime user and lover of ed/ex/vi, I don't know much about emacs,
but lately I've been using it more (as it seems like any self-respecting
lisper has to at least have a passing acquaintance with it). I recently
went off and got MACLISP running in ITS. As part of that exploration, I
used EMACS, but not just any old emacs: emacs in its first incarnation
as a set of TECO macros. To me, it just seemed like EMACS. I won't bore
you with the details - imagine lots of control and escape sequences,
many of which are the same today as then. This was late 70's stuff.
My question for the group is: when did emacs arrive in unix, and was it
a full-fledged text editor when it came, or was it sitting on top of some
other subsystem in unix? Was TECO ever on unix?
Will
> From: Clem Cole
> A new hire in 1976, Jeff Mitchell supposedly had a bet with Bill
> Strecker that he could implement an 11 on a single "hex high" CPU board
> if he got rid of the lights and switches. He ran out of room to
> implement separate I/D, so it became an 11/40 class [and it has an
> 8008-1 that runs the front panel].
I don't know about the Strecker story, but the first PDP-11 CPU on a single
card (a hex card) was the KD11-D:
https://gunkies.org/wiki/KD11-D_CPU
of the -11/04. It didn't have _any_ memory management, though (or a front
panel; to get that, you had to use the KY11-LB:
https://gunkies.org/wiki/KY11-LB_Programmer%27s_Console
which added another quad card). The first -11 CPU i) on a single card and
ii) with memory management was the KDF11-A:
https://gunkies.org/wiki/KDF11-A_CPU
The first -11 CPU i) on a single card and ii) with split I+D memory
management was the KDJ11-A.
> It was not until 11/44 that DEC was able to make a hex height
> implementation of the 11 that managed to cram a full 11/70 into that
> system.
I'm not sure what your point is here? The KD11-Z CPU of the -11/44:
https://gunkies.org/wiki/KD11-Z_CPU
was a _minimum_ of five hex boards; a little smaller than a KB11-B (15 hex
cards). Floating point was an extra card; CIS was 2 more.
> if you look at the link line of sys/run the 45 does not have -i
Split I+D for the kernel was not supported by the linker in V6; a combination
of 'sysfix' (a special post-processor, which took as input a relocatable
linked image) and code in m45.s was needed.
https://gunkies.org/wiki/Upgrading_UNIX_Sixth_Edition
https://gunkies.org/wiki/UNIX_V6_memory_layout
The code in m45.s to handle split I+D in the kernel:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/sys/conf/m45.s
starts at 'start:' and is adequately commented to tell you what it's doing
when it plays around with kernel memory.
> From: Will Senn
> with I/D, you can use 64k for I and 64k for D. Was that it, or were
> there other tricks to get even more allocated
I have this vague memory that someone (Berkeley, I think?) added support for
automatic code overlays in user processes. The programmer had to decide which
modules went in which overlays, but after that it was all automagic. There
was a 4xx code allocated to them.
I think the support for that (in e.g. the linker) was somehow involved with
the use of overlays in the kernel, but I don't know the details (nothing
after V6 is at all interesting to me).
> didn't the 11 max out at 256k)?
You need to distinguish between i) the amount of memory the machine could
handle, and ii) the amount of memory that running code could 'see' at any
instant. The latter was always either 64KB, or 64KB+64KB (with split I+D
turned on, on CPUs which supported it).
The former, it's complicated. Mostly, on UNIBUS only machines, it was 256KB.
(Although there was the Able ENABLE:
https://gunkies.org/wiki/Able_ENABLE
which added an Extended UNIBUS, and could take them up to 4MB.) The -11/70,
as mentioned, had a Main Memory Bus, and could handle up to 4MB. The -11/44
had an Extended UNIBUS, and could also handle up to 4MB (but only after the
MS11-P became available; there were only 4 main memory slots in the
backplane, and MS11-M cards were only 256KB.) On QBUS machines, after the
KDF11-A (Revision A), which only supported 256 KB, all later revisions and
CPUs could also handle up to 4MB.
Noel
not quite split i and d but i do have a memory of some code which could run in three parts as overlays.
this could have been through exec’ing the next overlay with an appropriate argv and a pipe or file of data, or, perhaps there was some kernel support for overlays in early unix.
anyone seen evidence of this?
sadly i cannot remember where i saw it; i want to say it was a Versatec printer driver but i am pretty sure that is rubbish.
-Steve
> From: Will Senn
> Does unix (v7) know about the PDP-11 45's split I/D space through
> configuration or is it convention and programmer's responsibility to
> know and manage what's actually available?
There are two different cases: i) support of split I+D in the kernel, and
ii) support of split I+D in user processes. Both arrived with V6; the
V5 source:
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/sys/conf/mch.s
https://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/usr/sys/ken/main.c
(the former for the kernel; the latter for users) shows no sign of it.
> From: Kenneth Goodwin <kennethgoodwin56(a)gmail.com>
> 1. I don't think the 11/45 had split I & d.
> But I could be wrong.
> That did not appear until the 11/70
You are wrong.
The chief differences between the KB11-A&-D of the -11/45 and the -B&-C of
the -11/70 were i) the latter had a cache, and ii) the latter used the 32-bit
wide Main Memory Bus, which also allowed up to 4 Mbytes of main memory.
Detail here:
https://gunkies.org/wiki/PDP-11/70
along with a couple of lesser differences.
> From: "Ronald Natalie"
> with only 8 segment registers combined for code, data, and stack
I think you meant for code, data, and user block.
> The 55 (just a tweaked 45)
The /50 and /55 had the identical KB11-A&-D of the /45; the difference was
that they came pre-configured with Fastbus memory.
> In addition the 23/24/J-11 and those derived processors did.
No; the F-11 processors did not support I&D, the J-11 did.
Noel
> And I think even V7 make supported what you described, as well as implicit rules for compiling .c into a .o or into a binary.
>
> Warner Losh
You're right, I just tried it out. Been avoiding that pattern for years because I swear some make implementation I used at one point was very unhappy with that, but if V7 does it, then whatever implementation that was is probably not what I want to be using anyway.
Also shows how little I've used the specifics of BSD: I've never made a Makefile using bsd.prog.mk, although I have this desire for a write-once-build-everywhere Makefile, which the preponderance of build systems that generate Makefiles implies is an exercise in futility...
On that note, one quirk I've found with the implicit .c.o rule on older UNIX, just tested on V7 and System III, is that they render:
cc -c $<
rather than
cc -c -o $@ $<
If you have an object list with objects in several different directories, it spits them all out in the CWD, causing problems if you have similarly named files in multiple directories, and then failing outright on the final link if it's something like $(CC) -o $(BIN) $(OBJS), because $(OBJS) is a string of object pathnames with the full nested paths, not just the resulting *.o files.
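For illustration, here is a minimal sketch of the workaround (file names are hypothetical, and it assumes a make whose suffix rules apply to targets in subdirectories, as GNU make's do): redefining the .c.o rule with an explicit -o $@ leaves each object beside its source instead of dropping it in the CWD. Recipe lines begin with a tab, as usual.

    OBJS = src/main.o src/util.o
    BIN  = prog

    $(BIN): $(OBJS)
            $(CC) -o $(BIN) $(OBJS)

    .c.o:
            $(CC) $(CFLAGS) -c -o $@ $<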
Granted, I could be anti-patterning here for all I know, I haven't worked closely with a whole lot of Make-based projects that aren't my own. Maybe I just haven't read these darn papers I'm always hunting down enough.
- Matt G.
"Lessons learned" overlooked the Morris worm, which exploited not only
the unpardonable gets interface, but also the unpardonable back door
that Allman built into sendmail.
This reminds me of how I agonized over Mike Lesk's refusal to remove
remote execution from uucp. (Like Eric, Mike created the feature to
help fix the myriad trouble reports these communication facilities
stimulated.) It seemed irresponsible to distribute v7 with the feature
present, yet the rest of uucp provided an almost indispensable
service. The fig leaf for allowing uucp in the distribution was that
remote execution was described in the manual. If you didn't like it
you could delete or fix uucp. (Sendmail's Trojan horse was
undocumented, though visible in the code.)
Doug
Hi All,
I got some questions recently about getting v7 working, so I fired up
OBS to create a video walkthrough of the install process and first steps
(it's basically following my v7 note, but hey some folks dig video). The
video is totally amateur hour, but it was fun. I never get tired of
logging in as dmr, writing hello.c, running cc, running hello, and watching
the magic of
hello, world
appear on "screen".
As a reminder - the note (and thus, the video) walks the user through
installing OpenSIMH (including pdp11), building a tape image, installing
to disk from tape, booting off the disk, building and using a DZ-11 as a
telnet listener on 16 lines, adding a user, running learn, and piddly
stuff like setting baud, delays, and such. Not a lot of hand-holding,
but some.
When I get around to it, I'll probably update the note to add additional
test environments (I'm pretty sure it works anywhere OpenSIMH does, but
some folks like to see there system or one kinda like it in the list of
tested systems). I'm running LMDE5 and Debian 12 Bookworm these days, so
I know they work there in addition to pretty much any Linux Mint, MX
Linux, FreeBSD, Mac OS, etc.
I'm still in awe of Haley and Ritchie's Setting Up Unix - Seventh
Edition as the basis of my note - 44 y.o. and counting... for holding up
so well.
The blog:
https://decuser.github.io
The blog post:
https://decuser.github.io/unix/research-unix/v7/videos/2023/07/14/installin…
The note blog post:
https://decuser.github.io/unix/research-unix/v7/2022/10/28/installing-and-u…
Later,
Will
Good day all, I'm emailing to offer a duplicate UNIX document I picked up free of charge to whoever speaks for it first, I'll even cover the shipping. What I've got is a second copy of the Document Processing Guide shipped with the initial System V version, code 341-920, from 1983.
Now for the caveat: I ordered this second copy as it was in much rougher shape than the one I currently have. I intend to chop the spine off so I can get quality scans of the pages rather than having to deal with creasing the hell out of the binding on my other copy (unlike the manuals and some of the other guides, this one is a typical paperback glue binding.) Once I'm done with the scans I'm just going to put all of the pages in a binder (as they're already Bell-style 7-hole punched.) That to say, I'm not ready to ship it right now, just got it today, still need to do the scans.
As for contents, this contains the System V-era versions of the following papers (titles paraphrased):
- Advanced Editing on UNIX
- Sed
- NROFF/TROFF User's Manual
- TROFF Tutorial
- Tbl
- Eqn
- MM Macros Manual
- View Graphs and Slide Macros
Anywho, figured I'd see if anyone was interested in this after I'm done with it. Otherwise I'll just see if a library around here is interested.
- Matt G.