Hi,
I'm running 2.9BSD on a pdp11/34 with an Emulex sc21 controller to some
Fuji160 disks. Booting with root on RL02 for now, but want to eventually
have the whole system on the Fujis and disconnect the rl02s.
While the disks appear to have suffered a head crash near cylinder 0 under
their previous owner, I'm having an impressive degree of success
writing to other parts of the disk.
However, when I try to mkfs, I can see the heads trying to write on the
headcrashed part of the disk. (Nice having those plexiglass covers!)
Is there a way to tell mkfs (or perhaps some other program) to not try to
write on the damaged cylinders?
thx
jake
So, I have a chance to buy a copy of a Version 5 manual, but it will cost a
lot. I looked, and the Version 5 manual doesn't appear to be online. So while
I would normally pass at that price, it might be worth it for me
to buy it, and scan it to make it available.
But, I looked in the "FAQ on the Unix Archive and Unix on the PDP-11", and it
says:
5th Edition has its on-line manual pages missing. ... Fortunately, we do
have paper copies of all the research editions of UNIX from 1st to 7th
Edition, and these will be scanned in and OCR'd.
Several questions: First, when it says "we do have paper copies of all the
research editions of UNIX", I assume it means 'we do have paper copies of
_the manuals for_ all the research editions of UNIX', not 'we do have paper
copies of _the source code for_ all the research editions of UNIX'?
Second, if it is 'manuals', did the scan/OCR thing ever happen, or is it
likely to anytime in the moderate future (next couple of years)?
Third, would a scanned (which I guess we could OCR) version of this manual be
of much use (it would not, after all, be the NROFF source, although probably
a lot of the commands will be identical to the V6 ones, for which we do have
the NROFF)?
Advice, please? Thanks!
Noel
> From: Tom Ivar Helbekkmo
> There was no fancy I/O order juggling, so everything was written in the
> same chronological order as it was scheduled.
> ...
> What this means is that the second sync, by waiting for its own
> superblock writes, will wait until all the inode and file data flushes
> scheduled by the first one have completed.
Ah, I'm not sure this is correct. Not all disk drivers handled requests in a
'first-come, first-served' order (i.e. where a request for block X, which was
scheduled before a request for block Y, actually happened before the
operation on block Y). It all depends on the particular driver; some drivers
(e.g. the RP driver) re-organized the waiting request queue to optimize head
motion, using a so-called 'elevator algorithm'.
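(To make that concrete, here's a rough sketch of the sorted-queue idea, in the
spirit of that kind of driver code but with made-up names and none of the
head-position/direction bookkeeping the real disksort()-style routines did:)

#include <stdio.h>

/* One pending disk transfer; blkno is the physical block number. */
struct buf {
    int blkno;
    struct buf *av_forw;    /* next request in the queue */
};

static struct buf *dqueue;  /* head of the pending-request queue */

/*
 * Queue a request so the list stays sorted by ascending block number.
 * The drive services the queue front to back, sweeping the heads in one
 * direction, so requests complete in block-number order, not in the
 * order they were scheduled.
 */
void diskstrategy(struct buf *bp)
{
    struct buf **pp;

    for (pp = &dqueue; *pp != NULL; pp = &(*pp)->av_forw)
        if (bp->blkno < (*pp)->blkno)
            break;
    bp->av_forw = *pp;
    *pp = bp;
}

int main(void)
{
    static struct buf reqs[] = { {40}, {7}, {120}, {3}, {55} };
    struct buf *bp;
    int i;

    for (i = 0; i < 5; i++)
        diskstrategy(&reqs[i]);

    /* Service order is by block number, not by arrival order. */
    for (bp = dqueue; bp != NULL; bp = bp->av_forw)
        printf("block %d\n", bp->blkno);
    return 0;
}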
(PS: For a good time, do "dd if=/dev/[large_partition] of=/dev/null" on a
running system with such a disk, and a lot of users on - the system will
apparently come to a screeching halt while the 'up' pass on the disk
completes... I found this out the hard way, needless to say! :-)
Since the super-block is block 1 in the partition, one might think that, even
with an elevator algorithm, writing it would more or less guarantee that all
other pending operations had completed (since it could only happen at the end
of a 'down' pass); _but_ the elevator algorithm works in terms of actual
physical block numbers, so blocks in another, lower partition might still
remain to be written.
But now that I think about it a bit, if such blocks existed, that partition's
super-block would also need to be written, so when that one completed, the
disk queue would be empty.
But the point remains - because there's no guarantee of _overall_ disk
operation ordering in V6, scheduling a disk request and waiting for it to
complete does not guarantee that all previously-requested disk operations
will have completed before it does.
I really think the whole triple-sync thing is mythology. Look through the V6
documentation: although IIRC there are instructions on how to shut the
system down, the triple sync isn't mentioned. We certainly never used it at MIT (and I
still don't), and I've never seen a problem with disk corruption _when the
system was deliberately shut down_.
Noel
Yo Jacob,
I'm ex-sun but I don't know too much about Illumos. Care to give us
the summary of why I might care about it?
On Wed, Dec 31, 2014 at 01:16:00AM -0500, Jacob Ritorto wrote:
> Hey, thanks, Derrik.
> I don't mess with Linux much (kind of an Illumos junkie by trade ;), but
> I bet gcc would. I did out of curiosity do it with the Macintosh cc (Apple
> LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)) and it throws
> warnings about our not type-defining functions because you're apparently
> supposed to do this explicitly these days, but it dutifully goes on to
> assume int and compiles our test K&R stuff mostly fine. It does
> unfortunately balk pretty badly at the naked returns we initially had,
> though. Wish it didn't because it strikes me as being beautifully simple..
>
> thx again for the encouragement!
> jake
>
>
> On Wed, Dec 31, 2014 at 1:02 AM, Derrik Walker v2.0 <dwalker(a)doomd.net>
> wrote:
>
> > On Wed, 2014-12-31 at 00:44 -0500, Jacob Ritorto wrote:
> >
> > >
> > > P.S. if anyone's bored enough, you can check out what we're up to at
> > > https://github.com/srphtygr/dhb. I'm trying to get my 11yo kid to
> > > spend a little time programming rather than just playing video games
> > > when he's near a computer. He's actually getting through this stuff
> > > and is honestly interested when he understands it and sees it work --
> > > and he even spotted a bug before me this afternoon! Feel free to
> > > raise issues, pull requests, etc. if you like -- I'm putting him
> > > through the git committing and pair programming paces, so outside
> > > interaction would be kinda fun :)
> > >
> > >
> > > P.P.S. We're actually using 2.11bsd after all..
> > >
> > I'm curious, will gcc on a modern Linux system compile K&R c?
> >
> > Maybe when I get a little time, I might try to see if I can compile it
> > on a modern Fedora 21 system with gcc.
> >
> > BTW: Great job introducing him to such a classic environment. A few
> > years ago, my now 18 year old had expressed some interest in graphics
> > programming and was in awe over an SGI O2 I had at the time, so I got
> > him an Indy. He played around with a bit of programming, but
> > unfortunately, he lost interest.
> >
> > - Derrik
> >
> >
--
---
Larry McVoy            lm at mcvoy.com            http://www.mcvoy.com/lm
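(An aside on the question quoted above about whether modern compilers still
take K&R C: gcc and clang will, depending on version and -std level, still
accept old-style function definitions, though with warnings, and things like
implicit int and value-less returns from value-returning functions draw
increasingly loud complaints. A tiny, hypothetical example of the kind of
code being discussed:)

#include <stdio.h>

/*
 * Old-style (K&R) definition: parameter types are declared between the
 * parameter list and the function body.  Leaving off the 'int' return
 * type (implicit int) is what draws the warnings mentioned above, and
 * newer compilers may reject it outright, so it's spelled out here.
 */
int add(a, b)
int a, b;
{
    return a + b;
}

int main()
{
    printf("%d\n", add(2, 3));
    return 0;
}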
> when you - say - run less to display a file, it switches to a dedicated
> region in the terminal memory buffer while printing its output, then
> restores the buffer to back where you were to begin with when you exit
> the pager
Sorry for veering away from Unix history, but this pushed one of the hottest
of my buttons. Less is the epitome of modern Unix decadence. Besides the
maddening behavior described above, why, when all screens have a scroll bar,
should a pager do its own scrolling? But for a quantitative measure of
decadence, try less --help | wc. It takes excess to a level not dreamed of
in Pike's classic critique, "cat -v considered harmful".
Doug
Hi all, I came across this last week:
http://svnweb.freebsd.org/
It's a Subversion repository of all the CSRG releases. I'm not sure if it
has been mentioned here before.
Cheers, Warren
<much discussion about quadratic search removed>
All I remember (and still support to this day) is that I've got a TERMCAP='string' in my login scripts to set the termcap entry for the specific terminal I'm logging in with.
Long ago this made things much faster. Today I think it is just a holdover that I'm not changing due to inertia, rather than any real need for it.
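(For anyone who hasn't run into this trick: tgetent() consults the TERMCAP
environment variable first, and if it holds a complete entry rather than a
file name, that entry is used directly and the termcap database is never
searched, which is where the speed-up came from. A minimal sketch, assuming a
termcap or curses library is available to link against:)

#include <stdio.h>
#include <stdlib.h>
#include <termcap.h>    /* header location varies; sometimes via curses */

int main(void)
{
    char entry[1024];   /* the traditional termcap buffer size */
    char *term = getenv("TERM");

    if (term == NULL)
        term = "dumb";

    /*
     * If $TERMCAP already holds the full entry for $TERM, tgetent()
     * returns it without reading /etc/termcap at all.
     */
    if (tgetent(entry, term) != 1) {
        fprintf(stderr, "no termcap entry for %s\n", term);
        return 1;
    }
    printf("terminal has %d columns\n", tgetnum("co"));
    return 0;
}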
David
—
David Barto
david(a)kdbarto.org
> Noel Chiappa
> The change is pretty minor: in this piece of code:
>
> case reboot:
> termall();
> execl(init, minus, 0);
> reset();
>
> just change the execl() line to say:
>
> execl(init, init, 0);
I patched init in v5 and now ps shows /etc/init as expected, even
after going from multi to single to multi mode.
Looks like init.c was the same in v5 and v6.
> Noel Chiappa:
> Just out of curiosity, why don't you set the SR before you boot the machine?
> That way it'll come up single user, and you can icheck/dcheck before you go
> to multi-user mode. I prefer doing it that way, there's as little as possible
> going on, in case the disk is slightly 'confused', so less chance any bit-rot
> will spread...
I actually do file system checks on v5 as it's the early unix I use the most:
check -l /dev/rk0
check -u /dev/rk0
same for rk1, rk2.
The v5 manual entry for check references the 'restor' command,
although the man page for that is missing.
Your idea of starting up in single user mode is a good one although
I'm not sure if it's necessary to check the file system on each boot
up. I've been running this disk image of v5 for about two years and no
blow-ups as yet. I also keep various snapshots of v5, v6 and v7 disk
images for safety reasons.
And there are text files of all the source code changes I've made, so
if disaster strikes I can redo it all.
Mark
> From: Clem Cole
> ps "knew" about some kernel data structures and had to be compiled with
> the same sources that your kernel used if you wanted the command
> field in particular to look reasonable.
Not just the command field!
The real problem was that all the parameters (e.g. NPROC) were not stored in
the kernel anywhere, so if you wanted to have one copy of the 'ps' binary
which would work on two different machines (but which were running the same
version of the kernel)... rotsa ruck.
I have hacked my V6 to include lines like:
int ninode NINODE;
int nfile NFILE;
int nproc NPROC;
etc so 'ps' can just read those variables to find the table sizes in the
running kernel. (Obviously if you modify a table format, then you're SOL.)
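(The way 'ps' gets at variables like these is the usual namelist trick: look
the symbols up in the kernel image with nlist(), then seek to those addresses
in /dev/kmem and read the values. A rough sketch of the idea, not the actual
ps source; the struct nlist layout, the symbol-name spelling, and the kernel
image path all vary from system to system:)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <nlist.h>

int main(void)
{
    struct nlist nl[2];
    int kmem, nproc;

    nl[0].n_name = "_nproc";    /* C symbols traditionally get a leading '_' */
    nl[1].n_name = 0;           /* terminates the list */

    if (nlist("/unix", nl) < 0 || nl[0].n_value == 0) {
        fprintf(stderr, "can't find nproc in /unix\n");
        return 1;
    }

    if ((kmem = open("/dev/kmem", O_RDONLY)) < 0) {
        perror("/dev/kmem");
        return 1;
    }
    lseek(kmem, (off_t)nl[0].n_value, SEEK_SET);
    read(kmem, &nproc, sizeof nproc);
    printf("nproc = %d\n", nproc);
    return 0;
}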
> From: Ronald Natalie
> The user structure of the currently running process is the only one
> that is guaranteed to be in memory ... For any processes that were swapped
> out, you couldn't read the user structure, so things that were stored there were
> often unavailable (particularly the command name).
Well, 'ps' (even the V6 stock version) was actually prepared to poke around
on the swap device to look at the images of swapped-out processes. And the
command name didn't come from the U area (it wasn't saved there in stock V6),
'ps' actually had to look on the top of the user stack (which is why it
wasn't guaranteed to be accurate - the user process could smash that).
> From: Clem Cole
> IIRC we had a table of sleep addresses so that ps could print the
> actual thing you were waiting for not just an address.
I've hacked my copy of 'ps' to grovel around in the system's symbol table,
and print 'wchan' symbolically. E.g. here's some of the output of 'ps' on
my system:
TTY  F S UID PID PPID  PRI NIC CPU TIM ADDR  SZ TXT WCHAN     COMMAND
 ?: SL S   0   0    0 -100   0  -1 127 1676  16     runout    <swapper>
 ?:  L W   0   1    0   40   0   0 127 1774  43   0 proc+26   /etc/init
 ?:  L W   0  11    1   90   0   0 127 2405  37     tout      /etc/update
 8:  L W   0  12    1   10   0   0 127 2772  72   2 kl11      -
 a:  L W   0  13    1   40   0   0 127 3122  72   2 proc+102  -
 a:  L R   0  22   13  100   0  10   0 3422 138   3           ps axl
 b:  L W   0  14    1   10   0   0 127 2120  41   1 dz11+40   - 4
It's pretty easy to interpret this to see what each process is waiting for.
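(For the curious, the symbolic WCHAN printing is just a nearest-symbol search
over the kernel's data symbols: take the largest symbol value that is <= the
process's wchan, and print name+offset. A little illustrative sketch, with a
made-up in-memory symbol table standing in for the real namelist:)

#include <stdio.h>

/* A made-up, simplified symbol table: name plus kernel data address. */
struct sym {
    char *name;
    unsigned value;
};

/* Illustrative entries only; real code would read the kernel's namelist. */
static struct sym symtab[] = {
    { "proc",   05000 },
    { "tout",   06200 },
    { "runout", 06400 },
    { "kl11",   07000 },
};
#define NSYM (sizeof symtab / sizeof symtab[0])

/* Print wchan as the nearest preceding symbol plus an octal offset. */
void prwchan(unsigned wchan)
{
    struct sym *best = NULL;
    unsigned i;

    for (i = 0; i < NSYM; i++)
        if (symtab[i].value <= wchan &&
            (best == NULL || symtab[i].value > best->value))
            best = &symtab[i];

    if (best == NULL)
        printf("%o\n", wchan);      /* no symbol found: raw octal address */
    else if (wchan == best->value)
        printf("%s\n", best->name);
    else
        printf("%s+%o\n", best->name, wchan - best->value);
}

int main(void)
{
    prwchan(05026);     /* prints "proc+26", like the listing above */
    prwchan(06200);     /* prints "tout" */
    return 0;
}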
Noel