'skeeve' is my domain name. Robbins is my surname.
Sorry about that; up too late with too many balls
in the air (packing, finishing a tax return, listening
to our provincial election results).
At least I didn't further truncate it to skeev, as
Ken might have done.
UNIX/WORLD started in 1984, was renamed UnixWorld Magazine: Open
Systems Computing in 1991 and then UnixWorld's Open Computing in 1994,
and folded in 1995.
SunExpert started in 1989, was renamed Server/Workstation Expert in
1999, and folded in 2001. I always enjoyed Mike O'Brien's offbeat
"Ask Mr. Protocol" column.
> From: Dan Cross <crossd(a)gmail.com>
> There were several, starting I guess in the 80s mostly. The one I remember
> in particular was "Unix Review", but there were a few "journal" type
> magazines that also specialized in Unix-y things (e.g., ";login:" from
> USENIX; still published, I believe), and several associated with particular
> vendors: "SunExpert" was one, if I recall correctly.
>
> Occasionally, Unix and related things showed up in the "mainstream"
> consumer computer press of the time. I can remember in particular an issue
> of "PC Magazine" (I think June of 1993) that ran a lengthy couple of
> articles reviewing machines from Sun and SGI, in addition to versions of Unix
> that ran on PCs (interestingly, Linux was omitted despite really starting
> to capture a lot of the imagination in that space; similarly I don't recall
> any mention of BSD).
>
> Some of these old magazines are definitely blasts from the past.
>
> - Dan C.
>
>
>
> On Wed, Jun 11, 2014 at 11:10 PM, Sergey Lapin <slapinid(a)gmail.com> wrote:
>
> > Hi, all!
> >
> > I've read recently published link to byte article and got an idea....
> > Was there a magazine related to UNIX systems in 70s-80s?
> > I had so much fun reading that Byte issue, even ads (especially ads!)
> > It is so fun...
> >
> ;login: is alive and well.
For a few years Usenix even published a refereed technical
journal, "Computing Systems", quite different in tone from
;login:. It had some nice content. Does anyone know why
it folded?
Doug
Dan Cross:
... there were a few "journal" type
magazines that also specialized in Unix-y things (e.g., ";login:" from
USENIX; still published, I believe) ...
======
;login: is alive and well. So is USENIX. It's no longer
the UNIX user's group it started as many decades ago; the
focus has broadened to advanced computing and systems
research, though the descendants of UNIX are still prominent
in those areas.
For an old-fashioned programmer/systems hack/systems generalist
like me, it's still quite a worthwhile journal and a worthwhile
organization. They've even been known to have a talk or two
about resurrecting old versions of UNIX.
I'm just off to the federation of medium-sized conferences
and workshops that has grown out of the former USENIX
Annual Technical Conference. I'm looking forward to it.
Norman Wilson
Toronto ON
Hi, all!
I've read recently published link to byte article and got an idea....
Was there a magazine related to UNIX systems in 70s-80s?
I had so much fun reading that Byte issue, even ads (especially ads!)
It is so fun...
Phil Garcia wrote:
I've always wondered about something
else, though: Were the original Unix authors annoyed when they learned that
some irascible young upstart named Richard Stallman was determined to make
a free Unix clone? Was he a gadfly, or just some kook you decided to
ignore? The fathers of Unix have been strangely silent on this topic for
many years. Maybe nobody's ever asked?
Gnu was always taken as a compliment. And of course the Unix clone
was pie in the sky until Linus came along. I wonder about the power
relationship underlying "GNU/Linux", as rms modestly styles it.
There are certain differences in taste between Unix and Gnu, vide
emacs and texinfo. (I grit my teeth every time a man page tells me,
"The full documentation for ___ is maintained as a Texinfo file.")
But all disagreement is swept away before the fact that the old
familiar environment is everywhere, from Cray to Apple, with rms
a very important contributor.
Doug
Does anyone have that running on anything? If so, I'd like a copy of the
lint libraries, probably /usr/lib/ll* or something like that.
It's not well known but I spent a pile of time creating lint libraries for
pure BSD, System V, etc, so you could lint your code against a target and
know if you let some non-standard stuff creep in.
I suppose I could fire up a Sun3 emulator like this and find them:
http://www.abiyo.net/retrocomputing/installingsunos411tosun3emulatedintme08…
If someone has a SunOS 4.1.1 box on the net and can give me a login (non-root)
that would be appreciated.
Thanks,
--lm
Just as I sent my previous posting with its two references to
fuzz-test papers, I noted that the abstract of the second mentions two
earlier ones.
I've just tracked them down, and added them to various bibliographies.
Here are short references to them:
Fuzz Revisited: A Re-examination of the Reliability of UNIX
Utilities and Services
ftp://ftp.cs.wisc.edu/pub/techreports/1995/TR1268.pdf
An Empirical Study of the Robustness of MacOS Applications
Using Random Testing
http://dx.doi.org/10.1145/1228291.1228308
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
> Ken and Dennis and the other guys behind
> the earliest UNIX code were smart guys and good programmers,
> but they were far from perfect; and back in those days we
> were all a lot sloppier.
The observation that exploits may be able to parlay
mundane bugs into security holes was not a commonplace
back then--even in the Unix room. So input buffers were
often made "bigger than ever will be needed" and left
that way on the understanding that crashes are tolerable
on outlandish data. In an idle moment one day, Dennis fed
a huge line of input to most everything in /bin. To the
surprise of nobody, including Dennis, lots of programs
crashed. We WERE surprised a few years later, when a journal
published this fact as a research result. Does anybody
remember who published that deep new insight and/or where?
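(The idiom in question looks roughly like this - a reconstruction for
illustration, not code taken from any actual /bin program. Feed it a
line longer than 512 characters and it scribbles past the end of buf,
usually crashing:)

#include <stdio.h>

int
main(void)
{
        char buf[512];          /* "bigger than ever will be needed" */
        char *p = buf;
        int c;

        while ((c = getchar()) != EOF && c != '\n')
                *p++ = c;       /* no check against the end of buf */
        *p = '\0';
        printf("%s\n", buf);
        return 0;
}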
Doug
> From: norman(a)oclsc.org (Norman Wilson)
> SP&E published a paper by Don Knuth discussing all the many bugs found
> in TeX, including some statistical analysis.
> From: John Cowan <cowan(a)mercury.ccil.org>
> "The Errors of TeX" was an excellent article.
Thanks for the pointer; it sounds like a great paper, but alas the only
copies I could find online were behind paywalls.
> From: Clem Cole <clemc(a)ccc.com>
> btw. there is a v6 version of fsck floating around.
Yes, we had it at MIT.
> I wonder if I can find a readable copy.
As I've mentioned, I have this goal of putting the MIT Unix (the kernel was
basically PWB1, with a host of new applications) sources online.
I have recently discovered (in my basement!) two sets of full dump tapes
(1/2" magtape) of what I think are the whole filesystem, so if I can find a
way to get them read, we'll have the V6 fsck - and much more besides (such
as a TCP/IP for V6). So I think you may soon get your wish!
Noel
> From: "Ron Natalie" <ron(a)ronnatalie.com>
> The variable in question was a global static, 'ino' (the current inode
> number),
> Static is a much overloaded word in C, it's just a global variable.
Sorry; I was using 'static' in the general CS sense, not C-specific!
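(For the record, the two C-specific meanings Ron is alluding to -
neither of which is quite the general "statically allocated" sense I
meant; a quick sketch:)

#include <stdio.h>

static int total;       /* file scope: an ordinary global, just with
                           internal linkage (not visible to other files) */

static int
nexti(void)
{
        static int n;   /* function scope: one copy, whose value
                           persists across calls */
        return ++n;
}

int
main(void)
{
        total = nexti() + nexti();      /* n survives between calls */
        printf("%d\n", total);          /* prints 3 */
        return 0;
}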
> in the version 7 version of icheck .. they appear to have fixed it.
Actually, they seem to have got all three bugs I saw (including the one I
hadn't actually experienced yet, which would cause a segmentation violation).
> From: Tim Newsham <tim.newsham(a)gmail.com>
> There are bugs to be found .. Here are some more (security related, as
> thats my inclination):
> ...
> http://minnie.tuhs.org/pipermail/unix-jun72/2008-May/000126.html
Fascinating mailing list! Thanks for the pointer.
Noel
A. P. Garcia <a.phillip.garcia(a)gmail.com> wrote:
> Were the original Unix authors annoyed when they learned that
> some irascible young upstart named Richard Stallman was determined to make
> a free Unix clone?
A deeper, more profound question would be: how did these original Unix
authors feel about their employer owning the rights to their creation?
Did they feel any guilt at all for having had to sign over all rights
in exchange for their paychecks?
Did Dennis and/or Ken personally wish their creation were free to the
world, public domain, or were they personally in agreement with the
licensing policies of their employer? I argue that this question is
far more important than how they felt about RMS (if they cared at all).
Ronald Natalie <ron(a)ronnatalie.com> wrote:
> [RMS] If you read his earlier manifesto rants he hated UNIX with a
> passion. Holding out the TOPS operating systems as the be-all and
> end-all of user interface.
I wish more people would point out this aspect of RMS and GNU. While
I wholeheartedly agree with Richard on the general philosophy of free
software, i.e., the *ethics* part and the Four Freedoms, when it comes
to GNU as a specific OS, in technical terms, I've always disliked
everything about it. I love UNIX, and as Ron pointed out (as few
people do), GNU was fundamentally born out of hatred for the thing I
love.
SF
So it turns out the 'dcheck' distributed with V6 has two (well, three, but
the third one was only a potential problem for me) bugs in it.
The first was a fence-post error on a table clearing operation; it could
cause the entry for the last inode of the disk in the constructed table of
directory entry counts to start with a non-zero count when a second disk was
scanned. However, it was only triggered in very specific circumstances:
- A larger disk was listed before a smaller one (either in the command line,
or compiled in)
- The inode on the larger disk corresponding to the last inode on the smaller
one was in use
I can understand how they never ran across this one.
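(For the curious, the shape of that first bug is the classic
off-by-one in a clearing loop. A sketch with made-up sizes and
indexing - not the actual dcheck code:)

#include <stdio.h>

#define MAXI 8192                       /* made-up inode counts */

int ecount[MAXI + 1];                   /* per-inode counts, indexed 1..nfiles */

void
clearcounts(int nfiles)
{
        int i;

        for (i = 1; i < nfiles; i++)    /* fence-post: should be i <= nfiles */
                ecount[i] = 0;
}

int
main(void)
{
        clearcounts(MAXI);              /* larger disk scanned first...  */
        ecount[4096] = 3;               /* ...and leaves a count behind  */
        clearcounts(4096);              /* smaller disk: its last inode,
                                           4096, is never cleared        */
        printf("stale count for last inode: %d\n", ecount[4096]);
        return 0;
}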
The other one, however, which was an un-initialized variable, should have
bitten them anytime they had more than one disk listed! It caused the
constructed table of directory entry counts to be partially or wholly
(depending on the size of the two disks) blank in all disks after the first
one, causing numerous (bogus) error reports.
(It was also amusing to find an un-used procedure in the source; it looks
like dcheck was written starting with the code for 'icheck' - which explains
the second bug; since the logic in icheck is subtly different, that variable
_is_ set properly in icheck.)
How this bug never bit them I cannot understand - unless they saw it, and
couldn't be bothered to find and fix it!
To me, it's completely amazing to find such a serious bug in such a critical
piece of widely-distributed code! A lesson for archaeologists...
Anyway, a fixed version is here:
http://ana-3.lcs.mit.edu/~jnc/tech/unix/ucmd/dcheck.c
if anyone cares/needs it.
Noel
Larry McVoy scripsit:
> I love Rob Pike, he's spot on on a lot of stuff. I'm a big fan of
> "if you think you need threads then your processes are too fat".
Oh, he's a brilliant fellow. I don't know him personally, but I know
people who do, and I don't think I'd love him if I knew him. Humanity has
always found it useful to keep its (demi)gods at arm's length at least.
--
John Cowan http://www.ccil.org/~cowan cowan(a)ccil.org
Barry thirteen gules and argent on a canton azure fifty mullets of five
points of the second, six, five, six, five, six, five, six, five, and six.
--blazoning the U.S. flag
> From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
> the second (the un-initialized variable) should have happened every
> time.
OK, so I was wrong! The variable in question was a global static, 'ino' (the
current inode number), so the answer isn't something simple like 'it was an
auto that happened to be cleared for each disk'. But now that I look closely,
I think I see a way it might have worked.
'dcheck' is a two-pass per disk thing: it begins each disk by clearing its
'inode link count' table; then the first pass goes over all the inodes,
and for ones that are directories, increments counts for all the entries; the
second pass re-scans all the inodes, and makes sure that the link count in the
inode itself matches the computed count in the table.
'ino' was cleared before the _second_ pass, but not the _first_. So it was
zero for the first pass of the first disk, but non-zero for the first pass on
the second disk.
This looks like the kind of bug that should almost always be fatal, right?
That's what I thought at first... (and I tried the original version on one of
my machines to make sure it did fail). But...
The loop in each pass has two index variables, one of which is 'ino', which it
compares with the maximum inode number for that disk (per the super-block),
and bails if it reaches the max:
for(i=0; ino<nfiles; i =+ NIBLK)
If the first disk is _larger_ than the second, the first pass will never
execute at all for the second disk (producing errors).
However, if the _second_ is larger, then the second disk's first pass will in
fact examine the starting (nfiles_second - nfiles_first) inodes of the
second disk to see if they are directories (and if so, count their links).
So if the last nfiles_first inodes of the second disk are empty (which is
often the case with large drives - I had modified 'df' to count the free
inodes as well as disk blocks, and after doing so I noticed that Unix seems to
be quite generous in its default inode allocations), it will in fact work!
The fact that 'ino' is wrong all throughout the first pass of the second disk
(it counts up from nfiles_first to nfiles_second) turns out to be
harmless, because the first pass never uses the current inode number, it only
looks at the inode numbers in the directories.
Note that with two disks of _equal size_, it fails. Only if the second is
larger does it work! (And this generalizes out to N disks - as long as each
one is enough larger than the one before!) So for the config they were
running (rk2, dp0) it probably did in fact work!
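Here's a toy model of that control flow, to make the failure mode
concrete - this is not the real dcheck source, just the skeleton of
what I described above (and with modern '+=' in place of '=+'):

#include <stdio.h>

#define NIBLK 16                /* V6: 16 inodes per 512-byte block */

int ino;                        /* the global "current inode number" */

void
check_disk(int nfiles)
{
        int i;

        /* pass 1: scan this disk's inodes, counting directory entries;
           the loop bound tests 'ino', which was NOT reset for this disk */
        for (i = 0; ino < nfiles; i += NIBLK)
                ino += NIBLK;
        printf("  pass 1 examined %d of %d inodes\n", i, nfiles);

        ino = 0;                /* cleared here, before pass 2 only */
        while (ino < nfiles)    /* pass 2 (compare link counts, omitted) */
                ino += NIBLK;
}

int
main(void)
{
        printf("equal-sized disks:\n");
        ino = 0;
        check_disk(4096);
        check_disk(4096);       /* pass 1 never runs on the second disk */

        printf("second disk larger:\n");
        ino = 0;
        check_disk(4096);
        check_disk(8192);       /* pass 1 sees only the first 4096 inodes */
        return 0;
}

Run it and the second disk's first pass examines 0 inodes when the
disks are the same size, and only the excess when the second is
larger - exactly the behaviour described above.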
Noel
Noel Chiappa:
To me, it's completely amazing to find such a serious bug in such a critical
piece of widely-distributed code! A lesson for archaeologists...
======
To me it's not surprising at all.
On one hand, current examples of widely-distributed critical
code containing serious flaws are legion. What, after all,
were the Heartbleed and OS X goto fail; bugs? What is every
version of Internet Explorer?
On the other hand, Ken and Dennis and the other guys behind
the earliest UNIX code were smart guys and good programmers,
but they were far from perfect; and back in those days we
were all a lot sloppier.
So surprising? No. Interesting? Certainly. All bugs are
interesting.
(To me, anyway. Back in the 1980s, when I was at Bell Labs,
SP&E published a paper by Don Knuth discussing all the many
bugs found in TeX, including some statistical analysis. I
thought it fascinating and revealing and think reading it
made me a better programmer. Rob Pike thought it was terribly
boring and shouldn't have been published. Decidedly different
viewpoints.)
Norman Wilson
Toronto ON
> From: Ronald Natalie <ron(a)ronnatalie.com>
> If I understand what you are saying, it only occurs when you run dcheck
> with multiple volumes at one time?
Right, _both_ bugs have that characteristic. But the first one (the
fence-post) only happens in very particular circumstances; the second (the
un-initialized variable) should have happened every time.
> From: norman(a)oclsc.org (Norman Wilson)
> To me it's not surprising at all.
> On one hand, current examples of widely-distributed critical code
> containing serious flaws are legion.
What astonished me was not that there was a bug (which I can easily believe),
but that it was one that would have happened _every time they ran it_.
'dcheck' has this list of disks compiled into it. (Oh, BTW, my fixed version
now reads a file, /etc/disks; I am running a number of simulated machines,
and the compiled-in table was a pain.)
So I would have thought they must have at least tried that mode of operation
once? And running it that way just once should have shown the bug. Or did
they try it, see the bug, and 'dealt' with it by just never running it that
way?
Noel
> From: asbesto <asbesto(a)freaknet.org>
> We have about 40 disks, with RT-11 on them
Ah. You should definitely try Unix - a much more pleasant computing/etc
environment!
Although without a video editor... although I hope to have one available
'soon', from the MIT V6+ system (I think I have found some backup tapes from
it).
> This PDP-11/34 was used for a medical CAT equipment
Ah, so it probably has the floating point, then. If so, you should be able to
use the Shoppa V6 Unix disk as it is, then - that has a Unix on it which will
work on an 11/23 (which doesn't have the switch register that V6 normally
requires).
But if not, let me know, and I can provide a V6 Unix for it (I already have
the tweaked version running on a /23 in the simulator).
Noel
PS: For those who downloaded the 'fixed' ctime.c (if anyone :-), it turns out
there was a bug in my fix - in some cases, one variable wasn't initialized
properly. There's a fixed one up there now.
> From: asbesto <asbesto(a)freaknet.org>
> Just in these days we restored a PDP-11/23PLUS here at our Museum! :)
> ...
> CPU is working
That is good to hear! You all seem to have been very resourceful in making
the power supply for it!
> and we're trying to boot from a RL02 unit :)
Is your RL02 drive and RLV11 controller all working? Here are some
interesting pages:
http://www.retrocmp.com/pdp-11/pdp-1144/my-pdp-1144/rl02-disk-trouble
http://www.retrocmp.com/pdp-11/pdp-1144/my-pdp-1144/more-on-rl01rl02
from someone in Germany about getting their RL11 and RL02 to work.
Also, when you say "boot from an RL02", what are you trying to boot? Do you
have an RL02 pack with a working system on it? If so, what kind - a Unix
of some sort, or some DEC operating system?
> From: SPC <spedraja(a)gmail.com>
> I'll keep a reference of this message and try it as soon as possible...
Speaking of getting Unix to run on an 11/23 with an RL02... I just realized
that the hard part of getting a Unix running, for you, will not be getting V6
to run on a machine without a switch register (which is actually pretty easy
- I have worked out a way to do it that involves changing one line in
param.h, and adding two lines of code to main.c).
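(The idea, very roughly, is to make SW name an ordinary word in memory
instead of the console switch-register address, so nothing traps on a
machine with no SR. A user-mode illustration only - the names are made
up and this is not the actual patch:)

int swdata;                     /* hypothetical in-memory stand-in */
#define SW (&swdata)            /* was: the address of the hardware register */

int
main(void)
{
        *SW = 0173030;          /* reads and writes now touch swdata only */
        return *SW != 0173030;  /* exits 0 */
}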
The hard part is going to be getting the bits onto the disk! If all you have is an
RL02, you are going to have to load bits into the computer over a serial line.
WKT has done this for V7 Unix:
http://www.tuhs.org/Archive/PDP-11/Tools/Tapes/Vtserver/
but V7 really wants a machine with split I/D (which the /23 does not have). I
guess V7 'sort of' works on a machine without I/D, but I'm not a V7 expert,
so I can't say for sure.
It would not be hard to do something similar to the VTServer thing for V6,
though. If you would like to go this way, let me know, I would be very
interested in helping with this.
Also, do you only have one working RL02 drive, or more than one? If you only
have one, you will not be able to do backups (unless you have something else
connected to the machine, e.g. some sort of tape drive, or something).
Noel
> From: SPC <spedraja(a)gmail.com>
> I'll keep a reference of this message and try it as soon as possible...
No rush! Take your time...
> the disruptive factor (in terms of time) here is getting both the
> PDP-11/23-PLUS and the RL02 up to date.
My apologies, I just now noticed that you have an 11/23-PLUS (it is slightly
different from a plain 11/23).
I am not very familiar with the 11/23-PLUS (I never worked with one), but from
documentation I just dug out, it seems that they normally come with the MMU
chip, so we don't need to worry about that. However, the FPP is not standard,
so that is still an issue for bringing up Unix.
In fact, there are two different FPP options for the 11/23-PLUS (and,
actually, for the 11/23 as well): one is the KEF-11AA chip which goes on the
CPU card (on the 11/23-PLUS, in the middle large DIP holder), and the other is
something called the FPF-11 card, which is basically hardware floating point
(the KEF-11A is just microcode), for people who are doing serious number
crunching. It's a quad-size card which has a cable with a DIP header on the
end which plugs into the same DIP holder on the CPU card as the KEF-11A. They
look the same to software; one is just faster than the other.
Anyway, if you don't have either one, we'll have to produce a new Unix
load for you (not a big problem, if it is needed).
Noel
Does anyone know if the source for an early PDP-11 version of MERT is
available anywhere?
(For those who aren't familiar with MERT, it was a micro-kernel [as we would
name it now] which provided message-passing and [potentially shared] memory
segments, intended for real-time applications; it supported several levels of
protection, using the 'Supervisor' mode available in the 11/45 and 11/70. One
set of supervisor processes provided a Unix environment; the combination was
called UNIX/RT - hence my asking about it here.)
Thanks!
Noel
>> I got one PDP-11/23-PLUS without any kind of disk (by now, I got one
>> RL12 board plus one RL02 drive pending cleaning and arrangement)...
>> I guess it could be possible to run V6 on this machine. Is there any
>> kind of adaptation of this Unix version (or whatever) to run under it?
> IIRC the README page for that set of disk images indicates that in fact
> they originally came off an 11/23, so they should run fine on yours.
So I was idly looking through main.c for the Shoppa Unix (because it printed
some unusual messages when it started, and I wanted to see that code), and I
noticed it had some fancy code for dealing with the clock, and that tickled a
very dim memory that LSI-11's had some unusual clock thing. So I decided I
had better check up on that...
I got out an LSI-11 manual, and it looked like the 23 should work, even for
the 'vanilla' V6 from the Bell distro. But I decided I had better check it to
be sure, so I fired up the simulator, mounted a Bell disk, set the cpu type
to '23', and booted 'rkunix'. Which promptly halted!
After a bit of digging, it turned out that the problem is that the 11/23
doesn't have a switch register! It hit a kernel NXM trying to touch it -
and then another trying to read it in the putchar() routine trying to do a
panic(), at which point it died a horrible death.
So I added a SR (you can create all sorts of bizarre hybrids like that with
Ersatz-11, like 11/40's with 11/45 type floating point :-), and then it
booted fine. The clock even worked!
So you will have to use the Shoppa disk to boot (but see below), or we'll
have to spin you a special vanilla V6 Unix that doesn't try to touch the SR -
that shouldn't be much work, I only found two places in the code that touch it.
I did try the Shoppa 'unix', and it booted fine on an 11/23.
Two things to check for, though: first, your 11/23 _has_ to have the MMU chip
(that's the large DIP package with one chip on it nearest the edge of the
card), so if yours looks like this:
http://www.psych.usyd.edu.au/pdp-11/Images/23.jpeg
you're OK. Without the MMU chip, most variants of Unix will not run on the 23
(although there's something called MiniUnix, IIRC, which runs on an LSI-11,
which would probably run on a /23 without an MMU).
Here's the part that might be a problem: To run any of the Unixes on the
Shoppa disk, you also have to have the FPP chip too (that's the second large
DIP package with two chips on it - the image above does not include that
chip, so if yours looks like that, you have a minor problem, and I will have
to build you a Unix or something).
All of the Unixes on the Shoppa disk have to have the FPP, except one - and
that one wants an RX floppy as the root/swap device! The others will all
crash (I tried one, to make sure) if you try and boot them on an 11/23
without the FPP.
I could try patching the binary of the one that doesn't expect to use the FPP
so that it uses the RL as the root; or i) build you a vanilla V6 for a 23
(above), or ii) figure out how to build systems on the Shoppa disk, and build
you a Unix there which a) uses the RL as the root/swap, and b) does not
expect to have the FPP.
But let's first find out exactly what you have...
Noel
> From: SPC <spedraja(a)gmail.com>
> I got one PDP-11/23-PLUS without any kind of disk (by now, I got one
> RL12 board plus one RL02 drive pending cleaning and arrangement)...
> I guess it could be possible to run V6 on this machine. Is there any
> kind of adaptation of this Unix version (or whatever) to run under it?
As I mentioned in a previous message on this thread, when I took that root
pack image from the Shoppa group, I could get it to boot to Unix right off.
All it needs is a single RL02 drive (RL/0) (and the console terminal, of
course).
I looked at the 'unix' on it, and it's for an 11/40 type machine (which
includes 11/23's); IIRC the README page for that set of disk images indicates
that in fact they originally came off an 11/23, so they should run fine on
yours.
That Unix has a couple of other devices built into it (looks like an RX and
some sort of A-D), but as long as you don't try and touch them, they will not
be an issue.
Let me know if you need any help getting it up (once you have a working RL02).
Noel
> From: John Cowan <cowan(a)mercury.ccil.org>
> Well, provided the compiler is honest, contra [Ken].
A thought on this:
The C compiler actually produces assembler, which can be (fairly easily)
visually audited; yes, yes, I know about disassembly, but trust me, having
done some recently (the RL bootstrap(s)), disassembled code is a lot harder
to grok!
So, really, to find the Thompson hack, we'd have to audit the binaries of the
assembler!
For real grins, we could write a program to convert .s format assembler to
.mac syntax, run the results through Macro-11, and link it with the other
linker... :-)
Also, I found out what's going on here:
> What was weird was that in the new one, the routine csv is one word
> shorter (and so is csv.o). So now I don't understand what made them the
> same sizes!? The new ones should have been one word shorter!? Still
> poking into this...
The C compiler is linked with the -n flag, which produces pure code. What
the linker documentation doesn't say (and I never realized this 'back in the
day') is that when this option is used, it rounds up the size of the text
segment to the nearest click (0100).
So, in c2 (which is what I was looking at), the last instruction is at
015446, _etext is at 015450, but if you look at the executable header, it
lists a text size of 015500 - i.e. 030 more bytes. And indeed there are 014
words of '0' in the executable file before the data starts.
And if you link c2 _without_ the -n flag, it shows 015450 in the header as
the text size.
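A quick check of the arithmetic (a throwaway program of mine, octal
throughout):

#include <stdio.h>

int
roundclick(int size)
{
        return (size + 077) & ~077;     /* round up to the next 0100-byte click */
}

int
main(void)
{
        int etext = 015450;             /* _etext in c2, as above */

        printf("text size %o -> %o in the header\n", etext, roundclick(etext));
        printf("padding: %o bytes = %o words of 0\n",
            roundclick(etext) - etext, (roundclick(etext) - etext) / 2);
        return 0;
}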
So that's why the two versions of all the C compiler phases were the same
size (as files); it rounded up to the same place in both, hiding the one-word
difference in text size.
Noel