I've assembled some notes from old manuals and other sources
on the formats used for on-disk file systems through the
Seventh Edition:
http://www.cita.utoronto.ca/~norman/old-unix/old-fs.html
Additional notes, comments on style, and whatnot are welcome.
(It may be sensible to send anything in the last two categories
directly to me, rather than to the whole list.)
All,
Is there a good source of information about the Unix v6 filesystem
outside of the source code itself? Also, is there a source for the
history of the early Unix filesystems from v6 onward?
Thanks,
Will
At www.skeeve.com/Usenet.tar.bz2 is a copy of UUNET's archives of the
various USENET source newsgroups. I created this file on September 2 2004.
I made it for myself, since it was clear that uu.net would disappear
sometime soon...
It's 139 Meg - Warren maybe you can put it into the archives and everyone
else can get it from there? I think the person who hosts www.skeeve.com
has some monthly limits on data transfer and I don't want him blown
out of the water.
Thanks,
Arnold
> I'm still trying to get my head around how a program such as "egrep",
> which handles complex patterns, can be faster than one that doesn't.
> Is there a simple explanation, involving small words?
First, the big-word explanation. Grep is implemented as a nondeterministic
finite automaton, egrep as a deterministic finite automaton. Theory folk
abbreviate these terms to "NFA" and "DFA", but these are probably not
what you had in mind as "small words".
The way grep works is to keep a record of possible partial parsings
as it scans the subject text; it doesn't know which (if any) will
ultimately be the right one, hence "nondeterministic". For example,
suppose grep seeks pattern '^a.*bbbc' in text 'a.*bbbbbbc'. When
it has read past 3 b's, the possible partial parses are 'a.*', 'a.*b',
'a.*bb' and 'a.*bbb'. If it then reads another b, it splits the
first partial parse into 'a.*' and 'a.*b', updates the next two
to 'a.*bb' and 'a.*bbb', and kills off the fourth. If instead it
reads a c, recognition is successful; if anything else, all partials
are killed and recognition fails.
Egrep, by preprocessing the expression, produces separate code for
each of several possible states: "no b's", "1 b", "2 b's" and "3 b's".
When it's in state "1 b", for example, it switches on the next
character into "2 b's" or fails, depending on whether the
character is b or not--very fast. Grep, on the other hand, has to
update all the live parses.
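To make that concrete, here is a small sketch of both approaches, hard-coded
for the '^a.*bbbc' example above. It is a toy illustration with made-up names,
not anything from the grep or egrep sources:

/*
 * Toy sketch of the two approaches for the single pattern ^a.*bbbc.
 * NFA "progress points": 0 = start, 1 = "a.*", 2 = "a.*b",
 * 3 = "a.*bb", 4 = "a.*bbb", 5 = match.
 */
#include <stdio.h>
#include <string.h>

#define NSTATES 6

/* grep-style: carry the set of live progress points through the text */
static int nfa_match(const char *text)
{
	char live[NSTATES] = {1, 0, 0, 0, 0, 0};	/* only the start point */

	for (; *text; text++) {
		char next[NSTATES] = {0};
		int c = *text;

		if (live[0] && c == 'a') next[1] = 1;	/* 'a' consumed */
		if (live[1])             next[1] = 1;	/* '.*' absorbs anything */
		if (live[1] && c == 'b') next[2] = 1;	/* ...or a b-run begins */
		if (live[2] && c == 'b') next[3] = 1;
		if (live[3] && c == 'b') next[4] = 1;
		if (live[4] && c == 'c') next[5] = 1;
		memcpy(live, next, sizeof live);
		if (live[5])
			return 1;			/* reached "a.*bbbc" */
	}
	return 0;
}

/* egrep-style: separate code for each precomputed state ("no b's", "1 b", ...) */
static int dfa_match(const char *text)
{
	enum { START, NOB, B1, B2, B3, DEAD } s = START;

	for (; *text; text++) {
		int c = *text;
		switch (s) {
		case START: s = (c == 'a') ? NOB : DEAD; break;
		case NOB:   s = (c == 'b') ? B1  : NOB;  break;
		case B1:    s = (c == 'b') ? B2  : NOB;  break;
		case B2:    s = (c == 'b') ? B3  : NOB;  break;
		case B3:    if (c == 'c') return 1;
			    s = (c == 'b') ? B3  : NOB;  break;
		case DEAD:  return 0;
		}
	}
	return 0;
}

int main(void)
{
	const char *t = "a.*bbbbbbc";		/* the text from the example */

	printf("nfa: %d  dfa: %d\n", nfa_match(t), dfa_match(t));
	return 0;
}

Both functions report a match on the sample text; the difference is that
nfa_match drags the whole set of live progress points through every
character, while dfa_match merely switches on the current precomputed state.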
So if egrep is so fast, why do we have grep? One reason is that
grep only needs to keep a list of possible progress points
in the expression. This list can't be longer than the expression.
In egrep, though, each state is a subset of progress points.
The number of possible subsets is exponential in the length
of the expression, so the recognition machine that egrep constructs
before attempting the parse can explode--perhaps overflowing
memory, or perhaps taking so long in construction that grep
can finish its slow parse before egrep even begins its fast parse.
To revert to the words of theory folks, grep takes time O(m*n) and
space O(m) worst case, while egrep takes time and space O(2^m+n),
where m is the length of the pattern and n the length of the text.
(2^m is an overestimate, but you get the idea.)
That's the short story. In real life egrep overcomes the exponential
by lazily constructing the machine--not generating a state until it
is encountered in the parse, so no more than n states get constructed.
It's a complex program, though, for the already fancy preprocessing
must be interleaved with the parsing.
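In the same toy setting, the lazy trick looks roughly like this: a DFA state
is a bitmask of the NFA progress points above, and the transition out of a
state on a given character is computed from the NFA and cached only the first
time it is needed, so no more states are built than the input forces. Again a
sketch, not egrep's actual code:

/* lazy egrep-style sketch for ^a.*bbbc, reusing the six progress points
 * above as bits 0..5 of a state mask */
#include <stdio.h>

#define NMASK 64			/* 2^6 possible subsets */

static int cache[NMASK][256];		/* 0 = not built yet, else next+1 */

/* one NFA step: which progress points are live after reading c? */
static int nfa_step(int set, int c)
{
	int next = 0;

	if ((set & 1<<0) && c == 'a') next |= 1<<1;
	if ((set & 1<<1))             next |= 1<<1;	/* '.*' */
	if ((set & 1<<1) && c == 'b') next |= 1<<2;
	if ((set & 1<<2) && c == 'b') next |= 1<<3;
	if ((set & 1<<3) && c == 'b') next |= 1<<4;
	if ((set & 1<<4) && c == 'c') next |= 1<<5;	/* bit 5 = match */
	return next;
}

static int lazy_match(const char *text)
{
	int s = 1;					/* start set = {0} */

	for (; *text; text++) {
		int c = (unsigned char)*text;

		if (cache[s][c] == 0)			/* state never seen: */
			cache[s][c] = nfa_step(s, c) + 1;	/* build it once */
		s = cache[s][c] - 1;
		if (s & 1<<5) return 1;			/* match */
		if (s == 0)   return 0;			/* no live points left */
	}
	return 0;
}

int main(void)
{
	printf("%d\n", lazy_match("a.*bbbbbbc"));	/* prints 1 */
	return 0;
}

No more new cache entries can be built than characters read, which is
exactly the point of the lazy construction.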
Egrep is a tour de force of classical computer science, and pays
off big on scanning big files. Still, there's a lot to say for
the simple elegance of grep (and the theoretical simplicity
of nondeterministic automata). On small jobs it can often win.
And it is guaranteed to work in O(m) memory, while egrep may need
O(n).
-------------------------------------------------
Ken Thompson invented the grep algorithm and published it in CACM.
A pointy-headed reviewer for Computing Reviews scoffed: why would
anybody want to use the method when a DFA could do the recognition
in time O(n)? Of course the reviewer overlooked the potentially
exponential costs of constructing a DFA.
Some years later Al Aho wrote the more complicated egrep in the
expectation that bad exponential cases wouldn't happen in everyday life.
But one did. This job took 30 seconds' preprocessing to prepare
for a fraction of a second's parsing. Chagrined, Al conceived
the lazy-evaluation trick to overcome the exponential and
achieved O(n) run time, albeit with a big linear constant.
In regard to the "short history of grep", I have always thought
my request that Ken liberate regular expressions from ed caused
him to write grep. I asked him one afternoon, but I can't remember
whether I asked in person or by email. Anyway, next morning I
got an email message directing me to grep. If Ken took it
from his hip pocket, I was unaware. I'll have to ask him.
Doug
> Norman Wilson is going to try and get us some higher quality scans which
> will help a great deal in deciphering some of the hard to read characters.
A second scan, high or low quality, is a tremendous help. Diffing them
is a really good way to spot trouble.
Doug
On Thu, Feb 25, 2016 at 01:43:03PM -0500, Robert Swierczek wrote:
> Do you know if anybody has taken up the challenge of transcribing and
> simulating the PDP-7 Unix source code you have uncovered in your
> post http://minnie.tuhs.org/pipermail/tuhs/2016-February/006622.html
> If not, I would love to get started on it as a project.
Hi Robert, yes there is a project underway to type it all in and bring it
up on SimH and hopefully on a real PDP-7. I've set up a mailing list for
the project, so let me know if you would like to join: I'll add you.
The repository is at https://github.com/DoctorWkt/pdp7-unix. I've started
on the S1 section (in scans/), and I've also started work on an assembler
and a user-mode simulator (in tools/).
Norman Wilson is going to try and get us some higher quality scans which
will help a great deal in deciphering some of the hard to read characters.
Cheers, Warren
> From: Random832
> They're 24 bits, aren't they?
Not according to the source:
typedef long daddr_t;	/* disk block address: a 32-bit long */
daddr_t s_fsize; /* size in blocks of entire volume */
short s_nfree; /* number of addresses in s_free */
daddr_t s_free[NICFREE];/* free block list */
(from param.h and filsys.h respectively).
> From: Ron Natalie
> The V6 block numbers were 24 bits.
Maybe you're thinking of the byte number within the file? The file length
was stored in a word plus a byte in the inode in V6:
char i_size0;	/* high-order byte of the 24-bit file size */
char *i_size1;	/* low-order 16 bits of the file size */
but the block number in the device was a word:
int s_fsize; /* size in blocks of entire volume */
int s_nfree; /* number of in core free blocks (0-100) */
int s_free[100]; /* in core free blocks */
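(For what it's worth, the two V6 size fields above combine into the 24-bit
length roughly like this; a sketch with an invented function name, not the
kernel's exact expression:)

long
v6_filesize(char size0, unsigned size1)
{
	/* i_size0 is the high-order byte, i_size1 the low-order 16 bits,
	 * so the largest representable file is 2^24 - 1 bytes */
	return ((long)(size0 & 0377) << 16) | (size1 & 0177777);
}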
"Use the source, Luke!"
Noel
> From: Will Senn
> Is there a good source of information about the Unix v6 filesystem
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/man/man5/fs.5
> Also, is there a source for the history of the early Unix filesystems
> from v6 onward?
I don't know of one (although there is that article on the 4.2 filesystem),
but would love to hear of one.
I gather that V7 is basically V6 except the block numbers are 32 bits, not 16.
Noel
On Sun, Feb 21, 2016 at 09:21:14PM -0600, Will Senn wrote:
> Thanks for the link. The tools look useful. But they appear to be extract-from-tape rather than create-tape utils? I am away from a computer but will try them out later to make sure.
No, my bad. I thought they would make tapes, but I read the Readme files
and it doesn't look like they do. You could modify the mkfs.c tool that I wrote at
https://github.com/DoctorWkt/unix-jun72/blob/master/tools/mkfs.c
to write V6 filesystems. It shouldn't be too hard.
Cheers, Warren
Sometime back before the turn of the century, I remember
writing up a summary of the evolution of the UNIX file
system, starting with the earliest system I could find
information for (possibly the PDP-7 system) and running
through the printed manuals as things changed, up to
the Seventh Edition.
I think I've found it; I'll look it over and try to put
it somewhere on the web in the next day or two.
Norman Wilson
Toronto ON