[TUHS] The evolution of Unix facilities and architecture
ron at ronnatalie.com
Fri May 12 06:37:29 AEST 2017
I remember the pre-fsck days. Being able to run the various manual checks was part of my test to become an operator at the UNIX site at JHU.
The V6 file system wasn’t exactly stable across crashes (lousy database behavior), so there was almost always something to clean up.
The first thing we’d run was icheck. It runs down the superblock free list and all the blocks allocated to the inodes. If there were missing blocks (in neither a file nor the free list), you could use icheck -s
to rebuild the free list. The same went for duplicate allocations, either within the free list or between the free list and a single file. Anything more complicated required some clever patching (typically we’d just mount read-only, copy the files off, and then blow them away with clri).
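The accounting icheck does can be sketched with a toy model. Everything below is illustrative (the dict-based "file system", the block numbering, the function names), not the real V6 on-disk structures or code:

```python
# Toy model of icheck's accounting; layouts here are illustrative,
# not the real V6 on-disk structures.
def icheck(inode_blocks, free_list, nblocks):
    """Return (duplicate blocks, missing blocks) for a toy file system
    whose data blocks are numbered 2 .. nblocks-1."""
    used, dups = set(), set()
    for blocks in inode_blocks.values():
        for b in blocks:
            if b in used:
                dups.add(b)          # same block claimed by two files
            used.add(b)
    free = set()
    for b in free_list:
        if b in used or b in free:
            dups.add(b)              # free block also allocated, or listed twice
        free.add(b)
    missing = set(range(2, nblocks)) - used - free
    return dups, missing

def icheck_s(inode_blocks, nblocks):
    """The -s behavior: discard the old free list and rebuild it as
    'every data block no inode owns'."""
    used = {b for blocks in inode_blocks.values() for b in blocks}
    return sorted(set(range(2, nblocks)) - used)

# Inode 2 duplicates block 3; block 7 is on the free list twice;
# blocks 4 and 9 are in neither a file nor the free list.
inode_blocks = {1: [2, 3], 2: [3, 5]}
free_list = [6, 7, 7, 8]
```

Note that rebuilding the free list fixes missing blocks and free-list duplicates, but a block shared between two files still needs manual intervention, which is why the copy-and-clri dance was the usual answer.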
Then you’d run dcheck. As mentioned, dcheck walks the directory tree from the root of the disk, counting inode references and reconciling them with the link count stored in each inode. Occasionally we’d end up with a 0-0 inode (no directory entries, but allocated; typically this is caused by people removing a file while it is still open, a regular practice of some programs for their /tmp files). clri again blew these away.
clri wrote zeros all over the inode. This had the effect of wiping out the file, but it was dangerous if you got the i-number wrong. We replaced it with “clrm”, which just cleared the allocated bit, making it a lot easier to reverse.
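The difference between the two is easy to show on a toy inode record (the field names are illustrative, not the on-disk V6 inode layout):

```python
# clri vs. the local clrm replacement, on a toy inode record.
def clri(inode):
    """clri: zero every field -- if the i-number was wrong, the file is gone."""
    for field in inode:
        inode[field] = 0

def clrm(inode):
    """clrm: only clear the allocation flag; everything else survives,
    so a mistake is undone by setting the flag back."""
    inode["allocated"] = 0

victim = {"allocated": 1, "links": 1, "size": 512}
mistake = {"allocated": 1, "links": 1, "size": 512}
clri(victim)    # victim is now all zeros
clrm(mistake)   # mistake keeps its link count and size
```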
If you really had a mess of a file system, you might get a piece of the directory tree broken off from any path to the root, or an inode for which icheck reported duplicate blocks. ncheck would try to reconcile an i-number into an absolute path.
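ncheck’s job, reduced to a toy version: walk the directory tree from the root and report every path whose entry references the i-number you asked about (the root inode number and the dict layout here are made up for the example):

```python
# Toy version of ncheck: turn an i-number back into path names.
def ncheck(dirs, target, ino=1, path=""):
    """Return every path whose directory entry references `target`."""
    hits = []
    for name, child in dirs.get(ino, {}).items():
        p = path + "/" + name
        if child == target:
            hits.append(p)
        hits += ncheck(dirs, target, child, p)   # recurse into subdirectories
    return hits

# dir inode -> {entry name: child i-number}; inode 5 lives at /usr/motd.
dirs = {1: {"usr": 2, "unix": 4}, 2: {"motd": 5}}
```

An inode disconnected from the root would simply produce no path at all, which is exactly the case where you needed something beyond ncheck.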
After a while a program called fsdb came along that let you poke at the various file system structures directly. We didn’t use it much, because by the time we had it, fsck was hot on its heels.