I was wondering, what was the /crp mount point in early UNIX used for?
And what does "crp" mean? Does it mean what I think it does?
It is only mentioned in V3 it seems:
./v4man/manx/unspk.8:unspk lives in /crp/vs (v4/manx means pre-v4)
./v3man/man6/yacc.6:SYNOPSIS /crp/scj/yacc [ <grammar ]
./v3man/man4/rk.4:/dev/rk3 /crp file system
I suppose scj, doug or ken can help out.
aap
Peter Adams, who photographed many Unix folks for his
"Faces of open source" series (http://facesofopensource.com/)
found trinkets from the Unix lab in the Bell Labs archives:
http://www.peteradamsphoto.com/unix-folklore/.
One item is more than a trinket. Belle, built by
Ken Thompson and Joe Condon, won the world computer
chess championship in 1980 and became the first
machine to gain a chess master rating. Physically,
it's about a two-foot cube.
Doug
Spurred by the recent discussion of NIS, NIS+, LDAP et al, I'm curious what
the landscape was like for distributing administrative information in early
Unix networks.
Specifically I'm thinking about things like the Newcastle Connection, etc.
I imagine that PDP-11's connected to the ARPAnet running Unix (e.g.,
RFC 681 style) would have adapted the HOSTS.TXT format somehow. What about
CHAOS? Newcastle? Datakit?
What was the introduction of DNS into the mix like? I can imagine that that
changed all sorts of assumptions about failure modes and the like.
NIS and playing around with Hesiod are probably the earliest such things I
ever saw, but I know there must have been prior art.
Supposedly field 5 from /etc/passwd is the GECOS username for remote job
entry (or printing)? How did that work?
- Dan C.
> I have a vague intuition right now that the hyphenation decisions
> ...
> should be accessible without having to invoke the output driver.
Wouldn't that require some way to detect a hyphenation event?
Offhand, I can't think of a way to do that.
But if you know in advance what word's hyphenation is in
question, you could switch environments, use the .ll 1u
trick in a diversion, and base your decision on the result.
Doug
UNIX was half a billion (500000000) seconds old on Tue Nov 5 00:53:20
1985 GMT (measuring since the time(2) epoch).
-- Andy Tannenbaum
Hmmm... According to my rough calculations, it hit a billion (US) seconds
in September 2001.
-- Dave
Does anyone have any experience with YP / NIS / NIS+ / LDAP as a central
directory on Unix?
I'm contemplating playing with them for historical reasons.
As such, I'm wondering what the current evolution is for a pure Unix
environment. Read: No Active Directory. Is there a current central
directory service for Unix (or Linux)? If so, what is it?
I'm guessing it's LDAP combined with Kerberos, but I'm not sure.
--
Grant. . . .
unix || die
Interesting. /crp was a regular part of the Research world
in the mid-1980s when I joined up. It was nothing special,
just an extra file system for extra junk, which might or might
not be backed up depending on the system.
I had no idea its roots ran so far back in time.
I always thought it was an abbreviation for `crap,' though,
oddly, the conventional pronunciation seemed to be creep.
Norman Wilson
Toronto ON
A. P. Garcia:
I'd be interested in knowing where a pure unix environment
exists, beyond my imagination and dreams that is.
====
For starters, the computing facility used for teaching
in the Department of Computer Science at the University
of Toronto. Linux workstations throughout our labs; Linux
file servers and other back-ends, except OpenBSD for the
Kerberos KDCs and firewalls.
And yes, we use Kerberos, including Kerberized NFS for
(almost) all exports to lab workstations, which cannot
be made wholly secure against physical breakins by students.
(There's no practical way to prevent that entirely.)
Except we also use traditional UNIX /etc/shadow files
and non-Kerberized NFS for systems that are physically
secure, including the host to which people can ssh from
outside. If you don't type a password when you log in,
you cannot get a Kerberos TGT, so you wouldn't have access
to your home directory were it Kerberized there; and we
aren't willing to (and probably couldn't) forbid use of
.ssh/authorized_keys for users who know how to do that.
Because we need to maintain the password in two places,
and because we create logins automatically in bulk from
course-registration data, we've had to write some of our
own tools. PAM and the ssh GSSAPI support suffice for
logging in, but not for password changes or account
creation and removal.
Someday we will have time to look at LDAP. Meanwhile we
distribute /etc/passwd and /etc/shadow files (the latter
mostly blanked out to most systems) via our configuration-
management system, which we need to have to manage many
other files anyway.
Norman Wilson
Toronto ON
I was just reading this book review:
http://www.pathsensitive.com/2018/10/book-review-philosophy-of-software.html
and came across these paragraphs:
<book quote>
The mechanism for file IO provided by the Unix operating system
and its descendants, such as Linux, is a beautiful example of a
deep interface. There are only five basic system calls for I/O,
with simple signatures:
int open(const char* path, int flags, mode_t permissions);
ssize_t read(int fd, void* buffer, size_t count);
ssize_t write(int fd, const void* buffer, size_t count);
off_t lseek(int fd, off_t offset, int referencePosition);
int close(int fd);
</book quote>
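For concreteness, the quoted interface in action: a minimal file copy using
only those calls (a sketch with error handling pared down; the function name
is mine, not from the book):

```c
#include <fcntl.h>
#include <unistd.h>

/* Copy src to dst using only open/read/write/close from the
   five-call interface quoted above. */
int copy(const char *src, const char *dst)
{
    char buf[4096];
    ssize_t n;
    int in = open(src, O_RDONLY, 0);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (in < 0 || out < 0)
        return -1;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}
```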
The POSIX file API is a great example, but not of a deep
interface. Rather, it’s a great example of how code with a very
complicated interface may look deceptively simple when reduced to C-style
function signatures. It’s a stateful API with interesting orderings
and interactions between calls. The flags and permissions parameters
of open hide an enormous amount of complexity, with hidden requirements
like “exactly one of these five bits should be specified.” open may
return 20 different error codes, each with their own meaning, and many
with references to specific implementations.
The authors of SibylFS tried to write down an exact description of the
open interface. Their annotated version[1] of the POSIX standard is over
3000 words. Not counting basic machinery, it took them over 200 lines[2]
to write down the properties of open in higher-order logic, and another
70 to give the interactions between open and close.
[1]: https://github.com/sibylfs/sibylfs_src/blob/8a7f53ba58654249b0ec0725ce38878…
[2]: https://github.com/sibylfs/sibylfs_src/blob/8a7f53ba58654249b0ec0725ce38878…
I just thought it was a thought-provoking comment on the apparent elegance
of the Unix file API that actually has some subtle complexity.
Cheers, Warren
> From: Lars Brinkhoff
> Let's hope it's OK!
Indeed! It will be fun to see that code.
> I suppose I'll have to add a simulation of the Unibus CH11 Chaosnet
> interface to SIMH.
Why? Once 10M Ethernet hardware was available, people switched pretty rapidly
to using that, instead of the CHAOS hardware. (It was available off the shelf,
and the analog hardware was better designed.) That's part of the reason ARP is
multi-protocol.
Some hard-to-run cables (e.g. under the street from Tech Sq to main campus)
stayed CHAOS hardware because it was easier to just keep using what was there,
but most new machines got Ethernet cards.
Noel
> From: Chris Hanson
> you should virtually never use read(2), only ever something like this:
> ...
> And do this for every classic system call, since virtually no client
> code should ever have to care about EINTR.
"Virtually". Maybe there are places that want to know if their read call was
interrupted; if you don't make this version available to them, how can they
tell? Leaving the user as much choice as possible is the only way to go, IMO;
why force them to do it the way _you_ think is best?
And it makes the OS simpler; any time you can move functionality out of the
OS, to the user, that's a Good Thing, IMO. There's nothing stopping people
from using the EINTR-hiding wrapper. (Does the Standard I/O library do this,
does anyone know?)
Noel
PS: Only system calls that can block can return EINTR; there are quite a few
that don't, not sure what the counts are in modern Unix.
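The EINTR-hiding wrapper under discussion is conventionally just a retry
loop; a minimal sketch (the name read_retry is mine, not from any message
above):

```c
#include <errno.h>
#include <unistd.h>

/* Retry read() when a signal interrupts it, as the wrapper style
   quoted above suggests.  Any other error is passed through. */
ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n < 0 && errno == EINTR);
    return n;
}
```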
On Sun, 4 Nov 2018, Chris Hanson wrote:
> Every piece of code that wants to call, say, read(2) needs to handle
> not only real errors but also needs to special-case EINTR and retry
> the read. Thus you should virtually never use read(2), only ever
> something like this:
> ...
> And do this for every classic system call, since virtually no client
> code should ever have to care about EINTR. It was early an
> implementation expediency that became API and that everyone now has
> to just deal with because you can’t expect the system call interface
> you use to do this for you.
>
> This is the sort of wart that should’ve been fixed by System V and/or BSD 4 at latest.
But it *was* fixed in BSD, and it's in POSIX as the SA_RESTART flag to
sigaction (which gives you BSD signal semantics).
POSIX supports both the original V7 and BSD signal semantics, because
by then there were programs which expected system calls to be
interrupted by signals (and to be fair, there are times when that's
the more convenient way of handling an interrupt, as opposed to using
setjmp/longjmp to break out of a restartable system call).
- Ted
P.S. The original implementation of ERESTARTSYS / ERESTARTNOHAND /
ERESTARTNOINTR errno handling in Linux's system call return path was
my fault. :-)
The last couple of days I worked on re-setting the V3-V6 manuals.
I reconstructed V5 from the scan as best I could, unfortunately some
pages were missing.
You can find everything I used to do this here,
please read the BUGS section:
https://github.com/aap/unixman
The results can be found here, as HTML and PDF:
http://squoze.net/UNIX/v3man/
http://squoze.net/UNIX/v4man/
http://squoze.net/UNIX/v5man/
http://squoze.net/UNIX/v6man/
Reconstructing V1 and V2 n?roff source and converting the tty 37 output
to ps is something I want to do too, but for now this was exhausting
enough.
Now for the questions that arose while I was doing this:
Are there scans of the V4 and V6 manual to check my pdfs against?
Where does the V5 manual come from? As explained in the README,
some pages are missing and some pages seem to be earlier than V4.
Is there another V5 manual that one could check against?
Why is lc (the LIL compiler) not in the TOC but has a page?
And most importantly: is the old troff really lost?
I would love to set the manual on the original systems
at some point (and write a CAT -> ps converter, which should be fun).
Doing all this work made me wish we still had earlier versions
of UNIX and its tools around.
Have fun with this!
aap
> From: Clem Cole
> (probably because Larry Allen implemented both UNIX Chaos and Aegis IIRC).
Maybe there are two Larry Allens - the one who did networking stuff at
MIT-LCS was Larry W. Allen, and I'm pretty sure he didn't do Unix CHAOS code
(he was part of our group at LCS, and we only did TCP/IP stuff; someone over
in EE had a Unix with CHAOS code at the time, so it pre-dated his time with
us).
Noel
Hello,
Which revisions of the "C Reference Manuals" are known to be out there?
I found this:
https://www.bell-labs.com/usr/dmr/www/cman.pdf
Which seems to match the one from V6:
https://github.com/dspinellis/unix-history-repo/tree/Research-V6-Snapshot-D…
"C is also available on the HIS 6070 computer at Murray Hill and on
the IBM System/370 at Holmdel [3]."
But then there's this:
https://www.princeton.edu/ssp/joseph-henry-project/unix-and-c/bell_labs_136…
"C is also available on the HIS 6070 computer at Murray Hill, using a
compiler written by A. Snyder and currently maintained by S. C. Johnson.
A compiler for the IBM System/360/370 series is under construction."
Due to the description of the IBM compiler, it seems to predate the V6
revision.
Both above revisions use the =+ etc operators.
Finally, this version edited by Snyder:
https://github.com/PDP-10/its/blob/master/doc/c/c.refman
"In addition to the UNIX C compiler, there exist C compilers for the HIS
6000 and the IBM System/370 [2]."
This version documents both += and =+ operators.
Of interest to the old farts here...
At 22:30 (but which timezone?) on this day in 1969 the first packet got as
far as "lo" (for "login") then crashed on the "g".
More details over on http://en.wikipedia.org/wiki/Leonard_Kleinrock#ARPANET
(with thanks to Bill Cheswick for the link).
-- Dave
> From: Steve Johnson
> references that were checked using the pointer type of the structure
> pointer. My code was a nightmare, and some of the old Unix code was at
> least a bad dream.
I had a 'fun' experience with this when I went to do the pipe splice() system
call (after the discussion here). I elected to do it in V6, which I i) had
running, and ii) know like the back of my hand.
Alas! V6 uses 'int *' everywhere for pointers to structures. It also, in the
pipe code, uses constructs like '(p+1)' to provide wait channels. When I wrote
the new code, I naturally declared my pointers as 'struct inode *ip', or
whatever. However, when I went to do 'sleep(ip+1)', the wrong thing happened!
And since V6 C didn't have coercions, I couldn't win that way. IIRC, I finally
resorted to declaring an 'int *xip', and doing an 'xip = ip' before finally
doing my 'sleep(xip+1)'. Gack!
Noel
> From: Dave Horsfall
> We lost ... on this day
An email from someone on a related topic has reminded me of someone else you
should make sure is on your list (not sure if you already have him):
J. C. R. Licklider; we lost him on June 26, 1990.
He didn't write much code himself, but the work of people he funded (e.g.
Doug Engelbart, the ARPANet guys, Multics, etc, etc, etc) to work on his
vision has led to today's computerized, information-rich world. For people who
only know today's networked world, the change from what came before, and thus
his impact on the world (since his ideas and the work of people he sponsored
led, directly and indirectly, to much of it), is probably hard to truly
fathom.
He is, in my estimation, one of the most important and influential computer
scientists of all. I wonder how many computer science people had more of an
impact; the list is surely pretty short. Babbage; Turing; who else?
Noel
> From: Dave Horsfall
> We lost Jon Postel, regarded as the Father of the Internet
Vint and Bob Kahn might disagree with that... :-)
> (due to his many RFCs)
You need to distinguish between the many for which he was an editor (e.g. IP,
TCP, etc), and the (relatively few, compared to the others) which he actually
wrote himself, e.g. RFC-925, "Multi-LAN address resolution".
Not that he didn't make absolutely huge contributions, but we should be
accurate.
Noel
> Now it could be that v7 troff is perfectly capable of generating the
> manual just like older troff would have.
On taking over editorship for v7, I added some macros to the -man
package. I don't specifically recall making any incompatible
changes. If there were any, they'd most likely show up in
the title and synopsis and should be fixable by a minor tweak
to -man. I'm quite confident that there would be no problems
with troff proper.
Doug
Angelo Papenhoff <aap(a)papnet.eu> writes about the conversion of
printer points to other units:
>> From my experience in the world of prepress 723pts == 10in.
>>
>> Then Adobe unleashed PostScript on us and redefined the point
>> so that 72pt == 1in.
>>
>> I'm unaware of any other definitions of a point.
The most important other one is that used by the TeX typesetting
system: 72.27pt is one inch. TeX calls the Adobe PostScript one a big
point: 72bp == 1in. Here is what Don Knuth, TeX's author, wrote on
page 58 of The TeXbook (Addison-Wesley, 1986, ISBN 0-201-13447-0):
>> ...
>> The units have been defined here so that precise conversion to sp
>> is efficient on a wide variety of machines. In order to achieve
>> this, TeX's ``pt'' has been made slightly larger than the official
>> printer's point, which was defined to equal exactly .013837in by
>> the American Typefounders Association in 1886 [cf. National Bureau
>> of Standards Circular 570 (1956)]. In fact, one classical point is
>> exactly .99999999pt, so the ``error'' is essentially one part in
>> 10^8. This is more than two orders of magnitude less than the
>> amount by which the inch itself changed during 1959, when it
>> shrank to 2.54cm from its former value of (1/0.3937)cm; so there
>> is no point in worrying about the difference. The new definition
>> 72.27pt=1in is not only better for calculation, it is also easier
>> to remember.
>> ...
Here sp is a scaled point: 65536sp = 1pt. The distance 1sp is smaller
than the wavelength of visible light, and is thus not visible to
humans.
TeX represents physical dimensions as integer numbers of scaled
points, or equivalently, fixed-point numbers in points, with a 16-bit
fraction. With a 32-bit word size, that leaves 16 bits for the
integer part, of which the high-order bit is a sign, and the adjacent
bit is an overflow indicator. That makes TeX's maximum dimension on
such machines 1sp below 2^14 (= 16,384) points, or about 5.75 meters
or 18.89 feet.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------