> From: Larry McVoy
> The DOS file system, while stupid, was very robust in the face of
> crashes
I'm not sure it's so much the file system (in the sense of the on-disk
format), as how the system _used_ it (although I suppose one could consider
that part of the FS too).
The duplicated FAT, and the way file sectors are linked using it, is I suppose
a little more robust than other designs (e.g. the way early Unixes did it,
with indirect blocks and free lists), but I think a lot of it must have been
that DOS wrote stuff out quickly (as opposed to e.g. the delayed writes on
early Unix FS's, etc). That probably appoximated the write-ordering of more
designed-robust FS's.
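For anyone who hasn't looked inside it: the FAT is just an array of 'next
cluster' links, stored twice on the disk. A minimal sketch (mine, using FAT12
conventions, not actual DOS code) of how file sectors are chained through it:

    #include <stdio.h>

    #define FAT12_EOC 0xFF8   /* entries >= this mark end-of-chain */

    /* Follow one file's cluster chain through an in-memory FAT copy. */
    void walk_chain(const unsigned short *fat, unsigned cl)
    {
        while (cl < FAT12_EOC) {
            printf("cluster %u\n", cl);
            cl = fat[cl];     /* each entry names the next cluster */
        }
    }

    int main(void)
    {
        /* toy 8-entry FAT: a file occupying clusters 2 -> 3 -> 5 */
        unsigned short fat[8] = { 0, 0, 3, 5, 0, 0xFFF, 0, 0 };
        walk_chain(fat, 2);
        return 0;
    }

With both FAT copies written out promptly, a crash mid-write could in
principle leave at least one intact chain to recover from.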
Noel
> From: Diomidis Spinellis
> Arguably, the same can also be claimed for the networking system calls.
Well, it depends on exactly what you mean by "networking system calls". If
you mean networking a la BSD, perhaps.
However, I can state (from personal experience :-) that the I/O architecture
circa V6/V7 was not very suitable for TCP/IP internetworking (with its
emphasis on an un-reliable network, and smart endpoints). The reason is that
such networking doesn't really fit well into the 'start one I/O operation and
then block the process until it completes' model.
Yes, if you have an application running on top of a reliable stream, you
might be able to coerce that into the 'uni-directional, blocking' I/O model
(if the reliable stream implementation is in, or routed through, the kernel),
but lots of other things don't work so well. (Think, e.g. an interface with
asynchronous, un-predictable, RPC calls in both directions.)
Noel
> Linus had the qualities of being a good programmer, a good architect,
> and a good manager. I've never seen all 3 in a person before or since.
No comment about Linus, but Vic Vyssotsky is my pick for the title.
He created the first dataflow language (in 1960!). He invented
bit-parallel flow analysis and put it into Fortran 2 years later.
He was one of the technical triumvirs for Multics. Ran several
big development groups at Bell Labs, and was 2 levels up from
the Unix team in Research. I could go on and on. What he
didn't do was publish; he got ahead on pure innate ability
and brilliant insight--a profound influence on almost all
the original Unix crowd.
Doug
I don't know if it's worth even trying to find and mirror pre-1993 (i.e. when cheap CD-ROM mastering became possible) GNU software?
Things like binutils, gas, and GCC can be tremendously useful, along with binaries for long "dead" platforms?
I know that I've always been super thankful to the GNAT people for having some pre-compiled versions of the Ada translator which would also include GCC. Sometimes having some kind of native toolset is a big positive when you don't have anything, especially earlier versions that have issues with cross or Canadian-cross compiling.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
OMG. I don't know how many times I've consulted the Unix
Tree and blissfully ignored the cross-links that come at
the top of every file--I'm so intent on the content.
Apologies for cluttering the mailing list with a solved topic.
Doug
>Date: Sun, 19 Feb 2017 20:58:59 -0500
>From: Clem Cole <clemc(a)ccc.com>
>To: Nick Downing <downing.nick(a)gmail.com>
>Cc: Jason Stevens <jsteve(a)superglobalmegacorp.com>,
> "tuhs(a)minnie.tuhs.org" <tuhs(a)minnie.tuhs.org>
>Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>Message-ID: <CAC20D2NM_oyDz0tAM2o5_vJ8Ky_3fHoAmPHn8+DOqNwKoMyqfQ(a)mail.gmail.com>
>Content-Type: text/plain; charset="utf-8"
>
>On Sun, Feb 19, 2017 at 7:29 PM, Nick Downing <downing.nick(a)gmail.com> wrote:
...
>Anyway, Tru64 is based on OSF/1 but also has a lot of DEC proprietary
>things (like TruClusters and anything Alpha-specific) that go beyond the
>base OSF license, so you need the HP clearance before any of that can be
>made available [same is true for HP/UX of course]. To my knowledge,
>DEC/Compaq/HP never released the sources to Tru64 (or HP/UX) to the world
>the way Sun did for Solaris, which in the case of Tru64 is sort of a shame.
>There is some very good stuff in there, like the file systems, the lock
>managers, cluster scaling, messaging, etc. - which would be nice to compare
>to today's solutions. Since HP did have a bought-out AT&T license, they
>clearly could have done so, but I do not think anyone is left there to make
>that decision - sigh.
As far as I know, only the Tru64 Advanced File System (aka AdvFS) has
been released to the open-source community, in 2008. Its current status is
unknown (to me).
See also
. http://advfs.sourceforge.net
. https://www.cyberciti.biz/tips/download-tru64-unix-advanced-filesystem-advf…
Cheers,
rudi
Wow that'd be incredible!!!
I'd love to see how Mach 2.5/4.3BSD compared to the Mach 3.0/Lites 1.1 that
is the closest I've been able to find... I know about the NeXT stuff, as I
have NS 3.3 installed, although running it on 'white' hardware gets harder
and harder as PCs get newer and the IDE controllers are too featureful,
and too new, for NS to deal with; beyond that, it can only use 2GB disks
properly. Obviously with no source, or any way to get in to write drivers or
update the FFS on NeXTSTEP, it's basically stuck on those P1-era machines, or
emulation. There is even Previous, a 68030/68040 cube-based emulator for
running all the 'native' versions.
Archive what you can; I can only contribute minor things I stumble upon,
mostly by accident.
> ----------
> From: Atindra Chaturvedi
> Reply To: Atindra Chaturvedi
> Sent: Friday, February 17, 2017 11:47 PM
> To: jsteve(a)superglobalmegacorp.com; tuhs(a)minnie.tuhs.org
> Subject: Re: [TUHS] Mach for i386 / Mt Xinu or other
>
> Amazing - brings back memories. I was a Unix "enterprise IT user" not a
> "kernel developer guru" back in the day working at a pharmaceutical
> company and was responsible for moving the company off IBM 3090 and SNA to
> Unix and TCP/IP.
>
> Used to buy the new Unix-like releases as they were available to stay
> current - including the Mt. Xinu Mach 386 distro. I still have it and will
> happily send it to the archives - if I can be guided a bit.
>
> Ran the Mt. Xinu for many years as my home machine - it is pre-SCSI for
> booting (needs ESDI disks) but was very stable. So it will need tweaking to
> boot/install.
>
> Happy to have worked in the mid-'70s to '80s era when there were huge changes
> in computer hardware and software technology. I have my books and the
> software for all the cool stuff as it came out in those days - some day I
> will compile it and send it to where it can be better used or archived as
> history.
>
> Atindra.
>
>
>
> -----Original Message-----
> From: jsteve(a)superglobalmegacorp.com
> Sent: Feb 17, 2017 6:30 AM
> To: "tuhs(a)minnie.tuhs.org"
> Subject: [TUHS] Mach for i386 / Mt Xinu or other
>
> While testing a crazy project I wanted to get working I came across
> this ancient link:
>
> http://altavista.superglobalmegacorp.com/usenet/b182/comp/os/mach/542.txt
>
> --------8<--------8<--------8<--------8<
>
> Newsgroups: comp.os.mach
>
> Subject: Mach for i386 - want to beta?
>
> Message-ID: <1364(a)mtxinu.UUCP>
>
> Date: 2 Oct 90 17:12:19 GMT
>
> Reply-To: scherrer(a)mtxinu.COM (Deborah Scherrer)
>
> Organization: mt Xinu, Berkeley
>
> Lines: 24
>
> Mt Xinu is currently finishing up its release of 2.6 MSD for the
> i386. 2.6 MSD is a CMU-funded standard distribution of the Mach
> kernel, release-engineered with the following:
>
>     2.5 Mach kernel, with NFS & BSD-tahoe enhancements
>     Transarc's AFS
>     X11R4
>     most of the 4.3-tahoe BSD release
>     Andrew Tool Kit
>     Camelot transaction processing system
>     Cornell's ISIS distributed programming environment
>     most of the FSF utilities
>     a few other nifty things
>
> --------8<--------8<--------8<--------8<
>
> Was any of this stuff ever saved? I know on the CSRG CD there is some
> buried source for Mach 2.5, although I haven't seen anything on where to
> even start to compile it, or even how to boot it... I know Mach is
> certainly not fast, nor all that 'small', but it'd be interesting to see
> a 4.3BSD on a PC!
> That's what the Unix Tree is for!
Yes, but it doesn't have cross links as far as I know.
What I have in mind is effectively one more entry in
the root. Call it "union" perhaps. In a leaf of that
tree, say /union/usr/src/cmd/find, will be a page that
links to all the "find" sources in the other systems.
I don't know the range of topologies in the Unix Tree.
For example, some systems may have /src while others
have /usr/src. That could be hidden completely by
simply not revealing the path names. Alternatively
every level in the union tree could record its cousins
in the various systems, as well as its children
in the union system.
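A tiny sketch of what a union leaf might record (hypothetical C; the
paths and system names are made up for illustration):

    #include <stdio.h>

    /* Each leaf of the union tree lists the systems that contain a
     * version of the "same" file, hiding whether a given system
     * keeps it under /src or /usr/src. */
    struct union_leaf {
        const char *logical;       /* path within the union tree */
        const char *systems[8];    /* systems holding a version */
    };

    static struct union_leaf tree[] = {
        { "cmd/find", { "V5", "V6", "V7", "2.11BSD", 0 } },
        { "cmd/ls",   { "V5", "V6", "V7", 0 } },
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof tree / sizeof tree[0]; i++) {
            printf("/union/usr/src/%s ->", tree[i].logical);
            for (const char **s = tree[i].systems; *s; s++)
                printf(" %s", *s);
            putchar('\n');
        }
        return 0;
    }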
Doug
If things are filed by provenance, one useful kind of
cross-linking would be a generic tree whose "leaves"
link to all the versions of the "same" file. All the
better if it could also indicate the degree of
relatedness of the versions--perhaps an inferred
evolutionary tree or a shaded grid, where the
intensity of grid point x,y shows the relatedness
of x and y.
doug
Hi all, I think the current layout of the Unix Archive at
http://www.tuhs.org/Archive/ is starting to show its limitations as we get
more systems and artifacts that are not specifically PDP-11 and Vax.
I'm after some suggestions on how to reorganise the archive. Obviously
there are many ways to do this. Just off the top of my head, top level:
- Applications: things which run at user level
- Systems: things which have a kernel
- Documentation
- Tools: tools which can be used to deal with systems and files
Under Applications, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Systems, a set of directories for specific organisations (e.g. Research,
USL, BSD, Sun, DEC etc.). In each of these, directories for each system.
Under Documentation, several directories which hold specific things. If
useful, perhaps directories that collect similar things.
Under Tools, subdirectories for Disk, Tape, Emulators etc., then subdirs
for the specific tools.
Does this sound OK? Any refinements or alternate suggestions?
Cheers, Warren
Hi all, to quickly answer one recent question. If you want to upload
something Unix-related for me to archive, you can anonymous ftp upload to
ftp://minnie.tuhs.org/incoming/
Nobody can list the directory contents, so it's good for sensitive files.
If you upload something called xyz, can you also add xyz_Readme which might
describe e.g. what the thing is, where it came from, file format (e.g.
floppy images), how to install it, any other useful information.
If you think it can be added to the public Unix Archive at
http://www.tuhs.org/Archive/, or if the file definitely can't be added
and I should move it to the hidden archive, also say so. Also feel free
not to disclose your identity.
Cheers, Warren
P.S. Work has become busy this year. I might call for people to help
out with the curation. Any volunteers? Discretion is a pre-requisite.
> From: Atindra Chaturvedi
> including the Mt. Xinu Mach 386 distro. I still have it and will happily
> send it to the archives
Oh, that's fantastic. It's so important that everyone who has these chunks of
computing history makes sure they make it into repositories!
> I have my books and the software for all the cool stuff as it came out
> in those days - some day I will compile it and send it to where it can
> be better used or archived as history.
Please do! And everyone else, please emulate! (I'm already doing my bit! :-)
Noel
> OK, we're starting to get through all the clearances needed to release
> the non-MIT Unix systems
We have now completed (as best we can) the OK's for the 'BBN TCP/IP V6 Unix',
and I finally bestirred myself to add in the documentation I found for it,
and crank out a tarball, available here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tmp/bbn.tar
It includes all the documentation files I found for the Rand and BBN code (in
the ./doc directory); included are the original NROFF source to the two Rand
publications about ports, and several BBN reports.
This is an early TCP/IP Unix system written at BBN. It was not the first
TCP/IP Unix; that was one done at BBN in MACRO-11, based on a TCP done in
MACRO-11 by Jim Mathis at SRI for the TIU (Terminal Interface Unit).
This networking code is divided into three main groups. First there is
code for the kernel, which includes IPC enhancements to Unix, including
Rand ports, as well as further extensions to that done at BBN for the
earlier TCP - the capac() and await() calls. It also includes an IMP
interface driver (the code only interfaced to the ARPANET at this point in
time). Next, TCP is implemented as a daemon which runs as a single process
handling all the connections. Finally, other programs implement
applications; TELNET is the only one provided at this point in time.
The original port code was written by Steven Zucker at Rand; the extensions
done at BBN were by Jack Haverty. The TCP was mostly written by Mike
Wingfield, apparently with some assistance by Jon Dreyer. Dan Franklin
apparently wrote the TELNET.
Next, I'll be working on the MIT-CSR machine. That's going to take quite a
while - it's a whole system, with a lot of applications. It does include FTP,
SMTP, etc, though, so it will be a good system for anyone who wants to run V6
with TCP on a /23. We'll have to write device drivers for whatever networking
cards are out there, though.
Noel
> From: Larry McVoy
> Are you sure? Someone else said moshi was hi and mushi was bug. Does
> mushi have two meanings?
Yes:
http://www.nihongodict.com/?s=mushi
Actually, more than two! Japanese is chock-a-block with homonyms. Any
given Japanese word will probably have more than one meaning.
There's some story I don't quite recall about a recent Prime Minister who
made a mistake of this sort - although now that I think about it, it was
probably the _other_ kind of replication, which is that a given set of kanji
(ideograms) usually has more than one pronunciation. (I won't go into why,
see here:
http://mercury.lcs.mit.edu/~jnc/prints/glossary.html#Reading
for more.) So he was reading a speech, and gave the wrong reading for a word.
There is apparently a book (or more) in Japanese, for the Japanese, that lists
the common ones that cause confusion.
A very complicated language! The written form is equally complicated; there
are two syllabaries ('hiragana' and 'katakana'), and for the kanji, there are
several completely different written forms!
Noel
Follow-up to Larry's "Mushi! Mushi!" story
(http://minnie.tuhs.org/pipermail/tuhs/2017-February/008149.html)
I showed this to a Japanese acquaintance, who found it hilarious for a
different reason. He told me that a s/w bug is "bagu" -- a
semi-transliteration -- and "mushi" is "I ignore you". So corporate
called, asked for status, and the technical guy said "I am going to
ignore you!" and then hung up.
N.
I have found a video by Sandy Fraser from 1994 which discusses the Spider network (but not the related Unix software). The first 30 min or so are about Spider and the ideas behind it, then it moves on to Datakit and ATM:
https://www.youtube.com/watch?v=ojRtJ1U6Qzw
Although the thinking behind them is very different, the "switch" on the Spider network seems to have been somewhat similar to an Arpanet IMP.
Paul
==
On page 3 of the Research Unix reader (http://www.cs.dartmouth.edu/~doug/reader.pdf)
"Sandy (A. G.) Fraser devised the Spider local-area ring (v6) and the Datakit switch (v7) that have served in the lab for over a decade. Special services on Spider included a central network file store, nfs, and a communication package, ufs."
I do not recall ever seeing any SPIDER related code in the public V6 source tree. Was it ever released outside Bell Labs?
From a bit of Googling I understand that SPIDER was an ATDM ring network with a supervisor allocating virtual circuits. Apparently there was only ever one SPIDER loop with 11 hosts connected, although Fraser reportedly intended to create multiple connected loops as part of his research.
The papers that Fraser wrote are hard to find: lots of citations, but no copies, not even behind pay walls. The base report seems to be:
A. G. Fraser, "SPIDER - a data communication experiment", Tech. Report 23, Bell Labs, 1974.
Is that tech report available online somewhere?
Thanks!
Paul
> From: Random832
> You could return the address of the last character read, and let the
> user code do the math.
Yes, but that's still 'design the system call to work with interrupted and
re-started system calls'.
> If the terminal is in raw/cbreak mode, the user code must handle a
> "partial" read anyway, so returning five bytes is fine.
As in, if a software interrupt happens after 5 characters are read in, just
terminate the read() call and have it return 5? Yeah, I suppose that would
work.
> If it's in canonical mode, the system call does not copy characters into
> the user buffer until they have pressed enter.
I didn't remember that; that TTY code makes my head hurt! I've had to read it
(to add 8-bit input and output), but I can't remember all the complicated
details unless I'm looking at it!
> Maybe there's some other case other than reading from a terminal that it
> makes sense for, but I couldn't think of any while writing this post.
As the Bawden paper points out, probably a better example is _output_ to a
slow device, such as a console. If the thing has already printed 5 characters,
you can't ask for them back! :-)
So one can neither i) roll the system call back to make it look like it hasn't
started yet (as one could do, with input, by stuffing the characters back into
the input buffer with kernel ungetc()), nor ii) wait for it to complete (since
that will delay delivery of the software interrupt). One can only interrupt
the call (and show that it didn't complete, i.e. an error), or have
re-startability (i.e. argument modification).
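To make that concrete, here is the shape such a loop takes on the output
side, in today's POSIX C (an illustrative sketch; nothing like this existed
in V6): characters already printed stay printed, and the caller just counts
them and sends the remainder.

    #include <errno.h>
    #include <unistd.h>

    /* Push an entire buffer to a slow device, accepting partial
     * completions: the loop only ever re-sends what has not yet
     * been written. */
    ssize_t write_all(int fd, const char *buf, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = write(fd, buf + sent, len - sent);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted: retry the rest */
                return -1;      /* real error */
            }
            sent += n;          /* partial progress is kept */
        }
        return (ssize_t)sent;
    }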
Noel
> From: Paul Ruizendaal
> There's an odd comment in V6, in tty.c, just above ttread():
> ...
> That comment is strange, because it does not describe what the code
> does.
I can't actually find anyplace where the PC is backed up (except on a
segmentation fault, when extending the stack)?
So I suspect that the comment is a tombstone; it refers to what the code did
at one point, but no longer does.
> The comment isn't there in V5 or V7.
Which is consistent with it documenting a temporary state of affairs...
> I wonder if there is a link to the famous Gabriel paper
I suspect so. Perhaps they tried backing up the PC (in the case where a system
call is interrupted by a software interrupt in the user's process), and
decided it was too much work to do it 'right' in all instances, and punted.
The whole question of how to handle software interrupts while a process is
waiting on some event, while in the kernel, is non-trivial, especially in
systems which use the now-universal approach of i) writing in a higher-level
stack oriented language, and ii) 'suspending' with a sub-routine call chain on
the kernel stack.
Unix (at least, in V6 - I'm not familiar with the others) just trashes the
whole call stack (via the qsav thing), and uses the intflg mechanism to notify
the user that a system call was aborted. But on systems with e.g. locks, it
can get pretty complicated (try Googling Multics crawl-out). Many PhD theses
have looked at these issues...
> Actually, research Unix does save the complete state of a process and
> could back up the PC. The reason that it doesn't work is in the syscall
> API design, using registers to pass values etc. If all values were
> passed on the stack it would work.
Sorry, I don't follow this?
The problem with 'backing up the PC' is that you 'sort of' have to restore the
arguments to the state they were in at the time the system call was first
made. This is actually easier if the arguments are in registers.
I said 'sort of' because the hard issue is that there are system calls (like
terminal I/O) where the system call is potentially already partially executed
(e.g. a read asking for 10 characters from the user's console may have
already gotten 5, and stored them in the user's buffer), so you can't just
simply completely 'back out' the call (i.e. restore the arguments to what they
were, and expect the system call to execute 'correctly' if retried - in the
example, those 5 characters would be lost).
Instead, you have to modify the arguments so that the re-tried call takes up
where it left off (in the example above, tries to read 5 characters, starting
5 bytes into the buffer). The hard part is that the return value (of the
number of characters actually read) has to count the 5 already read! Without
the proper design of the system call interface, this can be hard - how does
the system distinguish between the _first_ attempt at a system call (in which
the 'already done' count is 0), and a _later_ attempt? If the user passes in
the 'already done' count, it's pretty straightforward - otherwise, not so
much!
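In modern-C terms the 'already done' count usually ends up living in user
space: each retry passes the adjusted pointer and remaining length, so the
kernel never needs to tell a first attempt from a retry. A sketch (today's
POSIX C, not V6 code):

    #include <errno.h>
    #include <unistd.h>

    /* Read exactly 'want' bytes unless EOF or a hard error occurs.
     * 'done' is the "already done" count: each retry resumes at
     * buf + done, so characters read before an interrupt are kept. */
    ssize_t read_restarting(int fd, char *buf, size_t want)
    {
        size_t done = 0;
        while (done < want) {
            ssize_t n = read(fd, buf + done, want - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* retry from where we left off */
                return -1;
            }
            if (n == 0)
                break;          /* EOF */
            done += n;
        }
        return (ssize_t)done;
    }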
Alan Bawden wrote a good paper about PCLSR'ing which explores some of these
issues.
Noel
There's an odd comment in V6, in tty.c, just above ttread():
/*
* Called from device's read routine after it has
* calculated the tty-structure given as argument.
* The pc is backed up for the duration of this call.
* In case of a caught interrupt, an RTI will re-execute.
*/
That comment is strange, because it does not describe what the code does. The comment isn't there in V5 or V7.
I wonder if there is a link to the famous Gabriel paper about "worse is better" (http://dreamsongs.com/RiseOfWorseIsBetter.html) In arguing its points, the paper includes this story:
---
Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where loser is the affectionate name for user at MIT.
The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix -- namely, implementation simplicity was more important than interface simplicity.
---
Actually, research Unix does save the complete state of a process and could back up the PC. The reason that it doesn't work is in the syscall API design, using registers to pass values etc. If all values were passed on the stack it would work. As to whether it is the right thing to be stuck in a read() call waiting for terminal input after a signal was received...
I always thought that this story was entirely fictional, but now I wonder. The Unix guru referred to could be Ken Thompson (note how he is first referred to as "from Berkeley but working on Unix" and then as "the New Jersey guy").
Who can tell me more about this? Any of the old hands?
Paul
> From: Lars Brinkhoff
> Nick Downing <downing.nick(a)gmail.com> writes:
>> By contrast the MIT guy probably was working with a much smaller/more
>> economical system that didn't maintain a kernel stack per process.
I'm not sure I'd call ITS 'smaller'... :-)
> PCLSRing is a feature of MIT' ITS operating system, and it does have a
> separate stack for the kernel.
I wasn't sure if there was a separate kernel stack for each process; I checked
the ITS source, and there is indeed a separate stack per process. There are
also three other stacks in the kernel that are used from time to time (look
for 'MOVE P,' for places where the SP is loaded).
Oddly enough, it doesn't seem to ever _save_ the SP - there are no 'MOVEM P,'
instructions that I could find!
Noel
On page 3 of the Research Unix reader (http://www.cs.dartmouth.edu/~doug/reader.pdf)
"Sandy (A. G.) Fraser devised the Spider local-area ring (v6) and the Datakit switch (v7) that have served in the lab for over a decade. Special services on Spider included a central network file store, nfs, and a communication package, ufs."
I do not recall ever seeing any SPIDER related code in the public V6 source tree. Was it ever released outside Bell Labs?
From a bit of Googling I understand that SPIDER was an ATDM ring network with a supervisor allocating virtual circuits. Apparently there was only ever one SPIDER loop with 11 hosts connected, although Fraser reportedly intended to create multiple connected loops as part of his research.
The papers that Fraser wrote are hard to find: lots of citations, but no copies, not even behind pay walls. The base report seems to be:
A. G. Fraser, "SPIDER - a data communication experiment", Tech. Report 23, Bell Labs, 1974.
Is that tech report available online somewhere?
Thanks!
Paul
> we just read the second tape, which read without error. ... at this
> point we have access to everything that was on that machine.
OK, we're starting to get through all the clearances needed to release the
non-MIT Unix systems on the machine. (The MIT one is going to take more
work - I have to curate out all the personal files.)
We have now completed the OK's for the 'Network Unix' (the one done at the
University of Illinois for use on the ARPANET, with NCP). A tarball is
available here:
http://ana-3.lcs.mit.edu/~jnc/tech/pdp11/tmp/nosc.tar
(It's called 'nosc.tar' because it came through NOSC, and then SRI,
on the way to MIT.)
In addition to all the UIllinois code, it also contains early versions of the
MH mail reader (from Rand) and the MMDF mailer (from UDel).
Enjoy!
Noel
With no offense intended, I can't help noting the irony of the
following paragraph appearing in a message in the company of
others that address Unix "bloat".
>'\cX' A mechanism that allows usage of the non-printable
> (ASCII and compatible) control codes 0 to 31: to cre-
> ate the printable representation of a control code the
> numeric value 64 is added, and the resulting ASCII
> character set code point is then printed, e.g., BEL is
> '7 + 64 = 71 = G'. Whereas historically circumflex
> notation has often been used for visualization pur-
> poses of control codes, e.g., '^G', the reverse
> solidus notation has been standardized: '\cG'. Some
> control codes also have standardized (ISO 10646, ISO
> C) alias representations, as shown above (e.g., '\a',
> '\n', '\t'): whenever such an alias exists S-nail will
> use it for display purposes. The control code NUL
> ('\c@') ends argument processing without producing
> further output.
Except for the ISO citations, the following paragraph says the same
thing more succinctly.
'\cX' represents a nonprintable character Y in terms of the
printable character X whose binary code is obtained
by adding 0x40 (decimal 64) to that for Y. (In some
historical contexts, '^' plays the role of '\c'.)
Alternative standard representations for certain
nonprinting characters, e.g. '\a', '\n', '\t' above,
are preferred by S-nail. '\c@' (NUL) serves as a
string terminator regardless of following characters.
And this version, 1/3 the length of the original, tells all
one really needs to know.
'\cX' represents a nonprintable character Y in terms of the
printable character X whose binary code is obtained
by adding 0x40 (decimal 64) to that for Y. '\c@'
(NUL) serves as a string terminator regardless of
following characters.
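Both versions describe the same arithmetic; a tiny illustrative C routine
(mine, not S-nail's) makes it concrete:

    #include <stdio.h>

    /* Print a control code 0..31 in '\cX' notation: adding 0x40
     * (decimal 64) yields the printable counterpart, e.g.
     * BEL: 7 + 64 = 71 = 'G'. */
    void show_ctl(int c)
    {
        if (c >= 0 && c < 32)
            printf("\\c%c\n", c + 0x40);
    }

    int main(void)
    {
        show_ctl(7);    /* BEL -> \cG */
        show_ctl(0);    /* NUL -> \c@ */
        show_ctl(9);    /* TAB -> \cI */
        return 0;
    }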
Doug