> Wouldn't the -man macros have predated -ms?
Indeed. My error.
The original -man package was quite weak. It got a major face
lift for v7 and once more at v9 or so. And further man-page
packages are still duking it out today. -ms has lots of rivals,
too, but its continued popularity attests to Mike Lesk's fine
sense of design.
Doug
> From: Nemo
> I have read that one of the first groups in AT&T to use early Unix was
> the legal dep't, specifically to use *roff to write patent applications.
> Can anyone elaborate on this or supply references?
Are you familiar with the description in Dennis M. Ritchie, "The Evolution of
the Unix Time-sharing System":
https://www.bell-labs.com/usr/dmr/www/hist.htm
(in the section "The first PDP-11 system")? Not a great deal of detail, but...
> It would also be interesting to learn how the writers were taught *roff,
> what editors were used
I'm pretty sure 'ed' was the only editor available at that point.
Noel
> From: Clem Cole
> Programmer's Workbench - aka PWB was John Mashey and team in Whippany.
> They took a V6 system and made some changes
I was surprised to find, reading the article on it in the Unix BSTJ issue, that
the system changes were less than I'd thought. Some of the stuff in the PWB1
release that we have (see previous message) is _not_ described in that article
(which is fairly detailed), which further compounds the lack of clarity over
who/what/when between V6 and V7.
> Noel may know how it made it to MIT
That I _do_ know! There was some sort of Boy Scouts group at Bell (not sure
exactly where) and one of the members went to MIT. I think he was doing
undergraduate research work in the first group at MIT to have Unix (Steve
Ward's), but anyway he had some connection there; and I think also had a
summer job at Bell. He was the Bell->MIT conduit.
> PWB 2.0 was released a few years later and was based on the UNIX/TS
> kernel and some other changes and it was around this time that the UNIX
> Support Group was formed
??? If PWB1 was in July '77, and PWB2 was some years later, USG couldn't have
been formed 'around [that] time' because there's that USG document from
January '76?
Noel
> From: Jon Forrest <nobozo(a)gmail.com>
> John Mashey had a lot to do with PWB so maybe he can say a few words
> about it if he's on here.
It would be great to have some inside info about the relationship among the
Research, USG and PWB systems. Clearly there was communication, and things got
passed around, but we know so little about what was happening during the
period between V6 and V7 when a lot happened (e.g. the changes to C, just
mentioned).
E.g. check out the PWB1 version of exec():
https://minnie.tuhs.org//cgi-bin/utree.pl?file=PWB1/sys/sys/os/sys1.c
It's been changed from V6 to copy the arguments into swap space, _not_ buffers
allocated from the system buffer pool (which is how V6 does it). So, who did
this originally - did the PWB people do it, or was it something the research
people did, that PWB picked up?
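To make the difference concrete, here is a toy user-space model of the two
strategies (invented names; the real code runs in the kernel and uses
getblk(NODEV) or the swap-map allocator rather than static arrays):

    #include <stdio.h>
    #include <string.h>

    /* exec() must save the argument strings somewhere safe before the
     * old core image (where argv lives) is destroyed.  V6 parked them
     * in borrowed block-cache buffers; PWB1 parked them in swap. */
    static char buffer_cache[512];  /* stand-in for a borrowed buffer */
    static char swap_space[512];    /* stand-in for allocated swap    */

    static void
    save_args(char *const argv[], char *safe, size_t len)
    {
        size_t off = 0;
        for (int i = 0; argv[i] != NULL; i++) {
            size_t n = strlen(argv[i]) + 1;
            if (off + n > len)
                break;                       /* E2BIG, in a real kernel */
            memcpy(safe + off, argv[i], n);
            off += n;
        }
    }

    int
    main(void)
    {
        char *argv[] = { "cmd", "-x", "file", NULL };
        save_args(argv, buffer_cache, sizeof buffer_cache); /* V6 flavor   */
        save_args(argv, swap_space, sizeof swap_space);     /* PWB1 flavor */
        printf("%s\n", buffer_cache);
        return 0;
    }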
I used to think it was USG, but there's a 'Unix Program Description' document
prepared by USG, dated January 1976, and it's still clearly using the V6
approach. The PWB1 release was allegedly July, 1977:
https://minnie.tuhs.org//cgi-bin/utree.pl?file=PWB1
(Which is, AFAIK, the _only_ set of sources we have for after V6 and before V7
- other than the MIT system, which seems to be basically PWB1.)
So who did the exec() changes, originally?
And I could list a bunch more like this...
Noel
I never really learned VI. I can stumble through it in ex mode if I have
to. If there's no EMACS on the UNIX system I'm using, I use ed.
You get real good at regular expressions. Some of my employees were
pretty amazed at how fast I could make code changes with just ed.
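For a flavor of it, a global rename in ed is a one-liner (file name,
identifier, and byte counts invented here):

    $ ed parser.c
    5231
    g/bufp/s//bufptr/g
    w
    5267
    q

The g command applies the substitution on every matching line, and the
empty pattern in s// reuses the last regular expression.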
Here's part of the story.
> From: "Doug McIlroy" <doug(a)cs.dartmouth.edu>
> To:<tuhs(a)minnie.tuhs.org>
> Sent:Fri, 16 Dec 2016 21:09:16 -0500
> Subject: [TUHS] How Unix made it to the top
>
> It has often been told how the Bell Labs law department became the
> first non-research department to use Unix, displacing a newly acquired
> stand-alone word-processing system that fell short of the department's
> hopes because it couldn't number the lines on patent applications,
> as USPTO required. When Joe Ossanna heard of this, he told them about
> nroff and promised to give it line-numbering capability the next day.
> They tried it and were hooked. Patent secretaries became remote
> members of the fellowship of the Unix lab. In due time the law
> department got its own machine.
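That line-numbering capability survives in nroff as the .nm request; a
minimal sketch of numbered input (sample text invented, not from a real
application):

    .nm 1
    The present invention relates to apparatus for ...
    .br
    It is accordingly an object of the invention to ...
    .nm

.nm 1 turns numbering on starting at 1; a bare .nm turns it off.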
Come to think of it, they must already have had a machine, probably
leased from the commercial word-processing company, for they had DEC
tapes they needed to convert to Unix format. Several of us in the Unix
lab spent a memorable afternoon decoding the proprietary format. It was
finally broken when we computed a bitwise autocorrelation function. It
had a big peak at seven. The tapes were pure ASCII rather than bytewise
ASCII--a lot of work for very little data compression.
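A big peak at seven in a bitwise autocorrelation is just what flush-packed
7-bit characters would produce, and undoing such packing takes only a few
lines of C. This sketch assumes most-significant-bit-first packing, which
may or may not match the actual tape format:

    #include <stdio.h>

    /* Unpack 7-bit characters packed flush against one another
     * ("pure ASCII") back into one character per byte. */
    int
    main(void)
    {
        int c, nbits = 0;
        unsigned long acc = 0;

        while ((c = getchar()) != EOF) {
            acc = (acc << 8) | (unsigned)c;   /* append 8 more bits   */
            nbits += 8;
            while (nbits >= 7) {              /* peel off 7-bit chars */
                nbits -= 7;
                putchar((int)((acc >> nbits) & 0177));
            }
            acc &= (1UL << nbits) - 1;        /* drop consumed bits   */
        }
        return 0;
    }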
As for training, the secretaries had to learn nroff and ed plus the
usual lot of ls, mkdir, mv, cp, rm. The patent department had to invest
in modems and order phone lines to plug them into. I don't know what
terminals they used.
From this distant point in time it seems that it all happened in a couple
of weeks. Joe Ossanna did most of the teaching, and no doubt supplied
samples to copy. As far as I know the only other instructional materials
would have been man pages and the nroff manual (forbiddingly terse,
though thorough). He may have made a patent-macro package, but I doubt
it; I think honor for the first real macro package goes to Lesk's -ms.
Doug
Larry’s question about PWB made me think it might be useful to this list
for some of this to be written down.
When you write the story of UNIX, licensing is a huge part of it (both good
and bad). As I have tried to explain before, the 1956 consent decree and
the later 1980 Judge Greene ruling, as well as how the AT&T legal department
set up the licenses, really cast huge shadows that almost seem trite today
but seem to have been forgotten.
In fact, later licensing would become so important that one of the more infamous
UNIX wars was based on it (if you go back to the original OSF Founding
Principles – two of them are ‘Fair and Stable Licensing Terms’). As we
all know, because of the original 1956 decree, AT&T was not allowed to be
in the computer business, and so when people came calling both to use it
(academically and commercially) and to relicense it, history has shown that
AT&T's management killed the golden goose. I'd love to hear the views of
Doug, Steve, Ken and others who were inside looking out.
FWIW: These are my thoughts from an Academic and Commercial user back in
the day. AT&T’s management was clearly concerned about the consent decree
and the original licenses show it. UNIX was licensed for free to academic
institutions for research use (sans a small tape copying fee) and the bits
were ‘abandoned on your door step.’ This style of license, along with
the publishing of the ideas behind it, really did get the ideas out, and the
academic community loved it. We used it and we were able to share
everything.
The academic license was fine until people wanted to start using it in a
commercial setting (Rand Corp). Again, AT&T legal was worried about being
perceived as in the computer business, so the original commercial use
license shows it. AT&T licensing basically used the academic license but
added the ability to actually use it for commercial purposes. Then the
first universities started to want to use UNIX more like a commercial
system [Dan Klein and I would go on strike and force CMU to purchase the
first commercial use license for an academic setting, followed by Case
Western].
As AT&T management realized the UNIX IP did seem to have some value, just
as the transistor had earlier, it seems like they wanted to find a way to
keep it under their control. I remember having a dinner conversation with
Dennis at a USENIX about this topic. Steve has expressed that they told
many folks to treat it as a 'trade secret' (which is strange to me since
the cat was already out of the bag by then and the ideas (IP) behind UNIX
had already been extensively published; we even had USENIX formed to
discuss the ideas).
By the time Judge Greene allowed AT&T to be in the computer business, I
think AT&T management completely misunderstood the value of what they had.
The AT&T legal team kept changing the commercial rules: with every new UNIX
release a new license was created, and thus firms like DEC, HP, IBM et al.
were getting annoyed because they had begun to invest in the technology
themselves, and the feeling inside those firms was that AT&T management was
changing the ground rules after the game started.
IMO a funny thing happened (bad karma): the tighter AT&T management tried
to control things in the UNIX community, the less control the community
gave them. Clearly, the new features of the technology started to be driven
by BSD. But the license was the one place they could exert control, and
they tried. In fact, by the time of SVR4 it all came to a head, and OSF was
formed because most firms were unwilling to give AT&T the kind of control
they were asking for in that license [as Larry has previously expressed,
Sun made a Faustian deal WRT SVR4]. In the end, the others were shipping
from an SVR3 license or had bought it out.
> From: Clem cole
> Thinking about this typesetter C may have been later with ditroff.
Not so sure about that; we had that C at MIT, but only regular troff (which
had been hacked to drive a Varian).
> From: Arnold Skeeve
> It seems to be shortly after the '78 release of V7.
No, typesetter C definitely pre-dated V7. The 'PWB1' system at MIT had the new
C.
Looking at the documentation files for the extension (e.g. addition of
'long's), none of them have dates in them (alas), but my hard-copy printout of
one is dated "May 8 1978", and it was several years old at that point.
(Also, several sources give '79 for V7 - Salus says 'June 1979').
Noel
> From: Clem Cole
> There is an open question about the need to support self-modifying code
> too. I personally don't think of that as important as the need for
> conditional instructions which I do think need to be there before you
> can really call it a computer.
Here's one way to look at it: with conditional branching, one can always
write a program to _emulate_ a machine with self-modifying code (if that's
what floats your boat, computing-wise) - because that's exactly what older,
simple microcoded machines (which don't, of course, have self-modifying code
- their programs are in ROM) do.
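A sketch of that argument in C: the interpreter below never alters its own
code (think microcode in ROM), yet the toy target machine it implements is
free to store into its own instruction stream. The 16-bit format (4-bit
opcode, 12-bit address) is invented for the example:

    /* Fixed interpreter loop emulating a machine whose programs may
     * self-modify: mem[] holds the target's code and data, mixed. */
    unsigned short mem[4096];
    unsigned short pc, acc;

    void
    run(void)
    {
        for (;;) {
            unsigned short inst = mem[pc++ & 07777];  /* fetch  */
            unsigned short addr = inst & 07777;
            switch (inst >> 12) {                     /* decode */
            case 0: return;                           /* HALT   */
            case 1: acc = mem[addr]; break;           /* LOAD   */
            case 2: mem[addr] = acc; break;           /* STORE: may
                                                         rewrite the
                                                         target's code */
            case 3: acc += mem[addr]; break;          /* ADD    */
            case 4: if (acc != 0) pc = addr; break;   /* BNZ: the
                                                         conditional
                                                         doing the work */
            default: return;
            }
        }
    }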
Noel
Way back on this day in 1941, Konrad Zuse unveiled the Z3; it was the
first programmable automatic computer as we know it (Colossus 1 was not
general-purpose). The last news I heard about the Z3 was that she was
destroyed in an air-raid...
This pretty much started computing, as we know it.
-- Dave
All, in case you haven't seen it:
https://www.ioccc.org/2018/mills/
This is a PDP-7 emulator in C, enough to run PDP-7 Unix. But the author
has written a PDP-11 emulator in PDP-7 assembly, and uses this to run
2.9BSD on the emulated PDP-7 :-)
Cheers, Warren
>From: jnc(a)mercury.lcs.mit.edu (Noel Chiappa)
>To: tuhs(a)tuhs.org
>Cc: jnc(a)mercury.lcs.mit.edu
>Subject: Re: [TUHS] Who used *ROFF?
>Message-ID: <20180512110127.0B81418C08E(a)mercury.lcs.mit.edu>
>
<snip>
>Are you familiar with the description in Dennis M. Ritchie, "The Evolution of
>the Unix Time-sharing System":
>
> https://www.bell-labs.com/usr/dmr/www/hist.htm
>
<snip>
Please note the URL should end with ".html", not ".htm"
I wasted 5 minutes (insert big grin) wondering why I got a 404 like
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: hist.htm
RequestId: 454E36190753F99C
HostId: 6EJTsEdvnbnAr4VO7+mxSWH+dcX8X6AGRLZxwOLha/9q5G2CAxsVbEw6aMF+NHIPbhrAQ+/t/8o=
Hardly ever use notepad, hardly ever used notepad. Especially since I
discovered notepad++ many years ago ( https://notepad-plus-plus.org )
Of course, I use what is handy for what I'm doing. 'vim' I use when I
want to do some 'manipulation' :-)
Does anyone know why UUCP "bag" files are called "bag"?
Seeing as this relates to unix-to-unix-copy, I figured that someone on
TUHS might have an idea.
Thanks in advance.
--
Grant. . . .
unix || die
Tomorrow, May 12, in 1941 the Z3 computer was presented by Konrad Zuse:
https://en.wikipedia.org/wiki/Z3_(computer)
I enjoyed reading the specs at the bottom of the Wikipedia page. I
never heard of this project until today, coming across it in an article.
Mike Markowski
Born on this day in 1930, Edsger Dijkstra gave us ALGOL, the basis of all decent
programming languages today, and the entire concept of structured
programming. Ah, well I remember the ALGOLW compiler on the 360...
There's a beaut article here:
https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF
And remember, testing can show the presence of errors, but not their
absence...
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
I'm curious, as UNIX folks, if someone can enlighten me. I sometimes
answer things on Quora and a few years ago the following question was
posted:
What does "Experience working in Unix/Linux environments" mean when given
as a required skill in company interviews? What do they want from us?
<https://www.quora.com/What-does-Experience-working-in-Unix-Linux-environmen…>
Why would this be considered a spam violation? I was notified today that it
is.
It all depends on the job and the specific experiences the hiring managers
want to see. The #1 thing I believe they will be looking for is something
that does not need to have a GUI to be useful. If you are a simple user, it
means you are comfortable in a text-based shell environment and are at
least familiar with, if not proficient with, UNIX tools such as ed, vi or
emacs, grep, tail, head, sed, awk, cut, join, tr, etc. You should be able
to use one or more of the Bourne Shell/Korn Shell/Bash family or the C
shell. You should be familiar with the UNIX file tree and basic layout and
its protection scheme. It helps if you understand the differences between
BSD, SysV, Mac OS X, and Linux layout; but for many jobs in the UNIX
community that may not
be required. You should also understand how to use the Unix man command to
get information on the tools you are using; i.e. you should have read, if
not own a copy of, Kernighan and Pike's The Unix Programming Environment
(Prentice-Hall Software Series):
<https://www.amazon.com/Unix-Programming-Environment-Prentice-Hall-Software/…>
and be proficient in the first four chapters. If the job requires writing
scripts, you should be able to write shell scripts (i.e. program the shell)
using the Bourne shell syntax, i.e. Chapter 5 (Shell Programming).
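As a concrete (made-up) example of the kind of Bourne script meant here,
one that reports the five largest files under each directory named on the
command line:

    #!/bin/sh
    # List the five largest files under each directory argument
    # (plain Bourne shell; file names with spaces will confuse awk).
    for dir in "$@"
    do
        echo "== $dir =="
        find "$dir" -type f -exec ls -l {} \; |
            sort -nr -k5 |
            head -5 |
            awk '{ print $5, $NF }'
    done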
If you are a programmer, then you need to be used to the UNIX/Linux
toolchains and probably should not require an IDE. Again, as a programmer,
knowledge of, if not proficiency with, at least one source code control
system (SCCS, RCS, CVS, SVN, Mercurial, git, or the like) is needed.
Kernighan and Pike's Chapters 6-8 should be common knowledge. But to be
honest, you also should be familiar with the contents of W. Richard
Stevens's Advanced Programming in the UNIX Environment, 3rd Edition (aka
APUE), if not keep a copy of it on your desk:
<https://www.amazon.com/Advanced-Programming-UNIX-Environment-3rd/dp/0321637…>
If you are a system administrator, then the required set of tools gets much
larger, and knowing the different ways to "generate" (build) a system is a
good idea, though fewer tools are needed for user maintenance. In this case
you should be familiar with, again if not keep a copy on your desk of, Evi
Nemeth's UNIX and Linux System Administration Handbook, 4th Edition:
<https://www.amazon.com/UNIX-Linux-System-Administration-Handbook/dp/0131480…>
which is, and has been, one of the best UNIX admin works, if not the best,
for many, many years.
Updated 05/07/18: to explain that I am not shilling for anyone. I am trying
to honestly answer the question and make helpful recommendations of how to
learn what the person asked, to help them be better equipped to be employed
in the Unix world. I used Amazon's URLs because they are global and easy
to use as a reference. But I am not suggesting you purchase from them. In
fact, if you can borrow a copy from your library to start, that might be a
good idea.
Grant Taylor:
(Maybe the 3' pipe wrench has something to do with it.)
=======
Real pipes don't need wrenches. Maybe those in Windows do.
Norman Wilson
Toronto ON
Hi all,
Forgive this cross-post from cctalk, if you're seeing this message twice. TUHS seems like a very appropriate list for this question.
I'm experimenting with setting up UUCP and Usenet on a cluster of 3B2/400s, and I've quickly discovered that while it's trivial to find old source code for Usenet (B News and C News), it's virtually impossible to find source code for old news *readers*.
I'm looking especially for nn, which was my go-to at the time. The oldest version I've found so far is nn 6.4, which is too big to compile on a 3B2/400. If I could get my hands on 6.1 or earlier, I think I'd have a good chance.
I also found that trn 3.6 from 1994 works well enough, though it is fairly bloated. Earlier versions of that might be better.
Does anyone have better Google-fu than I do? Or perhaps you've got earlier sources squirreled away?
As an aside: If you were active on Usenet in 1989, what software were you using?
-Seth
--
Seth Morabito
web(a)loomcom.com
Hi,
There's a unix "awesome list". It mentions TUHS's wiki, as well as this quote:
"This is the Unix philosophy: Write programs that do one thing and do
it well. Write programs to work together. Write programs to handle
text streams, because that is a universal interface." - Douglas
McIlroy, former head of Bell Labs Computing Sciences Research Center
https://github.com/sirredbeard/Awesome-UNIX
On 08.05.18 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > My point being that ... pages are invisible to the process segments are
> > very visible. And here we talk from a hardware point of view.
>
> So you're saying 'segmentation means instructions explicitly include segment
> numbers, and the address space is a two-dimensional array', or 'segmentation
> means pointers explicitly include segment numbers', or something like that?
Not really. I'm trying to understand your argument.
You said:
"BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user
(except
for setting the total size of the process), whereas segmentation is
explicitly
visible to the user."
And then you used MERT as an example of this.
My point then is, how is MERT any different from mmap() under Unix?
Would you then say that the paging is visible under Unix, meaning that
this is then segmentation?
In my view, you are talking about a software concept. And as such, it
has no bearing on whether a machine has pages or segments, as that is a
hardware distinction, while anything done as a service by the OS is a
completely different and independent question.
> I'm more interested in the semantics that are provided, not bits in
> instructions.
Well, if we talk semantics instead of the hardware, then you can just
say that any machine is segmented, and you can say that any machine is
have pages. Because I can certainly make it appear both ways from the
software point of view for applications running under an OS.
And I can definitely do that on a PDP-11. The OS can force pages to
always be 8K in size, and the OS can (as done by lots of OSes) provide a
mechanism that gives you something you call segments.
> It's true that with a large address space, one can sort of simulate
> segmentation. To me, machines which explicitly have segment numbers in
> instructions/pointers are one end of a spectrum of 'segmented machines', but
> that's not a strict requirement. I'm more concerned about how they are used,
> what the system/user gets.
So, again. Where does mmap() put you then?
And, just to point out the obvious, any machine with pages has a page
table, and the page table entry is selected based on the high bits of
the virtual address. Exactly the same as on the PDP-11. The only
difference is the number of pages, and the fact that the page on the
PDP-11 does not have a fixed length, but can be terminated early if wanted.
So, pages are explicitly numbered in pointers on any machine with pages.
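To make the comparison concrete, here is the PDP-11 translation in
miniature as C (a simplified model: 8 PAR/PDR pairs, no separate I/D or
mode sets, upward-growing pages only):

    #include <stdlib.h>

    /* The high 3 bits of a 16-bit virtual address index the page
     * registers -- exactly the "page bits in every pointer" of any
     * paged machine -- and the length field is what lets a PDP-11
     * page stop short of the full 8K. */
    unsigned int par[8];   /* page address registers, in 64-byte clicks */
    unsigned int plf[8];   /* page length fields, also in clicks        */

    unsigned long
    translate(unsigned int va)
    {
        unsigned int page  = (va >> 13) & 07;    /* which page register */
        unsigned int block = (va >> 6) & 0177;   /* click within page   */

        if (block > plf[page])
            abort();                             /* length fault        */
        return ((unsigned long)par[page] + block) * 64 + (va & 077);
    }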
> Similarly for paging - fixed sizes (or a small number of sizes) are part of
> the definition, but I'm more interested in how it's used - for demand loading,
> and to simplify main memory allocation purposes, etc.
I don't get it. So, in which way are you still saying that a PDP-11
doesn't have pages?
> >> the semantics available for - and _visible_ to - the user are
> >> constrained by the mechanisms of the underlying hardware.
>
> > That is not the same thing as being visible.
>
> It doesn't meet the definition above ('segment numbers in
> instructions/pointers'), no. But I don't accept that definition.
I'm trying to find out what your definition is. :-)
And if it is consistent and makes sense... :-)
> > All of this is so similar to mmap() that we could in fact be having this
> > exact discussion based on mmap() instead .. I don't see you claiming
> > that every machine uses a segmented model
>
> mmap() (and similar file->address space mapping mechanisms, which a bunch of
> OS's have supported - TENEX/TOP-20, ITS, etc) are interesting, but to me,
> orthogonal - although it clearly needs support from memory management hardware.
Can you explain how mmap() is any different from the service provided by
MERT?
> And one can add 'sharing memory between two processes' here, too; very similar
> _mechanisms_ to mmap(), but different goals. (Although I suppose two processes
> could map the same area of a file, and that would give them IPC mapping.)
That's how a single copy of shared libraries happens under Unix.
Exactly what happens if you modify the memory depends on what flags you
give to mmap().
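For the record, the flag distinction referred to, in standard mmap() terms
(a minimal sketch, no error handling):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Two mappings of the same file: MAP_SHARED stores are written
     * back and seen by every process mapping the file; MAP_PRIVATE
     * stores hit copy-on-write pages and stay local -- which is how
     * one physical copy of a shared library's text can serve every
     * process on the system. */
    void
    demo(const char *path, size_t len)
    {
        int fd = open(path, O_RDWR);
        char *sh = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
        char *pv = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE, fd, 0);

        sh[0] = 'S';   /* globally visible; reaches the file  */
        pv[0] = 'P';   /* private copy; the file is untouched */
    }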
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I started with roff (the simplest but utterly frozen) and moved up to nroff. It was a few years later that I was involved with a project to make a CAT phototypesetter emulator for the Versatec printer-plotter (similar to the BSD vcat, which we had not seen yet). My friend George Toth went to the Naval Research Laboratory and printed out the entire typeface on their CAT on transparent film. Then he set out to figure out a way to digitize it.
Well, next door to the EE building (where the UNIX work took place at JHU) was the biophysics department. They had a Scanning Transmission Electron Microscope there, quite an impressive machine. The front end of the thing was a PDP-11/20 with some digital-to-analog converters and vice versa and a frame buffer. The software would control the positioning of the beam and read back how much came through the material and was detected. Essentially, you were making a raster picture of the sample in the microscope.
George comes up with this great idea. He takes a regular oscilloscope. He takes the deflection wires from the 11/20 off the microscope and puts them in the X and Y amplifiers of the scope. He then puts a photomultiplier tube in the shell of an old scope camera. He'd cut out a single character and tape it to the front of the scope and hang the camera on it. He'd fire up the microscope software and tell it to scan the sample. It would then put the image in the frame buffer. We'd pull the microscope RK05 pack out and boot miniunix and read the data from the frame buffer (why we didn't just write software to drive the A2D from miniunix I do not recall).
Eventually, George gets everything scanned in and cleaned up. It worked somewhat adequately.
Another amusing feature was that Michael John Muuss (my mentor) wrote a macro package, tmac.jm. Some people were somewhat peeved that we now had a "nroff -mjm" option.
Years later, after ditroff was in vogue, my boss was always after me to switch to some modern document prep system (FrameMaker or the like). On one rough job I told him I'd do it, but I didn't have time to learn FrameMaker.
I'd write one page of this proposal, print it, and then go on. My boss would proof it and then my coworker would come behind me and make the corrections. I ended up rewriting a million dollar (a lot of money back in 1989 or so) proposal in two days, complete with 25 pages of narrative and maybe 50 pages of TBL-based tables showing compliance with the RFP. We won that contract and got several follow-ons.
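Those compliance tables would have been tbl input along these lines
(content invented; columns are tab-separated):

    .TS
    center box;
    cB cB cB
    l l c.
    RFP paragraph	Requirement	Complies?
    3.2.1.1	Line-numbered output	Yes
    3.2.1.2	Compliance tables	Yes
    .TE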
Years later I was reading a published book. I noted little telltale bumps on the top of some of the tables. I wrote the author..."Did you use tbl and pic to typeset this book?" Sure enough he had. But it was way after I thought anybody was still using such technology. Of course, I was happy when Springer-Verlag suddenly learned how to typeset books. I had a number of their texts in college that didn't even look like they put a new ribbon in the typewriter when setting the book.