Tomorrow, May 12, marks the day in 1941 when the Z3 computer was presented
by Konrad Zuse:
https://en.wikipedia.org/wiki/Z3_(computer)
I enjoyed reading the specs at the bottom of the Wikipedia page. I had
never heard of this project until today, when I came across it in an article.
Mike Markowski
Born on this day in 1930, Edsger W. Dijkstra gave us ALGOL, the basis of
all decent programming languages today, and the entire concept of
structured programming. Ah, well I remember the ALGOL W compiler on the
360...
There's a beaut article here:
https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF
And remember, testing can show the presence of errors, but not their
absence...
--
Dave Horsfall BSc DTM (VK2KFU) -- FuglySoft -- Gosford IT -- Unix/C/Perl (AbW)
People who fail to / understand security / surely will suffer. (tks: RichardM)
I'm curious, and perhaps some UNIX folks here can enlighten me. I
sometimes answer things on Quora, and a few years ago the following
question was posted:
What does "Experience working in Unix/Linux environments" mean when given
as a required skill in company interviews? What do they want from us?
<https://www.quora.com/What-does-Experience-working-in-Unix-Linux-environmen…>
Why would this be considered a spam violation? I was notified today that
it had been flagged as one.
It all depends on the job and the specific experience the hiring managers
want to see. The #1 thing I believe they will be looking for is someone
who does not need a GUI to be useful. If you are a simple user, it means
you are comfortable in a text-based shell environment and are at least
familiar with, if not proficient with, the UNIX tools such as ed, vi or
emacs, grep, tail, head, sed, awk, cut, join, tr, etc. You should be able
to use one or more of the Bourne Shell/Korn Shell/Bash family or the C
shell. You should be
familiar with the UNIX file tree, its basic layout, and its protection
scheme. It helps if you understand the differences between the BSD, SysV,
Mac OS X, and Linux layouts; but for many jobs in the UNIX community that
may not be required. You should also understand how to use the Unix man
command to get information on the tools you are using; i.e., you should
have read, if not own a copy of, Kernighan and Pike’s The Unix Programming
Environment (Prentice-Hall Software Series; by Brian W. Kernighan and Rob
Pike, ISBN 9780139376818)
<https://www.amazon.com/Unix-Programming-Environment-Prentice-Hall-Software/…>
and be proficient in the first four chapters. If the job requires writing
scripts, you should be able to write shell scripts (i.e., program the
shell) using Bourne Shell syntax, i.e., Chapter 5 (Shell Programming).
If you are a programmer, then you need to be used to the UNIX/Linux
toolchains and should probably not require an IDE. Again, as a programmer,
knowledge of, if not proficiency with, at least one source code control
system (SCCS, RCS, CVS, SVN, Mercurial, git, or the like) is needed.
Kernighan and Pike’s Chapters 6–8 should be common knowledge. But to be
honest, you also should be familiar with the contents of, if not own and
keep a copy on your desk of, Rich Stevens’s Advanced Programming in the
UNIX Environment, 3rd Edition (by W. Richard Stevens and Stephen A. Rago,
ISBN 9780321637734; aka APUE)
<https://www.amazon.com/Advanced-Programming-UNIX-Environment-3rd/dp/0321637…>
If you are a system administrator, then the required set of tools gets
much larger, and besides, knowing the different ways to “generate”
(build) a system is a good idea, though fewer of the tools are for user
maintenance. In this case you should be familiar with, again if not own
and have a copy on your desk of, Evi Nemeth’s UNIX and Linux System
Administration Handbook, 4th Edition (with Garth Snyder, Trent R. Hein,
and Ben Whaley)
<https://www.amazon.com/UNIX-Linux-System-Administration-Handbook/dp/0131480…>
which is, and has been for many, many years, one of the best UNIX admin
works, if not the best.
Updated 05/07/18 to explain that I am not shilling for anyone. I am trying
to honestly answer the question and make helpful recommendations on how to
learn what the person asked, to help them be better equipped to be
employed in the Unix world. I used Amazon’s URLs because they are global
and easy to use as a reference, but I am not suggesting you purchase from
them. In fact, if you can borrow a copy from your library to start, that
might be a good idea.
Grant Taylor:
(Maybe the 3' pipe wrench has something to do with it.)
=======
Real pipes don't need wrenches. Maybe those in Windows do.
Norman Wilson
Toronto ON
Hi all,
Forgive this cross-post from cctalk, if you're seeing this message twice. TUHS seems like a very appropriate list for this question.
I'm experimenting with setting up UUCP and Usenet on a cluster of 3B2/400s, and I've quickly discovered that while it's trivial to find old source code for Usenet (B News and C News), it's virtually impossible to find source code for old news *readers*.
I'm looking especially for nn, which was my go-to at the time. The oldest version I've found so far is nn 6.4, which is too big to compile on a 3B2/400. If I could get my hands on 6.1 or earlier, I think I'd have a good chance.
I also found that trn 3.6 from 1994 works well enough, though it is fairly bloated. Earlier versions of that might be better.
Does anyone have better Google-fu than I do? Or perhaps you've got earlier sources squirreled away?
As an aside: If you were active on Usenet in 1989, what software were you using?
-Seth
--
Seth Morabito
web(a)loomcom.com
Hi,
There's a unix "awesome list". It mentions TUHS's wiki, as well as this quote:
"This is the Unix philosophy: Write programs that do one thing and do
it well. Write programs to work together. Write programs to handle
text streams, because that is a universal interface." - Douglas
McIlroy, former head of Bell Labs Computing Sciences Research Center
https://github.com/sirredbeard/Awesome-UNIX
On 08.05.18 04:00, jnc(a)mercury.lcs.mit.edu (Noel Chiappa) wrote:
> > From: Johnny Billquist
>
> > My point being that ... pages are invisible to the process, segments are
> > very visible. And here we talk from a hardware point of view.
>
> So you're saying 'segmentation means instructions explicitly include segment
> numbers, and the address space is a two-dimensional array', or 'segmentation
> means pointers explicitly include segment numbers', or something like that?
Not really. I'm trying to understand your argument.
You said:
"BTW, this reminds me of another key differentiator between paging and
segments, which is that paging was originally _invisible_ to the user
(except
for setting the total size of the process), whereas segmentation is
explicitly
visible to the user."
And then you used MERT as an example of this.
My point then is, how is MERT any different from mmap() under Unix?
Would you then say that the paging is visible under Unix, meaning that
this is then segmentation?
In my view, you are talking about a software concept. And as such, it has
no bearing on whether a machine has pages or segments, as that is a
hardware thing and distinction, while anything done as a service by the
OS is a completely different and independent question.
> I'm more interested in the semantics that are provided, not bits in
> instructions.
Well, if we talk semantics instead of the hardware, then you can just say
that any machine is segmented, and you can say that any machine has
pages. Because I can certainly make it appear both ways from the software
point of view for applications running under an OS.
And I can definitely do that on a PDP-11. The OS can force pages to
always be 8K in size, and the OS can (as done by lots of OSes) provide a
mechanism that gives you something you call segments.
> It's true that with a large address space, one can sort of simulate
> segmentation. To me, machines which explicitly have segment numbers in
> instructions/pointers are one end of a spectrum of 'segmented machines', but
> that's not a strict requirement. I'm more concerned about how they are used,
> what the system/user gets.
So, again. Where does mmap() put you then?
And, just to point out the obvious, any machine with pages has a page
table, and the page table entry is selected based on the high bits of the
virtual address. Exactly the same as on the PDP-11. The only differences
are the number of pages, and the fact that a page on the PDP-11 does not
have a fixed length, but can be terminated earlier if wanted.
So, pages are explicitly numbered in pointers on any machine with pages.
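To make that concrete, here is a minimal C sketch of the decomposition
using the PDP-11's numbers (3 bits of page number and 13 bits of offset,
so 8 pages of at most 8 KB each). It is my own illustration; the constant
names are not from any real MMU header:

    #include <stdio.h>

    /* A 16-bit PDP-11 virtual address: the high 3 bits select one of
     * the 8 page-register (PAR/PDR) pairs, and the low 13 bits are the
     * offset within that page (at most 8 KB).  Machines with more or
     * smaller pages differ only in how the bits are split. */
    #define OFFSET_BITS 13

    int main(void)
    {
        unsigned va     = 0157746;              /* an example address */
        unsigned page   = va >> OFFSET_BITS;    /* page-table index   */
        unsigned offset = va & ((1u << OFFSET_BITS) - 1);

        printf("va %06o -> page %u, offset %06o\n", va, page, offset);
        return 0;
    }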
> Similarly for paging - fixed sizes (or a small number of sizes) are part of
> the definition, but I'm more interested in how it's used - for demand loading,
> and to simplify main memory allocation purposes, etc.
I don't get it. So, in which way are you still saying that a PDP-11
doesn't have pages?
> >> the semantics available for - and_visible_ to - the user are
> >> constrained by the mechanisms of the underlying hardware.
>
> > That is not the same thing as being visible.
>
> It doesn't meet the definition above ('segment numbers in
> instructions/pointers'), no. But I don't accept that definition.
I'm trying to find out what your definition is. :-)
And if it is consistent and makes sense... :-)
> > All of this is so similar to mmap() that we could in fact be having this
> > exact discussion based on mmap() instead .. I don't see you claiming
> > that every machine use a segmented model
>
> mmap() (and similar file->address space mapping mechanisms, which a bunch of
> OS's have supported - TENEX/TOPS-20, ITS, etc) are interesting, but to me,
> orthogonal - although it clearly needs support from memory management hardware.
Can you explain how mmap() is any different from the service provided by
MERT?
> And one can add 'sharing memory between two processes' here, too; very similar
> _mechanisms_ to mmap(), but different goals. (Although I suppose two processes
> could map the same area of a file, and that would give them IPC mapping.)
That is how a single copy of a shared library gets shared under Unix.
Exactly what happens if you modify the memory depends on what flags you
give to mmap().
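For what it's worth, here is a minimal C sketch of that last point (error
handling omitted, and the file name is just an example): two processes
map the same region of a file with MAP_SHARED and use it for IPC; with
MAP_PRIVATE the child's store would be copy-on-write and invisible to the
parent, which is essentially the shared-library case:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/ipcdemo", O_RDWR | O_CREAT, 0600);
        ftruncate(fd, 4096);                /* give the file one page */

        /* MAP_SHARED: stores become visible to every process mapping
         * the same file region.  MAP_PRIVATE would give each process
         * its own copy-on-write pages instead. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

        if (fork() == 0) {                  /* child: the writer */
            strcpy(p, "hello from the child");
            _exit(0);
        }
        wait(NULL);                         /* parent: the reader */
        printf("parent sees: %s\n", p);
        return 0;
    }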
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt(a)softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
I started with roff (the simplest but utterly frozen) and moved up to nroff. It was a few years later that I was involved with a project to make a CAT phototypesetter emulator for the Versatec printer-plotter (similar to the BSD vcat, which we had not seen yet). My friend George Toth went to the Naval Research Laboratory and printed out the entire typeface on their CAT on transparent film. Then he set out to figure out a way to digitize it.
Well, next door to the EE building (where the UNIX work took place at JHU) was the biophysics department. They had a Scanning Transmission Electron Microscope there, quite an impressive machine. The front end of the thing was a PDP-11/20 with some digital-to-analog converters (and vice versa) and a frame buffer. The software would control the positioning of the beam and read back how much came through the material and was detected. Essentially, you were making a raster picture of the sample in the microscope.
George comes up with this great idea. He takes a regular oscilloscope. He takes the deflection wires from the 11/20 off the microscope and puts them into the X and Y amplifiers of the scope. He then puts a photomultiplier tube in the shell of an old scope camera. He'd cut out a single character, tape it to the front of the scope, and hang the camera on it. He'd fire up the microscope software and tell it to scan the sample. It would then put the image in the frame buffer. We'd pull the microscope RK05 pack out, boot miniunix, and read the data from the frame buffer (why we didn't just write software to drive the A2D from miniunix I do not recall).
Eventually, George gets everything scanned in and cleaned up. It worked somewhat adequately.
Another amusing feature was that Michael John Muuss (my mentor) wrote a macro package, tmac.jm. Some people were somewhat peeved that we now had an "nroff -mjm" option.
Years later, after ditroff was in vogue, my boss was always after me to switch to some modern document prep (FrameMaker or the like). On one rush job I told him I'd do it, but that I didn't have time to learn FrameMaker.
I'd write one page of this proposal, print it, and then go on. My boss would proof it, and then my coworker would come behind me and make the corrections. I ended up rewriting a million-dollar (a lot of money back in 1989 or so) proposal in two days, complete with 25 pages of narrative and maybe 50 pages of TBL-based tables showing compliance with the RFP. We won that contract and got several follow-ons.
Years later I was reading a published book, and I noted little telltale bumps on the tops of some of the tables. I wrote the author..."Did you use tbl and pic to typeset this book?" Sure enough he had. But it was way after I thought anybody was still using such technology. Of course, I was happy when Springer-Verlag suddenly learned how to typeset books. I had a number of their texts in college that didn't even look like they had put a new ribbon in the typewriter when setting the book.
> From: Clem Cole
> I agree with Tanenbaum that uKernels make more sense to me in the long
> run - even if they do cost something in speed
There's a certain irony in people complaining that ukernels have more
overhead, while at the same time mindlessly, and almost universally,
propagating such pinheaded computing hogs as '_mandating_ https for
everything under the sun' (even things as utterly pointless to protect as
looking at Wikipedia articles on mathematics); while simultaneously
letting Amazon/Facebook/Google do the 'all your data are belong to us'
number; the piling of Pelion upon Ossa in all the active content (page
after page of JavaScript, etc) in many (most?) modern Web sites that does
nothing more than provide 'eye candy'; etc, etc.
Noel